[HN Gopher] Why you might want async in your project
       ___________________________________________________________________
        
       Why you might want async in your project
        
       Author : jdon
       Score  : 102 points
       Date   : 2023-09-09 18:17 UTC (4 hours ago)
        
 (HTM) web link (notgull.net)
 (TXT) w3m dump (notgull.net)
        
       | PaulHoule wrote:
       | I find async is so much fun in Python and meshes with the other
       | things you can do with generators but that is because I have the
       | reference collector cleaning up behind me.
       | 
        | Looking back with like 30 years of hindsight, it seems to me
        | that Java's greatest contribution to software reuse was
        | efficient garbage collection. Memory allocation is a global
        | property of an application that can't efficiently be localized:
        | you might want a library to use a buffer it got from the
        | client, or vice versa, and fighting with the borrow checker all
        | the time to do that is just saying "i choose to not be able to
        | develop applications above a certain level of complexity."
        
         | airstrike wrote:
         | And Java's worst contribution to software was how painfully
         | slow and resource hungry most of the software written with it
         | tends to be...
         | 
         | Your argument is looking at the advantages Java brought to
         | development speed and entirely disregarding runtime speed
        
           | fnord77 wrote:
           | java benchmarks are close to C's benchmarks; thousands of
           | times faster than python
        
             | cies wrote:
             | not in terms of memory usage and startup time. otherwise
             | it's quite fast.
        
           | jshen wrote:
           | I don't like Java, but you are completely wrong.
        
           | notamy wrote:
           | Any tool can be misused - the same comment could be made
           | about Javascript, PHP, Perl, C, C++, Python, really any
           | language.
        
           | charrondev wrote:
           | The only Java thing I work with is ElasticSearch (and in the
           | past other lucene based search tools like Solr). These can be
            | resource hungry depending on what you're indexing, but they are
           | also faster and more scalable than other tools I'd used
           | before.
        
         | fnord77 wrote:
         | the JVM is an underappreciated engineering marvel
        
           | cies wrote:
           | > underappreciated
           | 
            | widely used though. not sure if that counts as
            | appreciation, but i think it's one of the highest forms.
           | 
            | it's not bad, but not great either. i miss proper sum
            | types, and i really lament the fact that static things are
            | nearly impossible to mock, which prompts everyone to use DI
            | for everything instead of statics.
        
         | codeflo wrote:
         | I agree that memory management can't be solved locally. The
         | situation in C++, where every library or API you use has a
         | different cleanup convention, that you need to carefully read
         | about in the documentation to even properly review a pull
         | request, is proof of that.
         | 
         | I disagree that this criticism applies to Rust. For 99% of the
         | cases, the idiomatic combination of borrow checking, Box and
         | Arc gets back to a unified, global, compiler-enforced
         | convention. I agree that there's a non-trivial initial skill
         | hurdle, one that I also struggled with, but you only have to
         | climb that once. I don't see that there's a limit to program
         | complexity with these mechanisms.
        
           | meindnoch wrote:
           | >The situation in C++, where every library or API you use has
           | a different cleanup convention, that you need to carefully
           | read about in the documentation to even properly review a
           | pull request, is proof of that.
           | 
           | Lol wut. The C++ resource management paradigm is RAII. If you
           | write a library that doesn't use RAII, it's a bad library.
           | Not a fault of the language.
        
             | moregrist wrote:
             | There's a lot of C++ code out there and a lot that
             | interfaces with C.
             | 
             | RAII is one method of cleanup but it doesn't work in all
             | situations. One that comes to mind is detecting errors in
             | cleanup and passing them to the caller.
             | 
             | So it's not right to call every library that doesn't use
             | RAII "bad." There are other constraints, as well. Part of
             | the strength of C++ is to give you a choice of paradigms.
        
             | codeflo wrote:
             | You have two choices.
             | 
             | Either you write code with good performance, which means
             | that functions do take references and pointers sometimes,
             | in which case you do have all of the usual lifetime issues.
             | This is the proper way to use C++, and it's perfectly
             | workable, but it's by no means automatic. That's the
             | reality that my comment was referencing.
             | 
             | Or you live in a fantasy land where RAII solves everything,
             | which leads to code where everything is copied all the
             | time. I've lived in a codebase like this. It's the mindset
             | that famously caused Chrome to allocate 25K individual
             | strings for every key press:
             | https://groups.google.com/a/chromium.org/g/chromium-
             | dev/c/EU...
        
               | dataflow wrote:
               | You're missing a bunch of very important stuff in that
               | page you linked to. See what they listed as the culprits:
               | 
               | > strings being passed as char* (using c_str()) and then
               | converted back to string
               | 
               | > Using a temporary set [...] only to call find on it to
               | return true/false
               | 
               | > Not reserving space in a vector
               | 
               | c_str() isn't there for "good performance" to begin with;
               | it's there for interfacing with C APIs. RAII or not, GC
               | or not, you don't convert to/from C strings in C++ unless
               | you have to.
               | 
                | The other stuff above has nothing to do with C++ or
                | pointers; you'd get the same slowdowns in any language.
               | 
               | The language has come a long way since 2014. Notice what
               | they said the solutions are:
               | 
               | > base::StringPiece [...]
               | 
               | a.k.a., C++17's std::string_view.
        
               | codeflo wrote:
               | I'm responding to a comment that claims all lifetime
               | issues are solved by RAII.
               | 
               | My argument was that for efficient code, you need to pass
               | references or pointers, which means you do need to care
               | about lifetimes.
               | 
               | And your argument is that's not true because we now have
               | std::string_view? You do realize that it's just a pointer
               | and a length, right? And that this means you need to
               | consider how long the string_view is valid etc., just as
               | carefully as you would for any other pointer?
        
               | dataflow wrote:
               | > I'm responding to a comment that claims all lifetime
               | issues are solved by RAII.
               | 
               | I don't see anybody claiming this. The parent I see you
               | initially replied to said "the C++ resource management
               | paradigm is RAII", not "all lifetime issues are solved by
               | RAII".
               | 
               | > My argument was that for efficient code, you need to
               | pass references or pointers, which means you do need to
               | care about lifetimes.
               | 
               | Of course you do. Nobody claimed you don't need to care
               | about lifetimes. (Even in a GC'd language you still need
               | to worry about not keeping objects alive for too long.
               | See [1] for an example. It's just not a memory safety
               | issue, is all.) The question was whether "every library
               | or API you use" needs to have "a different cleanup
               | convention" for performance reasons as you claimed, for
               | which you cited the Chromium std::string incident as an
               | example. What I was trying to point out was:
               | 
               | > that's not true because we now have std::string_view?
               | You do realize that it's just a pointer and a length,
               | right?
               | 
               | ...because it's not merely a pointer and a length. It's
               | both of those bundled into a _single object_ (making it
               | possible to drop them in place of a std::string much more
               | easily), _and_ a bunch of handy methods that obviate the
               | ergonomic motivations for converting them back into
               | std::string objects, hence preventing these issues.
                | (Again, notice this isn't just me claiming this. The
               | very link you yourself pointed to was pointing to
               | StringPiece as the solution, not as the problem.)
               | 
               | So what you have left is just 0 conventions for cleanup,
               | 1 convention for passing read-only views (string_view), 1
               | convention for passing read-write views (span), and 1
               | convention for passing ownership (the container). No need
               | to deal with the myriads of old C-style conventions like
               | "don't forget to call free()", "keep calling with a
               | larger buffer", "free this with delete[]", or whatever
               | was there over a decade ago.
               | 
               | > And that this means you need to consider how long the
               | string_view is valid etc., just as carefully as you would
               | for any other pointer?
               | 
               | Again, nobody claimed you don't have to worry about
               | lifetimes.
               | 
               | [1] https://nolanlawson.com/2020/02/19/fixing-memory-
               | leaks-in-we...
        
               | cma wrote:
               | 2014, isn't that pre-C++11 in Chromium?
        
               | jeremyjh wrote:
               | I agree that a lot of that happens in the real world. I
               | disagree that RAII is not used in the real world. I
                | worked on a very large codebase for ATM client software
               | and we used it pervasively, and the only memory leak we
               | had in my time there was in a third-party library which
               | ... required the careful reading of documentation you
               | mentioned.
        
         | Nullabillity wrote:
         | The problem with garbage collection is that it doesn't work for
         | other kinds of resources than memory, so basically every
         | garbage collected runtime ends up with an awkward and kinda-
         | broken version of RAII anyway (Closeable, defer, using/try-
         | with-resources, context managers, etc).
         | 
          | Static lifetimes are also a large part of _the rest_ of Rust's
         | safety features (like statically enforced thread-safety).
         | 
         | A usable Rust-without-lifetimes would end up looking a lot more
         | like Haskell than Go.
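
The deterministic-cleanup contrast drawn above can be made concrete. Below is a minimal std-only sketch (the `Guard` type is hypothetical, standing in for a socket or file handle) of how Rust ties release to scope exit via `Drop`, with no `defer`/`Closeable` call to forget:

```rust
use std::cell::RefCell;

// Hypothetical resource guard: logs open/close so the deterministic
// cleanup order is visible. `log` is a stand-in for a real resource
// registry (sockets, files, locks).
struct Guard<'a> {
    name: &'static str,
    log: &'a RefCell<Vec<String>>,
}

impl<'a> Guard<'a> {
    fn open(name: &'static str, log: &'a RefCell<Vec<String>>) -> Self {
        log.borrow_mut().push(format!("open {}", name));
        Guard { name, log }
    }
}

impl Drop for Guard<'_> {
    // Runs exactly where the guard goes out of scope -- no finalizer
    // timing to reason about, and no explicit close() to forget.
    fn drop(&mut self) {
        self.log.borrow_mut().push(format!("close {}", self.name));
    }
}

fn main() {
    let log = RefCell::new(Vec::new());
    {
        let _socket = Guard::open("socket", &log);
        let _file = Guard::open("file", &log);
    } // both dropped here, in reverse declaration order
    assert_eq!(
        log.into_inner(),
        ["open socket", "open file", "close file", "close socket"]
    );
}
```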
        
           | eternityforest wrote:
           | Python handles all kinds of stuff with garbage collection.
           | 
            | The problem is that things like sockets are not just
            | generic resources: a lot of the time, non-memory stuff has
            | to be closed at a certain point in the program for
            | correctness, and you can't just let GC get to it whenever.
        
             | mplanchard wrote:
             | I don't think this is true. Context managers call special
             | magic "dunder" methods on the instance (I don't remember
             | the specific ones), and I'm pretty sure those don't get
             | called during regular garbage collection of those
             | instances. It's been a few years since I was regularly
             | writing python, so I might be wrong, but I don't believe
             | that context manager friendly instances are the same as
             | Rust's Drop trait, and I don't think their cleanup code
             | gets called during GC.
        
               | Nullabillity wrote:
               | Python is a fun case of "all of the above" (or rather, a
               | layering of styles once it turns out a previous one isn't
               | workable).
               | 
               | Originally, they used pure reference counting GC, with
               | finalizers used to clean up when freed. This was "fine",
               | since RC is deterministic. Everything is freed when the
               | last reference is deleted, nice and simple.
               | 
               | But reference counting can't detect reference cycles, so
               | eventually they added a secondary tracing garbage
               | collector to handle them. But tracing GC isn't
               | deterministic anymore, so this also meant a shift to
               | manual resource management.
               | 
               | That turned out to be embarrassing enough that context
               | managers were eventually introduced to paper over it. But
               | all four mechanisms still exist and "work" in the
               | language today.
        
               | im3w1l wrote:
               | Are you saying that a finalizer is _guaranteed_ to run
               | when the last reference is deleted? So you could actually
               | rely on them to handle the resources, as long as you are
               | careful not to use reference cycles?
        
               | Nullabillity wrote:
               | In CPython 2.7, yes. In CPython in general, I believe
               | it's currently still the case, but I don't think it's
               | guaranteed for future versions.
               | 
               | For Python in general, no. For example, as far as I know
               | Jython reuses the JVM's GC (and its unreliable finalizers
               | with it).
               | 
               | It's also easy to introduce accidental cycles. For one, a
               | traceback includes a reference to every frame on the call
               | stack, so storing that somewhere on the stack would
               | create an unintentional cycle!
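
The cycle problem described above isn't unique to CPython's reference counting; Rust's `Rc` has the same blind spot, which makes for a compact demonstration. A sketch using a `Weak` handle to observe that a self-referential `Rc` is never freed:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A node that can point back at itself, creating a reference cycle.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    *a.next.borrow_mut() = Some(Rc::clone(&a)); // close the cycle: a -> a

    let observer: Weak<Node> = Rc::downgrade(&a);
    drop(a); // drop the only external handle

    // The node is now unreachable, but the self-reference keeps its
    // strong count at 1, so it is leaked. Unlike CPython, Rust has no
    // backup tracing collector for this case.
    assert!(observer.upgrade().is_some());
}
```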
        
               | mplanchard wrote:
               | Wrote Python professionally for years and didn't know all
               | of this. Thanks!
        
           | pphysch wrote:
           | > The problem with garbage collection is that it doesn't work
           | for other kinds of resources than memory
           | 
           | Why is that a "problem with GC"?
           | 
           | Abstracting away >90% of resource management (i.e. local
           | memory) is a significant benefit.
           | 
           | It's like saying the "problem with timesharing OS" is that it
           | doesn't address 100% of concurrency/parallelism needs.
        
           | pbourke wrote:
           | I quite like context managers and try-with-resources style
           | constructs. They make lifetimes explicit in the code in a
           | fairly intuitive way. You can get yourself turned around if
           | you deeply nest them, etc but there are usually ways to avoid
           | those traps.
        
           | ok123456 wrote:
           | Don't static lifetimes just mean that leaking memory is
           | considered 'safe' in rust?
        
             | Nullabillity wrote:
             | Static lifetimes as in "known and verified at compile-
             | time", not "the 'static lifetime".
        
           | nextaccountic wrote:
           | > basically every garbage collected runtime ends up with an
           | awkward and kinda-broken version of RAII anyway (Closeable,
           | defer, using/try-with-resources, context managers, etc).
           | 
           | RAII works only for the simplest case: when your cleanup
           | takes no parameters, when the cleanup doesn't perform async
           | operations, etc. Rust has RAII but it's unusable in async
           | because the drop method isn't itself async (and thus may
           | block the whole thread if it does I/O)
        
             | paholg wrote:
             | There are workarounds. You could, for example, have a drop
             | implementation spawn a task to do the I/O and exit.
             | 
             | Also, if your cleanup takes parameters, you can just store
             | them in the struct.
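
A sketch of the parameters-in-the-struct workaround, using an OS thread as the sync stand-in for spawning an async task (the `Connection` type and its channel "server" are hypothetical):

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Everything the cleanup needs -- a session id and a channel standing
// in for the server -- is stored on the struct, so Drop has it all.
struct Connection {
    session_id: u32,
    server: Sender<String>,
}

impl Drop for Connection {
    fn drop(&mut self) {
        let server = self.server.clone();
        let id = self.session_id;
        // Hand the potentially blocking goodbye off to another thread,
        // the sync analogue of spawning a task from a Drop impl.
        let handle = thread::spawn(move || {
            let _ = server.send(format!("close session {}", id));
        });
        // Joined here only to keep the example deterministic; a real
        // fire-and-forget drop wouldn't block on the cleanup.
        let _ = handle.join();
    }
}

fn main() {
    let (tx, rx) = channel();
    {
        let _conn = Connection { session_id: 7, server: tx };
    } // Drop runs here and dispatches the cleanup
    assert_eq!(rx.recv().unwrap(), "close session 7");
}
```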
        
           | jayd16 wrote:
           | You make an interesting point. Has any language introduced a
            | generic-resource-collector? You're not supposed to use
            | finalizers to clean up resources because you're left to
            | the whims of the GC, which is only concerned with memory.
           | 
            | Has anyone built a collector that tracks multiple types of
            | resources an object might consume? It seems possible.
        
             | jerf wrote:
             | Erlang is probably the closest. The word you want to search
             | for is "port". If it doesn't seem like it at first, keep
              | reading. It's a very idiosyncratic take on the topic if
              | you view it from this perspective because it isn't exactly
             | their focus. But it does have a mechanism for collecting
             | files, sockets, open pipes to other programs, and a number
             | of other things. Not fully generic, though.
        
       | francasso wrote:
       | Am I the only one that after reading opening sentences like
       | 
       | "There is a common sentiment I've seen over and over in the Rust
       | community that I think is ignorant at best and harmful at worst."
       | 
       | just refuses to read the rest? If you are actually trying to make
       | a point to people that think differently than you, why antagonize
       | them by telling them they don't know what they are talking about?
        
         | winwang wrote:
         | I agree with your sentiment, but want to point out the irony of
         | choosing ignorance after being insulted as ignorant.
        
           | francasso wrote:
           | You are assuming that the article would actually increase my
           | knowledge
        
           | quickthrower2 wrote:
           | Heuristics. Not going to read every article that negs me.
        
       | andy_xor_andrew wrote:
       | > I've written quite a few Rust projects where I expect it to
       | only involve blocking primitives, only to find out that,
       | actually, I'm starting to do a lot of things at once, guess I'd
       | better use async.
       | 
       | In my experience (which, admittedly, is _far_ less than the
        | author, a developer of smol!) the answer to "I'm starting to do
       | a lot of things at once" in Rust is usually to spin up a few
       | worker threads and send messages between them to handle jobs, a
       | la Ripgrep's beautiful implementation.
       | 
       | In a way, it seems like async Rust appears more often when you
       | need to do io operations, and not so much when you just need to
       | do work in parallel.
       | 
       | Of course, you surely _can_ use async rust for work in parallel.
        | But it's often easier to keep async out of it if you just need
       | to split up some work across threads without bringing an entire
       | async executor runtime into the mix.
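
The worker-threads-plus-channels approach described above can be sketched with nothing but the standard library. This is a toy sketch, not ripgrep's actual implementation; a real pool would likely use a multi-consumer channel (e.g. crossbeam's) instead of a `Mutex`-wrapped `Receiver`:

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

// Toy worker pool: jobs go down one channel, results come back on
// another. No async runtime involved.
fn squares_of(jobs: Vec<u64>) -> Vec<u64> {
    let (job_tx, job_rx) = mpsc::channel::<u64>();
    // std's Receiver is single-consumer, so the workers share it
    // behind a Mutex.
    let job_rx = Arc::new(Mutex::new(job_rx));
    let (res_tx, res_rx) = mpsc::channel::<u64>();

    let workers: Vec<_> = (0..4)
        .map(|_| {
            let job_rx = Arc::clone(&job_rx);
            let res_tx = res_tx.clone();
            thread::spawn(move || {
                // Pull jobs until the job channel closes.
                while let Ok(n) = job_rx.lock().unwrap().recv() {
                    res_tx.send(n * n).unwrap(); // the "work"
                }
            })
        })
        .collect();
    drop(res_tx); // keep only the workers' clones alive

    for n in jobs {
        job_tx.send(n).unwrap();
    }
    drop(job_tx); // closing the channel tells the workers to stop

    // iter() ends once every worker has dropped its result sender.
    let mut results: Vec<u64> = res_rx.iter().collect();
    for w in workers {
        w.join().unwrap();
    }
    results.sort_unstable();
    results
}

fn main() {
    assert_eq!(squares_of(vec![1, 2, 3, 4]), vec![1, 4, 9, 16]);
}
```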
       | 
       | I don't think async/await was poorly implemented in Rust - in
       | fact, I think it avoids a lot of problems and pitfalls that
        | _could_ have happened. The complications arise because
        | async/await is, kind of, ideologically antithetical to Rust's
        | other
       | goal of memory safety and single-writer. Rust really wants to
       | have its cake (compile-time memory safety) and eat it too
       | (async/await). And while you can criticize it, you have to admit
       | they did a pretty good job given the circumstances.
        
       | a-dub wrote:
       | do any of the async libraries for rust have good visualization
       | tools for inspecting the implicit state machine that is
       | constructed via this type of concurrency primitive?
        
         | mplanchard wrote:
         | Not sure if it's exactly what you're looking for, but tokio-
         | console is pretty nice
        
         | Arnavion wrote:
         | The state machine transformation is not specific to any async
         | libraries. The compiler is the one that desugars async fns /
         | blocks to state machines. AFAIK there is nothing other than
         | dumping the HIR / MIR from rustc to inspect it. But even
         | without that the transformation is pretty straightforward to do
         | mentally.
         | 
         | The first transformation is that every async block / fn
         | compiles to a generator where `future.await` is essentially
         | replaced by `loop { match future.poll() { Ready(value) => break
         | value, Pending => yield } }`. ie either polling the inner
         | future will resolve immediately, or it will return Pending and
         | yield the generator, and the next time the generator is resumed
         | it will go back to the start of the loop to poll the future
         | again.
         | 
         | The second transformation is that every generator compiles to
         | essentially an enum. Every variant of the enum represents one
         | region of code between two `yield`s, and the data of that
          | variant is all the local variables that are in scope in that
          | region.
         | 
          | Putting both together:
          | 
          |     async fn foo(i: i32, j: i32) -> i32 {
          |         sleep(5).await;
          |         i + j
          |     }
          | 
          | ... essentially compiles to:
          | 
          |     fn foo(i: i32, j: i32) -> FooFuture {
          |         FooFuture::Step0 { i, j }
          |     }
          | 
          |     enum FooFuture {
          |         Step0 { i: i32, j: i32 },
          |         Step1 { i: i32, j: i32, sleep: SleepFuture },
          |         Step2,
          |     }
          | 
          |     impl Future for FooFuture {
          |         fn poll(mut self) -> Poll<i32> {
          |             loop {
          |                 match self {
          |                     Self::Step0 { i, j } => {
          |                         let sleep = sleep(5);
          |                         self = Self::Step1 { i, j, sleep };
          |                     }
          |                     Self::Step1 { i, j, sleep } => {
          |                         let () = match sleep.poll() {
          |                             Poll::Ready(()) => (),
          |                             Poll::Pending => return Poll::Pending,
          |                         };
          |                         self = Self::Step2;
          |                         return Poll::Ready(i + j);
          |                     }
          |                     Self::Step2 => panic!("already run to completion"),
          |                 }
          |             }
          |         }
          |     }
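
To round out the desugaring above: the resulting state machine is inert until something polls it. Here is a deliberately tiny `block_on` (an illustrative sketch, not how tokio or smol actually work) that drives a future to completion using only the standard library's `Wake` trait; the `sleep` is omitted so the example stays dependency-free:

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the thread blocked inside block_on.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// The simplest possible executor: poll once, and park the thread until
// the waker fires whenever the future returns Pending.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            // Pending means "the waker will fire later"; sleep until then.
            Poll::Pending => thread::park(),
        }
    }
}

// The example future from the comment, minus the sleep: with no .await
// point that suspends, the first poll() returns Ready.
async fn foo(i: i32, j: i32) -> i32 {
    i + j
}

fn main() {
    assert_eq!(block_on(foo(2, 3)), 5);
}
```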
        
       | klysm wrote:
        | In C#, I put anything doing IO in an async function and make
        | cancellation tokens required.
        
       | Arnavion wrote:
       | >Except, this isn't a problem with Rust's async, it's a problem
       | with tokio. tokio uses a 'static, threaded runtime that has its
       | benefits but requires its futures to be Send and 'static.
       | 
       | It's not a problem with tokio either. The author's point is
       | specifically about the multi-threaded tokio runtime that allows
       | tasks to be moved between worker threads, which is why it
       | requires the tasks to be Send + 'static. Alternatively you can
       | either a) create a single-threaded tokio runtime instead which
       | will remove the need for tasks to be Send, or b) use a LocalSet
       | within the current worker that will scope all tasks to that
       | LocalSet's lifetime so they will not need to be Send or 'static.
       | 
        | If you go the single-threaded tokio runtime route, that doesn't
       | mean you're limited to one worker total. You can create your own
       | pseudo-multi-threaded tokio runtime by creating multiple OS
       | threads and running one single-threaded tokio runtime on each.
       | This will be similar to the real multi-threaded tokio runtime
       | except it doesn't support moving tasks between workers, which
       | means it won't require the tasks to be Send. This is also what
       | the author's smol example does. But note that allowing tasks to
       | migrate between workers prevents hotspots, so there are pros and
       | cons to both approaches.
        
       | vc8f6vVV wrote:
       | > Why don't people like async?
       | 
       | That's pretty simple. The primary goal of every software engineer
       | is (or at least should be) ... no, not to learn a new cool
       | technology, but to get the shit done. There are cases where async
        | might be beneficial, but those cases are few and far between.
       | In all other cases a simple thread model, or even a single thread
       | works just fine without incurring extra mental overhead. As
       | professionals we need to think not only if some technology is
       | fun, but how much it actually costs to our employer and about
       | those who are going to maintain our "cool" code when we leave for
       | better pastures. I know, I know, I sound like a grandpa (and I
       | actually am).
        
         | diarrhea wrote:
          | But async Python is single threaded. I'd prefer async over
         | multithreading in python nowadays. Otherwise code can be slow
         | as piss, if it's doing a lot of I/O. Then, async is almost
         | table stakes for almost any level of reasonable performance
         | (GIL and all).
        
         | hgomersall wrote:
         | Your answer boils down to: "I know this technique, I don't want
         | to learn blub technique. My job is to get stuff done, not learn
         | new techniques." In which case, good for you; enjoy your sync
         | code (seriously), and please stop telling the rest of us that
         | have learnt the new blub technique that we shouldn't use it.
        
         | __MatrixMan__ wrote:
          | Another reason is that it lets you handle bursty input with
         | bursty CPU usage. Sounds great, right? Round peg, round hole.
         | 
         | But nobody will sell you just a CPU cycle. They come in bundles
         | of varying size.
         | 
         | I recently heard a successful argument that we should take the
         | pod that's 99% unutilized and double its CPU capacity so it can
         | be 99.9% unutilized, that way we don't get paged when the data
         | size spikes.
         | 
         | When I proposed we flatten those spikes since they're only
          | 100ms wide, it was shot down because "implementing a queueing
         | architecture" wasn't worth the developer time.
         | 
         | I suppose you could call it a queueing architecture. I'd call
         | it a for loop.
        
         | maxbond wrote:
         | Most commercial code is running an almost entirely IO workload,
         | acting as a gatekeeper to a database or processing user
         | interactions - places where async shines.
         | 
         | Async isn't a lark, it's a workhorse. The goal is not to write
         | sexy code, it's to achieve better utilization (which is to say,
         | save money).
        
       | chrisweekly wrote:
       | FYI This is an interesting response by the maintainer* of smol to
       | this recent discussion:
       | https://news.ycombinator.com/item?id=37435515
       | 
       | * EDIT: corrected, thanks
        
         | Nullabillity wrote:
         | Maintainer, not creator. Smol was originally created by
         | stjepang, who has basically disappeared these days.
         | 
         | EDIT: I originally incorrectly claimed that stjepang also
         | created rather than maintained crossbeam, making the same
          | mistake as I was correcting.
        
           | chrisweekly wrote:
           | whoops! thanks
        
           | urschrei wrote:
           | crossbeam was created by Aaron Turon (who has - inevitably -
           | also left the Rust project):
           | https://aturon.github.io/blog/2015/08/27/epoch/
        
             | Nullabillity wrote:
             | Oops, shame on me!
        
       | aaomidi wrote:
       | I really wish we had stuff like the switch to scheduler that
       | effectively makes asyncish behavior possible at the kernel level.
       | 
       | I'm tired of everyone implementing async on their own.
        
       | api wrote:
       | The author is a maintainer of smol, which I think is a far
       | superior runtime to tokio for numerous reasons including
       | structured concurrency, performance, size, and an ownership
       | parameter that reduces the need for Arc<> all over the place by
       | letting you scope on the runtime or task. The whole thing is just
       | tighter and better thought out.
       | 
       | Yet tokio is the de facto standard and everything links against
       | it. It's really annoying. Rust should have either put a runtime
       | in the standard library or made it a lot easier to be runtime
       | neutral.
        
         | lawn wrote:
          | Is there a solid web framework that uses smol instead of tokio?
        
       | nyanpasu64 wrote:
       | I'd argue that libraries forcing programs to include an async
       | runtime upfront because there's a chance that it may someday grow
       | to the extent you want a central executor (when they are
       | absolutely ill-suited for audio programming, and probably not the
       | case for Animats's metaverse client at
       | https://news.ycombinator.com/item?id=37437676), imposes
       | unnecessary dependency bloat on applications. And unless you use
       | block_on(), async code pushes applications from a "concurrency
       | library" model where apps block on mutexes/signals/channels as
       | needed, to a "concurrency framework" model where code only runs
       | under the runtime's callbacks (which is not the right choice for
       | all apps).
        
       | dpc_01234 wrote:
       | I might need a lot of stuff in my software. Eventually. I might
        | need a distributed database, or to scale it out to run on
       | multiple machines, and then maybe Raft, or reactive architecture,
       | zero-copy IO, or incremental updates or ... or ... the list goes
       | on and on.
       | 
        | Thinking too much, and in particular going with overcomplicated
        | solutions from the very start because of "might", is just bad
       | engineering.
       | 
       | Also, even if I do need async in a certain place, that doesn't
       | mean I need to endure the limitations and complexity of async
       | Rust _everywhere_ in my codebase. I can just spawn a single
       | executor and pass messages over channels: what requires async
       | runs in the async runtime, and what doesn't runs in normal,
       | simpler (and better) blocking IO Rust.
       | 
       | You need async IO? Great. I also need it sometimes. But that
       | doesn't explain the fact that every single thing in Rust
       | ecosystem nowadays is async-only, or at best blocking wrapper
       | over async-only. Because "async is web-scale, and blocking is not
       | web-scale".
       | 
       | Edit: Also the "just use smol" comically misses the problem.
       | Yeah, smol might be simpler to use than tokio (it is, I like it
       | better personally), but most stuff is based on tokio. It's an
       | uphill battle for the same reasons using blocking IO Rust is
       | becoming an uphill battle. The only thing worse than using async
       | when you don't want to is having to juggle 3 flavors (executors)
       | of async when you didn't want any in the first place.
       | 
       | Everything would be perfect and no one would complain about async
       | all the time if the community defaulted to blocking,
       | interoperable Rust, and then projects pulled in async in the few
       | places that _do actually need_ async. But nobody wants to write a
       | library that isn't "web-scale" anymore, so tough luck.
        
         | Ar-Curunir wrote:
         | > I also need it sometimes. But that doesn't explain the fact
         | that every single thing in Rust ecosystem nowadays is async-
         | only, or at best blocking wrapper over async-only.
         | 
         | Now that's just plainly untrue.
        
           | dpc_01234 wrote:
           | Yes, yes. You're right. Though it does sometimes feel like
           | it.
        
         | the__alchemist wrote:
         | I also see the `Async` vs `blocking` false dichotomy in
         | embedded rust discussions. `Async/Await` != asynchronous
         | execution.
        
       | IshKebab wrote:
       | > Even the simple, Unix-esque atomic programs can't help but do
       | two or three things at once. Okay, now you set it up so, instead
       | of waiting on read or accept or whatnot, you register your file
       | descriptors into poll and wait on that, then switching on the
       | result of poll to figure out what you actually want to do.
       | 
       | > Eventually, two or three sockets becomes a hundred, or even an
       | unlimited amount. Guess it's time to bring in epoll! Or, if you
       | want to be cross-platform, it's now time to write a wrapper
       | around that, kqueue and, if you're brave, IOCP.
       | 
       | This feels like a straw man. Nobody is saying "don't use async;
       | use epoll!". The alternative to async is traditional OS threads.
       | This option is weirdly not mentioned in the article at all.
       | 
       | And yes, threads have a reputation for being very hard - and they
       | can be - but Rust makes traditional multithreading _MUCH_ easier
       | than C++ does. And I would argue that Rust's async is equally
       | hard.
       | 
       | Rust makes traditional threading way easier than other languages,
       | and async way harder than other languages, enough that threads
       | are arguably simpler.
        
         | conradludgate wrote:
         | It's necessary to use some kind of poll construction if you
         | want cancellation and timeouts without shutting down the entire
         | application.
        
       | adamch wrote:
       | > tokio uses a 'static, threaded runtime that has its benefits
       | but requires its futures to be Send and 'static.
       | 
       | This is only partly true -- if you want to `spawn` a task on
       | another thread then yes it has to be Send and 'static. But if you
       | use `spawn_local`, it spawns on the same thread, and it doesn't
       | have to be Send (still has to be 'static).
        
       | carterschonwald wrote:
       | Isn't the real problem the lack of monads and monad transformers
       | so you can't have async machinery used for domain specific stuff?
        
       | [deleted]
        
       | delusional wrote:
       | The author starts by citing greenspun's tenth rule and goes on to
       | elaborate on the argument that if you are going to have a half
       | implementation of async anyway, why not just pull it in? Yet
       | fails to interrogate the relationship between this argument and
       | the cited "rule". If you should use async because you might need
       | it in the future, shouldn't we all be writing in lisp?
       | 
       | If we presuppose that all software eventually develops an async
       | implementation, and that we should therefore use async, would it
       | not stand to reason that greenspun's rule that all software
       | contains a lisp implies that we must also all use lisp?
        
         | orangea wrote:
         | The rule isn't really about lisp, it's about the kinds of
         | functions and structures you find in the standard library of a
         | typical programming language, such as strings and arrays and
         | file IO and so on. Rust already has those things so your
         | argument doesn't really apply.
        
         | Arnavion wrote:
         | The author said what you wrote in the first sentence, ie "use
         | async if you are going to have a half implementation of async
         | anyway". "Use async because you might need it in the future" is
         | something you made up, not what the author said.
        
           | delusional wrote:
           | greenspun's tenth rule is about the inevitability of the half
           | baked implementation of lisp. By evoking the sentiment of the
           | rule the author is implicitly making the argument that all
           | "sufficiently complicated programs" will eventually contain a
           | half baked implementation of async.
           | 
           | The implicit argument doesn't stand alone though. The author
           | goes on to write:
           | 
           | > It happens like this: programs are naturally complicated.
           | Even the simple, Unix-esque atomic programs can't help but do
           | two or three things at once. Okay, now you set it up so,
           | instead of waiting on read or accept or whatnot, you register
           | your file descriptors into poll and wait on that, then
           | switching on the result of poll to figure out what you
           | actually want to do.
           | 
           | The implication is clear. Even simple programs will
           | eventually require async, and should therefore just use it
           | right now. unix-esque in this paragraph is supposed to evoke
           | ls or cat. Is your program really going to be simpler than
           | cat? No? Then you apparently need async.
        
             | Arnavion wrote:
             | >The implication is clear. Even simple programs will
             | eventually require async, and should therefore just use it
             | right now.
             | 
             | There's no implication. Read what you quoted instead of
             | digging for quick jabs. "Even the simple, Unix-esque atomic
             | programs can't help but do two or three things at once.
             | Okay, now you set it up so, instead of waiting on read or
             | accept or whatnot..."
             | 
             | >unix-esque in this paragraph is supposed to evoke ls or
             | cat. Is your program really going to be simpler than cat?
             | No? Then you apparently need async.
             | 
             | cat and ls don't do two or three things at once.
        
       | dang wrote:
       | Recent and related:
       | 
       |  _Maybe Rust isn't a good tool for massively concurrent,
       | userspace software_ -
       | https://news.ycombinator.com/item?id=37435515 - Sept 2023 (567
       | comments)
        
       | nevermore wrote:
       | Lots of comments and arguments about async being big or complex,
       | but it's really not; it's pulling in the runtimes that's big and
       | complex. I think Rust really failed by forcing libraries to
       | explicitly choose a runtime. As a library developer you're then
       | put in the position of not using async, or fragmenting yourself
       | to just the subset of users or other libraries on your runtime.
        
       | eternityforest wrote:
       | I love async in Python and JS. I used to be one of those "Threads
       | aren't that hard, just use threads" people, but that was back
       | when the trend was doing "async" with layers of nested callbacks,
       | as if this were LISP or something where people just accept deep
       | nesting.
       | 
       | Now we have async/await and I'm always happy to see it.
        
       | klabb3 wrote:
       | I mean.. I appreciate that there are proponents and people trying
       | to improve the state of async Rust, but to suggest that
       | everything is dandy is either dishonest or, more likely, a strong
       | curse-of-knowledge bias.
       | 
       | I've worked deeply in an async rust codebase at a FAANG company.
       | The vast majority chooses a dialect of async Rust which involves
       | arcs, mutices, boxing etc everywhere, not to mention the giant
       | dep tree of crates to do even menial things. The ones who try to
       | use proper lifetimes etc are haunted by the compiler and give up
       | after enough suffering.
       | 
       | Async was an extremely impressive demo that got partially
       | accepted before knowing the implications. The remaining 10%
       | turned out to be orders of magnitude more complex. (If you
       | disagree, try to explain pin projection in simple terms.) The
       | damage to the ecosystem from fragmentation is massive.
       | 
       | Look, maybe it was correct to skip green threads. But the layer
       | of abstraction for async is too invasive. It would have been
       | better to create a "runtime backend" contract - default would be
       | the same as sync rust today (ie syscalls, threads, atomic ops etc
       | - I mean it's already halfway there except it's a bunch of
       | conditional compilation for different targets). Then, alternative
       | runtimes could have been built independently and plugged in
       | without changing a line of code, it'd be all behind the scenes.
       | We could have simple single-threaded concurrent runtimes for
       | embedded and maybe wasm. Work stealing runtimes for web servers,
       | and so on.
       | 
       | I'm not saying it would be easy or solve _all_ use-cases on a
       | short time scale with this approach. But I do believe it would
       | have been possible, and better for both the runtime geeks and
       | much better for the average user.
        
         | biomcgary wrote:
         | Your position wrt green threads sounds like Graydon's
         | (https://graydon2.dreamwidth.org/307291.html).
        
       | fnordpiglet wrote:
       | Biggest issue I have with async is the lack of native async
       | traits and the lack of support for async closures. You can work
       | around the traits issue but the closure issue you can't. I've
       | spent hours trying to work around closures that wrap async code.
        
       | charcircuit wrote:
       | async is not free. It will turn your code into a big state
       | machine and each thing you await will likely create its own
       | thread.
       | 
       | There is simplicity in avoiding that and having code that gets
       | compiled to something straightforward and single threaded.
        
         | [deleted]
        
         | conradludgate wrote:
         | This is not true for Rust. Awaiting in Rust composes the
         | awaited future's state machine into the larger one; it does no
         | implicit thread or task spawns (unless the future you're
         | awaiting does them explicitly).
         | 
         | Furthermore, async Rust can be run single threaded.
        
         | maxbond wrote:
         | This is true of all abstractions; if you don't need them, then
         | they'll make your program more complex and more painful to
         | write and maintain.
         | 
         | Exercising judgement about when to use or shirk an abstraction
         | is a lot of what being a software engineer is about.
        
           | eternityforest wrote:
           | When does async/await ever make your program harder to
           | maintain? Maybe to people who don't already know it, but
           | almost all the big languages have async; it would be hard for
           | a programmer to get away with not learning it, at least if
           | they're a Python or JS programmer.
           | 
           | It adds complexity, but it's at the level where you don't
           | have to think about it. If you're doing something advanced
           | enough that async is a leaky abstraction, you're probably
           | doing something big enough that you'd want the advantages it
           | offers.
           | 
           | If you're doing something simple, async is just a black box
           | primitive that is pretty easy to use.
        
         | api wrote:
         | Awaits don't create threads, at least not in any runtime I know
         | of. There is usually a fixed number of threads at launch.
        
           | tomwojcik wrote:
           | FastAPI docs, case when you don't create an async route
           | 
           | > When you declare a path operation function with normal def
           | instead of async def, it is run in an external threadpool
           | that is then awaited, instead of being called directly (as it
           | would block the server).
           | 
           | https://fastapi.tiangolo.com/async/#path-operation-functions
           | 
           | OP either meant this, or its variation, such as async_to_sync
           | and sync_to_async.
           | https://github.com/django/asgiref/blob/main/asgiref/sync.py
           | 
           | Ofc this is a python example. I have no idea how it works in
           | different languages.
        
             | maxbond wrote:
             | NB: In Python >= 3.9 the idiomatic way to do this is
             | to_thread(), not familiar with these ASGI functions but I
             | would guess they're a polyfill and/or predate 3.9.
             | 
              | https://docs.python.org/3/library/asyncio-task.html#asyncio....
        
               | pdhborges wrote:
               | They are not polyfills. Multiple scheduling modes are
               | provided for libraries that are not thread safe (it's a
               | total mess and I avoid these wrappers like the plague)
        
             | [deleted]
        
             | mplanchard wrote:
             | "run in a threadpool" isn't the same as creating a thread
             | though
        
           | rugina wrote:
           | Tokio uses a pool of threads for disk I/O because it uses the
           | synchronous calls of the operating system.
        
           | charcircuit wrote:
           | That is an implementation detail on where you put the code
           | that is blocking or running concurrently from the main code.
           | An executor could use a separate OS thread, or the
           | application could itself schedule application levels threads
           | onto a number of OS threads.
           | 
           | When writing a Future that will block for 5 seconds, you will
           | need to find somewhere that you can put the blocking code.
           | You don't technically even need to use an executor here.
        
           | maxbond wrote:
           | I think they meant it was likely to spin off additional
           | tasks/green threads.
        
             | Arnavion wrote:
             | If they meant that they are still wrong.
        
       ___________________________________________________________________
       (page generated 2023-09-09 23:00 UTC)