[HN Gopher] Why Discord is switching from Go to Rust
       ___________________________________________________________________
        
       Why Discord is switching from Go to Rust
        
       Author : Sikul
       Score  : 791 points
       Date   : 2020-02-04 17:30 UTC (5 hours ago)
        
 (HTM) web link (blog.discordapp.com)
 (TXT) w3m dump (blog.discordapp.com)
        
       | The_rationalist wrote:
       | Borrowed from a comment:
       | 
       | Garbage collection has gotten a lot of updates in the last 3
       | years. Why would you not take the exceedingly trivial step of
       | just upgrading to the latest Go stable in order to at least _try_
       | for the free win? From the go 1.12 release notes: "Go 1.12
       | significantly improves the performance of sweeping when a large
       | fraction of the heap remains live. This reduces allocation
        | latency immediately following a garbage collection." ¯\_(ツ)_/¯
        | This sounds like "we just wanted to try Rust, ok?" Which is
       | fine. But like, just say that.
        
       | _--___-___ wrote:
       | "We want to make sure Discord feels super snappy all the time" is
       | hilarious coming from a program that is infamous for making you
       | read 'quirky' loading lines while a basic chat application takes
       | several seconds to start up.
       | 
       | Don't really know about Go versus Rust for this purpose, but
       | don't really care because read states (like nearly everything
        | that makes Discord less like IRC) are an anti-feature in any
       | remotely busy server. Anything important enough that it shouldn't
       | be missed can be pinned, and it encourages people to derail
       | conversations by replying out of context to things posted hours
       | or days ago.
        
         | anchpop wrote:
         | I don't see why that's hilarious. Lots of programs take a
         | second or two to load and it only happens once on boot for me.
         | "Read states" is just discord telling you which channels and
         | servers you have unread messages in
        
           | wvenable wrote:
           | Discord takes longer to start up than Microsoft Word.
           | 
           | Desktop development is a total wasteland these days -- there
           | isn't nearly as much effort put into optimization as server
           | side. They're not paying for your local compute, so they can
           | waste as much of it as they want.
        
             | penagwin wrote:
             | I feel that it's not really fair to expect them to natively
             | implement their app on every platform and put tons of
              | resources into its client performance - anecdotally
             | discord is a very responsive app - see [0].
             | 
             | But think of it this way, all the effort they put into
             | their desktop app works on all major OSes without a
             | problem. They even get to reuse most of the code for access
             | from the browser, with no installation required.
             | 
             | Now imagine approaching your PM and saying "Look I know we
             | put X effort into making our application work on all the
             | platforms, but it would be even faster if we instead did 4x
             | effort for native implementations + the browser".
             | 
              | [0] From what I've seen in the "gamer community", most
              | gamers don't care that much about that kind of extra
              | performance. Discord itself doesn't feel slow once it's
              | started. Joining a voice channel is instant, and quickly
              | switching to a different server and then to a chat
              | channel to throw in some text is fast and seamless
              | (Looking at you, MS Teams!!!).
             | 
             | Sure Mumble/Teamspeak are native and faster, but where are
             | their first party mobile apps and web clients? One of the
             | incredible things Discord did to really aid in adoption was
             | allow for web clients, so when you found some random person
             | on the internet, you didn't have to tell them to download
             | some chat client, they could try it through their browser
             | first.
             | 
             | tl;dr
             | 
             | Yes electron apps can be slow, but discord IMO has fine
             | client side performance, and they clearly do put resources
             | into optimizing it. Yes it "could be faster" with native
             | desktop apps, but their target community seems perfectly
             | content as is.
        
               | jhgg wrote:
               | A lot of the startup cost right now is in our really
               | ancient update checker. There are plans to rewrite all of
               | this, now that we understand why it's bad, and have some
               | solid ideas as to what we can do better.
               | 
                | I do think it's reasonable to get the startup time of
                | Discord to be near what VS Code's startup times are.
                | If we remove the updater, it actually starts pretty
                | fast: (chromium boot -> JS loaded from cache) is <1s
                | on a modern PC. And there's so much more we can do
                | from there - for example, loading smaller chunks in
                | the critical "load and get to your messages" path,
                | being able to use v8 heap snapshots to speed up
                | starting the app, etc...
               | 
                | The slow startup time is very much an _us_ problem,
                | not an Electron problem, and is something I hope we'll
                | be able to address this year.
        
               | penagwin wrote:
                | When you guys do address it, could I pretty please
                | request that you do a blog article about it?
               | 
               | Electron startup time as well as v8 snapshots have been a
               | hot topic for a looooong time. I actually started a pull
               | request for it in 2015 [0]. My pull request was focusing
               | on source code protection, but ANY information on how you
               | use v8 snapshots, etc. would be awesome!
               | 
               | [0] https://github.com/electron/electron/issues/3041
        
             | graphememes wrote:
             | Microsoft Word isn't patching the application on startup.
             | That's the difference.
             | 
             | Once it's loaded, how much slower than Word is it?
        
               | wvenable wrote:
                | You're telling me Discord is patching itself on every
                | single launch, and this is somehow a valid excuse for
                | slow startup performance?
               | 
               | Almost every single app I run auto-updates itself in some
               | form.
        
               | graphememes wrote:
                | In the case of Discord, yes. That's a valid argument;
                | whether or not it's truly important, I'm not sure. It
                | certainly would be a waste of time to invest in
                | improving it when their current system works perfectly
                | fine.
        
               | wvenable wrote:
                | They're investing in server-side projects that are
                | also perfectly fine. In this case, re-writing an
                | entire module in a different language to eke out a
                | tiny bit more performance!
                | 
                | But on the client side, it's arguably the slowest
                | application to launch that I have installed, even
                | among other Electron apps. Perfectly _fine_.
               | 
                | This completely reinforces my original statement:
                | "Desktop development is a total wasteland these days
                | -- there isn't nearly as much effort put into
                | optimization as server side." Desktop having horrible
                | performance is "fine", but a little GC jitter on the
                | server requires a complete re-write from the ground
                | up.
        
         | penagwin wrote:
          | To be fair it really doesn't take that long, and often
          | that's because it's auto-updating, but it's not more than a
          | couple of seconds.
          | 
          | The big thing IMO is that once it's started I normally leave
          | Discord running, and most actions within Discord itself feel
          | very snappy - e.g. you click on a voice channel and you're
          | instantly there. I think that's what they mean; they're
          | trying to keep the delay for such an action low. Sometimes
          | you click a voice channel and there's a few seconds of
          | delay; that's for some reason more annoying than the
          | long(ish) startup time.
        
       | StreamBright wrote:
        | Pretty amazing write-up from Jesse. I really like how they
        | maxed out Go first before even thinking about a rewrite in
        | Rust. It turns out that having no GC has pretty significant
        | advantages in some cases.
        
       | [deleted]
        
       | jrockway wrote:
       | This seems like a nice microservices success story. It's so easy
       | to replace a low-performing piece of infrastructure when it is
       | just a component with a well-defined API. Spin up the new
       | version, mirror some requests to see how it performs, and turn
       | off the old one. No drama, no year-long rewrites. Just a simple
       | fix for the component that needed it the most.
        
       | [deleted]
        
       | bradhe wrote:
       | Replatforming to solve this problem was a bit silly in my
       | opinion. The solution to the problem was "do fewer allocations"
       | which can be done in any language.
        
         | nabla9 wrote:
         | I wonder if they attempted manual memory allocation in Go?
         | 
          | In many languages with GC you can actually do manual memory
          | management relatively easily with a few helper functions.
          | You write your own allocate() and free() functions/methods.
          | When you allocate, you check the free list first; if
          | nothing is available, you do a normal allocation. When you
          | call free you add the object to the free list. If your
          | memory management leaks, the GC picks up after it.
          | 
          | Usually you need to do that kind of stuff in only a few
          | places and a few data structures to cut GC work by 90%.
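The allocate()/free() pattern described above can be sketched in a few lines of Go. The type name and buffer size are illustrative, and a real implementation would need locking (or sync.Pool) for concurrent use:

```go
package main

import "fmt"

// buf is a hypothetical fixed-size buffer recycled through a manual
// free list, per the allocate()/free() idea above.
type buf struct {
	data [1024]byte
	next *buf
}

// freeList holds returned buffers; not safe for concurrent use.
var freeList *buf

// allocate checks the free list first; if it is empty, it falls
// back to a normal heap allocation.
func allocate() *buf {
	if freeList != nil {
		b := freeList
		freeList = b.next
		b.next = nil
		return b
	}
	return &buf{}
}

// free pushes a buffer back onto the free list for reuse. Objects
// that are never freed simply remain for the GC to collect.
func free(b *buf) {
	b.next = freeList
	freeList = b
}

func main() {
	a := allocate()
	free(a)
	b := allocate() // recycles a instead of allocating fresh memory
	fmt.Println("recycled:", a == b)
}
```

The standard library's sync.Pool is a concurrency-safe variant of the same idea.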
        
         | pjmlp wrote:
         | Yeah, but this way one can add experience in yet another
         | language to the CV.
        
         | qidydl wrote:
         | They addressed this in the article:
         | 
         | > These latency spikes definitely smelled like garbage
         | collection performance impact, but we had written the Go code
         | very efficiently and had very few allocations. We were not
         | creating a lot of garbage.
         | 
         | The problem was due to the GC scanning all of their allocated
         | memory and taking a long time to do so, regardless of it all
         | being necessary and valid memory usage.
        
         | jhgg wrote:
          | Your reply misses the point. We were already doing so few
          | allocations that the GC only ran because it "had to" at the
          | two-minute mark. The issue was the large heap of many
          | long-lived objects.
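That failure mode is easy to reproduce in a few lines of Go (an illustrative sketch, not Discord's code): the heap is pointer-heavy and entirely live, so a collection traces everything and frees nothing.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// entry mimics a pointer-heavy, long-lived cache node: every node is
// reachable, so a GC cycle must trace all of them and frees nothing.
type entry struct {
	key  [16]byte
	next *entry
}

func main() {
	var head *entry
	for i := 0; i < 2_000_000; i++ { // illustrative heap size
		head = &entry{next: head}
	}

	start := time.Now()
	runtime.GC() // a full collection over an all-live heap
	fmt.Printf("GC over 2M live objects: %v\n", time.Since(start))

	runtime.KeepAlive(head)
}
```

The mark time scales with the number of live pointers, which is exactly why a large, long-lived LRU cache hurts even when allocation rates are low.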
        
           | _ph_ wrote:
           | Did you try to change that interval to a much larger time?
        
             | jhgg wrote:
             | When we investigated, there was no way to change that that
             | we could find - barring compiling go from source (something
             | we could have done, but wanted to avoid.)
        
               | _ph_ wrote:
                | Yes, you have to rebuild Go, but that is literally
                | done in a minute. It would also be interesting, if you
                | happen to have some conclusive benchmarks, to see how
                | the latest Go runtime performs in this regard.
        
         | xyzzyz wrote:
         | You haven't read the post carefully. Their garbage collection
         | in Go was spiking every 2 minutes precisely because they were
         | doing too few allocations to have it run more often.
        
         | staticassertion wrote:
         | A) They had spent a lot of time optimizing the Go service
         | 
          | B) They _weren't_ allocating a lot, and Go was enforcing a GC
         | sweep every 2 minutes, and it was spending a lot of time on
         | their LRU cache. To "reduce allocations" they had to cut their
         | cache down, which negatively impacted latency.
        
       | h2odragon wrote:
       | Excellent write up, and effective argument for Rust in this
       | application and others. My cynical side sums it up as:
       | 
       | "Go sucked for us because we refused to own our tooling and make
       | a special allocator for this service. Switching to Rust forced us
       | to do that, and life got better"
        
         | staticassertion wrote:
         | I'm confused. Build a special allocator for Go you mean? That
         | feels like going well beyond typical "own your tooling".
        
         | monocasa wrote:
         | They were already not allocating, they were just stuck with a
         | GC cycle that'd scan, not find any garbage, and scan again in
         | two minutes.
        
       | echopom wrote:
       | This was an extremely interesting read.
       | 
        | I'm quite disappointed, though, that they did not update
        | their Go version to 1.13[0][1], which would normally have
        | removed the spike issue, and thus the latency, before they
        | moved to Rust...
        | 
        | Rust seems more performant with proper usage (tokio + async),
        | but I'm more worried about the ecosystem, which doesn't seem
        | as mature as Go's.
        | 
        | We could cite the recent[2] drama with Actix...
       | 
       | [0]https://golang.org/doc/go1.13#runtime
       | [1]https://golang.org/doc/go1.12#runtime
       | [2]https://github.com/fafhrd91/actix-web-postmortem
        
         | chc wrote:
         | Why would you want to bring up the Actix author's drama? That
         | doesn't seem like something that should reflect on a language
         | one way or the other.
        
           | deweller wrote:
           | As an outsider to both the Go and Rust cultures, I read the
           | Actix news and walked away with the impression that the Rust
           | ecosystem is less mature.
        
             | chc wrote:
             | I actually agree that there are many parts of Rust's
             | ecosystem that are relatively immature -- I just don't see
             | how the Actix situation reflects on that. It's not like
             | Actix was a core part of the Rust ecosystem. It was a
             | framework that was most notable for doing very well on the
             | Techempower benchmarks. People get hurt feelings and have
             | flameouts in the C, Java, JavaScript, etc. ecosystems too.
        
             | faitswulff wrote:
             | Every community has it. The dep vs. vgo drama gave me the
             | same impression of Go at the time:
             | https://news.ycombinator.com/item?id=17063724
        
             | cies wrote:
              | Go's community is more pragmatic; Rust's is more purist,
              | and that is reflected in the language features (more
              | functional; freer in allowing you to use it for any
              | purpose, where Go is network-app specific; stricter
              | typing), the licensing, and the attitude towards
              | collaboration.
              | 
              | That collaboration thing is why Actix exploded, I think.
              | While mostly an isolated incident, it does show some
              | clash between the author's values (and possibly the
              | author's employer's (MSFT) values) and the values of the
              | general Rust community. I would not say that reflects on
              | the maturity of the language or ecosystem.
             | 
              | In Go a lot of stuff is Google-dictated. In Rust it's a
              | true open-governance innovation project (looking to
              | become a non-profit). Since Go is a very specific
              | language (made for networked apps, with only one way to
              | do concurrency) and Rust is very broad (a true
              | general-purpose language), it is easy to see how Go
              | matured so quickly (not much to mature) and also why it
              | got a bit old so quickly (it ignores most innovations
              | in computer science of the last decades).
        
               | rob74 wrote:
               | > Go is network-app specific
               | 
                | Just because it is used most for "network apps"
                | doesn't mean it's limited to that. On the other hand,
                | you could argue that Rust is a wrong fit for anything
                | _except_ performance-critical applications, because
                | for anything else it's not worth saddling yourself
                | with the added complexity.
               | 
                | > and Rust is very broad (a true general-purpose
                | language), it is easy to see how Go matured so
                | quickly (not much to mature) and also why it got a
                | bit old so quickly
               | 
                | This simplicity is the thing Go opponents like to
                | point out (or mock) most, and what Go fans would
                | actually tell you is one of the best features of the
                | language. It's actually refreshing to have one
                | language that doesn't try to be everyone's darling by
                | implementing every conceivable feature - we already
                | have enough of those: Rust, C++, Java, etc. But you
                | don't have to take my word for it; you can also read
                | the first sentences of this blog post:
                | https://bradfitz.com/2020/01/30/joining-tailscale -
                | he puts it better than I could...
        
               | cies wrote:
                | As the grandparent commenter I cannot downvote, so
                | that wasn't me.
                | 
                | Google has explicitly shown no intent to make Go a
                | fit beyond network apps. You can hack something into
                | doing more than originally intended, but then you are
                | usually operating "outside of warranty".
               | 
               | > On the other hand, you could argue that Rust is a wrong
               | fit for anything _except_ performance-critical
               | applications
               | 
                | Well, Rust does more than C-level high performance.
                | It also allows for very safe/maintainable code that's
                | high-perf. Neither of these is some special feature;
                | ANY software needs to be performant, bug-free and
                | maintainable to some degree. And as the size of the
                | codebase grows, the lack of these properties in a
                | language rears its ugly head.
               | 
               | The added complexity cost, as you mentioned, is IMHO not
               | a real cost. It's more like an investment. You go with
               | Rust, you have to pay up front: learning new concepts,
               | slower dev't, more verbosity/syntax/gibberish-y code. But
               | once the codebase grows, you(r team) have grown
               | accustomed to this and you start to reap the benefits of
               | Rust's safety, freedom to choose your concurrency
               | patterns, maintainability and verbosity.
               | 
                | Now I do want to point out a REAL cost that was not
                | mentioned yet, which Rust carries much more than Go:
                | compile time. This sucks for Rust. Given the
                | complexity of Rust, I don't expect it to ever come
                | close to Go's lightning compiles. It will improve; it
                | is constantly improving. And IDE features that
                | prevent compiles (e.g. error highlights) are maturing
                | and will help too. But this is a big reason for
                | picking Go.
               | 
                | Your Jedi mind trick about Go's "simplicity" does not
                | work on me :) ... Its fast compiles (a result of that
                | simplicity) are the bonus. Not being able to use the
                | language beyond network apps or goroutine concurrency
                | is simply a minus for every learner (not for Google
                | as its creator), as it limits the use of your new
                | skill. That they kept the billion-dollar mistake
                | (null) in there is simply unforgivable.
               | 
                | And whether Go will ever add features, we have yet to
                | see. Java also intended to stay lean; well...
        
               | nemothekid wrote:
               | The Go community has a very similar story, where someone
               | released a web framework, with an unorthodox set of
               | features, and was flamed to the point where he abandoned
               | the project and quit OSS.
               | 
               | https://github.com/go-martini/martini
        
               | fjp wrote:
               | What was so unorthodox/upsetting to people there?
        
               | nemothekid wrote:
                | Martini used the service-injection pattern and made
                | use of reflection to do so. It was a very popular
                | framework and one of the first in Go (it currently
                | has ~10k stars), and its use of reflection became a
                | major point of contention in the community.
        
             | andoriyu wrote:
              | I wouldn't call the Rust ecosystem less mature than
              | Go's, but I wouldn't call either of them mature.
              | 
              | Both have ups and downs. Rust definitely has an
              | immature web-service ecosystem, and that's a result of
              | an immature async I/O ecosystem. At the same time, Go
              | has those things out of the box.
        
           | Communitivity wrote:
           | Agreed. One could argue that a level of drama in the
           | community is a sign of growing maturity and wider interest in
           | the language, because it is evidence there is no longer a
           | niche monoculture of devs all thinking the same way.
           | 
           | In the words of Steve Klabnik "Rust has been an experiment in
           | community building as much as an experiment in language
           | building. Can we reject the idea of a BDFL? Can we include as
           | many people as possible? Can we be welcoming to folks who
           | historically have not had great representation in open
           | source? Can we reject contempt culture? Can we be inclusive
            | of beginners?"
            | https://words.steveklabnik.com/a-sad-day-for-rust
           | 
           | The Actix issue was resolved, and Actix will continue under
            | new maintainers
            | (https://github.com/actix/actix-web/issues/1289). So I'd
            | argue the answer to those questions
           | is a 'yes'.
        
       | thedance wrote:
       | These kinds of posts would be much more interesting if they
       | discussed alternatives considered and rejected. For example why
       | did they choose Rust over C++?
        
         | The_rationalist wrote:
          | The most pressing undiscussed alternative is: why didn't
          | they update their 3-year-old Go version, yet had the double
          | standard of using Rust nightly... This blog post is a scam,
          | and their only reason for using Rust should be assumed:
          | it's because they wanted to.
        
           | [deleted]
        
           | thedance wrote:
           | That's exactly the trolling I was looking for :-) . It just
           | reads like they thought it would be cool to use Rust, so they
           | did, which is fine.
        
         | therockhead wrote:
         | The article mentioned that they have already used Rust
         | successfully in house, so when you consider that Rust is
         | inherently safer than C++, it seems like they picked the right
         | language.
        
       | brundolf wrote:
       | It's always good to see a case-study/anecdote, but nothing in
       | here is surprising. It also doesn't really invalidate Go in any
       | way.
       | 
       | Rust is faster than Go. People use Go, like any other technology,
       | when the tradeoffs between developer
       | iteration/throughput/latency/etc. make sense. When those cease to
       | make sense, a hot path gets converted down to something more
       | efficient. This is the natural way of things.
        
         | calcifer wrote:
          | This is a weirdly defensive comment, fighting against a
          | strawman. The article doesn't claim that it's surprising,
          | that it "invalidates" Go, or that it isn't the "natural way
          | of things".
        
           | brundolf wrote:
           | I'm not pushing back against the article, but against the
           | comments that tend to appear below articles like this. The
           | headline in particular, to someone who doesn't read the
           | article, could be taken as "Discord has decided that Rust is
           | better than Go and here's why", and run with.
        
         | kerkeslager wrote:
         | > It's always good to see a case-study/anecdote, but nothing in
         | here is surprising. It also doesn't really invalidate Go in any
         | way.
         | 
         | Well, sure, because categorizing languages as "valid/invalid"
         | doesn't make any sense.
         | 
         | But it does show _yet another_ example of how designing a
          | language to solve Google's fairly-unique problems doesn't
         | result in a general-purpose language suitable for solving most
         | people's problems.
        
           | d1zzy wrote:
           | > But it does show yet another example of how designing a
           | language to solve Google's fairly-unique problems doesn't
           | result in a general-purpose language suitable for solving
           | most people's problems.
           | 
           | How much of Google's infrastructure actually runs on Go tho?
           | :)
        
           | kikimora wrote:
            | Long GC pauses caused by large collections/caches are a
            | decades-long problem with no real widespread solution so
            | far. With Java and .NET you can resort to off-heap data.
            | Not sure if this is possible with Go.
        
             | dnautics wrote:
              | Erlang's basically solved it (and, arguably, solved it
              | decades ago); relevant, as Discord uses the Erlang VM
              | in places.
        
               | bsder wrote:
                | Erlang "solved" the problem by breaking the heap into
                | lots of little per-process heaps, so the GC can blast
                | through any one heap extremely quickly.
                | 
                | Would that actually work in this instance? It seems
                | like that LRU cache they're talking about is kind of
                | large.
        
               | kerkeslager wrote:
               | > Would that actually work in this instance? It seems
               | like that LRU cache they're talking about is kind of
               | large.
               | 
               | I can't say for sure without knowing what the contents of
               | that heap is, but I suspect that yes, it would work.
               | 
               | However, the reason the heaps are so small is that
               | they're each a lightweight thread, and in Erlang,
               | spinning up new threads is a way of life. It would be
               | hard to overstate what a fundamentally different
               | architecture this is.
        
             | kerkeslager wrote:
             | > Long GC pauses caused by large collections/caches are
             | decade long problem with no real wide spread solution so
             | far.
             | 
             | This is arguable, but the fact is that "large
             | collections/caches" isn't Discord's situation.
        
           | vogre wrote:
            | Keeping an LRU cache that large, with these performance
            | requirements, is not a "most people's problem".
            | 
            | Go is actually great for solving most people's problems
            | with web servers, while Rust is better for edge cases.
        
             | kerkeslager wrote:
             | > Keeping LRU cache that large with these performance
             | requirements is not a "most people's problem".
             | 
             | Sure, but that's not what I said.
             | 
             | Any program of sufficient complexity will run into at least
             | one critical problem that isn't a "most people's problem".
             | A well-written general-purpose language implementation will
             | have been written in such a way that that problem isn't
             | totally intractable.
             | 
             | > Go is actually great to solve most people's problem with
             | web servers, while Rust is better for edge cases.
             | 
             | Most people's problem with web servers is writing a CRUD
             | app, which is going to be easiest in something like
             | Python/Django/PostGres/Apache. It's not the new shiny, but
             | it includes all the usual wheels so you don't have to
             | reinvent them in the name of "simple". Similar toolsets
             | exist for Ruby/Java/.NET. Give it a few years and similar
             | toolsets will be invented for Go, I'm sure.
        
         | Symmetry wrote:
          | I think the way I'd put it is that languages with manual
          | memory management, like Rust, have more scope for
          | optimization than languages without it. You can just use
          | the gc crate in Rust and have almost the same ease of
          | development but the same performance problems you do in Go.
        
           | nicoburns wrote:
           | Notably, it didn't sound like the development in Rust was
           | particularly difficult anyway. That's certainly been my
           | experience in any case.
        
       | nemo1618 wrote:
       | I wonder if it would be feasible to rewrite the LRU cache (either
       | fully or in part) in a way that does not require the GC to scan
       | the entire cache.
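One version of that idea, assuming the cache contents can be expressed with pointer-free types: the Go collector skips scanning maps whose key and value types contain no pointers, so an index of integer offsets (into, say, a flat byte arena) stays out of the mark phase. An illustrative sketch, not Discord's code:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	const n = 1 << 21 // ~2M entries, illustrative

	// Key and value contain no pointers, so the GC never scans the
	// map's buckets; values could be offsets into a flat []byte
	// arena holding the real cache entries.
	idx := make(map[uint64]uint64, n)
	for i := uint64(0); i < n; i++ {
		idx[i] = i * 64
	}

	start := time.Now()
	runtime.GC()
	fmt.Printf("GC with %d-entry pointer-free map: %v\n",
		len(idx), time.Since(start))
}
```

Compare this with the same map keyed by strings and holding pointers, where every bucket must be traced on each cycle.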
        
         | kerkeslager wrote:
         | Yes, it's possible: that's generational garbage collection. But
         | last I heard, Google decided writing a modern GC was too
         | complicated.
         | 
         | They're probably right, because Google doesn't need it. But for
         | everyone else who decided to use a language designed to solve
         | Google's fairly-unique problems as if it were a general-purpose
         | language: that kind of sucks, doesn't it?
        
         | ssoroka wrote:
          | Check out this post, which describes exactly that process:
          | https://blog.gopheracademy.com/advent-2018/avoid-gc-overhead...
        
       | dancemethis1 wrote:
       | Well, none of it matters since Discord is hostile software. No
       | language will solve their privacy-trampling deeds.
        
       | dfee wrote:
       | The one problem I'm curious as to how channel-based chat
        | applications solve, to which my google-fu has never led me in
       | the right direction: how do you handle subscriptions?
       | 
       | I imagine a bunch of front end servers managing open web sockets
        | connections, and also providing filtering/routing of newly
       | published messages. Alas, it's probably best categorized as a
       | multicast-to-server, multicast-to-user problem.
       | 
       | Anyways, if there's an elegant solution to this problem, would
       | love to learn more.
        
         | Pfhreak wrote:
         | Not sure if this is exactly what you are looking for, but I'd
         | do some digging into consistent hash rings.
        
           | dfee wrote:
           | Oh, interesting:
           | https://en.m.wikipedia.org/wiki/Consistent_hashing
           | 
           | > Consistent hashing maps objects to the same cache machine,
           | as far as possible. It means when a cache machine is added,
           | it takes its share of objects from all the other cache
           | machines and when it is removed, its objects are shared among
           | the remaining machines.
           | 
            | I guess the challenge here is that subscriptions are
            | sparse: i.e., one WS connection can carry multiple
            | channel subscriptions, thus undermining the consistent
            | hash.
        
             | Pfhreak wrote:
             | There's a number of ways to tweak the algorithm, e.g. by
             | generating multiple hashes per endpoint and then
             | distributing them around a unit circle.
             | 
             | I've seen this used to consistently allocate customers to a
             | particular set of servers, not just ensure you are hitting
             | the right cache. It doesn't fully solve the subscription
             | issue where multiple people are in multiple channels, but
             | it could probably be used as a building block there.
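A minimal sketch of such a ring, with "multiple hashes per endpoint" implemented as virtual nodes (server names and replica count are illustrative):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// ring is a minimal consistent-hash ring with virtual nodes.
type ring struct {
	points []uint32          // sorted hash positions on the circle
	owner  map[uint32]string // hash position -> server name
}

func hashOf(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// newRing places `replicas` virtual nodes per server on the circle;
// more replicas means a smoother key distribution.
func newRing(servers []string, replicas int) *ring {
	r := &ring{owner: map[uint32]string{}}
	for _, s := range servers {
		for i := 0; i < replicas; i++ {
			p := hashOf(fmt.Sprintf("%s#%d", s, i))
			r.points = append(r.points, p)
			r.owner[p] = s
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// lookup returns the server owning a key: the first virtual node
// clockwise from the key's hash, wrapping around at the end.
func (r *ring) lookup(key string) string {
	h := hashOf(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0
	}
	return r.owner[r.points[i]]
}

func main() {
	r := newRing([]string{"gw-1", "gw-2", "gw-3"}, 100)
	fmt.Println(r.lookup("channel:12345"))
}
```

Adding or removing a server only remaps the keys adjacent to its virtual nodes, which is the property that makes this useful for sticky routing of channels to gateway servers.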
        
       | johnmc408 wrote:
       | Non programmer here, but would it make sense to add a keyword (or
       | flag) to Go to manually allocate a piece of memory (ie not use
        | GC). That way, for some use cases, you could avoid GC on the
       | critical path. Then when GC happened, it could be very fast as
       | there would be far less to pause-and-scan (in this use case
       | example). Obviously this would have to be optional and
       | discouraged...but there seems to be no way to write an intensive
       | real-time app with a GC based language. (again non-programmer
       | that is writing this to learn more ;-)
        
         | nanny wrote:
         | >would it make sense
         | 
         | Probably, yeah. But the Golang team would never add such a
         | feature because of their philosophy of keeping the language
         | simple.
        
         | zys5945 wrote:
         | That would imply a drastic change to the language design.
         | Essentially you are asking for 2 code generators (one for code
         | managed by the go runtime and one managed by the programmer).
         | It might be possible but it's most likely not gonna happen.
        
         | correct_horse wrote:
         | I think the bigger problem with Go is a lack of GC options.
         | Java, on the other end of the spectrum has multiple GC
         | algorithms (i.e. the Z garbage collector, Shenandoah, Garbage-
         | First/G1) each with tunables (max heap size, min heap size, for
          | more see [1]). Java has other issues, but it solves real
          | business problems by having so many garbage collector
          | tunables. Go's
         | philosophy on the matter seems to be that the programmer
         | shouldn't have to worry about such details (and GC tunables are
         | hard to test). Which is great, until the programmer does have
         | to worry about them.
         | 
         | [1] https://docs.oracle.com/javase/9/gctuning/garbage-first-
         | garb...
        
           | geodel wrote:
           | > Java other issues, but it solves real business problems by
           | having so many garbage collector tunables.
           | 
            | That real business problem is that Java generates a
            | boatload of garbage, so the GC needs a lot more
            | performance tuning to make applications run normally.
        
         | vips7L wrote:
         | So like D?
         | 
         | https://dlang.org/spec/attribute.html#nogc
        
           | bluebasket wrote:
           | i thought go has a GOGC=off option or something? did they
           | remove it?
        
             | wwarner wrote:
             | no it's there.
             | https://golang.org/pkg/runtime/debug/#SetGCPercent
        
         | ozten wrote:
         | Yes and no. You can get very clever by pre-allocating memory
         | and ensuring it is never garbage collected, but at that point
         | you're opening yourself up to new types of bugs and other
         | performance issues as you try to scale your hack.
         | 
          | As you fight your language, your GC-avoidance system will
         | become larger and larger. At some point you might re-evaluate
         | your latency requirements, your architecture, and which are the
         | right tools for the job.
        
           | sascha_sl wrote:
           | Go has a tool for this job.
           | 
           | https://golang.org/pkg/sync/#Pool
        
             | jhgg wrote:
              | Objects checked into a sync.Pool get cleaned up on GC.
              | It used to clear the whole pool, but now I think it
              | drops half each GC cycle. If you want to say "objects
              | checked in should live here forever and not free
              | themselves unless I want them to", sync.Pool is not the
              | tool for the job.
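To make those semantics concrete, here's a minimal sync.Pool sketch (buffer size and names are illustrative). It fits transient per-request scratch space, not the long-lived cache discussed in the article, precisely because the GC is free to drop pooled objects:

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool caches temporary byte buffers between requests. The GC
// may discard pooled buffers at any collection, so this reduces
// allocation churn but cannot pin a working set in memory.
var bufPool = sync.Pool{
	// New is called when Get finds the pool empty.
	New: func() interface{} { return make([]byte, 0, 4096) },
}

func handleMessage(payload string) string {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf[:0]) // reset length, return for reuse
	buf = append(buf, "processed: "...)
	buf = append(buf, payload...)
	return string(buf)
}

func main() {
	fmt.Println(handleMessage("hello"))
}
```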
        
         | unlinked_dll wrote:
          | GC isn't "slow" so much as non-deterministic. Modern
         | garbage collectors are extremely fast, in fact.
        
           | sorokod wrote:
           | Running every two minutes sounds pretty deterministic.
        
             | jhgg wrote:
             | It was perhaps too deterministic. What's not mentioned in
             | the blog is that after running for long enough, the cluster
              | would line up its GCs, and each node would do the 2-minute
             | GC at exactly the same time causing bigger spikes as the
             | entire cluster would degrade. I'm guessing all it takes is
             | a few day night cycles combined with a spike in traffic to
             | make all the nodes reset their forced GC timers to the same
             | time.
        
               | cellularmitosis wrote:
               | Interesting, sounds like 2 minutes + random fuzz might
               | avoid the thundering herd. Might be worth submitting a
               | patch to the golang team!
        
         | oconnor663 wrote:
         | There are two things you'd have to do at the same time that
         | make this complicated:
         | 
         | - You'd have to ensure that your large data structure gets
         | allocated entirely within the special region. That's simple
         | enough if all you have is a big array, but it gets more
         | complicated if you've got something like a map of strings. Each
         | map cell and each string would need to get allocated in the
         | special region, and all of the types involved would need new
         | APIs to make that happen.
         | 
         | - You'd have to ensure that data structures in your special
         | region never hold references to anything outside. Since the
         | whole point of the region is that the GC doesn't scan it,
         | nothing in the region will be able to keep anything outside the
         | region alive. Any external references could easily become
         | dangling pointers to freed memory, which is the sort of
         | security vulnerability that GC itself was designed to prevent.
         | 
         | All of this is doable in theory, but it's sufficiently
         | difficult, and it comes with sufficiently many downsides, that
         | it makes more sense for a project with these performance needs
         | to just use C or Rust or something.
        
           | zozbot234 wrote:
           | > Since the whole point of the region is that the GC doesn't
           | scan it, nothing in the region will be able to keep anything
           | outside the region alive.
           | 
           | You can treat external references as GC roots.
        
           | jimbo1qaz wrote:
           | Both of these requirements kinda remind me of Microsoft's
           | Verona language.
        
       | highfrequency wrote:
       | Curious about their definition of "response time" in the graph at
       | the end. They're quoting ~20 microseconds so I assume this
       | doesn't involve network hops? Is this just the CPU time it takes
       | a Read State server to do one update?
        
         | jhgg wrote:
          | Correct. This is the internal time it takes to process the
          | message. Once a node is "warm", thanks to its large caches,
          | it's mostly in-memory operations and queueing for
          | persistence, which happens in the background.
        
           | Sikul wrote:
           | Also worth noting: Most requests to the service have to
           | update many Read States. For instance, when you @everyone in
           | the Minecraft server we have to update over 500,000 Read
           | States.
        
       | reggieband wrote:
       | When I see this kind of GC performance, I wonder why you wouldn't
       | change the implementation to use some sort of pool allocator. I
       | am guessing each Read State object is identical to one another
       | (e.g. some kind of struct) so why not pre-allocate your memory
       | budget of objects and just keep an unused list outside of your
        | HashMap? In a way this is even closer to a ring where upon
       | ejection you could write the object to disk (or Cassandra), re-
       | initialise the memory and then reuse the object for the new
       | entry.
       | 
       | I suppose that won't stop the GC from scanning the memory though
       | ... so maybe they had something akin to that. I assume that a
       | company associated with games and with some former games
       | programmers would have thought to use pool allocators. Honestly,
       | if that strategy didn't work then I would be a bit frustrated
       | with Go.
       | 
       | I have to say, out of all of the non-stop spamming of Rust I see
       | on this site - this is definitely the first time I've thought to
       | myself that this is a very appropriate use of the language. This
       | kind of simple yet high-throughput workhorse of a system is a
       | great match for Rust.
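For what it's worth, the pre-allocated pool described above can be sketched roughly like this (struct fields and sizes are made up, and as other replies note, Go's GC will still scan the pool unless the structs are pointer-free):

```go
package main

import "fmt"

// readState stands in for the identical per-entry struct; fields
// are hypothetical. Keeping it pointer-free also keeps the pool
// cheap for the GC to scan.
type readState struct {
	lastMessageID uint64
	mentionCount  uint32
	inUse         bool
}

// pool pre-allocates the whole memory budget once and hands out
// slot indices from a free list, so steady-state operation never
// allocates.
type pool struct {
	slots []readState
	free  []int // indices of unused slots
}

func newPool(n int) *pool {
	p := &pool{slots: make([]readState, n)}
	for i := n - 1; i >= 0; i-- {
		p.free = append(p.free, i)
	}
	return p
}

// acquire returns the index of a reinitialized slot, or -1 if the
// budget is exhausted.
func (p *pool) acquire() int {
	if len(p.free) == 0 {
		return -1
	}
	i := p.free[len(p.free)-1]
	p.free = p.free[:len(p.free)-1]
	p.slots[i] = readState{inUse: true}
	return i
}

// release reinitializes a slot (e.g. after persisting it on
// ejection) and returns it to the free list for reuse.
func (p *pool) release(i int) {
	p.slots[i] = readState{}
	p.free = append(p.free, i)
}

func main() {
	p := newPool(500_000)
	i := p.acquire()
	p.slots[i].mentionCount++
	fmt.Println(p.slots[i].mentionCount)
	p.release(i)
}
```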
        
         | monocasa wrote:
          | Yeah, they already weren't allocating; it was a GC pause
          | that just scanned, came up with essentially no extra
          | garbage, and repeated every two minutes.
        
           | azakai wrote:
           | A pool allocator could have reduced the number of existing
           | allocations (1 big one instead of many small ones), making
           | those spikes less significant. (But that depends on how Go
           | handles interior pointers and GC, so I'm not sure.)
        
             | runevault wrote:
             | Allocations weren't the problem. It was the fact that,
             | every 2 minutes, the GC would trigger because of an
             | arbitrary decision by the Go team and scan their entire
             | heap, find little to nothing to deallocate, then go on its
             | merry way.
        
         | _ph_ wrote:
          | The problem here wasn't that the GC lacked performance when
          | collecting garbage, which a pool allocator would have
          | helped with; rather, they produced hardly any garbage
          | (good), but the GC ran nevertheless to check whether memory
          | could be returned to the OS. Suppressing that would
          | probably have removed the spikes.
        
         | KMag wrote:
         | In this case, since the lines of code that can touch the
         | manually managed object pool are probably few and easily
         | reviewed and audited, I don't have any problem with your
         | advice.
         | 
         | I realize you're not advocating pervasive use of the technique,
         | but if someone reading this is going to make pervasive use of
         | manually managed object pools in a GC'd language, they should
         | at least consider the possibility of moving to a language with
         | both good language support for manually managed memory and a
         | good ecosystem of tooling around manual memory management.
         | 
         | Manually managed object pools in a language designed around GC
         | don't fully get rid of the costs of GC, and re-expose the
         | program to most of the errors (primarily use-after-free,
         | double-free, and leaks related to poorly reasoned ownership)
         | that motivated so much effort in developing garbage collectors
         | in the first place.
        
         | masklinn wrote:
         | > When I see this kind of GC performance, I wonder why you
         | wouldn't change the implementation to use some sort of pool
         | allocator.
         | 
          | The allocations were not the issue: the article notes that
          | they did little to no allocation, hence the GC only running
          | on forced triggers (every 2 minutes).
        
       | the-alchemist wrote:
       | Looks like the big challenge is managing a large, LRU cache,
       | which tends to be a difficult problem for GC runtimes. I bet the
       | JVM, with its myriad tunable GC algorithms, would perform better,
       | especially Shenandoah and, of course, the Azul C4.
       | 
       | The JVM world tends to solve this problem by using off-heap
       | caches. See Apache Ignite [0] or Ehcache [1].
       | 
       | I can't speak for how their Rust cache manages memory, but the
       | thing to be careful of in non-GC runtimes (especially non-copying
       | GC) is memory fragmentation.
       | 
        | It's worth mentioning that the Dgraph folks wrote a better Go
       | cache [2] once they hit the limits of the usual Go caches.
       | 
       | From a purely architectural perspective, I would try to put
       | cacheable material in something like memcache or redis, or one of
       | the many distributed caches out there. But it might not be an
       | option.
       | 
       | It's worth mentioning that Apache Cassandra itself uses an off-
       | heap cache.
       | 
       | [0]: https://ignite.apache.org/arch/durablememory.html [1]:
       | https://www.ehcache.org/documentation/2.8/get-started/storag...
       | [2]: https://blog.dgraph.io/post/introducing-ristretto-high-
       | perf-...
        
         | stingraycharles wrote:
         | > The JVM world tends to solve this problem by using off-heap
         | caches. See Apache Ignite [0] or Ehcache [1].
         | 
          | For those who care: I was interested in how off-heap
          | caching works in Java, so I did some quick searching around
          | the Apache Ignite code.
         | 
         | The meat is here:
         | 
         | - GridUnsafeMemory, an implementation of access to entries
         | allocated off-heap. This appears to implement some common
         | Ignite interface, and invokes calls to a "GridUnsafe" class
         | https://github.com/apache/ignite/blob/53e47e9191d717b3eec495...
         | 
         | - This class is the closest to the JVM's native memory, and
         | wraps sun.misc.Unsafe:
         | https://github.com/apache/ignite/blob/53e47e9191d717b3eec495...
         | 
         | - And this, sun.misc.Unsafe, is what it's all about:
         | http://www.docjar.com/docs/api/sun/misc/Unsafe.html
         | 
         | It's very interesting because I did my fair share of JNI work,
         | and context switches between JVM and native code are typically
         | fairly expensive. My guess is that this class was likely one of
         | the reasons why Sun ended up implementing their (undocumented)
         | JavaCritical* etc functions and the likes.
        
           | chrisseaton wrote:
           | > context switches between JVM and native code are typically
           | fairly expensive
           | 
           | Aren't these Unsafe memory read and write methods
           | intrinsified by any serious compiler? I don't believe they're
           | using JNI or doing any kind of managed/native transition,
           | except in the interpreter. They turn into the same memory
           | read and write operations in the compiler's intermediate
           | representation as Java field read and writes do.
        
           | winrid wrote:
           | The idea is that that call is still less expensive than going
           | over the wire and MUCH less expensive than having the GC go
           | through that heap now and then.
        
             | stingraycharles wrote:
              | Yes, sorry, I should have elaborated: those Critical
              | JNI calls avoid locking the GC and are in general much
              | more lightweight. This is available to normal JNI devs
              | as well; it's just not documented. They were primarily
              | intended for some internal things that Sun needed.
             | 
             | I'm now guessing that this might actually have been those
             | Unsafe classes as an intended use case. It makes total
             | sense and I can see how that will be very fast.
        
         | tsimionescu wrote:
         | > I can't speak for how their Rust cache manages memory, but
         | the thing to be careful of in non-GC runtimes (especially non-
         | copying GC) is memory fragmentation.
         | 
         | As far as I know, a mark-and-sweep collector like Go's doesn't
         | have any advantage over malloc/free when it comes to memory
         | fragmentation. Am I missing some way in which Go's GC helps
         | with fragmentation?
        
           | lossolo wrote:
           | Go GC implementation uses memory allocator that was based on
           | TCMalloc (but derived from it quite a bit). They use a free
           | list of multiple fixed allocatable size-classes, which helps
           | in reducing fragmentation. That's why Go GC is non-copying.
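A toy illustration of the size-class idea (the classes below are made up, not Go's real table): rounding requests up to fixed classes means any freed slot can satisfy any later request of the same class, bounding external fragmentation at the cost of some internal waste.

```go
package main

import "fmt"

// Illustrative size classes only; a real allocator (TCMalloc,
// Go's runtime) uses a much larger, carefully chosen table.
var classes = []int{8, 16, 32, 48, 64, 96, 128}

// sizeClass rounds a request up to the smallest class that fits;
// requests bigger than all classes get their own spans.
func sizeClass(n int) int {
	for _, c := range classes {
		if n <= c {
			return c
		}
	}
	return n
}

func main() {
	fmt.Println(sizeClass(33)) // rounds up to 48
}
```

Because all slots of a class are interchangeable, a non-moving collector never needs to compact within a span, which is what lets Go get away without a copying GC.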
        
         | pshc wrote:
         | Great comment and thanks for the reading material.
         | 
         | Now I'm wondering if there's a Rust library for a generational
         | copying arena--one that compacts strings/blobs over time.
        
           | steveklabnik wrote:
           | Generational arenas yes, but copying, I'm not aware of one.
           | It's very hard to get the semantics correct, since you can't
           | auto-re-write pointers/indices.
        
             | ithkuil wrote:
              | Perhaps such a library could help you record the
              | locations of the variables that contain pointers to the
              | strings and keep those pointers up to date as ownership
              | of the string moves from variable to variable?
              | 
              | In other words, doing some of the work a moving,
              | compacting collector would do during compaction, but
              | continuously during normal program execution.
        
               | steveklabnik wrote:
               | There's no way to hook into the move, so I don't see how
               | it would be possible, or at least, not with techniques
               | similar to compacting GCs.
        
               | masklinn wrote:
               | Maybe by reifying the indirection? The compacting arena
               | would hand out smart pointers which would either always
                | bounce through something (to get from an identity to the
               | actual memory location, at a cost) or it'd keep track and
               | patch the pointers it handed out _somehow_.
               | 
               | Possibly half and half, I don't remember what language it
               | was (possibly obj-c?) which would hand out pointers, and
               | on needing to move the allocations it'd transform the
               | existing site into a "redirection table". Accessing
               | pointers would check if they were being redirected, and
               | update themselves to the new location if necessary.
               | 
               | edit: might have been the global refcount table? Not
               | sure.
        
               | steveklabnik wrote:
               | Yeah so I was vaguely wondering about some sort of double
               | indirection; the structure keeps track of "this is a
               | pointer I've handed out", those pointers point into that,
               | which then points into the main structure.
               | 
                | I have no idea if this is actually a good idea; seems
                | like you get rid of a lot of the cache locality
                | advantages.
        
               | masklinn wrote:
               | I don't know that the cache locality would be a big issue
               | (your indirection table would be a small-ish array),
               | however you'd eat the cost of doubling the indirections,
               | each pointer access would be two of them.
        
               | mypalmike wrote:
               | This sounds a lot like classic MacOS (pre-Darwin) memory
               | allocation. You were allocated a handle, which you called
               | Lock on to get a real pointer. After accessing the
               | memory, you called Unlock to release it. There was
               | definitely a performance hit for that indirection.
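A tiny sketch of that handle/indirection-table idea (in Go for brevity, though the discussion is about Rust): callers keep stable handles, and compaction patches only the table, never the handles themselves.

```go
package main

import "fmt"

// store hands out integer handles instead of pointers. The table
// maps each handle to the item's current position, so a compactor
// can move items and fix up only the table.
type store struct {
	table map[int]int // handle -> index into items
	items []string
	next  int // next handle to hand out
}

func newStore() *store { return &store{table: map[int]int{}} }

func (s *store) put(v string) int {
	h := s.next
	s.next++
	s.table[h] = len(s.items)
	s.items = append(s.items, v)
	return h
}

// get pays one extra indirection per access: handle -> table -> item.
func (s *store) get(h int) string { return s.items[s.table[h]] }

func (s *store) remove(h int) {
	s.items[s.table[h]] = "" // leave a hole; compact() reclaims it
	delete(s.table, h)
}

// compact rebuilds the backing array without holes and patches the
// table; every handle callers hold stays valid.
func (s *store) compact() {
	newItems := make([]string, 0, len(s.table))
	for h, i := range s.table {
		s.table[h] = len(newItems)
		newItems = append(newItems, s.items[i])
	}
	s.items = newItems
}

func main() {
	s := newStore()
	a := s.put("alpha")
	b := s.put("beta")
	s.remove(a)
	s.compact()
	fmt.Println(s.get(b))
}
```

This is essentially the classic Mac Handle scheme with the Lock/Unlock step left implicit; the double indirection is the performance cost mentioned above.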
        
         | dochtman wrote:
          | On the one hand, yes. On the other hand, all of this sounds
         | much more complex and fragile. This seems like an important
         | point to me:
         | 
         | "Remarkably, we had only put very basic thought into
         | optimization as the Rust version was written. Even with just
         | basic optimization, Rust was able to outperform the hyper hand-
         | tuned Go version."
        
           | pkolaczk wrote:
           | This is consistent with my observations of porting Java code
           | to Rust. Much simpler and nicer to read safe Rust code (no
           | unsafe tricks) compiles to programs that outperform carefully
           | tuned Java code.
        
             | marta_morena wrote:
             | Sorry, but `Much simpler and nicer` is something that I
             | highly doubt when you talk about Java to Rust. Unless the
             | people writing the Java code were C programmers, lol, in
             | which case I feel for you.
        
               | efaref wrote:
               | Rust's type system is more expressive than Java's so you
               | can end up with much nicer to read code with stricter and
               | more obvious invariants. There also tends to be way less
               | of the `EnterpriseJavaBeanFactory`-style code in
               | idiomatic Rust.
        
               | Polyisoprene wrote:
                | Trolling is fun and all, but I wouldn't say Rust's
                | type system is that much more advanced than Java's.
                | The borrow checker definitely helps to catch errors,
                | but I would rate them at basically the same level.
        
               | gameswithgo wrote:
               | So, being able to have a value or a reference vs
               | everything always being a reference is a pretty massive
               | difference. Option types instead of nulls is also a
               | pretty large difference. Generics being better from a
               | performance perspective is a large difference too.
               | 
               | As well, traits are quite a bit different than classes,
               | but not always in a good way!
        
               | gameswithgo wrote:
                | Remember that the comparison is a hand-tuned Java
                | program vs. a naive Rust implementation. When you are
                | trying to extract performance out of Java, things can
                | get pretty messy.
        
           | novok wrote:
           | A C app will tend to outperform a Java or Golang app by 3x,
           | so it isn't too surprising.
        
             | georgebarnett wrote:
             | Could you please provide a source for this?
             | 
             | Java is very fast and 3X slower is a pretty wild claim.
        
               | gameswithgo wrote:
               | It depends greatly on the problem domain. The difference
               | might be near zero, or you might be able to get ~16x
                | better performance (using, say, AVX-512 intrinsics). Then
               | again, is intrinsics really C? Not really, but you can do
               | it. What if you have to abandon using classes when you
               | want to, in order to get the memory layout you want in
               | Java, are you still using Java?
        
               | squarefoot wrote:
               | 3x might be a bit too much today, but it's definitely
               | slower than C. Also to be considered is the VM overhead,
               | not just the executed code.
               | 
               | Here are some benchmarks; I'll leave to the experts out
               | there to confirm or dismiss them.
               | 
               | https://benchmarksgame-
               | team.pages.debian.net/benchmarksgame/...
        
               | marcosdumay wrote:
               | Those people have a really good claim to have the most
               | optimized choice on each language. They've found Java to
               | be 2 to 3 times slower than C and Rust (with much slower
               | outliers).
               | 
               | https://benchmarksgame-
               | team.pages.debian.net/benchmarksgame/...
               | 
                | In the real world, you won't get things as optimized
                | in higher-level languages, because optimized code
                | looks completely unidiomatic. A 3x speedup over Java
                | is a pretty normal claim.
        
           | chubs wrote:
            | I found similarly when I ported an image-resizing
            | algorithm from Swift to Rust: I'm experienced in Swift
            | and thus wrote it idiomatically, and I have little Rust
            | experience and thus wrote that version naively; yet the
            | Rust algorithm was still twice(!) as fast. And Swift
            | doesn't even have a GC slowing things down!
        
             | dgellow wrote:
             | ARC, used by Swift, has its own cost.
        
       | correct_horse wrote:
       | I've heard lots of hot takes on "what Go really is". Here's mine.
       | 
       | Go is what would have happened if Bell Labs wrote Java.
        
         | kick wrote:
         | Minor nitpick: That already happened, Limbo is what happened
         | when Bell Labs wrote Java.
        
           | anthk wrote:
           | More like Limbo and Inferno.
        
             | kick wrote:
             | Limbo doesn't only run on Inferno; anything with the Dis VM
             | will work.
        
           | [deleted]
        
           | monocasa wrote:
            | And Go is very much derived from Plan 9. It could be
            | considered a sibling of Limbo in a lot of ways.
        
         | _ph_ wrote:
          | Interesting comment, as 2 of the main Go creators (Ken
          | Thompson and Rob Pike) did work at Bell Labs. So while I
          | doubt they tried to write Java, Go in a sense was written
          | by Bell Labs :).
         | 
         | (And Kernighan was their floor-mate too, that must have been a
         | stunningly great environment)
        
         | cdelsolar wrote:
         | OK boomer
        
       | joseluisq wrote:
       | That's why the {blazing-fast} term is becoming popular.
       | 
       | Rust won again.
        
       | Thaxll wrote:
        | Really interesting post; however, they're using a 2+-year-old
        | runtime. Go 1.9.2 was released 2017/10/25. Why did they not
        | even try Go 1.13?
        | 
        | For me the interesting part is that their new implementation
        | in Rust, with a new data structure, is less than 2x faster
        | than an implementation in Go using a 2+-year-old runtime.
        | 
        | It shows how fast Go is vs. a very optimized language + a new
        | data structure with no GC.
       | 
       | Overall I'm pretty sure there was a way to make the spikes go
       | away.
       | 
       | Still great post.
        
       | mangatmodi wrote:
       | Why would they switch to rust, rather than upgrading from 3 years
       | old version?
        
         | jhgg wrote:
          | This blog post is perhaps a bit "after the fact": we had
          | made the switch over in mid-2019, and wanted to try out
          | Rust for services like this due to adoption elsewhere in
          | the company. Also, after upgrading this service across 4
          | Go versions and noticing it didn't materially change
          | performance, we decided to just spend our time on the
          | rewrite (for fun, and latency) and to get a head start in
          | the asynchronous Rust ecosystem.
         | 
         | This blog post kinda internally matches our upgrade to
         | std::futures and tokio 0.2, away from futures 0.1.
        
           | typical182 wrote:
           | Do you have any load tests or synthetic benchmarks that are
           | still capable of producing this?
           | 
           | It would be interesting to see what a more modern Go would do
           | given there have been a bunch of tail latency GC improvements
           | since your older 1.9 Go version... and in an ideal world, it
           | would be nice to file an issue on the tracker if you were
           | still seeing this.
           | 
           | (Maybe that ends up later helping another one of your Go
           | services, or maybe it just helps the community, or maybe it's
           | a topic for another interesting blog...).
           | 
           | In any event, thanks for taking the time to write up and
           | share this one.
        
           | graphememes wrote:
           | Shiny toy syndrome, basically.
        
           | The_rationalist wrote:
            | Out of curiosity, why didn't you choose Kotlin? It can
            | reuse the Java ecosystem, which allows you to save tons
            | of money, and gives you advanced features and
            | scalability. It is a sexier and more ergonomic language
            | too. And with e.g. ZGC, you can have a GC that is finely
            | tunable and has very low latency.
            | 
            | By choosing Rust you will suffer a great deal from the
            | limitations of its poor, not-production-ready ecosystem.
            | I'm not even talking about the immaturity of the
            | async/await support.
        
             | therockhead wrote:
              | > By choosing Rust you will suffer a great deal from the
              | limitations of its poor, not-production-ready ecosystem.
             | 
             | Why do you think that? Seems like Rust is a great choice
             | for this type of high performance work.
        
               | ncmncm wrote:
               | Rust does best when the number of lines of code that must
               | be parsed in an edit-compile-test loop is small. When the
               | sources that must be parsed get large, coders suffer.
               | 
               | It is doubtful that this will improve, much, without
               | breaking changes to the language. The range of code over
               | which type inference operates, or at least programmers'
               | reliance on it, would need to contract by quite a lot.
               | There would be Complaints.
        
               | steveklabnik wrote:
               | Type inference only operates within function bodies. It's
               | also not the thing that causes compilation to be slow.
        
             | kaoD wrote:
             | > you can have a GC
             | 
             | But can I _not_ have it?
        
             | jhgg wrote:
              | More people at our company know Rust than Kotlin. It's used
              | across multiple teams (from our game SDK and native
              | encoder/capture pipeline to our chat infra team's Erlang
              | Rust NIFs), whereas Kotlin is only used by our Android
              | team.
             | 
             | We are willing to adopt early technologies we think are
             | promising, and contribute to or fund projects to continue
             | to advance the ecosystem. Yes, this means the path less
              | traveled, but in the case of Rust (and, in the past, Elixir
              | and even React Native) we think the trade-offs are worth
              | it.
             | 
             | Also the tokio team uses Discord for their chat stuff, so
             | it's nice to pop in to be able to ask for and offer help.
        
             | jen20 wrote:
             | > I'm not even talking about the immaturity of the async
             | await support.
             | 
             | The one that is 100% more mature than the Java async/await
             | support?
        
             | mping wrote:
             | Ecosystem is a real problem, but motivated engineers will
             | make it work no matter what. I mean, banks run on COBOL.
             | 
             | Besides, I am willing to bet idiomatic rust is 2x-10x
             | faster than idiomatic kotlin.
        
               | codehalo wrote:
               | Then that motivation should be applicable to Go as well.
        
           | crystaldev wrote:
           | > to get a head start into the asynchronous rust ecosystem.
           | 
           | Sounds resume-driven.
        
           | The_rationalist wrote:
            | _Also, after upgrading between 4 golang versions on this
            | service and noticing it didn't materially change
            | performance, we decided to just spend our time on the
            | rewrite_ So you basically don't read the release changelogs
            | of a slow-iterating language like Go, yet hold the double
            | standard of keeping up with Rust nightly? Because Go 1.12
            | explicitly mentions performance improvements to its GC.
            | 
            | You just wanted to do it "for fun" (but is Rust, with its
            | immature ecosystem and all its issues, that fun?). This blog
            | is dishonest and shows amateurism at Discord.
            | 
            | BTW it's not too late: prove us right or wrong by
            | benchmarking the latest Go GC vs Rust.
        
       | shanev wrote:
       | When a company switches languages like this, it's usually because
       | the engineers want to learn something new on the VC's dime.
       | They'll make any excuse to do it. As many comments here show,
       | there are other ways to solve this problem.
        
       | deepsun wrote:
       | Wait, didn't the Go devs say they solved GC latency problems [1]?
       | 
       | (from 2015): "Go is building a garbage collector (GC) not only
       | for 2015 but for 2025 and beyond: A GC that supports today's
       | software development and scales along with new software and
       | hardware throughout the next decade. Such a future has no place
       | for stop-the-world GC pauses, which have been an impediment to
       | broader uses of safe and secure languages such as Go." [2]
       | 
       | [1] https://www.youtube.com/watch?v=aiv1JOfMjm0
       | 
       | [2] https://blog.golang.org/go15gc
        
       | buboard wrote:
       | maybe next year: why discord is switching to C
        
       | flafla2 wrote:
       | > After digging through the Go source code, we learned that Go
       | will force a garbage collection run every 2 minutes at minimum.
       | In other words, if garbage collection has not run for 2 minutes,
       | regardless of heap growth, go will still force a garbage
       | collection.
       | 
       | > We figured we could tune the garbage collector to happen more
       | often in order to prevent large spikes, so we implemented an
       | endpoint on the service to change the garbage collector GC
       | Percent on the fly. Unfortunately, no matter how we configured
       | the GC percent nothing changed. How could that be? It turns out,
       | it was because we were not allocating memory quickly enough for
       | it to force garbage collection to happen more often.
       | 
       | As someone not too familiar with GC design, this seems like an
       | absurd hack. That this 2-minute hardcoded limitation is not even
       | configurable comes across as amateurish even. I have no
       | experience with Go -- do people simply live with this and not
       | talk about it?
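[For reference, the runtime knob the article describes tuning on the fly is exposed in Go's standard library. A minimal sketch (the wrapper name is invented, not Discord's endpoint); note that, per the article, the 2-minute forced-collection timer fires regardless of this setting:]

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// SetAggressiveGC adjusts GOGC at runtime, the same knob the
// article's tuning endpoint changed. A percent of 50 requests a
// collection once the heap grows 50% past the previous live set.
func SetAggressiveGC(percent int) int {
	return debug.SetGCPercent(percent) // returns the previous value
}

func main() {
	prev := SetAggressiveGC(50)
	fmt.Println("previous GOGC:", prev)

	// Observe how many collections have completed so far.
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Println("completed GC cycles:", m.NumGC)
}
```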
        
         | brundolf wrote:
         | It does sound like Discord's case was fairly extraordinary in
         | terms of the _degree_ of the spike:
         | 
         | > We kept digging and learned the spikes were huge not because
         | of a massive amount of ready-to-free memory, but because the
         | garbage collector needed to scan the entire LRU cache in order
         | to determine if the memory was truly free from references.
         | 
         | So maybe this is one of those things that just doesn't come up
         | in most cases? Maybe most services also generate enough garbage
         | that that 2-minute maximum doesn't really come into play?
        
           | wongarsu wrote:
            | Games written in the Unity engine are (predominantly) written
           | in C#, a garbage collected language. Keeping large amounts of
           | data around isn't that unusual since reading from disk is
           | often prohibitively slow, and it's normal to minimize memory
           | allocation/garbage generation (using object pools, caches
           | etc), and manually trigger the GC in loading screens and in
           | other opportune places (as easy as calling
           | System.GC.Collect()). At 60 fps each frame is about 16ms. You
           | do a lot in those 16ms, adding a 4ms garbage collection
           | easily leads to dropping a frame. Of course whether that
           | matters depends on the game, but Unity and C# seem to handle
           | it well for the games that need tiny or no GC pauses.
           | 
           | But (virtually) nobody is writing games in Go, so it's
           | entirely possible that it's an unusual case in the Go
           | ecosystem. Being an unsupported usecase is a great reason to
           | switch language.
        
             | brundolf wrote:
             | Right; Go is purpose-built for writing web services, and
             | web services tend to be pretty tolerant of (small) latency
             | spikes because virtually anyone who's calling one is
             | already expecting at least some latency
        
             | erk__ wrote:
              | C# uses a generational GC IIRC, so it may be better suited
              | to a system where you have a relatively stable collection
              | that does not need to be fully garbage collected all the
              | time plus a smaller, more volatile set of objects that
              | will be GC'ed more often. I don't think the current
              | garbage collector in Go does anything similar to that.
        
               | danbolt wrote:
               | This might have changed with more recent updates, but I
               | was under the impression that the Mono garbage collector
               | in Unity was a bit dated and not as up-to-date as a C#
               | one today.
        
               | martindevans wrote:
               | Unity has recently added the "incremental GC" [1] which
               | spreads the work of the GC over multiple frames. As I
               | understand it this has a lower overall throughput, but
               | _much_ better worst case latency.
               | 
               | [1] https://blogs.unity3d.com/2018/11/26/feature-preview-
               | increme...
        
           | loeg wrote:
            | A GC scan of a large LRU (or any large object graph) is
            | expensive in CPU terms because many of the pointers
            | traversed will not be in any CPU cache. Memory access latency
            | is extremely high relative to how fast CPUs can process
            | cached data.
            | 
            | You could maybe hack around the GC performance without
            | destroying the aims of LRU eviction by batching additions to
            | your LRU data structure to reduce the number of pointers by a
            | factor of N. It's also possible that a Go BTree indexed by
            | timestamp, with embedded data, would provide acceptable LRU
            | performance and would be much friendlier to the cache. But it
            | might also not have acceptable performance. And Go's lack of
            | generic data structures makes this trickier to implement vs
            | Rust's BTreeMap provided out of the box.
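[A sketch of the pointer-reduction idea in Go. The names and fields below are hypothetical, not Discord's schema; the relevant property is that the Go collector does not trace inside a map whose key and value types contain no pointers, so packing entries as plain integers removes them from the scan entirely:]

```go
package main

import "fmt"

// PackedEntry holds e.g. {channel ID, last-read message ID} as plain
// integers rather than heap-allocated strings or a *struct pointer.
type PackedEntry [2]uint64

// ReadStates wraps a pointer-free map: no per-entry pointers exist
// for the GC to chase during a mark phase.
type ReadStates struct {
	m map[uint64]PackedEntry
}

func NewReadStates() *ReadStates {
	return &ReadStates{m: make(map[uint64]PackedEntry)}
}

func (r *ReadStates) Put(user, channel, lastMsg uint64) {
	r.m[user] = PackedEntry{channel, lastMsg}
}

func (r *ReadStates) Get(user uint64) (channel, lastMsg uint64) {
	e := r.m[user]
	return e[0], e[1]
}

func main() {
	rs := NewReadStates()
	rs.Put(7, 42, 9001)
	ch, last := rs.Get(7)
	fmt.Println(ch, last) // 42 9001
}
```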
        
           | jerf wrote:
           | Yes, this is a maximally pessimal case for most forms of
           | garbage collection. They don't say, but I would imagine these
           | are very RAM-heavy systems. You can get up to 768GB right now
           | on EC2. Fill that entire thing up with little tiny objects
           | the size of usernames or IDs for users, or even merely 128GB
           | systems or something, and the phase where you crawl the RAM
           | to check references by necessity is going to be slow.
           | 
           | This is something important to know before choosing a GC-
           | based language for a task like this. I don't think
            | "generating more garbage" would help; the problem is that the
            | scan is slow.
           | 
           | If Discord was _forced_ to do this in pure Go, there is a
           | solution, which is basically to allocate a []byte or a set of
           | []bytes, and then treat it as expanse of memory yourself,
           | managing hashing, etc., basically, doing manual arena
           | allocation yourself. GC would drop to basically zero in that
           | case because the GC would only see the []byte slices, not all
            | the contents as individual objects. You'll see this
           | technique used in GC'd languages, including Java.
           | 
           | But it's tricky code. At that point you've shucked off all
           | the conveniences and features of modern languages and in
           | terms of memory safety within the context of the byte
           | expanses, you're writing in assembler. (You can't _escape_
           | those arrays, which is still nice, but hardly the only
           | possible issue.)
           | 
           | Which is, of course, where Rust comes in. The tricky code
           | you'd be writing in Go/Java/other GC'd language with tons of
           | tricky bugs, you end up writing with compiler support and
           | built-in static checking in Rust.
           | 
           | I would imagine the Discord team evaluated the option of just
           | grabbing some byte arrays and going to town, but it's fairly
            | scary code to write. There are more failure modes than one
            | can even enumerate by which such code ends up with a
            | 0.00001% bug that
           | will result in something like the entire data structure
           | getting intermittently trashed every six days on average or
           | something, virtually impossible to pick up in testing and
           | possibly even escaping canary deploys.
           | 
           | Probably some other languages have libraries that could
           | support this use case. I know Go doesn't ship with one and at
           | first guess, I wouldn't expect to find one for Go, or one I
           | would expect to stand up at this scale. Besides, honestly, at
            | the feature-set maturity limit for such a library, you just end
           | up with "a non-GC'd inner platform" for your GC'd language,
           | and may well be better off getting a real non-GC'd platform
           | that isn't an inner platform [1]. I've learned to really hate
           | inner platforms.
           | 
           | By contrast... I'd bet this is fairly "boring" Rust code, and
           | way, way less scary to deploy.
           | 
           | [1]: https://en.wikipedia.org/wiki/Inner-platform_effect
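[A toy illustration of the []byte-expanse approach described above (not Discord's code; the type and record layout are invented). Records live as raw bytes at computed offsets, so the GC sees a single slice instead of millions of individual objects:]

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Arena is a toy manual arena: fixed-size records stored as raw bytes
// in one flat allocation. The GC traces the single buf slice header
// and nothing inside it.
type Arena struct {
	buf     []byte
	recSize int
}

func NewArena(records, recSize int) *Arena {
	return &Arena{buf: make([]byte, records*recSize), recSize: recSize}
}

// Put writes a uint64 value into record slot i; no per-record heap
// allocation occurs.
func (a *Arena) Put(i int, v uint64) {
	binary.LittleEndian.PutUint64(a.buf[i*a.recSize:], v)
}

func (a *Arena) Get(i int) uint64 {
	return binary.LittleEndian.Uint64(a.buf[i*a.recSize:])
}

func main() {
	a := NewArena(1_000_000, 8)
	a.Put(123, 456)
	fmt.Println(a.Get(123)) // 456
}
```

As jerf notes, the cost is that hashing, eviction, and bounds discipline within the expanse all become your problem; the sketch above omits all of that.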
        
             | brundolf wrote:
             | > I don't think "generating more garbage" would help
             | 
             | To be clear: I wasn't suggesting that generating garbage
             | would _help_ anyone. Only that in a more typical case,
             | where more garbage _is_ being generated, the two minute
             | interval itself might never surface as the problem because
             | other things are getting in front of it.
        
           | spullara wrote:
           | Heap caches that keep things longer than a GC cycle are
           | terrible under GC unless you have a collector in the new
           | style like ZGC, Azul or Shenandoah.
        
             | sitkack wrote:
             | Systems with poor GC and the need to keep data for
             | lifetimes greater than a request should have an easy to use
             | off heap mechanism to prevent these problems.
             | 
             | Often something like Redis is used as a shared cache that
             | is invisible to the garbage collector, there is a natural
             | key with a weak reference (by name) into a KV store. One
             | could embed a KV store into an application that the GC
             | can't scan into.
        
               | spullara wrote:
               | 100%. In Java, you would often use OpenHFT's ChronicleMap
               | for now and hopefully inline classes/records in Java 16
               | or so.
        
               | cgh wrote:
               | Ehcache has an efficient off-heap store:
               | https://github.com/Terracotta-OSS/offheap-store/
               | 
               | Doesn't Go have something like this available? It's an
               | obvious thing for garbage-collected languages to have.
        
             | [deleted]
        
         | recuter wrote:
         | If you want to force it you can call "runtime.GC()" but that's
         | almost always a step in the wrong direction.
         | 
         | It is worth it to read and understand:
         | https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i...
        
         | ericflo wrote:
         | It comes from a desire to run in the exact opposite direction
         | as the JVM, which has options for every conceivable parameter.
         | Go has gone through a lot of effort to keep the number of
         | configurable GC parameters to 1.
        
           | erik_seaberg wrote:
           | Anyone who pushes the limits of a machine needs tuning
           | options. If you can't turn knobs you have to keep rewriting
           | code until you _happen_ to get the same effect.
        
             | tannhaeuser wrote:
             | That might be true, but from a language design PoV it isn't
             | convincing to have dozens of GC-related runtime flags a la
             | Java/JVM. If you need those anyway, this might point to
             | pretty fundamental language expressivity issues.
        
             | kllrnohj wrote:
             | Tuning options don't work well with diverse libraries,
             | though. If you use 2 libraries and they both are designed
             | to run with radically different tuning options what do you
             | do? Some bad compromise? Make one the winner and one the
             | loser? The best you can do is do an extensive monitoring &
             | tuning experiment, but that's quite involved as well and
             | still won't get you the maximum performance of each
             | library, either.
             | 
             | At least with code hacking around the GC's behavior that
             | code ends up being portable across the ecosystem.
             | 
             | There doesn't seem to really be a _good_ option here either
             | way. This amount of tuning-by-brute-force (either by knobs
             | or by code re-writes) seems to just be the cost of using a
             | GC.
        
             | ericflo wrote:
             | There's definitely a happy medium. One setting may indeed
              | be too few, but the JVM's many options end in mass cargo-cult
             | copypasta, often leading to really bad configurations.
        
               | [deleted]
        
             | tomc1985 wrote:
             | YeAh BuT wE dOnT wAnT tO hAvE tO tEsT eVeRy oPtIoN!!!
             | 
             | - lazy devs and product managers, everywhere
        
               | hombre_fatal wrote:
               | This was the first time I've seen that annoying cAsE meme
               | on HN and I pray it's the last. It is a lazy way to make
               | your point, hoping your meme-case does all the work for
               | you so that you don't have to say anything substantial.
               | 
               | Or do you think it adds to the discussion?
        
           | Scarbutt wrote:
            | Surely tuning some GC parameters is less effort than having
            | to do a rewrite in another language.
        
         | _ph_ wrote:
          | With recent Go releases, GC pauses have become negligible for
          | most applications, so this should not get in your way.
          | However, it can easily be tweaked if needed. There is
          | runtime.ForceGCPeriod, which is a pointer to the forcegcperiod
          | variable. A Go program which _really_ needs to change this
          | can do it, but most programs shouldn't require this.
         | 
         | Also, it is almost trivial to edit the Go sources (they are
         | included in the distribution) and rebuild it, which usually
         | takes just a minute. So Go is really suited for your own
         | experiments - especially, as Go is implemented in Go.
        
           | andoriyu wrote:
           | > Also, it is almost trivial to edit the Go sources (they are
           | included in the distribution) and rebuild it, which usually
           | takes just a minute. So Go is really suited for your own
           | experiments - especially, as Go is implemented in Go.
           | 
           | Ruby 1.8.x wants to say "Hello"
        
           | nemo1618 wrote:
           | runtime.ForceGCPeriod is only exported in testing, so you
           | wouldn't be able to use it in production. But as you said,
           | the distribution could easily be modified to fit their needs.
        
             | _ph_ wrote:
             | Thanks, didn't catch that this is for testing only.
        
           | robocat wrote:
            | You may still have significant CPU overhead from the GC:
            | e.g. the Twitch article (mentioned elsewhere in the comments)
            | measured 30% of CPU spent on GC for one program (Go 1.5, I
            | think).
            | 
            | Obviously they consider spending 50% more on hardware a
            | worthwhile compromise for the gains they get (e.g. fewer
            | developer hours and a reduced risk of security flaws or
            | other effects of invalid pointers).
        
             | _ph_ wrote:
             | In this case, as they were running into the automatic GC
             | interval, their program did not create much, if any
             | garbage. So the CPU overhead for the GC would have been
             | quite small.
             | 
             | If you do a lot of allocations, the GC overhead rises of
             | course, but also would the effort of doing
              | allocations/deallocations under a manual management scheme.
              | In the end it is a trade-off as to what fits the problem at
              | hand best. The nice thing about Rust is that "manual"
              | memory management doesn't come at the price of program
              | correctness.
        
           | calcifer wrote:
           | > especially, as Go is implemented in Go.
           | 
           | Well, parts of it. You can't implement "make" or "new" in Go
           | yourself, for example.
        
             | _ph_ wrote:
             | You have to distinguish between the features available to a
             | Go program as the user writes it and the implementation of
              | the language. The implementation is completely written in
              | Go (plus a bit of low-level assembly). Even if the
              | internals of e.g. the GC are not visible to a Go program,
              | the GC itself is implemented in Go and thus easily
              | readable and hackable for experienced Go programmers. And
             | you can quickly rebuild the whole Go stack.
        
               | calcifer wrote:
               | > You have to distinguish between the features available
               | to a Go program as the user writes it and the
               | implementation of the language.
               | 
               | I do, I'm just objecting to "Go is implemented in Go".
        
             | slrz wrote:
             | And yet, maps and slices _are_ implemented in Go.
             | 
             | https://golang.org/src/runtime/map.go
             | 
             | https://golang.org/src/runtime/slice.go
             | 
             | I don't see why you couldn't do something similar in your
             | own Go code. It just won't be as convenient to use as the
             | compiler wouldn't fill in the type information (element
             | size, suitable hash function, etc.) for you. You'd have to
             | pass that yourself or provide type-specific wrappers
             | invoking the unsafe base implementation. More or less like
             | you would do in C, with some extra care to abide by the
             | rules required for unsafe Go code.
        
               | calcifer wrote:
                | Nothing you wrote contradicts what I said. You _can't_
               | implement "make" in Go. The fact that you can implement
               | some approximation of it with a worse signature and worse
               | runtime behaviour (since it won't be compiler assisted)
               | doesn't make it "make".
        
         | dickeytk wrote:
         | I think for most applications (especially the common use-case
         | of migrating a scripting web monolith to a go service), people
         | just aren't hitting performance issues with GC. Discord being a
         | notable exception.
         | 
         | If these issues were more common, there would be more
         | configuration available.
         | 
         | [EDIT] to downvoters: I'm not saying it's not an issue worth
         | addressing (and it may have already been since they were on
         | 1.9), I was just answering the question of "why this might
         | happen"
        
           | biomcgary wrote:
           | Or, in the case of latency, just wait a few months because
           | the Go team obsesses about latency (no surprise from a Google
            | supported language). Discord's comparison is using Go 1.9.
            | Their problem may well have been addressed in Go 1.12. See
           | https://golang.org/doc/go1.12#runtime.
        
         | Nyra wrote:
         | Funnily enough, something similar happened at Twitch regarding
         | their API front end written in Go:
         | https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i...
        
           | robocat wrote:
            | summary: Go 1.5, heap memory usage of 500MB on a VM with
            | 64GiB of physical memory, 30% of CPU cycles spent in
            | function calls related to GC, and unacceptable problems
            | during traffic spikes. The optimisation hack that mostly
            | fixed the problem was to allocate 10GiB up front but never
            | use the allocation, which caused a beneficial change in the
            | GC behaviour!
        
           | mrpotato wrote:
           | Interesting, they went a totally different route.
           | 
           | > The ballast in our application is a large allocation of
           | memory that provides stability to the heap.
           | 
           | > As noted earlier, the GC will trigger every time the heap
           | size doubles. The heap size is the total size of allocations
           | on the heap. Therefore, if a ballast of 10 GiB is allocated,
           | the next GC will only trigger when the heap size grows to 20
           | GiB. At that point, there will be roughly 10 GiB of ballast +
           | 10 GiB of other allocations.
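[A scaled-down sketch of the ballast trick quoted above (not Twitch's code). Because make([]byte, n) returns zeroed pages from the OS without writing to them, an untouched ballast costs mostly virtual address space while still counting toward the heap size that sets the next GC trigger point:]

```go
package main

import (
	"fmt"
	"runtime"
)

// MakeBallast returns a large, never-written allocation. Counting it
// in the live heap pushes the "GC at ~2x heap" trigger far out, so
// collections run much less often.
func MakeBallast(bytes int) []byte {
	return make([]byte, bytes)
}

func main() {
	ballast := MakeBallast(1 << 28) // 256 MiB here; Twitch used 10 GiB

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("heap alloc: %d MiB\n", m.HeapAlloc>>20)

	// Keep the ballast reachable for the life of the process.
	runtime.KeepAlive(ballast)
}
```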
        
             | firethief wrote:
             | Wow, that puts Discord's "absurd hack" into perspective! I
             | feel like the moral here is a corollary to that law where
             | people will depend on any observable behavior of the
             | implementation: people will use any available means to tune
             | important performance parameters; so you might as well
             | expose an API directly, because doing so actually results
             | in less dependence on your implementation details than if
             | people resort to ceremonial magic.
        
               | tick_tock_tick wrote:
               | I mean if you read Twitch's hack they intentionally did
               | it in code so they didn't need to tune the GC parameter.
               | They wanted to avoid all environment config.
        
               | firethief wrote:
               | I missed that part. I thought they would use a parameter
               | if it were available, because they said this:
               | 
               | > For those interested, there is a proposal to add a
               | target heap size flag to the GC which will hopefully make
               | its way into the Go runtime soon.
               | 
               | What's wrong with the existing parameter?
               | 
               | I'm sure they aren't going this far to avoid all
               | environment config without a good reason, but any good
               | reason would be a flaw in some part of their stack.
        
         | nickserv wrote:
         | This is in line with Go's philosophy, they try to keep the
         | language as simple as possible.
         | 
         | Sometimes it means an easy thing in most other languages is
         | difficult or tiresome to do in Go. Sometimes it means hard-
         | coded values/decisions you can't change (only tabs anyone?).
         | 
         | But overall this makes for a language that's very easy to
         | learn, where code from project to project and team to team is
         | very similar and quick to understand.
         | 
         | Like anything, it all depends on your needs. We've found it
         | suits ours quite well, and migrating from a Ruby code base has
         | been a breath of fresh air for the team. But we don't have the
         | same performance requirements as Discord.
        
           | kerkeslager wrote:
            | "Simple", when used in programming, doesn't mean anything. So
           | let's be clear here: what we mean is that compilation occurs
           | in a single pass and the artifact of compilation is a single
           | binary.
           | 
           | These are two things that make a lot of sense _at Google_ if
           | you read why they were done.
           | 
           | But unless you're working at Google, I struggle to guess why
           | you would care about either of these things. The first
           | requires sacrificing anything resembling a reasonable type
           | system, and even with that sacrifice Go doesn't really
           | deliver: are we really supposed to buy that "go generate"
           | isn't a compilation step? The second is sort of nice, but not
           | nice enough to be a factor in choosing a language.
           | 
           | The core language is currently small, but every language
           | grows with time: even C with its slow-moving, change-averse
           | standards body has grown over the years. Currently people are
           | refreshed by the lack of horrible dependency trees in Go, but
           | that's mostly because there aren't many libraries available
           | for Go: that will also change with time (and you can just not
            | import all of CPAN/PyPI/npm/etc. in any language, so Go isn't
           | special anyway).
           | 
           | If you like Go for some _aesthetic_ of  "simplicity", then
           | sure, I guess I can see how it has that. But if we're
           | discussing pros and cons, aesthetics are pretty subjective
            | and not really worth talking about.
        
             | papaf wrote:
             | I don't agree with your definition of simplicity.
             | 
             | I like Go and I consider it a simple language because:
             | 
             | 1. I can keep most of the language in my head and I don't
             | hit productivity pauses where I have to look something up.
             | 
             | 2. There is usually only one way to do things and I don't
             | have to spend time deciding on the right way.
             | 
             | For me, these qualities make programming very enjoyable.
        
               | kerkeslager wrote:
               | > I don't agree with your definition of simplicity.
               | 
               | You mean where I explicitly said that "simple" didn't
               | mean anything, so we should talk about what we mean more
               | concretely?
               | 
               | > 1. I can keep most of the language in my head and I
               | don't hit productivity pauses where I have to look
               | something up.
               | 
               | The core language is currently small, but every language
               | grows with time: even C with its slow-moving, change-
               | averse standards body has grown over the years.
               | 
               | > 2. There is usually only one way to do things and I
               | don't have to spend time deciding on the right way.
               | 
               | Go supports functional programming and object-oriented
               | programming, so pretty much anything you want to do has
               | at least two ways to do it--it sounds like you just
               | aren't familiar with the various ways.
               | 
               | The problem with having more than one way to do things
               | isn't usually choosing which to use, by the way: the
               | problem is when people use one of the many ways
               | differently within the same codebase and it doesn't play
               | nicely with the way things are done in the codebase.
               | 
               | This isn't really a criticism of Go, however: I can't
               | think of a language that actually delivers on there being
               | one right way to do things (most don't even make that
               | promise--Python makes the promise but certainly doesn't
               | deliver on it).
        
               | sk0g wrote:
               | Does Go support functional programming? There's no
               | support for map, filter, etc. It barely supports OOP too,
               | with no real inheritance or generics.
               | 
               | I've been happy working with it for a year now, though
               | I've had the chance to work with Kotlin and I have to
               | say, it's very nice too, even if the parallelism isn't
               | quite easy/ convenient to use.
        
               | kerkeslager wrote:
               | It supports first-class functions, and it supports
               | classes/objects. Sure, it doesn't include good tooling
               | for either, but:
               | 
                | 1. map/filter are 2 lines of code each.
                | 
                | 2. Inheritance is part of mainstream OOP, but there are
                | some less common languages that don't support inheritance
                | in the way you're probably thinking (e.g. older versions
                | of JavaScript, before they caved and introduced two forms
                | of inheritance).
                | 
                | 3. Generics are more of a strong-typing thing than an OOP
                | thing.
        
           | andai wrote:
           | Offtopic but what are you missing when you have to use tabs
           | instead of spaces? I can understand different indentation
           | preferences but I can change the indentation width per tab in
           | my editor. And then everyone can read the code with the
           | indentation they prefer, while the file stays the same.
        
             | Uristqwerty wrote:
             | I don't know about anyone else, but I like aligning certain
             | things at half-indents (labels/cases half an indent back,
             | so you can skim the silhouette of _both_ the surrounding
              | block and jump targets within it; braceless if/for bodies
             | to emphasize their single-statement nature (that convention
             | alone would have made "goto fail" blatantly obvious to
             | human readers, though not helped the compiler); virtual
             | blocks created by API structure (between glBegin() to
             | glEnd() in the OpenGL 1.x days)).
             | 
             | Thing is, few if any IDEs support the concept, so if I want
             | to have half-indents, I _must_ use spaces. Unfortunately,
             | these days that means giving up and using a more common
             | indent style most of the time, as the extra bit of
              | readability generally isn't worth losing automatic
             | formatting or multi-line indent changes.
        
             | tasty_freeze wrote:
             | > everyone can read the code with the indentation they
             | prefer, while the file stays the same.
             | 
             | Have you ever worked in a code base with many contributors
             | that changed over the course of years? In my experience it
             | always ends up a jumble where indentation is screwed up and
             | no particular tab setting makes things right. I've worked
             | on files where different lines in the same file might
             | assume tab spacing of 2, 3, 4, or 8.
             | 
             | For example, say there is a function with a lot of
             | parameters, so the argument list gets split across lines.
             | The first line has, say, two tabs before the start of the
             | function call. The continuation line ideally should be two
             | tabs then a bunch of spaces to make the arguments line up
             | with the arguments from the first line. But in practice
             | people end up putting three or four tabs to make the 2nd
             | line line up with the arguments of the first line. It looks
             | great with whatever tab setting the person used at that
             | moment, but then change tab spacing and it no longer is
             | aligned.
        
               | _ph_ wrote:
               | On the good side, the problem of mixing tabs and spaces
               | does normally not appear in Go sources, as gofmt always
                | converts spaces to tabs, so there is no inconsistent
               | indentation. Normally I prefer spaces to tabs because I
               | dislike the mixing, but gofmt solves this nicely for me.
        
               | tasty_freeze wrote:
                | Please explain to me how this works for the case I
                | outlined, e.g.:
                | 
                |     some_function(arg1, arg2, arg3, arg4,
                |                   arg5, arg6);
               | 
               | For the sake of argument, say tabstop=4. If the first
               | line starts with two tabs, will the second line also have
               | two tabs and then a bunch of spaces, or will it start
               | with five tabs and a couple spaces?
        
               | dennisgorelik wrote:
               | You should NOT do such alignment anyway, because if you
               | rename "some_function" to "another_function", then you
               | will lose your formatting.
               | 
                | Instead, format arguments in a separate block:
                | 
                |     some_function(
                |         arg1, arg2, arg3, arg4,
                |         arg5, arg6);
               | 
               | When arguments are aligned in a separate block, both
               | spaces and tabs work fine.
               | 
               | My own preference is tabs, because of less visual noise
               | in code diff [review].
        
               | steveklabnik wrote:
                | You wouldn't use an alignment-based style, but a block-
                | based one instead:
                | 
                |     some_function(
                |         arg1,
                |         arg2,
                |         arg3,
                |         arg4,
                |         arg5,
                |         arg6,
                |     );
               | 
               | (I don't know what Go idiom says here, this is just a
               | more general solution.)
        
               | masklinn wrote:
               | Checking the original code on the playground, Go just
               | reindents everything using one tab per level. So if the
               | funcall is indented by 2 (tabs), the line-broken
               | arguments are indented by 3 (not aligned with the open
               | paren).
               | 
                | rustfmt looks to try and be "smarter", as it will move
                | the arglist and add linebreaks to it so as not to go
                | beyond whatever limit is configured. On the playground,
                | gofmt apparently doesn't insert breaks in arglists.
        
               | Uristqwerty wrote:
               | In an ideal world, I'd think you would put a "tab stop"
               | character before arg1, then a _single_ tab on the
               | following line, with the bonus benefit that the
               | formatting would survive automatic name changes and not
               | create an indent-change-only line in the diff. Trouble
               | being that all IDEs would have to understand that
               | character, and compilers would have to ignore it (hey,
               | ASCII has form feed and vertical tab that could be
               | repurposed...).
        
               | Will_Parker wrote:
               | > In my experience it always ends up a jumble where
               | indentation is screwed up and no particular tab setting
               | makes things right.
               | 
               | Consider linting tools in your build.
        
             | nickserv wrote:
             | It's just an example of something that the Go team took a
             | decision on, and won't allow you to change. I mean, even
             | Python lets you choose. I don't really have a problem with
             | it however, even if I do prefer spaces.
        
               | giancarlostoro wrote:
               | Python chose spaces a la PEP8 by the way.
        
               | dragonwriter wrote:
               | PEP8 isn't a language requirement, but a style guide.
               | There are tools to enforce style on Python, but the
               | language itself does not.
        
               | giancarlostoro wrote:
               | Same thing with Go... Tabs aren't enforced, but the out
               | of the box formatter will use tabs. PyCharm will default
               | to trying to follow PEP8, and GoLand will do the same, it
               | will try to follow the gofmt standards.
               | 
               | See:
               | 
               | https://stackoverflow.com/questions/19094704/indentation-
               | in-...
        
               | marsokod wrote:
               | You can use tabs to indent your python code. Ok, you
               | might be lynched if you share your code but as long as
               | you don't mix tabs and spaces, it is fine.
        
               | giancarlostoro wrote:
               | Same with Go, you can use spaces.
        
               | _wldu wrote:
               | Or none at all: https://play.golang.org/p/VeEaJbJ6sYT
        
               | monocasa wrote:
               | There's a difference between making decisions that are
               | really open to bikeshedding, and making sweeping
               | decisions in contexts that legitimately need per app
               | tuning like immature GCs.
               | 
               | The Azul guys get to claim that you don't need to tune
               | their gc, golang doesn't.
        
               | geodel wrote:
                | Hmm... this is why Azul's install and configure guide
                | runs to hundreds of pages. All the advanced tuning,
                | profiling
               | and configuring OS commands, setting contingency memory
               | pools are perhaps for GCs which Azul does not sell.
        
               | monocasa wrote:
                | I mean, they'll let you tune, because the kinds of
                | customers that want to be able to are the kinds of
                | customers Azul targets. But everything I've heard from
                | their engineers
               | is that they've solved a lot of customer problems by
               | resetting things to defaults and just letting it have a
               | giant heap to play with.
               | 
               | Not sure how that makes the golang position any better.
        
         | abraxas wrote:
         | Seems like Go is more suitable for the "spin up, spin down,
         | never let the GC run" kind of scenario that is being pushed by
         | products like AWS Lambda and other function as a service
         | frameworks.
        
           | _ph_ wrote:
           | Why do you think it is? Go has a really great gc which mostly
            | runs in parallel to your program, with GC stops only in the
            | domain of less than a millisecond. Discord ran into a corner
           | case where they did not create enough garbage to trigger gc
           | cycles, but had a performance impact due to scheduled gc
           | cycles for returning memory to the OS (which they wouldn't
           | need to do either).
        
             | [deleted]
        
             | abraxas wrote:
             | Because many services eventually become performance
             | bottlenecked either via accumulation of users or
             | accumulation of features. In either case eventually
             | performance becomes very critical.
        
               | _ph_ wrote:
               | Sure, but that doesn't make Go unsuitable for those tasks
               | on a fundamental basis. Go is very high performance.
               | Whether Go or another language is the best match very
                | much depends on the problem at hand and the specific
               | requirements. Even in the described case they might have
               | tweaked the GC to fit their bill.
        
         | JBReefer wrote:
          | Go always feels like an amateur language to me; I've given up
          | on it. This feels right in line - similar to the hardcoded
          | GitHub magic.
        
           | reificator wrote:
            | I could be wrong, but I don't believe there is "hardcoded
            | GitHub magic".
           | 
           | IIRC I have used GitLab and Bitbucket and self-hosted Gitea
           | instances the same exact way, and I'm fairly sure there was
           | an hg repo in one of those. Don't recall doing anything out
           | of the ordinary compared to how I would use a github URL.
        
             | heinrich5991 wrote:
             | There are a couple of hosting services hardcoded in Go. I
             | believe it was about splitting the URL into the actual URL
             | and the branch name.
        
               | Nullabillity wrote:
               | https://github.com/golang/go/blob/e6ebbe0d20fe877b111cf4c
               | cf8...
               | 
               | Ouch, Go never ceases to amaze. The Bitbucket case[0] is
               | even more crazy, calling out to the Bitbucket API to
               | figure out which VCS to use. It has a special case for
               | private repositories, but seems to hard-code cloning over
               | HTTPS.
               | 
               | If only we had some kind of universal way to identify
               | resources, that told you how to access it...
               | 
               | [0]: https://github.com/golang/go/blob/e6ebbe0d20fe877b11
               | 1cf4ccf8...
        
           | [deleted]
        
         | tedunangst wrote:
         | Typically a GC runtime will do a collection when you allocate
         | memory, probably when the heap size is 2x the size after the
         | last collection. But this doesn't free memory when the process
         | doesn't allocate memory. The goal is to return unused memory
         | back to the operating system so it's available for other
         | purposes. (You allocate a ton of memory, calculate some result,
         | write the result to a file, and drop references to the memory.
         | When will it be freed?)
        
         | ptrincr wrote:
          | You are able to disable GC with:
          | 
          |     GOGC=off
          | 
          | As someone mentions below.
         | 
         | More details here: https://golang.org/pkg/runtime/
        
           | marrs wrote:
           | How does Go allow you to manage memory manually? Malloc/free
           | or something more sophisticated?
        
             | masklinn wrote:
             | It doesn't. If you disable the GC... you only have an
             | allocator, the only "free" is to run the entire GC by hand
             | (calling runtime.GC())
        
           | singron wrote:
           | Keeping GC off for a long running service might become
           | problematic. Also, the steady state might have few
           | allocations, but startup may produce a lot of garbage that
           | you might want to evict. I've never done this, but you can
           | also turn GC off at runtime with SetGCPercent(-1).
           | 
           | I think with that, you could turn off GC after startup, then
           | turn it back on at desired intervals (e.g. once an hour or
           | after X cache misses).
           | 
           | It's definitely risky though. E.g. if there is a hiccup with
           | the database backend, the client library might suddenly
           | produce more garbage than normal, and all instances might OOM
           | near the same time. When they all restart with cold caches,
           | they might hammer the database again and cause the issue to
           | repeat.
        
             | ignoramous wrote:
             | > ...all instances might OOM near the same time.
             | 
             | CloudFront, for this reason, allocates heterogeneous fleets
             | in its PoPs which have diff RAM sizes and CPUs [0], and
             | even different software versions [1].
             | 
             | > When they all restart with cold caches, they might hammer
             | the database again and cause the issue to repeat.
             | 
             | Reminds me of the DynamoDB outage of 2015 that essentially
             | took out us-east-1 [2]. Also, ELB had a similar outage due
             | to unending backlog of work [3].
             | 
             | Someone must write a book on design patterns for
             | distributed system outages or something?
             | 
             | [0] https://youtube.com/watch?v=pq6_Bd24Jsw&t=50m40s
             | 
              | [1] https://youtube.com/watch?v=n8qQGLJeUYA&t=39m0s
             | 
             | [2] https://aws.amazon.com/message/5467D2/
             | 
             | [3] https://aws.amazon.com/message/67457/
        
       | adamnemecek wrote:
       | Rust is maturing. I legit don't think there are too many good
       | reasons to use Go over Rust. You can call Rust from Go but not
       | vice versa.
        
         | steveklabnik wrote:
         | (You can call Go from Rust:
         | https://blog.arranfrance.com/post/cgo-sqip-rust/ )
        
       | karma_daemon wrote:
       | I wish the article would show a graph of the golang heap usage.
       | I'm reminded of this cloudflare article [0] from a while back
       | where they created an example that seemed to exhibit similar
       | performance issues when they created many small objects to be
       | garbaged collected. They solved it by using a pooled allocator
       | instead of relying solely on the GC. Wonder if that would have
       | been applicable here to the go version.
       | 
       | [0] https://blog.cloudflare.com/recycling-memory-buffers-in-go/
        
       | justadudeama wrote:
       | > Changing to a BTreeMap instead of a HashMap in the LRU cache to
       | optimize memory usage.
       | 
       | Can someone explain to me how BTreeMap is more memory efficient
       | than a HashMap?
        
         | afranchuk wrote:
         | A BTreeMap should typically have O(n) memory usage, whereas a
         | HashMap (depending on load factor) will usually have O(kn)
         | memory usage, where k > 1. This is because a HashMap allocates
         | the table into which it will store hashed values upfront (and
         | when the load is too great), so it can't anticipate how many
         | values may be added nor what sorts of collisions may occur at
         | this time. Yes, collisions are typically stored as some
         | allocate-per-item collection, but the desire of a HashMap is to
         | avoid such collisions. A BTreeMap allocates for each new value.
         | 
         | Note that this explanation is a bit handwavy, as both data
         | structures have numerous optimizations in production scenarios.
        
           | cesarb wrote:
           | > collisions are typically stored as some allocate-per-item
           | collection
           | 
           | Rust's HashMap stores the collisions in the same table as the
           | non-collisions (open addressing), not in a separate
           | collection.
        
             | afranchuk wrote:
             | This is true, thanks for the specifics. I was answering the
             | question from a more generic perspective, but failed to
             | mention that many implementations rehash on collision...
        
           | nybble41 wrote:
           | There is no difference between O(n) and O(kn), if k is a
           | constant. The notation deliberately ignores constant factors.
           | (That's why you can say a BTreeMap requires O(n) memory
           | independent of the size or type of data being stored,
           | provided there is _some_ finite upper bound on the sizes of
           | the keys and values.)
        
             | afranchuk wrote:
             | Yeah I know, it was just the fastest way to indicate that
             | the constant factor was almost definitely larger for
             | HashMaps. But thank you for clarifying!
        
         | jhgg wrote:
          | This is a bit unclear. The root map is still a hash map, but
          | it's a "map of maps": the inner map is a BTreeMap - this is
          | for memory efficiency, as the inner map is relatively smaller
          | and we wouldn't have to deal with the growth factor of a hash
          | map (and having to manually manage that), whereas the root
          | hash map is preallocated to its max size.
        
       | jaten wrote:
       | just use an off heap hash table. simple.
       | https://github.com/glycerine/offheap
       | 
       | Also, as others have said, lots of big GC improvements were
       | ignored by insisting on go1.9.2 and not the latest.
        
       | yippir wrote:
       | I chose Rust over Go after weighing the pros and cons. It was an
       | easy decision. I wouldn't consider using a high level language
       | that lacks generics. The entire point of using a high level
       | language is writing less code.
        
       | rvcdbn wrote:
        | Seems like you were hitting "runtime: Large maps cause
        | significant GC pauses" #9477 [0].
       | 
       | Looks like this issue was resolved for maps that don't contain
       | pointers by [1]. From the article, sounds like the map keys were
       | strings (which do contain pointers, so the map would need to be
       | scanned by the GC).
       | 
       | If pointers in the map keys and values could be avoided, it would
       | have (if my understanding is correct) removed the need for the GC
       | to scan the map. You could do this for example by replacing
        | string keys with fixed size byte arrays. Curious if you
        | experimented with this approach?
       | 
       | [0] https://github.com/golang/go/issues/9477 [1] https://go-
       | review.googlesource.com/c/go/+/3288
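The suggestion above can be sketched like this. The names (toKey, lastReadAt) and the 16-byte key width are hypothetical; the point is that a Go string header contains a pointer, so a map[string]uint64 must be scanned during marking, while a map keyed by a fixed-size byte array holds no pointers and, after the fix linked in [1], is skipped by the GC entirely:

```go
package main

import "fmt"

type key [16]byte // fixed-size, pointer-free map key

// toKey pads or truncates s into a fixed-width key; fine for
// fixed-width IDs, lossy for arbitrary strings.
func toKey(s string) key {
	var k key
	copy(k[:], s)
	return k
}

func main() {
	// Neither the keys nor the values contain pointers, so the GC
	// does not need to scan this map's contents.
	lastReadAt := make(map[key]uint64)
	lastReadAt[toKey("user-42")] = 1580837400

	fmt.Println(lastReadAt[toKey("user-42")]) // 1580837400
}
```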
        
         | jasondclinton wrote:
         | Finding out if that does resolve the author's issue would be
         | interesting but I'm not sure that that would be particularly
         | supportive data in favor of Go. If anything it would reinforce
          | the downsides of Go's GC implementation: prone to sudden
          | pitfalls only avoidable with obtuse, error-prone fiddling
          | that makes the code more complex.
         | 
         | After spending weeks fighting with Java's GC tuning for a
         | similar production service tail latency problem, I wouldn't
         | want to be caught having to do that again.
        
           | rvcdbn wrote:
           | For any tracing GC, costs are going to be proportional to the
           | number of pointers that need to be traced. So I would not
           | call reducing the use of pointers to ameliorate a GC issue
           | "obtuse, error-prone fiddling". On the contrary, it seems
           | like one of the first approaches to look at when faced with
           | the problem of too much GC work.
           | 
           | Really all languages with tracing GC are at a disadvantage
           | when you have a huge number of long-lived objects in the
           | heap. The situation is improved with generational GC (which
           | Go doesn't have) but the widespread use of off-heap data
           | structures to solve the problem even in languages like Java
           | with generational GC suggests this alone isn't a good enough
           | solution.
           | 
           | In Go's defense, I don't know another GC'ed language in which
           | this optimization is present in the native map data
           | structure.
        
           | masklinn wrote:
            | The good news is that Go's GC has basically no tunables, so
           | you wouldn't have spent weeks on that. The bad news is that
           | it has basically no tunables so if it's a tuning issue you're
           | either fucked or have to put "tuning" hacks right into the
           | code if you find any that works (e.g. twitch's "memory
           | ballast" to avoid overly aggressive GC runs:
           | https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-
           | how-i...)
        
             | PeterCorless wrote:
             | There are tradeoffs with all languages. C++ avoids the GC,
             | but you then have to make sure you know how to avoid the
             | common pitfalls of that language.
             | 
             | We use C++ at Scylla (saw that we got a shout-out in the
             | blog! Woot!) but it's not like there isn't a whole industry
             | about writing blogs avoiding C++ pitfalls.
             | 
              | C++ pitfalls through the years...
              | 
              | * https://www.horstmann.com/cpp/pitfalls.html (1997)
              | 
              | * https://stackoverflow.com/questions/30373/what-c-pitfalls-
              | sh... (2008)
              | 
              | * http://blog.davidecoppola.com/2013/09/cpp-pitfalls/ (2013)
              | 
              | * https://www.typemock.com/pitfalls-c/ (2018)
             | 
             | I am not saying any of these (Go, Rust, C++, or even Java)
             | are "right" or "wrong" per se, because that determination
             | is situational. Are you trying to optimize for performance,
             | for code safety, for taking advantage of specific OS hooks,
             | or oppositely, to be generically deployable across OSes, or
             | for ease of development? For the devs at Scylla, the core
             | DB code is C++. Some of our drivers and utilities are
             | Golang (like our shard aware driver). There's also a
             | Cassandra Rust driver -- it'd be sweet if someone wants to
             | make it shard-aware for Scylla!
        
               | xb95 wrote:
               | (Discord infra person here.)
               | 
               | Actually we didn't update the reference to Cassandra in
               | the article -- the read states workload is now on Scylla
               | too, as of last week. ;)
               | 
               | We'll be writing up a blog post on our migration with
               | Scylla at some point in the next few months, but we've
               | been super happy with it. I replaced our TokuMX cluster
               | with it and it's faster, more reliable, _and_ cheaper
               | (including the support contract). Pretty great for us.
        
               | PeterCorless wrote:
               | Woot! Go you! (Or Rust you! Whichever you prefer!)
        
           | hinkley wrote:
           | The common factor in most of my decisions to look for a new
           | job has been realizing that I feel like a very highly
           | compensated janitor instead of a developer.
           | 
           | Once I spend even the plurality of my time cleaning up messes
           | instead of doing something new (and there are ways to do
           | both), then all the life is sucked out of me and I just have
           | to escape.
           | 
           | Telling me that I have to keep using a tool with known issues
           | that we have to process or patches to fix would be super
           | frustrating. And the more times we stumble over that problem
           | the worse my confirmation bias will be.
           | 
           | Even if the new solution has a bunch of other problems, the
           | set that is making someone unhappy is the one that will cause
           | them to switch teams or quit. This is one area where
           | management is in a tough spot with respect to rewrites.
           | 
           | Rewrites don't often fix many things, but if you suspect
           | they're the only thing between you and massive employee
           | turnover, you're between a rock and a hard place. The product
           | is going to change dramatically, regardless of what decision
           | you make.
        
             | outworlder wrote:
             | While I completely agree with the "janitor" sentiment...
             | and for Newton's sake I feel like Wall-E daily...
             | 
             | > Telling me that I have to keep using a tool with known
             | issues that we have to process or patches to fix would be
             | super frustrating.
             | 
             | All tools have known issues. It's just that some have way
             | more issues than others. And some may hurt more than
             | others.
             | 
             | Go has reached an interesting compromise. It has some
             | elegant constructs and interesting design choices (like
             | static compilation which also happens to be fast). The
             | language is simple, so much so that you can learn the
             | basics and start writing useful stuff in a weekend. But it
             | is even more limiting than Java. A Lisp, this thing is not.
             | You can't get very creative - which is an outstanding
             | property for 'enterprises'. Boring, verbose code that makes
             | you want to pull your teeth out is the name of the game.
             | 
             | And I'm saying this as someone who dragged a team kicking
             | and screaming from Python to Go. That's on them - no-one
             | has written a single line of unit tests in years, so now
             | they at least get a whiny compiler which will do basic
             | sanity checks before things blow up in prod. Things still
             | 'panic', but less frequently.
        
               | pstuart wrote:
               | I'll take boring over WTF code any day :-)
        
               | nicoburns wrote:
               | It's not necessarily an either-or though. I'll take
               | clear, concise expressive code over either!
        
             | [deleted]
        
             | tensor wrote:
             | Most development jobs on products that matter involve
             | working on large established code bases. Many people get
             | satisfaction from knowing that their work matters to end
             | users, even if it's not writing new things in the new shiny
             | language or framework. Referring to these people as
             | "janitors" is pretty damn demeaning, and says more about
              | you than the actual job. Rewrites are rarely the right
              | call, and doing one simply to entertain developers is
              | definitely not the right call.
        
               | heinrich5991 wrote:
               | >Referring to these people as "janitors" is pretty damn
               | demeaning,
               | 
               | "Referring to the term of "janitors" as demeaning is
               | pretty demeaning and says more about you than your
               | judgement of the parent."
               | 
               | I don't like this rhetoric device you just used.
               | 
               | Also, I think that janitors do important work as well.
        
         | bearcherian wrote:
         | The article also mentions the service was on Go 1.9.2, which
         | was released 10/2017. I'd be curious to see if the same issues
         | exist on a build based on a more recent version of Go.
        
         | gwbas1c wrote:
         | Everything I've read indicates that RAM caches work poorly in a
         | GC environment.
         | 
         | The problem is that garbage collectors are optimized for
         | applications that mostly have short-lived objects, and a small
         | amount of long-lived objects.
         | 
         | Things like large in-RAM LRU are basically the slowest thing
         | for a garbage collector to do, because the mark-and-sweep phase
         | always has to go through the entire cache, and because you're
         | constantly generating garbage that needs to be cleaned.
        
           | pkolaczk wrote:
           | A high number of short-lived allocations is also a bad thing
           | in a compacting GC environment, because every allocation gets
           | you a reference to a memory region touched a very long time
           | ago, which is likely a cache miss. You would like to use an
           | object pool to avoid this, but then you run into the pitfall
           | of long-lived objects, so there is really no good way out.
        
           | sorokod wrote:
           | In this[1] video, at about the 32 min mark, there is a
           | discussion on GC and apps that do caching.
           | 
           | [1] https://www.youtube.com/watch?v=VCeHkcwfF9Q
        
         | asimpletune wrote:
         | Ok but in Rust those pointers can just be borrowed, obviating
         | the need for GC at all.
        
           | masklinn wrote:
           | Given it's a cache the entries would not have an existing
           | natural owner... except for the cache itself.
           | 
           | There would be no need for a GC to traverse the entire map,
           | but that's because rust doesn't use a GC.
        
             | falcolas wrote:
             | While Rust does not have a discrete runtime GC process, it
             | does utilize reference counting for dynamic memory cleanup.
             | 
             | So you could argue that they are still going to suffer some
             | of the downsides of a GC'ed memory allocation. Some
             | potential issues include non-deterministic object lifespan,
             | and ensuring that any unsafe code they write which
             | interacts with the cache does the "right thing" with the
             | reference counts (potentially including de-allocation; I'm
             | not sure what unsafe code needs to do when referencing
             | reference counted boxes).
        
               | iknowstuff wrote:
               | I think you're confusing Rust's ownership model with
               | Swift's ARC. Rust doesn't do reference counting unless
               | you use Rc<T> or Arc<T>.
        
               | masklinn wrote:
               | > While Rust does not have a discrete runtime GC process,
               | it does utilize reference counting for dynamic memory
               | cleanup.
               | 
               | That's so misleading as to essentially be a lie.
               | 
               | Rust uses reference counting _if and only if you opt into
               | it via reference-counted pointers_. Using Rc or Arc is
                | not the normal or default course of action, and I'm not
               | aware of any situation where it is ubiquitous.
               | 
               | > So you could argue [nonsense]
               | 
               | No, you really could not.
        
         | typical182 wrote:
         | Maybe that is what they hit... but it seems there is a pretty
         | healthy chance they could have resolved this by upgrading to a
         | more modern runtime.
         | 
         | Go 1.9 is fairly old (1.14 is about to pop out), and there have
         | been large improvements on tail latency for the Go GC over that
         | period.
         | 
         | One of the Go 1.12 improvements in particular seems to at
         | least symptomatically line up with what they described, at
         | least at the level of detail covered in the blog post:
         | 
         | https://golang.org/doc/go1.12#runtime
         | 
         |  _"Go 1.12 significantly improves the performance of sweeping
         | when a large fraction of the heap remains live."_
        
         | ryuukk_ wrote:
         | exactly, they used a 3-year-old version
         | 
         | with the improvements made to the runtime and that issue
         | fixed, it is safe to say that Go is much faster than Rust,
         | based on their graphs
        
       | kerkeslager wrote:
       | Go is not a general-purpose language. It's a Google language
       | designed to solve Google's problems. If you aren't Google, you
       | probably have different problems, which Go isn't intended to
       | solve.
       | 
       | EDIT: Currently at -4 downvotes. Would downvoters care to discuss
       | their votes?
        
         | ben0x539 wrote:
         | I downvoted. "Go is not a general-purpose language" is a
         | statement I could see myself agreeing with, so I started
         | reading your comment excited to read a brief outline of what
         | use-cases Go is specifically aimed at and how that makes it
         | sub-optimal for Discord's use-cases.
         | 
         | But "it's for Google, and you aren't Google" isn't a novel
         | perspective, doesn't leave me with new insights, and isn't
         | really actionable for either Google or people who aren't
         | Google.
         | 
         | Usually this criticism is leveled at Go's dependency management
         | story, with the implication being that it's suited to Google's
         | monorepo but not normal people's repo habits. It's not clear to
         | me how the criticism relates to the issues discussed in the
         | article, which seem to be more about the runtime and GC
         | behavior.
         | 
         | Your comment also doesn't come off as amusing or otherwise
         | entertaining, so it feels like you're just dunking on Go users
         | without really aiming to make anyone's day better.
         | 
         | Disclaimer: I use Go at work and think it's incredibly
         | frustrating at times.
        
         | vogre wrote:
         | I am not downvoter, but you should learn the history of the
         | language. Most of the concepts in the language were first
         | implemented long before Google even existed, for systems that
         | were very different from modern ones.
         | 
         | It was made by people who had been designing languages for
         | about 40 years now. While some design choices seem weird, they
         | usually have very strong argumentation and solid experience
         | behind them.
         | 
         | Also if you read the list of problems that Go is intended to
         | solve, you will be surprised how common they are in software
         | development.
        
           | kerkeslager wrote:
           | > I am not downvoter, but you should learn the history of the
           | language.
           | 
           | What makes you think I haven't been following Go since its
           | inception?
           | 
           | > Most of the concepts in the language were first implemented
           | long before Google even existed, for systems that were very
           | different from modern ones.
           | 
           | Yes, some of the languages which created those concepts are
           | languages which I've used and which I feel did it better,
           | which is why I am particularly frustrated that Go has gained
           | such popularity with so little substance.
           | 
           | > It was made by people who had been designing languages for
           | about 40 years now. While some design choices seem weird,
           | they usually have very strong argumentation and solid
           | experience behind them.
           | 
           | Yes. Most of the strong argumentation is Google specific.
           | 
           | > Also if you read the list of problems that Go is intended to
           | solve, you will be surprised how common they are in software
           | development.
           | 
           | Such as?
        
         | cmrdporcupine wrote:
         | As a Googler, I don't consider this accurate. I've been here 8
         | years and have yet to work on a Go code base. Yes, there are
         | projects in Go. Certainly not a majority, nor even a
         | significant minority, honestly.
         | 
         | No, I wouldn't say Go is specific to Google's problems, though
         | I'm sure some of the engineers had them in mind. I see Go used
         | far more outside of Google than in.
        
           | kerkeslager wrote:
           | Well, that's pretty interesting.
           | 
           | I don't know if that disproves that Go was intended to solve
           | Google's problems, though. I think from the early writings of
           | the authors of the language in its infancy, it was pretty
           | clear that they intended it to solve problems they were
           | having at Google (i.e. the single-pass compilation design was
           | intended to help with the compilation of their gigantic
           | codebase). If it hasn't gained traction at Google, that only
           | proves that it failed to solve a lot of Google's problems.
           | 
           | That's still not to say it's a failure in an absolute sense:
           | it may have solved the problems it was intended to solve.
        
           | takeda wrote:
           | Isn't that indication of a failure? It seems like Go aimed to
           | replace Python and Java code at Google.
        
         | Corrado wrote:
         | I agree. One of Go's design goals was to be simple enough for
         | thousands of developers to use it simultaneously across a huge
         | monorepo. To me this is in the same class as companies using
         | k8s; unless you're Google (or Facebook or Netflix ...) you
         | probably shouldn't be using it.
        
           | kerkeslager wrote:
           | > To me this is in the same class as companies using k8s;
           | unless you're Google (or Facebook or Netflix ...) you
           | probably shouldn't be using it.
           | 
           | I'll actually even say, that if you're Facebook or Netflix,
           | you still shouldn't use Go, because you can write your own
           | tools that solve your problems.
        
       | mperham wrote:
       | Better title: "One Discord microservice with extremely high
       | traffic is moving to Rust"
        
         | tybit wrote:
         | Given the rampant misuse of Microservices, this was a really
         | nice read about a seemingly well designed system.
         | 
         | They were able to rewrite their hot spot in a new language
         | without having to rewrite all their business logic in a new
         | language. Not that there wouldn't have been solutions with a
         | monolith, but this certainly seems elegant and precise.
        
         | jhgg wrote:
         | This is one of multiple. We did not blog about this one, but
         | switching a purely CPU-bound Python HTTP service for analytics
         | ingest to Rust resulted in a 90% reduction in the compute
         | required to power it. However, that's not too interesting
         | because it's known that Python is slow haha.
         | 
         | We have 2 golang services left, one of them has a rewrite in
         | rust in PR as of last week (as a fun side project an engineer
         | wanted to try out.)
         | 
         | Additionally, as we move towards more of an SOA internally,
         | we plan to write more high-velocity data services, and Rust
         | will be our language of choice for that.
        
           | onebot wrote:
           | Think replacing elixir with Rust would ever be a
           | consideration? Rust isn't there yet, but if you are NIF'ing a
           | bunch of stuff, seems like it could make sense at some point?
        
           | okgood288 wrote:
           | Well sure when it's a micro service that probably has more
           | lines of infra config than biz logic LOC.
           | 
           | This isn't exactly "Linux kernel: now in Rust!"
           | 
           | Glad you're making tech for you all better.
           | 
           | We get to take up the externalized runtime costs of the mess
           | that is the Electron app.
           | 
           | Engineers are super efficient at offloading the last mile of
           | effort.
        
             | snazz wrote:
             | Let's not start the Electron debate again. That's been
             | argued to death already.
        
               | [deleted]
        
       | nottorp wrote:
       | Can someone wake me up when they switch from javascript to
       | something native in the _client_?
       | 
       | I just checked and, as usual, I have an entry labeled "Discord
       | Helper (Not Responding)" in my process list. I don't think i've
       | ever seen it in a normal state.
        
         | zlynx wrote:
         | That is kind of bad Windows programming but easy to do when
         | writing an app that doesn't need to handle Windows event
         | messages. It probably sits in a loop waiting on socket events
         | and doesn't care if you sent it a WM_QUIT or not. It would be
         | easy to pump the message loop and ignore it all, but why bother?
        
       | blackrock wrote:
       | Would it have been better if they went with Elixir?
       | 
       | Write their code in a functional style. Get the benefits of the
       | Erlang BEAM platform.
       | 
       | Their system runs over the web, so time sensitivity isn't as
       | important, in comparison to video games, VR, or AR.
       | 
       | Anyone ever done a performance comparison breakdown between
       | something like Elixir vs. Rust?
        
         | steveklabnik wrote:
         | Discord is a heavy Elixir user, and even uses it with Rust via
         | NIF: https://blog.discordapp.com/using-rust-to-scale-elixir-
         | for-1...
        
         | jerf wrote:
         | "Would it have been better if they went with Elixir?"
         | 
         | No. It would have been unshippably bad. BEAM is generally
         | fairly slow. It was fast at multitasking for a while, but that
         | advantage has been claimed by several other runtimes in 2020.
         | As a language, it is much slower than Rust. Plus, if you tried
         | to implement a gigantic shared cache map in Erlang/Elixir,
         | you'd have two major problems: One is that you'd need huge
         | chunks of the map in single (BEAM) processes, and you'd get hit
         | by the fact BEAM is not set up to GC well in that case. It
         | wants lots of little processes, not a small number of processes
         | holding tons of data. Second is that you'd be trading what in
         | Rust is "accept some bytes, do some hashing, look some stuff up
         | in memory" with generally efficient, low-copy operations, with
         | "copy the network traffic into an Erlang binary, do some
         | hashing, compute the PID that actually has the data, _send a
         | message_ to that PID with the request, _wait for the reply
         | message_, and then send out the answer", with a whole lot of
         | layers that expect to have time to make copies of lots of
         | things. Adding this sort of coordination into these nominally
         | fast lookups is going to slow this to a crawl. It's like when
         | people try to benchmark Erlang/Elixir/Go's threading by
         | creating processes/goroutines to receive two numbers and add
         | them together "in parallel"; the IPC completely overshadows the
         | tiny amount of work being done. (They mention tokio, but that's
         | still going to add a lot less coordination overhead than Erlang
         | messages.)
         | 
         | Go is a significantly better language for this use case than
         | Elixir/Erlang/BEAM is, let alone Rust.
         | 
         | (This is not a "criticism" of Erlang/Elixir/BEAM. It's an
         | engineering analysis. Erlang/Elixir/BEAM are still suitable for
         | many tasks, just as people still use Python for many things
         | despite the fact it would be a catastrophically bad choice for
         | this _particular_ task. This just isn't one of the tasks it
         | would be suitable for.)
        
           | sergiotapia wrote:
           | >It was fast at multitasking for a while, but that advantage
           | has been claimed by several other runtimes in 2020.
           | 
           | Such as?
        
       | dennisgorelik wrote:
       | > Changing to a BTreeMap instead of a HashMap in the LRU cache to
       | optimize memory usage.
       | 
       | Why would BTreeMap be faster than HashMap? HashMap performance is
       | O(1), while BTreeMap performance is O(log N).
        
         | nemothekid wrote:
         | 1. They never said it was faster, only that memory usage was
         | better. Regardless, it could be the case that log N < C, if C
         | is sufficiently large.
         | 
         | 2. Memory usage on a hash map would be worse especially if the
         | fill ratio is relatively low.
        
         | scott_s wrote:
         | This subthread explains why it's more _memory efficient_ to use
         | a tree-based structure:
         | https://news.ycombinator.com/item?id=22239393. Short version is
         | that in order to get good performance out of a hashtable-based
         | structure, you want to have _more_ than _n_ slots.
         | 
         | Which brings me to my second point: hashtable-based data
         | structures are not worst-case _O(1)_. They are worst-case
         | _O(n)_, because in the worst case, you will either have to
         | scan every entry in your table (open addressing) or walk a list
         | of size _n_ (separate chaining). Of course, good hashtable
         | implementations will not allow a situation with so many
         | collisions, but in order to avoid that, they will need to
         | allocate a new table and copy over the contents of the old,
         | which is also an _O(n)_ operation.
         | 
         | Given two kinds of data structures, one which is average-case
         | _O(1)_ but worst-case _O(n)_ versus best- and worst-case
         | _O(log n)_, which one you choose depends on what kinds of
         | performance you're optimizing for, and how bad the constants
         | are that we've been ignoring. If you care more about
         | throughput, then you usually want average-case _O(1)_, as the
         | occasional latency spikes aren't important to you. But if you
         | care more about latency, then you'll probably want to choose
         | worst-case _O(log n)_, assuming that its implementation's
         | constants aren't too bad.
        
       | harikb wrote:
       | > Discord has never been afraid of embracing new technologies
       | that look promising.
       | 
       | > Embracing the new async features in Rust nightly is another
       | example of our willingness to embrace new, promising technology.
       | As an engineering team, we decided it was worth using nightly
       | Rust and we committed to running on nightly until async was fully
       | supported on stable.
       | 
       | > Changing to a BTreeMap instead of a HashMap in the LRU cache to
       | optimize memory usage.
       | 
       | It is always an algorithm change
        
       | carllerche wrote:
       | Tokio author here (mentioned in blog post). It is really great to
       | see these success stories.
       | 
       | I also think it is great that Discord is using the right tool for
       | the job. It isn't often that you _need_ the performance gains
       | that Rust & Tokio provide, so pick what works best to get the
       | job done and iterate.
        
         | Polyisoprene wrote:
         | No offense to Tokio and Rust, I really like Rust, but someone
         | rewriting their app because of performance limitations in
         | their previous language choice isn't necessarily someone
         | picking the right tool for the job.
         | 
         | I'm not so sure they would have done the rewrite if the Go GC
         | was performing better, and the choice of Rust seems primarily
         | based on prior experience at the company writing performance
         | sensitive code rather than delivering business value.
        
         | joseluisq wrote:
         | Basically because of:
         | 
         | > Rust is blazingly fast and memory-efficient: with no runtime
         | or garbage collector, it can power performance-critical
         | services, run on embedded devices, and easily integrate with
         | other languages.
        
       | kardianos wrote:
       | I'm glad they found a good solution (rust) to solve their
       | problem!
       | 
       | Also note this was with Go1.9. I know GC work was ongoing during
       | that time, I wonder if this type of situation would still happen?
        
         | faitswulff wrote:
         | From /u/DiscordJesse on reddit:
         | 
         | > We tried upgrading a few times. 1.8, 1.9, and 1.10. None of
         | it helped. We made this change in May 2019. Just getting around
         | to the blog post now since we've been busy.
         | 
         | https://www.reddit.com/r/programming/comments/eyuebc/why_dis...
        
         | dilyevsky wrote:
         | LOL that should've been at the top. The improvement in GC
         | between 1.9 and 1.12 is absolutely massive. They could've just
         | upgraded the Go toolchain.
        
           | kerkeslager wrote:
           | The GC changes in 1.12 supposedly target large heaps, which
           | is not Discord's situation.
        
             | dilyevsky wrote:
             | This post needed a lot more depth to really understand what
             | was going on. Statements like
             | 
             | > During garbage collection, Go has to do a lot of work to
             | determine what memory is free, which can slow the program
             | down.
             | 
             | read like blogospam to me (which it is).
             | 
             | For comparison's sake - a similar post from Twitch has a
             | lot more technical detail and generally makes me view their
             | team in a much better light than Discord's after reading
             | both.
        
         | biomcgary wrote:
         | I know latency for GC with large heaps improved in Go1.12. See:
         | https://golang.org/doc/go1.12#runtime
        
           | kerkeslager wrote:
           | The article states specifically that part of the problem was
           | heaps were never large.
           | 
           | EDIT: Actually, no it didn't, I misunderstood it.
        
             | typical182 wrote:
             | Where does it say that?
             | 
             | It says things like:
             | 
             |  _"We were not creating a lot of garbage."_
             | 
             | ... but that statement there doesn't say anything about the
             | heap size, including the size and count of live objects
             | (i.e., not garbage).
             | 
             | It also says:
             | 
             |  _"There are millions of Users in each cache. There are
             | tens of millions of Read States in each cache."_
             | 
             | Large is often in the eye of the beholder, but I missed it
             | if it said anything specifically about not having a large
             | heap size.
        
               | kerkeslager wrote:
               | > ... but that statement there doesn't say anything about
               | the heap size, including the size and count of live
               | objects (i.e., not garbage).
               | 
               | Not sure why you got downvoted, you're actually right,
               | I'm wrong: I misread that and/or assumed one meant the
               | other.
               | 
               | That said, this is a case that should be ideal for
               | generational GC, which Go specifically eschewed at one
               | point. I'm not sure this is still the case, however--I
               | have yet to wade through this[1] to update my knowledge
               | here.
               | 
               | [1] https://blog.golang.org/ismmkeynote
        
         | ascv wrote:
         | It's surprising they didn't test upgrading to 1.13.
        
           | topspin wrote:
           | > It's surprising they didn't test upgrading to 1.13.
           | 
           | It isn't surprising to me. It's stated elsewhere they tried
           | four different versions of Go, up through 1.10 apparently, and had
           | performance problems with all of them. At some point you
           | can't suffer garbage collector nonsense anymore and since
           | they'd already employed Rust on other services they tried it
           | here.
           | 
           | It worked on the first try.
           | 
           | That's not surprising either.
           | 
           | What would be surprising is if any of these "but version such
           | and such is waaay better and they should just use that"
           | claims actually panned out.
           | just manifests as some other garbage collector related
           | performance problem. That's the deal you sign up for when you
           | saddle yourself with a garbage collector.
        
       | viraptor wrote:
       | The next step I expected after LRU tuning was to do simple
       | sharding per user, so that there are more services with smaller
       | caches (cancelling out the impact) and smaller GC spikes,
       | offset in time from each other. I'm curious if that was
       | considered and not done for some reason.
        
       | mister_hn wrote:
       | Why not C++, if performance was an issue?
        
         | wmf wrote:
         | Why C++? Why would you want the same performance as Rust with
         | less safety?
        
         | loeg wrote:
         | Why would you pick C++ for a new codebase in 2019 or 2020 if
         | Rust met your needs?
        
         | bluGill wrote:
         | Modern C++ is the right choice if you have an existing code
         | base in C++, or you need to use features that only exist in a
         | third party C++ library - there is a large collection of C++
         | libraries to choose from.
         | 
         | Their use case doesn't seem to have either consideration (note
         | that even when these are considerations a hybrid of languages
         | is often a good idea) so there isn't a compelling reason to
         | choose C++. That doesn't mean C++ is wrong, just that there is
         | nothing wrong with rust. Maybe a great C++ programmer can get a
         | few tenths of a percent faster code (mostly because compiler
         | writers spend more effort figuring out how to optimize C++ -
         | rust uses the same llvm optimizer but it might sometimes do
         | something less optimal because it assumed C++ input), but in
         | general if the difference matters in your environment you are
         | too close to the edge and need to scale.
         | 
         | Rust might be easier/faster to write than modern C++. If so
         | that is a point in favor of rust. They seem to have people who
         | know rust, which is important. There might be more people who
         | know C++, but I can take any great programmer and make them
         | good in any programming language in a few weeks in the worst
         | case (worst case would be writing a large program in intercal
         | or some such intentionally hard language) - not to be confused
         | with expert which takes more experience.
        
       | RcouF1uZ4gsC wrote:
       | > Changing to a BTreeMap instead of a HashMap in the LRU cache to
       | optimize memory usage.
       | 
       | Collections are one of the big areas where Go's lack of generics
       | really hurts it. In Go, if one of the built in collections does
       | not meet your needs, you are going to take a safety and ergonomic
       | hit going to a custom collection. In Rust, if one of the standard
       | collections does not meet your needs, you (or someone else) can
       | create a pretty much drop-in replacement that does, with a
       | similar ergonomic and safety profile.
        
         | correct_horse wrote:
         | I'm not sure what you mean by standard collections, but
         | BTreeMap is in Rust's standard library.
        
           | pdpi wrote:
           | I think the point the GP is trying to make is that there's no
           | reason why BTreeMap couldn't be an external crate, while only
           | the core Go collections are allowed to be generic.
           | 
           | A corollary to this is that adding more generic collections
           | to Go's standard library implies expanding the set of magical
           | constructs.
        
             | The_rationalist wrote:
             | Rust has its lot of weird hacks too. E.g. arrays can take
             | trait impls only if they have no more than 32 elements...
             | https://doc.rust-
             | lang.org/std/array/trait.LengthAtMost32.htm...
        
               | steveklabnik wrote:
               | This is purely temporary; it used to be less hacky, but
               | in order to move to the no-hacks world, we had to make it
               | a bit more hacky to start.
        
               | masklinn wrote:
               | That's... not that at all. You can absolutely implement
               | traits for arrays of more than 32 elements[0].
               | 
               | It is rather that due to a lack of genericity (namely
               | const generics) you can't implement traits for [T;N], you
               | have to implement them for each size individually. So
               | there has to be an upper bound somehow[1], and the stdlib
               | developers arbitrarily picked 32 for stdlib traits on
               | arrays.
               | 
               | A not entirely dissimilar limit tends to be placed on
               | _tuples_ , and implementing traits / typeclasses /
               | interfaces on them. Again the stdlib has picked an
               | arbitrary limit, here 12[2], the same issue can be seen
               | in e.g. Haskell (where Show is "only" instanced on tuples
               | up to size 15).
               | 
               | These are not "weird hacks", they're logical consequences
               | of memory and file size not being infinite, so if you
               | can't express something fully generically... you have to
               | stop at one point.
               | 
               | [0] here's 47 https://play.rust-
               | lang.org/?version=stable&mode=debug&editio...
               | 
               | [1] even if you use macros to codegen your impl block
               | 
               | [2] https://doc.rust-
               | lang.org/src/core/fmt/mod.rs.html#2115
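The per-size impl pattern masklinn describes can be sketched in a few lines; the `Zeroed` trait and the sizes chosen here are made up for illustration:

```rust
// Without const generics, a trait cannot be implemented once for [T; N];
// it must be implemented per length, typically via a macro. `Zeroed` is a
// hypothetical trait used only for this sketch.
trait Zeroed {
    fn zeroed() -> Self;
}

// Generate one impl per listed length - this is essentially how the stdlib
// covered lengths 0..=32 before const generics.
macro_rules! impl_zeroed_for_arrays {
    ($($n:literal)*) => {$(
        impl Zeroed for [u8; $n] {
            fn zeroed() -> Self {
                [0u8; $n]
            }
        }
    )*};
}

impl_zeroed_for_arrays!(0 1 2 3 4);

// Nothing stops a one-off impl for a larger size, e.g. 47:
impl Zeroed for [u8; 47] {
    fn zeroed() -> Self {
        [0u8; 47]
    }
}

fn main() {
    assert_eq!(<[u8; 4]>::zeroed().len(), 4);
    assert_eq!(<[u8; 47]>::zeroed().len(), 47);
}
```

Const generics (stabilized later, in Rust 1.51) allow the fully generic `impl<T, const N: usize> ... for [T; N]`, which is why the 32-element limit was only ever a stopgap.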
        
               | jolux wrote:
               | That's a completely different and much more minor issue
               | (red herring, more or less) than eschewing the one core
               | language feature that makes performant type-safe custom
               | data structures _possible_.
        
           | [deleted]
        
           | zerr wrote:
           | In Go, standard collections are compiler magic, while in
           | Rust or e.g. C++ they are implemented as libraries.
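As a rough illustration of the library-vs-magic distinction (the `Stack` type here is invented for the example), a type-safe generic collection in Rust is plain user-writable code:

```rust
// A minimal generic stack - ordinary library code, no compiler support
// beyond generics themselves.
struct Stack<T> {
    items: Vec<T>,
}

impl<T> Stack<T> {
    fn new() -> Self {
        Stack { items: Vec::new() }
    }
    fn push(&mut self, item: T) {
        self.items.push(item);
    }
    fn pop(&mut self) -> Option<T> {
        self.items.pop()
    }
}

fn main() {
    let mut s = Stack::new();
    s.push(1);
    s.push(2);
    assert_eq!(s.pop(), Some(2));
    assert_eq!(s.pop(), Some(1));
    assert_eq!(s.pop(), None);
}
```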
        
       | donatj wrote:
       | I feel like from the definition of the service, the entire thing
       | could easily be replaced with a Redis cluster.
        
         | [deleted]
        
         | Sikul wrote:
         | We originally cached this data with a Redis cluster but we hit
         | scaling issues. The Read States service only exists because
         | Redis had issues.
        
           | donatj wrote:
           | Hah, well now I feel like a dufus. Good info!
        
             | Sikul wrote:
             | No worries, we could have mentioned that in the post as
             | part of the service history :)
        
       | _ph_ wrote:
       | If you have a problem at hand which does not really benefit from
       | the presence of a garbage collector, switching to an
       | implementation without one has real potential to be at least
       | somewhat faster. I remember running into this time trigger for
       | garbage collection long ago - though I don't remember why, and
       | had mostly forgotten about it until I read this article. As
       | the article also notes, even if no allocations are going on,
       | Go forces a GC every two minutes; it is set here:
       | https://golang.org/src/runtime/proc.go#L4268
       | 
       | The idea for this (if I remember correctly) is to be able to
       | return unused memory to the OS. Since returning memory
       | requires a GC to run, one is forced at regular intervals. I am
       | a bit surprised that they didn't contact the corresponding Go
       | developers, as they seem interested in practical use cases
       | where the GC doesn't perform well. And given that newer Go
       | releases have improved GC performance, I am also surprised
       | they didn't just increase this interval to an arbitrarily
       | large number and check whether their issues went away.
        
         | KMag wrote:
         | Not only is there good potential for a speed improvement, but
         | languages built around the assumption of pervasive garbage
         | collection tend not to have good language constructs to support
         | manual memory management.
         | 
         | To be fair, most languages without GCs also don't have good
         | language constructs to support manual memory management. If
         | you're going to make wide use of manual memory management, you
         | should think very carefully about how the language and
         | ecosystem you're using help or hinder your manual memory
         | management.
        
       | fmakunbound wrote:
       | Why /does/ it run a GC every 2 minutes? I went looking and
       | didn't find a reason in the code...
       | 
       | https://github.com/golang/go/search?q=forcegcperiod&unscoped...
       | 
       | Go's GC seems kind of primitive.
        
       | truthwhisperer wrote:
       | Another example of bad IT management. Spend those millions on
       | improving Go instead of refactoring code and moving to Rust.
       | And why the hell did you choose Go anyway? Because some fancy
       | developer tried to copy Google?
       | 
       | Bad.
        
       | geodel wrote:
       | Makes sense: write the most efficient stuff for in-house use
       | and give resource-hog Electron apps to users.
        
         | jrockway wrote:
         | Discord pays for their servers, but not for their users'
         | computers.
        
           | Hamuko wrote:
           | That's fine as long as you ignore the fact that the users are
           | the customers.
        
       | unlinked_dll wrote:
       | It'd be cool to look at more signal statistics from the CPU plot.
       | 
       | It appears that Go has a lower CPU floor, but it's killed by the
       | GC spikes, presumably due to the large cache mentioned by the
       | author.
       | 
       | This is interesting to me. It suggests that Rust is better at
       | scale than Go; I would have thought Go's mature concurrency
       | model and implementation would have been optimized for such
       | cases, while Rust would shine in smaller services with CPU
       | bound problems.
       | 
       | Great post!
        
         | arnsholt wrote:
         | My first guess for the slightly higher CPU floor of the Rust
         | version is that the Rust code has to do slightly more work per
         | request, since it will free memory as it gets dropped, whereas
         | the Go code doesn't do any freeing per request, but then gets
         | hit with the periodic spike every two minutes where the entire
         | heap has to be traversed for GC.
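A minimal sketch of the trade-off described above - per-request deterministic freeing instead of periodic whole-heap collection; the `Buffer` and `handle_request` names are hypothetical:

```rust
// In Rust, each request's allocations are freed deterministically when they
// go out of scope (a small, per-request cost), rather than accumulating
// until a collector traverses the heap.
struct Buffer(Vec<u8>);

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs at the end of each request, not at some later GC pause.
        println!("freeing {} bytes", self.0.len());
    }
}

fn handle_request(n: usize) -> usize {
    let buf = Buffer(vec![0u8; n]);
    buf.0.len()
    // `buf` is dropped (and its memory freed) here, as the function returns.
}

fn main() {
    for &n in [64usize, 128, 256].iter() {
        assert_eq!(handle_request(n), n);
    }
}
```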
        
           | jhgg wrote:
           | tokio 0.1 was definitely less efficient. Comparing Go to
           | tokio 0.2, tokio uses less CPU consistently, even against
           | a cluster of the same size almost a year later, with our
           | growth over the time since we switched over.
        
       | romaniitedomum wrote:
       | You're switching to Rust because Go is too slow? Colour me
       | sceptical, but this seems more like an excuse to adopt a trendy
       | language than a considered technical decision. Rust is designed
       | first and foremost for memory safety, and it sacrifices a lot of
       | developer time to achieve this, so if memory safety isn't high on
       | your list of concerns, Rust is probably not going to bring many
       | benefits.
        
         | hajile wrote:
         | Did you read the article? The naive Rust version was better
         | than the tuned golang version in every metric. The most
         | important one (latency) simply wasn't fixable due to golang's
         | GC (something that is a bit of a general GC issue I might add).
        
         | smabie wrote:
         | What would you recommend that doesn't have a GC? Zig? C? Rust
         | is a fine choice. Besides, if you really don't care, just make
         | the entire program unsafe and you'll still reap benefits over C
         | or C++.
        
         | iruoy wrote:
         | So decreasing workload on the servers and avoiding spikes in
         | the read states queue is bad business?
         | 
         | The article also states that it was quite easy to port over and
         | didn't need any quirky tuning.
        
         | lllr_finger wrote:
         | The goals of Rust are stated boldly right on the official
         | website - "Performance" is one of them. In Discord's case, the
         | hit in productivity was worth avoiding the GC issues in Go. I
         | read the article and didn't come to the same conclusion, so I'm
         | curious which passages led you to believe this was done to
         | "adopt a trendy language"?
        
       | tiffanyh wrote:
       | It should also be noted that Rust interoperates extremely well
       | (via Rustler) with Erlang, which is the basis of Discord.
       | 
       | https://github.com/rusterlium/rustler
       | 
       | https://blog.discordapp.com/scaling-elixir-f9b8e1e7c29b
        
       ___________________________________________________________________
       (page generated 2020-02-04 23:00 UTC)