[HN Gopher] Olric: Distributed in-memory data structures in Go
       ___________________________________________________________________
        
       Olric: Distributed in-memory data structures in Go
        
       Author : mastabadtomm
       Score  : 189 points
       Date   : 2020-08-10 12:20 UTC (10 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | meddlepal wrote:
       | Reminds me of a stripped down version of Hazelcast.
        
       | [deleted]
        
       | [deleted]
        
       | pojntfx wrote:
       | Played around w/ olric a while back as a kv store for a routing
       | graph, really enjoyed it even though I ended up home-brewing it
       | later (the network topology meant that olric wasn't the right
       | tool for the job). Highly recommend it ;)
        
       | dang wrote:
       | Also discussed 6 months ago:
       | https://news.ycombinator.com/item?id=22297507
        
       | phpjsnerd wrote:
       | Thanks man!
        
       | ddorian43 wrote:
        | Distributed in-process caches can make things very fast. See
        | Vimeo with groupcache: https://medium.com/vimeo-engineering-
       | blog/video-metadata-cac...
       | 
       | Or in-process distributed rate limiting:
       | https://github.com/mailgun/gubernator
       | 
       | https://github.com/golang/groupcache: groupcache is in production
       | use by dl.google.com (its original user), parts of Blogger, parts
       | of Google Code, parts of Google Fiber, parts of Google production
       | monitoring systems, etc.
       | 
        | There was also a paper by Google that I can't find right now.
       | 
       | Edit: Found it https://blog.acolyer.org/2019/06/24/fast-key-
       | value-stores/
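        | 
        | The core trick groupcache adds over a plain map is
        | "singleflight" request deduplication: concurrent misses for the
        | same key share a single loader call. A minimal, stdlib-only Go
        | sketch of that idea (names and structure are illustrative, not
        | groupcache's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// call tracks one in-flight load so later callers can wait on it.
type call struct {
	wg  sync.WaitGroup
	val string
}

// Cache is a minimal in-process cache with singleflight-style
// deduplication: for a given missing key, the loader runs once.
type Cache struct {
	mu      sync.Mutex
	data    map[string]string
	pending map[string]*call
}

func NewCache() *Cache {
	return &Cache{
		data:    make(map[string]string),
		pending: make(map[string]*call),
	}
}

// Get returns the cached value for key, invoking load at most once
// even if many goroutines miss on the same key concurrently.
func (c *Cache) Get(key string, load func(string) string) string {
	c.mu.Lock()
	if v, ok := c.data[key]; ok { // fast path: cache hit
		c.mu.Unlock()
		return v
	}
	if p, ok := c.pending[key]; ok { // a load is already in flight
		c.mu.Unlock()
		p.wg.Wait()
		return p.val
	}
	p := &call{}
	p.wg.Add(1)
	c.pending[key] = p
	c.mu.Unlock()

	p.val = load(key) // the expensive part (e.g. a DB query) runs once

	c.mu.Lock()
	c.data[key] = p.val
	delete(c.pending, key)
	c.mu.Unlock()
	p.wg.Done()
	return p.val
}

func main() {
	c := NewCache()
	loads := 0
	load := func(k string) string {
		loads++ // stands in for a database hit
		return "metadata-for-" + k
	}
	c.Get("video:1", load)
	c.Get("video:1", load) // served from cache, no second load
	fmt.Println(loads)     // prints 1
}
```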
        
         | rrdharan wrote:
         | Maybe you're thinking of the RAMCloud paper?
         | 
         | https://web.stanford.edu/~ouster/cgi-bin/papers/ramcloud.pdf
        
           | ddorian43 wrote:
           | Nope. It's "Fast key-value stores: An idea whose time has
           | come and gone":
           | 
           | https://blog.acolyer.org/2019/06/24/fast-key-value-stores/
        
             | FridgeSeal wrote:
              | This is such a fascinating idea, and I dig that blog.
              | Anyone know of any similar blogs/sites?
        
       | bellwether wrote:
       | Love this idea, can't wait to try it out!
        
       | mancini0 wrote:
       | So would this be similar to Apache Ignite?
        
       | didip wrote:
       | I have been curious about Olric for a while now.
       | 
       | * It is embeddable.
       | 
       | * Seems easy to install, even without Kubernetes.
       | 
        | I am just missing performance numbers to answer questions
        | like: Is it significantly faster than etcd? Is it faster than
        | TiKV? And finally, is it faster than Redis Cluster?
        
         | shahsyed wrote:
         | Great questions!
         | 
         | What's the underlying algo/impl used for this? According to the
         | release notes it uses a DTopic data structure. While
         | interesting, I'm not really sure if this is a unique/new
         | structure by the author (buraksezer) or if it is an impl. from
         | a paper I'm not aware of.
         | 
         | If anyone has more information about it, let me know!
        
         | lnsp wrote:
         | I don't think it should be compared to etcd since it only
         | offers best-effort consistency while etcd is strictly
         | consistent.
        
       | Thaxll wrote:
        | Looks like a Hazelcast kind of solution.
        
       | jwineinger wrote:
        | Maybe just a personal thing, but going to a GitHub repo and
        | seeing "build failing" and low test coverage badges is usually
        | a turn off that stops me from reading further.
        
         | mholt wrote:
         | Why? It's normal for builds to fail between releases. And
         | (warning: controversial opinion inbound) code coverage is
          | hardly a useful metric -- one can write a single test case
          | that gets 100% coverage but doesn't test anything; what you
          | _want_ is _assertion coverage_, but I don't know how feasible
          | that is.
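          | 
          | To make that concrete, a minimal sketch (the function and its
          | bug are invented for illustration) of a test that reaches
          | 100% line coverage while asserting nothing:

```go
package main

import "fmt"

// Abs is deliberately buggy for illustration: it never negates,
// so Abs(-3) returns -3.
func Abs(x int) int {
	if x < 0 {
		return x // bug: should be -x
	}
	return x
}

// coverageOnlyTest executes every line of Abs, so a coverage tool
// reports 100% line coverage -- yet it asserts nothing, and the
// bug goes undetected.
func coverageOnlyTest() {
	Abs(3)
	Abs(-3)
}

func main() {
	coverageOnlyTest()
	fmt.Println(Abs(-3)) // prints -3: full coverage, broken code
}
```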
        
           | jnwatson wrote:
            | It strongly depends on the branching style. An older style
            | treats the main/master branch as the dev branch, allowing
            | temporary deviations from a working state. If you want a
            | working branch, you have to pull a release branch.
           | 
           | The modern trend is that master is sacred and should always
           | pass.
        
           | reggieband wrote:
           | > It's normal for builds to fail between releases.
           | 
            | Why justify this with a categorical claim about what's
            | normal? In the last 7 or 8 years of my development life the
            | master branch on
           | every project I've worked on has always been clean. Breaking
           | changes live in a branch and those branches are becoming
           | shorter and shorter lived (previously lived for months, then
           | weeks, now days).
           | 
            | That doesn't necessarily mean master is at all times ready to
           | release (although some hold that extreme position). But it
           | does mean that whenever you want to start work on some new
           | feature you can be sure that branching from master is safe.
           | And rebasing against master at any point should be safe.
           | 
           | In fact, almost all repos I've worked with recently
           | explicitly deny merges into master unless a suite of tests
           | pass, including basic builds, unit tests, and static code
           | linting. The very idea that I could ever pull master and get
           | a build failure makes me shudder.
        
           | sneak wrote:
           | Is it normal for builds to fail on master between releases?
           | 
           | I would think a green build CI should be required for merge
           | to master, even if perhaps the tests don't all pass.
        
             | mholt wrote:
              | Those badges are also often useless. For example, here's
              | their current failure reason:
              | 
              |     Bad response status from coveralls: 422
              |     {"message":"service_job_id (716381595) must be unique
              |     for Travis Jobs not supplying a Coveralls Repo
              |     Token","error":true}
             | 
             | Has nothing to do with the actual compilation status.
             | 
             | In projects I'm involved with, the _vast_ majority of CI
             | errors were due to stupid things, not actual code problems.
             | For example, last week our tests were failing because the
             | east coast IBM data center that ran the tests was offline
             | due to extended power outages from the weather.
        
               | jnwatson wrote:
               | Like all metrics, badges are just a model. However, for
               | open source projects, optics matter, and failing badges
               | are a hint that perhaps quality isn't the top priority.
        
         | eat_veggies wrote:
         | Is 74% low? The 70-80% range feels about average, and decently
         | acceptable for most projects.
        
           | [deleted]
        
           | eikenberry wrote:
           | 74% isn't low, that's right in the sweet spot. I generally
           | shoot for 65-85%. Lower and you miss important stuff, higher
           | and you start testing implementation instead of APIs and the
            | tests become too brittle.
        
       | hellofunk wrote:
        | I'm curious how you would actually access this - would you
        | need some kind of RPC library to use in conjunction with it?
        | The page doesn't really go far enough to answer actual usage
        | questions.
       | 
       | What would be the alternative to using a library like this? I've
       | been looking for a good way to distribute workload to many
       | machines and don't want to invent it myself or jump into a very
       | heavy system.
        
         | harikb wrote:
          | This example gives a flavor of the client-server mode. It
          | seems to be using some custom binary protocol - a surprising
          | choice.
         | 
         | https://github.com/buraksezer/olric#golang-client
          | 
          |     var clientConfig = &client.Config{
          |         Addrs:       []string{"localhost:3320"},
          |         DialTimeout: 10 * time.Second,
          |         KeepAlive:   10 * time.Second,
          |         MaxConn:     100,
          |     }
          | 
          |     client, err := client.New(clientConfig)
          |     if err != nil {
          |         return err
          |     }
          | 
          |     dm := client.NewDMap("foobar")
          |     err = dm.Put("key", "value")
          |     // Handle this error
        
       | hu3 wrote:
        | Perhaps a better link would be the README, which explains what
        | the project is about and its features.
       | 
       | https://github.com/buraksezer/olric/blob/master/README.md
        
       | whalesalad wrote:
       | Very curious about the origins of this tool and how it is used by
       | the creator. Looks very comprehensive, but some real world
       | examples of use and performance would be great.
        
       ___________________________________________________________________
       (page generated 2020-08-10 23:00 UTC)