[HN Gopher] How Palo Alto Networks Replaced Kafka with ScyllaDB ...
       ___________________________________________________________________
        
       How Palo Alto Networks Replaced Kafka with ScyllaDB for Stream
       Processing
        
       Author : carpintech
       Score  : 55 points
       Date   : 2022-06-15 20:44 UTC (2 hours ago)
        
 (HTM) web link (www.scylladb.com)
 (TXT) w3m dump (www.scylladb.com)
        
       | smohare wrote:
        
         | switchbak wrote:
         | It's a 27 min video from an actual customer doing real things
          | with this product. The product in question has a reputation
          | for being obsessively focused on performance.
         | 
         | Personally, I think this could be interesting and useful for
         | those with similar needs.
        
           | nerdponx wrote:
           | And it was a conference talk. If the company didn't actually
            | believe in the product, they wouldn't have sent someone to
            | give the talk.
        
       | redwood wrote:
       | Interesting to see space for multiple commercial backers of
       | Cassandra
       | 
       | Anyone seeing Cassandra adoption for new use cases in the public
       | cloud?
        
         | throwaway5752 wrote:
          | Scylla isn't really Cassandra; it's a drop-in replacement
          | that's compatible with Cassandra up to 3.11. Don't take my
          | word for it:
         | https://docs.scylladb.com/using-scylla/cassandra-compatibili...
         | 
          | Until recently the space also had Instaclustr (acquired by
          | NetApp) and The Last Pickle (acquired by DataStax). You
         | could probably look at the committers
         | (https://projects.apache.org/committee.html?cassandra) to get a
         | handle on where it is finding use cases.
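
          A minimal sketch of what "drop-in" means in practice, using
          the stock DataStax Java driver (4.x): the same CQL client
          code can be pointed at either a Cassandra 3.11 node or a
          Scylla node. The contact point, port and datacenter name
          below are placeholders, not anything from the linked docs.

            import java.net.InetSocketAddress;

            import com.datastax.oss.driver.api.core.CqlSession;
            import com.datastax.oss.driver.api.core.cql.Row;

            public class DropInCheck {
                public static void main(String[] args) {
                    // Placeholder endpoint and datacenter name; swap in
                    // a Cassandra or a Scylla contact point -- the
                    // client code is identical either way.
                    try (CqlSession session = CqlSession.builder()
                            .addContactPoint(
                                new InetSocketAddress("127.0.0.1", 9042))
                            .withLocalDatacenter("datacenter1")
                            .build()) {
                        Row row = session
                            .execute("SELECT cluster_name, release_version"
                                + " FROM system.local")
                            .one();
                        // Scylla serves the same CQL system tables as
                        // Cassandra, so this works against both.
                        System.out.println(row.getString("cluster_name")
                            + " / " + row.getString("release_version"));
                    }
                }
            }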
        
       | throwaway81523 wrote:
        | ScyllaDB and related parts like Seastar always struck me as
        | genuinely performance-oriented programming, though it leaned
        | on language tech (C++14 early on) that was painful to work
        | with. I wonder if a nicer approach is possible nowadays.
        
       | scottcodie wrote:
        | I get not wanting to add yet another system in order to keep
        | operational complexity down, but it seems more economical to
        | use a system like Flink to do a time-windowed join and emit
        | single records to be written to a persistence store. The
        | Flink time window can be made large enough to cover the
        | disparity between ingest time and event time without much RAM
        | consumption, by using a RocksDB state backend on the
        | operator. Let me know if I'm missing something, every use
        | case is different :)
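
        For anyone who hasn't seen this pattern, below is a minimal
        sketch of an event-time interval join in Flink's DataStream
        API (Java, roughly the Flink 1.14 era), with the RocksDB state
        backend so a large join window spills to disk instead of
        living in RAM. The Event type, session key, watermark bound,
        join bounds and in-memory sources are illustrative
        assumptions, not details from the talk; a real job would read
        the two half-records from Kafka and sink the merged record to
        the persistence store.

          import java.time.Duration;

          import org.apache.flink.api.common.eventtime.WatermarkStrategy;
          import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
          import org.apache.flink.streaming.api.datastream.DataStream;
          import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
          import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
          import org.apache.flink.streaming.api.windowing.time.Time;
          import org.apache.flink.util.Collector;

          public class WindowedJoinSketch {

              // Illustrative event shape: two halves of a record share a
              // sessionId and carry their own event timestamps.
              public static class Event {
                  public String sessionId;
                  public long tsMillis;
                  public String payload;
                  public Event() {}
                  public Event(String sessionId, long tsMillis, String payload) {
                      this.sessionId = sessionId;
                      this.tsMillis = tsMillis;
                      this.payload = payload;
                  }
              }

              public static void main(String[] args) throws Exception {
                  StreamExecutionEnvironment env =
                      StreamExecutionEnvironment.getExecutionEnvironment();
                  // Keep join state on disk so a wide event-time window
                  // doesn't have to fit in RAM (needs the
                  // flink-statebackend-rocksdb dependency).
                  env.setStateBackend(new EmbeddedRocksDBStateBackend());

                  WatermarkStrategy<Event> wm = WatermarkStrategy
                      .<Event>forBoundedOutOfOrderness(Duration.ofMinutes(5))
                      .withTimestampAssigner((e, ts) -> e.tsMillis);

                  // Stand-in sources; a real job would consume two Kafka
                  // topics here.
                  DataStream<Event> firstHalf = env
                      .fromElements(new Event("s1", 1_000L, "first"))
                      .assignTimestampsAndWatermarks(wm);
                  DataStream<Event> secondHalf = env
                      .fromElements(new Event("s1", 60_000L, "second"))
                      .assignTimestampsAndWatermarks(wm);

                  // Event-time interval join keyed by session id: pair the
                  // halves whose timestamps fall within +/- 30 minutes of
                  // each other, then emit one merged record downstream.
                  firstHalf.keyBy(e -> e.sessionId)
                      .intervalJoin(secondHalf.keyBy(e -> e.sessionId))
                      .between(Time.minutes(-30), Time.minutes(30))
                      .process(new ProcessJoinFunction<Event, Event, String>() {
                          @Override
                          public void processElement(Event a, Event b, Context ctx,
                                                     Collector<String> out) {
                              out.collect(a.sessionId + ": " + a.payload
                                  + " + " + b.payload);
                          }
                      })
                      .print(); // stand-in for a sink to the persistence store

                  env.execute("windowed-join-sketch");
              }
          }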
        
       ___________________________________________________________________
       (page generated 2022-06-15 23:00 UTC)