[HN Gopher] Comparing Fauna and DynamoDB: Architecture and Pricing
       ___________________________________________________________________
        
       Comparing Fauna and DynamoDB: Architecture and Pricing
        
       Author : evanweaver
       Score  : 36 points
        Date   : 2020-12-09 19:16 UTC (1 day ago)
        
 (HTM) web link (fauna.com)
 (TXT) w3m dump (fauna.com)
        
       | yazaddaruvala wrote:
        | I'm sure Fauna is a great database and probably cheaper in many
        | cases. I just have some issues with the "Complex Example". I
        | don't feel it's realistic that anyone familiar with DynamoDB
        | would create such a schema. It comes across as if a good schema
        | for Fauna has been forced onto DynamoDB, without evaluating what
        | the recommended "DynamoDB way" of solving the customer's needs
        | would be.
       | 
       | > We have an accounts table with 20 secondary indexes defined for
       | all the possible sort fields (DynamoDB's maximum--Fauna has no
       | limit).
       | 
        | The use case of having 20 secondary indexes in DDB is extremely
        | rare. Arguably it should be considered an anti-pattern, only
        | appropriate for an application transitioning between query
        | patterns in some way. If this is the norm for an application,
        | I'd argue the product managers/developers do not understand
        | their customer's needs well enough. I'd assume that at this
        | stage in the product's life, a basic Postgres installation is
        | likely a better choice.
       | 
        | Additionally, if the query pattern really needs to be "super
        | flexible" for the long term, you'll find that you eventually
        | need more and more of Elasticsearch's tech (or something
        | similar). A very common pattern is a DDB Streams to
        | Elasticsearch connector (obviously sacrificing read-after-write
        | consistency).
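        | 
        | For concreteness, a minimal sketch of that pattern: a Lambda
        | function fed by the table's stream mirrors each change into an
        | Elasticsearch index. The index name, endpoint, and the "pk"
        | key attribute are hypothetical, and error handling is omitted.
        | 
        |     from boto3.dynamodb.types import TypeDeserializer
        |     from elasticsearch import Elasticsearch
        | 
        |     es = Elasticsearch(["https://search.internal:9200"])
        |     deserializer = TypeDeserializer()
        | 
        |     def handler(event, context):
        |         # Each record carries the DynamoDB-typed image of
        |         # the item that changed.
        |         for record in event["Records"]:
        |             if record["eventName"] in ("INSERT", "MODIFY"):
        |                 image = record["dynamodb"]["NewImage"]
        |                 doc = {k: deserializer.deserialize(v)
        |                        for k, v in image.items()}
        |                 es.index(index="accounts-search",
        |                          id=doc["pk"], body=doc)
        |             elif record["eventName"] == "REMOVE":
        |                 keys = record["dynamodb"]["Keys"]
        |                 key = deserializer.deserialize(keys["pk"])
        |                 es.delete(index="accounts-search", id=key)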
       | 
       | > Viewing just the default account screen queries 7 indexes and
       | 25 documents. A typical activity update transactionally updates 3
       | documents at a time with 10 dependency checks and modifies all 35
       | indexes.
       | 
       | This is such a red flag. If your application requires this from
       | DDB, you should change your schema (probably more de-
       | normalization). However, the example doesn't have enough
       | information for me to suggest a better schema to meet the
       | customer's needs.
       | 
       | Disclaimer: I work at Amazon, but not in AWS. My opinions are my
       | own.
        
         | pier25 wrote:
          | > _I'd argue the product managers/developers do not understand
          | their customer's needs well enough. I'd assume that at this
          | stage in the product's life, a basic Postgres installation is
          | likely a better choice._
         | 
         | You mean compared to DynamoDB?
         | 
         | Because Fauna is just as flexible as Postgres.
        
         | arpinum wrote:
         | > I'd argue the product managers/developers do not understand
         | their customer's needs well enough
         | 
          | The limited sorting options in AWS services seem to be about
          | optimising AWS' costs rather than understanding customer
          | needs. I'm often frustrated when I think I can click on a
          | column and can't. DynamoDB doesn't handle the use case of
          | diverse user groups exploring data and slicing through it.
          | That's OK; every database has its strengths. But don't
          | dismiss the idea that unconstrained access patterns can be
          | the solution to a customer need.
        
         | evanweaver wrote:
          | We are in agreement: the difference in experience as you move
          | between denormalized key/value-style modeling and normalized
          | relational modeling is the core of the post. DynamoDB has
          | added relational-like features, but using them in a
          | traditional relational way goes against its architectural
          | grain.
          | 
          | Does data modeling flexibility have to decline as an
          | application matures and scales, though? This was one of the
          | larger millstones around our necks at Twitter and something
          | we are building Fauna to avoid.
        
       | rafaelturk wrote:
        | Amazingly, Fauna's pricing is even more confusing than DynamoDB's.
        
       | sargun wrote:
        | One of the really cool DynamoDB features I love (at least in
        | theory) is CDC / Streams. The fact that it automagically hooks
        | up to Kinesis is also neat. Unfortunately, for personal
        | projects, this can lead to runaway spending.
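        | 
        | For reference, routing a table's change records into Kinesis
        | is roughly one API call. A minimal boto3 sketch, with
        | hypothetical table and stream names (the Kinesis stream must
        | already exist):
        | 
        |     import boto3
        | 
        |     dynamodb = boto3.client("dynamodb")
        |     kinesis = boto3.client("kinesis")
        | 
        |     stream_arn = kinesis.describe_stream(
        |         StreamName="orders-cdc"
        |     )["StreamDescription"]["StreamARN"]
        | 
        |     # Ship the table's CDC records to the Kinesis stream.
        |     dynamodb.enable_kinesis_streaming_destination(
        |         TableName="orders",
        |         StreamArn=stream_arn,
        |     )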
       | 
        | Does Fauna have a strongly ordered CDC stream?
        
         | pier25 wrote:
          | Yeah, they recently announced strongly consistent streaming.
         | 
         | https://fauna.com/blog/live-ui-updates-with-faunas-real-time...
        
       | pier25 wrote:
       | Dear HN
       | 
        | I've been commenting on this thread and I'd like to add a
        | disclaimer. While I'm not a Fauna employee, I've been paid by
        | Fauna to write articles that have been published on their blog.
        | My opinions are my own, though.
        | 
        | That said, I've been using and studying Fauna for almost a year
        | now, so if you have any questions, let me know!
        
       | mNovak wrote:
       | In their simple example of a website hit-counter, can someone
       | explain how you would aggregate batches of 50 requests to
        | amortize compute costs? I thought the whole point of the DB was
        | to store information between disparate requests?
        
         | pier25 wrote:
         | AFAIK you can't for that particular contrived example. The
         | article probably mentions the batching of 50 queries just to
         | give you an idea of pricing, not because it works for that
         | example.
         | 
          | Still, even in the worst use case for Fauna I find that $7.50
          | for 2M queries is a good price for all the features it offers
          | (multi-region, ACID, realtime, FQL, authentication and
          | authorization, etc.).
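          | 
          | To make the batching idea concrete anyway: if an app could
          | buffer hits somewhere (say, in the web server process)
          | before flushing, a batch in a single FQL query might look
          | roughly like the sketch below. Collection and document
          | names are hypothetical, and this illustrates the batching
          | mechanism rather than a recommended hit-counter design.
          | 
          |     from faunadb import query as q
          |     from faunadb.client import FaunaClient
          | 
          |     client = FaunaClient(secret="YOUR_FAUNA_SECRET")
          | 
          |     # Hypothetical buffer of (counter doc id, pending hits).
          |     pending = [("home", 37), ("docs", 9), ("pricing", 4)]
          | 
          |     def bump(doc_id, count):
          |         ref = q.ref(q.collection("counters"), doc_id)
          |         current = q.select(["data", "hits"], q.get(ref))
          |         return q.update(
          |             ref, {"data": {"hits": q.add(current, count)}})
          | 
          |     # q.do evaluates every expression in one transaction,
          |     # so the whole batch travels in a single request.
          |     client.query(q.do(*[bump(d, c) for d, c in pending]))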
        
       | abadid wrote:
       | IMO, it's hard to put a price on strong isolation and
        | consistency. Being able to write an app that uses atomic
       | transactions, that are isolated from concurrently running
       | transactions, and that see the correct data is something that
       | translates to reduced programmer time and effort, and improves
       | user experience. Many programmers discount those important
       | features when they start out, but they'd be better served
       | including them in the price comparisons of different products
       | that are out there.
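        | 
        | On the DynamoDB side, the kind of operation being discussed
        | (several writes plus dependency checks that succeed or fail
        | atomically) maps to TransactWriteItems. A minimal boto3
        | sketch; table, key, and attribute names are hypothetical:
        | 
        |     import boto3
        | 
        |     dynamodb = boto3.client("dynamodb")
        | 
        |     # Move 5 units between two accounts, but only if a
        |     # third document still exists.
        |     dynamodb.transact_write_items(TransactItems=[
        |         {"Update": {
        |             "TableName": "accounts",
        |             "Key": {"pk": {"S": "account#42"}},
        |             "UpdateExpression": "SET balance = balance - :amt",
        |             "ExpressionAttributeValues": {":amt": {"N": "5"}},
        |         }},
        |         {"Update": {
        |             "TableName": "accounts",
        |             "Key": {"pk": {"S": "account#99"}},
        |             "UpdateExpression": "SET balance = balance + :amt",
        |             "ExpressionAttributeValues": {":amt": {"N": "5"}},
        |         }},
        |         {"ConditionCheck": {
        |             "TableName": "accounts",
        |             "Key": {"pk": {"S": "org#7"}},
        |             "ConditionExpression": "attribute_exists(pk)",
        |         }},
        |     ])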
        
       ___________________________________________________________________
       (page generated 2020-12-10 23:01 UTC)