[HN Gopher] The 5-Hour CDN
       ___________________________________________________________________
        
       The 5-Hour CDN
        
       Author : robfig
       Score  : 148 points
       Date   : 2021-08-03 19:36 UTC (3 hours ago)
        
 (HTM) web link (fly.io)
 (TXT) w3m dump (fly.io)
        
       | [deleted]
        
       | vmception wrote:
       | >The term "CDN" ("content delivery network") conjures Google-
       | scale companies managing huge racks of hardware, wrangling
       | hundreds of gigabits per second. But CDNs are just web
       | applications. That's not how we tend to think of them, but that's
       | all they are. You can build a functional CDN on an 8-year-old
       | laptop while you're sitting at a coffee shop.
       | 
       | huh yeah never thought about it
       | 
       | I blame how CDNs are advertised for the visual disconnect
        
       | youngtaff wrote:
        | Some of the things they miss in the post: Cloudflare uses a
        | customised version of Nginx, and likewise Fastly with Varnish
        | (don't know about Netlify and ATS).
        | 
        | Out of the box nginx doesn't support HTTP/2 prioritisation, so
        | building a CDN with nginx doesn't mean you're going to be
        | delivering as good a service as Cloudflare
       | 
        | Another major challenge with CDNs is peering and private
        | backhaul: if you're not pushing major traffic, your customers
        | aren't going to get the best peering with other carriers /
        | ISPs...
        
         | mike_d wrote:
          | HTTP/2 prioritization is a lot of hype for a theoretical
          | feature that yields little real-world performance benefit.
          | When a client is rendering a page, it knows what it needs in
          | what order to minimize blocking. The server doesn't.
        
       | legrande wrote:
        | I like to blog from the raw origin and not use CDNs, because if
        | a blogpost is changed I have to manually purge the CDN cache,
        | which can happen a lot. CDNs also have the caveat that if
        | they're down, a page can load very slowly while it tries to
        | fetch the asset.
        
         | tshaddox wrote:
         | If you're okay with every request having the latency all the
         | way to your origin, you can have the CDN revalidate its cache
         | on every request. Your origin can just check date_updated (or
         | similar) on the blog post to know if the cache is still valid
         | without needing to do any work to look up and render the whole
         | post.
         | 
         | To further reduce load and latency to your origin, you can use
         | stale-while-revalidate to allow the CDN to serve stale cache
         | entries for some specified amount of time before requiring a
         | trip to your origin to revalidate.
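          | 
          | That flow, roughly, in Python (an illustrative sketch; the
          | post metadata and the 30s window here are made up, not from
          | the article):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

# Hypothetical per-post metadata; a real origin would read date_updated
# from its database.
POSTS = {"/hello": datetime(2021, 8, 1, tzinfo=timezone.utc)}

def respond(path, if_modified_since=None):
    """Origin-side validation: answer 304 (no body, cheap) when the
    CDN's copy is still current, else 200 with headers that force
    revalidation but allow serving stale during the refresh."""
    updated = POSTS[path]
    if if_modified_since:
        if updated <= parsedate_to_datetime(if_modified_since):
            return 304, {}  # cache still valid; no need to render the post
    return 200, {
        "Last-Modified": format_datetime(updated, usegmt=True),
        # max-age=0: revalidate on every request;
        # stale-while-revalidate=30: the CDN may answer with the stale
        # copy for up to 30s while it revalidates in the background.
        "Cache-Control": "public, max-age=0, stale-while-revalidate=30",
    }
```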
        
           | cj wrote:
           | > If you're okay with every request having the latency all
           | the way to your origin, you can have the CDN revalidate its
           | cache on every request.
           | 
            | It's also worth mentioning that even when revalidating on
            | every request (or not caching at all), routing through a
            | CDN can still improve overall latency, because TLS can be
            | terminated at a nearby edge server, significantly
            | shortening the TLS handshake.
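            | 
            | Rough numbers (illustrative, not measured): TCP plus a TLS
            | 1.2 handshake costs about 3 round trips before the request
            | can even be sent, so doing those round trips to a nearby
            | edge instead of a distant origin dominates:

```python
def time_to_first_request_ms(rtt_ms, handshake_rtts=3):
    """TCP (1 RTT) + TLS 1.2 handshake (2 RTTs) before the first HTTP
    request byte can be sent; TLS 1.3 would need one RTT fewer."""
    return handshake_rtts * rtt_ms

# Client handshakes with a distant origin (150ms RTT) vs. with a nearby
# CDN edge (10ms RTT) that already holds a warm pooled connection to the
# origin, so the request itself costs one more 150ms round trip.
direct = time_to_first_request_ms(150)         # 450 ms before the request
via_edge = time_to_first_request_ms(10) + 150  # 30 ms handshake + 150 ms
```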
        
             | spondyl wrote:
             | Ah, the TLS shortening aspect of a CDN is something that
             | seems obvious in hindsight but I'd never really thought
             | about it. Thanks!
        
             | champtar wrote:
              | Also CDN providers will hopefully have good peering. My
              | company uses OpenVPN over TCP on port 443 for maximum
              | compatibility. From the other side of the globe the VPN
              | is pretty slow, so I proxy the TCP connection via a cheap
              | VPS, and speed goes from maybe 500kbit/s to 10Mbit/s,
              | just because the VPS provider's peering is way better
              | than my company's "business internet". (The VPS is in the
              | same country as the VPN server.)
        
         | raro11 wrote:
         | I set an s-maxage of at least a minute. Keeps my servers from
         | being hugged to death while not having to invalidate manually.
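          | 
          | In header form (a sketch; the 60s value mirrors the comment,
          | and max-age=0 keeps browsers revalidating):

```python
# s-maxage applies only to shared caches (the CDN); max-age to browsers.
# With these values the CDN absorbs all traffic for a URL for 60s, while
# browsers still revalidate against the CDN on each visit.
headers = {"Cache-Control": "public, max-age=0, s-maxage=60"}
```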
        
       | jabo wrote:
       | Love the level of detail that Fly's articles usually go into.
       | 
        | We have a distributed CDN-like feature in the hosted version of
        | our open source search engine [1] - we call it our "Search
        | Delivery Network". It works on the same principles, with the
        | added nuance of also needing to replicate data over high-
        | latency networks between data centers as far apart as Sao Paulo
        | and Mumbai, for example. That brings another fun set of
        | challenges to deal with! Hoping to write about it when
        | bandwidth allows.
       | 
       | [1] https://cloud.typesense.org
        
       | Rd6n6 wrote:
       | Sounds like a fun weekend project
        
       | ksec wrote:
        | It is strange that you put a time duration in front of CDN
        | (content delivery network), because given all the recent
        | incidents with Fastly, Akamai and Bunny, I read it as "5-hour
        | Centralised Downtime Network".
        
       | chrisweekly wrote:
        | This is so great. See also
        | https://fly.io/blog/ssh-and-user-mode-ip-wireguard/
        
       | babelfish wrote:
       | fly.io has a fantastic engineering blog. Has anyone used them as
       | a customer (enterprise or otherwise) and have any thoughts?
        
         | mike_d wrote:
         | I run my own worldwide anycast network and still end up
         | deploying stuff to Fly because it is so much easier.
         | 
         | The folks who actually run the network for them are super
         | clueful and basically the best in the industry.
        
         | cgarvis wrote:
          | just started to use them for an elixir/phoenix project. multi
          | region with distributed nodes just works. feels almost
          | magical after all the aws work I've done the past few years.
        
           | tiffanyh wrote:
            | What's magical?
           | 
           | I was under the impression that fly.io today (though they are
           | working on it) doesn't do anything unique to make hosting
           | elixir/Phoenix app easier.
           | 
           | See this comment by the fly.io team.
           | 
           | https://news.ycombinator.com/item?id=27704852
        
             | mcintyre1994 wrote:
             | They're not doing anything special to make Elixir
             | specifically better yet, but their private networking is
             | already amazing for it - you can cluster across arbitrary
             | regions completely trivially. It's a really good fit for
             | Elixir clustering as-is even without anything specially
             | built for it. I have no idea how you'd do multi-region
             | clustering in AWS but I'm certain it'd be a lot harder.
        
         | alopes wrote:
         | I've used them in the past. All I can say is that the support
         | was (and probably still is) fantastic.
        
         | joshuakelly wrote:
         | Yes, I'm using it. I deploy a TypeScript project that runs in a
         | pretty straightforward node Dockerfile. The build just works -
         | and it's smart too. If I don't have a Docker daemon locally, it
         | creates a remote one and does some WireGuard magic. We don't
         | have customers on this yet, but I'm actively sending demos and
         | rely on it.
         | 
         | Hopefully I'll get to keep working on projects that can make
         | use of it because it feels like a polished 2021 version of
         | Heroku era dev experience to me. Also, full disclosure, Kurt
         | tried to get me to use it in YC W20 - but I didn't listen
         | really until over a year later.
        
       | parentheses wrote:
       | Author has a great sense of humor. I love it!
        
       | simonw wrote:
       | This article touches on "Request Coalescing" which is a super
       | important concept - I've also seen this called "dog-pile
       | prevention" in the past.
       | 
       | Varnish has this built in - good to see it's easy to configure
       | with NGINX too.
       | 
       | One of my favourite caching proxy tricks is to run a cache with a
       | very short timeout, but with dog-pile prevention baked in.
       | 
        | This can be amazing for protecting against sudden unexpected
        | traffic spikes. Even a cache timeout of 5 seconds will provide
        | robust protection against tens of thousands of hits per second,
        | because request coalescing/dog-pile prevention ensures that
        | your CDN host sends a request to the origin at most once every
        | five seconds.
        | 
        | I've used this on high-traffic sites and seen it robustly
        | absorb any amount of unauthenticated traffic (unauthenticated,
        | hence no per-cookie variation in the response).
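        | 
        | The trick, sketched in Python (a single-process, thread-based
        | sketch of what Varnish and nginx's proxy_cache_lock do for you;
        | the names here are made up):

```python
import threading
import time

class CoalescingCache:
    """Single-flight cache: when many requests miss the same key at
    once, one "leader" fetches from the origin and everyone else waits
    for its result (dog-pile prevention). Entries live for ttl seconds."""

    def __init__(self, fetch, ttl=5.0):
        self.fetch = fetch        # origin fetch function: key -> value
        self.ttl = ttl
        self.lock = threading.Lock()
        self.entries = {}         # key -> (expires_at, value)
        self.inflight = {}        # key -> Event set when the fetch finishes

    def get(self, key):
        while True:
            with self.lock:
                hit = self.entries.get(key)
                if hit and hit[0] > time.monotonic():
                    return hit[1]                     # fresh cache hit
                event = self.inflight.get(key)
                if event is None:                     # we are the leader
                    event = self.inflight[key] = threading.Event()
                    leader = True
                else:
                    leader = False
            if leader:
                value = self.fetch(key)               # sole origin request
                with self.lock:
                    self.entries[key] = (time.monotonic() + self.ttl, value)
                    del self.inflight[key]
                event.set()
                return value
            event.wait()  # follower: block until the leader stores the value
```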
        
         | anonymoushn wrote:
         | Do you know if varnish's request coalescing allows it to send
         | partial responses to every client? For example, if an origin
         | server sends headers immediately then takes 10 minutes to send
         | the response body at a constant rate, will every client have
         | half of the response body after 5 minutes?
         | 
         | Thanks!
        
           | simonw wrote:
           | I don't know for certain, but my hunch is that it streams the
           | output to multiple waiting clients as it receives it from the
           | origin. Would have to do some testing to confirm that though.
        
       | amirhirsch wrote:
       | This is cool and informative and Kurt's writing is great:
       | 
       | The briny deeps are filled with undersea cables, crying out
       | constantly to nearby ships: "drive through me"! Land isn't much
       | better, as the old networkers shanty goes: "backhoe, backhoe,
       | digging deep -- make the backbone go to sleep".
        
       ___________________________________________________________________
       (page generated 2021-08-03 23:00 UTC)