[HN Gopher] New setup for 2020
       ___________________________________________________________________
        
       New setup for 2020
        
       Author : fagnerbrack
       Score  : 73 points
       Date   : 2020-12-26 14:59 UTC (8 hours ago)
        
 (HTM) web link (changelog.com)
 (TXT) w3m dump (changelog.com)
        
       | mdoms wrote:
       | Isn't this just a podcast website, or is there something more to
       | it? Why on earth would a podcast website need Kube?
        
         | jerodsanto wrote:
         | The site does a lot more than merely hosting podcasts.
         | 
         | There's a very active news feed with submissions, commenting,
         | newsletter subscriptions and management, a blog, episode
         | requests, live streams, etc.
         | 
         | Check out the source to see what all the app does:
         | 
         | https://github.com/thechangelog/changelog.com
        
         | notsureaboutpg wrote:
          | A good portion of the article (the beginning portion, no
          | less!) is explicitly meant to answer exactly this question.
          | When the writer anticipates this question and preemptively
          | responds to it, the least we could do is discuss their answer
          | rather than repeat the question.
        
         | that_guy_iain wrote:
          | Yeah, that's what I was thinking. And the next thing I
          | wondered was what the price difference is. A bit of me thinks
          | this is just some people doing some really cool tech work for
          | the sake of it, which I can't fault them for.
        
         | jlgaddis wrote:
         | Not knowing what changelog.com is, I was wondering the same
         | thing... from the header of the page, I could tell that there's
         | a blog, a podcast, and a newsletter.
         | 
         | Without some additional insight that I don't have, it does seem
         | that this is an enormously over-engineered "solution" for a
         | website -- I've NFI why it requires 99.99% uptime!
         | 
          | Perhaps they look at it as a goal or challenge, an opportunity to
         | showcase their knowledge and skills to potential customers, or,
         | hell, maybe they just enjoy that kind of thing? If that's the
         | case, I completely understand and can even relate (my home
         | network is a textbook example of an "over-engineered solution":
         | close to a dozen "enterprise-class" servers in the basement,
         | ~35 various subnets, VMware Enterprise Plus clusters, BGP for
         | anycast, and so on).
         | 
         | AFAICT, though, this is just some developers running a blog and
         | podcasts aimed at other developers? I mean, we're not exactly
         | talking about a "mission critical" web site that's going to
         | result in death and destruction the next time it goes down or
         | Linode shits itself, right?
         | 
         | Or am I missing something?
         | 
         | --
         | 
          |  _EDIT:_ I've read through the rest of the comments now ...
         | 
         | > _This is for fun, to some degree ... We don't really need
         | this setup. One, it's about learning ourselves, but then also
         | sharing that ... It's fun to do._
         | 
         | ... and I completely understand!
        
           | jgalt212 wrote:
            | It's kind of a brand-marketing paradox.
            | 
            | 1. We're great engineers because we can set up and maintain
            | such an impressive set-up.
            | 
            | 2. We're terrible engineers because all of this could
            | probably be done with one server for dynamic content + S3
            | for static content. Or not even S3, maybe just some
            | Cloudflare or Akamai caching.
           | 
           | Of course, like the posters above, I could be missing
           | something due to my outsider/consumer view of changelog.
        
         | archsurface wrote:
         | They don't: "It's worth noting that we don't really need what
         | we have around Kubernetes." from another comment somewhere in
         | here.
        
       | PietKachelhout wrote:
       | I'm disappointed that blogs and podcasts keep promoting Linode. I
       | currently maintain about 30 Linodes and I have been doing so for
       | the past 2 years.
       | 
       | Some things I noticed:
       | 
       | * The internal network is not private. But people don't realise
       | it. You share a /16 with other Linodes. So many open databases,
       | file shares and other services in there.
       | 
        | * Block storage performance is really poor, around 100 IOPS.
        | Same as a SATA disk from 10 years ago.
       | 
       | * No proper snapshot / image functionality.
       | 
        | * Linode Kubernetes Engine was based on Debian Oldstable when
        | it launched.
       | 
       | * Excessive CPU steal, even on dedicated cores. 25% CPU steal is
       | considered normal. Over 50% happens a lot.
       | 
        | * Problems with their hosts. I can only guess what the reason
        | is, but 4 to 8 hours of unannounced downtime of a VM happened
        | to me 6 times in the past 2 years.
       | 
       | Yes, support is friendly. But my international phone bill is huge
       | because the fastest way to get them to do something is to call.
        
         | brobinson wrote:
         | With all these negatives, there must be a really compelling
         | reason to stay. What is it?
        
           | PietKachelhout wrote:
           | The owner of the company thinks they are great because you
           | can call them when there is a problem. I'm having a hard time
           | convincing him that these are issues I've never had at other
           | providers. Certainly not so many.
           | 
           | We are moving everything away. Most of our servers are with
           | another provider already. And we haven't had any similar
           | issues there. I've never called them!
           | 
           | And I forgot to mention the connectivity issues at Linode.
           | When the whole London datacenter was unreachable for 2 hours
           | we lost some customers.
        
             | tgsovlerkhgsel wrote:
             | This is something more companies need to realize: Yes, you
             | should be easily and quickly reachable by phone so that
             | _when_ (not if) things go wrong, I can get good service and
             | resolve it quickly.
             | 
             | But if I'm calling you, you have most likely already
              | failed. If I'm calling for information, your documentation
              | has failed to make that information accessible (it either
              | wasn't documented or wasn't easy enough to find). If I'm
              | calling to resolve an issue (technical or billing), it
              | would have been a lot better if it hadn't happened in the
              | first place.
        
       | fairramone wrote:
       | Is this really "simpler" as they claim? It reads a bit like the
       | honeymoon phase and I'm a lot more interested in how they feel
       | about the new stack 1 or 2 years down the line.
        
       | skinnyarms wrote:
        | I'm a fan of the show; it was really entertaining listening to
        | them knock the site over "live".
        
         | gerhardlazu wrote:
         | That was my favourite part too!
         | 
         | Yes, we could have mitigated that entirely with CDN stale
         | caching, but it was good to see what happens today, and then
         | iterate towards better Fastly integration.
        
       | jacques_chester wrote:
       | I'd be interested in learning more about the move from Concourse
       | to Circle (I'm a notorious Concourse fanboy). What went well,
       | what didn't, what you miss, what prompted it -- that sort of
       | thing.
        
       | johnchristopher wrote:
        | It seemed interesting and I tried to subscribe, but the process
        | doesn't work without disabling uBlock/Adblock. I am okay with
        | that, so I disabled it, but then I failed two captcha rounds
        | (find the bikes in the thumbnails; some thumbnails are upside
        | down?!) :(
        
         | jerodsanto wrote:
          | Sorry for the hassle! We've been hit by lots of spammers
          | lately, so I battened down the hatches. Unfortunately this
          | has the side effect of blocking some legit humans as well. :(
        
       | lifeisstillgood wrote:
       | >>> We no longer provision load balancers, or configure DNS; we
       | simply describe the resources that we need, and Kubernetes makes
       | it happen.
       | 
        | This is (part of) what keeps me in the stone age. _You are_
        | provisioning load balancers and DNS -- just one step removed,
        | through k8s.
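        | 
        | To make that concrete, the "description" in question is roughly
        | a manifest like this (a hypothetical sketch using a stock
        | Service of type LoadBalancer plus an external-dns annotation,
        | not their actual config):
        | 
        |   apiVersion: v1
        |   kind: Service
        |   metadata:
        |     name: changelog
        |     annotations:
        |       # the external-dns controller watches for this annotation
        |       # and creates the DNS record on your behalf
        |       external-dns.alpha.kubernetes.io/hostname: changelog.com
        |   spec:
        |     type: LoadBalancer  # the cloud provider provisions the LB
        |     selector:
        |       app: changelog
        |     ports:
        |       - port: 80
        |         targetPort: 4000  # hypothetical app port
        | 
        | Apply that, and an actual load balancer and DNS record appear
        | behind your back -- which is exactly the "one step removed"
        | provisioning I mean.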
       | 
        | And my _prior_ is that we need to understand that, be aware of
        | it, have a model of what is going on to help develop and debug.
       | 
        | And so it feels a bit like "magic abstraction". Then, to peek
        | through the abstraction, you suddenly not only need to know
        | about DNS and which machine is running bind, but also how
        | Kubernetes internally stores its DNS config, how it spits that
        | out, and which version changed it.
       | 
        | In other words, you have to become an expert in two things to
        | debug it.
       | 
       | And maybe it's worth it - but I struggle to see why it's not
       | simpler to keep my install scripts going.
       | 
        | (OK, I guess I am writing my own answer -- but surely the point
        | is: what is the simplest set of thin install scripts needed to
        | deploy containers?)
        
         | tgsovlerkhgsel wrote:
          | By using K8s and similar technologies, you're buying
          | standardization at the cost of underlying complexity and
          | reduced efficiency.
         | 
         | In many cases, it's a good tradeoff, because you can now use
         | standard tooling on everything.
         | 
         | Just like it's cheaper to ship an entire (physical) shipping
         | container that's half-full than to ship the same stuff loosely.
         | Or why companies will send you two separate letters on the same
         | day with a small note that this is more efficient for them than
         | collating them.
         | 
         | I assume that k8s also makes it much easier to move to a
         | different cloud provider if you're unhappy with one (or the new
         | one offers better pricing). Instead of rewriting your bespoke
         | scripts that only you understand, anyone familiar with the
         | technology will know which modules to swap to make it work with
         | the new provider.
        
         | enos_feedler wrote:
          | Isn't this true of all operating systems? Debugging is going
          | to require some knowledge about the OS and the syscalls it
          | makes, but you don't want to write directly to the machine as
          | an OS would. The same goes for internet-connected clustered
          | machines.
        
         | theptip wrote:
         | In the happy path, your developers no longer need to worry
         | about this stuff. It's possible for a team to stand up a new
         | service and plumb it through all the way to the external LB
         | just using k8s yaml templates.
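          | 
          | For example, the external plumbing can be as small as an
          | Ingress rule like this (a hypothetical sketch; it assumes an
          | ingress controller is already running in the cluster and that
          | a Service named "myapp" exists):
          | 
          |   apiVersion: networking.k8s.io/v1
          |   kind: Ingress
          |   metadata:
          |     name: myapp
          |   spec:
          |     rules:
          |       - host: myapp.example.com  # hypothetical hostname
          |         http:
          |           paths:
          |             - path: /
          |               pathType: Prefix
          |               backend:
          |                 service:
          |                   name: myapp
          |                   port:
          |                     number: 80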
         | 
         | In the unhappy path, sure, you need someone who knows how to
         | debug networking issues, and in some cases it's going to be
         | harder to debug because of the layers of indirection. But the
         | total amount of toil is significantly reduced.
         | 
         | A bad abstraction doesn't carry its weight in complexity. A
         | good abstraction allows you to ignore the lower levels most of
         | the time without missing something important; I'd put k8s
         | firmly in the latter category.
        
         | smallnamespace wrote:
          | > And my prior is that we need to understand that, be aware
          | of it, have a model of what is going on to help develop and
          | debug.
         | 
          | If it's a sufficiently robust abstraction, you don't -- you
          | just learn the abstraction. Kubernetes has reached that point
          | for many folks.
         | 
         | I no longer have a detailed mental model of how my compiler or
         | LLVM works, I just trust that it does. When was the last time
         | you needed to (or were capable of) debugging a bug in your
         | compiler? A couple of human generations of work went into
         | making that happen.
         | 
         | Note that it turns out compiling code well, or making a
         | reliable orchestration system, is an enormously _complex_
         | problem. At some point, the complexity outstrips the ability of
         | even generalists in the field to keep up, yet the systems keep
         | getting more reliable.
         | 
         | So in these types of cases, you can either do it yourself
         | poorly (you're an amateur), do it yourself well (congrats,
         | you've become an expert), or delegate.
         | 
         | This isn't really limited to computing. I delegate maintenance
         | on my car to a mechanic, while I'm pretty sure a generation
         | ago, everybody (in the US) changed their own oil and understood
         | how the carb worked. Times change.
        
         | kbar13 wrote:
         | i think the idea is that k8s does away with having to glue all
         | those pieces of infra together, not that you lose understanding
         | of how it all works together. part of the headache with
         | managing infra is that it rots over time... things come and go
         | (sysv to systemd, apt/snap/whatever, config files change,
         | things break). it's easier to keep up to date on k8s than all
         | the disparate parts of the OS and provider-specific APIs and
         | whatnot
        
           | lifeisstillgood wrote:
           | That's an interesting perspective I have not heard before.
           | 
            | Does this imply there is a cloud abstraction layer that
            | _should_ come (assuming all providers can put aside
            | commercial interests etc.)?
           | 
           | And is k8s the _simplest possible abstraction_? And if not -
           | what is?
        
       | remram wrote:
       | Isn't changelog.com mostly a static site? What kind of workloads
       | do they run/monitor/update with all this infrastructure?
        
         | awinter-py wrote:
         | I think they're sponsored by linode, and they're developer-
         | themed -- there may be team / content reasons to use a lot of
         | unnecessary tools in order to review them
        
           | jerodsanto wrote:
           | This quote from Adam on our episode about the setup explains
           | some of our motivations here:
           | 
           | > It's worth noting that we don't really need what we have
           | around Kubernetes. This is for fun, to some degree. One, we
           | love Linode, they're a great partner... Two, we love you,
           | Gerhard, and all the work you've done here... We don't really
           | need this setup. One, it's about learning ourselves, but then
           | also sharing that. Obviously, Changelog.com is open source,
           | so if you're curious how this is implemented, you can look in
           | our codebase. But beyond that, I think it's important to
           | remind our audience that we don't really need this; it's fun
           | to have, and actually a worthwhile investment for us, because
           | this does cost us money (Gerhard does not work for free), and
           | it's part of this desire to learn for ourselves, and then
           | also to share it with everyone else... So that's fun. It's
           | fun to do.
        
       | js4ever wrote:
       | I was baffled by this: "The worst part is that serving the same
       | files from disks local to the VMs is 44x faster than from
       | persistent volumes (267MB/s vs 6MB/s)."
       | 
        | Is it a configuration issue on their side, or are LKE volumes
        | really limited to 6MB/s on Linode?
       | 
       | How can you be happy with this for production??
        
         | gerhardlazu wrote:
         | Block storage is an area that we are working with Linode to
         | improve. That's the random read/write performance, as measured
         | by fio.
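          | 
          | For anyone curious, those persistent volumes are requested
          | with ordinary PersistentVolumeClaims along these lines (a
          | simplified sketch; the storage class name is the one Linode's
          | CSI driver ships):
          | 
          |   apiVersion: v1
          |   kind: PersistentVolumeClaim
          |   metadata:
          |     name: uploads  # hypothetical claim name
          |   spec:
          |     accessModes:
          |       - ReadWriteOnce  # block storage attaches to one node
          |     storageClassName: linode-block-storage
          |     resources:
          |       requests:
          |         storage: 100Gi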
         | 
         | We have mostly sequential reads & writes (mp3 files) that peak
         | at 50MB/s, then rely on CDN caching (Fastly makes us happy in
         | this respect).
         | 
         | CDN caching is something that we are currently improving, which
         | will make things quicker and more reliable.
         | 
          | The focus is on reality vs. the ideal, and the path that we
          | are taking to improve not just changelog.com, but also our
          | sponsors' products. No managed K8s or IaaS is perfect, but we
          | enjoy the Linode partnership & collaboration ;)
        
         | sweeneyrod wrote:
         | Hey, that speed was good enough in ~1995!
        
       | manigandham wrote:
        | The static files should definitely be on some kind of object
        | storage like S3; that's what it's built for. Much faster, more
        | reliable, more scalable, and likely much cheaper too.
       | 
       | As for persistent volumes, might be better to just offload
       | Postgres to a managed DB service and downsize the K8S instances,
       | or use something like CockroachDB which is natively distributed
       | and can make use of local volumes instead.
        
       | awinter-py wrote:
       | woo self-managed postgres on k8s
       | 
       | vendor-managed DBs limit plugins + versions, excited to see this
       | space advance
        
       ___________________________________________________________________
       (page generated 2020-12-26 23:00 UTC)