[HN Gopher] Kubernetes is our generation's Multics
       ___________________________________________________________________
        
       Kubernetes is our generation's Multics
        
       Author : genericlemon24
       Score  : 239 points
       Date   : 2021-07-21 08:19 UTC (14 hours ago)
        
 (HTM) web link (www.oilshell.org)
 (TXT) w3m dump (www.oilshell.org)
        
       | nickjj wrote:
       | I'd be curious what a better alternative looks like.
       | 
        | I'm a huge fan of keeping things simple (vertically scaling 1
        | server with Docker Compose and scaling horizontally only when
        | it's necessary), but having recently learned and used
        | Kubernetes for a project, I think it's pretty good.
       | 
        | I haven't come across too many other tools that were so well
        | thought out while also guiding you through how to break down
        | the components of "deploying".
       | 
        | The ideas of a pod, deployment, service, ingress, job, etc.
        | are super well thought out and flexible enough to let you
        | deploy many types of things, but the abstractions are good
        | enough that you can also abstract away a ton of complexity
        | once you've learned the fundamentals.
       | 
        | For example, you can write about 15 lines of straightforward
        | YAML configuration to deploy any type of stateless web app
        | once you set up a decently tricked-out Helm chart. That's
        | complete with running DB migrations in a sane way, updating
        | public DNS records, SSL certs, CI / CD, live-preview pull
        | requests that get deployed to a sub-domain, zero-downtime
        | deployments and more.
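        | 
        | For a rough idea, the per-app values file for a chart like
        | that can be tiny. A minimal sketch (the chart and every value
        | name here are hypothetical, not from a real chart):
        | 
        |     image: registry.example.com/myapp:1.2.3
        |     replicas: 2
        |     ingress:
        |       host: myapp.example.com
        |       tls: true
        |     migrations:
        |       command: ["rake", "db:migrate"]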
        
         | kitd wrote:
          | It's simpler than that for simple scenarios. `kubectl run`
          | can set you up with a standard deployment + service. Then
          | you can describe the resulting objects, save the yaml, and
          | adapt/reuse as you need.
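          | 
          | Roughly like this (a sketch; on newer kubectl versions
          | `run` creates a bare pod, so you'd use `kubectl create
          | deployment` plus `kubectl expose` instead):
          | 
          |     # kubectl run web --image=nginx --port=80
          |     # kubectl expose pod web --port=80
          |     # kubectl get svc web -o yaml > web-svc.yaml
          |     # Trimmed-down YAML you'd keep and adapt:
          |     apiVersion: v1
          |     kind: Service
          |     metadata:
          |       name: web
          |     spec:
          |       selector:
          |         run: web
          |       ports:
          |         - port: 80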
        
         | ljm wrote:
         | > once you set up a decently tricked out Helm chart
         | 
         | I don't disagree but this condition is doing a hell of a lot of
         | work.
         | 
          | To be fair, you don't need to do much to run a service on a
          | toy k8s project. It just gets complicated when you layer on
          | all the production-grade stuff like load balancers, service
          | meshes, access control, CI pipelines, o11y, etc. etc.
        
           | nickjj wrote:
           | > To be fair, you don't need to do much to run a service on a
           | toy k8s project.
           | 
            | The previous reply is based on a multi-service,
            | production-grade workload. Setting up a load balancer
            | wasn't bad. Most cloud providers that offer managed
            | Kubernetes make it pretty painless to get their load
            | balancer set up and working with Kubernetes. On EKS with
            | AWS that meant using the AWS Load Balancer Controller and
            | adding a few annotations. That includes HTTP to HTTPS
            | redirects, www to apex domain redirects, etc. On AWS it
            | took a few hours to get it all working, complete with ACM
            | (SSL certificate manager) integration.
           | 
           | The cool thing is when I spin up a local cluster on my dev
           | box, I can use the nginx ingress instead and everything works
           | the same with no code changes. Just a few Helm YAML config
           | values.
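            | 
            | For reference, the ALB side mostly boils down to a few
            | annotations on the Ingress. A sketch from memory (check
            | the controller docs for your version; the host and ARN
            | are placeholders):
            | 
            |     apiVersion: networking.k8s.io/v1
            |     kind: Ingress
            |     metadata:
            |       name: myapp
            |       annotations:
            |         kubernetes.io/ingress.class: alb
            |         alb.ingress.kubernetes.io/scheme: internet-facing
            |         alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:...
            |         alb.ingress.kubernetes.io/ssl-redirect: "443"
            |     spec:
            |       rules:
            |         - host: myapp.example.com
            |           http:
            |             paths:
            |               - path: /
            |                 pathType: Prefix
            |                 backend:
            |                   service:
            |                     name: myapp
            |                     port: {number: 80}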
           | 
           | Maybe I dodged a bullet by starting with Kubernetes so late.
           | I imagine 2-3 years ago would have been a completely
           | different world. That's also why I haven't bothered to look
           | into using Kubernetes until recently.
           | 
           | > I don't disagree but this condition is doing a hell of a
           | lot of work.
           | 
           | It was kind of a lot of work to get here, but it wasn't
           | anything too crazy. It took ~160 hours to go from never using
           | Kubernetes to getting most of the way there. This also
            | includes writing a lot of ancillary documentation and
            | wiki-style posts to get some of the research and ideas out
            | of my head and onto paper so others can reference them.
        
           | edoceo wrote:
           | o11y = observability
        
             | tovej wrote:
             | You couldn't create a parody of this naming convention
             | that's more outlandish than the way it's actually being
             | used.
        
               | handrous wrote:
               | o11y? In my head it sounds like it's a move in "Tony
               | Hawk: Pro K8er"
        
           | verdverm wrote:
            | You still have to do that prod-grade stuff; K8s creates a
            | cloud-agnostic API for it. People can use the same terms
            | and understand each other.
        
         | worldsayshi wrote:
          | My two cents is that Docker Compose is an order of magnitude
          | simpler to troubleshoot or understand than Kubernetes, but
          | the problem that Kubernetes solves is not that much more
          | difficult.
        
         | handrous wrote:
         | > That's complete with DB migrations in a safe way
         | 
         | How?! Or is that more a "you provide the safe way, k8s just
         | runs it for you" kind of thing, than a freebie?
        
           | nickjj wrote:
           | Thanks, that was actually a wildly misleading typo haha. I
           | meant to write "sane" way and have updated my previous
           | comment.
           | 
           | For saFeness it's still on us as developers to do the dance
           | of making our migrations and code changes compatible with
           | running both the old and new version of our app.
           | 
           | But for saNeness, Kubernetes has some neat constructs to help
           | ensure your migrations only get run once even if you have 20
           | copies of your app performing a rolling restart. You can
           | define your migration in a Kubernetes job and then have an
           | initContainer trigger the job while also using kubectl to
           | watch the job's status to see if it's complete. This
           | translates to only 1 pod ever running the migration while
           | other pods hang tight until it finishes.
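            | 
            | The shape of it, roughly (a sketch; every name here is
            | made up, and the initContainer needs RBAC permission to
            | read Jobs):
            | 
            |     apiVersion: batch/v1
            |     kind: Job
            |     metadata:
            |       name: migrate-v42
            |     spec:
            |       template:
            |         spec:
            |           restartPolicy: Never
            |           containers:
            |             - name: migrate
            |               image: myapp:v42
            |               command: ["rake", "db:migrate"]
            | 
            | And in the app Deployment's pod spec:
            | 
            |     initContainers:
            |       - name: wait-for-migrations
            |         image: bitnami/kubectl
            |         command:
            |           - kubectl
            |           - wait
            |           - --for=condition=complete
            |           - job/migrate-v42
            |           - --timeout=300s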
           | 
           | I'm not a grizzled Kubernetes veteran here but the above
           | pattern seems to work in practice in a pretty robust way. If
           | anyone has any better solutions please reply here with how
           | you're doing this.
        
             | handrous wrote:
              | Hahaha, OK, I figured you didn't mean what I _hoped_ you
              | meant, or I'd have heard a lot more about that already.
              | That still reads like it's pretty handy, but way less
              | "holy crap my entire world just changed".
        
       | wwwaldo wrote:
       | Hi Andy: if you see this, I'm the other 4d polygon renderer! I
       | read the kubernetes whitepaper after RC and ended up spending a
       | lot of the last year on it. Maybe if I had asked you about
       | working with Borg I could have saved myself some trouble. Glad to
       | see you're still very active!
        
       | asciimov wrote:
       | Anyone want to fill me in on what this "Perlis-Thompson
       | Principle" is?
        
       | flowerlad wrote:
       | The alternatives to Kubernetes are even more complex. Kubernetes
       | takes a few weeks to learn. To learn alternatives, it takes
       | years, and applications built on alternatives will be tied to one
       | cloud.
       | 
       | See prior discussion here:
       | https://news.ycombinator.com/item?id=23463467
       | 
        | You'd have to learn AWS Auto Scaling groups (proprietary to
        | AWS), Elastic Load Balancing (proprietary to AWS) or HAProxy,
        | blue-green deployments or phased rollouts, Consul, systemd,
        | Pingdom, CloudWatch, etc. etc.
        
         | rantwasp wrote:
          | cloud agnosticism is, in my experience, a red herring. It
          | does not matter, and the effort required to move from one
          | cloud to another is still non-trivial.
          | 
          | I like using the primitives the cloud provides, while also
          | having a path to run my software on bare metal if needed.
          | This means: VMs, decoupling the logging and monitoring from
          | the cloud services (e.g. use a good library that can send
          | to CloudWatch; prefer open-source solutions when possible),
          | and doing proper capacity planning (with the option to
          | automatically scale up if the flood ever comes), etc.
        
         | dolni wrote:
         | Kubernetes uses all those underlying AWS technologies anyway
         | (or at least an equivalently complex thing). You still have to
         | be prepared to diagnose issues with them to effectively
         | administrate Kubernetes.
        
           | hughrr wrote:
            | From experience, Kubernetes on AWS is always broken
            | somewhere as well.
           | 
           | Oh it's Wednesday, ALB controller has shat itself again!
        
           | gizdan wrote:
           | Only the k8s admins need to know that though, not the users
           | of it.
        
             | dolni wrote:
             | "Only the k8s admins" implies you have a team to manage it.
             | 
             | A lot of things go from not viable to viable if you have
             | the luxury of allocating an entire team to it.
        
               | gizdan wrote:
               | Fair point. But this is where the likes of EKS and GKE
               | come in. It takes away a lot of the pain that comes from
               | managing K8s.
        
           | flowerlad wrote:
           | That hasn't been my experience. I use Kubernetes on Google
           | cloud (because they have the best implementation of K8s), and
           | I have never had to learn any Google-proprietary things.
        
           | giantrobot wrote:
            | At least with building to k8s you can shift to another
            | cloud provider if those problems end up too difficult to
            | diagnose or fix. Moving providers with a k8s system can be
            | a weeks-long project rather than a years-long project,
            | which can easily make the difference between surviving and
            | closing the doors. It's not a panacea but it at least
            | doesn't make your system dependent on a single provider.
        
             | SahAssar wrote:
             | > At least with building to k8s you can shift to another
             | cloud provider if those problems end up too difficult to
             | diagnose or fix.
             | 
              | You're saying that the solution to k8s being complicated
              | and hard to debug is to move to another cloud and hope
              | that fixes it?
        
               | giantrobot wrote:
                | > You're saying that the solution to k8s being
                | complicated and hard to debug is to move to another
                | cloud and hope that fixes it?
                | 
                | Not in the slightest. I'm saying that building a
                | platform against k8s lets you migrate between cloud
                | providers, because the _cloud provider's system_ might
                | be causing you problems. These problems are probably
                | related to _your_ platform's design and implementation
                | causing an impedance mismatch with the cloud provider.
               | 
               | This isn't helpful knowledge when you've only got four
               | months of runway and fixing the platform or migrating
               | from AWS would take six months or a year. It's not like
               | switching a k8s-based system is trivial but it's easier
               | than extracting a bunch of AWS-specific products from
               | your platform.
        
             | dolni wrote:
             | If you can literally pick up and shift to another cloud
             | provider just by moving Kubernetes somewhere else, you are
             | spending mountains of engineering time reinventing a bunch
             | of different wheels.
             | 
             | Are you saying you don't use any of your cloud vendor's
             | supporting services, like CloudWatch, EFS, S3, DynamoDB,
             | Lambda, SQS, SNS?
             | 
             | If you're running on plain EC2 and have any kind of sane
             | build process, moving your compute stuff is the easy part.
             | It's all of the surrounding crap that is a giant pain (the
             | aforementioned services + whatever security policies you
             | have around those).
        
               | flowerlad wrote:
               | I use MongoDB instead of DynamoDB, and Kafka instead of
               | SQS. I use S3 (the Google equivalent since I am on their
               | cloud) through Kubernetes abstractions. In some rare
               | cases I use the cloud vendor's supporting services but I
               | build a microservice on top of it. My application runs on
               | Google cloud and yet I use Amazon SES (Simple Email
               | Service) and I do that by running a small microservice on
               | AWS.
        
               | dolni wrote:
               | Sure, you can use those things. But now you also have to
               | maintain them. It costs time, and time is money. If you
               | don't have the expertise to administrate those things
               | effectively, it may not be a worthwhile investment.
               | 
               | Everyone's situation is different, of course, but there
               | is a reason that cloud providers have these supporting
               | services and there is a reason people use them.
        
               | flowerlad wrote:
               | > _But now you also have to maintain them._
               | 
               | In my experience it is less work than keeping up with
               | cloud provider's changes [1]. You can stay with a version
               | of Kafka for 10 years if it meets your requirements. When
               | you use a cloud provider's equivalent service you have to
               | keep up with their changes, price increases and
               | obsolescence. You are at their mercy. I am not saying it
               | is always better to set up your own equivalent using OSS,
               | but I am saying that makes sense for a lot of things. For
               | example Kafka works well for me, and I wouldn't use
               | Amazon SQS instead, but I do use Amazon SES for emailing.
               | 
               | [1] https://steve-yegge.medium.com/dear-google-cloud-
               | your-deprec...
        
               | dolni wrote:
               | But "keeping up with changes" applies just as much to
               | Kubernetes, and I would argue it's even more dangerous
               | because an upgrade potentially impacts every service in
               | your cluster.
               | 
               | I build AMIs for most things on EC2. That interface never
               | breaks. There is exactly one service on which
               | provisioning is dependent: S3. All of the code (generally
               | via Docker images), required packages, etc are baked in,
               | and configuration is passed in via user data.
               | 
               | EC2 is what I like to call a "foundational" service. If
               | you're using EC2 and it breaks, you wouldn't have been
               | saved by using EKS or Lambda instead, because those use
               | EC2 somewhere underneath.
               | 
               | Re: services like SQS, we could choose to roll our own
               | but it's not really been an issue for us so far. The only
               | thing we've been "forced" to move on is Lambda, which we
               | use where appropriate. In those cases, the benefits
               | outweigh the drawbacks.
        
               | lanstin wrote:
                | And don't you have specific yaml for "AWS LB
                | configuration option" and stuff? The concepts in
                | different cloud providers are different. I can't
                | imagine it's possible to be portable without some
                | jquery-type layer expressing concepts you can use and
                | that are built out of the native concepts. But I'd bet
                | the different browsers were more similar in 2005 than
                | the different cloud providers are in 2021.
        
               | dolni wrote:
                | Sure, there is configuration that goes into using your
                | cloud provider's "infrastructure primitives". My point
                | is that Kubernetes is often using those anyway, and if
                | you don't understand them you're unprepared to respond
                | when your cloud provider has an issue.
               | 
                | In terms of the effort to deploy something new, for my
                | organization it's low. We have a Terraform module that
                | creates the infrastructure, glues the pieces together,
                | tags stuff, and makes sure everything is configured
                | uniformly. You specify some basic parameters for your
                | deployment and you're off to the races.
               | 
                | We don't need to add yet more complexity with
                | Kubernetes-specific cost-tracking software; AWS does
                | it for us automatically. We don't have to care about
                | how pods are sized and how those pods might or might
                | not fit on nodes. Autoscaling gives us consistently
                | sized EC2 instances that, in my experience, have never
                | run into issues because of a bad neighbor. Most
                | importantly of all, I don't have the upgrade anxiety
                | that comes from having a ton of services stacked on
                | one Kubernetes cluster, which may all suffer if an
                | upgrade does not go well.
        
       | fungiblecog wrote:
        | If Antoine de Saint-Exupery was right that "Perfection is
        | achieved, not when there is nothing more to add, but when
        | there is nothing left to take away," then IT as an industry is
        | heading further and further away from perfection at an
        | exponentially accelerating rate.
       | 
       | The only example I can think of where a modern community is
       | actively seeking to simplify things is Clojure. Rich Hickey is
       | very clear on the problem of building more and more complicated
       | stuff and is actively trying to create software by composing
       | genuinely simpler parts.
        
         | TameAntelope wrote:
          | I'd argue that achieving perfection is not a linear process.
          | Sometimes you have to add way too many things before you can
          | remove all of the useless things.
         | 
         | Nobody is puppeteering some grand master plan, we're on a
         | journey of discovery. When we're honest with ourselves, we
         | realize nobody knows what will stick and what won't.
        
         | harperlee wrote:
         | Jonathan Blow has also been vocal on that regard.
        
       | honkycat wrote:
       | People love to pooh-pooh "complicated" things like unit tests,
       | type systems, Kubernetes, GraphQL, etc. Things that are solving a
       | specific problem for LARGE SCALE ENTERPRISE users.
       | 
       | I will quote myself here: A problem does not cease to exist just
       | because you decided to ignore it.
       | 
       | Without Kubernetes, you still need to:
       | 
       | - Install software onto your machines
       | 
       | - Start services
       | 
       | - Configure your virtual machines to listen on specific ports
       | 
       | - have a load balancer directing traffic to and watching the
       | health of those ports
       | 
       | - a system to re-start processes when they exit
       | 
       | - something to take the logs of your systems and ship them to a
       | centralized place so you can analyze them.
       | 
       | - A place to store secrets and provide those secrets to your
       | services.
       | 
       | - A system to replace outdated services with newer versions ( for
       | either security updates, or feature updates ).
       | 
       | - A system to direct traffic to allow your services to
       | communicate with one another. ( Service discovery )
       | 
       | - A way to add additional instances to a running service and tell
       | the load balancer about them
       | 
       | - A way to remove instances when they are no longer needed due to
       | decreased load.
       | 
       | So sure, you don't need Kubernetes at an enterprise organization!
       | Just write all of that yourself! Great use of your time, instead
       | of concentrating on writing features that will make your
       | organization more money.
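        | 
        | To make that concrete: a single manifest like the following
        | (a minimal sketch with hypothetical names) covers the
        | restart, health-check, service-discovery, rolling-update and
        | scaling bullets above.
        | 
        |     apiVersion: apps/v1
        |     kind: Deployment
        |     metadata:
        |       name: api
        |     spec:
        |       replicas: 3                   # add/remove instances
        |       selector:
        |         matchLabels: {app: api}
        |       template:
        |         metadata:
        |           labels: {app: api}
        |         spec:
        |           containers:
        |             - name: api
        |               image: myorg/api:1.4.2  # bump for rolling update
        |               ports:
        |                 - containerPort: 8080
        |               livenessProbe:          # restart on failure
        |                 httpGet:
        |                   path: /healthz
        |                   port: 8080
        |     ---
        |     apiVersion: v1
        |     kind: Service                 # service discovery + LB
        |     metadata:
        |       name: api
        |     spec:
        |       selector: {app: api}
        |       ports:
        |         - port: 80
        |           targetPort: 8080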
        
         | fierro wrote:
         | thank you lol. Hello World engineers will never stop
         | criticizing K8s.
        
         | alisonatwork wrote:
         | That escalated quickly. Unit tests and type systems are not
         | complicated at all, and are applied by solo developers all the
         | time. GraphQL and Kubernetes are completely different beasts,
         | technologies designed to solve problems that not all developers
         | have. There really isn't a comparison to be made.
        
           | doctor_eval wrote:
           | I agree. GraphQL is conceptually straightforward, even if
           | certain implementations can be complex. Any developer
           | familiar with static typing is going to get it pretty easily.
           | 
           | I'm far from an expert, but ISTM that Kubernetes is complex
           | both conceptually and in implementation. This has
           | implications well beyond just operational reliability.
        
           | strken wrote:
           | Almost every team I've worked on has needed to deploy
           | multiple services somewhere, and almost every app has run
           | into escalating round trip times from nested data and/or
           | proliferating routes that present similar data in different
           | ways. While it's true to say not _all_ developers have those
           | problems, they 're very common.
        
         | timr wrote:
         | Sure, but k8s isn't the only way to do any of those things, and
         | it's certainly a heavyweight way of doing most of them.
         | 
         | It's not a question of k8s or bespoke. That's a false
         | dichotomy.
         | 
         | I see way too many young/inexperienced tech teams using k8s to
         | build things that could probably be hosted on a couple of AWS
         | instances (if that). The parasitic costs are high.
        
           | rhacker wrote:
           | I see way too many young/inexperienced tech teams STILL using
           | an unmaintainable process of just spinning up an EC2 instance
           | for random crap because there is no deployment strategy at
           | the company.
        
             | breakfastduck wrote:
             | Not sure why this is being downvoted.
             | 
             | "We can do it ourselves!" attitude by people who are
             | unskilled is the source of many legacy hell-webs sitting in
             | companies all over the world that are desperately trying to
             | be maintained by their inheritors.
        
             | mountainriver wrote:
              | Yup, k8s is at least standardized in a way that's
              | somewhat sane.
              | 
              | Before k8s, every org I worked for had an absolute mess
              | of tangled infrastructure.
        
         | markzzerella wrote:
         | Your post reads like a teenager yelling "you don't understand
         | me" at parents who also were teenagers at one point. You really
         | think that those are new and unique problems? Your bullet
         | points are like a list of NixOS features. I just did all of
         | that across half a dozen servers and a dozen virtual machines
          | with `services.homelab.enable = true;` before I opened up
          | HN, while it's deploying. I'm not surprised that you can't
          | see us lowly peasants from your high horse, but many of us
          | have been doing everything you mentioned, probably far more
          | reliably and reproducibly, for a long time.
        
         | api wrote:
        | It's not K8S or nothing. It's K8S or Nomad, which is a much
        | simpler solution that's easier to administrate.
        
           | remram wrote:
           | While this is not false, I don't think many of the posts
           | critical of K8s hitting the front page are advertising for
           | Nomad, or focusing on drawbacks that don't apply to Nomad.
        
           | rochacon wrote:
            | This is partially true. If the only Kubernetes feature you
            | care about is container scheduling, then yes, Nomad is
            | simpler. The same could probably be said about Docker
            | Swarm.
           | However, if you want service discovery, load balancing,
           | secret management, etc., you'll probably need
           | Nomad+Vault+Consul+Fabio/similar to get all the basic
           | features. Want easy persistent storage provisioning? Add CSI
           | to the mix.
           | 
            | Configuring these services to work together is not at all
            | trivial either (considering proper security, such as TLS
            | everywhere), and there aren't many solutions available
            | from the community (or managed) that package this up in an
            | easy way.
        
         | jimbokun wrote:
         | Yes, but you don't need many or any of those things to launch a
         | Minimum Viable Product.
         | 
         | So Kubernetes can become invaluable once you need to scale, but
         | when you are getting started it will probably only slow you
         | down.
        
           | aequitas wrote:
            | If you want your MVP to be publicly available and your
            | corporation's ops/sec teams to be on board with your
            | plans, then Kubernetes is an answer as well. Even if your
            | MVP only needs a single instance and no scaling,
            | Kubernetes provides a common API between developers and
            | operations so both can do the job they were hired for
            | while being in each other's way as little as possible.
        
             | jimbokun wrote:
             | Pre-MVP, development and ops are likely the same people.
        
               | aequitas wrote:
                | With pre-MVP you mean installing it on your laptop,
                | right? It all just really depends on your company's
                | size and the liberties you are given. At a certain
                | size your company will have dedicated ops and security
                | teams which call all the shots. For a lot of
                | companies, Kubernetes gives developers the liberties
                | they would normally only get with a lot of bureaucracy
                | or red tape.
        
         | fxtentacle wrote:
         | You're mixing together useful complexity with useless
         | complexity.
         | 
         | Plus at the very least, I'd be very careful about putting type
         | systems into the same basket as Kubernetes. One is a basic
         | language feature used offline and before deploying. The other
         | is a highly complex interwoven web of tools that might take
         | your systems offline if used incorrectly.
         | 
          | Without Kubernetes, you need Debian and its Apache and MySQL
          | packages. It's called a LAMP stack, and for many production
          | deployments, that's good enough. Because without all that
          | "cloud magic", a $50 per month server running a bare metal
          | OS is beyond overpowered for most web apps, so you can skip
          | all the scaling exercises. And with a redundant PSU and a
          | redundant network port, 99.99% uptime is achievable. A feat
          | so difficult, I'd like to mention, that Amazon Web Services
          | or Heroku rarely manage to...
         | 
         | Complexity has high costs. Just because you don't see
         | Kubernetes' complexity now, doesn't mean you won't pay for it
         | through reduced performance, increased bug surface, increased
         | downtime, or additional configuration nightmares.
        
           | bob1029 wrote:
           | > You're mixing together useful complexity with useless
           | complexity.
           | 
           | > Complexity has high costs
           | 
           | Complexity management is the central theme of building any
           | large, valuable system. We would probably find that the more
           | complex (and correct) a system, the more valuable it becomes
           | on a relative basis to other competing solutions. The US tax
           | code is a pretty damn good example of complexity
           | _intentionally_ taken to the extreme (for purposes of total
           | market capture). We shouldn 't be surprised to find other
           | technology vendors framing problems & marketing their wares
           | under similar pretenses.
           | 
           | The best way to deal with complexity is to eliminate it or
           | the conditions under which it must exist. For example, we
           | made the engineering & product choice that says we do not
           | ever intend to scale an instance of our application beyond
           | the capabilities of a single server. Consider the
           | implications of this constraint when reviewing how many
           | engineers we actually need to hire, or if Kubernetes even
           | makes sense.
           | 
           | I think one of the biggest failings in software development
           | is a lack of respect for the nature and impact of complexity.
           | If we are serious about reducing or eliminating modes of
           | complexity, we have to be willing to dig really deep and
           | consider dramatic changes to the ways in which we architect
           | these systems.
           | 
            | I know it's been posted to death on HN over the last ~48
            | hours, but Out of the Tar Pit is the best survey of
            | complexity that I have seen in my career so far:
           | 
           | http://curtclifton.net/papers/MoseleyMarks06a.pdf
        
           | debarshri wrote:
            | Absolutely agree with you. I have seen the debate between
            | accidental and necessary complexity very often. It
            | actually depends upon the stage of the organisation. In my
            | opinion, many devs in startups and smaller orgs try to
            | accommodate future expectations around the product and
            | create accidental complexity. Accidental complexity
            | becomes necessary complexity when an organisation scales
            | out.
        
             | kyleee wrote:
             | Another case of premature optimization, really
        
           | moonchild wrote:
           | > You're mixing together useful complexity with useless
           | complexity.
           | 
            | Or, to channel Fred Brooks, essential and accidental
            | complexity.
        
           | threeseed wrote:
           | This argument is often made and is ridiculous.
           | 
           | No one should or is using Kubernetes to run a simple LAMP
           | stack.
           | 
            | But if you have dozens of containers and want them to be
            | managed in a consistent, secure, observable and
            | maintainable way, then Kubernetes is going to be a better
            | solution than anything you build yourself.
        
             | tmp_anon_22 wrote:
             | > No one should or is using Kubernetes to run a simple LAMP
             | stack.
             | 
              | Yes they are. Some developer got all excited about the
              | capabilities of k8s and had an initial larger scope for
              | a project, so they set it up with GKE or EKS, and it
              | managed to provide just enough business value to burrow
              | in like a tick that won't be going away for years.
              | 
              | Developers get all excited about new shiny tools and
              | chuck them into production all the time, particularly at
              | smaller orgs.
        
               | threeseed wrote:
               | > initial larger scope for a project
               | 
               | So they weren't trying to run a simple LAMP stack then.
        
               | naniwaduni wrote:
               | Trying and doing can be very different things.
        
               | debarshri wrote:
                | I have seen smaller RoR, Django or LAMP-stack apps
                | being deployed on Kubernetes exactly for the reasons
                | you mentioned. It is often pitched as a silver bullet
                | for the future.
        
               | michaelpb wrote:
                | When Boss says "my idea is gonna be HUGE, so make this
                | go fast", you can either spend 4 hours optimizing some
                | DB queries, or you can spend 40+ hours on a broadly
                | scoped "conversion" project and have a new thing to
                | add to your resume, and then spend 4 hours optimizing
                | some DB queries...
        
             | dvtrn wrote:
             | I agree that you probably shouldn't but if you think no one
             | "is", I'd point to my last job, an enterprise that went to
             | k8s for a single-serving php service that reads PDFs.
             | 
             | I recently asked a friend who still works there if anything
             | else has been pushed to k8s since I left (6 months ago).
             | The answer: no.
        
               | derfabianpeter wrote:
               | Sounds familiar.
        
             | rodgerd wrote:
             | Alas, a lot of people are. One of the reasons there's such
             | a backlash against k8s - other than contrarianism, which is
             | always with us - is that there are quite a few people who
             | have their job and hobby confused, and inflicted k8s (worse
             | yet, raw k8s) on their colleagues not because of a
             | carefully thought out assessment of its value, but because
             | it is Cool and they would like to have it on their CV.
        
             | michaelpb wrote:
             | > No one [...] is using Kubernetes to run a simple LAMP
             | stack.
             | 
             | Au contraire! This is very common. Probably some combo of
             | resume-driven devops and "new shiny" excitement.
        
             | lamontcg wrote:
             | I love the way that the Kubernetes debate always
             | immediately devolves into Kubernetes vs. DIY where
             | Kubernetes is the obviously correct answer.
             | 
             | Two groups of people shouting past each other.
        
         | slackfan wrote:
         | "Write it all yourself"
         | 
         | - Install software onto your machines
         | 
         | Package managers, thousands of them.
         | 
         | - Start services
         | 
         | SysVinit, and if shell is too complicated for you, you can
         | write totally not-complicated unit files for SystemD. For most
         | services, they already exist.
         | 
         | - Configure your virtual machines to listen on specific ports
         | 
          | Chef, Puppet, Ansible, and literally hundreds of other
          | configuration tools (see the sketch after this list).
         | 
          | - Have a load balancer directing traffic to and watching
          | the health of those ports
         | 
         | Any commercial load balancer.
         | 
          | - A system to restart processes when they exit
         | 
         | Any good init system will do this.
         | 
          | - Something to take the logs of your systems and ship them
          | to a centralized place so you can analyze them
         | 
         | Syslog has had this functionality for decades.
         | 
         | - A place to store secrets and provide those secrets to your
         | services.
         | 
         | A problem that is unique to kubernetes and serverless. Remember
         | the days of assuming that your box was secure without having to
         | do 10123 layers of abstraction?
         | 
          | - A system to replace outdated services with newer versions
          | (for either security updates or feature updates)
         | 
         | Package managers.
         | 
          | - A system to direct traffic to allow your services to
          | communicate with one another (service discovery)
         | 
         | This is called an internal load balancer.
         | 
         | - A way to add additional instances to a running service and
         | tell the load balancer about them
         | 
          | Most load balancers have built-in processes for these.
         | 
         | - A way to remove instances when they are no longer needed due
         | to decreased load.
         | 
          | Maybe the only thing you need to actively configure, again
          | in your load balancer.
         | 
          | None of this really needs to be written from scratch, and
          | these assumptions come from a very specific type of
          | application architecture which, no matter how hard people
          | try to make it one, is not a one-size-fits-all solution.
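          | 
          | For instance, the first few items are a handful of Ansible
          | tasks (a rough sketch; the package, service and port names
          | are hypothetical):
          | 
          |     # playbook.yml
          |     - hosts: web
          |       become: true
          |       tasks:
          |         - name: Install the app package
          |           apt: {name: myapp, state: present}
          |         - name: Enable and start the service
          |           systemd: {name: myapp, state: started, enabled: true}
          |         - name: Open the service port
          |           ufw: {rule: allow, port: "8080", proto: tcp}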
        
           | jchw wrote:
            | I can set up a Kubernetes cluster, a container registry, a
            | Helm repository, a Helm file and a Dockerfile before you
            | are finished setting up the infrastructure for an Apt
            | repository.
        
             | zdw wrote:
             | My experience is the opposite - an APT repo is just files
             | on disk behind any webserver, a few of them signed.
             | 
             | Setting up all the infra for publishing APT packages (one
             | place to start: https://jenkins-debian-glue.org ) is far
             | easier than trying to understand all the rest of the things
             | you mention.
        
               | jchw wrote:
               | I mean, Kubernetes is just some Go binaries; you can have
               | it up and running in literal seconds by installing a
               | Kubernetes distribution like k3s. This is actually what I
               | do personally on a dedicated server; it's so easy I don't
               | even bother automating it further. Helm is just another
               | Go binary, you can install it on your machine with cURL
               | and it can connect to your cluster and do what it needs
               | from there. The Docker registry can be run inside your
               | cluster, so you can install it with Helm, and it will
               | benefit from all of the Infra as Code that you get from
               | Kubernetes. And finally, the Helm repo is "just files"
               | but it is less complex than Apt.
               | 
                | I've been through the rigmarole for various Linux
                | package managers over the years and I'm sure you could
                | automate a great deal of it, but even if it were as
                | easy as running a bash script (and it's not), setting
                | up Kubernetes covers like half this list whereas
                | setting up an Apt repo covers one item in it.
        
             | mlnj wrote:
             | Exactly, an autoscaling cluster of multiple nodes with
             | everything installed in a declarative way with load
             | balancers and service discovery, all ready in about 10
             | minutes. Wins hands down.
        
             | rantwasp wrote:
             | no. you cannot.
        
             | slackfan wrote:
              | Now make it not brittle and not prone to falling over,
              | without using _hosted_ k8s. ;)
        
               | api wrote:
               | ... but then you could pay a fraction for bare metal
               | cloud hosting instead of paying out the nose for managed
               | K8S at Google or AWS.
               | 
               | Its complexity and fragility are features. It's working
               | as intended.
        
           | mlnj wrote:
            | There was some project where someone wrote all of that
            | (essentially what Kubernetes does) in like 8k lines of
            | bash script. Brilliant, yes. But there is no way I want
            | anything similar in my life.
            | 
            | I am not the biggest fan of the complexity that Kubernetes
            | is, but it solves problems there is no way I want to solve
            | individually and on my own.
        
             | AlexCoventry wrote:
             | I think the point of the blog post in the OP is that it
             | should be a bunch of bash scripts with very few
             | interdependencies, because most of the requirements in the
             | grandparent comment are independent of each other, and
             | tying them all together in a tool like kubernetes is
             | unwieldy.
        
           | rhacker wrote:
            | So instead of knowing about K8s services, ingresses and
            | deployments/pods, I have to learn 15 tools.
            | 
            | Ingresses are not much more complicated than an nginx
            | config, services are literally 5 lines each, and the
            | deployments are roughly as complicated as a 15-line
            | Dockerfile.
        
             | smoldesu wrote:
              | If you're familiar with Linux (which should be
              | considered required reading if you're learning about
              | containers), most of this stuff is handled perfectly
              | fine by the operating system. Sure, you could write it
              | all in k8s and just let the layers of abstraction pile
              | up. Or, most people will be suited perfectly fine by the
              | software that already runs on their box.
        
               | ownagefool wrote:
               | Okay, so let's add a couple of things.
               | 
               | How do you do failover?
               | 
               | Sharing servers to save on costs?
               | 
               | Orchestrate CI/CD pipelines, preferably on the fly?
               | 
               | Infrastructure as Code?
               | 
                | Eventually you reach a point where the abstraction
                | wins. Most people will say "but AWS...", but k8s is
                | quicker, easier to use, and runs on multiple
                | providers, so I personally think it's going to keep
                | doing well.
        
               | eropple wrote:
               | I have been professionally working in the infrastructure
               | space for a decade and in an amateur fashion running
               | Linux servers and services for another decade before that
               | and I am pretty certain that I would screw this up in a
               | threat-to-production way at least once or twice along the
               | way and possibly hit a failure-to-launch on the product
               | itself. I would then have to wrestle with the cognitive
               | load of All That Stuff and by the way? The failure case,
               | from a security perspective, of a moment's inattention
               | has unbounded consequences. (The failure case from a
               | scaling perspective is less so! But still bad.)
               | 
               | And I mean, I don't even like k8s. I typically go for the
               | AWS suite of stuff when building out systems
               | infrastructure. But this assertion is _bonkers_.
        
               | tapoxi wrote:
               | This really depends on how many boxes you have.
        
               | emerongi wrote:
               | I work in a small company, we don't have a sysadmin, so
               | mostly we want to use managed services. Let's say we want
               | a simple load balanced setup with 2 nodes. Our options
               | are:
               | 
               | - Run our own load balancing machine and manage it (as
               | said, we don't want this)
               | 
               | - Use AWS/GCP/Azure, setup Load Balancer (and rest of the
               | project) manually or with
               | Terraform/CloudFormation/whatever scripts
               | 
               | - Use AWS/GCP/Azure and Kubernetes, define Load Balancer
               | in YAML, let K8S and the platform handle all the boring
               | stuff
               | 
               | This is the simplest setup and already I will always go
               | for Kubernetes, as it's the fastest and simplest, as well
               | as the most easily maintainable. I can also easily slap
               | on new services, upgrade stuff, etc. Being able to define
               | the whole architecture in a declarative way, without
               | actually having to manually do the changes, is a huge
               | time-saver. Especially in our case, where we have more
               | projects than developers - switching context from one
               | project to another is much easier. Not to mention that I
               | can just start a development environment with all the
               | needed services using the same (or very similar)
               | manifests, creating a near-prod environment.
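                | 
                | The "define Load Balancer in YAML" part really is
                | about this much (a minimal sketch):
                | 
                |     apiVersion: v1
                |     kind: Service
                |     metadata:
                |       name: web
                |     spec:
                |       type: LoadBalancer  # provider provisions the LB
                |       selector:
                |         app: web
                |       ports:
                |         - port: 80
                |           targetPort: 8080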
        
               | philjohn wrote:
               | I think the argument there is that it's only simple
               | because the complexity of k8s has been taken away. I
               | don't think anybody has claimed deploying to a k8s
               | cluster is overly complex; running it well, handling
               | upgrades, those are huge time sinks that need the
               | requisite expertise.
               | 
               | Much like Multics was "simple" for the users, but not for
               | the sysadmins.
        
               | raffraffraff wrote:
               | Taking the complexity of k8s away was just gonna happen.
                | As someone who built everything from scratch at a
                | previous company, I chose EKS at a start-up because it
                | meant that the one-man systems guy didn't have to
                | worry about building and hosting every single cogwheel
                | required for package repos, OS deployment,
                | configuration management, consul+vault (minimum), and
                | too many other things that k8s does for you. Also, you
                | can send someone on a CKA course and they know how
                | your shit works. Try doing _that_ with the hodge-podge
                | system you built.
        
               | smoldesu wrote:
               | You run a small company, I'd argue that you aren't "the
               | average user". For you, Kubernetes sounds like it
               | integrates pretty well into your environment and covers
               | your blind spots: that's good! That being said, I'm not
               | going to use Kubernetes or even teach other people how to
               | use it. It's certainly not a one-size-fits-all tool,
               | which worries me since it's (incorrectly) marketed as the
               | "sysadmin panacea".
        
               | echelon wrote:
               | > most of this stuff is handled perfectly fine by the
               | operating system
               | 
               | No, you have to write or adopt tools for each of these
               | things. They don't just magically happen.
               | 
               | Then you have to maintain, secure, integrate.
               | 
               | k8s solves a broad class of problems in an elegant way.
               | Since other people have adopted it, it gets patched and
               | improved. And you can easily hire for the skillset.
        
           | mplewis wrote:
           | > Remember the days of assuming that your box was secure
           | without having to do 10123 layers of abstraction?
           | 
           | Yep, I remember when I deployed insecure apps to prod and
           | copied secrets into running instances, too.
        
             | rhacker wrote:
             | Remember how the ops team kept installing Tomcat with the
             | default credentials?
        
             | puffyvolvo wrote:
             | This was the funniest point in that comment to me.
             | 
             | Read the intended way, it's borderline wrong.
             | 
             | Read as "remember when people assumed security without
             | knowing" is basically most of computing the further back in
             | time you go.
        
           | tlrobinson wrote:
           | This is supposed to be an argument _against_ Kubernetes?
        
             | slackfan wrote:
             | Nope, just an argument against the "you must write all of
             | this yourself" line. :)
        
           | ferdowsi wrote:
           | I'm personally glad that Kubernetes has saved me from needing
           | to manage all of this. I'm much more productive as an
           | applications engineer now that I don't have to stare at a
           | mountain of bespoke Ansible/Chef scripts operating on a Rube
           | Goldberg machine of managed services.
        
             | cjalmeida wrote:
             | This x10. Each such setup is a unique snowflake of brittle
             | Ansible/Bash scripts and unit files. Anything slightly
             | different from the initial use case will break.
             | 
              | Not to mention operations. K8s gives you for free things
              | that are a pain to set up otherwise. Want to autoscale
              | your VMs based on load? Trivial in most cloud-managed
              | k8s.
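              | 
              | The pod half of that is one small object (a sketch;
              | older clusters use autoscaling/v2beta2, and node/VM
              | autoscaling is usually just a flag on the managed node
              | pool):
              | 
              |     apiVersion: autoscaling/v2
              |     kind: HorizontalPodAutoscaler
              |     metadata:
              |       name: web
              |     spec:
              |       scaleTargetRef:
              |         apiVersion: apps/v1
              |         kind: Deployment
              |         name: web
              |       minReplicas: 2
              |       maxReplicas: 10
              |       metrics:
              |         - type: Resource
              |           resource:
              |             name: cpu
              |             target:
              |               type: Utilization
              |               averageUtilization: 70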
        
             | zdw wrote:
              | Instead, you can now admin a Rube Goldberg machine of
              | Helm charts, which run a pile of Docker containers which
              | are each their own microcosm of outdated packages and
              | security vulnerabilities.
        
               | kube-system wrote:
               | > Rube Goldberg machine of Helm charts
               | 
               | I love k8s but I do want to say that I hate the
               | 'standard' way that people write general purpose Helm
               | charts. They all try to be super configurable and
               | template everything, but most make assumptions that
               | undermine that idea, and I end up having to dig through
               | them to make changes anyway.
               | 
                | I have found much more success by writing my _own_
                | helm charts for everything I deploy, and putting in
                | exactly the amount of templating that makes sense for
                | me. Much simpler that way. Doing things this way has
                | avoided a Rube Goldberg scenario.
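                | 
                | For example, a chart template that only parameterizes
                | what actually varies (a sketch; the value names are
                | whatever you choose):
                | 
                |     # templates/deployment.yaml
                |     apiVersion: apps/v1
                |     kind: Deployment
                |     metadata:
                |       name: {{ .Release.Name }}
                |     spec:
                |       replicas: {{ .Values.replicas }}
                |       selector:
                |         matchLabels:
                |           app: {{ .Release.Name }}
                |       template:
                |         metadata:
                |           labels:
                |             app: {{ .Release.Name }}
                |         spec:
                |           containers:
                |             - name: app
                |               image: "{{ .Values.image }}"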
        
           | staticassertion wrote:
           | You're making their point for them.
        
           | orf wrote:
           | " For a Linux user, you can already build such a system
           | yourself quite trivially by getting an FTP account, mounting
           | it locally with curlftpfs, and then using SVN or CVS on the
           | mounted filesystem. From Windows or Mac, this FTP account
           | could be accessed through built-in software."
           | 
           | Or... you could not.
           | 
           | https://news.ycombinator.com/item?id=9224
        
             | breakfastduck wrote:
             | That comment has aged brilliantly.
             | 
             | Thanks for that!
        
             | bcrosby95 wrote:
             | So you have a version of Kubernetes that is as easy to use
             | as Dropbox? Where do I sign up for the beta?
        
               | orf wrote:
               | https://aws.amazon.com/eks/
        
             | adamrezich wrote:
             | how is quoting this here relevant? nobody's saying k8s
             | isn't successful or going to be successful--the argument is
             | whether its complexity and layers of abstraction are
             | worthwhile. dropbox is a tool, k8s is infrastructure. the
             | only similarity between this infamous post and the argument
             | here is that existing tools can be used to achieve the same
             | effect as a product. the response here is "that'll never
             | catch on" (because obviously it has), rather it's "as far
             | as infrastructure for your company goes, maybe the
             | additional complexity isn't worth the turnkey solution"
        
           | Thaxll wrote:
            | Have you ever tried to package things as .deb or .rpm?
            | It's a f** nightmare.
           | 
            | > A place to store secrets and provide those secrets to
            | > your services.
            | 
            | "A problem that is unique to kubernetes and serverless.
            | Remember the days of assuming that your box was secure
            | without having to do 10123 layers of abstraction?"
            | 
            | I remember 10 years ago things were not secure, you know,
            | when people baked their credentials into svn for example.
        
             | rantwasp wrote:
              | lol. As someone who has packaged stuff, I can tell you
              | that K8s is orders of magnitude more complicated. Also,
              | once you figure out how to package stuff, you can do it
              | in a repeatable manner - vs K8s, which you basically
              | have to babysit (upgrades/deprecations/node health/etc)
              | forever while paying attention to all developments in
              | the space.
        
             | nitrogen wrote:
             | Checkinstall makes packaging pretty easy for anything you
             | aren't trying to distribute through the official distro
             | channels.
             | 
             | https://help.ubuntu.com/community/CheckInstall
        
         | lrdswrk00 wrote:
         | You have to do those things WITH Kubernetes.
         | 
         | It doesn't configure itself.
         | 
          | I focused on fixing Kubernetes problems at my last job
          | (usually networking). How did that support the business?
          | (Hint: it didn't, so management forced us off Kubernetes.)
          | 
          | No piece of software is a panacea, and shilling for a
          | project that's intended to remind people Google exists is
          | not really putting time into anything useful either.
        
         | [deleted]
        
         | secondcoming wrote:
         | We did all that on AWS, and do it now on GCE. Load balancers,
         | instance groups, scaling policies, rolling updates... it's all
         | automatic. If I wasn't on mobile I'd go into more detail.
         | Config is ansible, jinja, blah blah the usual yaml mess.
        
         | bajsejohannes wrote:
         | > solving a specific problem
         | 
          | The problem to me is that Kubernetes is not solving _a
          | specific problem_, but a whole slew of problems. And some of
          | them it's solving really poorly. For example, you can't
          | really have downtime-free deploys in Kubernetes (the
          | workaround is to set a longish timer after SIGTERM to
          | increase the chance that there's no downtime).
          | 
          | Instead I'd rather solve each problem in a good way. It's
          | not that hard. I'm not implementing it from scratch, but
          | with good tools that exist outside of Kubernetes and
          | _actually solve a specific problem_.
        
           | rhacker wrote:
           | K8s has, like, probably the most complete support for
           | readiness/no-downtime deploys in the whole damn industry,
           | so it's surprising to hear that...
           | 
           | https://cloud.google.com/blog/products/containers-
           | kubernetes...
        
           | ferdowsi wrote:
           | Why can you not have downtime-free deploys? You tell your
           | applications to drain connections and gracefully exit on
           | SIGTERM. https://pkg.go.dev/net/http#Server.Shutdown
           | 
           | If your server is incapable of gracefully exiting, that's not
           | a K8s problem.
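           | 
           | A minimal sketch of what that looks like in Go (assuming
           | a plain net/http server; error handling elided):
           | 
           |   package main
           | 
           |   import (
           |       "context"
           |       "net/http"
           |       "os"
           |       "os/signal"
           |       "syscall"
           |       "time"
           |   )
           | 
           |   func main() {
           |       srv := &http.Server{Addr: ":8080"}
           |       go srv.ListenAndServe() // serve until told to stop
           | 
           |       // Block until Kubernetes sends SIGTERM (the pod is
           |       // being replaced or drained).
           |       stop := make(chan os.Signal, 1)
           |       signal.Notify(stop, syscall.SIGTERM)
           |       <-stop
           | 
           |       // Stop accepting new connections and drain
           |       // in-flight requests, well within the pod's
           |       // termination grace period.
           |       ctx, cancel := context.WithTimeout(
           |           context.Background(), 20*time.Second)
           |       defer cancel()
           |       srv.Shutdown(ctx)
           |   }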
        
             | afrodc_ wrote:
             | > Why can you not have downtime-free deploys? You tell your
             | applications to drain connections and gracefully exit on
             | SIGTERM. https://pkg.go.dev/net/http#Server.Shutdown
             | 
             | > If your server is incapable of gracefully exiting, that's
             | not a K8s problem.
             | 
             | Also whatever load balancer/service mesh you have can be
             | configured for 503 rerouting within DC as necessary too.
        
         | ryanobjc wrote:
         | Recently I had cause to try kubernetes... it has quite the
         | rep, so I gave myself an hour to see if I could get a simple
         | container job running on it.
         | 
         | I used a GCP autopilot k8s cluster... and it was a slam
         | dunk. I got it done in 30 minutes. I would highly recommend
         | it to others! And the cost is totally reasonable!
         | 
         | Running a k8s cluster from scratch is def a bigco thing, but
         | if you're in the cloud then the solutions are awesome. Plus
         | you can always move your workload elsewhere later if
         | necessary.
        
           | mountainriver wrote:
           | This is also my experience, k8s can get harder but for simple
           | stuff it's pretty dang easy
        
         | exdsq wrote:
         | You know, if there's one thing I've learnt from working in
         | Tech, it's never ignore a pooh-pooh. I knew a Tech Lead who got
         | pooh-poohed, made the mistake of ignoring the pooh-pooh. He
         | pooh-poohed it! Fatal error! 'Cos it turned out all along that
         | the developer who pooh-poohed him had been pooh-poohing a lot
         | of other project managers who pooh-poohed their pooh-poohs. In
         | the end, we had to disband the start-up. Morale totally
         | destroyed... by pooh-pooh!
         | 
         | Edit: Seeing as I'm already on negative karma people might not
         | get the reference - https://www.youtube.com/watch?v=QeF1JO7Ki8E
        
           | AnIdiotOnTheNet wrote:
           | What makes you so sure that the downvotes aren't because all
           | you posted was a comedic reference?
        
             | exdsq wrote:
             | I didn't know pooh-pooh was a genuine logical fallacy
             | before now!
        
         | chubot wrote:
         | (author here) The post isn't claiming that you should be doing
         | something differently _right now_.
         | 
         | It's claiming that there's something better that hasn't been
         | discovered yet. Probably 10 years in the future.
         | 
         | I will be really surprised if anyone really thinks that
         | Kubernetes or even AWS is going to be the state of the art
         | in 2031.
         | 
         | (Good recent blog post and line of research I like about
         | compositional cloud programming, from a totally different
         | angle: https://medium.com/riselab/the-state-of-the-serverless-
         | art-7...)
         | 
         | FWIW I worked with Borg for 8 years on many applications (and
         | at Google for over a decade), so this isn't coming from
         | nowhere. The author of the post I quoted worked with it even
         | more: https://news.ycombinator.com/item?id=25243159
         | 
         | I was never an SRE, but I have written and deployed code to
         | every data center at Google, as well as helping dozens of
         | people like data scientists and machine learning researchers
         | use it, etc. It's hard to use.
         | 
         | I gave this post a modest title since I'm not doing anything
         | about this right now, but I'm glad @genericlemon24 gave it some
         | more visibility :)
        
           | doctor_eval wrote:
           | This article really resonated with me. We are starting to run
           | into container orchestration problems but I really don't like
           | what I read about K8s. Apart from anything else, it seems
           | designed for much bigger problems than mine, and requires
           | the kind of huge mental effort to understand which,
           | ironically, will make it harder for my business to grow.
           | 
           | I'm curious if you've taken a look at Nomad and the other
           | HashiCorp tools? They appear focussed and compositional, as
           | you say, and this is why we are probably going to adopt them
           | instead of K8s - they seem to be in a strong position to
           | replace the core of K8s with something simpler.
        
             | carlosf wrote:
             | I use Nomad a lot in my company and I really like it.
             | 
             | Our team tried to migrate to AWS ECS a few times and found
             | it much harder to abstract stuff from devs / create self-
             | service patterns.
             | 
             | That said, it's not a walk in the park. You will need to
             | scratch your head a little bit to set up Consul + Nomad
             | + Vault + a load balancer correctly.
        
         | wyager wrote:
         | Comparing type systems to kubernetes seems like an incredible
         | category error to me. They have essentially nothing in common
         | except they both have something to do with computers. Also,
         | there are plenty of well-designed and beautiful type systems,
         | but k8s is neither of those.
        
         | iammisc wrote:
         | Comparing Kubernetes to type systems is like comparing a shack
         | to a gothic cathedral. Type systems are incredibly stable. They
         | have to be proved both sound and complete via meticulous
         | argumentation. Once proven such, they work and their guarantees
         | exist... no matter what. If you avoid the use of the
         | `unsafe...` functions in languages like Haskell, you can be
         | guaranteed of all the things the type system guarantees for
         | you. In more structured languages like Idris or Coq, there is
         | an absolute guarantee even on termination. This does not break.
         | 
         | Whereas on kubernetes... things break all the time. There is no
         | well-defined semantic model for how the thing works. This is a
         | far cry from something like the calculus of inductive
         | constructions (the basis of Coq), for which there is a
         | well-understood 'spec'. Anyone can implement CIC in their
         | own language if they understand the spec. You cannot say the
         | same for kubernetes.
         | 
         | Kubernetes is a nice bit of engineering. But it does not
         | provide the same guarantees as type systems. In fact, of the
         | four 'complicated' things you mentioned, only one thing has a
         | well-defined semantic model and mathematically provable
         | guarantees behind it. GraphQL is a particular language (and
         | not one based on any great algebra the way SQL is),
         | Kubernetes is
         | just a program, and unit tests are just a technique. None of
         | them are abstract entities with proven, unbreakable guarantees.
         | 
         | Really, comparing Kubernetes to something like System FC or
         | CIC is like comparing Microsoft Word to Stokes' theorem.
         | 
         | The last thing I'll say is that type systems are incredibly
         | easy. There are a few rules to memorize, but they are applied
         | systematically. The same is not true of Kubernetes. Kubernetes
         | breaks constantly. Its abstractions are incredibly leaky. It
         | provides no guarantees other than an 'eventually'. And it is
         | very complicated. There are myriad entities. Myriad operations.
         | Myriad specs, working groups, etc. Type systems are relatively
         | easy. There is a standard format for rules, and some proofs you
         | don't really need to read through if you trust the experts.
        
         | InternetPerson wrote:
         | OK, true ... but if you do all that yourself, then "they" can
         | never fire you, because no one else will know how the damn
         | thing works. (Just be sure not to document anything!)
        
         | sweeneyrod wrote:
         | Unit tests and "type systems" have very little in common with
         | Kubernetes and GraphQL.
        
           | doctor_eval wrote:
           | Also, GraphQL is not complex. You can learn the basics in an
           | hour or so.
        
         | NicoJuicy wrote:
         | I have a Windows server and use dot net.
         | 
         | I press right click - publish, and for prod I have to enter
         | the password.
         | 
         | Collecting logs uses the same mechanism as backups. They go to
         | a cloud provider and are then easy to view by a web app.
         | 
         | Never needed more than this after hours, apart from
         | occasionally upgrading a server instance when too many apps
         | were running on one server.
        
       | ransom1538 wrote:
       | Eh. I think people over complicate k8s in their head. Create a
       | bunch of docker files that let your code run, write a bunch of
       | yaml files on how you want your containers to interact, get an
       | endpoint: the end.
        
         | klysm wrote:
         | I mean people over complicated software engineering in their
         | head. Write a bunch of files of code, write some other files:
         | the end. /s
        
           | ransom1538 wrote:
           | Why the "/s"? -- sounds right.
        
         | sgt wrote:
         | ... That's also a little bit like saying it's super simple to
         | develop an app using framework X because a "TODO list" type of
         | app could be written in 50 loc.
        
       | powera wrote:
       | Eh. Kubernetes is complex, but I think a lot of that is that
       | computing is complex, and Kubernetes doesn't hide it.
       | 
       | Your UNIX system runs many daemons you don't have to care about.
       | Whereas something like lockserver configuration is still a thing
       | you have to care about if you're running Kubernetes.
        
       | etaioinshrdlu wrote:
       | As a Kubernetes outsider, I get confused why so much new jargon
       | had to be introduced. As well as so many little new projects
       | coupled to Kubernetes with varying degrees of interoperability.
       | It makes it hard for newcomers to get a grip on what Kube
       | really is.
       | 
       | It also has all the hallmarks of a high-churn product where you
       | need to piece together your solution from a variety of lower-
       | quality information sources (tutorials, QA sites) rather than a
       | single source of foolproof documentation.
        
         | a_square_peg wrote:
         | I think one part of this is the lack of accepted
         | nomenclature in CS - naming conventions are typically not
         | enforced, unlike an engineering drawing, which has to
         | conform to a standard.
         | 
         | For engineering, the common way is to use a couple of
         | descriptive words + a basic noun, so things do get boring
         | quite quickly but are very easy to understand - say, Google
         | 'Cloud Container Orchestrator' instead of Kubernetes.
        
         | sidlls wrote:
         | > I get confused why so much new jargon had to be introduced.
         | 
         | Consider the source of the project for your answer (mainly, but
         | not entirely, bored engineers who are too arrogant to think
         | anybody has solved their problem before).
         | 
         | > It also has all the hallmarks of a high-churn product where
         | you need to piece together your solution from a variety of
         | lower-quality information sources (tutorials, QA sites) rather
         | than a single source of foolproof documentation.
         | 
         | This describes 99% of open source libraries used. The
         | documentation _looks_ good because auto doc tools produce a
         | prolific amount of boilerplate documentation. In reality the
         | result is documentation that's very shallow, and often just
         | a re-statement of the APIs. The actual usage documentation
         | of these projects is generally terrible, with few
         | exceptions.
        
           | nhooyr wrote:
           | I find it's low quality libraries that tend to have poor
           | documentation. Perhaps that's 99% of open source libraries.
        
           | joshuamorton wrote:
           | > Consider the source of the project for your answer (mainly,
           | but not entirely, bored engineers who are too arrogant to
           | think anybody has solved their problem before).
           | 
           | This seems both wrong and contrary to the article (which
           | mentions that k8s is a descendant of Borg, and in fact if
           | memory serves many of the k8s authors were borg maintainers).
           | So they clearly were aware that people had solved their
           | problem before, because they maintained the tool that had
           | solved the problem for close to a decade.
        
       | commanderjroc wrote:
       | This feels like a post ranting against SystemD written by
       | someone who likes init.
       | 
       | I understand that K8s does many things, but it's also how you
       | look at the problem. K8s does one thing well: manage complex
       | distributed systems, such as knowing when to scale up and
       | down if you so choose and when to start up new pods when they
       | fail.
       | 
       | Arguably, this is one problem that is made up of smaller
       | problems that are solved by smaller services, just like
       | SystemD works.
       | 
       | Sometimes I wonder if the Perlis-Thompson Principle and the
       | Unix Philosophy have become a way to force a legalistic view
       | of software development, or are just outdated.
        
         | throwaway894345 wrote:
         | > I understand that K8s does many things, but it's also how
         | you look at the problem. K8s does one thing well: manage
         | complex distributed systems, such as knowing when to scale
         | up and down if you so choose and when to start up new pods
         | when they fail.
         | 
         | Also, in the sense of "many small components that each do one
         | thing well", k8s is even more Unix-like than Unix in that
         | almost everything in k8s is just a controller for a specific
         | resource type.
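         | 
         | A toy sketch of that control-loop shape (illustrative only;
         | real controllers watch the API server via client-go
         | informers, not a couple of ints):
         | 
         |   package main
         | 
         |   import (
         |       "fmt"
         |       "time"
         |   )
         | 
         |   func main() {
         |       desired := 3 // spec: what you declared
         |       actual := 1  // status: what's actually running
         | 
         |       // Observe, diff, act: every controller repeats this
         |       // until actual state converges on desired state.
         |       for actual != desired {
         |           if actual < desired {
         |               actual++ // "start a pod"
         |           } else {
         |               actual-- // "kill a pod"
         |           }
         |           fmt.Println("reconciled, actual =", actual)
         |           time.Sleep(100 * time.Millisecond)
         |       }
         |   }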
        
         | dolni wrote:
         | I don't find the comparison to systemd to be convincing here.
         | 
         | The end result of systemd for the average administrator is
         | that you no longer need to write finicky init scripts tens
         | or hundreds of lines long. They're reduced to unit files
         | which are often just 10-15 lines. systemd is designed to
         | replace old stuff.
         | 
         | The result of Kubernetes for the average administrator is a
         | massively complex system with its own unique concepts. It needs
         | to be well understood if you want to be able to administrate it
         | effectively. Updates come fast and loose, and updates are going
         | to impact an entire cluster. Kubernetes, unlike systemd, is
         | designed to be built _on top of_ existing technologies you'd be
         | using anyway (cloud provider autoscaling, load balancing,
         | storage). So rather than being like systemd, which adds some
         | complexity and also takes some away, Kubernetes only adds.
        
           | thethethethe wrote:
           | > The end-result of systemd for the average administrator is
           | that you no longer need to write finicky, tens or hundreds of
           | line init scripts.
           | 
           | Wouldn't the hundreds of lines of finicky, bespoke
           | Ansible/Chef/Puppet configs required to manage non-k8s infra
           | be the equivalent to this?
        
           | mst wrote:
           | Right, I _really_ dislike systemd in _many_ ways ... but I
           | love what it enables people to do, and accept that for all
           | my grumpiness about it, it is overall a net win in many
           | scenarios.
           | 
           | k8s ... I think is often overkill in a way that simply
           | doesn't apply to systemd.
        
           | 0xEFF wrote:
           | Kubernetes removes the complexity of keeping a process
           | (service) available.
           | 
           | There's a lot to unpack in that sentence, which is to say
           | there's a lot of complexity it removes.
           | 
           | Agree it does add as well.
           | 
           | I'm not convinced k8s is a net increase in complexity after
           | everything is accounted for. Authentication, authorization,
           | availability, monitoring, logging, deployment tooling, auto
           | scaling, abstracting the underlying infrastructure, etc...
        
             | dolni wrote:
             | > Kubernetes removes the complexity of keeping a process
             | (service) available.
             | 
             | Does it really do that if you just use it to provision
             | an AWS load balancer, which can do health checks and
             | terminate unhealthy instances for you? No.
             | 
             | Sure, you could run some other ingress controller but now
             | you have _yet another_ thing to manage.
        
               | cjalmeida wrote:
               | If that's all you use k8s for, you don't need it.
               | 
               | Myself, I need to set up a bunch of other cloud
               | services for day 2 operations.
               | 
               | And I need to do it consistently across clouds. The kind
               | of clients I serve won't use my product as a SaaS due to
               | regulatory/security reasons.
        
           | throwaway894345 wrote:
           | > So rather than being like systemd, which adds some
           | complexity and also takes some away, Kubernetes only adds.
           | 
           | Here are some bits of complexity that _managed_ Kubernetes
           | takes away:
           | 
           | * SSH configuration
           | 
           | * Key management
           | 
           | * Certificate management (via cert-manager)
           | 
           | * DNS management (via external-dns)
           | 
           | * Auto-scaling
           | 
           | * Process management
           | 
           | * Logging
           | 
           | * Host monitoring
           | 
           | * Infra as code
           | 
           | * Instance profiles
           | 
           | * Reverse proxy
           | 
           | * TLS
           | 
           | * HTTP -> HTTPS redirection
           | 
           | So maybe your point was "the VMs still exist" which is true,
           | but I generally don't care because the work required of me
           | goes away. Alternatively, you have to have most/all of these
           | things anyway, so if you're not using Kubernetes you're
           | cobbling together solutions for these things which has the
           | following implications:
           | 
           | 1. You will not be able to find candidates who know your
           | bespoke solution, whereas you can find people who know
           | Kubernetes.
           | 
           | 2. Training people on your bespoke solution will be harder.
           | You will have to write a lot more documentation whereas there
           | is an abundance of high quality documentation and training
           | material available for Kubernetes.
           | 
           | 3. When something inevitably breaks with your bespoke
           | solution, you're unlikely to get much help Googling around,
           | whereas it's very likely that you'll find what you need to
           | diagnose / fix / work around your Kubernetes problem.
           | 
           | 4. Kubernetes improves at a rapid pace, and you can get those
           | improvements for nearly free. To improve your bespoke
           | solution, you have to take the time to do it all yourself.
           | 
           | 5. You're probably not going to have the financial backing to
           | build your bespoke solution to the same quality caliber that
           | the Kubernetes folks are able to devote (yes, Kubernetes has
           | its problems, but unless you're at a FAANG then your
           | homegrown solution is almost certainly going to be poorer
           | quality if only because management won't give you the
           | resources you need to build it properly).
        
             | dolni wrote:
             | Respectfully, I think you have a lot of ignorance about
             | what a typical cloud provider offers. Let's go through
             | these each step-by-step.
             | 
             | > SSH configuration
             | 
             | Do you mean the configuration for sshd? What special
             | requirements would you have that Kubernetes would help
             | fulfill?
             | 
             | > Key management
             | 
             | Assuming you mean SSH authorized keys since you left this
             | unspecified. AWS does this with EC2 instance connect.
             | 
             | > Certificate management (via cert-manager)
             | 
             | AWS has ACM.
             | 
             | > DNS management (via external-dns)
             | 
             | This is not even a problem if you use AWS cloud
             | primitives.
             | You point Route 53 at a load balancer, which automatically
             | discovers instances from a target group.
             | 
             | > Auto-scaling
             | 
             | AWS already does this via autoscaling.
             | 
             | > Process management
             | 
             | systemd and/or docker do this for you.
             | 
             | > Logging
             | 
             | AWS can send instance logs to CloudWatch. See
             | https://docs.aws.amazon.com/systems-
             | manager/latest/userguide....
             | 
             | > Host monitoring
             | 
             | In what sense? Amazon target groups can monitor the health
             | of a service and automatically replace instances that
             | report unhealthy, time out, or otherwise.
             | 
             | > Infra as code
             | 
             | I mean, you have to have a description somewhere of your
             | pods. It's still "infra as code", just in the form
             | prescribed by Kubernetes.
             | 
             | > Instance profiles
             | 
             | Instance profiles are replaced by secrets, which I'm not
             | sure is better, just different. In either case, if you're
             | following best practices, you need to configure security
             | policies and apply them appropriately.
             | 
             | > Reverse proxy
             | 
             | AWS load balancers and target groups do this for you.
             | 
             | > HTTPS
             | 
             | AWS load balancers, CloudFront, do this for you. ACM issues
             | the certificates.
             | 
             | I won't address the remainder of your post because it seems
             | contingent on the incorrect assumption that all of these
             | are "bespoke solutions" that just have to be completely
             | reinvented if you choose not to use Kubernetes.
        
               | throwaway894345 wrote:
               | > I won't address the remainder of your post because it
               | seems contingent on the incorrect assumption that all of
               | these are "bespoke solutions" that just have to be
               | completely reinvented if you choose not to use
               | Kubernetes.
               | 
               | You fundamentally misunderstood my post. I wasn't arguing
               | that you had to reinvent these components. The "bespoke
               | solution" is the configuration and assembly of these
               | components ("cloud provider primitives" if you like) into
               | a system that suitably replaces Kubernetes for a given
               | organization. _Of course_ you can build your own bespoke
               | alternative--that was the prior state of the world before
               | Kubernetes debuted.
        
       | talideon wrote:
       | ...except people actually use K8S.
        
       | exdsq wrote:
       | Two amazing quotes that really resonate with me:
       | 
       | > The industry is full of engineers who are experts in weirdly
       | named "technologies" (which are really just products and
       | libraries) but have no idea how the actual technologies (e.g.
       | TCP/IP, file systems, memory hierarchy etc.) work. I don't know
       | what to think when I meet engineers who know how to setup an ELB
       | on AWS but don't quite understand what a socket is...
       | 
       | > Look closely at the software landscape. The companies that do
       | well are the ones who rely least on big companies and don't have
       | to spend all their cycles catching up and reimplementing and
       | fixing bugs that crop up only on Windows XP.
        
         | alchemism wrote:
         | Likewise I don't know what to think when I meet frequent flyers
         | who don't know how a jet turbine functions! :)
         | 
         | It is a process of commodification.
        
           | throwawayboise wrote:
           | The people flying the airplane do understand it though. At
           | least they are supposed to. Some recent accidents make one
           | wonder.
        
             | puffyvolvo wrote:
             | most pilots probably don't know how any specific plane's
             | engine works beyond what inputs give what outcomes and a
             | few edge cases. larger aircraft have most of their
             | functions abstracted away, with some models effectively
             | pretending to act like older ones to ship them out
             | faster (commercial pilots have to be certified per plane
             | iirc, so more familiar plane = quicker recertification),
             | which has led to a couple of disasters recently as the
             | 'emulation' isn't exact. this is still a huge net
             | benefit as larger planes are far more complicated than a
             | little cessna and much harder to control with all that
             | momentum, mass, and airflow.
        
             | nonameiguess wrote:
             | Pilots generally do have some level of engineering
             | background, in order to be able to understand possible in-
             | flight issues, but they're not analogous to software
             | engineers. They're analogous to software operators.
             | Software _engineers_ are analogous to aerospace engineers,
             | who absolutely do understand the internals of how turbines
             | work because they're the people who design turbines.
             | 
             | The problem with software development as a discipline is
             | that it's all so new we don't have proper division of
             | labor and professional standards yet. It's like if the
             | people
             | responsible for modeling structural integrity in the
             | foundation of a skyscraper and the people who specialize in
             | creating office furniture were all just called
             | "construction engineers" and expected to have some common
             | body of knowledge. Software systems span many layers and
             | domains that don't all have that much in common with each
             | other, but we all pretend we're speaking the same language
             | to each other anyway.
        
               | albertgoeswoof wrote:
               | Software has been around for longer than aeroplanes
               | 
               | Developers who can only configure AWS are software
               | operators using a product, not software engineers.
               | There's nothing wrong with that but if no one learns to
               | build software, we'll all be stuck funding Mr Bezos and
               | his space trips for a long time.
        
               | throwawaygh wrote:
               | _> Software has been around for longer than aeroplanes_
               | 
               | Huh?
        
               | withinboredom wrote:
               | It doesn't help that most of it is completely abstract
               | and intangible. You can immediately spot the difference
               | between a skyscraper and a chair, but not many can tell
               | the difference between an e2e encrypted chat app and a
               | support chat app. It's an 'app', but they are about as
               | different as a chair and a skyscraper in architecture
               | and systems.
        
             | ampdepolymerase wrote:
             | Yes and no, for a private pilot license you are taught
             | through intuition and diagrams. No Navier Stokes, no
             | Lattice Boltzmann, no CFD. The FAA does not require you to
             | be able to solve boundary condition physics problems to fly
             | an aircraft.
        
             | motives wrote:
             | I think the important point here is that even pilots
             | don't know the full mechanics of a modern jet engine
             | (AFAIK at least; I don't have an ATPL, so I'm not 100%
             | on the syllabus).
             | They may know basics like the Euler turbine equation and be
             | able to run some basic calculations across individual rows
             | of blades, but they most likely will not fully understand
             | the fluid mechanics and thermodynamics involved (and
             | especially not the trade secrets of how the entire blades
             | are grown from single crystals).
             | 
             | This is absolutely fine, and one can draw parallels in
             | software, as a mid-level software engineer working in an
             | AWS-based environment won't generally need to know how
             | to parse TCP packet headers, despite the
             | software/infrastructure they work on requiring them.
        
               | ashtonkem wrote:
               | > and especially not the trade secrets of how the entire
               | blades are grown from single crystals
               | 
               | Wait, what? Are you telling me that jet turbine blades
               | are one single crystal instead of having the usual
               | crystal structure in the metal?
        
               | motives wrote:
               | I'm not a materials guy personally so won't be the best
               | person to explain the exact science behind them, but
               | they're definitely a really impressive bit of
               | engineering. I had a quick browse of this article and it
               | seems to give a pretty good rundown of their history and
               | why their properties are so useful for jet engines
               | https://www.americanscientist.org/article/each-blade-a-
               | singl...
        
               | lostcolony wrote:
               | Well, clearly you're not a pilot. :P
               | 
               | https://www.theengineer.co.uk/rolls-royce-single-crystal-
               | tur...
        
               | tehjoker wrote:
               | They are grown as single metal crystals in order to
               | avoid the weaknesses of grain boundaries. They are
               | very strong!
        
             | ashtonkem wrote:
             | Modern jet pilots certainly know much less about airplane
             | functions than they did in the 1940s, and modern jet travel
             | is much safer than it was even a decade ago.
        
               | lanstin wrote:
               | Software today is more like jets in the 1940s than modern
               | day air travel. Still crashing a lot and learning a lot
               | and amazing people from time to time.
        
             | LinuxBender wrote:
             | Many of them know the checklists for their model of
             | aircraft. The downside of the checklists is that they
             | sometimes explain the "what" and not the "why". They are
             | supposed to be taught the why in their simulator training.
             | Newer aircraft are going even further in that direction of
             | obfuscation to the pilots. I expect future aircraft to even
             | perform automated incident checklist actions. To your
             | point, not everyone follows the checklists when they are
             | having an incident as the FDR often reports.
        
           | Koshkin wrote:
           | Perhaps it is not about a jet engine, but I find this
           | beautiful presentation extremely fascinating:
           | 
           | https://www.faa.gov/regulations_policies/handbooks_manuals/a.
           | ..
        
         | void_mint wrote:
         | > Look closely at the software landscape. The companies that do
         | well are the ones who rely least on big companies and don't
         | have to spend all their cycles catching up and reimplementing
         | and fixing bugs that crop up only on Windows XP.
         | 
         | Can we provide an example that isn't also a big company? I'm
         | not really thinking of big companies that don't either dogfood
         | their own tech or rely on someone bigger to handle things they
         | don't want to (Apple spends 30m a month on AWS, as an
         | example[0]). You could also make the argument that kind of no
         | matter what route you take you're "relying on" some big player
         | in some big space. What OS are the servers in your in-house
         | data center running? Who's the core maintainer of whatever
         | dev frameworks you might subscribe to? (Note: an employee of
         | your company being the core maintainer of a bespoke
         | framework that you developed in house and use is a much
         | worse problem to have than being beholden to AWS ELB, as an
         | example.)
         | 
         | This kinda just sounds like knowledge and progress. We build
         | abstractions on top of technologies so that every person
         | doesn't have to know the nitty gritty of the underlying infra,
         | and can instead focus on orchestrating the abstractions. It's
         | literally all turtles. Is it important, when setting up a MySQL
         | instance, to know how to write a lexer and parser in C++?
         | Obviously not. But lexers and parsers are a big part of MySQL's
         | ability to function, right?
         | 
         | [0]. https://www.cnbc.com/2019/04/22/apple-spends-more-
         | than-30-mi...
        
         | jdub wrote:
         | I hope you and the author realise that sockets _are_ a
         | library. And used to be products! They're not naturally
         | occurring.
        
         | rantwasp wrote:
         | this is bound to happen. the more complicated the stack that
         | you use becomes, the fewer details you understand about the
         | lower levels.
         | 
         | who, today, can write or optimize assembly by hand? How about
         | understand the OS internals? How about write a compiler? How
         | about write a library for their fav language? How about
         | actually troubleshoot a misbehaving *nix process?
         | 
         | All of these were table stakes at some point in time. The key
         | is not to understand all layers perfectly. The key is to know
         | when to stop adding layers.
        
           | exdsq wrote:
           | Totally get your point! But I worry the industry is becoming
           | bloated with people who can glue a few frameworks together
           | building systems we depend on. I wish there was more of a
           | focus on teaching and/or learning fundamentals than
           | frameworks.
           | 
           | Regarding your points, I actually would expect a non-junior
           | developer to be able to write a library in their main
           | language and understand the basics of OS internals (to the
           | point of debugging and profiling, which would include
           | troubleshooting
           | *nix processes). I don't expect them to know assembly or C,
           | or be able to write a compiler (although I did get this for a
           | take-home test just last week).
        
             | rhacker wrote:
             | It's like building a house. Should I have the HVAC guy do
             | the drywall and the drywall guy do the HVAC? Clearly
             | software engineering isn't the same as building a house,
             | but if you have an expert in JAX-WS/SOAP and a feature need
             | to connect to some legacy soap healthcare system... have
             | him do that, and let the guy that knows how to write an MPI
             | write the MPI.
        
               | matwood wrote:
               | This isn't a bad analogy. Like modern houses, software
               | has gotten large, specific, and more complex in the last
               | 30 some odd years.
               | 
               | Some argue it's unnecessary complexity, but I don't think
               | that's correct. Even individuals want more than a
               | basic GeoCities website. Businesses want uptime,
               | security, flashiness, etc... in order to stand out.
        
             | rantwasp wrote:
             | I really like the way you've put it "Glue a few X
             | together".
             | 
             | This is what most software development is becoming. We are
             | no longer building software, we are gluing/integrating
             | prebuilt software components or using services.
             | 
             | You no longer solve fundamental problems unless you have a
             | very special use case or for fun. You mostly have to figure
             | out how to solve higher level problems using off-the-shelf
             | components. It's both good and bad if you ask me
             | (depends on which part of the glass you're looking at).
        
             | fruzz wrote:
             | I think learning the fundamentals is a worthy pursuit, but
             | in terms of getting stuff done well, you realistically only
             | have to grok one level below whatever level of abstraction
             | you're operating at.
             | 
             | Being able to glue frameworks together to build systems is
             | actually not a negative. If you're a startup, you want
             | people to leverage what's already available.
        
               | ironman1478 wrote:
               | IMO the problem with this is when you go from startup ->
               | not a startup you go from creating an MVP to something
               | that works with a certain amount of uptime, has
               | performance requirements, etc. Frameworks will still help
               | you with those things, but if you need to solve a
               | performance issue its gonna be hard to debug if a you
               | don't know how the primitives work.
               | 
               | Let's say you have a network performance issue because
               | the framework you were using was misusing epoll, set
               | some funky options with setsockopt, or turned on
               | Nagle's algorithm. A person can figure it out, but
               | it's gonna be a slog, whereas if they had experience
               | working with the lowest-level tools they could have an
               | intuition about how to debug the issue.
               | 
               | An engineer doesn't have to write everything with the
               | lowest-level primitives all the time, but if they have
               | NEVER done it then IMO that's an issue.
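               | 
               | For instance, Go disables Nagle's algorithm on every
               | TCP connection by default; a framework quietly
               | re-enabling it is exactly the kind of buried knob
               | that's hard to spot if you've never touched raw
               | sockets. An illustrative snippet:
               | 
               |   package main
               | 
               |   import "net"
               | 
               |   func main() {
               |       conn, err := net.Dial("tcp", "example.com:80")
               |       if err != nil {
               |           panic(err)
               |       }
               |       defer conn.Close()
               | 
               |       // Re-enable Nagle: batch small writes into
               |       // fewer packets at the cost of per-write
               |       // latency.
               |       conn.(*net.TCPConn).SetNoDelay(false)
               |   }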
        
               | bsd44 wrote:
               | I agree. An ideal is far from reality.
               | 
               | I like to get deep into low level stuff, but my employer
               | doesn't care if I understand how a system call works or
               | whether we can save x % of y by spending z time on
               | performance profiling that requires good knowledge of
               | Linux debugging and profiling tools. It's quicker,
               | cheaper and more efficient to buy more hardware or scale
               | up in public cloud and let me use my time to work on
               | another project that will result in shipping a product or
               | a service quicker and have direct impact on the business.
               | 
               | My experience with the (startup) business world is that
               | you need to be first to ship a feature or you lose. If
               | you want to do something then you should use the tools
               | that will allow you to get there as fast as possible. And
               | to achieve that it makes sense to use technologies that
               | other companies utilise because it's easy to find support
               | online and easy to find qualified people that can get the
               | job done quickly.
               | 
               | It's a dog-eat-dog world and startups in particular have
               | the pressure to deliver and deliver fast since they can't
               | burn investor money indefinitely; so they pay a lot more
               | than large and established businesses to attract talent.
               | Those companies that develop bespoke solutions and build
               | upon them have a hard time attracting talent because
               | people are afraid they won't be able to change jobs
               | easily and these companies are not willing to pay as much
               | money.
               | 
               | Whether you know how a boot process works or how to
               | optimise your ELK stack to squeeze out every single atom
               | of resource is irrelevant. What's required is to know the
               | tools to complete a job quickly. That creates a divide in
               | the tech world where on one side you have high-salaried
               | people who know how to use these tools but don't really
               | understand what goes on in the background and people who
               | know the nitty-gritty and get paid half as much working
               | at some XYZ company that's been trading since the 90s and
               | is still the same size.
               | 
               | My point is that understanding how something works
               | underneath is extremely valuable and rewarding but isn't
               | required to be good at something else. Nobody knows how
               | Android works, but that doesn't stop you from creating
               | an app that will generate revenue and earn you a
               | living.
               | Isn't the point of constant development of automation
               | tools to make our jobs easier?
               | 
               | EDIT: typo
        
           | tovej wrote:
           | Assembly aside, all the things you mention are things I would
           | expect a software engineer to understand. As an engineer in
           | my late twenties myself, these are exactly the things I am
           | focusing on. I'm not saying I have a particularly deep
           | understanding of these subjects, but I can write a recursive
           | descent parser or a scheduler. I value this knowledge quite
           | highly, since it's applicable in many places.
           | 
           | I think learning AWS/kubernetes/docker/pytorch/whatever
           | framework is buzzing is easy if you understand
           | Linux/networking/neural networks/whatever the underlying
           | less-prone-to-change system is.
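           | 
           | For what it's worth, that kind of fundamental fits in a
           | screenful. A recursive descent parser for single-digit
           | '+'/'*' expressions, one function per grammar rule (a toy
           | Go sketch, no error handling):
           | 
           |   package main
           | 
           |   import "fmt"
           | 
           |   type parser struct {
           |       in  string
           |       pos int
           |   }
           | 
           |   func (p *parser) peek() byte {
           |       if p.pos < len(p.in) {
           |           return p.in[p.pos]
           |       }
           |       return 0
           |   }
           | 
           |   // factor := digit
           |   func (p *parser) factor() int {
           |       c := p.in[p.pos]
           |       p.pos++
           |       return int(c - '0')
           |   }
           | 
           |   // term := factor ('*' factor)*
           |   func (p *parser) term() int {
           |       v := p.factor()
           |       for p.peek() == '*' {
           |           p.pos++
           |           v *= p.factor()
           |       }
           |       return v
           |   }
           | 
           |   // expr := term ('+' term)*
           |   func (p *parser) expr() int {
           |       v := p.term()
           |       for p.peek() == '+' {
           |           p.pos++
           |           v += p.term()
           |       }
           |       return v
           |   }
           | 
           |   func main() {
           |       p := &parser{in: "1+2*3"}
           |       fmt.Println(p.expr()) // prints 7
           |   }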
        
           | ex_amazon_sde wrote:
           | > who ... understand the OS internals? ... How about write a
           | library for their fav language? How about actually
           | troubleshoot a misbehaving *nix process?
           | 
           | Ex-Amazon here. You are describing standard skills required
           | to pass an interview for a SDE 2 at Amazon.
           | 
            | People who know all the popular tools and frameworks of
            | the month but do not understand what an OS does have no
            | place in a team that writes software.
        
             | rantwasp wrote:
             | LoL. Also Ex-Amazon here. I can tell you for a fact that
              | most SDE2s I've worked with had zero clue how the OS
             | works. What you're describing may have been true 5-10 years
             | ago, but I think is no longer true nowadays (what was that?
             | raising the bar they called it). A typical SDE2 interview
             | will not have questions around OS internals in it. Before
             | jumping on your high horse again: I've done around 400
             | interviews during my tenure there and I don't recall ever
             | failing anyone due to this.
             | 
             | Also, gate-keeping is not helpful.
        
             | darksaints wrote:
             | > Ex-Amazon here.
             | 
             | I don't get it. Are we supposed to value your opinion more
             | because you _used to work for Amazon_? Guess what...I also
             | used to work for Amazon and I think your gatekeeping says a
             | lot more about your ego than it does about fundamental
             | software development skills.
        
             | proxyon wrote:
             | Current big tech here (not Amazon) and very few know lower
             | level things like C, systems or OS stuff. Skillsets and
             | specializations are different. Your comment is incredibly
             | false. Even on mobile if someone is for instance a JS
             | engineer they probably don't know Objective-C, Swift,
             | Kotlin or Java any native APIs. And for the guys who do use
             | native mobile, they can't write Javascript to save their
             | lives and are intimidated by it.
        
             | emerongi wrote:
             | Yes they do. There is too much software to be written. A
             | person with adequate knowledge of higher abstractions can
             | produce just fine code.
             | 
             | Yes, if there is a nasty issue that needs to be debugged,
             | understanding the lower layers is super helpful, but even
             | without that knowledge you can figure out what's going on
             | if you have general problem-solving abilities. I certainly
             | have figured out a ton of issues in the internals of tools
             | that I don't know much about.
             | 
             | Get off your high horse.
        
             | leesec wrote:
             | Says one guy. Sorry, there's lots of people who make a
             | living writing software who don't know what an OS does.
             | Gatekeeping helps nobody.
        
             | exdsq wrote:
             | I agree with you, as opposed to the other ex-amazon
             | comments you've had (I had someone reach out to interview
             | me this week if that counts? ;)).
             | 
              | Playing devil's advocate, I guess it depends on what
              | sort of software you're writing. If you're a JS dev
              | then I can see why you might not care about pointers in
              | C. I know for sure as a Haskell/C++ dev I run like the
              | plague from JS errors.
             | 
             | However, I do think that people should have a basic
             | understanding of the entire stack from the OS up. How can
              | you be trusted to choose the right tools for a job if
              | you're only aware of a hammer? How can you debug an
              | issue when you
             | only understand how a spanner works?
             | 
             | I think there's a case for engineering accreditation as we
              | become even more dependent on software _which isn't a
              | CS degree_.
        
           | swiftcoder wrote:
           | > who, today, can write or optimize assembly by hand? How
           | about understand the OS internals? How about write a
           | compiler? How about write a library for their fav language?
           | How about actually troubleshoot a misbehaving *nix process?
           | All of these were table stakes at some point in time.
           | 
            | All of these were _still_ table stakes when I graduated
            | from a small CS program in 2011. I'm still a bit
            | horrified to discover they apparently weren't table
            | stakes at other places.
        
           | abathur wrote:
           | And maybe to learn the smell of a leaking layer?
        
           | lanstin wrote:
           | But the value isn't equal. If you think of the business value
            | implemented in code as the "picture" and the runtime
            | environment provided as the "frame", the frame has gotten
            | much larger and the picture much smaller, as far as what
            | people
           | are spending their time on. (Well, not the golang folks that
            | just push out a systemd unit and a static binary, but the
           | k8s devops experts). I have read entire blogs on k8s and so
           | on where the end result is just "hello world." In the old
           | days, that was the end of the first paragraph. Now a lot of
           | YAML and docker files and so on and so on are needed just to
           | get to that hello world. Unix was successful initially
           | because it was a good portable abstraction to managing
           | hardware resources, compute, storage, memory, and network,
           | over a variety of actual physical implementations. Many many
           | of the problems people are addressing in k8s and running "a
           | variety of containers efficiently on a set of hosts" are
           | similar to problems unix solved in the 80s. I'm not really
           | saying we should go back, Docker is certainly a solution to
           | "depdendency control and process isolation" when you can't
           | have a good static binary that runs a number of identical
           | processes on a host, but the knowledge of what a socket is or
           | how schedulers work is valuable in fixing issues in docker-
           | based systems. (I'm actually more experienced in Mesos/docker
           | rather than k8s/docker but the bugs are from containers
           | spawning too many GC threads or whatever).
           | 
           | If someone is trying to debug that LB and doesn't know what a
            | socket is, or debug latency in apps in the cluster and
            | doesn't know how scheduling and perf engineering tools
            | work, then
           | it's going to be hard for them, and extremely likely that
           | they will just jam 90% solution around 90% solution,
           | enlarging the frame to do more and more, instead of actually
           | fixing things, even if their specific problem was easy to fix
           | and would have had a big pay off.
        
             | coryrc wrote:
             | Kubernetes is complicated because it carries around Unix
             | with it and then duplicates half the things and bolts some
             | new ones on.
             | 
             | Erlang is[0] what you can get when you try to design a
             | coherent solution to the problem from a usability and
             | first-principles sort of idea.
             | 
             | But some combination of Worse is Better, Path Dependence,
             | and randomness (hg vs git) has led us here.
             | 
             | [0] As far as what I've read about its design philosophy.
        
           | ohgodplsno wrote:
           | While I will not pretend to be an expert at either of those,
           | having at least a minimal understanding of all of these is
           | crucial if you want to pretend to be a software engineer. If
           | you can't write a library, or figure out why your process
           | isn't working, you're not an engineer, you're a plumber, or a
           | code monkey. Not to say that's bad, but considering the sheer
           | amount of mediocre devs at FAANG calling themselves
           | engineers, it just really shines a terrible light on our
           | profession.
        
             | rantwasp wrote:
             | you know. deep down inside: we are all code monkeys. Also,
             | as much as people like to call it software engineering,
             | it's anything but engineering.
             | 
             | In 95% of cases if you want to get something/anything done
             | you will need to work at an abstraction layer where a lot
             | of things have been decided already for you and you are
             | just gluing them together. It's not good or bad. It is what
             | it is.
        
             | puffyvolvo wrote:
              | abstraction layers exist for this reason. as much of a
             | sham as the 7-layer networking model is, it's the reason
             | you can spin up an http server without knowing tcp
              | internals, and you can write a webapp without caring
              | (much) about whether it's being served over https,
              | http/2, or SPDY.
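              | 
              | case in point, a complete http server with zero socket
              | code (Go sketch):
              | 
              |   package main
              | 
              |   import (
              |       "fmt"
              |       "net/http"
              |   )
              | 
              |   func main() {
              |       // no sockets, no handshakes, no congestion
              |       // control in sight -- the transport layer is
              |       // entirely abstracted away.
              |       http.HandleFunc("/", func(w http.ResponseWriter,
              |           r *http.Request) {
              |           fmt.Fprintln(w, "hello")
              |       })
              |       http.ListenAndServe(":8080", nil)
              |   }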
        
               | jimbokun wrote:
               | To be an engineer, you need the ability to dive deeper
               | into these abstractions when necessary, while most of the
               | time you can just not think about them.
               | 
               | Quickly getting up to speed on something you don't know
               | yet is probably the single most critical skill to be a
               | good engineer.
        
               | exdsq wrote:
                | But this _does_ matter to web developers! For
                | example, http/2 lets you request multiple files at
                | once and supports server push. If you don't know this
                | you might not implement it and end up with subpar
                | performance. http/3 is going to be built on UDP-based
                | QUIC and won't even support http://, will need an
                | `Alt-Svc:` header, and removes the http/2
                | prioritisation stuff.
               | 
               | God knows how a UDP-based http is going to work but these
               | are considerations a 'Software Engineer' who works on web
               | systems should think about.
        
               | cjalmeida wrote:
               | Err, no. Look at most startups and tell me how many of
               | them care if they're serving optimized content over
               | HTTP/2?
        
               | kaba0 wrote:
               | Someone writing the framework should absolutely be
               | intimately familiar with it, and should work on making
               | these new capabilities easy to use from a higher level
               | where your typical web dev can make use of it without
               | much thought, if any.
        
               | ohgodplsno wrote:
                | While I wouldn't judge someone for not knowing anything
                | about layer 1 or 2, knowing something about MTUs, traffic
                | congestion, and routing is something that should be
                | taught at any basic level of CS school. Not caring if
                | it's served over http2? Why the hell would you? Write
                | your software to take advantage of the platform it's on,
                | and the stack beneath it. The simple fact of using http2
                | might change your asset organisation from one fat file
                | served from a CDN into many that load in parallel, and
                | quicker. By not caring about this, you just... waste it
                | all to make yet another shitty-performing webapp. In the
                | same way, I don't ask you to know the TCP protocol by
                | heart, but knowing just the basics means you can open up
                | wireshark and debug things.
                | 
                | Once again: if you don't know your stack, you're just
                | wasting performance everywhere, and you're just a code
                | plumber.
        
               | puffyvolvo wrote:
               | > knowing something about MTUs
               | 
               | isn't that why MTU discovery exists?
               | 
               | > Write your software to take advantage of the platform
               | it's on, and the stack beneath it
               | 
                | sure, but those bits are usually abstracted away still.
                | otherwise cross-compatibility or migrating to a
                | different stack becomes a massive pain.
               | 
               | > The simple fact of using http2 might change your
               | organisation from one fat file served from a CDN, into
               | many that load in parallel and quicker.
               | 
                | others have pointed out things like h2 push
                | specifically; that was kind of what i meant with the
                | "(much)" in my original comment. Even then, with
                | something like nginx supporting server push on its end,
                | whatever it's fronting could effectively be http/2
                | unaware and still reap some of the benefits. I imagine
                | it won't be long before there are smarter methods to
                | transparently support this stuff.
        
               | dweekly wrote:
               | All true. The problems start getting gnarly when
               | Something goes Wrong in the magic black box powering your
               | service. That neat framework that made it trivial to spin
               | up an HTTP/2 endpoint is emitting headers that your CDN
               | doesn't like and now suddenly you're 14 stack layers deep
               | in a new codebase written in a language that may not be
               | your forte...
        
               | lanstin wrote:
               | I would make a big distinction between 'without knowing'
               | and "without worrying about." Software productivity is
               | directly proportional to the amount of the system you can
               | ignore while you are writing the code at hand. But not
                | knowing how stuff works makes you less of an engineer and
                | more of an artist. Cause and effect and reason are key
                | tools, and not knowing about the TCP handshake or windows
                | just makes it difficult to figure out how to answer
               | fundamental questions about how your code works. It means
               | things will be forever mysterious to you, or interesting
               | in the sense of biology where you gather a lot of data
               | rather than mathematics where pure thought can give you
               | immense power.
        
             | [deleted]
        
           | codethief wrote:
           | This reminds me of Jonathan Blow's excellent talk on
           | "Preventing the Collapse of Civilization":
           | 
           | https://www.youtube.com/watch?v=ZSRHeXYDLko
        
           | ModernMech wrote:
           | > who, today, can write or optimize assembly by hand? How
           | about understand the OS internals? How about write a
           | compiler? How about write a library for their fav language?
           | How about actually troubleshoot a misbehaving *nix process?
           | 
           | Any one of the undergraduates who take the systems sequence
           | at my University should be able to do all of this. At least
           | the ones who earn an A!
        
           | shpongled wrote:
           | disclaimer: I don't mean this to come across as arrogant or
           | anything (I'm just ignorant).
           | 
           | I'm totally self-taught and have never worked a programming
           | job (only programmed for fun). Do professional SWEs not
           | actually understand or have the capability to do these
           | things? I've hacked on hobby operating systems, written
           | assembly, worked on a toy compiler and written libraries... I
           | just kind of assumed that was all par for the course
        
             | mathgladiator wrote:
             | The challenge is that lower level work doesn't always
             | translate into value for businesses. For instance,
             | knowledge of sockets is very interesting. On one hand, I
             | spent my youth learning sockets. For me to bang out a new
             | network protocol takes a few weeks. For others, it can take
             | months.
             | 
              | This manifested in my frustration when I led the building
              | of a new transport layer using just sockets. While the
              | people working with me were smart, they had limited low
              | level experience to debug things.
        
               | shpongled wrote:
               | I understand that that stuff is all relatively niche/not
               | necessarily useful in every day life (I know nothing
               | about sockets or TCP/IP) - I just figured your average
               | SWE would at least be familiar with the concepts,
               | especially if they had formal training. Guess it just
               | comes down to individual interests
        
             | kanzenryu2 wrote:
             | It's extremely common. And many of them are fairly
             | productive until an awkward bug shows up.
        
             | rantwasp wrote:
             | I think you may have missed the point (as probably a lot of
             | people did) I was trying to make. It's one thing to know
             | what assembly is and to even be able to dabble in a bit of
             | assembly, it's another thing to be proficient in assembly
             | for a specific CPU/instruction set. It's orders of
             | magnitude harder to be proficient and/or actually write
             | tooling for it vs understanding what a MOV instruction does
             | or to conceptually get what CPU registers are.
             | 
              | Professional SWEs are professional in the sense that they
              | know what needs to happen to get the job done (but I am
              | not surprised when someone else does not get or know
              | something that I consider "fundamental").
        
             | woleium wrote:
             | yes, some intermediate devs I've worked with are unable to
             | do almost anything except write code. e.g. unable to
             | generate an ssh key without assistance or detailed cut and
             | paste instructions.
        
               | handrous wrote:
               | Shit, I google or manpage or tealdeer ssh key generation
               | every single time....
               | 
               | Pretty much any command I don't run several times a
               | month, I look up. Unless ctrl+r finds it in my history.
        
               | shpongled wrote:
               | Maybe I should apply for some senior dev roles then :)
        
               | kanzenryu2 wrote:
               | Many/most senior devs do not have the experience you
               | described. But there are often a lot of meetings,
               | reports, and managing other devs.
        
               | jimbokun wrote:
               | Yes, you absolutely should, unless you are already making
               | a ton of money in a more fulfilling job.
        
           | chubot wrote:
           | (author here) The key difference is that a C compiler is a
           | pretty damn good abstraction (and yes Rust is even better
           | without the undefined behavior).
           | 
           | I have written C and C++ for decades, deployed it in
           | production, and barely ever looked at assembly language.
           | 
           | Kubernetes isn't a good abstraction for what's going on
           | underneath. The blog post linked to direct evidence of that
           | which is too long to recap here; I worked with Borg for
           | years, etc.
        
             | rantwasp wrote:
              | K8s may have its time and place but here is something most
              | people are ignoring: 80% of the time you don't need it.
              | You don't need all that complexity. You're not Google, you
             | don't have the scale or the problems Google has. You also
             | don't have the discipline AND the tooling Google has to
             | make something like this work (cough cough Borg).
        
           | xorcist wrote:
           | I honestly can't tell if this is sarcasm or not.
           | 
           | Which says a lot about the situation we find ourselves in, I
           | guess.
        
             | rantwasp wrote:
             | It's not sarcasm. A lot of things simply do not have
             | visibility and are not rewarded at the business level -
             | therefore the incentives to learn them are almost zero
        
           | 908B64B197 wrote:
           | > How about understand the OS internals? How about write a
           | compiler? How about write a library for their fav language?
           | How about actually troubleshoot a misbehaving *nix process?
           | 
           | That's what I expect from someone who graduated from a
           | serious CS/Engineering program.
        
             | rantwasp wrote:
              | you're mixing having an idea of how the OS works (ie:
              | conceptual/high level) with having working knowledge and
              | being able to hack into the OS when needed. I know this may
              | sound like moving the goal posts, but it really does not
              | help me that I know conceptually that there is a file
              | system if I don't work with it directly and/or know how to
              | debug issues that arise from it.
        
               | throwawaygh wrote:
               | _> having working knowledge and being able to hack into
               | the OS when needed._
               | 
               | I'm going to parrot the GP: "That's what I expect from
               | someone who graduated from a serious CS/Engineering
               | program."
               | 
               | I know there are a lot of _really bad_ CS programs in the
               | US, but some experience implementing OS components in a
               | System course so that they can  "hack into the OS when
               | needed" is exactly what I would expect out of a graduate
               | from a good CS program.
        
           | jimbokun wrote:
           | > who, today, can write or optimize assembly by hand? How
           | about understand the OS internals? How about write a
           | compiler? How about write a library for their fav language?
           | How about actually troubleshoot a misbehaving *nix process?
           | 
           | But developers should understand what assembly is and what a
           | compiler does. Writing a library for a language you know
           | should be a common development task. How else are you going
           | to reuse a chunk of code needed for multiple projects?
           | 
            | You certainly also need a basic understanding of Unix
            | processes to be a competent developer, I would think.
        
             | rantwasp wrote:
             | there is a huge difference between understanding what
             | something is and actually working with it / being
             | proficient with it. huge.
             | 
              | I understand how a car engine works. I could actually
              | explain it to someone who does not know what is under the
              | hood. Does that make me a car mechanic? Hell no. If my car
              | breaks down I go to the dealership and have them fix it
              | for me.
             | 
             | My car/car engine is ASM/OS Internals/writing a
             | compiler/etc.
        
       | aneutron wrote:
       | Okay, right off the bat, the author is already giving himself
       | answers:
       | 
       | > Essentially, this means that it [k8s] will have fewer concepts
       | and be more compositional.
       | 
        | Well, that's already the case! At its base, k8s is literally a
        | while loop that converges resources to wanted states.
        | 
        | You CAN strip it down to your liking. However, as it is usually
        | distributed, it would be useless to ship it with nothing but
        | the scheduler and the API...
       | 
       | I do get the author's point. At a certain point it becomes
       | bloated. But I find that when used correctly, it is adequately
       | complex for the problems it solves.
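        | 
        | To make that "while loop" concrete, here is a minimal sketch of
        | the kind of desired state you hand it (the image name and
        | replica count are placeholder assumptions), which the deployment
        | controller then continuously converges the cluster towards:
        | 
        |     # app.yaml -- apply with: kubectl apply -f app.yaml
        |     apiVersion: apps/v1
        |     kind: Deployment
        |     metadata:
        |       name: web
        |     spec:
        |       replicas: 3            # wanted state: three pods
        |       selector:
        |         matchLabels:
        |           app: web
        |       template:
        |         metadata:
        |           labels:
        |             app: web
        |         spec:
        |           containers:
        |           - name: web
        |             image: nginx:1.21    # placeholder image
        | 
        | Kill a pod and the loop notices that the actual state (two pods)
        | no longer matches the wanted state (three) and starts a
        | replacement.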
        
       | christophilus wrote:
       | Kubernetes gets a lot of shade, and rightfully so. It's a tough
       | problem. I do hope we get a Ken Thompson or Rich Hickey-esque
       | solution at some point.
        
         | throwaway894345 wrote:
         | Having used Kubernetes for a while, I'm of the opinion that
          | it's not so much complex as it is foreign, and when we learn
          | Kubernetes we're confronted with a bunch of new concepts all at
          | once, even though each of the concepts is pretty simple. For
         | example, people are used to Ansible or Terraform managing their
         | changes, and the "controllers continuously reconciling" takes a
         | bit to wrap one's head around.
         | 
         | And then there are all of the different kinds of resources and
         | the general UX problem of managing errors ("I created an
         | ingress but I can't talk to my service" is a kind of error that
         | requires experience to understand how to debug because the UX
         | is so bad, similarly all of the different pod state errors).
         | It's not fundamentally complex, however.
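          | 
          | To illustrate the UX problem: a minimal Ingress (the service
          | name and port below are invented for the example) just points
          | at a Service by name:
          | 
          |     apiVersion: networking.k8s.io/v1
          |     kind: Ingress
          |     metadata:
          |       name: web
          |     spec:
          |       rules:
          |       - http:
          |           paths:
          |           - path: /
          |             pathType: Prefix
          |             backend:
          |               service:
          |                 name: web      # must match an existing Service
          |                 port:
          |                   number: 80   # and one of its ports
          | 
          | If the name or port doesn't match anything, the apply still
          | succeeds; traffic just never arrives, which is exactly the
          | "created an ingress but can't talk to my service" experience.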
         | 
         | The bits that are legitimately complex seem to involve setting
         | up a Kubernetes distribution (configuring an ingress
         | controller, load balancer provider, persistent volume
         | providers, etc) which are mostly taken care of for you by your
         | cloud provider. I also think this complexity will be resolved
         | with open source distributions (think "Linux distributions",
         | but for Kubernetes)--we already have some of these but they're
         | half-baked at this point (e.g., k3s has local storage providers
         | but that's not a serious persistence solution). I can imagine a
         | world where a distribution comes with out-of-the-box support
         | for not only the low level stuff (load balancers, ingress
         | controllers, persistence, etc) but also higher level stuff like
         | auto-rotating certs and DNS. I think this will come in a few
         | years but it will take a while for it to be fleshed out.
         | 
         | Beyond that, a lot of the apparent "complexity" is just
         | ecosystem churn--we have this new way of doing things and it
         | empowers a lot of new patterns and practices and technologies
         | and the industry needs time and experience to sort out what
         | works and what doesn't work.
         | 
         | To the extent I think this could be simplified, I think it will
         | mostly be shoring up conventions, building "distributions" that
         | come with the right things and encourage the right practices. I
         | think in time when we have to worry less about packaging legacy
         | monolith applications, we might be able to move away from
         | containers and toward something more like unikernels (you don't
         | need to ship a whole userland with every application now that
         | we're starting to write applications that don't assume they're
         | deployed onto a particular Linux distribution). But for now
         | Kubernetes is the bridge between old school monoliths (and
         | importantly, the culture, practices, and org model for building
         | and operating these monoliths) and the new devops /
         | microservices / etc world.
        
         | encryptluks2 wrote:
         | I think a large part of the problem is that systems like
         | Kubernetes are designed to be extensible with a plugin
         | architecture in mind. Simple applications usually have one way
         | of doing things but they are really good at it.
         | 
          | This raises the question of whether there is a wrong or right
          | way of doing things, and whether a single system can adapt
          | fast enough to the rapidly changing underlying strategies,
          | protocols, and languages to always be at the forefront of what
          | is considered best practice at all levels of development and
          | deployment.
          | 
          | These unified approaches usually manifest themselves as each
          | cloud provider's best-practice playbooks, but each public
          | cloud is different. Unless something like Kubernetes can build
          | a unified approach across all cloud providers and self-hosting
          | solutions, it will always be overly complex, because it will
          | always be changing for each provider to maximize their
          | interests in adding their unique services.
        
         | cogman10 wrote:
         | I see the shade thrown at k8s... but honestly I don't know how
         | much of it is truly deserved.
         | 
         | k8s is complex not unnecessarily, but because k8s is solving a
         | large host of problems. It isn't JUST solving the problem of
         | "what should be running where". It's solving problems like "how
         | many instances should be where? How do I know what is good and
         | what isn't? How do I route from instance A to instance b? How
         | do I flag when a problem happens? How do I fix problems when
         | they happen? How do I provide access to a shared resource or
         | filesystem?"
         | 
         | It's doing a whole host of things that are often ignored by
         | shade throwers.
         | 
         | I'm open to any solution that's actually simpler, but I'll bet
         | you that by the time you've reached feature parity, you end up
         | with the same complex mess.
         | 
         | The main critique I'd throw at k8s isn't that it's complex,
         | it's that there are too many options to do the same thing.
        
           | [deleted]
        
           | novok wrote:
            | K8s is the semi truck of software: great for semi-scale
            | things, but often used when a van would do just fine.
        
             | cogman10 wrote:
             | To me, usefulness is less to do with scale and more to do
             | with number of distinct services.
             | 
             | If you have just a single monolith app (such as a wordpress
             | app) then sure, k8s is overkill. Even if you have 1000
             | instances of that app.
             | 
             | It's once you start having something like 20+ distinct
             | services that k8s starts paying for itself.
        
               | lanstin wrote:
               | Especially with 10 distinct development teams that all
               | have someone smart enough to crank out some YAML with
               | their specific requirements.
        
           | giantrobot wrote:
            | I think part of the shade throwing is k8s has a high lower
            | bound of scale/complexity "entry fee" where it actually makes
            | sense. If your scale/complexity envelope is below that lower
           | bound, you're fighting k8s, wasting time, or wasting
           | resources.
           | 
           | Unfortunately unless you've got a lot of k8s experience that
           | scale/complexity lower bound isn't super obvious. It's also
           | possible to have your scale/complexity accelerate from "k8s
           | isn't worthwhile" to "oh shit get me some k8s" pretty quickly
           | without obvious signs. That just compounds the TMTOWTDI
           | choice paralysis problems.
           | 
           | So you get people that choose k8s when it doesn't make sense
           | and have a bad time and then throw shade. They didn't know
           | ahead of time it wouldn't make sense and only learned through
           | the experience. There's a lot of projects like k8s that don't
           | advertise their sharp edges or entry fee very well.
        
             | throwaway894345 wrote:
             | > I think part of the shade throwing is k8s has a high
             | lower bound of scale/complexity "entry fee" where is
             | actually makes sense. If your scale/complexity envelope is
             | below that lower bound, you're fighting k8s, wasting time,
             | or wasting resources.
             | 
             | Maybe compared to Heroku or similar, but compared to a
             | world where you're managing more than a couple of VMs I
             | think Kubernetes becomes compelling quickly. Specifically,
             | when people think about VMs they seem to forget all of the
             | stuff that goes into getting VMs working which largely
             | comes with _cloud-provider managed_ Kubernetes (especially
             | if you install a couple of handy operators like cert-
             | manager and external-dns): instance profiles, AMIs, auto-
             | scaling groups, key management, cert management, DNS
             | records, init scripts, infra as code, ssh configuration,
             | log exfiltration, monitoring, process management, etc. And
              | then there's training new employees to understand your
             | bespoke system versus hiring employees who know Kubernetes
             | or training them with the ample training material.
             | Similarly, when you have a problem with your bespoke
             | system, how much work will it be to Google it versus a
             | standard Kubernetes error?
             | 
             | Also, Kubernetes is _really new_ and it is getting better
              | at a rapid pace, so when you're making the "Kubernetes vs
             | X" calculation, consider the trend: where will each
             | technology be in a few years. Consider how little work you
             | would have to do to get the benefits from Kubernetes vs
             | building those improvements yourself on your bespoke
             | system.
        
               | lanstin wrote:
               | Honestly, the non-k8s cloud software is also getting
               | excellent. When I have a new app that I can't
               | containerize (network proxies mostly) I can modify my
               | standard terraform pretty quickly and get multi-AZ,
               | customized AMIs, per-app user-data.sh, restart on
               | failures, etc. with private certs and our suite of
               | required IPS daemons, etc. It's way better than pre-cloud
               | things. K8s seems also good for larger scale and where
               | you have a bunch of PD teams wanting to deploy stuff with
               | people that can generate all the YAML/annotations etc. If
               | your deploy #s scale with the number of people that can
               | do it, then k8s works awesomely. If you have just 1
               | person doing a bunch of stuff, simpler things can let
               | that 1 person manage and create a lot of compute in the
               | cloud.
        
           | dolni wrote:
           | > how many instances should be where?
           | 
           | Are you referring to instances of your application, or EC2
           | instances? If instances of your application, in my experience
           | it doesn't really do much for you unless you are willing to
            | waste compute resources. It takes a lot of dialing in to
           | effectively colocate multiple pods and maximize your resource
           | utilization. If you're referring to EC2 instances, well AWS
           | autoscaling does that for you.
           | 
           | Amazon and other cloud providers have the advantage of years
           | of tuning their virtual machine deployment strategies to
           | provide maximum insulation from disruptive neighbors. If you
           | are running your own Kubernetes installation, you have to
           | figure it out yourself.
           | 
           | > How do I know what is good and what isn't?
           | 
           | Autoscaling w/ a load balancer does this trivially with a
           | health check, and it's also self-healing.
           | 
           | > How do I route from instance A to instance b?
           | 
           | You don't have to know or care about this if you're in a
           | simple VPC. If you are in multiple VPCs or a more complex
           | single VPC setup, you have to figure it out anyway because
           | Kubernetes isn't magic.
           | 
           | > How do I flag when a problem happens?
           | 
           | Probably a dedicated service that does some monitoring, which
           | as far as I know is still standard practice for the industry.
           | Kubernetes doesn't make that go away.
           | 
           | > How do I fix problems when they happen?
           | 
           | This is such a generic question that I'm not sure how you
           | felt it could be included. Kubernetes isn't magic, your stuff
           | doesn't always just magically work because Kubernetes is
           | running underneath it.
           | 
           | > How do I provide access to a shared resource or filesystem?
           | 
           | Amazon EFS is one way. It works fine. Ideally you are not
           | using EFS and prefer something like S3, if that meets your
           | needs.
           | 
           | > It's doing a whole host of things that are often ignored by
           | shade throwers.
           | 
            | I don't think they're ignored; I think you assume they are
            | because those things aren't talked about. They aren't talked
            | about because they aren't an issue with Kubernetes.
           | 
           | The problem with Kubernetes is that it is a massively complex
           | system that needs to be understood by its administrators. The
           | problem it solves overlaps nearly entirely with existing
           | solutions that it depends on. And it introduces its own set
           | of issues via complexity and the breakneck pace of
           | development.
           | 
           | You don't get to just ignore the underlying cloud provider
           | technology that Kubernetes is interfacing with just because
           | it abstracts those away. You have to be able to diagnose and
           | respond to cloud provider issues _in addition_ to those that
           | might be Kubernetes-centric.
           | 
           | So yes, Kubernetes does solve some problems. Do the problems
           | it solves outweigh the problems it introduces? I am not sure
            | about that. My experience with Kubernetes is limited to
           | troubleshooting issues with Kubernetes ~1.6, which we got rid
           | of because we regularly ran into annoying problems. Things
           | like:
           | 
           | * We scaled up and then back down, and now there are multiple
           | nodes running 1 pod and wasting most of their compute
           | resources.
           | 
           | * Kubernetes would try to add routes to a route table that
           | was full, and attempts to route traffic to new pods would
           | fail.
           | 
           | * The local disk of a node would fill up because of one bad
           | actor and impact multiple services.
           | 
           | At my workplace, we build AMIs that bake-in their Docker
           | image and run the Docker container when the instance
           | launches. There are some additional things we had to take on
           | because of that, but the total complexity is far less than
           | what Kubernetes brings. Additionally, we have the side
           | benefit of being insulated from Docker Hub outages.
        
       | jaaron wrote:
       | I feel like we've already seen some alternatives and the
       | industry, thus far, is still orienting towards k8s.
       | 
       | Hashicorp's stack, using Nomad as an orchestrator, is much
       | simpler and more composable.
       | 
       | I've long been a fan of Mesos' architecture, which I also think
       | is more composable than the k8s stack.
       | 
        | I just find it surprising that an article calling for an
        | evolution of the cluster management architecture fails to
        | investigate the existing alternatives and why they haven't
        | caught on.
        
         | verdverm wrote:
          | We had someone explore K8s vs Nomad and they chose K8s because
          | the Nomad docs are bad. They got much further with K8s in the
          | same timeboxed spike.
        
           | bradstewart wrote:
           | Setting up the right parameters/eval criteria to exercise
           | inside of a few week timebox (I'm assuming this wasn't a many
           | month task) is extremely difficult to do for a complex system
           | like this. At least, to me it is--maybe more ops focused
           | folks can do it quicker.
           | 
           | Getting _something_ up and running quickly isn't necessarily
           | a good indicator of how well a set of tools will work for you
           | over time, in production work loads.
        
             | verdverm wrote:
              | It was more about migrating the existing microservices,
              | which run in Docker Compose today, than some example app.
              | Getting the respective platforms up was not the issue. I
              | don't think weeks were spent, but they were able to
              | migrate a complex application to K8s in less than a week.
              | Couldn't get it running in Nomad, which was tried first
              | due to its supposed simplicity over K8s.
        
           | mlnj wrote:
            | For me, when exploring K8s vs Nomad, Nomad looked like the
            | clear choice. That was until I had to get Nomad + Consul
            | running. I found it all really difficult to get running in a
            | satisfactory manner. I never even touched the whole Vault
            | part of the setup because it was all overwhelming.
            | 
            | On the other side, K8s was a steep learning curve with lots
            | of options and 'terms' to learn, but there never was a point
            | in the whole exploration where I was stuck. The docs are
            | great, the community is great, and the number of examples
            | available allows us to mix and match lots of different
            | approaches.
        
       | overgard wrote:
       | Is it really that complex compared to an operating system like
       | Unix though? I mean there's nothing simple about Unix. To me the
       | question is, is it solving a problem that people have in a
       | reasonably simple way? And it seems like it definitely does. I
       | think the hate comes from people using it where it's not
       | appropriate, but then, just don't use it in the wrong place, like
       | anything of this nature.
       | 
       | And honestly its complexity is way overblown. There's like 10
       | important concepts and most of what you do is run "kubectl apply
       | -f somefile.yaml". I mean, services are DNS entries, deployments
       | are a collection of pods, pods are a self contained server. None
       | of these things are hard?
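        | 
        | For instance, a Service is about this much YAML (the name and
        | ports are placeholders), and every pod in the same namespace can
        | then reach it at the DNS name `web`:
        | 
        |     apiVersion: v1
        |     kind: Service
        |     metadata:
        |       name: web              # becomes the DNS entry
        |     spec:
        |       selector:
        |         app: web             # pods with this label back it
        |       ports:
        |       - port: 80
        |         targetPort: 8080     # the container's port, assumed here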
        
       | dekhn wrote:
        | I have borg experience and my experience with k8s was extremely
        | negative. Most of my time was spent diagnosing self-inflicted
        | problems created by the k8s framework.
       | 
       | I've been trying nomad lately and it's a bit more direct.
        
         | jrockway wrote:
         | I have borg experience and I think Kubernetes is great. Before
         | borg, I would basically never touch production -- I would let
         | someone else handle all that because it was always a pain. When
         | I left Google, I had to start releasing software (because every
         | other developer is also in that "let someone else handle it"
         | mindset), and Kubernetes removed a lot of the pain. Write a
         | manifest. Change the version. Apply. Your new shit is running.
         | If it crashes, traffic is still directed to the working
         | replicas. Everyone on my team can release their code to any
         | environment with a single click. Nobody has ever ssh'd to
         | production. It just works.
         | 
         | I do understand people's complaints, however.
         | 
         | Setting up "the rest" of the system involves making a lot of
         | decisions. Observability requires application support, and you
         | have to set up the infrastructure yourself. People generally
         | aren't willing to do that, and so are upset when their favorite
          | application doesn't work with their favorite observability
          | stack. (I
         | remember being upset that my traces didn't propagate from Envoy
         | to Grafana, because Envoy uses the Zipkin propagation protocol
         | and Grafana uses Jaeger. However, Grafana is open source and I
         | just added that feature. Took about 15 minutes and they
         | released it a few days later, so... the option is available to
         | people that demand perfection.)
         | 
         | Auth is another issue that has been punted on. Maybe your cloud
         | provider has something. Maybe you bought something. Maybe the
         | app you want to run supports OIDC. To me, the dream of the
         | container world is that applications don't have to focus on
         | these things -- there is just persistent authentication
         | intrinsic to the environment, and your app can collect signals
         | and make a decision if absolutely necessary. But that's not the
         | way it worked out -- BeyondCorp style authentication proxies
         | lost to OIDC. So if you write an application, your team will be
         | spending the first month wiring that in, and the second month
         | documenting all the quirks with Okta, Auth0, Google, Github,
         | Gitlab, Bitbucket, and whatever other OIDC upstreams exist. Big
         | disaster. (I wrote https://github.com/jrockway/jsso2 and so
         | this isn't a problem for me personally. I can run any service I
         | want in my Kubernetes cluster, and authenticate to it with my
         | FaceID on my phone, or a touch of my Yubikey on my desktop.
         | Applications that want my identity can read the signed header
         | with extra information and verify it against a public key. But,
         | self-hosting auth is not a moneymaking business, so OIDC is
         | here to stay, wasting thousands of hours of software
         | engineering time a day.)
         | 
         | Ingress is the worst of Kubernetes' APIs. My customers run into
         | Ingress problems every day, because we use gRPC and keeping
         | HTTP/2 streams intact from client to backend is not something
         | it handles well. I have completely written it off -- it is
         | underspecified to the point of causing harm, and I'm shocked
         | when I hear about people using it in production. I just use
         | Envoy and have an xDS layer to integrate with Kubernetes, and
         | it does exactly what it should do, and no more. (I would like
         | some DNS IaC though.)
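          | 
          | (To give a sense of how controller-specific this is: with the
          | community nginx ingress controller, gRPC pass-through hinges
          | on an out-of-spec annotation -- a sketch with invented names,
          | assuming that controller is installed:
          | 
          |     apiVersion: networking.k8s.io/v1
          |     kind: Ingress
          |     metadata:
          |       name: grpc-api
          |       annotations:
          |         # controller-specific, not part of the Ingress spec
          |         nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
          |     spec:
          |       rules:
          |       - host: api.example.com
          |         http:
          |           paths:
          |           - path: /
          |             pathType: Prefix
          |             backend:
          |               service:
          |                 name: grpc-api
          |                 port:
          |                   number: 9000
          | 
          | Point a different controller at the same manifest and that
          | annotation means nothing.)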
         | 
         | Many things associated with Kubernetes are imperfect, like
         | Gitops. A lot of people have trouble with the stack that pushes
         | software to production, and there should be some sort of
         | standard here. (I use ShipIt, a Go program to edit manifests
         | https://github.com/pachyderm/version-bump, and ArgoCD, and am
         | very happy. But it was real engineering work to set that up,
         | and releasing new versions of in-house code is a big problem
         | that there should be a simple solution to.)
         | 
         | Most of these things are not problems brought about by
         | Kubernetes, of course. If you just have a Linux box, you still
         | have to configure auth and observability. But also, your
         | website goes down when the power supply in the computer dies.
         | So I think Kubernetes is an improvement.
         | 
         | The thing that will kill Kubernetes, though, is Helm. I'm out
         | of time to write this comment but I promise a thorough analysis
         | and rant in the future ;)
        
           | dekhn wrote:
            | Twice today I had to explain to coworkers that "auth is one
            | of the hardest problems in computer science".
            | 
            | For gRPC and HTTP/2: are you doing end-to-end gRPC (i.e.,
            | the TCP connection goes from a user's browser all the way to
            | your backend, without being terminated or proxied)?
        
           | gbrindisi wrote:
           | > The thing that will kill Kubernetes, though, is Helm. I'm
           | out of time to write this comment but I promise a thorough
           | analysis and rant in the future ;)
           | 
            | Too much of a cliffhanger! Now I want to know your POV :)
        
             | akvadrako wrote:
              | I don't know why anyone uses Helm. I've done a fair amount
              | of stuff with k8s and never saw the need. The built-in
              | kustomize is simple and flexible enough.
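              | 
              | A kustomize setup can be as small as this sketch (the file
              | names and image tag are made up):
              | 
              |     # kustomization.yaml -- apply with: kubectl apply -k .
              |     apiVersion: kustomize.config.k8s.io/v1beta1
              |     kind: Kustomization
              |     resources:
              |     - deployment.yaml
              |     - service.yaml
              |     images:
              |     - name: nginx
              |       newTag: "1.21.1"   # bump per release, no templating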
        
         | Filligree wrote:
         | Ditto.
         | 
         | Granted, I have to assume that borg-sre, etc. etc. are doing a
         | lot of the necessary basic work for us, but as far as the final
         | experience goes?
         | 
         | 95% of cases could be better solved by a traditional approach.
         | NixOps maybe.
        
         | jedberg wrote:
         | I think that's because Borg comes with a team of engineers who
         | keep it running and make it easy.
         | 
         | I've had a similar experience with Cassandra. Using Cassandra
         | at Netflix was a joy because it always just worked. But there
         | was also a team of engineers who made sure that was the case.
         | Running it elsewhere was always fraught with peril.
        
           | dekhn wrote:
            | yes, several of the big benefits are that the people who run
            | borg (and the ecosystem) are well run (for the most part),
            | and that you can find them in chat and get them to fix
            | things for you (or explain some sharp edge).
        
       | [deleted]
        
       | joe_the_user wrote:
        | Whatever Kubernetes' flaws, the analogy is clearly wrong. Multics
       | was never a success and never had wide deployment so Unix never
       | had to compete with it. Once an OS is widely deployed, efforts to
       | get rid of it have a different dynamic (see the history of
       | desktop computing, etc). Especially, getting rid of any deployed,
       | working system (os, application, language, chip-instruction-set,
       | etc) in the name of simplicity is inherently difficult. Everyone
       | agrees things should be stripped down to a bare minimum but no
       | one agrees on what that bare minimum is.
        
       | [deleted]
        
       | 1vuio0pswjnm7 wrote:
        | I like this title so much I am finally going to give this shell
        | a try. One thing I notice right away is readline. Could editline
        | also be an option? (There are two "editlines": the NetBSD one
        | and an older one at https://github.com/troglobit/editline) The
        | next thing I notice is the use of ANSI codes by default. Could
        | that be a compile-time option, or do we have to edit the source
        | to remove it?
       | 
       | TBH I think the graphical web browser is the current generation's
       | Multics. Something that is overly complex, corporatised, and
       | capable of being replaced by something simpler.
       | 
       | I am not steeped in Kubernetes or its reason for being but it
       | sounds like it is filling a void of shell know-how amongst its
       | audience. Or perhaps it is addressing a common dislike of the
       | shell by some group of developers. I am not a developer and I
       | love the shell.
       | 
       | It is one thing that generally does not change much from year to
       | year. I can safely create things with it (same way people have
       | made build systems with it) that last forever. These things just
       | keep running from one decade to the next no matter what the
       | current "trends" are. Usually smaller and faster, too.
        
       | wizwit999 wrote:
       | I agree a lot with his premise, that Kubernetes is too complex,
       | but not at all with his alternative to go even lower level.
       | 
       | And the alternative of doing everything yourself isn't too much
       | better either, you need to learn all sorts of cloud concepts.
       | 
       | The better alternative is a higher level abstraction that takes
       | care of all of this for you, so an average engineer building an
       | API does not need to worry about all these low level details,
       | kind of like how serverless completely removed the need to deal
       | with instances (I'm building this).
        
         | mountainriver wrote:
         | That sounds like knative
        
           | wizwit999 wrote:
            | I haven't heard of that. Took a look and it still seems too
            | low level. I think we need to think much bigger in this
            | space. Btw, we're not approaching this from a Kubernetes
            | angle at all.
        
       | zozbot234 wrote:
       | > Kubernetes is our generation's Multics
       | 
       | Prove it. Create something simpler, more elegant and more
       | principled that does the same job. (While you're at it, do the
       | same for systemd which is often criticized for the same reasons.)
       | Even a limited proof of concept would be helpful.
       | 
        | Plan9 and Inferno/Limbo were built as successors to *NIX to
        | address process/environment isolation ("containerization") and
        | distributed computing use cases from the ground up, but even
        | these don't come close to providing a viable solution for
        | everything that Kubernetes must be concerned with.
        
         | lallysingh wrote:
          | The successor will probably be a more integrated platform that
          | provides a lot of the stuff you currently need sidecars, etc.,
          | for.
         | 
         | Probably a language with good IPC (designed for real
         | distributed systems that handle failover), some unified auth
         | library, and built-in metrics and logging.
         | 
         | A lot of real-life k8s complexity is trying to accommodate many
         | supplemental systems for that stuff. Otherwise it's a job
         | scheduler and haproxy.
        
         | up_and_up wrote:
         | https://www.nomadproject.io/
        
           | gizdan wrote:
            | Nomad also doesn't have a lot of features that are built
            | into Kubernetes, features that otherwise require other
            | HashiCorp tools. So now you have a Vault cluster, a Consul
            | cluster, a Nomad cluster, then HCL to manage it all, and
            | probably a Terraform Enterprise cluster. So what have you
            | gained? Besides the same amount of complexity with fewer
            | features.
        
         | Fordec wrote:
         | I can claim electric cars will beat out hydrogen cars in the
         | long run. I don't have to build an electric car to back up this
         | assertion. I can look at the fundamental factors at hand and
         | project out based on theoretical maximums.
         | 
         | I can also claim humans will have longer lifespans in the
         | future. I don't need to develop a life extending drug before I
         | can hold that assertion.
         | 
            | Kubernetes is complex. Society still worked on simpler
            | systems before we added layers of complexity. There are
            | dozens of layers of abstraction above the level of
            | transistors; it is not a stretch to think that a more
            | elegant abstraction is yet to be designed, without it having
            | to "prove" itself to zozbot234 first.
        
           | 0xdeadbeefbabe wrote:
           | > comments are intended to add color on the design of the Oil
           | language and the motivation for the project as a whole.
           | 
            | Comments are also easier to write than code. He really does
            | seem obligated to prove Kubernetes is our generation's
            | Multics, and that's a good thing.
        
           | pphysch wrote:
           | What are the "fundamental factors at hand" with Kubernetes
           | and software orchestration? How do you quantify these things?
        
       | fnord77 wrote:
       | I really love how kubernetes decouples compute resources from
       | actual servers. It works pretty well and handles all kinds of
       | sys-ops-y things automatically. It really cuts down on work for
       | big deployments.
       | 
        | actually, it has shown me what sorts of dev-ops work are
        | completely unneeded.
        
       | [deleted]
        
       | tyingq wrote:
        | It looks to be getting more complex, too. I understand the sales
        | pitch for a service mesh like Istio, but now we're layering
        | something fairly complicated on top of K8S. Similar for other
        | aspects like bolt-on secrets managers, logging, and deployment,
        | all run through even more abstractions.
        
       | [deleted]
        
       | debarshri wrote:
       | One of the most relevant and amazing blogs I have read in recent
       | times.
       | 
        | I have been working for a firm that has been onboarding multiple
        | small-scale startups and lifestyle businesses to Kubernetes. My
        | opinion is that if you have a Ruby on Rails or Python app, you
        | don't really need Kubernetes. It is like bringing a bazooka to a
        | knife fight. However, I do think Kubernetes has some good
        | practices embedded in it, which I will always cherish.
        | 
        | If you are not operating at huge scale, in operations or/and
        | teams, it actually comes at a high cost in productivity and tech
        | debt. I wish there were an easier tech to bridge going from one
        | VM to a bunch of VMs, and from a bunch of containers to
        | Kubernetes.
        
       | slackfan wrote:
        | Kubernetes is fantastic if you're running global-scale cloud
        | platforms, i.e., you are literally Google.
        | 
        | Over my past five years working with it, there has not been a
        | single customer with a workload appropriate for Kubernetes; it
        | was 100% cargo-cult programming and tool selection.
        
         | dilyevsky wrote:
          | Your case is def not the norm. We're not Google-sized but we
          | are taking big advantage of k8s, running dozens of services on
          | it - from live video transcoding to log pipelines.
        
       | rektide wrote:
        | It's called "Images and Feelings", but I quite dislike using the
        | Cloud Native Computing Foundation's quite busy map of
        | services/offerings as evidence against Kubernetes. That lots of
        | people have adopted this, and built different tools & systems
        | around it and to help it, is not a downside.
        | 
        | I really enjoy the Oil blog, & when I clicked the link I was
        | really looking forward to some good real criticism. But it feels
        | to me like most of the criticism I see: highly emotional, really
        | averse/afraid/reactionary. It wants something easier and
        | simpler, which is so common.
       | 
       | I cannot emphasize enough, just do it anyways. There's a lot of
       | arguments from both sides about trying to assess what level of
       | complexity you need, about trying to right size what you roll
       | with. This outlook of fear & doubt & skepticism I think does a
       | huge disservice. A can do, jump in, eager attitude, at many
       | levels of scale, is a huge boon, and it will build skills &
       | familiarity you will almost certainly be able to continue to use
       | & enjoy for a long time. Trying to do less is harder, much
       | harder, than doing the right/good/better job: you will endlessly
       | hunt for solutions, for better ways, and there will be fields of
       | possibilities you must select from, must build & assemble
       | yourself. Be thankful.
       | 
        | Be thankful you have something integrative, be thankful you have
        | common cloud software you can enjoy that is cross-vendor, be
        | thankful there are so many different concerns that are managed
        | under this tent.
       | 
        | The build/deploy pipeline is still a bit rough, and you'll have
        | to pick/build it out. Kubernetes manifests are a bit big in
        | size, true, but it's really not a problem; they are there for
        | basically good purpose, and some refactoring wouldn't really
        | change what they are. There are some things that could be
        | better. But getting started is surprisingly easy, surprisingly
        | not heavy. There's a weird emotional war going on, and it's easy
        | to be convinced to be scared, to join in with reactionary
        | behaviors, but I really have seen nothing nearly so well
        | composed, nothing that fits together so many different pieces
        | well, and Kubernetes makes it fantastically easy imo to throw up
        | a couple containers & have them just run, behind a load
        | balancer, talking to a database, which covers a huge amount of
        | our use cases.
        
       | anonygler wrote:
       | I dislike the deification of Ken Thompson. He's great, but let's
       | not pretend that he'd somehow will a superior solution into
       | existence.
       | 
       | The economics and scale of this era are vastly different. Borg
       | (and thus, Kubernetes) grew out of an environment where 1 in a
       | million happens every second. Edge cases make everything
       | incredibly complex, and Borg has solved them all.
        
         | jeffbee wrote:
         | Much as I am a fan of borg, I think it succeeds mostly by
         | ignoring edge cases, not solving them. k8s looks complicated
         | because people have, in my opinion, weird and dumb use cases
         | that are fundamentally hard to support. Borg and its developers
         | don't want to hear about your weird, dumb use case and within
         | Google there is the structure to say "don't do that" which
         | cannot exist outside a hierarchical organization.
        
           | throwdbaaway wrote:
           | Interesting. Perhaps k8s is succeeding in the real world
           | because it is the only one that tries to support all the
           | weird and dumb use cases?
           | 
           | > Think of the history of data access strategies to come out
           | of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET - All
           | New! Are these technological imperatives? The result of an
           | incompetent design group that needs to reinvent data access
           | every goddamn year? (That's probably it, actually.) But the
           | end result is just cover fire. The competition has no choice
           | but to spend all their time porting and keeping up, time that
           | they can't spend writing new features.
           | 
           | > Look closely at the software landscape. The companies that
           | do well are the ones who rely least on big companies and
           | don't have to spend all their cycles catching up and
           | reimplementing and fixing bugs that crop up only on Windows
           | XP.
           | 
           | Instead of seeing k8s as the equivalent of "cover fire" or
           | Windows XP, a more apt comparison is probably Microsoft
           | Office, with all kinds of features to support all the weird
           | and dumb use cases.
        
       | misiti3780 wrote:
       | This post brings up a good question - how does one get better at
       | low-level programming? What are some good resources?
        
       | hnjst wrote:
       | I'm pretty biased since I gave k8s trainings and operate several
       | kubes for my company and clients.
       | 
       | I'll take two pretty different contexts to illustrate why _for
       | me_ k8s makes sense.
       | 
       | 1- I'm part of the cloud infrastructure team (99% AWS, a bit of
       | Azure) for a pretty large private bank. We are in charge of
       | security and conformity of the whole platform while trying to let
       | teams be as autonomous as possible. The core services we provide
       | are a self-hosted Gitlab along with ~100 CI runners (Atlantis and
       | Gitlab-CI, that many for segregation), SSO infrastructure and a
       | few other little things. Team of 5, I don't really see a better
       | way to run this kind of workload with the required SLA. The whole
        | thing is fully provisioned and configured via Terraform along
        | with its dependencies, and we have a staging env that is
        | identical (and the ability to pop another at will or to recreate
        | this one). Plenty of benefits, like almost 0-downtime upgrades
        | (workloads and cluster), off-the-shelf charts for plenty of apps,
       | observability, resources optimization (~100 runners mostly idle
       | on a few nodes), etc.
       | 
       | 2- Single VM projects (my small company infrastructure and home
       | server) for which I'm using k3s. Same benefits in terms of
       | observability, robustness (at least while the host stays up...),
        | IaC, resources usage. A stable, minimalist, hardened host OS
        | with the ability to run whatever makes sense inside k3s. I had
        | to set up similarly small infrastructures for other projects
        | recently with the constraint of relying on more classic tools so
        | that it's easier for the next ops to take over; I ended up
        | rebuilding a fraction of the k8s/k3s features with much more
        | effort (did that with docker and directly on the host OS for
        | several projects).
       | 
        | Maybe that's because I know my hammer well enough for screws to
        | look like nails, but from my perspective, once the tool is not
        | an obstacle, k8s standardized and made available a pretty
        | impressive and useful set of features, at large scale but
        | arguably also for smaller setups.
        
       | gonab wrote:
        | No one has used the words "docker swarm" in the comment section.
       | 
       | Fill in the words
       | 
       | Kubernetes is to Multics as ____ is to docker swarm
        
       | waynesonfire wrote:
        | my problem with k8s is that you learn OS concepts, and then k8s
        | / docker shits all over them.
        
       ___________________________________________________________________
       (page generated 2021-07-21 23:00 UTC)