[HN Gopher] Why is Kubernetes getting so popular?
       ___________________________________________________________________
        
       Why is Kubernetes getting so popular?
        
       Author : a7b3fa
       Score  : 174 points
       Date   : 2020-05-29 19:25 UTC (3 hours ago)
        
 (HTM) web link (stackoverflow.blog)
 (TXT) w3m dump (stackoverflow.blog)
        
       | clutchdude wrote:
       | It's not because of the networking stack.
       | 
       | I've yet to meet anyone who can easily explain how the CNI,
       | services, ingresses and pod network spaces all work together.
       | 
        | Everything is so interlinked and complicated that you need to
        | understand vast swathes of Kubernetes before you can tackle any
        | sort of complexity on the networking side.
       | 
        | I contrast that with its scheduling and resourcing components,
        | which are relatively easy to explain and obvious.
       | 
       | Even storage is starting to move to overcomplication with CSI.
       | 
       | I half jokingly think K8s adoption is driven by consultants and
       | cloud providers hoping to ensure a lock-in with the mechanics of
       | actually deploying workloads on K8s.
        
         | mrweasel wrote:
          | Assuming that, like us, you spent the last 10-12 years
          | deploying IPv6 and are currently running servers on IPv6-only
          | networks, the Kubernetes/Docker network stack is just plain
          | broken. It can be done, but you need to start thinking about
          | stuff like BGP.
         | 
         | Kubernetes should have been IPv6 only, with optional IPv4
         | ingress controllers.
        
           | geggam wrote:
            | You mean you don't like 3+ layers of NAT via iptables?
        
             | dijit wrote:
             | That's already happening anyway.
        
         | lallysingh wrote:
         | "I've yet to meet anyone who can easily explain how the CNI,
         | services, ingresses and pod network spaces all work together."
         | 
         | Badly! That'll be $500, thanks for your business.
         | 
         | On a serious note, the whole stack is keeping ok-ish coherence
         | considering the number of very different parties putting a ton
         | of work into it.
         | 
         | In a few years' time it'll be the source of many war stories
         | nobody cares about.
        
         | p_l wrote:
         | It helps going from the bottom up, IMO. It's a multi-agent
         | blackboard system with elements of control theory, which is a
         | mouthful, but it essentially builds from smaller blocks up.
         | 
         | Also, after OpenStack, the bar for "consulting-driven software"
         | is far from reached :)
        
         | base698 wrote:
         | For the nginx ingress case:
         | 
          | An ingress object creates an nginx/nginx.conf. That nginx
          | server has an IP address which has a round-robin IPVS rule.
          | When it gets the request it proxies to a service IP, which then
          | round-robins to the 10.0.0.0/8 container IP.
         | 
         | Ingress -> service -> pod
         | 
          | It is all very confusing, but once you look behind the curtain
          | it's straightforward if you know Linux networking and web
          | servers. The cloud providers remove the need for that Linux
          | knowledge.
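          | 
          | A minimal sketch of that chain (hypothetical names, assuming an
          | nginx ingress controller is installed):
          | 
          |     apiVersion: networking.k8s.io/v1
          |     kind: Ingress
          |     metadata:
          |       name: example-ingress
          |     spec:
          |       ingressClassName: nginx
          |       rules:
          |         - host: example.com
          |           http:
          |             paths:
          |               - path: /
          |                 pathType: Prefix
          |                 backend:
          |                   # the Service then round-robins to pod IPs
          |                   service:
          |                     name: example-svc
          |                     port:
          |                       number: 80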
        
           | ownagefool wrote:
            | I don't think this is accurate, which plays into the parent's
            | point, I guess.
            | 
            | Looking at the docs, ingress-nginx configures an upstream
            | using endpoints, which are essentially pod IPs, which skips
            | Kubernetes' service-based round-robin networking altogether.
           | 
           | Assuming you use an ingress that does configure services
           | instead, and assuming you're using a service proxy that uses
           | ipvs (i.e. kube-proxy in default settings) then your
           | explanation would have been correct.
           | 
            | For the most part, Kubernetes networking is as hard as
            | networking with loads of automation. Often, depth in both of
            | those skills is rarely found in the same person, but if
            | you're using a popular and/or supported CNI and not doing
            | things like changing it in flight, your average dev just
            | needs to learn basic k8s debugging, such as kubectl get
            | endpoints to check whether their service selectors are set up
            | correctly, and curling the pods to check whether they are
            | actually listening on those ports.
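            | 
            | For example, a rough sketch of that check (made-up names): the
            | Service selector has to match the pod labels, or kubectl get
            | endpoints will show no addresses.
            | 
            |     apiVersion: apps/v1
            |     kind: Deployment
            |     metadata:
            |       name: my-api
            |     spec:
            |       replicas: 2
            |       selector:
            |         matchLabels: { app: my-api }
            |       template:
            |         metadata:
            |           labels: { app: my-api }   # these labels...
            |         spec:
            |           containers:
            |             - name: my-api
            |               image: example.com/my-api:1.0
            |               ports:
            |                 - containerPort: 8080
            |     ---
            |     apiVersion: v1
            |     kind: Service
            |     metadata:
            |       name: my-api
            |     spec:
            |       selector: { app: my-api }     # ...must match this
            |       ports:
            |         - port: 80
            |           targetPort: 8080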
        
           | MuffinFlavored wrote:
           | > It is all very confusing
           | 
           | Is there an easier + simpler alternative?
        
             | base698 wrote:
             | Use Heroku, AppEngine, or if in k8s, Knative/Rio.
             | 
             | It's confusing because a lot of people being exposed to K8s
             | don't necessarily know how Linux networking and web servers
             | work. So there is a mix of terminology (services, ingress,
             | ipvs, iptables, etc) and context that may not be understood
             | if you didn't come from running/deploying Linux servers.
        
         | jonahbenton wrote:
         | Answer has to include implementation details. No credit if your
         | answer does not reference iptables.
        
         | AgentME wrote:
         | Services create a private internal DNS name that points to one
         | or more pods (which are generally managed by a Deployment
         | unless you're doing something advanced) and may be accessed
          | from within your cluster. Services with Type=NodePort do the
          | same and also allocate one or more ports on each of the hosts,
          | which proxy connections to the service inside the cluster.
          | Services with Type=LoadBalancer do the same as Type=NodePort
          | services and also configure a cloud load balancer with a fixed
          | IP address to point to the exposed ports on the hosts.
         | 
         | A single Service with Type=LoadBalancer and one Deployment may
         | be all you need on Kubernetes if you just want all connections
         | from the load balancer immediately forwarded directly to the
         | service.
         | 
         | But if you have multiple different services/deployments that
         | you want as accessible under different URLs on a single
         | IP/domain, then you'll want to use Ingresses. Ingresses let you
         | do things like map specific URL paths to different services.
         | Then you have an IngressController which runs a webserver in
         | your cluster and it automatically uses your Ingresses to figure
         | out where connections for different paths should be forwarded
         | to. An IngressController also lets you configure the webserver
         | to do certain pre-processing on incoming connections, like
         | applying HTTPS, before proxying to your service. (The
         | IngressController itself will usually use a Type=LoadBalancer
         | service so that a load balancer connects to it, and then all of
         | the Ingresses will point to regular Services.)
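          | 
          | A rough sketch of both shapes (all names made up): a
          | Type=LoadBalancer Service exposing one app directly, and an
          | Ingress mapping two URL paths to two ordinary Services.
          | 
          |     apiVersion: v1
          |     kind: Service
          |     metadata:
          |       name: web
          |     spec:
          |       type: LoadBalancer    # cloud provider allocates the IP
          |       selector: { app: web }
          |       ports:
          |         - port: 80
          |           targetPort: 8080
          |     ---
          |     apiVersion: networking.k8s.io/v1
          |     kind: Ingress
          |     metadata:
          |       name: site
          |     spec:
          |       rules:
          |         - host: example.com
          |           http:
          |             paths:
          |               - path: /api
          |                 pathType: Prefix
          |                 backend:
          |                   service: { name: api, port: { number: 80 } }
          |               - path: /
          |                 pathType: Prefix
          |                 backend:
          |                   service: { name: web, port: { number: 80 } }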
        
       | holidayacct wrote:
       | Because Google is an advertising company, their search engine
       | controls what people believe in and they also have some good
       | engineers but they are probably not well known. There is very
       | little they couldn't advertise into popularity. Whenever you see
        | overcomplicated software or infrastructure, it's always a way to
       | waste executive function, create frustration and create
       | unnecessary mental overhead. If the technology you're using isn't
       | making it easier for you to run your infrastructure from memory,
       | reduce the use of executive function and decrease frustration
       | then you should ignore it. Don't fall for the fashion trends.
        
         | verdverm wrote:
         | Please don't criticize, condemn, or complain if you don't have
         | anything constructive to add.
        
       | joana035 wrote:
       | Kubernetes is getting popular because it is a no-brainer api to
       | existing things like firewall, virtual ip, process placement,
       | etc.
       | 
       | It's basically running a big computer without even trying.
        
       | clvx wrote:
       | In a side note if you were to invest your time in writing
       | operators, would you use kubebuilder or operator-sdk?
        
         | vkat wrote:
          | Both use controller-runtime underneath, so there is not much
          | difference between the two. I have personally used both and
          | prefer kubebuilder.
        
       | dblooman wrote:
        | If the question were "Why is kube getting so popular with
        | developers?", it might get a different response. I wonder how
        | many software developers come to Kubernetes through the
        | templated/Helm chart/canned approach made by their DevOps team.
        | Not that this isn't a fine approach, but I find it a different
        | conversation from, say, serverless, where anyone can just jump
        | in.
       | 
       | After spending 18 months working on bringing kubernetes(EKS) to
       | production, with dozens of services on it, the time was right to
       | hand over migrating old services to the software engineers who
       | maintain them. Due to product demands, but also some lack of
       | advocacy, this didn't happen, with the DevOps folks ultimately
       | doing the migration and retaining all the kubernetes knowledge.
       | 
        | An unpopular opinion might be that Kubernetes is popular because
        | it gives DevOps teams new tech to play with, with long lead times
        | for delivery given its complexity. Kubernetes is usually a
        | gateway to tracing, service meshes and CRDs, which, while you
        | don't need them at all to run Kubernetes, will probably end up in
        | your cluster.
        
       | shakil wrote:
       | Call me biased [1] but K8s will take over the world! Yes you get
       | containers and micro-services and all that good stuff, but now
        | with Anthos [2] it's also the best way to achieve multi-cloud and
       | hybrid architectures. What's not to like!
       | 
       | 1. I work for GCP 2. https://cloud.google.com/anthos/gke
        
         | spyspy wrote:
         | Is there any benefit of Anthos over deploying straight to GKE
         | if you're already bought into GCP? We've had this debate
         | several times recently and can't come up with a good answer.
        
           | verdverm wrote:
            | Anthos lets you have Google run Kubernetes for you in any
            | cloud and on-prem.
            | 
            | The live VMware migration to Anthos is also quite impressive.
        
           | shakil wrote:
           | If you are bought in to GCP and plan to stay there, then
           | maybe not much. OTOH, Anthos would allow you to do easier
           | migrations from on-prem, support hybrid workloads, or
           | consolidate multi-cloud clusters including those running on
           | say, AWS [1] if you like.
           | 
           | 1. https://cloud.google.com/blog/topics/anthos/multi-cloud-
           | feat...
        
           | gobins wrote:
           | You get gitops style config management of your clusters with
           | Anthos.
        
       | zelphirkalt wrote:
        | In this post it might only be an example, but I don't see
        | anything that necessitates the use of YAML. All of that could be
        | put in a JSON file, which is far less complex.
       | 
        | YAML should not even be needed for Kubernetes. Configuration
        | should be representable in a purely declarative way, instead of
        | making the YAML mess, with all kinds of references and stuff.
        | Perhaps the configuration specification needs to be reworked.
        | Many projects using YAML feel to me like a configuration trash
        | can, where you just add more and more stuff that you haven't
        | thought through.
       | 
        | I once tried moving an already containerized system to Kubernetes
        | to test how that would work. It was a nightmare. This was a few
        | years ago, maybe 3 years. Documentation was plentiful but really
        | sucked. I could not find _any_ documentation of what can be put
        | into that YAML configuration file, what the structure really is.
        | I read tens of pages of documentation, and none of it helped me
        | find what I needed. Even setting everything up, just to get
        | Kubernetes running at all, took way too much time and 3 people to
        | figure out, and was badly documented. It took multiple hours on
        | at least 2 days. Necessary steps, I still remember, were not
        | listed on one single page in any kind of overview; instead a
        | required step was hidden on another documentation page that was
        | not even mentioned in the list of steps to take.
       | 
        | Finally having set things up, I had a web interface in front of
        | me, where I was supposed to be able to configure pods or
        | something. Only, I could not configure everything I had in my
        | already containerized system via that web interface. It seems
        | that this web interface was only meant for the most basic use
        | cases, where one does not need to provide containers with much
        | configuration. My only remaining option was to upload a YAML
        | file, which was undocumented, as far as I could see back then.
        | That's where I stopped. A horrible experience, and one I wish
        | never to have again.
       | 
       | There were also naming issues. There was something called "Helm".
       | To me that sounds like an Emacs package. But OK I guess we have
       | these naming issues everywhere in software development. Still
       | bugs me though, as it feels like Google pushes down its naming of
       | things into many people's minds and sooner or later, most people
       | will associate Google things with names, which have previously
       | meant different things.
       | 
        | There were 1 or 2 layers of abstraction in Kubernetes which I
        | found completely useless for my use case and wished were not
        | there, but of course I had to deal with them, as the system is
        | not flexible enough to allow me to have only the layers I need. I
        | just wanted to run my containers on multiple machines, balancing
        | the load and automatically restarting on crashes, you know, all
        | the nice things Erlang has offered for ages.
       | 
       | I feel like Kubernetes is the Erlang ecosystem for the poor or
       | uneducated, who've never heard of other ways, with features
       | poorly copied.
       | 
        | If I really needed to bring a system to multiple servers and
        | scale and load balance, I'd rather look into something like
        | Nomad. It seems much simpler, also offers load balancing over
        | multiple machines, and can run Docker containers and normal
        | applications as well. Plus I was able to set it up in less than
        | an hour or so, with two servers in the system.
        
       | ridruejo wrote:
       | It makes a bit more sense if you see Kubernetes as the new Linux:
       | a common foundation that the industry agrees on, and that you can
       | build other abstractions on top of. In particular Kubernetes is
       | the Linux Kernel, while we are in the early days of discovering
       | what the "Linux distro" equivalent is, which will make it much
       | more friendly / usable to a wider audience
        
         | verdverm wrote:
         | Second this!
        
       | maxdo wrote:
        | If you're on microservices, it's a no-brainer. You'd otherwise
        | need an army of DevOps with semi-custom scripts to maintain the
        | same thing. It really automates a lot of stuff. Helm + Kubernetes
        | gave our company the ability to launch microservices with no
        | DevOps involved. You just provide the name of the project, push
        | to git, and GitLab CI will pick it up and do the rest from the
        | template. Even junior developers in our team are doing that from
        | day one. Isn't that the future we dream about? If you have too
        | much load it will autoscale the pods; if a node is overloaded it
        | will autoscale the node pool; if you have a memory leak it will
        | restart the app so you can sleep at night. I can provide a
        | million examples that make managing our 100+ microservices so
        | much simpler. No Linux kung fu, 0 bash scripts, no SSH or
        | interaction with the OS, not a single DevOps role for a 15+
        | developer team.
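        | 
        | A minimal sketch of what such a templated pipeline can look like
        | (a hypothetical .gitlab-ci.yml, assuming one Helm chart per
        | service and cluster credentials already configured in CI):
        | 
        |     stages: [build, deploy]
        | 
        |     build:
        |       stage: build
        |       image: docker:latest
        |       services: [docker:dind]
        |       script:
        |         - docker login -u $CI_REGISTRY_USER
        |           -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
        |         - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
        |         - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
        | 
        |     deploy:
        |       stage: deploy
        |       image: alpine/helm:latest   # any image with helm works
        |       script:
        |         # upgrade (or install) the chart with the new image tag
        |         - helm upgrade --install $CI_PROJECT_NAME ./chart
        |           --set image.tag=$CI_COMMIT_SHORT_SHA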
       | 
        | Our cluster management is just a simple "add more CPU or memory
        | to this nodepool", and sometimes changing a nodepool name in the
        | deployment of a certain service. All done via a simple cloud
        | management UI. For those who call microservices fancy stuff: no,
        | we are a startup with a fast delivery and deploy cycle. We have
        | tons of subprojects and integrations, and our main languages are
        | nodejs, golang and python. Some of these are not good at
        | multi-threading, so there is no way to run it all as a monolith.
        | The other one is used only when it's needed for high performance.
        | So all together, microservices + Kubernetes + Helm + good CI +
        | proper pubsub give our backend an extremely simple, fast cycle of
        | development and delivery, and, importantly, flexibility in terms
        | of language/framework/version.
       | 
        | What is also good is the installation of services. With Helm I
        | can install a high-availability Redis setup for free in 5
        | minutes. The same level of setup would cost you several thousand
        | dollars of DevOps work, plus further maintenance and updates.
        | With k8s it's simply: helm install stable/redis-ha
       | 
        | So yeah, I can totally understand some simple projects don't need
        | k8s. I can understand you can build something in Scala and Java
        | slowly but with high quality as a monolith. You don't need k8s
        | for 3 services. I can understand some old DevOps don't want to
        | learn new things and complain about a tool that reduces the need
        | for these roles. Otherwise, you really need k8s.
        
         | p_l wrote:
         | I will happily use k8s for that big monolith.
         | 
          | Because soon, beyond one program on a dev server, there is a
          | need to run databases, log gathering, and to multiply the
          | previous to do parallel testing in a clean environment, etc.
         | 
         | Just running supporting tools for a small project where there
         | was insistence on self-hosting open source tools instead of
         | throwing money at slack and the like? K3s would have saved me
         | weeks of work :|
        
       | kgraves wrote:
        | As a manager I've heard about 'kubernetes' in all my meetings,
        | had a look at it, and have always questioned the cost of managing
        | it.
       | 
       | What is the cheapest way to setup a production kubernetes on a
       | cloud provider?
        
         | csunbird wrote:
          | DigitalOcean has a managed Kubernetes service that does not
          | cost anything except the resources you use. The master node and
          | management are free; you only pay for the node pools and stuff
          | like block storage (their version of EBS) or load balancers.
        
           | frompdx wrote:
           | I have used DO for managed Kubernetes since it was available
           | and I am very happy with it.
        
         | Turbots wrote:
         | Google kubernetes engine offers small, fully managed clusters
         | for like 200-300 bucks a month
        
         | gobins wrote:
          | At this point, most big cloud providers cost almost the same,
          | but in terms of maturity, Google's offering is still ahead. I
          | have not tried out DigitalOcean's hosted solution, but it might
          | be the cheapest.
        
       | yalogin wrote:
       | Google created it but did they get any benefit from it? Did it
       | help in getting any business for GCP?
        
       | gatvol wrote:
        | K8s is great - if you are solving infrastructure at a certain
        | scale, that scale being a bank, insurance company or mature
        | digital company. If you're not in that class then it's largely
        | overkill/overcomplex IMO, when you can simply use Terraform plus
        | a managed Docker host like ECS and attach cloud-native managed
        | services.
        | 
        | Again, the cross-cloud portability is a non-starter unless you're
        | really at scale.
        
         | verdverm wrote:
          | k8s has a bunch of other benefits besides just scaling, and you
          | can run a single-node cluster with the same uptime
          | characteristics as your proposed setup and get all these
          | benefits.
          | 
          | And we only have to learn one complex system and avoid learning
          | each cloud, one of which decided product names that have little
          | relation to what they do were a good idea.
        
         | twblalock wrote:
         | > you can simply use Terraform plus managed Docker host like
         | ECS and attach cloud-native managed services
         | 
         | That's not actually simple at all, and you would need to build
         | a lot of the other stuff that Kubernetes gives you for free.
         | 
         | Kubernetes gives you an industry standard platform with first-
         | class cloud vendor support. If you roll your own solution with
         | ECS, what you are really doing is making a crappy in-house
         | Kubernetes.
        
           | Plugawy wrote:
           | I'd disagree - my team migrated from running containers on
           | VMs (managed via Ansible) to ECS + Fargate (managed by
           | Terraform and a simple bash script). It wasn't a simple
           | transition by any means, but one person wrapped it up in 4
            | weeks - now we have 0-downtime deployments, scaling up/down
            | in a matter of seconds, and ECS babysits the containers.
            | 
            | Previously we had to deploy a lot of monitoring on each VM to
            | ensure that containers were running, so we got alerted when
            | one of the applications crashed and didn't restart because
            | the Docker daemon didn't handle it, etc.
           | 
            | Now, we only run stateless services in a private VPC subnet.
            | Load balancing is delegated to ALB, and we don't need service
            | discovery, meshes etc. Configuration is declarative, but
            | written in much friendlier HCL (I'm ok with YAML, but to a
            | degree). ECS just works for us.
           | 
           | Just like K8S might work for a bigger team, but I wouldn't
           | adopt it at our shop, simply because of all of the complexity
           | and huge surface area.
        
         | p_l wrote:
         | Hard disagree.
         | 
         | What k8s really scales is the developer/operator power. Yes, it
         | is complex, but pretty much all of it is _necessary_
         | complexity. At small enough scale with enough time, you can dig
         | a hole with your fingers - but a proper tool will do wonders to
         | how much digging you can do. And a lot of that complexity is
         | present even when you do everything the  "old" way, it's just
         | invisible toil.
         | 
         | And a lot of the calculus changes when 'managed services' stop
         | being cost effective or aren't an option at all, or you just
         | want to be able to migrate elsewhere (that can be at low scale
         | too, because of being price conscious).
        
       | pea wrote:
       | Do you guys think k8s is doing a job which previously the jvm did
       | in enterprise? i.e. if everything is on the jvm, building
       | distributed systems doesn't require a network of containers.
       | 
        | Can k8s' success be explained partly by the need for a more
        | polyglot stack?
        
         | verdverm wrote:
         | How do you roll over a fleet of JVM applications with zero
         | downtime and maintain rollback revision history?
         | 
         | Is it as easy as two simple commands?
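          | 
          | In k8s it roughly is that easy; a sketch of the relevant
          | Deployment settings (made-up names; in practice you'd add a
          | readiness probe too), with the two commands as comments:
          | 
          |     apiVersion: apps/v1
          |     kind: Deployment
          |     metadata:
          |       name: jvm-app
          |     spec:
          |       replicas: 4
          |       revisionHistoryLimit: 10   # for rollbacks
          |       strategy:
          |         type: RollingUpdate
          |         rollingUpdate:
          |           maxUnavailable: 0      # never drop below 4 pods
          |           maxSurge: 1
          |       selector:
          |         matchLabels: { app: jvm-app }
          |       template:
          |         metadata:
          |           labels: { app: jvm-app }
          |         spec:
          |           containers:
          |             - name: app
          |               image: example.com/jvm-app:1.2.3
          | 
          |     # roll forward:
          |     #   kubectl set image deployment/jvm-app \
          |     #     app=example.com/jvm-app:1.2.4
          |     # roll back:
          |     #   kubectl rollout undo deployment/jvm-app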
        
       | ashtonkem wrote:
       | My opinion is that Kubernetes is the common integration point.
       | Tons of stuff works with Kubernetes without having to know about
       | each other, making deployments much much easier.
        
       | claytongulick wrote:
       | I completely understand the use case for Kubernetes when you're
       | dealing with languages that require a lot of environment config,
       | like Python.
       | 
       | I've never really thought it was that useful for (for example)
       | nodejs, where you can just npm install your whole environment and
       | deps, and off you go.
        
         | frompdx wrote:
         | I have mostly used Kubernetes for Node.js apps and find it very
         | useful for the following reasons:
         | 
         | - Automatic scaling of pods and cluster VMs to meet demand.
         | 
         | - Flexible automated process monitoring via liveness/readiness
         | probes.
         | 
         | - Simple log streaming across horizontally scaled pods running
         | the same app/serving the same function using stern.
         | 
         | - Easy and low cost metrics aggregation with Prometheus and
         | Grafana.
         | 
         | - Injecting secrets into services.
         | 
          | I'd imagine there are other things that can offer the same, but
          | I find it convenient to have them all in the same place.
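          | 
          | A rough manifest sketch of a couple of those items (made-up
          | names): liveness/readiness probes plus a secret injected as an
          | environment variable.
          | 
          |     apiVersion: apps/v1
          |     kind: Deployment
          |     metadata:
          |       name: node-api
          |     spec:
          |       replicas: 2
          |       selector:
          |         matchLabels: { app: node-api }
          |       template:
          |         metadata:
          |           labels: { app: node-api }
          |         spec:
          |           containers:
          |             - name: node-api
          |               image: example.com/node-api:1.0
          |               ports:
          |                 - containerPort: 3000
          |               livenessProbe:       # restart if failing
          |                 httpGet: { path: /healthz, port: 3000 }
          |               readinessProbe:      # gate traffic on this
          |                 httpGet: { path: /ready, port: 3000 }
          |               env:
          |                 - name: DB_PASSWORD   # from a Secret
          |                   valueFrom:
          |                     secretKeyRef:
          |                       name: node-api-secrets
          |                       key: db-password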
        
       | dustym wrote:
       | I like to say (lovingly) that Kubernetes takes complex things and
       | simplifies them in complex ways.
        
         | more_corn wrote:
         | Kubernetes makes impossible things merely hard, but at the
         | expense of also making normal and easy things hard.
         | 
         | The system becomes so complex that most people screw up simple
         | things like redundancy, perimeter security and zero downtime
         | updates.
         | 
         | I've seen all of the above from very bright and capable people.
        
         | battery_cowboy wrote:
         | It just hides the complexity in some yml files instead of in a
         | deploy script or a sysadmin's head.
        
           | happytoexplain wrote:
           | You say that like it's a bad thing. A declarative model is
           | infinitely better for representing complex systems than
           | scripts and mind space. The challenge is actually being able
           | to get to that point.
        
             | geggam wrote:
             | Until it breaks.
        
               | happytoexplain wrote:
               | No, even then it's still better. A broken declarative
               | model is better than a broken script.
        
       | StreamBright wrote:
       | Because developers are lazy.
       | 
        | I don't want to do memory management -> GC
        | 
        | I don't want to do packaging -> Docker
        | 
        | I don't want to do autoscaling -> Kubernetes
        
       | bvandewalle wrote:
        | I'm using Kubernetes extensively in my day-to-day work, and once
        | you get it up and running and learn the different abstractions,
        | it becomes a single API to manage your container, storage and
        | network ingress needs. It makes it easy to take a container and
        | get it up and running in the cloud with an IP address and DNS
        | configured in a couple of API calls (or defined as YAMLs).
       | 
       | That being said, I will also be the first one to recognize that
       | PLENTY of workloads are not made to run on Kubernetes. Sometimes
       | it is way more efficient to spawn an EC2/GCE instance and run a
       | single docker container on it. It really depends on your use-
       | case.
       | 
        | If I had to run a relatively simple app in prod I would never use
        | Kubernetes to start with. Kubernetes starts to pay for itself
        | once you have a critical mass of services on it.
        
       | kureikain wrote:
        | Before K8s, to run a service you needed:
        | 
        | - VM setup: dependencies, toolchain. If you use a package with a
        | native component, such as image processing, you even need to set
        | up a compiler on the VM.
        | 
        | - A deployment process.
        | 
        | - A load balancer.
        | 
        | - A systemd unit to auto-restart it, set memory limits, etc.
        | 
        | All of that is handled in K8s. As long as you ship a Dockerfile,
        | you're done.
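        | 
        | A sketch of the k8s equivalents (made-up names): restart
        | behaviour and memory limits live in the Deployment, and the load
        | balancer is a Service.
        | 
        |     apiVersion: apps/v1
        |     kind: Deployment
        |     metadata:
        |       name: my-service
        |     spec:
        |       replicas: 3
        |       selector:
        |         matchLabels: { app: my-service }
        |       template:
        |         metadata:
        |           labels: { app: my-service }
        |         # crashed containers are restarted automatically
        |         spec:
        |           containers:
        |             - name: app
        |               # image built from your Dockerfile
        |               image: example.com/my-service:1.0
        |               resources:
        |                 limits: { memory: 512Mi, cpu: 500m }
        |     ---
        |     apiVersion: v1
        |     kind: Service
        |     metadata:
        |       name: my-service
        |     spec:
        |       type: LoadBalancer     # replaces the hand-rolled LB
        |       selector: { app: my-service }
        |       ports:
        |         - port: 80
        |           targetPort: 8080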
        
       | mancini0 wrote:
        | Let's use Bazel, and Bazel's rules_k8s, to
        | build/containerize/test/deploy only the microservices of my
        | monorepo that changed.
        | 
        | Let's use Istio's "istioctl manifest apply" to deploy a service
        | mesh to my cluster that allows me to pull auth logic / service
        | discovery / load balancing / tracing out of my code and let Istio
        | handle this.
        | 
        | Let's configure my app's infrastructure (Kafka (Strimzi),
        | Yugabyte/Cockroach, etc.) as YAML files. Being able to describe
        | my Kafka config (foo topic has 3 partitions, etc.) in YAML is
        | priceless.
        | 
        | Let's move my entire application and its infrastructure to
        | another cloud provider by running a single Bazel command.
       | 
       | k8s is the common denominator that makes all this possible.
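        | 
        | For the Kafka part, a rough sketch of what that looks like with
        | Strimzi's topic CRD (made-up names; check Strimzi's docs for the
        | exact API version):
        | 
        |     apiVersion: kafka.strimzi.io/v1beta2
        |     kind: KafkaTopic
        |     metadata:
        |       name: foo
        |       labels:
        |         # names the Strimzi-managed Kafka cluster
        |         strimzi.io/cluster: my-cluster
        |     spec:
        |       partitions: 3
        |       replicas: 2
        |       config:
        |         retention.ms: 604800000   # 7 days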
        
         | MuffinFlavored wrote:
         | > k8s is the common denominator that makes all this possible.
         | 
         | can't... terraform make all of that possible?
        
           | p_l wrote:
           | Terraform explicitly doesn't want to deal with deployment of
           | stuff that is inside VMs etc. and tries to tell you to use
           | managed services or cloud-config yamls as the solution.
           | 
            | You can write your own providers, or you can use the
            | provisioner support, but TF doesn't like that and it shows.
        
       | seph-reed wrote:
       | It's a developer tool made originally by google. Of course it's
       | popular. Which isn't to say it's bad, it's just not much of a
       | question as to why it's popular.
       | 
       | -------
       | 
       | Kubernetes - kubernetes.io
       | 
       | Kubernetes is an open-source container-orchestration system for
       | automating application deployment, scaling, and management. It
       | was originally designed by Google, and is now maintained by the
       | Cloud Native Computing Foundation.
       | 
       | Original author(s): Google
        
       | sp332 wrote:
       | I'd say it's down to two things. First is the sheer amount of
       | work they're putting into standardization. They just ripped out
       | some pretty deep internal dependencies to create a new storage
       | interface. They have an actual standards body overseen by the
       | Linux Foundation. So I agree with the blog post there.
       | 
       | The second reason is also about standards, but using them more
       | assertively. Docker had way more attention and activity until
       | 2016 when Kubernetes published the Container Runtime Interface.
       | By limiting the Docker features they would use, they leveled the
       | playing field between Docker and other runtimes, making Docker
       | much less exciting. Now, new isolation features are implemented
       | down at the runc level and new management features tend to target
       | Kubernetes because it works just as well with any CRI-compliant
       | runtime. Developing for Docker feels like being locked in.
        
         | MuffinFlavored wrote:
         | > By limiting the Docker features they would use, they leveled
         | the playing field between Docker and other runtimes, making
         | Docker much less exciting.
         | 
         | Isn't the most popular k8s case to deploy Docker images still
         | though?
        
           | sp332 wrote:
           | Yes. But I think k8s took a lot of attention away from Docker
           | in terms of headspace and developer interest. RedHat for
           | example is pushing CRI-O, which is a minimalist CRI-compliant
           | runtime which lets admins focus even more on k8s and less on
           | the whole runtime level.
        
           | moduspwnens14 wrote:
           | It's confusing, but Docker images (and image registries) are
           | also an open standard that Docker implements [1].
           | 
           | A lot of the Kubernetes "cool kids" just run containerd
           | instead of Docker. Docker itself also runs containerd, so
           | when you're using Kubernetes with Docker, Kubernetes has to
           | basically instruct Docker to set up the containers the same
           | way it would if it were just talking to containerd directly.
           | From a technical perspective, you're adding moving parts for
           | no benefit.
           | 
           | If you use containerd in your cluster, you can then use
           | Docker to build and push your images (from your own or a
           | build machine), but pull and run them on your Kubernetes
           | clusters without Docker.
           | 
           | [1] https://en.wikipedia.org/wiki/Open_Container_Initiative
        
           | p_l wrote:
           | Yes. The big difference, however, is that k8s removed docker
           | from consideration when actually running the system. Yes, you
           | have docker underneath, and are probably going to use docker
           | to build the containers.
           | 
            | But deploy to k8s? There's no docker outside of a few bits
            | involving "how to get to the image", and the actual docker
            | features that are used are also minimized. The result is that
            | many warts of docker are completely bypassed and you don't
            | have to deal with the impact of legacy decisions, or try to
            | wrangle a system designed for easy use by a developer at a
            | local machine into a complex server deployment. And, IMHO,
            | the interfaces used by k8s for the advanced features are
            | much, much better than the interfaces used or exported by
            | docker.
        
       | mahgnous wrote:
       | I don't see any advantage to it unless you're like
       | AWS/Google/etc. It's a bit overkill.
        
       | silviogutierrez wrote:
       | For me, and many others: infrastructure as code.
       | 
        | Kubernetes is _very_ complex and took a _long_ time to learn
        | properly. And there have been fires along the way. I plan to
        | write extensively on my blog about it.
       | 
       | But at the end of the day: having my entire application stack as
       | YAML files, fully reproducible [1] is invaluable. Even cron jobs.
       | 
       | Note: I don't use micro services, service meshes, or any fancy
       | stuff. Just a plain ol' Django monolith.
       | 
       | Maybe there's room for a simpler IAC solution out there. Swarm
       | looked promising then fizzled. But right now the leader is k8s[2]
       | and for that alone it's worth it.
       | 
       | [1] Combined with Terraform
       | 
       | [2] There are other proprietary solutions. But k8s is vendor
       | agnostic. I can and _have_ repointed my entire infrastructure
       | with minimal fuss.
        
         | jcastro wrote:
         | > I can and have repointed my entire infrastructure with
         | minimal fuss.
         | 
         | When you get to that blog post please consider going in depth
         | on this. Would love to see actual battletested information vs.
         | the usual handwavy "it works everywhere".
        
           | silviogutierrez wrote:
           | I sure will. 99% of the work was ingress handling and SSL
           | cert generation. Everything else was fairly seamless.
           | 
            | Even ingress is trivial if you use a cloud load balancer per
            | ingress. But I wanted to save money, so I use a single cloud
            | load balancer for multiple ingresses. So you need something
            | like ingress-nginx, which has a few vendor-specific
            | subtleties.
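            | 
            | For what it's worth, the shape of that setup is roughly the
            | following (made-up names; the cert-manager annotation only
            | applies if you use cert-manager for the SSL certs): one
            | ingress-nginx controller exposed by a single LoadBalancer
            | Service, with each app's Ingress opting into that controller
            | class.
            | 
            |     apiVersion: networking.k8s.io/v1
            |     kind: Ingress
            |     metadata:
            |       name: blog
            |       annotations:
            |         cert-manager.io/cluster-issuer: letsencrypt
            |     spec:
            |       ingressClassName: nginx   # shares the single LB
            |       tls:
            |         - hosts: [blog.example.com]
            |           secretName: blog-tls
            |       rules:
            |         - host: blog.example.com
            |           http:
            |             paths:
            |               - path: /
            |                 pathType: Prefix
            |                 backend:
            |                   service: { name: blog, port: { number: 80 } }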
        
         | tyingq wrote:
         | There's an offshoot of this that I see from developers,
         | especially at old, stodgy companies.
         | 
         | Once everything is "infrastructure as code", the app team
         | becomes less dependent on other teams in the org.
         | 
         | People like to own their own destiny. Of course, that also
         | removes a lot of potential scapegoats, so you now mostly own
         | all outages, tech debt, etc.
        
           | duxup wrote:
           | I think that's been coming since ... well ever.
           | 
            | I worked in networking for the longest time. When I started
            | there were network guys and server guys (at least where I
            | was). They were different people who did different things and
            | kinda worked together.
           | 
           | Then there were storage area networks and similar, networks
           | really FOR the server and storage guys.... that kind of
           | extended the server world over some of the network.
           | 
           | Then comes VMware and such things and now there was a network
           | in a box somewhere that was entirely the server guy's deal
           | (well except when we had to help them... always).
           | 
            | Then we also had load balancers, which in their own way were
            | a sort of code for networks ... depending on how you looked
            | at it (open ticket #11111 of 'please stop hard coding IP
            | addresses').
           | 
           | You also had a lot of software defined networking type things
           | and so forth brewing up in dozens of different ways.
           | 
            | Granted these descriptions are not exact; there were ebbs and
            | flows and some tech that sort of did this (or tried) all
            | along. It all starts to evolve slowly into one entity.
        
         | UweSchmidt wrote:
         | With "Swarm", do you mean Docker Swarm? Why has it "fizzled"?
         | 
         | The way I learned it in Bret Fisher's Udemy course, Swarm is
         | very much relevant, and will be supported indefinitely. It
         | seems to be a much simpler version of Kubernetes. It has both
         | composition in YAML files (i.e. all your containers together)
         | and the distribution over nodes. What else do you need before
         | you hit corporation-scale requirements?
        
           | morelisp wrote:
           | > What else do you need before you hit corporation-scale
           | requirements?
           | 
           | Cronjobs, configmaps, and dynamically allocated persistent
           | volumes have been big ones for our small corporation. Access
           | control also, but I'm less aware of the details here, other
           | than that our ops is happier to hand out credentials with
           | limited access, which was somehow much more difficult with
            | Swarm.
           | 
           | Swarm has frankly also been buggy. "Dead" but still running
           | containers - sometimes visible to swarm, sometimes only the
           | local Docker daemon - happen every 1-2 months, and it takes
           | forever to figure out what's going on each time.
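            | 
            | For comparison, a minimal sketch of the first of those
            | (made-up names), a CronJob, which Swarm has no native
            | equivalent for:
            | 
            |     apiVersion: batch/v1
            |     kind: CronJob
            |     metadata:
            |       name: nightly-report
            |     spec:
            |       schedule: "0 3 * * *"   # every night at 03:00
            |       jobTemplate:
            |         spec:
            |           template:
            |             spec:
            |               restartPolicy: OnFailure
            |               containers:
            |                 - name: report
            |                   image: example.com/report:1.0
            |                   envFrom:
            |                     # environment pulled from a ConfigMap
            |                     - configMapRef:
            |                         name: report-config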
        
           | sdan wrote:
           | I use Swarm in production and am learning k8s as fast as
           | possible because of how bad Swarm is:
           | 
           | 1. Swarm is dead in the water. No big releases/development
           | afaik recently
           | 
           | 2. Swarm for me has been a disaster because after a couple of
           | days some of my nodes slowly start failing (although they're
           | perfectly normal) and I have to manually remove each node
           | from the swarm, join them, and start everything up again. I
           | think this might be because of some WireGuard
           | incompatibility, but the strange thing is that it works for a
           | week sometimes and other times just a few hours
           | 
           | 3. Lack of GPU support
        
             | GordonS wrote:
             | To add another side, I use Swarm in production and continue
             | to do so because of how _good_ it is.
             | 
             | I've had clusters running for years without issue. I've
             | even used it for packaging B2B software, where customers
             | use it both in cloud and on-prem - no issues whatsoever.
             | 
             | I've looked at k8s a few times, but it's vastly more
             | complex than Swarm (which is basically Docker Compose with
             | cluster support), and would add nothing for my use case.
             | 
             | I'm sure a lot of people need the functionality that k8s
             | brings, but I'm also sure that many would be better suited
             | to Swarm.
        
               | sdan wrote:
               | Yeah I guess for smaller projects and the addition of
               | using Docker Compose files, Swarm would be worth it.
               | 
               | If K8s supported compose scripts out of the box (not
               | Kompose) that'd basically make Swarm unnecessary (at
               | least for me)
        
         | gigatexal wrote:
         | imo, all swarm needed was a decent way to handle secrets and it
         | would have been the k8s of its day
        
         | bdcravens wrote:
         | Of course it doesn't have the advantage of #2, but I've found
         | ECS to be far easier to grok and implement.
        
         | McAtNite wrote:
         | Out of curiosity why do you feel swarm fizzled out?
         | 
          | I've deployed Swarm in a home lab and found it really simple to
          | work with, and enjoyable to use. I haven't tried k8s, but I
          | often see viewpoints like yours stating that k8s is vastly
          | superior.
        
         | archsurface wrote:
         | I'm not sure "a plain ol' Django monolith" with none of the
         | "fancy stuff" is either what people are referring to when they
         | say "kubernetes", or a great choice for that. I could run hello
         | world on a Cray but that doesn't mean I can say I do
         | supercomputing. Our team does use it for all the fancy stuff,
         | and spends all day everyday for years now yamling,
         | terraforming, salting, etc so theoretically our setup is
         | "entire application stack as YAML files, fully reproducible",
         | but if it fell apart tomorrow, I'd run for the hills.
         | Basically, I think you're selling it from a position which
         | doesn't use it to any degree which gives sufficient experience
         | required to give in-depth assessment of it. You're selling me a
         | Cray based on your helloworld.
        
           | silviogutierrez wrote:
           | Reading this charitably: I guess I agree. k8s is definitely
           | overpowered for my needs. And I'm almost certain my blog or
           | my business will _never_ need that full power. Fully aware of
           | that.
           | 
           | But I'm not sure one can find something of "the right power"
           | that has the same support from cloud providers, the open
           | source community, the critical mass, etc. [1]
           | 
           | Eventually, a standard "simplified" abstraction over k8s will
           | emerge. Many already exist, but they're all over the place.
           | And some are vendor specific (Google Cloud Run is basically
           | just running k8s for you). Then if you need the power, you
           | can eject. Something like Create React App, but _by_
           | Kubernetes. Create Kubernetes App.
           | 
           | [1] Though Nomad looks promising.
        
             | wolco wrote:
             | Curious why run it at all? The cost must be 10 times more
             | this way. It is mostly for the fun of learning.
             | 
              | I come from the opposite approach. I have 4 servers: two
              | DigitalOcean $5 and two Vultr $2.50 instances. One holds
              | the db. One serves as the frontend/code. One does heavy
              | work and another serves a heavy site and holds backups.
              | For $15 I'm hosting hundreds of sites, running many
              | background processes. I couldn't imagine hitting the point
              | where k8s would make sense just for myself, unless for fun.
        
             | torvald wrote:
             | I was about to, but seems like you answered the question
             | yourself through that footnote.
        
         | bulldoa wrote:
          | Do you have a recommended tutorial for an engineer with a
          | backend background to set up a simple k8s infra on EC2?
        
         | closeparen wrote:
         | What does the Kubernetes configuration format offer over
         | configuration management systems like Ansible, Salt, Puppet,
         | Chef, etc?
        
           | bosswipe wrote:
           | According to the article the advantage of Kubernetes is that
           | you're not writing code like you are with Puppet and Chef.
           | You're writing YAML files.
        
           | p_l wrote:
            | Having extensively used Chef and K8s, the difference is that
            | they try to deal with chaos in an unmanaged way (Puppet is
            | the closest to "managed"), but when dealing with wild chaos
            | you lack many ways of enforcing order. Plus they don't really
            | do multi-server computation of resources.
           | 
           | What k8s brings to the table is a level of standardization.
           | It's the difference between bringing some level of robotics
           | to manual loading and unloading of classic cargo ships, vs.
           | the fully automated containerized ports.
           | 
            | With k8s, you get structure where you can wrap an individual
            | program's idiosyncrasies into a container that exposes a
            | standard interface. This standard interface allows you to
            | then easily drop it onto a server, with various topologies,
            | resources, networking etc. handled through common interfaces.
            | 
            | I have said that for a long time, but recently I got to
            | understand just how much work k8s can "take away" when I
            | foolishly said "eh, it's only one server, I will run this the
            | classic way". Then I spent 5 days on something that could
            | have been handled within an hour on k8s, because k8s
            | virtualized away HTTP reverse proxies, persistent storage,
            | and load balancing in general.
           | 
           | Now I'm thinking of deploying k8s at home, not to learn, but
           | because I know it's easier for me to deploy nextcloud, or an
           | ebook catalog, or whatever, using k8s than by setting up more
           | classical configuration management system and deal with
           | inevitable drift over time.
        
             | naringas wrote:
             | > Now I'm thinking of deploying k8s at home, not to learn,
             | but because I know it's easier for me to deploy nextcloud,
             | or an ebook catalog
             | 
             | can't you do that just with containers?
        
               | p_l wrote:
               | I can do it with just containers, yes.
               | 
                | It would mean I removed ~20% of the things that were
                | annoying me and left 80% still to solve, while Kubernetes
                | covers 80% for me, with the remaining 20% being mostly
                | "assemble these blocks".
               | 
               | Plus, a huge plus of k8s for me was that it abstracted
               | away horrible interfaces and behaviours of docker daemon
               | and docker cli.
        
               | aequitas wrote:
               | But what do you use to manage those containers and
               | surrounding infra (networking, proxies, etc)? I've been
               | down the route of using Puppet for managing Docker
               | containers on existing systems, Ansible, Terraform,
               | Nomad/Consul. But in the end it all is just tying
               | different solutions together to make it work. Kubernetes
                | (in the form of K3s or another lightweight
               | implementation) just works for me, even in a single
               | server setup. I barely have to worry about the OS layer,
               | I just flash K3s to a disk and only have to talk to the
                | Kubernetes API to apply declarative configurations. The
                | only things I sometimes still need the OS layer for are
                | networking, firewall or hardening of the base OS. But
                | that configuration is mostly static anyway, and I'm sure
                | I will find some operators to manage those through the
                | Kubernetes API as IaC if I really need to.
        
               | bisby wrote:
               | I used to have a bunch of bash scripts for bootstrapping
               | my docker containers. At one point I even made init
               | scripts, but that was never fully successful.
               | 
               | And then one day I decided to set up kubernetes as a
               | learning experiment. There is definitely some learning
               | curve about making sure I understood what deployment, or
               | replicaset or service or pod or ingress was, and how to
               | properly set them up for my environment. But now that I
               | have that, adding a new app to my cluster, and making it
                | accessible is super low effort. I have previous YAML
                | files to base my new app's config on.
               | 
               | It feels like the only reason not to use it would be
               | learning curve and initial setup... but after I overcame
               | the curve, it's been a much better experience than trying
               | to orchestrate containers by hand.
               | 
               | Perhaps this is all doable without kubernetes, and there
               | is a learning curve, but it's far from the complicated
               | nightmare beast everyone makes it out to be (from the
               | user side, maybe from the implementation details side)
        
             | bulldoa wrote:
             | >Now I'm thinking of deploying k8s at home
             | 
              | Are we talking about k8s based on your own server rack at
              | your house?
        
           | dustym wrote:
            | For certain things like layer 4 and layer 7 routing or
            | firewall policies, health checking and failover, network-
            | attached volumes, etc. you have to choose software and
            | configure it, on top of getting that configuration into that
            | tooling. So you are doing kernel or iptables or nginx or
            | monit/supervisord configurations and so on.
           | 
           | But basic versions of these things are provided by Kubernetes
           | natively and can be declared in a way that is divorced from
           | configuring the underlying software. So you just learn how to
           | configure these broader concepts as services or ingresses or
           | network policies, etc, and don't worry about the underlying
           | implementations. It's pretty nice actually.
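            | 
            | For example, a network policy declared at that broader level
            | without touching iptables directly (a sketch with made-up
            | names; enforcement depends on the CNI plugin supporting
            | NetworkPolicy):
            | 
            |     apiVersion: networking.k8s.io/v1
            |     kind: NetworkPolicy
            |     metadata:
            |       name: api-allow-web-only
            |     spec:
            |       podSelector:
            |         matchLabels: { app: api }   # applies to api pods
            |       policyTypes: [Ingress]
            |       ingress:
            |         - from:
            |             # only pods labelled app=web may connect
            |             - podSelector:
            |                 matchLabels: { app: web }
            |           ports:
            |             - port: 8080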
        
           | birdyrooster wrote:
           | They are not comparable. You might use ansible, salt, puppet
           | or chef to deploy kubelet, apiserver, etc. You could, barring
           | those with self-love, even deploy Ansible tower on Kubernetes
           | to manage your kubernetes infrastructure.
        
             | [deleted]
        
           | silviogutierrez wrote:
           | I'm not intimately familiar with those, but I did a lot of
           | similar things with scripts.
           | 
           | As far as I can tell: those are imperative. At least in some
           | areas.
           | 
           | Kubernetes is declarative. You mention the end state and it
           | just "figures it out". Mind you, with issues sometimes.
           | 
           | All abstractions leak. Note that k8s's adamance about
           | declarative configuration can make you bend over backwards.
           | Example: running a migration script post deploys. Or waiting
           | for other services to start before starting your own. Etc.
           | 
           | I think in many ways, those compete with Terraform which is
           | "declarative"-ish. There's very much a state file.
        
             | bcrosby95 wrote:
             | I've only ever used cfengine and Ansible, but they are both
             | declarative. Hell, Ansible uses yaml files too.
             | 
             | I would be somewhat surprised to find out Puppet and Chef
             | weren't declarative either. Because setting up a system in
             | an imperative fashion is ripe for trouble. You may as well
             | use bash scripts at that point.
             | 
             | I've used Ansible for close to 10 years for hobby projects.
             | And setting up my development environment. Give me a
             | freshly installed Ubuntu laptop, and I can have my
             | development environment 100% setup with a single command.
        
               | geofft wrote:
               | Ansible is YAML, but it's definitely imperative YAML -
               | each YAML file is a list of steps to execute. It uses
               | YAML kind of like how Lisp uses S-expressions, as a nice
               | data structure for people to write code in, but it's
               | still code.
               | 
               | Sure, the steps are things like "if X hasn't been done
               | yet, do it." That means it's idempotent imperative code.
               | It doesn't mean it's declarative.
               | 
               | CFEngine is slightly less imperative, but when I was
               | doing heavy CFEngine work I had a printout on my cubicle
               | wall of the "normal ordering" because it was extremely
               | relevant that CFEngine ran each step in a specific order
               | and looped over that order until it converged, and I
               | cared about things like whether a files promise or
               | packages promise executed first so I could depend on one
               | in the other.
               | 
               | Kubernetes - largely because it insists you use
               | containers - doesn't have any concept of "steps". You
               | tell it what you want your deployment to look like and it
               | makes it happen. You simply do not have the ability to
               | say, install this package, then edit this config file,
               | then start this service, then start these five clients.
               | It does make it harder to lift an existing design onto
               | Kubernetes, but it means the result is much more robust.
               | (For some of these things, you can use Dockerfiles, which
               | are in fact imperative steps - but once a build has
               | happened you use the image as an artifact. For other
               | things, you're expected to write your systems so that the
               | order between steps doesn't matter, which is quite a big
               | thing to ask, but it is the only manageable way to
               | automate large-scale deployments. On the flip side, it's
               | overkill for scale-of-one tasks like setting up your
               | development environment on a new laptop.)
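               | 
               | To make the contrast concrete, here's a rough sketch
               | (module names are stock Ansible, everything else is made
               | up). The Ansible side is a list of steps that run top to
               | bottom:
               | 
               |     # playbook.yml - imperative: steps run in order
               |     - hosts: web
               |       tasks:
               |         - name: install nginx
               |           apt:
               |             name: nginx
               |             state: present
               |         - name: start nginx
               |           service:
               |             name: nginx
               |             state: started
               | 
               | The Kubernetes side is a single desired end state; there
               | is no "then":
               | 
               |     # deployment.yaml - declarative: just the end state
               |     apiVersion: apps/v1
               |     kind: Deployment
               |     metadata:
               |       name: nginx
               |     spec:
               |       replicas: 2
               |       selector:
               |         matchLabels:
               |           app: nginx
               |       template:
               |         metadata:
               |           labels:
               |             app: nginx
               |         spec:
               |           containers:
               |             - name: nginx
               |               image: nginx:1.19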
        
               | fastball wrote:
               | How does K8s know what order to do things in if there
               | aren't steps? Because on a system, certain things
               | obviously need to happen before other things.
        
               | geofft wrote:
               | A combination of things, mostly related to Kubernetes'
               | scope and use case being different from
               | Ansible/CFEngine/etc. Kubernetes actually runs your
               | environment. Ansible/CFEngine/etc. set up an environment
               | that runs somewhere else.
               | 
               | This is basically the benefit of "containerization" -
               | it's not the containers _themselves_, it's the
               | constraints they place on the problem space.
               | 
               | Kubernetes gives you limited tools for doing things to
               | container images beyond running a single command - you
               | can run initContainers and health checks, but the model
               | is generally that you start a container from an image,
               | run a command, and exit the container when the command
               | exits. If you want the service to respawn, the whole
               | container respawns. If you want to upgrade it, you delete
               | the container and make a new one, you don't upgrade it in
               | place.
               | 
               | If you want to, say, run a three-node database cluster,
               | an Ansible playbook is likely to go to each machine,
               | configure some apt sources, install a package, copy some
               | auth keys around, create some firewall rules, start up
               | the first database in initialization mode if it's a new
               | deployment, connect the rest of the databases, etc. You
               | can't take this approach in Kubernetes. Your software
               | comes in via a Docker image, which is generated from an
               | imperative Dockerfile (or whatever tool you like), but
               | that happens _ahead of time_, outside of your running
               | infrastructure. You can't (or shouldn't, at least)
               | download and install software when the container starts
               | up.
               | 
               | You also can't control the order when the containers
               | start up - each DB process must be capable of syncing up
               | with whichever DB instances happen to be running when it
               | starts up. You can have a "controller" (https://kubernete
               | s.io/docs/concepts/architecture/controller/) if you want
               | loops, but a controller isn't really set up to be fully
               | imperative, either. It gets to say, I want to go from
               | here to point B, but it doesn't get much control of the
               | steps to get there. And it has to be able to account for
               | things like one database server disappearing at a random
               | time. It can tell Kubernetes how point B looks different
               | from point A, but that's it.
               | 
               | And since Kubernetes only runs containers, and containers
               | abstract over machines (physical or virtual), it gets to
               | insist that every time it runs some command, it runs in a
               | _fresh_ container. You don't have to have any logic for,
               | how do I handle running the database if a previous
               | version of the database was installed. It's not - you
               | build a new fresh Docker image, and you run the database
               | command in a container from that image. If the command
               | exits, the container goes away, and Kubernetes starts a
               | _new_ container with another attempt to run that command.
               | It can do that because it's not managing systems you
               | provide it, it's managing containers that it creates. If
               | you need to incrementally migrate your data from DB
               | version 1 to 1.1, you can start up some _fresh_
               | containers running version 1.1, wait for the data to
               | sync, and then shut down version 1 - no in-place upgrades
               | like you'd be tempted to do on full machines.
               | 
               | And yeah, for databases, you need to keep track of
               | persistent storage, but that's explicitly specified in
               | your config. You don't have any problems with
               | configuration drift (a serious problem with large-scale
               | Ansible/CFEngine/etc.) because there's nothing that's
               | unexpectedly stateful. Everything is fully determined by
               | what's specified in the latest version of your manifest
               | because there's no other input to the system beyond that.
               | 
               | Again, the tradeoff is this makes quite a few constraints
               | on your system design. They're all constraints that are
               | long-term better if you're running at a large enough
               | scale, but it's not clear the benefits are worth it for
               | very small projects. I prefer running three-node database
               | clusters on stateful machines, for instance - but the
               | stateless web applications on top can certainly live in
               | Kubernetes, there's no sense caring about "oh we used to
               | run a2enmod but our current playbook doesn't run a2dismod
               | so half our machines have this module by mistake" or
               | whatever.
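               | 
               | For a rough idea of what "explicitly specified" looks
               | like for the three-node database case, the Kubernetes-
               | native shape is a StatefulSet with a volume claim
               | template - this is only a sketch (image and sizes are
               | placeholders; real clustering needs more than this):
               | 
               |     # db.yaml - each replica gets its own volume
               |     apiVersion: apps/v1
               |     kind: StatefulSet
               |     metadata:
               |       name: db
               |     spec:
               |       serviceName: db
               |       replicas: 3
               |       selector:
               |         matchLabels:
               |           app: db
               |       template:
               |         metadata:
               |           labels:
               |             app: db
               |         spec:
               |           containers:
               |             - name: db
               |               image: postgres:12
               |               volumeMounts:
               |                 - name: data
               |                   mountPath: /var/lib/postgresql/data
               |       volumeClaimTemplates:
               |         - metadata:
               |             name: data
               |           spec:
               |             accessModes: ["ReadWriteOnce"]
               |             resources:
               |               requests:
               |                 storage: 10Gi
               | 
               | The state lives in those declared volumes rather than in
               | anything you did to a machine.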
        
               | twblalock wrote:
               | You can apply your Kubernetes manifests in any order.
               | Stuff that depends on other stuff just won't come up
               | until the other stuff exists.
               | 
               | For example, I can declare a Pod that mounts a Secret. If
               | the Secret does not exist, the Pod won't start -- but
               | once I create the Secret the pod will start without
               | requiring further manual intervention.
               | 
               | What Kubernetes really is, under the hood, is a bunch of
               | controllers that are constantly comparing the desired
               | state of the world with the actual state, and taking
               | action if the actual state does not match.
               | 
               | The configuration model exposed to users is declarative.
               | The eventual consistency model means you don't need to
               | tell it what order things need to be done.
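               | 
               | Roughly what that Secret example looks like on the Pod
               | side (names are made up) - you can apply this before or
               | after the Secret exists, and the Pod just won't start
               | until the Secret shows up:
               | 
               |     # pod.yaml - mounts the Secret named below
               |     apiVersion: v1
               |     kind: Pod
               |     metadata:
               |       name: app
               |     spec:
               |       containers:
               |         - name: app
               |           image: myapp:1.0
               |           volumeMounts:
               |             - name: creds
               |               mountPath: /etc/creds
               |               readOnly: true
               |       volumes:
               |         - name: creds
               |           secret:
               |             secretName: db-creds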
        
               | silviogutierrez wrote:
               | Check out nix for actual development environments. Huge
               | fan of that as well.
               | 
               | I can buy a new laptop and be back to 100% in a few
               | minutes. Though the amount of time I spent learning how
               | to get there far exceeds any time savings. Ever.
        
               | open-paren wrote:
               | I've been testing out nix and I haven't found out how to
               | install packages in a declarative way yet. Using `nix-env
               | -iA <whatever>` seems really imperative. How are you
               | doing that? Do you use something like home-manager, or do
               | you just define a default.nix and then nix-shell it
               | whenever you need something?
        
               | takeda wrote:
               | Yes, `nix-env -iA` installs packages in an imperative
               | way. I think it is there to be some kind of tool that
               | people from other OSes can relate to. Purists say you
               | should avoid it and instead list globally installed
               | packages in `/etc/nixos/configuration.nix`, use
               | home-manager for user-specific ones, and if you just need
               | something temporarily to try it out, use
               | `nix-shell -p <whatever>`.
               | 
               | Back to your second question: you can configure the
               | system through `/etc/nixos/configuration.nix`, and that is
               | enough to configure a machine that runs a service - pretty
               | much everything you could do through
               | Chef/Puppet/Saltstack/Ansible/CFEngine etc.
               | 
               | home-manager takes it a step further and does this kind
               | of configuration per user. It is actually written in a
               | way that it can be added to NixOS (or nix-darwin for OS X
               | users) and integrate with the main configuration, so when
               | you're declaring users you can also provide a
               | configuration for each of them.
               | 
               | So it all depends on what you want to do. The main
               | configuration.nix is good enough if your machine runs a
               | specific service; that's pretty much all you need. You
               | don't care about per-user configuration in that
               | scenario, you just create users and start services using
               | them.
               | 
               | If you have a workstation, home-manager, while not
               | essential, can be used to take care of setting up your
               | individual user settings, stuff like dot-files (although
               | it goes beyond that). The benefit of using home-manager
               | is that most of what you configure in it should be
               | reusable on OS X as well.
               | 
               | If you care about local development, you can use Nix to
               | declare what is needed, for example[1]. This is
               | especially awesome if you have direnv + lorri installed.
               | 
               | You can add these to your home-manager configuration:
               | programs.direnv.enable = true;
               | services.lorri.enable = true;
               | 
               | When you do that, you magically get your CDE (that
               | includes all the needed tools - in this case the proper
               | Python version - plus the equivalent of a virtualenv with
               | all dependencies and extra tools installed) just by
               | entering the directory. If you don't have direnv + lorri
               | installed, all you have to do is call `nix-shell`.
               | 
               | I also can't wait for Flakes[2] to get merged. This will
               | standardize setups like this and enable other
               | possibilities.
               | 
               | [1] https://github.com/takeda/example_python_project
               | 
               | [2] https://www.tweag.io/blog/2020-05-25-flakes/
        
               | silviogutierrez wrote:
               | I've avoided home manager for now. I'll get into it soon.
               | 
               | Instead, every project I work on has a shell.nix in the
               | root (and if it's not a project I control, I have a
               | shell.nix mapping elsewhere).
               | 
               | Check it out, run nix-shell. Profit.
               | 
               | Once you're really ready for the big leagues, run it with
               | --pure.
        
               | sam_lowry_ wrote:
               | Hm... I just dd my hard disk to the new one and launch
               | gparted to resize.
               | 
               | A few years ago I even bothered to have two EFI loaders:
               | one for AMD and one for Intel, in case I want to change
               | architecture as well.
        
               | takeda wrote:
               | Chef and (from what I heard - I didn't use it) Puppet
               | are declarative, but since their DSL is really Ruby, it
               | is really easy to introduce imperative code.
               | 
               | Ansible uses YAML, but the few times I used it, it felt
               | like you still use it in an imperative way.
               | 
               | Saltstack (which also uses YAML) was the closest from
               | that group (I never used CFEngine, but its author wrote a
               | research paper showing that declarative is the way to
               | go, so I would imagine he also implemented it that
               | way).
               | 
               | If you truly want a declarative approach designed from
               | the ground up, then you should try Nix or NixOS.
        
               | reddit_clone wrote:
               | Chef can be declarative. If you stick to the resources
               | and templatized config files.
               | 
               | But it has the full power of Ruby at your disposal (both
               | at load/compile time and run time), so it usually turns
               | imperative quickly.
        
               | james-mcelwain wrote:
               | Even though Ansible is declarative in spirit, in practice
               | I feel like a lot of playbooks just read like imperative
               | code.
        
               | fastball wrote:
               | I use Ansible for managing my infra, and the only time my
               | playbooks look imperative is when I execute a shell
               | script or similar, which is about 5% of total commands in
               | my playbooks.
        
               | takeda wrote:
               | One way to test whether your playbook is declarative is
               | to try rearranging the states into a different order. If
               | the playbook breaks with a different order, it is
               | imperative.
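               | 
               | A concrete (made-up) example of a playbook that fails
               | that test - swap these two tasks and the first run
               | breaks, because /etc/nginx doesn't exist until the
               | package is installed:
               | 
               |     # order matters here
               |     - name: install nginx
               |       apt:
               |         name: nginx
               |         state: present
               | 
               |     - name: drop in the site config
               |       copy:
               |         src: mysite.conf
               |         dest: /etc/nginx/conf.d/mysite.conf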
        
           | MuffinFlavored wrote:
           | What about Terraform?
        
           | tapoxi wrote:
           | I've been using Kubernetes exclusively for the past two years
           | after coming from a fairly large Saltstack shop. I think
           | traditional configuration management is flawed. Configuration
           | drift _will_ happen because something, somewhere, will do
           | something you or the formula/module/playbook didn't account
           | for. A Dockerfile builds the world from (almost) scratch and
           | forces the runtime environment to be stateless. A CM tool
           | constantly tries to shape the world to its image.
           | 
           | Kubernetes isn't a silver bullet of course, there will be
           | applications where running it in containers adds unnecessary
           | complexity, and those are best run in a VM managed by a CM
           | tool. I'd argue using k8s is a safe default for deploying new
           | applications going forward.
        
           | kissgyorgy wrote:
           | Ansible configuration is imperative (you need to run
           | playbooks in order) but Kubernetes YAML is declarative. That
           | alone is a huge difference!
        
           | 2kewl4skewl wrote:
           | Containers are the big difference.
           | 
           | Kubernetes is one way to deploy containers. Configuration
           | systems like Ansible/Salt/Puppet/Chef/etc are another way to
           | deploy containers.
           | 
           | Kubernetes also makes it possible to dynamically scale your
           | workload. But so do Auto Scaling Groups (AWS terminology)
           | and GCP/Azure equivalents.
           | 
           | The reality is that 99% of users don't actually need
           | Kubernetes. It introduces a huge amount of complexity,
           | overhead, and instability for no benefit in most cases. The
           | tech industry is highly trend driven. There is a lot of cargo
           | culting. People want to build their resumes. They like
           | novelty. Many people incorrectly believe that Kubernetes is
           | _the way_ to deploy containers.
           | 
           | And they (and their employers) suffer for it. Most users
           | would be far better off using boring statically deployed
           | containers from a configuration management system. Auto-
           | scaled when required. This can also be entirely
           | infrastructure-as-code compliant.
           | 
           | Containers are the real magic. But somehow people confused
           | Kubernetes as a replacement for Docker containers, when it
           | was actually a replacement for Docker's orchestration
           | framework: Docker Swarm.
           | 
           | In fact, Kubernetes is a very dangerous chainsaw that most
           | people are using to whittle in their laps.
        
             | geggam wrote:
             | >In fact, Kubernetes is a very dangerous chainsaw that most
             | people are using to whittle in their laps.
             | 
             | So many people miss this. k8s is a very complex system and
             | the talent it takes to manage it well, rare.
             | 
             | Extremely rare.
        
             | closeparen wrote:
             | Hm. Systemd already runs all your services in cgroups, so
             | the same resource limit handles are available. It doesn't
             | do filesystem isolation by default, but when we're talking
             | about Go / Java / Python / Ruby software does that even
             | matter? You statically link or package all your
             | dependencies anyway.
        
               | notyourday wrote:
               | Not only does systemd run your code, it also does file
               | system isolation built-in, runs containers (both
               | privileged and non-privileged), and sets up virtual
               | networking for free.
               | 
               | systemd-nspawn / machined makes the other systems look
               | like very complicated solutions in search of a problem
        
         | tombert wrote:
         | Swarm is still supported and works. I have it running on my
         | home server and love it.
         | 
         | Kubernetes is fine, but setting it up kind of feels like I'm
         | trying to earn a PhD thesis. Swarm is dog-simple to get working
         | and I've really had no issues in the three years that I've been
         | running it.
         | 
         | The configs aren't as elaborate or as modular as Kubernetes,
         | and that's a blessing as well as a curse; it's easy to set up
          | and administer, but you have less control. Still, for small-
          | to-mid-sized systems, I would recommend Swarm.
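          | 
          | For reference, a Swarm stack file is just a compose file with
          | a deploy section - something like this sketch (image and
          | numbers are placeholders) fed to `docker stack deploy`:
          | 
          |     version: "3.8"
          |     services:
          |       web:
          |         image: nginx:alpine
          |         ports:
          |           - "80:80"
          |         deploy:
          |           replicas: 2
          |           restart_policy:
          |             condition: on-failure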
        
           | ithkuil wrote:
           | > setting it up kind of feels like I'm trying to earn a PhD
           | thesis.
           | 
           | The kind of person who has to both set the cluster up and
           | keep it running, and also develop the application, deploy
           | it, and keep it up, etc., is not the target audience.
           | 
           | K8s shines when the roles of managing the cluster and running
           | workloads on it are separated. It defines a good contract
           | between infrastructure and workload. It lets different people
           | focus on different aspects.
           | 
           | Yes, it still has rough edges, things that are either not
           | there yet, or vestigial complexity from wrong turns that
           | happened through its history. But if you look at it through
           | the lens of this corporate scenario it starts making more
           | sense than when you just think of what a full-stack dev in a
           | two-person startup would rather use and fully own/understand.
        
             | DelightOne wrote:
             | What are Kubernetes' rough edges?
        
             | dirtydroog wrote:
             | There are elements of our company that want to move to
             | Kubernetes for no real reason other than it's Kubernetes. I
             | can't wait to see the look on their faces when they realise
             | we'll have to employ someone full-time to manage our stack.
        
               | themacguffinman wrote:
               | I'm not sure you have to, that seems like the whole point
               | of managed services like GKE.
        
             | bulldoa wrote:
             | Do you have a recommended tutorial for an engineer with a
             | backend background to set up a simple k8s infra in EC2? I
             | am interested in understanding the devops role better.
        
           | theptip wrote:
           | > setting it up kind of feels like I'm trying to earn a PhD
           | thesis.
           | 
           | Are you following "k8s the hard way"? I've never had this
           | problem; either:
           | 
           | `gcloud container clusters create`
           | 
           | Or
           | 
           | `install docker-for-mac`
           | 
           | And you have a k8s cluster up and running. Maybe it's more
           | work on AWS?
        
         | dustym wrote:
         | Yup, even monoliths can benefit from certain k8s tooling (HPAs,
         | batch jobs, etc).
        
           | silviogutierrez wrote:
           | "What cron jobs does this app run?"
           | 
           | Open cron.yaml and see. With schedule. Self documented.
           | 
           | Amazing. Every time. Even while some of my k8s battle wounds
           | are still healing (or permanently scarred). See other replies
           | for more info.
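           | 
           | For anyone who hasn't seen one, a single entry is roughly
           | this - the name, image and command are placeholders:
           | 
           |     # cron.yaml - one entry
           |     apiVersion: batch/v1beta1
           |     kind: CronJob
           |     metadata:
           |       name: send-reports
           |     spec:
           |       schedule: "0 3 * * *"
           |       jobTemplate:
           |         spec:
           |           template:
           |             spec:
           |               restartPolicy: OnFailure
           |               containers:
           |                 - name: send-reports
           |                   image: myapp:latest
           |                   command: ["bin/send-reports"]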
        
             | StreamBright wrote:
             | Just like my Ansible repo.
        
         | [deleted]
        
         | sp332 wrote:
         | I've just started to look into it, but it seems like the
         | project has been focusing on improving the onboarding
         | experience since it has a reputation for being a huge pain to
         | set up. Do you think it has gotten easier lately?
        
           | silviogutierrez wrote:
           | No. Not easier in my opinion. And some of the fires you only
           | learn after getting burnt _badly_. [1]
           | 
           | Note: my experience was all with cloud-provided Kubernetes,
           | never running my own. So it was already an order of magnitude
           | easier. Can't even imagine rolling my own. [2]
           | 
           | [1] My personal favorite. Truly egregious, despite how
           | amazing k8s is. https://github.com/kubernetes/kubernetes/issu
           | es/63371#issuec...
           | 
           | [2] https://github.com/kelseyhightower/kubernetes-the-hard-
           | way
        
         | mfer wrote:
         | Out of curiosity, are you using terraform to deploy k8s, your
         | app stack on k8s, or both?
        
           | throw1234651234 wrote:
           | To follow this - what do you feel K8S provides on top of
           | terraform?
           | 
           | We used K8S on a large project and I felt like it really,
           | really wasn't necessary.
        
             | nvarsj wrote:
             | They operate at different layers. K8s sits on top of the
             | infrastructure which terraform provisions. It's far more
             | dynamic and operates at runtime, compared to terraform
             | which you execute ad-hoc from an imperative tool (and so
             | only makes sense for the low level things that don't change
             | often).
        
             | verdverm wrote:
             | Consistency and standardized interfaces for AppOps
             | regardless of the hyper-cloud I use. Kubernetes basically
             | has an equivalent learning curve, but you only have to do
             | it once
        
             | StreamBright wrote:
              | Nothing. We use Terraform to provision a simple auto-
              | scaling cluster with loadbalancers and certs; it does
              | exactly the same thing but there is no Docker and k8s. A
              | few million fewer lines of Go code turning yaml files into
              | segfaults.
        
           | silviogutierrez wrote:
           | So terraform is a higher-order, meta-Kubernetes. It's very
           | rarely used, but who provisions the cluster itself? That's
           | terraform.
           | 
           | So terraform creates the cluster, DNS and VPC. Then k8s runs
           | pretty much everything.
        
             | mfer wrote:
             | How are you deploying the workloads into the cluster?
             | Manual kubectl or Helm, GitOps with something like Flux,
             | something else?
        
               | silviogutierrez wrote:
               | Alas, still rely on bash for that. Practically a one
               | liner.
               | 
               | Mainly just kustomize piped into kubectl apply.
               | 
               | But, but, but. Having to create a one-off database
               | migration script imperatively.
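               | 
               | One workaround is wrapping the migration in a one-shot
               | Job - roughly this sketch (name, image and command are
               | placeholders):
               | 
               |     # migrate-job.yaml
               |     apiVersion: batch/v1
               |     kind: Job
               |     metadata:
               |       name: migrate-20200529
               |     spec:
               |       backoffLimit: 0
               |       template:
               |         spec:
               |           restartPolicy: Never
               |           containers:
               |             - name: migrate
               |               image: myapp:latest
               |               command: ["bin/migrate"]
               | 
               | It's still a one-off you have to remember to apply, but
               | at least it's the same YAML-and-apply motion as the rest.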
        
               | ngcc_hk wrote:
               | Ooh!
        
             | pvorb wrote:
             | Why would you want to provision your own k8s cluster, if
             | you can use EKS, AKS or similar?
        
               | kissgyorgy wrote:
               | One example I worked on myself: when you need to train
               | lots of ML models and need lots of video cards. Those
               | would be damn expensive in the cloud!
        
               | silviogutierrez wrote:
               | I don't. I use a cloud cluster. But that still has to be
               | provisioned? You need to choose size, node pool, VPC,
               | region, etc.
        
               | cure wrote:
               | EKS, GKE and the like have a number of limitations. For
               | example: they can be pretty far behind in the version of
               | K8S they support (GKE is at 1.15 currently, EKS at 1.16;
                | K8S 1.18 was released at the end of March this year).
        
               | [deleted]
        
         | herval wrote:
         | Have you tried or considered Nomad (from the makers of
         | Terraform)?
        
           | silviogutierrez wrote:
           | I haven't, and since I've sunk the cost into Kubernetes and
           | know it very well now, likely won't end up there.
           | 
           | In retrospect though, maybe it's exactly what I needed. Great
           | suggestion.
        
           | itgoon wrote:
           | I've been using Nomad for my "toy" network, and I like it. It
           | runs many services, and a few periodic jobs. Lightweight,
           | easy to set up, and has enough depth to handle some of the
           | weirder stuff.
        
           | p_l wrote:
           | can it handle networking (including load balancing and
           | reverse proxies with automatic TLS) or virtualized persistent
           | storage? Make it easy to integrate common logging system?
           | 
           | Cause those are the parts that I miss probably the most when
           | dealing with non-k8s deployment, and I haven't had the
           | occasion to use Nomad.
        
             | chokolad wrote:
             | nomad is strictly a job scheduler. If you want networking
             | you add consul to it and they integrate nicely. Logging is
              | handled similarly to Kubernetes. The cool thing about
              | Nomad is that it's less prescriptive.
        
             | adadgar wrote:
             | For load balancing you can just run one of the common LB
             | solutions (nginx, haproxy, Traefik) and pick up the
             | services from the Consul service catalog. Traefik makes it
             | quite nice since it integrates with LetsEncrypt and you can
             | setup the routing with tags in your Nomad jobs:
             | https://learn.hashicorp.com/nomad/load-balancing/traefik
             | 
             | What Nomad doesn't do is setup a cloud provider load
             | balancer for you.
             | 
             | For persistent storage, Nomad uses CSI which is the same
             | technology K8s does:
             | https://learn.hashicorp.com/nomad/stateful-workloads/csi-
             | vol...
             | 
             | Logging should be very similar to K8S. Both Nomad and K8S
             | log to a file and a logging agent tails and ships the logs.
             | 
             | Disclosure, I am a HashiCorp employee.
        
             | tetha wrote:
             | Those are the advantage and the problem of nomad. We're
             | using it a lot by now.
             | 
             | Nomad, or rather, a Nomad/Consul/Vault stack doesn't have
             | these things included. You need to go and pick a consul-
             | aware loadbalancer like traefik, figure out a CSI volume
             | provider or a consul-aware database clustering like
             | postgres with patroni, think about logging sidecars or
             | logging instances on container hosts. Lots of fiddly,
             | fiddly things to figure out from an operative perspective
             | until you have a platform your development can just use.
             | Certainly less of an out-of-the-box experience than K8.
             | 
             | However, I would like to mention that K8 can be an evil
             | half-truth. "Just self-hosting a K8 cluster" basically
              | means doing all of the shit above, except it's "just self-
             | hosting k8". Nomad allows you to delay certain choices and
             | implementations, or glue together existing infrastructure.
             | 
              | K8 requires you to redo everything, pretty much.
        
         | lifeisstillgood wrote:
          | So are you saying that, no matter what, if you want to deploy
          | your whole infrastructure as code (networks, dmz, hosts,
          | services, apps, backups etc.) you are going to have to
          | reproduce that somehow (whatever the combo of AWS services),
          | OR just learn K8S?
         | 
         | Effectively, "every infrastructure as code project will
         | reimplement Kubernetes in Bash"
        
           | aprdm wrote:
            | Not necessarily. You can have all of the above with
            | Terraform, Ansible, Puppet, Chef... etc.
        
           | silviogutierrez wrote:
           | Exactly. That's a great way to put it. A bunch of Bash glue
           | reimplementing what Kubernetes already does. Poorly.
        
         | bosswipe wrote:
         | According to the article you are wrong about "infrastructure as
         | code". Kubernetes is infrastructure as data, specifically YAML
         | files. Puppet and Chef are infrastructure as code.
         | 
         | Edit: not sure why the down votes, I was just trying to point
         | out what seems like a big distinction that the article is
         | trying to make.
        
           | monadic2 wrote:
           | What's the material difference between well-formatted data
           | and a DSL? Why does this matter?
        
           | danaur wrote:
           | Seems like a nitpick? Infra as data seems like a subset of
           | infrastructure as code
        
           | silviogutierrez wrote:
           | Maybe? I'm too lazy to formally verify if the YAML files k8s
           | accepts are Turing complete. With kustomize they might very
           | well be.
           | 
           | How about "infrastructure-as-some-sort-of-text-file-
           | versioned-in-my-repository". It's a mouthful, but maybe it'll
           | catch on.
        
             | lmkg wrote:
             | Infrastructure-as-config?
        
             | geofft wrote:
             | They don't do loops or recursion. They don't even do
             | iterative steps in the way that Ansible YAML has
             | plays/tasks.
             | 
             | Yes, higher-level tools like Kustomize or Jsonnet or
             | whatever else you use for templating the files are Turing-
             | complete - but that's at the level of you on your machine
             | generating input to Kubernetes, not at the level of
             | Kubernetes itself. That's a valuable distinction - it means
             | you can't have a Kubernetes manifest get halfway through
             | and fail the way that you can have an Ansible playbook get
             | halfway through and fail; there's no "halfway." If
             | something fails halfway through your Jsonnet, it fails in
             | template expansion without actually doing anything to your
             | infrastructure.
             | 
             | (You can, of course, have it run out of resources or hit
             | quota issues partway through deploying some manifest, but
             | there's no ordering constraint - it won't refuse to run the
             | "rest" of the "steps" because an "earlier step" failed,
             | there's no such thing. You can address the issue, and
             | Kubernetes will resume trying to shape reality to match
             | your manifest just as if some hardware failed at runtime
             | and you were recovering, or whatever.)
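              | 
              | (As a concrete example of "generating input": a minimal
              | kustomization.yaml just lists the manifests to stitch
              | together, and you expand it locally with `kustomize
              | build` before anything reaches the cluster - file and
              | image names here are made up.
              | 
              |     # kustomization.yaml
              |     apiVersion: kustomize.config.k8s.io/v1beta1
              |     kind: Kustomization
              |     resources:
              |       - deployment.yaml
              |       - service.yaml
              |     images:
              |       - name: myapp
              |         newTag: "1.0.1"
              | 
              | If that expansion fails, nothing has been applied yet.)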
        
           | AgentME wrote:
           | I think you could think of "infrastructure as code" as he
           | described it as a superset of "infrastructure as data". Both
           | have the benefit of being able to be reproducibly checked
           | into a repo. Declarative systems like
           | Kubernetes/"infrastructure as data" just go even further in
           | de-emphasizing the state of the servers and make it harder to
           | get yourself into unreproducible situations.
        
       | nelsonenzo wrote:
       | as a sys-admin, I like k8s because it solves sys-admin problems
       | in a standardized way. Things like, safe rolling deploys,
       | consolidated logging, liveness and readiness probes, etc. And
       | yes, also because it's repeatable. It takes all the boring tasks
        | of my job and lets me focus on more meaningful work, like
       | dashboards and monitoring.
        
         | honkycat wrote:
         | Yep, same here. Once you learn it, it is a standardized
         | consistent API and becomes a huge force multiplier
        
           | p_l wrote:
           | k8s is a lever to scale sysadmins power, not scale services
           | to huge numbers. :)
        
       | heipei wrote:
       | My question is: Why is only k8s so popular when there are better
       | alternatives for a large swath of users? I believe the answer is
       | "Manufactured Hype". k8s is from a purely architectural
       | standpoint the way to go, even for smaller setups, but the
       | concrete project is still complex enough that it requires dozens
       | of different setup tools and will keep hordes of consultants as
       | well as many hosted solutions from Google/AWS/etc in business for
       | some time to come, so there's a vested interest in continuing to
       | push it. Everyone wins, users get a solid tool (even if it's not
       | the best for the job) and cloud providers retain their unique
       | selling point over people setting up their own servers.
       | 
       | I still believe 90% of users would be better served by Nomad. And
       | if someone says "developers want to use the most widely used
       | tech", then I'm here to call bullshit, because the concepts
       | between workload schedulers and orchestrators like k8s and nomad
       | are easy enough to carry over from one side to the other.
       | Learning either even if you end up using the other one is not a
       | waste of time. Heck, I started out using CoreOS with fleetctl and
       | even that taught me many valuable lessons.
        
         | dnautics wrote:
         | I'm resisting kubernetes and might go with nomad (currently I'm
         | "just using systemd" and I get HA from the BEAM VM)... But I do
         | also get the argument that the difference between kubernetes
         | and nomad is that increasingly kubernetes is supported by the
         | cloud vendors, and nomad supports the cloud vendors.
        
         | verdverm wrote:
         | What are these alternatives with more users?
         | 
         | Where is the momentum?
         | 
         | Hosted GKE costs the same per month as an hour of DevOps time,
         | what's wrong with paid management for k8s?
        
           | heipei wrote:
           | I didn't say more users, I said appropriate for more users.
           | The alternative I mentioned is Nomad and I wish more people
           | would give it a try and decide for themselves. The momentum
           | behind it is Hashicorp, makers of Vault, Consul, Terraform,
           | Vagrant, all battle-proven tools. The fact that there's one
           | big player behind it really shows in how polished the tool,
           | UI and documentation is.
           | 
           | The issue that I have with managed k8s is that these products
           | will decrease the pressure to improve k8s documentation,
           | tooling and setup itself. And then there's folks (like me)
           | who want or need to run something like k8s on bare metal
           | hardware outside of a cloud where the cloud-managed solution
           | isn't available.
        
         | jsmith12673 wrote:
         | I got a bit disillusioned with k8s and looked at Nomad as an
         | alternative.
         | 
         | As a relatively noob sysadmin, I liked it a lot. Easy to deploy
         | and easy to maintain. We've got a lot of mixed rented hardware
         | + cloud VPS, and having one layer to unify them all seemed
         | great.
         | 
          | Unfortunately I had a hard time convincing the org to give it
          | a serious shot. At the crux of it, it wasn't clear what
          | 'production ready' Nomad should look like. It seemed like Nomad
         | is useless without Consul, and you really should use Vault to
         | do the PKI for all of it.
         | 
         | It's a bit frustrating how so many of the HashiCorp products
          | are 'in for a penny, in for a pound' type deals. I know there's
         | _technically_ ways for you use Nomad without Consul, but it
         | didn't seem like the happy path, and the community support was
         | non-existent.
         | 
         | Please tell me why I'm wrong lol, I really wanted to love
          | Nomad. We are running a mix of everything and it's a nightmare.
        
           | kelnos wrote:
           | I'm sympathetic toward the idea of a system made of
           | interchangeable parts, but I also kinda feel like it's a bit
           | unrealistic, maybe? Even with well-defined interfaces, there
           | will always be interop problems due to bugs or just people
           | interpreting the interface specs differently. Every new piece
           | to the puzzle adds another line (or several) to a testing
           | matrix, and most projects just don't have the time and
           | resources to do that kind of testing. It's unfortunate, but
           | IMO understandable that there's often a well-tested happy
           | path that everyone should use, even when theoretically things
           | are modular and replaceable.
        
         | ravenstine wrote:
         | > I still believe 90% of users would be better served by Nomad.
         | 
         | Well sure, but if the story just ended with "everyone use the
         | least exciting tool", then there'd be few articles for tech
         | journals to write.
         | 
         | But Kubernetes promises so much, and deep down everyone subtly
         | thinks "what if I have to scale my project?" Why settle for
         | _good enough_ when you could settle for  "awesome"? It's just
         | human nature to choose the most exciting thing. And given that
         | I do agree that there's some manufactured hype around
         | Kubernetes, it isn't surprising to me why few are talking about
         | Nomad.
        
       | nasmorn wrote:
       | I host about a dozen rails apps of different vintage and started
       | switching from Dokku to Digital Ocean Kubernetes. I had a basic
       | app deployed with load balancer and hosted DB in about 6 hours.
       | Services like the nginx ingress are very powerful and it all
       | feels really solid. I never understood Dokku internals either so
       | them being vastly simpler is no help for me. I figured for better
       | or worse kubernetes is here to stay and on DO it is easier than
       | doing anything on AWS really. I have used AWS for about 5 years
       | and have inherited things like terraformed ECS clusters and
       | Beanstalk apps. I know way more about AWS but I feel you need to
       | know so much that unless you only do ops you cannot really keep
       | up.
        
         | koeng wrote:
         | I found deploying databases with Dokku to be really intuitive.
         | CockroachDB is great, but still a lot more steps than dokku
         | postgres:create <db>. The whole certificates thing is quite
         | confusing. Otherwise, k3s on-prem is great
        
       | JackRabbitSlim wrote:
       | I get the feeling K8 is the modern PHP. Software that's easy to
       | pick up and use without complete understanding and get something
       | usable. Even if its not efficient and results in lots of
       | technical debt.
       | 
        | And like PHP, it will be criticised with the power of hindsight
       | but will continue to be used and power vast swaths of the
       | internet.
        
         | pyrophane wrote:
         | I don't think this is right. The reason I say that is because
         | for the most part, teams new to k8s aren't building and
         | managing their own clusters, they are using a managed solution.
         | In that case, an application deployment only need be a few
         | dozen lines of yaml. Most teams aren't really going to be
         | building deep into k8s, and it shouldn't be hard to deploy your
         | containers to some other managed solution.
        
         | iso-8859-1 wrote:
         | But languages are easy, there is the whole field of PL theory
         | to draw from. If you're randomly throwing things together like
         | Lerdorf was, there's a missed opportunity.
         | 
         | But what is the universally regarded theory that k8s
         | contradicts? I don't think there is one.
        
           | p_l wrote:
            | In fact, I'd say that k8s is unusually heavily steeped in
           | high-brow theories from both engineering and AI space. Just
           | not necessarily ones that enjoy hype right now.
           | 
           | The storage of apiserver essentially works as distributed
           | Blackboard in a "Blackboard System", with every controller
           | being an agent in such a system. Meanwhile the agents
           | themselves approach their tasks from control theory areas -
           | oft used comparison is with PID controllers.
        
       | pyrophane wrote:
       | Honestly, at least with GKE, hosting applications on managed k8s
       | is not that complicated, to the point that I don't think it is a
       | poor choice even for small teams who might not need all the bells
       | and whistles of k8s. That is, so long as that small team is
       | already on board with CI and containers.
        
       | AzzieElbab wrote:
       | Because the demos are awesome and there is a lot of money to be
       | made in getting it beyond demos
        
       | tristor wrote:
       | The simple answer is that Kubernetes isn't really any of the
       | things it's been described as. What it /is/, though, is an
       | operating system for the Cloud. It's a set of universal
       | abstraction layers that can sit on top of and work with any IaaS
       | provider and allows you to build and deploy applications using
       | infrastructure-as-code concepts through a standardized and
       | approachable API.
       | 
       | Most companies who were late on the Cloud hype cycle (which is
       | quite a lot of F100s) got to see second-hand how using all the
       | nice SaaS/PaaS offerings from major cloud providers puts you over
       | a barrel and don't have any interest in being the next victim,
       | and it's coming at the same time that these very same companies
       | are looking to eliminate expensive commercially licensed
       | proprietary software and revamp their ancient monolithic
        | applications into modern microservices. The culmination of these
       | factors is a major facet of the growth of Kubernetes in the
       | Enterprise.
       | 
       | It's not just hype, it has a very specific purpose which it
       | serves in these organizations with easily demonstrated ROI, and
       | it works. There /are/ a lot of organizations jumping on the
       | bandwagon and cargo-culting because they don't know any better,
       | but there are definitely use cases where Kubernetes shines.
        
         | sev wrote:
         | I think this is a good answer. I'll add that as soon as you
         | need to do something slightly more complex, without something
         | like k8s you aren't going to be happy with your life. With k8s,
         | it's almost a 1 liner. For example, adding a load balancer or a
         | network volume or nginx or an SSL cert or auto scaling
         | or...or...or...
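          | 
          | To illustrate the load balancer case, it's roughly this much
          | YAML (names and ports are placeholders; the cloud provider
          | provisions the actual LB for you):
          | 
          |     # service.yaml
          |     apiVersion: v1
          |     kind: Service
          |     metadata:
          |       name: myapp
          |     spec:
          |       type: LoadBalancer
          |       selector:
          |         app: myapp
          |       ports:
          |         - port: 443
          |           targetPort: 8080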
        
           | d23 wrote:
           | Come on. What? Setting up a load balancer or nginx is
           | considered complex now?
        
             | recursive wrote:
             | Depends who's doing the configuring. I don't know how to do
             | it. But nor do I know how to use k8s for that matter.
        
             | frompdx wrote:
             | And one hour photo is now considered slow.
        
               | poisonborz wrote:
               | This.
        
             | th0ma5 wrote:
             | Well, set me up one. :P
        
               | CameronNemo wrote:
               | my pubkeys are below, send me an ip address and port to
               | an ssh server
               | 
               | https://github.com/cameronnemo.keys
        
               | th0ma5 wrote:
               | My what and what? :P I do get your point, it is "easy"
               | for some definition of such, but to be fair, k8s would
               | automatically put the ip and port in for my part of it
               | all at least.
        
         | nihil75 wrote:
          | IaaS is not "the cloud". It was in 2008, when all we had was EC2
         | and RDS.
         | 
         | Today Kubernetes is the antithesis of the cloud - Instead of
         | consuming resources on demand you're launching VMs that need to
         | run 24/7 and have specific roles and names like "master-1".
         | Might as well rent bare-metal servers. It will cost you less.
        
           | specialp wrote:
           | If that is your definition of "cloud" then most stuff running
           | on AWS and other "cloud" providers isn't. I agree that
           | Kubernetes and even containers aren't the end all. I think
           | they are a stepping stone to true on demand where you have
           | the abstraction of just sandboxed processes compiled to WASM
           | or something run wherever.
           | 
           | But as of where we are now, it is a good abstraction to get
           | there. It provides a lot of stuff like service discovery,
           | auto-scaling and redundancy. Yes you do need to have
           | instances to run K8s, but that is as of date the only
           | abstraction that we have on all cloud providers, local
           | virtualization, and bare metal. So yes it isn't true on
           | demand "cloud" but in order to work like that you need to fit
           | into your service provider's framework and accept limitations
           | on container size, runtime, deal with warm up times
           | occasionally.
        
             | nihil75 wrote:
             | We had (have) discovery, auto-scaling and redundancy in
             | PaaS. Most apps could run just fine in Cloud Foundry/App
             | Engine/Beanstalk/Heroku. But the devs insist on MongoDB &
             | Jenkins instead of using the cloud-provider solution and
             | now you're back to defining VPCs, scaling policies, storage
             | and whatnot.
        
           | deadmutex wrote:
           | Doesn't kubernetes also have autoscaling capabilities?
           | 
           | https://kubernetes.io/blog/2016/07/autoscaling-in-
           | kubernetes...
        
             | cortesoft wrote:
             | That is autoscaling within the cluster.... not the cluster
             | itself.
             | 
             | Although, managed kubernetes clusters let you auto-scale
              | the cluster itself, so I think the GP is wrong.
        
               | frompdx wrote:
               | Check out Cluster AutoScaler: https://github.com/kubernet
               | es/autoscaler/tree/master/cluster...
               | 
               | This tool allows you to autoscale the cluster itself with
               | various cloud providers. There is a list of cloud
               | providers it supports at the end of the readme.
        
           | DenisM wrote:
           | > Might as well rent bare-metal servers
           | 
           | In light of this statement, what do you make of the fact that
           | billions of dollars are spent on EC2? And of the people who
           | spend that money?
        
             | nihil75 wrote:
             | Outdated. We were further ahead in the abstraction ladder
             | with PaaS, and honestly most apps could run perfectly fine
             | in Beanstalk/App Engine/Cloud Foundry/Heroku. But then the
             | devs demand Jenkins, Artifactory and MongoDB, and instead
             | of using cloud-provider alternatives you're back to
             | defining VPCs and autoscaling groups.
        
           | enos_feedler wrote:
           | Does cloud really mean "resources on demand"?
        
             | ableal wrote:
             | Jonathan Schwartz, last CEO of Sun Microsystems, in March
             | 2006:
             | 
             |  _" Frankly, it's been tough to convince the largest
             | enterprises that a public grid represents an attractive
             | future. Just as I'm sure George Westinghouse was confounded
             | by the Chief Electricity Officers of the time that resisted
             | buying power from a grid, rather than building their own
             | internal utilities."_
             | 
             | https://jonathanischwartz.wordpress.com/2006/03/20/the-
             | netwo...
        
           | frompdx wrote:
           | > Might as well rent bare-metal servers. It will cost you
           | less.
           | 
            | Long term, sure, but the low up-front costs are what make
            | cloud services appealing.
           | 
           | FWIW, it's possible to minimize your idle VM costs to an
           | extent. For example, you could use one or more autoscale
           | groups for your cluster and keep them scaled to one vm each.
           | Then use tools like cluster auto scaler to resize on demand
           | as your workload grows. You are correct that idle vm costs
           | can't be completely avoided. At least not as far as I am
           | aware.
        
         | verdverm wrote:
          | Yes, people ought to do a side-by-side comparison of a new user
          | learning K8S vs AWS vs GCP before claiming Kubernetes adds
         | more complexity than it returns in benefits.
         | 
         | Remember the first time you saw the AWS console? And the last
         | time?
        
           | yongjik wrote:
           | Hmm really? My experience is that, with k8s I end up learning
           | all the complexities of k8s _in addition to_ AWS.
           | 
           | Besides, personally I find AWS console much easier to
           | understand. I don't get why people hate it.
        
             | frompdx wrote:
             | > I don't get why people hate it.
             | 
             | Because it is hard to manage the configuration. It's why
             | tools like terraform exist.
             | 
             | Anecdote. I worked for a small company that was later
             | acquired. It turned out one of the long time employees had
             | set up the company's AWS account using his own Amazon
              | account. Bad on its own. We built out the infra in AWS. A
             | lot of it was "click-ops". There was no configuration
             | management. Not even CloudFormation (which is not all that
             | great in my opinion). Acquiring company realizes mistake
             | after the fact. Asks employee to turn over account.
             | Employee declines. Acquiring company bites the bullet and
             | shells out a five figure sum to employee to "buy" his
             | account. Could have been avoided with some form of config
             | management.
        
               | bsder wrote:
               | > Acquiring company realizes mistake after the fact. Asks
               | employee to turn over account. Employee declines.
               | Acquiring company bites the bullet and shells out a five
               | figure sum to employee to "buy" his account. Could have
               | been avoided with some form of config management.
               | 
               | That is _completely_ the wrong lesson from this anecdote.
               | 
               | 1) The acquiring company didn't do proper due diligence.
               | Sorry, this is diligence 101--where are the accounts and
               | who has the keys?
               | 
               | 2) Click-Ops is _FINE_. In a startup, you do what you
               | _need now_ and the future can go to hell because the
               | company may be bankrupt tomorrow. You fix your infra when
               | you need to in a startup.
               | 
               | 3) Long-time employee seemed to have _exactly_ the right
               | amount of paranoia regarding his bosses. The fact that
               | the buyout appears to have killed his job and paid so
               | little that he was willing to torch his reputation and
               | risk legal action for merely five figures says something.
        
               | cmckn wrote:
               | Sounds like the exact nightmare a previous employer was
               | living. AWS' (awful) web UI convinces the faint of heart
               | to click through wizards for everything. If you're not
               | using version control for _anything_ related to your
               | infrastructure...you have my thoughts and prayers.
               | 
               | +1 for "click-ops", perfectly put.
        
               | frompdx wrote:
               | > you have my thoughts and prayers.
               | 
               | Pretty much. The lesson learned for me was to always have
               | version control for the complete stack including the
               | infra for the stack. I like terraform for this.
               | Terragrunt at least solves the issue of modules for
               | terraform reducing the verbosity. Assume things could go
               | wrong and you will need to redeploy EVERYTHING. I've been
               | there.
        
             | shiftpgdn wrote:
             | AWS console is very carefully designed to let you create
             | traps that result in spending huge buckets of money if
             | you're not paying attention.
        
             | verdverm wrote:
             | How do you view all the VMs in a project across the globe
             | at the same time?
             | 
             | Do you need to manage keys when ssh'n into a VM?
             | 
             | Do you know what the purpose of all the products is? If
             | you don't know one, are you able to at least have an idea
             | what it's for without going to documentation?
             | 
             | They have also directly opposed many efforts for
             | Kubernetes, even to their own customers, until they
             | realized they couldn't win. Only then did they cave, and
             | they are really doing the bare minimum. The most
             | significant contribution to OSS they have made was a big
             | middle finger to Elasticsearch...
        
               | yongjik wrote:
               | Of course everyone's experience is different, but in my
               | case...
               | 
               | > How do you view all the VMs in a project across the
               | globe at the same time?
               | 
               | I'm not sure what it's got to do with k8s? I can't see
               | jobs that belong to different k8s clusters at the same
               | time, either.
               | 
               | > Do you need to manage keys when ssh'n into a VM?
               | 
               | Well, in k8s everybody who has access to the cluster can
               | "ssh" into each pod as root and do whatever they want, or
               | at least that's how I've seen it, but I'm not sure it's
               | an improvement.
               | 
                | > Do you know what the purpose of all the products is?
                | If you don't know one, are you able to at least have an
                | idea what it's for without going to documentation?
               | 
                | Man, if I got a dime every time someone asked "Does
                | anyone know who owns this kubernetes job?", I'd have...
                | hmm, maybe a dollar or two...
               | 
               | Of course k8s can be properly managed, but IMHO, whether
               | it is properly managed is orthogonal to whether it's k8s
               | or vanilla AWS.
        
           | gopalv wrote:
           | > Remember the first time you saw the AWS console? And the
           | last time?
           | 
           | There was a time in between for me - that was Rightscale.
           | 
           | For me, the real thing that k8s brings is not hardware infra
           | but reliable ops automation.
           | 
           | Rightscale was the first place where I encountered scripted
           | ops steps and my current view on k8s is that it is a
           | massively superior operational automation framework.
           | 
           | The SRE teams which used Rightscale at my last job used to
           | have "buttons to press for things", which roughly translated
           | to "If the primary node fails, first promote the secondary,
           | then get a new EC2 box, format it, install software, setup
           | certificates, assign an elastic IP, configure it to be
           | exactly like the previous secondary, then tie together
           | replication and notify the consistent hashing."
           | 
           | The value was in the automation of the steps in about 4
           | domains - monitoring, node allocation, package installation
           | and configuration realignment.
           | 
           | The Nagios, Puppet and Zookeeper combos for this were a
           | complete pain, and the complexity of k8s is that it is a
           | "second system" from that problem space. The complexity was
           | always there, but now it lives in the reactive ops code,
           | which is the final resting place for it (unless you make your
           | arch simpler).
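           | 
           | To make the "reactive ops code" idea concrete, here is a
           | minimal sketch of a watch-style loop with the official
           | Kubernetes Python client; the namespace is a placeholder,
           | and the loop only reports drift rather than fixing it:
           | 
           |     from kubernetes import client, config, watch
           | 
           |     # Toy "reactive ops" loop: watch Deployments and
           |     # report when ready replicas drift from the declared
           |     # count. Kubernetes' own controllers do the healing;
           |     # this only observes, like an old runbook dashboard.
           |     config.load_kube_config()
           |     apps = client.AppsV1Api()
           | 
           |     w = watch.Watch()
           |     list_fn = apps.list_namespaced_deployment
           |     for event in w.stream(list_fn, namespace="default"):
           |         dep = event["object"]
           |         desired = dep.spec.replicas or 0
           |         ready = dep.status.ready_replicas or 0
           |         if ready < desired:
           |             print(f"{dep.metadata.name}: "
           |                   f"{ready}/{desired} ready")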
        
           | techntoke wrote:
           | Why do a comparison? K8S runs on AWS and GCP. They have
           | managed services for setting up one. If you know K8S as a
           | developer, then you simply consume the cloud K8S cluster.
        
             | ForHackernews wrote:
             | That's exactly the point. You avoid lock-in to AWS or GCP
             | by running on K8S instead. K8S becomes the "operating
             | system": a standardized abstraction over different
             | hardware.
        
               | verdverm wrote:
                | I argue this is a good thing, like the Linux kernel and
                | the Chromium browser engine
        
             | kelnos wrote:
             | I think the point is that there are people that claim that
             | k8s adds a ton of complexity to your environment. But if
             | you compare k8s alone with managing your infrastructure
             | using (non-k8s) AWS or GCP primitives, you'll find that the
             | complexity is similar.
        
               | verdverm wrote:
               | Exactly
        
               | Phrodo_00 wrote:
               | While that's true on the managing instances side, you
               | also need to actually deploy the infrastructure to manage
               | them (If you're not using some PaaS offering). You don't
               | need to do this for other IaaS.
               | 
               | Honestly the last time I looked at k8s was like 5 years
               | ago, but back then it looked like a pretty big pita to
               | admin.
        
               | verdverm wrote:
               | The last 5 years have been transformative for both cloud
               | native development and also open source software
               | 
               | It is a completely different world that stretches far
               | beyond Kubernetes, though I attribute much of the change
               | to what has happened from / around k8s -> cncf
               | 
                | It's so easy that I can launch production-level
                | clusters in 15 minutes with four keystrokes, and make
                | backups and restore to new ephemeral clusters with a
                | few more simple commands
               | 
               | https://github.com/hofstadter-io/jumpfiles
               | 
                | (I'll be pushing these updates this weekend; I haven't
                | slept in 24 hours as I reworked everything to be powered
                | by https://cuelang.org )
        
               | merb wrote:
               | > but back then it looked like a pretty big pita to admin
               | 
                | - well, it's also a pita to update services without
                |   downtime.
                | - and it sucks to update operating systems without
                |   downtime.
                | - sometimes you reinvent the wheel when you add
                |   another service or even a new website.
                | 
                | however with k8s everything above is kinda the same:
                | define a yaml file, apply it, it works.
                | 
                | and also k8s itself can be managed via
                | ansible/k3s/kops/gke/kubeadm/etc... it's way easier to
                | create a cluster and manage it.
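                | 
                | As a minimal sketch of that "define a yaml file, apply
                | it" step with the official Python client (the file name
                | is a placeholder; this creates rather than patches, so
                | it is closer to kubectl create -f than to apply):
                | 
                |     from kubernetes import client, config, utils
                | 
                |     # load ~/.kube/config and push the manifest to
                |     # whatever cluster the current context points at
                |     config.load_kube_config()
                |     api = client.ApiClient()
                |     utils.create_from_yaml(api, "deployment.yaml")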
        
         | enos_feedler wrote:
         | OS for the cloud is exactly what it is. I see AWS, Azure and
         | GCP as OEMs for cloud, just like Samsung, Oppo, Motorola, etc.
         | are OEMs for smartphones. Android was the open source
         | abstraction across these devices. K8s is the open source
         | abstraction across clouds.
         | 
         | The meaning of "app" on top of these two operating system
         | abstractions is entirely different and the comparison probably
         | doesn't extend beyond this. From a computing stack standpoint
         | though, it makes sense.
        
       | znpy wrote:
       | in my opinion it kinda sets a common lingo between development
       | people and operations people.
       | 
       | operations details are hidden from developers and development
       | details (the details of the workload) are hidden from the
       | operations engineers.
        
       | gabordemooij wrote:
       | Kubernetes is popular because developers want names on their CVs.
       | A couple of shell scripts will get you anywhere.
        
       | yllus wrote:
       | To draw anecdotally from my own experiences, it's for two reasons:
       | 
       | 1. It's simple to get started with, but complex enough to tweak
       | to your needs with respect to deployment, scaling and resource
       | definition.
       | 
       | 2. It's appealingly cloud-agnostic just at the time where
       | multiple cloud providers are all becoming viable and competitive.
       | 
       | I think it's more #2 than #1; as always, timing is everything.
        
       | battery_cowboy wrote:
       | Because everyone chases the newest, shiniest thing in tech, and
       | it's not cool nor fun to make boring old stuff in C then copy one
       | binary and maybe a config to the server.
        
         | mwcampbell wrote:
         | Even if one does have a single binary and config file that one
         | can just copy to a server and run, there's more to non-trivial
         | deployments than that. For example, how do you do a zero-
         | downtime deployment where you copy over a new binary, start it
         | up, switch new requests over to the new version, but let the
         | old one keep running until either it finishes handling all
         | requests that it already received or a timeout is reached? One
         | reason why Kubernetes is popular is that it provides a
         | standard, cross-vendor solution to this and other problems.
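         | 
         | As a minimal sketch of how that policy is expressed in
         | Kubernetes, here is a Deployment built with the official
         | Python client; the image, port, probe path and names are
         | placeholders, and draining still depends on the binary
         | handling SIGTERM by finishing its in-flight requests:
         | 
         |     from kubernetes import client, config
         | 
         |     config.load_kube_config()
         |     apps = client.AppsV1Api()
         | 
         |     container = client.V1Container(
         |         name="web",
         |         image="example/web:v2",     # placeholder image
         |         ports=[client.V1ContainerPort(
         |             container_port=8080)],
         |         # new pods only get traffic once this passes
         |         readiness_probe=client.V1Probe(
         |             http_get=client.V1HTTPGetAction(
         |                 path="/healthz", port=8080)))
         | 
         |     spec = client.V1DeploymentSpec(
         |         replicas=3,
         |         selector=client.V1LabelSelector(
         |             match_labels={"app": "web"}),
         |         # never remove an old pod before its
         |         # replacement is Ready
         |         strategy=client.V1DeploymentStrategy(
         |             type="RollingUpdate",
         |             rolling_update=client.V1RollingUpdateDeployment(
         |                 max_unavailable=0, max_surge=1)),
         |         template=client.V1PodTemplateSpec(
         |             metadata=client.V1ObjectMeta(
         |                 labels={"app": "web"}),
         |             spec=client.V1PodSpec(
         |                 containers=[container],
         |                 # time allowed to drain before SIGKILL
         |                 termination_grace_period_seconds=30)))
         | 
         |     apps.create_namespaced_deployment(
         |         "default",
         |         client.V1Deployment(
         |             metadata=client.V1ObjectMeta(name="web"),
         |             spec=spec))
         | 
         | Pushing a new image afterwards (for example by patching the
         | pod template) then rolls pods over one at a time under the
         | same constraints.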
        
         | p_l wrote:
         | Then you need to add management of storage for it, management
         | of logs, integration of monitoring, healthchecks, maybe a
         | multiple-environment setup because UAT is a good thing to have,
         | etc. etc.
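         | 
         | A couple of those at least stay small in Kubernetes; a
         | minimal sketch with the official Python client, where the
         | "uat" namespace and the pod name are made-up examples:
         | 
         |     from kubernetes import client, config
         | 
         |     config.load_kube_config()
         |     v1 = client.CoreV1Api()
         | 
         |     # an extra environment is just another namespace
         |     v1.create_namespace(client.V1Namespace(
         |         metadata=client.V1ObjectMeta(name="uat")))
         | 
         |     # log management starts as "ask the API server"
         |     print(v1.read_namespaced_pod_log(
         |         name="web-7d4b9c-abcde", namespace="uat",
         |         tail_lines=50))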
        
       | alexbanks wrote:
       | I thought it was pretty insane yesterday when I read a YC-backed
       | recruiting company was using Kubernetes. Absolutely insane. It's
       | become the new, hottest, techiest thing that every company has to
       | have even when they don't need it.
        
         | frompdx wrote:
         | Why is it insane? Was there something about this company's
         | stack that could have avoided Kubernetes in favor of something
         | else?
        
         | Bob_LaBLahh wrote:
         | It's perfectly sane if their team already knows how to use K8s,
         | especially if they use a hosted solution like GKE or
         | DigitalOcean K8s. (I'll admit that I'd never want to manage my
         | own k8s cluster.)
         | 
         | Once you know K8s, it's not very difficult to use. Plus, it
         | provides solutions to a lot of different infrastructure-level
         | problems.
        
       | jfrankamp wrote:
       | I'll add my use case: we use hosted kubernetes to deploy all of
       | our branches of all of our projects as fully functional
       | application _stacks_, extremely similarly to how they will
       | eventually run in production. Want to try something and show it
       | to someone at the product-owner level? OK, there will be a kube
       | nginx-ingress-backed environment up in build time plus a few
       | minutes.
       | 
       | The environments advertise themselves via that same modified
       | ingress's default backend. We stick a tiny bit of deploy yaml in
       | our projects, and the deployments' kube tagging gives us all the
       | details we need to provide diffs, last build time, links to git
       | repos, web sites, etc. for the particular environment. The yaml
       | demonstrates conclusively how an app could or should be run,
       | regardless of OS or software choice, so when we hand it to ops
       | folks there is a basis for them to run from.
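       | 
       | Roughly, each branch environment boils down to an Ingress rule
       | pointing a branch-specific host at that branch's Service. A
       | minimal sketch with a recent official Python client and the
       | networking.k8s.io/v1 API; the branch, host and service names
       | here are placeholders rather than the actual setup:
       | 
       |     from kubernetes import client, config
       | 
       |     config.load_kube_config()
       |     net = client.NetworkingV1Api()
       | 
       |     branch = "feature-x"      # placeholder branch name
       |     backend = client.V1IngressBackend(
       |         service=client.V1IngressServiceBackend(
       |             name=f"{branch}-web",
       |             port=client.V1ServiceBackendPort(number=80)))
       | 
       |     ingress = client.V1Ingress(
       |         metadata=client.V1ObjectMeta(
       |             name=branch,
       |             # the tagging that lets tooling list and
       |             # describe every branch environment
       |             labels={"branch": branch}),
       |         spec=client.V1IngressSpec(rules=[
       |             client.V1IngressRule(
       |                 host=f"{branch}.preview.example.com",
       |                 http=client.V1HTTPIngressRuleValue(paths=[
       |                     client.V1HTTPIngressPath(
       |                         path="/", path_type="Prefix",
       |                         backend=backend)]))]))
       | 
       |     net.create_namespaced_ingress("default", ingress)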
        
       | acd wrote:
       | Devops/arch here. I think Kubernetes solves deployment in a
       | standardized way and we get a fresh, clean state with every app
       | deploy. Plus it restarts applications/pods that crash.
       | 
       | That said, I think Kubernetes may be on its journey to the
       | Plateau of Productivity on the tech hype cycle. Networking in
       | Kubernetes is complicated. This complication and abstraction has
       | a point if you are a company at Google scale. Most shops are not
       | Google scale and do not need that level of scalability. The
       | network abstraction has its price in complexity when doing
       | diagnostics.
       | 
       | You could solve networking differently than Kubernetes does, with
       | IPv6. There is no need for complicated IPv4 NAT schemes. You
       | could use native IPv6 addresses that are reachable directly from
       | the internet. Since you have so many IPv6 addresses, you do not
       | need routers/NATs.
       | 
       | Anyhow, in a few years' time some might be using something
       | simpler, like an open-source Heroku. If you could bin-pack
       | intercommunicating services onto the same nodes, there would be
       | speed gains from not having to do network hops and instead going
       | straight to local memory. Or something like a standardized
       | open-source serverless function runner.
       | 
       | https://en.wikipedia.org/wiki/KISS_principle
       | https://en.wikipedia.org/wiki/Hype_cycle
        
         | takeda wrote:
         | This is a good point, I was wondering why IPv6 is being avoided
         | so hard.
         | 
          | There are many arguments that IPv6 didn't solve too many IPv4
          | pain points, but if it solved anything, it is definitely this.
        
       | Fiahil wrote:
       | TIL about KUDO, the Kubernetes Universal Declarative Operator.
       | We've been doing the exact same things in a custom Go CLI for 2
       | years.
       | 
       | The kubernetes ecosystem is really amazing and full of invaluable
       | resources. It's vast and complex, but well thought out. Getting
       | to know all the ins and outs of the project is time-consuming. So
       | many things to learn and so little time to practice...
        
         | darkteflon wrote:
         | Ars longa, vita brevis ...
        
         | hartem_ wrote:
          | I work on the KUDO team. Would love to hear what you think
          | about it. All devs hang out in the #kudo channel on the
          | Kubernetes community Slack; please don't hesitate to join and
          | say hi.
        
       | jonahbenton wrote:
       | Obligatory: the best introduction to Kubernetes, from a
       | conceptual perspective, is Google's incredible Borg paper:
       | 
       | https://static.googleusercontent.com/media/research.google.c...
        
       ___________________________________________________________________
       (page generated 2020-05-29 23:00 UTC)