[HN Gopher] Kubernetes is deprecating Docker runtime support
       ___________________________________________________________________
        
       Kubernetes is deprecating Docker runtime support
        
       Author : GordonS
       Score  : 307 points
       Date   : 2020-12-02 18:55 UTC (4 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | tannhaeuser wrote:
       | Just another case where an idea originally created for developer
       | convenience turned into an enterprise thing and instrument for
       | mass control. Reminds me of Java build tools having long
        | forgotten that they're there to make developers' lives easier
       | rather than appeal to enterprise control freak desires. Now have
       | fun developing k8s-compatible containers to enslave us in "the
       | cloud" with developer workflow an afterthought.
        
       | littlemerman wrote:
       | Might make sense to update the link to point to the language
       | exactly:
       | 
       | https://github.com/kubernetes/kubernetes/blob/master/CHANGEL...
       | 
       | "Docker support in the kubelet is now deprecated and will be
       | removed in a future release. The kubelet uses a module called
       | "dockershim" which implements CRI support for Docker and it has
       | seen maintenance issues in the Kubernetes community. We encourage
       | you to evaluate moving to a container runtime that is a full-
       | fledged implementation of CRI (v1alpha1 or v1 compliant) as they
       | become available. (#94624, @dims) [SIG Node]"
        
       | flowerlad wrote:
       | This is misleading. If you're using Docker to build images and
       | using Kubernetes to manage containers nothing changes. The
       | deprecation mentioned is internal to Kubernetes and does not
       | impact people who use Kubernetes to manage containers built using
       | Docker.
        
       | djsumdog wrote:
        | containerd will still run images built by Docker. Google can talk
       | about how Docker is missing CRI support, but I feel like this is
       | just Google wanting to cut out Docker entirely.
       | 
        | It seems like containerd is maintained by The Linux Foundation,
        | a group of people who mostly don't even run Linux (most of their
        | releases and media material are made on Macs).
       | 
       | I dunno. I don't like the direction things are going in the open
       | source world right now.
        
         | dragonwriter wrote:
          | > It seems like containerd is maintained by The Linux
          | Foundation, a group of people who mostly don't even run Linux
          | (most of their releases and media material are made on Macs)
         | 
         | Using Macs for content creation isn't evidence that the
          | Foundation members don't also use Linux, whether for software
         | development, backend servers, etc.
        
           | shadowgovt wrote:
           | It's probably fair to extrapolate that some tools they rely
           | upon in their business flow aren't available on Linux, which
           | is probably of concern.
        
             | djsumdog wrote:
             | I think Derek Taylor does a good breakdown of all the
             | various software choices by The Linux Foundation:
             | 
             | https://www.youtube.com/watch?v=a-2dYfYvJGk
        
           | Brian_K_White wrote:
           | If I made presentations, you would discover that my content
           | was all created on a linux desktop.
           | 
           | What a totally random data point of no relevance or
           | significance eh?
           | 
           | Such things do in fact reflect the character and nature of
            | the people involved. It doesn't necessarily define them
           | entirely, but yes it does reflect them.
           | 
            | It's not that you're not a "true Scotsman" necessarily if
            | you, say, care about linux primarily in roles other than
            | desktops.
           | You can be perfectly sincere in that, and it's valuable even
           | if it only goes that far. But it does mean you are in a
           | different class from people who actually do abjure the
           | convenience of proprietary software wherever possible, and
           | today "possible" absolutely includes ordinary office work and
           | even presentation media creation.
           | 
            | It's perfectly ok to be so compromised. Not everyone has to
            | be Stallman.
           | 
           | It's equally perfectly fair to observe that these people are
            | not in that class, when such a class does exist and other
           | people do actually live the life.
           | 
            | You can't have it both ways, that's all. If you want to
            | preach a certain gospel to capitalise on the virtue signal,
            | without actually living that gospel or actually possessing
            | that virtue, it's completely fair to be called out for it.
        
             | fwip wrote:
             | It seems hyperbolic to say that the operating system a
             | person uses is reflective of their character.
        
         | enos_feedler wrote:
         | I would say that Google never wanted to be _in_ with Docker in
         | the first place. Google had been doing things the docker way
         | before Docker existed (Borg). Docker sort of caught the
         | developer ecosystem by surprise, but proved the viability of
         | containers in general. From this point forward it was clear
         | Google would build their cloud future on "containers", not on
         | Docker. If you can find archived streams of the GCP conference
          | that took place shortly after Docker's rise in popularity, they
          | say the word container all day long, but never mention the word
          | Docker once. I was there and remember counting.
        
           | shadowgovt wrote:
           | Support for Docker was a correct market move for Cloud to
           | adopt users that were already familiar with a tech base.
           | 
           | But divorcing their API from that tech base is also a move to
            | support Cloud users; they don't want the story for big
           | companies to be "If you want to use Kubernetes, you _must_
            | also attach to Docker." That cuts potential customers out of
           | the market who want to use Kubernetes but may have a reason
           | they can't use Docker (even if that reason is simply
           | strategic).
           | 
           | Google Cloud's business model walks a tightrope between
           | usability and flexibility. Really, all the cloud vendors do,
           | to varying degrees of success.
        
         | cogman10 wrote:
          | My guess is that it's to get around the recent Docker registry
          | throttling. They are likely looking at building out
         | their own container ecosystem.
        
           | jacques_chester wrote:
           | Changing the runtime doesn't change the registry.
        
         | chucky_z wrote:
         | i 100% agree with you. open source feels really, really bad
         | especially in the last year.
         | 
         | to others reading this -- simplified, but, docker uses
         | containerd to build/run images. all docker images are valid
         | containerd images. you can run images through containerd
         | straight off the docker hub.
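          | 
          | for example, with containerd's bundled `ctr` tool (image name
          | just illustrative):
          | 
          |     # pull an image straight off docker hub
          |     ctr image pull docker.io/library/alpine:latest
          |     # run it; "demo" is an arbitrary container id
          |     ctr run --rm -t docker.io/library/alpine:latest demo sh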
        
           | shadowgovt wrote:
           | It depends on what one means by "open source."
           | 
           | Open source is fine; there's a ton of available code out
           | there, to mix and match for whatever goals you need. Open
            | _services_ were never a thing, and what we're observing is
            | that the SaaS model is eating the entire marketplace because
           | tying services together to solve tasks is far easier (and
           | depending on scale, more maintainable) than tying software
           | together to solve tasks on hardware you own and operate
           | exclusively. Owning and operating the hardware in addition to
           | owning and operating the software that does the thing you
           | want to do doesn't scale as flexibly as letting someone else
           | maintain the hardware and provide service-level guarantees,
           | for a wide variety of applications. But the software driving
           | those services is generally closed-source.
           | 
           | If by "open source" you mean "Free (as in freedom) software,"
           | the ship has kind of sailed. The GNU-style four essential
            | freedoms break down philosophically in the case of SaaS,
           | because the underlying assumption is "It's my hardware and I
           | should have the right to control it" and that assumption
           | breaks down when it's not my hardware. There may be an
           | analogous assumption for "It's my data and..." but nobody's
           | crystallized what that looks like in the way GNU crystallized
           | the Four Freedoms.
        
             | Brian_K_White wrote:
             | This is a pretty good primer on this peculiar new problem.
             | 
              | It's kind of a case study for future textbooks about how
             | if there is a certain incentive, it will be embodied and
             | satisfied no matter what. If the names and labels have to
             | change, they will, but the essentials will somehow turn out
             | to not have changed in any meaningful way in the end.
             | 
             | It's if anything worse now than before. At least before you
              | were allowed to own your inscrutable black box and use it
              | indefinitely. There was sane persistence, like a chair: you
              | buy it, and it's there for you as long as you still want it
              | after that. Maybe you don't _want_ it any more after a
              | while, but it doesn't go poof on its own.
             | 
              | One way things are actually better now, though, is that in
              | many cases the SaaS outside of your control really is just
              | a convenience you _could_ replace with self-hosted good-
             | enough alternatives, thanks to decades of open source
             | software and tools building up to being quite powerful and
             | capable today.
             | 
             | I think this is a case of the rising water lifting all
              | ships. If the proprietary crowd gained more ability to abuse
             | their consumers, everyone else has likewise gained more
             | ability to live without them. Both things are true and I
             | tell myself it's a net positive rather than a positive and
             | a negative cancelling out, because more and better tools is
             | a net positive no matter that both sides have access to use
              | them for cross purposes. At least it means you have more
             | options today than you did yesterday.
        
         | landerwust wrote:
         | This was always a land-grab by folk who wanted Docker's
         | """community""" (read: channel) but not Docker's commercial
         | interests. Any time you see a much larger commercial entity
          | insist you write a spec for your technology, especially one
          | with much deeper pockets, the writing is always on the wall.
         | 
         | The bit that absolutely fucking sickens me is how these
         | transactions are often dressed up in language with free
         | software intonations like "community", "collaboration" etc.
         | Institutionalized doublethink is so thick in the modern free
         | software world that few people even recognize the difference
         | any more. As an aside, can anyone remember not so long ago when
         | Google wouldn't shut up about "the open web"? Probably stopped
         | saying that not long after Chrome ate the entire ecosystem and
         | began dictating terms.
         | 
          | The one consolation for Docker is that the sales folk behind
         | Kubernetes haven't the slightest understanding of the usability
         | story that made Docker such a raging success to begin with. The
         | sheer size of the organizations they represent may not even
         | allow them to recreate that experience if indeed they
         | recognized the genius of it. It remains to be seen whether
         | they'll manage that before another orchestrator comes along and
         | changes the wind once again. The trophy could still be stolen,
         | there's definitely room for it.
        
           | cactus2093 wrote:
            | I get preferring that major open source projects weren't
           | controlled by a big corporation, but this seems overly
           | dramatic.
           | 
           | Docker was always a company first and foremost, I fail to see
           | how leaving the technology in their commercial control would
           | have been better in any way than making it an open standard.
           | Just because Docker = small = good and Google = giant
           | corporation = evil? Docker raised huge amounts of VC funding,
           | they had every intention of becoming a giant corporation
           | themselves.
           | 
           | And it's kind of bizarre to completely discount the outcome
           | of this situation, which is that we have amazing container
           | tools that are free and open and standardized, just because
           | you don't like some of the parties involved in getting to
           | this point.
        
             | landerwust wrote:
             | > making it an open standard
             | 
             | I would hesitate to use the term "open standard" until I'd
             | thoroughly assessed the identities of everyone contributing
             | to that open spec, along with those of their employers, and
             | what history the spec has of accepting genuinely
             | "community" contributions (in the 1990s sense of that word)
        
               | cactus2093 wrote:
               | I've never tried contributing to CRI so I don't really
                | know what the process is like. I imagine that, like any
                | such large and established standard, it would require a
                | herculean effort. That doesn't necessarily mean it's not
                | open, just that it can't possibly accept any random idea
                | that comes along and still continue to serve its huge
                | user base.
               | 
               | But let's say you're right and call it a closed standard.
               | Then this change drops support for one older, clunkier
               | closed standard in favor of the current closed standard.
               | Still doesn't seem like anything to get upset over.
        
               | caniszczyk wrote:
               | The container image/runtime/distribution area is heavily
               | standardized now via the Open Container Initiative (OCI)
               | that was founded 5 years ago.
               | 
               | https://www.linuxfoundation.org/press-
               | release/2015/12/open-c...
               | https://kubernetes.io/blog/2016/12/container-runtime-
               | interfa...
               | 
               | You can see the releases and specs that are supported by
               | all major container runtimes here:
               | https://opencontainers.org/release-notices/overview/
               | 
               | For example, OpenShift ships https://cri-o.io in its
               | kubernetes distribution as its container runtime, so this
               | isn't really new.
               | 
               | Disclosure: I helped start OCI and CNCF
        
           | btilly wrote:
           | Meh.
           | 
           | The whole idea of containerization came from Google anyways,
           | who uses it internally. Docker came out with their container
           | system without understanding what made it work so well for
           | Google. They then discovered the hard way that the whole
           | point of containers is to not matter, which makes it hard to
           | build a business on them.
           | 
           | Docker responded by building up a whole ecosystem and doing
           | everything that they could to make Docker matter. Which makes
           | them a PITA to use. (One which you might not notice if you
           | internalize their way of doing things and memorize their
           | commands.)
           | 
           | One of my favorite quotes about Docker was from a Linux
           | kernel developer. It went, "On the rare occasions when they
           | manage to ask the right question, they don't understand the
           | answer."
           | 
           | I've seen Docker be a disaster over and over again. The fact
           | that they have a good sales pitch only makes it worse because
           | more people get stuck with a bad technology.
           | 
           | Eliminating Docker from the equation seems to me to be an
           | unmitigated Good Thing.
        
             | TomBombadildoze wrote:
             | > The whole idea of containerization came from Google
             | anyways, who uses it internally.
             | 
             | Not really. Jails and chroots are a form of
             | containerization and have existed for a long time. Sun
             | debuted containers (with Zones branding) as we think of
             | them today long before Google took interest, and still
             | years before Docker came to the forefront.
             | 
             | > I've seen Docker be a disaster over and over again. The
             | fact that they have a good sales pitch only makes it worse
             | because more people get stuck with a bad technology.
             | 
             | > Eliminating Docker from the equation seems to me to be an
             | unmitigated Good Thing.
             | 
             | Now this I agree with, Docker is a wreck. Poor design, bad
             | tooling, and often downright hostile to the needs of their
             | users. Docker is the Myspace of infra tooling and the
             | sooner they croak, the better.
        
           | jrochkind1 wrote:
           | > This was always a land-grab by...
           | 
           | What's "this" in that sentence? Kubernetes in general?
        
             | Spivak wrote:
             | The standardization of "Docker" containers into "OCI"
             | containers and the huge amount of public pressure put on
             | Docker to separate out their runtime containerd from
             | dockerd.
        
         | jsmith45 wrote:
          | Docker Inc really does not want infrastructure projects
          | wrapping docker itself. It causes all sorts of headaches for
          | them. They encourage using containerd for infrastructure
          | projects (which is basically the core of the original docker
          | extracted out as a separate project maintained by a large
          | community). Docker is basically an opinionated wrapper around
          | containerd, and they intend to move even more in that direction
          | in the future.
         | 
         | TLDR: Docker Inc almost certainly is happy to see this change
         | happen.
        
         | cpuguy83 wrote:
          | containerd is hosted by the Linux Foundation (more specifically
         | CNCF). It is maintained by people from all over, including but
         | not limited to the major US tech companies (Apple, Google,
         | Microsoft, Amazon).
         | 
         | containerd was also created by Docker Inc and donated to the
         | LF.
        
         | cactus2093 wrote:
         | > I dunno. I don't like the direction things are going in the
         | open source world right now.
         | 
         | I commented on a child comment as well, but I don't understand
         | this idea. The news is that a piece of commercially built
         | software is being deprecated by a major project in favor of one
         | built on an open standard, and you're interpreting this as a
         | blow to open source?
        
       | justicezyx wrote:
        | This was inevitable one way or another. Docker lost any leverage
        | after 2015, when they still had some chance of ensuring that
        | containers, as they invented them, could be monetized the way
        | VMware monetized virtualization.
        
       | [deleted]
        
       | qz2 wrote:
       | Can someone explain how logging is supposed to work after this
        | change? I'm completely bloody lost.
       | 
       | Actually I'm nearly always lost with kubernetes. It's either
       | broken or changing.
        
         | throwaway201103 wrote:
         | > I'm nearly always lost with kubernetes. It's either broken or
         | changing.
         | 
         | Glad I'm not the only one. I'm sure I'm not the smartest
         | engineer/sysadmin in the world, but I'm also not the dumbest
         | and I have never gotten an on-premises Kubernetes installation
         | to work.
         | 
         | The way I manage containers is lxc and shell scripts. I
         | understand it, and it works.
        
           | freedomben wrote:
           | I work for Red Hat as an OpenShift Consultant, so I'm on the
           | bleeding edge of change and am constantly pushing boundaries.
           | 
           | Don't tell the boss or the customers, but most of the time
           | when release notes for a new version come out, I look at them
           | and go "WTF, why do we need that, I better do some research."
           | It's fast changing, complex as hell, and absolutely brutal.
           | That said most things are there for a reason and once I dig
           | in I usually see the need.
           | 
           | That said I do love it despite its warts. There's no doubt
           | some Stockholm Syndrome at play here, but I love the API
           | (which is pretty curlable btw, a mark of a great API IMHO)
           | and the principles (declarative, everything's a YAML/JSON
           | object in etcd, etc). I see it the same way I did C++ (which
           | I also loved). It gives you _great power_ which you can use
           | to build an elegant, robust system, or you can create an
            | unmaintainable, complex monster of a nightmare. It's up to
           | you.
        
             | jacques_chester wrote:
             | _The full-time job of keeping up with Kubernetes_ was
             | published in February 2018[0] and things have only gotten
             | faster since then.
             | 
             | [0] https://goteleport.com/blog/kubernetes-release-cycle/ ,
             | HN discussion:
             | https://news.ycombinator.com/item?id=16285192
        
           | mschuster91 wrote:
           | Getting Kubernetes itself running on bare metal ("running" as
           | in "you have containers and can access them") is half a day
           | of work.
           | 
           | What is _deadly difficult_ is getting networking to work.
            | Even a comparatively "easy" thing with a couple of two-NIC
           | machines (one external, internet-routable, one DMZ) cost me a
           | fucking week.
           | 
           | What's even worse is when one has to obey corporate
           | restrictions - for example, only having external interfaces
           | on "loadbalancer" nodes:
           | 
           | - First of all, MetalLB only has _one_ active Speaker node
            | which means your bandwidth is limited to that node's uplink
           | and you're wasting the resources of the other loadbalancers.
           | 
           | - Second, you _can_ taint your nodes to only schedule the
            | MetalLB speaker on your "loadbalancer" nodes via
           | tolerations... but how the f..k do you convince MetalLB to
           | change the speaker node once you apply that change?!
           | 
           | - Third, what do you do when you want to expose N services
           | but only have one or two external IPs? DC/OS was way more
           | flexible, you had one set of loadbalancers (haproxy) that did
           | all the routing, and could run an entire cluster on four
           | machines - two LBs, one master, one worker. There is _no way_
           | to replicate this with Kubernetes. None.
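            | 
            | For reference, the taint/toleration dance above looks
            | roughly like this (label selectors are from the stock
            | MetalLB manifest and may differ in your install; whether
            | deleting the speaker pod reliably forces re-election is
            | exactly the open question):
            | 
            |     # keep ordinary workloads off the dedicated LB nodes
            |     kubectl taint nodes lb-1 dedicated=loadbalancer:NoSchedule
            | 
            |     # after adding a matching toleration to the speaker
            |     # DaemonSet, kick the current speaker so leadership can
            |     # move to an LB node
            |     kubectl -n metallb-system delete pod \
            |       -l app=metallb,component=speaker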
        
             | throwaway201103 wrote:
             | Yes, it's the networking that never works for me either.
             | I'm one guy, wear many hats, and don't have time to chase
             | rabbits down holes. If I follow published instructions and
             | it doesn't work, I pretty much stop there.
        
           | chucky_z wrote:
           | Give nomad, docker swarm, or lxd a shot.
        
             | yjftsjthsd-h wrote:
             | I'm currently involved in an effort to rip out docker swarm
             | at work because its overlay networks are _shockingly_
             | unreliable. LXD looks interesting but
             | https://linderud.dev/blog/packaging-lxd-for-arch-linux/
             | convinced me that it's another Canonical NIH special and
             | probably best avoided (in particular, "only distributed for
             | Ubuntu via snaps" means "forces auto-updates" which means
             | "not going in my environment"). Need to try Nomad; I'm
             | cautiously optimistic since the rest of HashiCorp's stuff
             | is good.
        
         | disgruntledphd2 wrote:
         | Welcome to modern software development, apparently.
        
           | qz2 wrote:
           | Modern software development makes me wish I was born in the
           | late 1940s.
        
             | [deleted]
        
         | jacques_chester wrote:
          | Loosely, my understanding is that Kubernetes works like this:
         | 
         | You have a Pod definition, which is basically a gang of
         | containers. In that Pod definition you have included one or
         | more container image references.
         | 
         | You send the Pod definition to the API Server.
         | 
         | The API Server informs listeners for Pod updates that there is
         | a new Pod definition. One of these listeners is the scheduler,
         | which decides which Node should get the Pod. It creates an
         | update for the Pod's "status", essentially annotating it with
         | the name of the Node that should run the Pod.
         | 
         | Each Node has a "kubelet". These too subscribe to Pod
         | definitions. When a change shows up saying "Pod 'foo' should
         | run on Node 27", the kubelet in Node 27 perks up its ears.
         | 
         | The kubelet converts the Pod definition into descriptions of
         | containers -- image reference, RAM limits, which disks to
         | attach etc. It then turns to its container runtime through the
         | "Container Runtime Interface" (CRI). In the early days this was
         | a Docker daemon.
         | 
         | The container runtime now acts on the descriptions it got. Most
         | notably, it will check to see if it has an image in its local
         | cache; if it doesn't then it will try to pull that image from a
         | registry.
         | 
         | Now: The _CRI is distinct from the Docker daemon API_. The CRI
          | is abstracted because, since the Docker daemon days, other
         | alternatives have emerged (and some have withered), such as
         | rkt, podman and containerd.
         | 
         | This update says "we are not going to maintain the Docker
         | daemon option for CRI". You can use containerd. From a
         | Kubernetes end-user perspective, nothing should change. From an
         | operator perspective, all that happens is that you have a
         | smaller footprint with less attack surface.
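          | 
          | To make that concrete, a minimal Pod definition of the kind
          | described above (names and image are illustrative):
          | 
          |     apiVersion: v1
          |     kind: Pod
          |     metadata:
          |       name: foo
          |     spec:
          |       containers:
          |       - name: web
          |         image: nginx:1.19    # image reference the runtime pulls
          |         resources:
          |           limits:
          |             memory: "128Mi"  # RAM limit handed down via the CRI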
        
           | freedomben wrote:
           | You pretty much nailed it. This is a super useful "elevator
           | description" to give to people. Mind if I share it (with
           | attribution)? Even better if you slap it into a blog post or
           | something (except a tweet thread :-D), but HN is perfectly
           | fine :-)
        
             | jacques_chester wrote:
              | Thank you, I'm flattered. Please feel free to share.
        
           | rsanheim wrote:
           | I'm sure all this complexity makes sense for all sorts of
           | reasons buried in the history of Kubernetes development at
           | Google. "Things are the way they are because they got that
           | way."
           | 
           | The fact that so many other orgs, many of which are startups
           | or just small to medium sized tech companies, use a system
           | this complex is ludicrous to me.
        
             | jacques_chester wrote:
             | Well, I might have designed it differently, but I wasn't
             | there. For what it does, this architecture works well. More
             | to the point: none of it is visible to end-users of
             | Kubernetes. You send a Pod definition, some magic happens,
             | pow! running software.
        
               | tannhaeuser wrote:
               | Except when it doesn't, at which point you're lost.
        
               | jacques_chester wrote:
               | Well, like I said, I might have done it differently. But
               | there's a consistent logic to how it works[0]. That
               | carries a lot of water.
               | 
               | [0] most folks emphasise the control loop aspect, I think
               | it's more helpful to point to blackboard / tuple-space
               | systems as prior art.
        
             | comfydragon wrote:
             | The practical change that's happening is that in future
             | versions of Kubernetes, they're removing support for using
             | a shim for telling the Docker daemon to run containers, and
             | focusing on just using containerd (which Docker uses under
             | the covers anyway).
             | 
             | It's kind of like if you had a shell script to launch
              | programs, and it used to move the mouse to click icons, but
             | now you've deprecated that and will only run programs
             | directly.
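              | 
              | Concretely, on a node you manage yourself, the switch is a
              | pair of kubelet flags pointing at containerd's CRI socket
              | instead of the dockershim (socket path can vary by
              | distro):
              | 
              |     kubelet --container-runtime=remote \
              |       --container-runtime-endpoint=unix:///run/containerd/containerd.sock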
        
             | lima wrote:
             | Hi! Small tech company that uses k8s here. Much of the
             | complexity is irreducible and has to go somewhere, and it's
             | much better to have it in a single, stateless, well-defined
            | place that's also easy to introspect.
             | 
             | I've seen way too many Ansible nightmares grown out of
             | deceptively "simple" mutable VM deployments.
             | 
             | k8s makes our life so much easier because it eliminates a
             | whole bunch of other complexity. Easily reproducible
             | development environments, workload scheduling, sane config
             | management...
        
           | chucky_z wrote:
            | The GP did specifically ask about logging, though. One of
           | Docker daemon's more interesting features is how much log
           | enrichment it does. Does kubelet do the same thing out of the
           | box? I know containerd itself does not, unfortunately.
        
             | jacques_chester wrote:
             | Yes, I overlooked that. I am afraid I don't know, but since
             | the Docker daemon now relies on the same code, I would
             | expect that there's similar functionality at that level.
        
       | [deleted]
        
       | brundolf wrote:
       | It's funny, and telling, how many commenters here are using K8s
       | without really knowing how it works (and what this change
       | therefore means). I'm in that group myself.
       | 
       | Is this a testament to, or an indictment of, how abstracted our
       | systems have become?
        
         | pm90 wrote:
         | Would you say you know the intricacies of how VMs work before
         | using them to deploy apps?
         | 
         | There is nothing "un abstract" about running applications on
         | VMs or machines. We're just evolving the abstractions that we
         | work with. Before it was VMs, then containers and now
         | containers + orchestrators. In the future it will be some other
         | abstraction.
         | 
         | Every step of the way, we've made this transition for
         | compelling reasons. And it will happen again.
        
         | amackera wrote:
         | Not having to care about implementation details seems like a
         | positive thing to me. There's a reason RTFM is a meme and "read
         | the fucking source code" is not.
         | 
         | Anybody claiming they need to know _everything_ about their
         | dependencies is being unrealistic.
        
         | quickthrower2 wrote:
         | I use my car without knowing how it works. The weird thing
         | about programming is that as a job, if I had a company car
         | they'd be happy for me to call the mechanic to fix it, but if
         | kubernetes ain't working then that's another job for the
         | multihatted programmer. That sort of means, from a selfish
          | point of view, it's better to pick one of the locked-in cloud
         | technologies that you can't fix (AWS or Azure has to!). But I
         | suspect many do the opposite (I want k8s on my cv!)
        
         | jrochkind1 wrote:
          | If it _works_ it's fine.
         | 
          | I took an Operating Systems class decades ago in school in
          | which I wrote a toy OS, but at this point I couldn't tell you
          | much about how operating systems really work, yet I deploy
          | software to them every day. That is fine; it is the nature of
         | computers, they are basically abstraction machines. And OS's
         | are pretty mature and stable, I don't really ever need to debug
         | the OS itself in order to deploy software to one, for the kind
         | of software I write. (Others might need to know more).
         | 
         | But personally I still haven't figured out how to use K8, heh.
        
         | symlinkk wrote:
         | The purpose of every piece of software is to be usable without
         | knowing how it works.
        
           | Spivak wrote:
           | I feel like "how something works" and "implementation
           | details" aren't synonymous and are really context dependent.
           | 
           | As a user you should know the different types of namespacing
           | that affect containers without necessarily knowing that/how
           | your runtime calls clone() to do it. And as a sysadmin you
           | had better know how all the components fit together and their
           | failure modes because you're the one supporting them.
           | 
           | Different people have different views of any technology so
           | someone's necessary understanding as a user of managed k8s
           | can be different than a sysadmin who is a user of k8s code
           | itself.
        
       | mikesabbagh wrote:
       | Docker has networking and other layers too. Docker runs as a
        | daemon too, so it is not very secure. GKE uses containerd (you
        | can use others). What is nice about containerd is that it only
        | runs the container and you can write plugins to it. So much
        | lighter than docker.
        
       | polskibus wrote:
       | What does that mean for people working with Dockerfiles for
       | local/small scale development that would like to be able to use
       | kubernetes at some point? Will they not be able to use their
       | Dockerfiles at all?
        
         | freeone3000 wrote:
         | No change for this workflow. Developers can still use docker to
         | build OCI images as they always have, and containerd can run
         | them as previously.
        
       | dgrin91 wrote:
        | I'm kind of confused by this. It sounds like it's removing just
       | some parts of docker (like the UI stuff), but not others? Can I
       | still run my docker-built images on K8?
        
         | deadmik3 wrote:
         | yes it's just the underlying container runtime. so this is
         | really only applicable to sysadmins managing their own k8s
         | installation
        
         | jacques_chester wrote:
         | > _Can I still run my docker-built images on K8?_
         | 
         | Yes.
        
       | euank wrote:
       | I think that the title of this is a bit misleading.
       | 
       | Kubernetes is removing the "dockershim", which is special in-
       | process support the kubelet has for docker.
       | 
       | However, the kubelet still has the CRI (container runtime
       | interface) to support arbitrary runtimes. containerd is currently
       | supported via the CRI, as is every runtime except docker. Docker
       | is being moved from having special-case support to being the same
       | in terms of support as other runtimes.
       | 
       | Does that mean using docker as your runtime is deprecated? I
       | don't think so. You just have to use docker via a CRI layer
        | instead of via the in-process dockershim layer. Since there
        | hasn't been a need until now for an out-of-process cri->docker-
        | api translation layer, I don't think there is a well-supported
        | one yet, but now that they've announced the intent to remove
        | dockershim, I have no doubt that there will be a supported cri ->
        | docker layer before long.
       | 
       | Maybe the docker project will add built-in support for exposing a
       | CRI interface and save us an extra daemon (as containerd did).
       | 
       | In short, the title's misleading from my understanding. The
       | Kubelet is removing the special-cased dockershim, but k8s
       | distributions that ship with docker as the runtime should be able
       | to run a cri->docker layer to retain docker support.
       | 
       | For more info on this, see the discussion on this pr:
       | https://github.com/kubernetes/kubernetes/pull/94624
        
         | icco wrote:
         | Also, people probably don't understand the difference between
         | the container runtime and container build environment. You can
         | build your container with Docker still and it can run in a
         | different environment.
        
         | Havoc wrote:
         | Thanks for explaining.
         | 
          | I suspect this will nuke a huge number of tutorials out there
         | though & frustrate newbies.
        
           | ZiiS wrote:
           | This is deep in the internals of Kubernetes, nothing about
           | `docker build/push` or `kubectl apply` will change.
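            | 
            | The everyday loop stays exactly the same (registry name
            | illustrative):
            | 
            |     docker build -t registry.example.com/myapp:v1 .
            |     docker push registry.example.com/myapp:v1
            |     # the node's runtime (e.g. containerd) pulls and runs it
            |     kubectl apply -f deployment.yaml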
        
       | sbisson wrote:
       | It gets less confusing when you realise that the original
       | specification for containerd came from Docker (the company) and
        | the current implementation of docker (the application) uses
        | containerd as its runtime.
       | 
       | By using containerd (or podman) in K8s, you're getting rid of a
       | lot of unnecessary overhead and so should get more containers per
       | host...
        
       | pietromenna wrote:
        | I found the title just a reminder to invest in learning concepts
        | and topics that can last a lifetime. Tools come and go, and it is
        | healthy to change from time to time.
        | 
        | Containers as a concept are an important thing to learn, but
        | today's implementation may not be the same as the one 5 years
        | from now.
        
       | npiit wrote:
        | Has anybody run a test comparing CRI-O vs Docker, especially when
        | it comes to overall node memory usage for, let's say, 30-50
        | containers per node? I guess CRI-O would save a lot of memory but
        | I don't have numbers.
        
       | st1x7 wrote:
       | (Super naive layman question, I don't work in this space.)
       | 
       | What does this mean? I thought that Kubernetes manages Docker
       | containers which makes the title kind of confusing.
        
         | jcastro wrote:
         | Kubernetes will just use containerd directly, most end users
         | will just continue to use docker on their laptop or whatever.
         | Or you can use something else like podman, it's all OCI:
         | https://opencontainers.org/
        
         | geerlingguy wrote:
          | It's a situation where 'Docker' has become synonymous with
          | 'container'. But 'Docker' in this case refers to the runtime
          | that Kubernetes uses to run container images on servers
          | ('nodes'), where the UI/UX features of Docker (like its CLI,
          | image building capabilities, etc.) are not needed.
         | 
         | Container images nowadays can be built by a variety of tools,
         | and run by a variety of tools, with Docker likely being the
         | most popular end-user tool with the most history and name
         | recognition. Others like Podman/Buildah are differently-
         | architected replacements.
         | 
         | As long as a container meets the open container specs, it can
         | be built with whatever tool and run on whatever tool that also
         | follows the specs.
        
         | jsmith45 wrote:
         | "docker containers" are more accurately called OCI containers,
         | and have been standardized so that various container runtimes
         | can use exactly the same container images.
         | 
         | Kubernetes can use docker runtime (dockerd) to run OCI
         | containers, but Docker Inc strongly discourages the docker
         | runtime being used directly for infrastructure. Docker runtime
         | imposes a lot of opinionated defaults on containers that are
         | often unwanted by infrastructure projects. (For example docker
         | will automatically edit the /etc/hosts file in containers, in a
         | way that makes little sense for Kubernetes, so Kubernetes has
          | to implement a silly workaround to avoid this.)
         | 
         | Instead Docker Inc recommends using containerd as the runtime.
         | containerd implements downloading, unpacking, creating CRI
         | manifests, and running the resulting containers all without
         | implementing docker's opinionated defaults on top. Docker
         | itself uses containerd to actually run the containers, and
          | plans to remove its downloading code in favor of using the one
         | from containerd too.
         | 
         | The only advantage to using docker proper for infrastructure
         | projects is that you can use the docker cli for introspection
         | and debugging. Kubernetes has created its own very similar cli
         | that works with all supported backend runtimes, and also can
         | include relevant Kubernetes specific information in outputs.
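          | 
          | That CLI is crictl, from the cri-tools project. Typical usage
          | on a node (container ids are whatever the list commands
          | print):
          | 
          |     crictl pods      # pod sandboxes the kubelet created
          |     crictl ps        # running containers
          |     crictl logs <container-id>
          |     crictl exec -it <container-id> sh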
        
           | k__ wrote:
           | Half-OT: What are alternative runtimes and why would you use
           | them?
        
             | jsmith45 wrote:
             | The main alternative runtime that I know of (at the level
             | of containerd) is CRI-O. These runtimes are at the level of
             | fetching images, preparing manifests etc. I'm not really
             | sure what benefits they provide. CRI-O is intended to be
             | kubernetes specific, and thus lacks any features that
             | containerd would have that k8s does not need. This in
             | theory ought to mean smaller, lighter, more easily
             | auditable code.
             | 
             | There is another lower level of runtime, the OCI runtime,
             | of which the main implementation is runc. Alternatives have
             | interesting attributes, like `runv` running containers in
              | VMs with their own kernel to get even greater isolation,
             | `runhcs` which is the OCI runtime for running windows
             | containers, etc. Most if not all of the higher level
             | runtimes allow switching out the OCI runtime, but in
             | general sticking with the default of `runc` is fine.
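              | 
              | On the Kubernetes side, opting into a non-default OCI
              | runtime is surfaced through a RuntimeClass. A sketch (the
              | handler name is hypothetical; it depends on how the
              | node's runtime is configured):
              | 
              |     apiVersion: node.k8s.io/v1beta1
              |     kind: RuntimeClass
              |     metadata:
              |       name: sandboxed   # name pods refer to
              |     handler: kata       # hypothetical handler configured
              |                         # in the node's containerd/CRI-O
              | 
              | A pod then opts in with `runtimeClassName: sandboxed` in
              | its spec.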
        
               | ghaff wrote:
               | Yeah, the terminology around "runtime" is confusing and
               | is used inconsistently. As you say, the actual runtime is
               | something like runc which CRI-O (and I believe
               | containerd) normally uses. CRI-O, as the name suggests,
               | is an implementation of the Kubernetes Container Runtime
               | Interface--which should work with any OCI-compliant
               | runtime.
        
             | mfer wrote:
             | For a runtime in your Kubernetes cluster there are
             | containerd and cri-o. These are good for Docker / Open
             | Container Initiative images.
             | 
             | There are others... some for non-Docker image support.
             | There are people running other things than just Docker
              | these days. They are more of a niche case.
        
             | freedomben wrote:
             | In the OpenShift world we use CRI-O and it has been awesome
             | for us. I've never actually had it be a problem.
             | Occasionally have to SSH into a node and inspect with
             | crictl to see what's going on but it's almost always PEBKAC
             | that points at CRI-O when it's not CRI-O's fault. I'd
             | definitely recommend looking at it.
        
           | judge2020 wrote:
           | > But Docker Inc strongly discourages the docker runtime
           | being used directly for infrastructure.
           | 
           | Is there a list of these defaults or other downsides to using
           | docker instead of containerd?
        
             | jsmith45 wrote:
             | I'm not sure of any such list, but using containerd
             | directly is faster, less likely to break k8s when docker
             | adds new features, etc.
             | 
             | Much of this all stems from the flak infrastructure people
             | gave docker when they made swarm part of the engine. But it
             | comes to more than that. Docker has its own take on
             | networking, on volumes, on service discovery, etc. If you
             | are trying to use docker as a component of your own
             | product, at least some of these are likely things you want
             | to implement differently. And the same may well be true of
             | any new features docker wants to add in the future. At
             | which point one must ask why bother using docker directly?
             | 
             | containerd was quite literally created when docker decided
             | to extract the parts of docker that projects like
             | kubernetes might want to use. It has evolved heavily since
             | then, but that really does capture the level at which it
             | sits. This leaves dockerd in charge of things like swarm,
             | docker's view on how networking should work, docker's take
              | on service discovery, docker's view on how shared storage
             | should work, building containers, etc.
        
             | mfer wrote:
             | Docker did a nice blog post on this a few years ago. Docker
             | uses containerd for running containers. It just does things
             | on top of it that you don't need with Kubernetes. There's a
             | nice diagram in the post, too.
             | 
             | https://www.docker.com/blog/what-is-containerd-runtime/
        
           | nextaccountic wrote:
           | Why do you spell containerd as "ContainerD"?
           | 
           | You wrote dockerd without caps.
        
             | tylersmith wrote:
             | dockerd is the literal name of a binary while containerd is
             | the name of a project. As far as I can tell containerd
             | stylizes its name in all lowercase but more than half the
             | time I still see it written like a standard name,
             | ContainerD, exactly like this.
        
               | cpuguy83 wrote:
               | Being nitpicky here, but the canonical representation of
               | "containerd" is all lowercase, as in the logo.
        
             | kordlessagain wrote:
             | MANY THINGS IGNORE CAPS.
        
         | lucideer wrote:
         | Sibling comments cover the details but to put it simply: there
          | are two definitions of the word "Docker":
         | 
         | 1. [common, informal] "An OCI container".
         | 
         | 2. [pedantic, strictly accurate] "A set of tools for building &
         | interacting with OCI containers".
         | 
         | This article is talking about the latter definition.
        
           | ghaff wrote:
           | It's actually even more confusing than that. There's also
            | Docker, Inc. the company, and there used to be the Docker
            | Enterprise product (although I believe newer versions are now
            | Mirantis Enterprise, Mirantis having bought that part of the
            | business).
           | 
            | Docker is pretty much a textbook example of why you
           | probably shouldn't use the same word for a lot of different
           | things.
        
             | oblio wrote:
             | Java :-)
             | 
             | Better yet, .Net.
        
               | ghaff wrote:
               | Heh. I can't tell you, especially going back a few years,
               | the number of people who claimed to hate Java with a
               | passion because as far as they were concerned it was that
               | security-shredding dialog box that would pop up demanding
               | to be updated. (OK, there are probably other reasons to
               | dislike Java as well but I agree it's a lot of different
               | things.)
        
         | mfer wrote:
          | In basic terms... this is a technical detail that isn't going
          | to impact the vast majority of Kubernetes users. Those who are
          | concerned with running their workloads in Kubernetes, anyway.
         | 
         | The part of Kubernetes that runs containers has had a shim for
         | docker along with an interface for runtimes to use. It's called
         | the Container Runtime Interface (CRI). The docker shim that
         | worked alongside CRI is being deprecated and now all runtimes
         | (including Docker) will need to use the CRI interface.
         | 
         | These days there are numerous container runtimes one can use.
         | containerd and cri-o are two of them. Container images built
         | with Docker can be run with either of these without anyone
         | noticing.
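          | 
          | An easy way to see which runtime each of your nodes is on:
          | 
          |     kubectl get nodes -o wide
          |     # the CONTAINER-RUNTIME column shows e.g.
          |     # docker://19.3.13 or containerd://1.4.1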
        
         | joeskyyy wrote:
         | Kubernetes manages many types of containers, Docker containers
         | just happen to be the most popular (or at least I'd venture to
         | guess). But Kubernetes for a while has supported a few
         | container runtimes (: Here's some k8s docs on a few:
         | https://kubernetes.io/docs/setup/production-environment/cont...
        
         | scrappyjoe wrote:
         | Kubernetes orchestrates containers, but Docker is just one way
         | of running containers. It wraps all the underlying Linux into a
          | nice set of easy-to-use commands. Kubernetes is deprecating
         | interacting with the underlying Linux via the Docker wrapper.
        
         | bitdivision wrote:
         | It will still run docker containers, they're just deprecating
          | the Docker runtime, which is more of an implementation detail.
        
         | astuyvenberg wrote:
         | Simply put, Docker includes a bunch of UX components that
         | Kubernetes doesn't need. Kubernetes is currently relying on a
         | shim to interact with the parts that it _does_ need. This
         | change is to simplify the abstraction. You can still use docker
         | to build images deployed via Kubernetes.
         | 
         | Here's an explanation I found helpful:
         | 
         | https://twitter.com/Dixie3Flatline/status/133418891372485017...
        
           | Patrick_Devine wrote:
           | Former Docker employee here. We've been busy writing a way to
           | allow you to build OCI images with your Kubernetes cluster
            | using kubectl. This lets you get rid of `docker build` and
           | replace it with `kubectl build`.
           | 
           | You can check out the project here:
           | https://github.com/vmware-tanzu/buildkit-cli-for-kubectl
        
             | jrockway wrote:
             | That is a really good idea! Does this just schedule a one-
             | off pod, then have that do the build?
        
               | Patrick_Devine wrote:
               | Not quite a one-off pod, but very close to that. It will
               | automatically create a builder pod for you if you don't
               | already have one, or you can specify one with whichever
               | runtime that you want (containerd or docker). It uses
               | buildkit to do the builds and has a syntax which is
               | compatible with `docker build`.
               | 
               | There are also some pretty cool features. It supports
               | building multi-arch images, so you can do things like
               | create x86_64 and ARM images. It can also do build layer
               | caching to a local registry for all of your builders, so
               | it's possible to scale up your pod and then share each of
               | the layers for really efficient builds.
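                | 
                | For example (tag just illustrative):
                | 
                |     kubectl build -t registry.example.com/myapp:dev .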
        
       | tannhaeuser wrote:
       | As much as I love Docker as an excellent freelance developer tool
       | for juggling customer environments, I just never understood the
       | urge to run entire enterprises on containers. It certainly
       | doesn't make things easier, faster, more secure, or cheaper; all
        | it ever did was isolate shared library dependencies (a self-
        | inflicted problem created by overuse of shared libraries in
        | F/OSS, since static linking has done just the same thing since
        | the dawn of time; of course, in neither case do you get
       | automatic security or stability updates which was the entire
       | point of shared libs). Now they're removing Docker altogether
       | from the k8s stack? So much for Docker's perceived "isolation" I
       | guess.
        
         | deadmik3 wrote:
         | k8s runs containers, docker is just one implementation of
         | containers.
        
         | yahyaheee wrote:
          | It makes things reproducible, and k8s is still containers,
          | just not docker.
        
         | bird_monster wrote:
         | From your post, I think you might fundamentally misunderstand
         | Docker's use/value. From a value-add standpoint, Docker doesn't
         | really care about "isolating shared library dependencies", but
         | instead, compartmentalizing an entire application, dependencies
         | and all. The value in this, of course, is that you no longer
         | have to care about version conflicts between resources that are
         | sharing a machine. As an added bonus, it means your deployment
         | process can stay the same regardless of the type of container
         | you're deploying. Before, if you had to deploy a Ruby app as
         | well as a Python app, those required fundamentally different
         | processes, as they each require their own package managers and
         | interpreters. With a container, you compile each of those tools
         | _into the container_, and then your deployment process is just
         | "Create container image, send it somewhere".
         | 
         | Hell, even if you wrote an application with 0 dependencies,
         | you're still on the hook for installing the correct version of
         | its compiler, the correct version of your deployment tool, and
         | the correct version/OS of your VM. These are still
         | dependencies, even if they're not dev dependencies.
         | 
         | > It certainly doesn't make things easier, faster, more secure,
         | or cheaper;
         | 
         | If you don't think being able to reuse software makes your
         | workflow easier, faster and at the very least cheaper, I'm not
         | sure what you could possibly think would do those things.
        
           | tannhaeuser wrote:
           | I'm sure you believe what you're saying. But, as pointed out
            | in many posts here, very few people can set up Kubernetes,
            | let alone know it well enough to troubleshoot it. As an
            | example, in a project of mine we had to call in a k8s
            | expert after almost a week of downtime (it turned out the
            | IP address space was exhausted on that Azure instance). And
            | a constant across almost all of my recent projects is
            | people fiddling with k8s integration setups and achieving
            | very little.
           | 
           | In that kind of situation, it is unwise and irresponsible to
           | treat your infrastructure as a black box. You still need to
           | be able to re-build/migrate your images for security,
           | stability, and feature upgrades, so you're basically just
           | piling additional complexity on top.
           | 
            | The premise of Kubernetes and containers/clouds is an
            | economic (and legitimate) rather than a technical one: you
            | don't have to invest in hardware upfront, and you pay as
            | you go with PaaS instead. That tactic only works, though, as
            | long as you have a strong negotiating position as a
            | customer. In practice, even if you don't get locked in to
            | cloud providers by tying your k8s infra to IAM or other
            | auth infrastructure, or by mixing Kubernetes with
            | non-Kubernetes SaaS such as DBs (which suck on k8s), you
            | still won't be able to practically move your workload setup
            | elsewhere due to sheer complexity and risk/downtime.
           | 
            | The economic benefit is further offset by the mistaken
            | assumption that Docker means you need fewer or no admin staff
           | ("DevOps" in an HR sense).
        
             | bird_monster wrote:
             | > But, as pointed out in many posts here, very few people
             | can setup Kubernetes,
             | 
              | My post, and most of yours, was not about Kubernetes but
              | about containers in general. I don't care for
             | Kubernetes, and would actively reject using it 99% of the
             | time. Your post, however, was mostly about containerization
             | of applications, whose validity has nothing to do with one
             | particular product or pattern (Kubernetes).
             | 
             | Containers are an almost unanimous win in terms of the
              | simplification of development and deployment. Treating
              | Kubernetes as the only approach to containerization is a
              | farce.
        
         | dbmikus wrote:
         | Kubernetes still uses containers, just not Docker. From the
         | release notes:
         | 
         | > We encourage you to evaluate moving to a container runtime
         | that is a full-fledged implementation of CRI...
        
       | renewiltord wrote:
       | Ah I see. You need a runtime in k8s that actually runs the
       | containers that are in pods. So you can use Docker to run those
       | containers, or containerd or whatever. Each of your k8s nodes has
       | to have this program running to run the containers. So they don't
       | want to support that first one.
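        | 
        | If you're curious which runtime your nodes are on, kubectl will
        | already tell you (sample output abridged; versions are
        | illustrative):
        | 
        |     $ kubectl get nodes -o wide
        |     NAME     STATUS   ...   CONTAINER-RUNTIME
        |     node-1   Ready    ...   docker://19.3.13
        |     node-2   Ready    ...   containerd://1.4.1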
       | 
       | Not a big deal. It's some backend stuff that's not interesting to
       | people who use managed k8s. Cool cool.
        
         | freedomben wrote:
         | Yep, NBD. OpenShift removed Docker a while ago and replaced it
         | with CRI-O. 99% of people never noticed, and the ones that did
         | just like to know how things work on the inside.
        
       | Gravityloss wrote:
       | > Docker support in the kubelet is now deprecated and will be
       | removed in a future release. The kubelet uses a module called
       | "dockershim" which implements CRI support for Docker and it has
       | seen maintenance issues in the Kubernetes community. We encourage
       | you to evaluate moving to a container runtime that is a full-
       | fledged implementation of CRI (v1alpha1 or v1 compliant) as they
       | become available. (#94624, @dims) [SIG Node]
        
       | GordonS wrote:
       | Read about this over on Twitter:
       | https://twitter.com/Dixie3Flatline/status/133418891372485017...
       | 
       | The only "official" notice about it so far seems to be in the
       | linked changelog.
        
         | boberoni wrote:
         | This was very useful. Thanks for sharing.
         | 
         | It seems that Docker images will still run fine on k8s. The
          | main change is that they're moving away from the "Docker
          | runtime", which has to be installed on each of the nodes in
          | your cluster.
         | 
         | More details about k8s container runtimes here:
         | https://kubernetes.io/docs/setup/production-environment/cont...
        
           | jsmith45 wrote:
            | Right. The simplest option is to use containerd as the
            | runtime. Installing docker will also install containerd
           | (because docker uses it internally), so nothing much needs to
           | change except a configuration option. docker and k8s can run
           | side by side sharing the same containerd instance, in case
           | you need to do something like build containers inside your
           | k8s cluster.
           | 
           | You lose out on things that require access to the docker
           | daemon socket, but ideally any such software should be
           | replaced with something that talks with the kubernetes API
            | instead. (The exception is building containers in-cluster;
            | if you need that, run docker side by side with the kubelet,
            | or use buildkit with containerd integration.) You also lose
            | the ability to interact with containers via the docker cli
            | tool. Use crictl instead, which has most of the same
            | commands but also includes certain k8s-relevant information
            | in its output tables.
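            | 
            | A sketch of what the switch amounts to (flag names as of
            | the k8s 1.20 era; the socket path may differ by distro):
            | 
            |     # Point the kubelet at containerd instead of dockershim
            |     kubelet --container-runtime=remote \
            |       --container-runtime-endpoint=unix:///run/containerd/containerd.sock
            | 
            |     # Inspect containers with crictl instead of the docker CLI
            |     crictl ps
            |     crictl logs <container-id>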
        
           | ccmcarey wrote:
           | Yep, they're not really "Docker" images. A while ago the
           | image/container formats were standardized through the OCI.
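            | 
            | You can see this for yourself: skopeo (one tool among
            | several) will dump the raw manifest, which is plain OCI /
            | Docker v2 JSON rather than anything runtime-specific:
            | 
            |     # Fetch the raw manifest for an image from a registry
            |     skopeo inspect --raw docker://docker.io/library/alpine:latest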
        
         | fhrow4484 wrote:
         | The twitter thread has way more details about the change, which
         | is why I submitted it here:
         | https://news.ycombinator.com/item?id=25279424
         | 
         | https://twitter.com/IanColdwater/status/1334149283449352200
         | also has some details
        
       | darknessmonk wrote:
        | So... What are the better runtime alternatives?
        
         | q3k wrote:
         | The 'standard' one is containerd.
        
         | johnaoss wrote:
         | I've heard good things about https://containerd.io
         | 
          | The Kubernetes documentation has a setup guide (for
          | containerd as well as CRI-O) here:
          | https://kubernetes.io/docs/setup/production-environment/cont...
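          | 
          | If I remember the containerd part of that guide right, the
          | gist is just (paths may vary by distro):
          | 
          |     # Generate a default config, then enable the systemd
          |     # cgroup driver by setting SystemdCgroup = true under
          |     # the runc options in config.toml
          |     containerd config default | sudo tee /etc/containerd/config.toml
          |     sudo systemctl restart containerd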
        
       | GordonS wrote:
       | @mods, would appreciate if someone could change the title to
       | "Kubernetes is deprecating Docker runtime support" (I
       | accidentally missed the word "runtime" when submitting).
        
         | alpb wrote:
         | Your post is entirely clickbait. Docker runtime support doesn't
          | really matter since most people have already moved to other
          | runtimes like containerd/runc.
        
         | andyjohnson0 wrote:
         | You can edit the title for two hours after submitting the
         | article.
        
           | asah wrote:
           | even better: deprecating non-essential Docker components (or
           | something to that effect). Currently, this is clickbait.
        
           | Supermancho wrote:
           | > You can edit the title for two hours after submitting the
           | article.
           | 
           | The submitter can. This kind of misses the point anyway. The
           | title is misleading.
        
             | [deleted]
        
             | andyjohnson0 wrote:
             | I was replying to the submitter.
        
             | GordonS wrote:
             | I am the submitter, I just didn't know I could edit the
              | title after 7 years of using HN! I've gone ahead and done
              | it now.
        
       | crizzlenizzle wrote:
       | About time.
       | 
       | Coincidentally, today I watched three presentations about burning
       | Kubernetes clusters and all of them had Docker daemon issues in
       | the mix. I've been using Docker for over five years myself and
       | I've been using Kubernetes for almost two years now. Most of the
       | pain I've encountered has been with Docker or its own ecosystem.
       | 
       | In the last two years it has repeatedly hit weird race
       | conditions where it corrupted its IPAM state or simply couldn't
       | start containers after a restart. Also, its IPv6 support is
       | just a joke.
       | 
       | Sorry, I had to rant and I hope that this announcement will fuel
       | the development of Docker alternatives even more.
        
       | bobbyi_settv wrote:
       | > We encourage you to evaluate moving to a container runtime that
       | is a full-fledged implementation of CRI (v1alpha1 or v1
       | compliant) as they become available.
       | 
       | So Docker is deprecated, but no replacement is yet available?
        
         | geerlingguy wrote:
          | I believe at least containerd and CRI-O are available and
          | already in fairly wide use. (There are some others I've seen,
          | too.)
         | 
         | It's just saying if you use something else, it must follow at
         | least the v1alpha1 or v1 CRI runtime standard.
        
       ___________________________________________________________________
       (page generated 2020-12-02 23:00 UTC)