[HN Gopher] Docker is deleting Open Source organisations - what ...
       ___________________________________________________________________
        
       Docker is deleting Open Source organisations - what you need to
       know
        
       Author : alexellisuk
       Score  : 1148 points
       Date   : 2023-03-15 10:57 UTC (12 hours ago)
        
 (HTM) web link (blog.alexellis.io)
 (TXT) w3m dump (blog.alexellis.io)
        
       | coding123 wrote:
       | There are other repositories to put your images and fetch images,
       | right?
        
       | MarkSweep wrote:
        | One annoyance with how docker images are specified is that they
        | include the location where they are stored. So if you want to
        | change where you store your image, you break everyone.
       | 
        | I wonder if what registry.k8s.io does could be generalized:
       | 
       | https://github.com/kubernetes/registry.k8s.io/blob/main/cmd/...
       | 
        | The idea is that depending on which cloud you are pulling the
        | image from, it will use the closest blob store to service the
        | request. This also has the effect that you could change the
        | source of truth for the registry without breaking all
        | Dockerfiles.
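The coupling described here is visible in the reference format itself: the registry host is the first component of the image name, so moving an image means rewriting every reference to it. A quick shell illustration (the image name below is made up):

```shell
# An image reference bakes the registry host into the name:
#   [registry-host[:port]/]namespace/name[:tag]
ref="ghcr.io/example-org/example-app:1.2.3"   # hypothetical image
registry="${ref%%/*}"    # everything before the first slash -> ghcr.io
path="${ref#*/}"         # example-org/example-app:1.2.3
name="${path%%:*}"       # example-org/example-app
tag="${ref##*:}"         # 1.2.3
echo "registry=$registry name=$name tag=$tag"
```

Every Dockerfile, compose file, and CI script stores that full string, which is why a stable front host like registry.k8s.io (one fixed name, swappable backends) avoids the breakage.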
        
       | anon4242 wrote:
       | I wonder if this is related to the large number of more and more
       | desperate sounding emails I've been getting from a Docker sales
       | rep.
       | 
       | Are they bleeding and trying desperately to find additional
       | revenue streams?
        
       | dadrian wrote:
       | I migrated to Github Container Registry in less time than it took
       | to read this blog.
        
       | Forestessential wrote:
        | It's a business
        
       | aakash_test wrote:
       | Test
        
       | prepend wrote:
       | Is there any drop in replacement for dockerhub? I'm concerned
       | about all my random oss containers (like ruby-latest) that may
       | come from orgs that aren't able or willing to pay.
       | 
       | Is there another container hosting site?
        
         | xrd wrote:
         | You can run the docker registry inside dokku. It's fantastic. I
         | have an endpoint that is private for pushing with credentials,
         | and a public endpoint that is open. This requires a little
         | mucking around with the nginx configuration of the app, but
         | totally doable.
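For anyone wanting to reproduce a setup like this outside dokku, a rough sketch using the open-source registry image (registry:2) with htpasswd-protected pushes; the paths, port, and volume names are placeholders, not xrd's actual config:

```yaml
# docker-compose.yml — hypothetical self-hosted registry, basic-auth protected
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: "Registry"
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd  # create with `htpasswd -Bbn user pass`
    volumes:
      - ./auth:/auth
      - registry-data:/var/lib/registry
volumes:
  registry-data:
```

The public-pull/private-push split xrd describes would still need a reverse proxy (e.g. nginx) in front, allowing anonymous GETs while keeping auth on pushes.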
        
         | gabeio wrote:
         | Aws seems to have mirrored some of docker hub for us
         | (docker/library images seem to be hosted clones) and has their
         | own public repos as well (not just docker hub's images):
         | 
         | https://gallery.ecr.aws/
         | 
         | edit: typo
        
         | dijit wrote:
         | You could always try mirror.gcr.io
         | 
         | https://cloud.google.com/container-registry/docs/pulling-cac...
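Using mirror.gcr.io as a pull-through cache is a one-line daemon config change (in /etc/docker/daemon.json, followed by a daemon restart); note that the mirror list only applies to Docker Hub pulls:

```json
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
```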
        
         | remram wrote:
         | quay.io and ghcr.io are popular free options.
        
       | ixtli wrote:
       | Profit motive above all else is fundamentally incompatible with
       | the social engine that powers the open source community. It
        | always has been and always will be. I'm no longer surprised, but
        | I'm still disappointed.
        
         | unity1001 wrote:
         | > Profit motive above all else is fundamentally incompatible
         | with the social engine that powers the open source community.
         | 
          | It's not. The freemium format works splendidly in various
          | ecosystems, one of the biggest being WordPress. It enabled the
          | WP ecosystem to fund itself and grow without VC or investor
          | money. Real indie growth. So it's possible.
         | 
          | Without funding from its own userbase to sustain itself, an
          | Open Source project just flops eventually. Few remain, and only
          | if they are way too big or can get corporate sponsors. That's
          | not being 'free'. Real freedom is in Open Source being funded
          | by its users without the unreliable mechanism of donations.
        
       | [deleted]
        
       | zxcvbn4038 wrote:
        | The OP goes through a lot of trouble to obscure that they are
        | asking for $35 a month, which honestly I think most people can
        | afford, even if it's open source software they develop only out
        | of kindness. So I'm not really buying that argument.
       | 
       | That said I don't really want to reward Docker for writing
       | themselves in as the distribution hub for all things docker and
       | then more or less extorting money from people.
       | 
        | I think the solution is don't give Docker a dime, just run your
        | own registry on Digital Ocean, that's $5/mo. If we can front the
        | registry server with a CDN then it's potentially free.
        
         | TheRealDunkirk wrote:
         | > I don't really want to reward Docker for writing themselves
         | in as the distribution hub for all things docker and then more
         | or less extorting money from people.
         | 
         | I finally bit the bullet and paid for YouTube Premium. The ad
         | "experience" on YouTube has become so abysmal and unblockable,
         | that to use it at all really forces you into it. I had a hard
         | time expressing why this was so aggravating -- considering that
         | I see literally 5 other streaming services I pay for on the
         | same launch screen -- but this really nails it. It's the same
         | monopolize-for-free-and-then-charge-and-extract-all-rent-
         | forever move at the heart of every digital service now.
        
           | zxcvbn4038 wrote:
           | I just ignore the ads on youtube. I don't own any animals so
           | the pet food ads are a miss. I don't take any medicines so
           | I'm not buying those either. I work remotely so I'm not going
           | to buy a new car. I already have a VPN so those are all
           | pointless also. I think that is all they try to advertise to
           | me.
           | 
           | Oh yeah! I will never play Rage Shadow Legends in this life
           | or the next. Give it up!
        
         | 999900000999 wrote:
         | Ehh.
         | 
         | How much money should I spend on my hobbies, 35$ isn't a lot,
         | but I'd rather not waste it.
         | 
         | Plus instead of just grabbing the image I come up with to do x
         | or y, you'll have to implement it yourself. Duplicate this
         | hundreds or thousands of times.
        
           | zxcvbn4038 wrote:
           | I would spend $35/m on a hobby - but not on Docker. I
           | wouldn't even spend $5 to host on my own registry. I think a
           | dockerfile and a brew formula is all you'd get out of me.
        
       | jmac01 wrote:
        | I haven't used docker but my understanding is that dockerhub
        | hosts docker images which are essentially just text files? Would
        | that be something that could just be migrated to another platform
        | easily or does dockerhub do a lot of other things too?
        
         | albert_e wrote:
         | dockerfile is a text file spec on how to build a docker image.
         | 
         | a container image (analogous to a VM snapshot) is built from a
         | dockerfile.
         | 
          | but docker hub contains the actual images (that run into MBs
         | and GBs) not just dockerfiles.
         | 
          | most dockerfiles don't build an image from scratch. they start
          | with a "FROM" keyword that references an existing pre-built
          | image and then add some layers of files and configuration on
          | top.
         | 
          | every time you build a containerized app, your build scripts
         | first pull down the latest pre-built base image referenced by
         | your app's dockerfile.
         | 
          | so an image registry like docker hub is core and essential for
         | thousands of build pipelines and automation that run across
         | thousands of companies globally.
         | 
         | there are some alternatives like Amazon ECR, and private
         | registries hosted by big companies on their own.
         | 
         | but a lot of projects and pipelines still depend on public
         | images of commonly used ones like Linux flavours and distros
         | maintained by various teams.
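The FROM layering described above, as a minimal (made-up) Dockerfile; the build cannot even start until the base image has been fetched from a registry:

```dockerfile
# base image: pre-built, pulled from a registry
# (Docker Hub is assumed when no registry host is given)
FROM python:3.11-slim

# each instruction below adds a layer on top of the base
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```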
        
         | paxys wrote:
         | Dockerfiles are text files. Docker images are entire OS file
         | system snapshots (not exactly but close enough) built from
         | those Dockerfiles. A single one can be hundreds of MBs or even
         | multiple GBs in size.
        
       | jiggywiggy wrote:
        | It was always unbelievable to me how much they hosted for free.
        | I recklessly pushed over 100gbs of containers over the last few
        | years, all free. Never made sense to me, even google doesn't do
        | this anymore.
        
         | wenyuanyu wrote:
         | There are techniques to compress and dedup redundancies... I
         | doubt it is real 100gbs on their disks...
        
           | qbasic_forever wrote:
           | It's still 100gb over the wire and those bandwidth costs add
           | up, especially if it's a popular image used by tons of
           | projects.
        
             | wenyuanyu wrote:
             | yes, but the downward traffic costs (via docker pull) are
             | likely their main expense, not the upward transfer.
        
           | 101011 wrote:
           | Even so...storage is not free.
           | 
           | Looking at the rates of enterprise storage costs compared to
           | what Google or Apple charges consumers - I was surprised by
           | how subsidized people's photo libraries are.
        
             | toyg wrote:
             | They are not, but actual usage is typically a single-digit
             | % of promised space. So power users are served at cost (or
             | even at loss), but the overwhelming majority of users are
             | actually overpaying for what they use.
        
             | VWWHFSfQ wrote:
             | oh that's completely different. They want you to host your
             | photos with them because then you can never leave their
             | platform.
        
               | girvo wrote:
               | Apple does a pretty bad job of it then, because I have a
               | local copy of my entire photo library too on an external
               | hard drive. It's quite nice really, cloud storage plus a
               | local copy. I guess it's somewhat of a moat because
               | switching to some other cloud provider or my own system
               | will be more expensive?
        
               | VWWHFSfQ wrote:
               | obviously hn readers will know how to copy the photos to
               | their own computer. most people won't and that's the
               | point
        
               | 101011 wrote:
               | And it's different in more ways than one. Hosting the
               | images is just one fraction of the features that you get
               | with Apple or some other provider. Searching, albums,
               | sharing, are all baked in services that are still cheaper
               | than, say, going through S3 and having a bucket with
               | similar storage.
        
         | remram wrote:
         | We are even using Docker Hub to store and distribute VM
         | images... The so-called "container disk image" format is
         | sticking a qcow2 file in a Docker image and storing it on a
         | Docker registry.
         | 
         | https://github.com/kubevirt/kubevirt/blob/main/containerimag...
        
       | yencabulator wrote:
        | Container images should just be files at a URL; anything else
        | just leads to this sort of rent-seeking.
        
       | Wronnay wrote:
       | Gitea has support for packages, so if you don't want to use
       | GitHub or something similar, you could simply self-host your
       | packages.
        
         | verdverm wrote:
         | Then you have to pay for distribution, which is what people are
         | using Docker Hub to avoid (grift off of?)
        
       | bmitc wrote:
       | It is my understanding that Microsoft has previously tried to
       | purchase Docker. Despite me having problems with companies buying
       | up each other, I wouldn't be surprised if Microsoft revisits, or
       | already is revisiting, buying Docker.
       | 
       | Being a heavy Visual Studio Code user, I have centered my
       | personal development around Docker containers using VS Code's
       | Devcontainer feature, which is a very, very nice way of
       | developing. All I need installed is VS Code and Docker, and I can
       | pull down and start developing any project. (I'm not there yet
       | for all my personal projects, but that's where I'm headed.)
        
       | xrd wrote:
       | I run multiple docker registries inside dokku. It's fantastic.
       | $5/mo for peace of mind.
        
       | ZunarJ5 wrote:
       | 1. I just started teaching myself Docker for my home server. How
       | will this affect me?
       | 
       | 2. What is a good alternative to Docker?
       | 
       | Thanks!
        
         | marvinblum wrote:
          | 1. You probably won't be able to pull some of the images any
          | longer and will need to find an alternative host (ghcr.io
          | instead of docker.io, for example)
         | 
         | 2. Podman
        
           | ZunarJ5 wrote:
           | Thank you!
        
       | lloydatkinson wrote:
       | Is there any progress on podman for Windows or any other way of
       | running containers on Windows? I cannot wait for the day the
       | development community doesn't need to rely on anything from this
       | company.
        
         | taspeotis wrote:
         | I use podman and it works fine?
         | 
         | Podman Desktop runs podman machine for me at startup.
         | 
         | Containers set to restart automatically don't restart across
         | podman machine restarts but that hasn't upset my workflow much
         | (at all?). I just start containers as I need them.
        
           | girvo wrote:
            | I wonder if VSCode's Remote Container stuff works with
            | Podman on Windows. Docker Desktop + VSCode + Remote Container
            | Dev + the ESP-IDF tooling is the nicest way to do production
            | ESP32 dev with multiple ESP-IDF versions (and even without
            | that it is just much simpler to get up and running on
            | Windows, despite the Rube Goldberg machine-esque description
            | of it)
        
             | bluehatbrit wrote:
             | I've been playing around with it recently on macos and
             | Podman is still a bit hit or miss. It seems mostly to be
             | around assumptions of permissions. I switched over to
             | colima (which is specifically for macos) and have had next
             | to no issues though. Hopefully podman is able to tick off
             | those last few boxes and make it properly stable in this
             | use case.
        
         | orra wrote:
         | Both Podman Desktop and Rancher Desktop are viable options for
         | running Linux containers on Windows.
        
         | danielnesbitt wrote:
          | Is there something in particular missing? I have been using
          | Podman for Windows almost daily for the past six months. There
          | is no management GUI built in like Docker for Windows, but I
          | have not found that to be a problem at all.
        
       | koolba wrote:
       | Can we just get the big three cloud players to make a new public
       | repo? They've got oodles of bandwidth and storage, plus the
       | advantage that a lot of access would be local to their private
       | networks.
       | 
        | Set up a non-profit, dedicate resources from each of them
       | spendable as $X dollars of credits, and this problem is solved in
       | a way that works for the real world. Not some federated mess that
       | will never get off the ground.
        
         | pimterry wrote:
         | Consensus on a new repo for public community images would help,
         | but it isn't the biggest problem (as the author notes, GHCR
         | does that already, and GitHub seem pretty committed to free
         | hosting for public data, and have the Microsoft money to keep
         | doing so indefinitely if they like).
         | 
          | The issue I worry about is the millions of blog posts, CI
          | builds, docker-compose files, tutorials & individual user
          | scripts that all reference community images on Docker Hub, a
          | huge percentage of which are about to disappear, apparently all
          | at once 29 days from now.
         | 
         | From a business perspective particularly, this looks like
         | suicide to me - if you teach everybody "oh this guide uses
         | Docker commands, it must be outdated & broken like all the
         | others" then you're paving a path for everybody to dump the
         | technology entirely. It's the exact opposite of a sensible
         | devrel strategy. And a huge number of their paying customers
         | will be affected too! Most companies invested enough in Docker
         | tech to be paying Docker Inc right now surely use >0 community
         | images in their infrastructure, and they're going to see this
         | breakage. Docker Inc even directly charge for pulling lots of
         | images from Docker Hub right now, and this seems likely to
         | actively stop people doing that (moving them all to GHCR etc)
         | and thereby _reduce_ the offering they're charging for! It's
         | bizarre.
         | 
         | Seems like a bad result for the industry in general, but an
         | even worse result for Docker Inc.
        
           | JustBreath wrote:
           | Yeah that's going to be the real issue, all the niche
           | unmaintained images that no one is going to pick up the
           | pieces for.
           | 
           | They're taking a big chunk of open source and tossing it in
           | the garbage.
        
         | miyuru wrote:
         | aws already have one https://gallery.ecr.aws/
        
           | 8organicbits wrote:
           | Unlimited free downloads inside AWS. First 5TB of outbound
           | transfer free. Then $0.09/GB for additional transfer.
           | 
           | https://aws.amazon.com/ecr/pricing/
        
           | schneems wrote:
           | My team used ECR for some stuff and it's not great. We want
           | to move on from it.
        
             | Mustard_D wrote:
              | My team also uses ECR, but I've not got any complaints.
              | What issues do you have with it?
        
         | remram wrote:
         | quay.io is a pretty popular general-purpose repo, it replaced
         | docker.io for many projects when they started rate-limiting.
        
           | jacooper wrote:
           | There is no free tier https://quay.io/plans/
        
             | blcknight wrote:
              | > Can I use Quay for free?
              | 
              | > Yes! We offer unlimited storage and serving of public
              | > repositories. We strongly believe in the open source
              | > community and will do what we can to help!
              | 
              | It's completely free for public repositories.
        
               | jacooper wrote:
                | Wow! Didn't notice at all since it's at the end of the
                | page.
        
       | acd wrote:
        | Since the FDIC takeover of Silicon Valley Bank, the "free"
        | services will go away. Free central bank money is gone due to
        | higher inflation and interest rates. Free open source tiers are
        | going away.
        
       | ridruejo wrote:
       | Without entering into the specifics of this situation, I don't
       | understand the hate for Docker the company. They are providing a
       | huge service for the community and looking for ways to make money
       | from it to make it sustainable. I would give them a bit more
       | empathy/benefit of the doubt as they iterate on their approach.
       | Somewhere, somehow, someone has to pay for that storage and
       | bandwidth whether directly or indirectly (I am old enough to
       | remember what happened with sourceforge so I rather them find a
       | model that works for everyone)
        
         | HelloNurse wrote:
         | If you inconvenience all users (by devastating the "ecosystem"
         | of publicly available images) in order to extort money from a
         | few users (some organizations will pay up, at least
         | temporarily) you should expect hate.
         | 
         | The only benefit of doubt Docker deserves is on a psychological
         | plane: evil or stupid?
        
         | NineStarPoint wrote:
         | It's a long standing hate for me that isn't limited to just
         | Docker, companies that used "we're free" to obtain massive
         | growth only to turn around and switch monetization models
         | completely once they've become the dominant player in the
         | market. It's a massive distortion on the market, driving
         | companies that tried to be fiscally sound from the start into
         | irrelevancy while extremely inefficient ventures become the
         | market leaders on account of superior funding.
         | 
         | Or to put it another way, Docker should have been focused on
         | sustainability from the start and not dangled a price they knew
         | couldn't last in front of people to increase adoption.
        
         | armchairhacker wrote:
         | I agree they deserve to get paid, but there are better ways
         | than essentially holding customers' data and URLs hostage. The
         | problem is they are trying to extract money from other open-
         | source developers who are at least as cash-strapped as them.
         | 
         | Plus, I doubt they will get many people to actually start
         | paying. People will simply move to other storage (like Github)
         | and switch the URLs. Docker is fully open-source and works
         | without docker.io, they don't really have a position here
         | except owning the name.
         | 
         | IMO they just need to edit / clarify that open-source
         | developers and organizations _won't_ need to pay, only those
         | who presumably should have the funds. And take a more passive
         | stance: bug people with annoying messages like Wikipedia does,
         | and threaten shutting down docker.io altogether if they don't
         | _somehow_ get funding (some people will complain about this too
         | but more will understand and will be sympathetic). Wikimedia,
         | Unix /Linux, Mozilla, etc. as well as Homebrew/cURL/Rust all
         | seem to be doing fine as nonprofits without creating huge
         | controversies like this.
        
       | didip wrote:
       | Docker should have sold to Microsoft. Whatever they have been
       | doing recently is a mistake.
        
       | nikanj wrote:
       | All free tiers everywhere are going away, because someone always
       | figures out how to run a crypto miner on them.
        
         | fathyb wrote:
         | How can you mine crypto using Docker Hub?
        
           | Tao3300 wrote:
           | https://youtu.be/1EF6kB9q4vg
        
           | orangepurple wrote:
           | Nice try
        
             | henrydark wrote:
             | I love this comment because I only read the above, started
             | thinking about how I would do it, then was like "ah! I'll
             | reply...", and then saw it and changed course
        
               | adolph wrote:
               | I have to keep this one in mind at all times: 356: Nerd
               | Sniping
               | 
               | https://xkcd.com/356/
        
         | madduci wrote:
          | Storage ain't free either; someone uploads huge images
        
           | pilif wrote:
           | Traffic is probably the higher cost
        
       | Havoc wrote:
       | To me this smells of VC model issues.
       | 
       | Initially it's great if you can get all the FOSS to play in your
       | technology walled garden. Subsidize it with VC cash.
       | 
       | Downside is it generates a ton of traffic that is hard to
       | monetize. Sooner or later it reaches a point where it can't be
       | subsidized and then you get pay up or get out decisions like
       | this.
       | 
        | One question I haven't seen asked yet: is 420 USD what it costs
        | to serve the average FOSS project? Or is that number a bad Elon-
        | style joke? If they came out with "We've calculated X as actual
        | costs. We're making no margin on this but can't free-lunch this
        | anymore" that would go down a lot better, I think.
        
       | LoonyFruiter wrote:
       | What alternatives to docker are there on ARM based devices like
       | apple silicon macs? I'd like to get off of Docker in light of
       | this change.
        
         | SparkyMcUnicorn wrote:
         | I use Colima. https://github.com/abiosoft/colima
        
       | worik wrote:
       | Sucked in.
       | 
        | I am sorry so many people got caught out by this. But it is a
        | regular pattern in tech.
       | 
       | Cory Doctorow coined a word for this: enshittification.
       | https://pluralistic.net/tag/enshittification/
        
       | nathantotten wrote:
        | It's pretty easy to set up a Github Action that mirrors images
        | to a private registry. We use this to mirror to GCP Artifact
        | registry:
       | https://zuplo.com/blog/2023/03/15/mirroring-docker-images-wi...
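The same mirroring can also be done with nothing but the docker CLI. A sketch that only prints the pull/tag/push commands for review — the ghcr.io namespace is a placeholder; pipe the output to sh to actually execute it (docker required at that point):

```shell
# Hypothetical mirror helper: emits the commands needed to copy one
# Docker Hub image into another registry namespace, keeping name:tag.
DEST="ghcr.io/example-org"        # placeholder destination namespace
mirror_cmds() {
  src="$1"                        # e.g. library/redis:7
  dst="$DEST/${src#*/}"           # drop the Hub namespace, keep name:tag
  echo "docker pull docker.io/$src"
  echo "docker tag docker.io/$src $dst"
  echo "docker push $dst"
}
mirror_cmds library/redis:7
mirror_cmds example-user/example-tool:latest
```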
        
       | Ekaros wrote:
        | It is kinda strange that developers, who should be aware of
        | their own costs, think that these sorts of services will be free
        | forever, or even remain available at all. Especially when there
        | is clearly not much monetization going on, unlike when services
        | are offered with ads.
       | 
       | Big players can carry the costs, but what can small ones really
       | do once money runs out?
        
       | advancingu wrote:
       | As I commented in yesterday's thread, this is not the first time
       | Docker is pulling the plug on people with very short advance
       | notice: See https://news.ycombinator.com/item?id=16665130 from
       | 2018
       | 
       | Someone back then even wondered what would happen if such a
       | change happened to Docker Hub
       | https://news.ycombinator.com/item?id=16665340 and here we are
       | today.
        
       | mc4ndr3 wrote:
       | I am confused by the meaning of Docker's announcement. They keep
       | saying "organizations" will have Docker images deleted. Does that
       | include personal FOSS images or not? Because the vast majority of
       | Docker Hub images are uploaded by individual contributors, not
       | "organizations."
       | 
       | Too bad about their poor relationship with the FOSS community.
       | I've applied to them for years, and actually merged some minor
        | patches to Docker to help resolve a Go dependency fiasco. Zero
       | offers.
       | 
       | I guess the next logical move is to republish any and all non-
       | enterprise Docker images to a more flexible host like the GitHub
       | registry.
        
       | Quekid5 wrote:
       | Are they actively trying to hasten their own demise?
       | 
       | I guess I'll be moving our (FOSS) container images to GitHub
       | Registry... but what a pain :/
        
         | davedx wrote:
         | Github also charges for usage. Are they also hastening their
         | demise?
         | 
         | AWS also charges you to host things in S3. Is it hastening its
         | demise too?
         | 
         | This argument is just weird to me.
        
         | toastal wrote:
        | Can you move to GitLab? GitHub is purely proprietary, which
        | doesn't match the spirit of FOSS. At least GitLab is open core.
        
           | martypitt wrote:
           | Gitlab have been increasing pricing heavily recently,
           | especially in storage-heavy areas such as containers.
           | 
           | I don't think they're likely to be a cost-effective
           | differentiator against what Docker are now charging.
        
           | jck wrote:
           | iirc gitlab's free tier has a storage and bandwidth limit
           | which applies to the container registry too. That makes
           | gitlab a no go if you want to share containers widely.
        
         | candiddevmike wrote:
         | Just did this yesterday, it was surprisingly easy. I used the
         | HashiCorp Vault secrets plugin for GitHub and pushed containers
         | via GitHub Actions, so for me it became more secure than
         | storing and retrieving a docker hub API key.
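A setup like this boils down to a couple of workflow steps. A rough sketch using the built-in GITHUB_TOKEN instead of a stored Docker Hub API key (the image tag is a placeholder):

```yaml
# .github/workflows/publish.yml (fragment) — hypothetical GHCR push
- name: Log in to GHCR
  uses: docker/login-action@v2
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}

- name: Build and push
  uses: docker/build-push-action@v4
  with:
    push: true
    tags: ghcr.io/${{ github.repository }}:latest
```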
        
       | Liberonostrud wrote:
        | What is the alternative?
        
         | yrro wrote:
         | GitHub container registries are free for open source projects.
         | For now...
        
       | twblalock wrote:
       | It was sad to see people defending Docker Desktop changing from
       | free to paid licenses. Now Docker is charging for even more
       | things that used to be free.
       | 
       | The defenders are reaping what they have sown. Next time a
       | company starts to charge for things that used to be free,
       | remember not to encourage it, because that will only make it
       | happen more.
       | 
       | People don't like this and many of them are not going to trust
       | Docker in the future.
        
         | grumple wrote:
         | Nothing a company does is free to them. To expect them to
         | provide a free service at all, let alone one with high costs
         | associated with it, is not reasonable. They don't owe the world
         | free service, same for any other company.
        
           | twblalock wrote:
           | When you offer something for free and then suddenly start
           | charging for it, you are jerking around your customers --
           | especially when you do it with such confusing and incomplete
           | communication as Docker has provided in this instance.
           | 
           | This is effectively a long-term bait and switch. This is not
           | ok.
        
           | kuratkull wrote:
           | If a company offers something for free, it's likely part of
           | their business plan. However, if they start charging for that
           | feature, users may feel uneasy. Digital Ocean offers a
           | container repo for $5/month, so users may be upset about the
           | perceived mismatch between price and features. Users not
           | paying signals that the company has failed in its business
           | strategy. Additionally, the repo hosting service has weak
           | vendor lock-in, making it easy for users to switch providers.
           | Businesses must be mindful of user feedback and work hard to
           | retain customer loyalty in a competitive market.
        
       | foepys wrote:
       | I posted this in the other thread already but will also add it
       | here. https://news.ycombinator.com/item?id=35167136
       | 
       | ---
       | 
        | In an ideal world every project would have its own registry.
        | Those
       | centralized registries/package managers that are baked into tools
       | are one of the reasons why hijacking namespaces (and typos of
       | them) is even possible and so bad.
       | 
       | Externalizing hosting costs to other parties is very attractive
       | but if you are truly open source you can tell everybody to build
       | the packages themselves from source and provide a script (or in
       | this case a large Dockerfile) for that. No hosting of binary
       | images necessary for small projects.
       | 
       | Especially since a lot of open source projects are used not by
       | other OSS but by large organizations, I don't see the need to
       | burden others with the costs incurred for these businesses.
       | Spinning this into "Docker hates Open Source" is absolutely
       | missing the point.
       | 
       | Linux distributions figured out decades ago that universities are
       | willing to help out with decentralized distribution of their
       | binaries. Why shouldn't this work for other essential OSS as
       | well?
        
       | PedroBatista wrote:
       | The truth is Docker (the company) could never capitalize on
       | the success of their software. They clearly need the money,
       | and I have the impression things have not been "great" in the
       | last couple of years (regardless of the reasons).
       | 
       | The truth is also that most people/organizations never paid a
       | dime for the software _or_ the service, and I'm talking about
       | billion-dollar organizations that paid ridiculous amounts of
       | money for both "DevOps Managers" and consultants, while the
       | actual images they pull come either from "some dude" or from
       | some open source org.
       | 
       | I get that there will be many "innocent victims" of the
       | circumstances but most people who are crying now are the same
       | ones who previously only took, never gave and are panicking
       | because as Warren Buffett says: "Only when the tide goes out do
       | you discover who's been swimming naked."
       | 
       | And there are a lot of engineering managers and organizations who
       | like to brag with expressions like "Software supply chains" and
       | we'll find out who has been swimming with their willy out.
        
         | foxandmouse wrote:
          | I think it's also a product of the larger economic
          | environment. The old model of "grow now, profit later" seems
          | to be hitting a wall, leaving companies scrambling to find
          | profit streams in their existing customer base, not
          | realizing that doing so will hinder their growth projections
          | and lead to more scrambling for profit.
         | 
         | It's a vicious cycle, but when you don't grow in a sustainable
         | way it seems unavoidable.
        
       | vbezhenar wrote:
       | I think that people should switch to GitHub container registry
       | for free images. I don't like this kind of centralisation, but
       | at least one can expect Microsoft to have enough money to
       | provide the service for free with no strings attached, just as
       | they do with Git hosting.
        
       | FlyingSnake wrote:
       | > Start publishing images to GitHub
       | 
       | And when GitHub starts similar shenanigans, where do we move
       | to? I am old enough to know that we can't trust BigTech and
       | their unpredictable behaviour.
       | 
       | Eventually we need to start a Codeberg like alternative using
       | Prototype funds to be self reliant.
       | 
       | 1: https://codeberg.org/ 2: https://prototypefund.de/
        
         | JeremyNT wrote:
         | It actually seems pretty reasonable to let BigTech host stuff,
         | so long as you know the rug pull is going to come. Let the VCs
         | light money on fire hosting the stuff we use for free, then
         | once they stop throwing money at it figure out a plan B. Of
         | course you should have a sketch of your plan B ready from the
         | start so you are prepared.
         | 
         | If you view all of this "free" VC subsidized stuff as
         | temporary/ephemeral you can still have a healthy relationship
         | with it.
        
           | 5e92cb50239222b wrote:
           | This is how I've been living for many years and it has saved
           | me many thousands of dollars, which is a significant amount
           | of money here. The various "cloud" free tiers cost them at
           | least $600 for the past year alone. Same for free CI
           | offerings, etc. Thank you VCs and BigCo for not cutting out
           | regions that are probably net negative for you overall (I
           | guess it may be serious money for me, but doesn't even
           | register on the radar at their scale).
        
         | syklep wrote:
          | Codeberg is stricter about blocking projects at the moment.
          | Wikiless is blocked by Codeberg for using the Wikipedia
          | puzzle logo, but it is still up and unchanged on GitHub.
        
         | user3939382 wrote:
         | It should use DHT/BitTorrent. Organizations could share magnet
         | links to the official images. OS projects have been doing it
         | for years with ISOs.
        
           | screamingninja wrote:
           | BitTorrent will solve the distribution problem but not
           | magically provide more storage. Someone still has to foot the
           | bill for storing gigabytes (or terabytes) worth of docker
           | images.
        
             | chaxor wrote:
             | This doesn't make sense as an argument at all. If there
             | isn't anyone using the image, no one will have it on their
             | computer... Sure - but that isn't as much of an issue if
             | you have a build file that constructs the image up from
             | more basic parts. Secondly, the popular files get way _way_
             | faster with the more their used /downloaded. Torrent is a
             | _phenomenal_ *wonderful* system to distribute Machine
             | Learning weights, docker images, and databases. It 's a
             | developers dream for a basic utility of distributing data.
             | Potentially ipfs could be useful too, but idk much about it
             | specifically.
             | 
             | One of the most revolutionary and fundamental tools to be
             | made is a basic way / template / paradigm which constructs
             | databases in a replicable way, such that the hash of the
             | code is mapped to the hash of the data. Then the user could
             | either just download the data or reproduce it locally,
             | depending on their system's capabilities, and automatically
             | become a host for that data in the network.
        
               | [deleted]
        
               | [deleted]
        
             | aembleton wrote:
             | Have a free client that seeds any images that you download,
             | and a paid for one that doesn't. Now you have all those who
             | don't want to pay providing your storage and bandwidth.
        
               | r3trohack3r wrote:
               | This is an excellent use of p2p incentives. Share to pay.
               | 
               | Tricky bit is, for some users, you'll either abandon them
               | with no way to share or you will still be paying their
               | ingress/egress fees when their client falls back to your
               | TURN server if NAT hole punching fails.
               | 
               | You'll also have to solve image expiration gracefully.
               | Hosting a "publish as much as you want" append-only
               | ledger isn't going to scale infinitely. There needs to be
               | garbage collection, rate limiting, fair-use policies,
               | moderation, etc. Otherwise you're still going to outstrip
               | your storage pool.
        
             | MayeulC wrote:
             | It should probably work like ipfs, with pinning services.
             | You can pin (provide a server that stores and shares the
             | contents) yourself, or pay for a commercial pinning service
             | (or get one from an OSS-friendly org, etc).
        
           | Szpadel wrote:
            | I think something like IPFS would be perfect for this: you
            | already have some layers pulled into your storage anyway.
            | 
            | Big projects could self-host easily, as their popularity
            | would quickly give them enough seeds that they wouldn't
            | need to provide much traffic themselves.
           | 
            | Also, I think Docker's way of storing layers as tars is
            | fundamentally broken; maybe in combination with something
            | like ostree as a storage backend to decrease duplicates,
            | we could really cut a lot of storage.
            | 
            | Imagine how much unique content your average docker image
            | actually has: one binary and maybe a few text files? The
            | rest is probably the OS and deps anyway.
        
         | robotburrito wrote:
         | Maybe we all start hosting this stuff via torrent or something?
        
         | nindalf wrote:
         | > And when GitHub starts similar shenanigans
         | 
         | The difference between GitHub and Docker is that GitHub is
         | profitable.
        
           | vhanda wrote:
            | Genuine question: Is GitHub profitable? I can't seem to
            | figure out if it was, either before or after the
            | acquisition by Microsoft.
        
           | FlyingSnake wrote:
            | So is Docker Inc. The last I heard, it is profitable and
            | doing quite well.
        
           | whydoyoucare wrote:
           | Profitable today. Moving from docker to github is just
           | kicking the can down the road.
        
         | maxloh wrote:
          | I don't think we will receive enough donations to cover
          | infrastructure costs, let alone maintainers' salaries.
          | 
          | Even core-js's sole maintainer failed to raise enough
          | donations to feed his own family, despite the library being
          | used by at least half of the top 1000 Alexa websites. [0]
         | 
         | People (and also big-techs) just won't pay for anything they
         | can get for free.
         | 
         | [0]: https://github.com/zloirock/core-
         | js/blob/master/docs/2023-02...
        
           | BlueTemplar wrote:
            | I guess the SQLite team managed to do it by using an even
            | more permissive license than the GPL, which attracted big
            | companies into funding them?
        
         | davedx wrote:
         | It actually sounds reasonable to me? They have an open source
         | program, the article says its open source definition is "too
         | strict" because it says you must have "no pathway to
         | commercialization".
         | 
         | I mean why should you expect someone to host gigabytes of
         | docker images for you, for free?
        
           | BlueTemplar wrote:
           | Somewhat related : what is Docker's stance on the licenses
           | that fail the first Open Souce test, those that forbid
           | commercial use (NC) ?
        
           | tommoor wrote:
           | While I have no _expectations_ of free hosting, one example
           | of a project that will be affected is mine -
           | https://hub.docker.com/repository/docker/outlinewiki/outline
           | 
            | I have been building this for 5+ years, and offer a
            | community edition for free while the hosted version is
            | paid. Once the community edition starts costing money,
            | there will be even less reason to continue supporting it;
            | it already causes a lot of extra work and problems that
            | I'm otherwise uncompensated for.
        
             | grumple wrote:
             | > Once the community edition starts costing money there
             | will be even less reason to continue supporting it
             | 
             | This is exactly the reasoning Docker is using, so it seems
             | reasonable?
        
           | rollcat wrote:
           | > the article says its open source definition is "too strict"
           | because it says you must have "no pathway to
           | commercialization"
           | 
           | What a load of crap. Free Software's "0th freedom" is the
           | ability to use the program for whatever purpose you wish. The
           | definition of Open Source is even looser than that. They are
           | asking their "Open Source" users to make their software non-
           | free, by restricting its use cases.
           | 
           | Anyway, the writing has been on the wall for a long while. If
           | you haven't moved off Docker Hub yet, now is the time.
        
           | gwd wrote:
            | Gitlab's Open Source program has similar restrictions, and
            | it's just kind of weird. Like, there are _multiple_
            | companies _actually making money_ off of Xen; but because
            | Xen is owned by a non-profit foundation (with a six-digit
            | yearly budget), and the _foundation_ isn't trying to
            | profit, it still qualifies. (As does, for instance, the
            | GNOME project.)
            | 
            | OTOH, somewhere else in this context it was mentioned that
            | curl is almost entirely maintained by one guy who makes
            | money from consulting; and because of that, he _wouldn't_
            | qualify.
           | 
           | So if you're either small enough to be a side hobby project,
           | or large enough to have your own non-profit, you can get it
           | for free; anywhere in between and you have to pay.
           | 
           | Personally I'd be happy for Xen to pay for Gitlab Ultimate,
           | except that the price model doesn't really match an open-
           | source project: we can't tell exactly how many people are
           | going to show up and contribute, so how can we pay per-user?
        
           | JonChesterfield wrote:
           | Well, it's how they established themselves in the market.
           | Without being friendly to open source projects they wouldn't
           | have had that marketing and wouldn't exist as a company.
           | 
           | So now they destroy their foundations and learn whether they
           | 10x or fold. Pretty standard VC playbook so I assume that's
           | the driving force here.
        
           | lmm wrote:
           | If you're going to call it "open source" that should mean
           | what "open source" usually means, i.e. that e.g. RedHat is
           | eligible.
        
         | JonChesterfield wrote:
         | We need to use a distributed system instead of a centralised
         | one. Probably built on a source control system that can handle
         | that.
        
           | mac-chaffee wrote:
           | May want to keep your eye on dragonfly, a P2P image
           | distribution protocol: https://d7y.io/docs/
        
           | dijit wrote:
           | What happened to the old mirror lists? The ones where apt/rpm
           | package repositories tend to be hosted?
        
             | aembleton wrote:
             | https://www.debian.org/mirror/list
        
           | gooob wrote:
           | a collection of distributed systems. start your own. connect
           | with other enthusiasts in your area to get them connected.
        
           | RobotToaster wrote:
           | This seems like a perfect use case for IPFS.
        
             | r3trohack3r wrote:
             | We had a prototype Docker/BuildKit registry using IPFS at
             | Netflix built by Edgar.
             | 
             | https://github.com/hinshun/ipcs
        
         | [deleted]
        
         | delfinom wrote:
          | Yea, people are really spoiled by more than a decade of VC
          | and general cash-burning investment offering tons of
          | services for free. But at the end of the day there are costs
          | and companies will want to recoup their money.
         | 
         | The problem with just replacing GitHub isn't the source code
         | hosting part. There's tons of alternatives both commercial and
         | open source. The problem is the cost of CI infrastructure and
         | CDN/content/release hosting.
         | 
          | Even moderating said CI infrastructure is a nightmare.
          | freedesktop.org, which uses a self-hosted gitlab instance,
          | recently had to shut down CI for everything but official
          | projects because crypto-mining bots attacked hard and fast
          | over the last few days.
        
         | r3trohack3r wrote:
         | The economics of hosting an image registry are tough. Just
         | mirroring the npm registry can cost $100s per month in storage
         | for tiny little tarballs.
         | 
         | Hosting GB images in an append-only registry, some of which get
         | published weekly or even daily, will burn an incredible amount
         | of money in storage costs. And that's before talking about
         | ingress and egress.
         | 
         | There will also be a tonne of engineering costs for managing
         | it, especially if you want to explore compression to push down
         | storage costs. A lot of image layers share a lot of files, if
         | you can store the decompressed tarballs in a chunk store with
         | clever chunking you can probably reduce storage costs by an
         | order of magnitude.
         | 
         | But, at the end of the day, expect costs for this to shoot into
         | the 6-7 digit USD range per month in storage and bandwidth as a
         | lower bound for your community hosted image registry.
        
           | ijaeifjzdi wrote:
            | You just have to host the recipe and the hash/metadata.
            | 
            | C'mon, this is not amateur hour. Hosting the whole thing
            | only made sense for Docker because their plan was always
            | to do this Microsoft-style play.
            | 
            | If you assume users are either open source or fully closed
            | enterprises, the problem is very, very easy to solve, and
            | cheap. You just relinquish the ability to close all the
            | doors for a fee, like they are doing now.
        
       | Groxx wrote:
       | > _Has Docker forgotten? Remember leftpad?_
       | 
       | Anyone who takes even a brief glance at the absurdly yolo
       | identity, upgrade, and permissions model Docker encourages should
       | be able to answer this with an immediate "obviously they don't
       | care".
       | 
       | The faster this implodes, the faster we get a safer setup where
       | we don't blindly trust everything.
        
       | preciousoo wrote:
       | Side note: what did happen to Travis? I was just googling them
       | yesterday because they were everywhere. They even came with the
       | GitHub education package.
       | 
       | Did GitHub just eat them?
        
         | qbasic_forever wrote:
         | I suspect GitHub actions put a massive dent in their product
         | usage. I seem to remember they started to cut costs and
         | restrict free usage some years back too, and that was the
         | beginning of the end.
        
         | tankerkiller wrote:
          | Based on some of my research, it seems they've completely
          | exited the free side of the business. All of their plans are
          | now paid, and the cheapest is $64/year.
        
         | jacobwg wrote:
         | Travis CI got acquired by Idera in 2019
         | (https://news.ycombinator.com/item?id=18978251) then a month
          | later laid off senior engineering staff
         | (https://news.ycombinator.com/item?id=19218036).
        
           | preciousoo wrote:
           | Damn. They were my introduction to CI/CD. Such is life
        
         | Xylakant wrote:
         | Travis was acquired a few years ago and things went downhill
         | from there on.
        
         | VWWHFSfQ wrote:
          | It wasn't a good business to be in anyway. I don't think any
          | of these freebie devops businesses are all that smart.
          | They're not a "business"; they're a feature of someone
          | else's business. And as soon as that business catches up,
          | you're done.
        
           | acdha wrote:
            | Also, they're surprisingly expensive: things like spam and
            | cryptocurrency mining mean that you need a fairly large
            | abuse-prevention team, which is expensive but has no
            | customer-visible benefits. GitHub has that too, but as you
            | said, they at least have the rest of the business with
            | which to recoup that cost.
        
       | boredumb wrote:
       | Docker had the ability to be baked into nearly every enterprise
       | tech stack and extract money accordingly, instead they take time
       | every 2 years to torment users. They will end up going down as
       | one of the biggest missed plays in modern software.
        
       | nerdjon wrote:
       | I am missing something and I can't find a concrete explanation
       | anywhere.
       | 
       | What exactly does this mean as someone who pulls images but
       | doesn't push to docker hub?
       | 
       | Within a month or so, are we going to start getting failures
       | when pulling images, or will docker hub simply stop being
       | updated, leaving us needing to pull from somewhere else?
        
         | Riverheart wrote:
          | It means the images you depend on may cease to exist,
          | breaking your builds, and at worst they'll be replaced by
          | bad actors registering the freed namespace so that automated
          | CI builds and unsuspecting users pull their containers
          | instead.
         | 
          | So whether these open-source orgs pay up will determine
          | whether you keep using Docker Hub or follow them to whatever
          | registry they migrate to.
        
       | raesene9 wrote:
       | Image hosting at scale is an expensive business and Docker are
       | not the only ones trying to manage the costs.
       | 
       | Kubernetes is currently in the process of changing the main
       | repository for their images because, as I understand it,
       | they're burning through their free GCP credits at an
       | unsustainable rate.
       | 
       | Ideally Docker Hub would be an industry-funded effort, but that
       | would require co-operation and funding from the major tech
       | players, and I have a feeling that in an era of cost cutting,
       | that might be harder to achieve than it was in the past.
        
       | synthc wrote:
       | Another reminder that VC-funded hosting does not last forever.
       | Self-host your mission critical dependencies folks!
        
       | adolph wrote:
       | _Squatting and the effects of malware and poison images is my
       | primary concern here._
       | 
       | One of the things the docker api has going for it is that it is
       | hash-based. Aside from the first pull, it doesn't seem
       | far-fetched for a docker api client to refuse or warn based on
       | comparing the new download's hash to the previous hash.
        
         | mac-chaffee wrote:
         | Not a lot of people pull by hash; they pull by tag. Tags are
         | not immutable, so the image I get from "python:3.11" today will
         | almost certainly change due to security updates and I will be
         | none the wiser.
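Pinning by digest instead of by tag sidesteps the mutability problem. A sketch of the CLI side (the sha256 value below is a placeholder, not a real digest); this is a configuration/CLI fragment that requires a running Docker daemon:

```shell
# Mutable tag: the content this resolves to can change between pulls.
docker pull python:3.11

# Immutable digest reference: always resolves to the same bytes.
# (placeholder digest, not a real one)
docker pull python@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

# Show the digest a locally pulled tag currently resolves to,
# so it can be copied into a Dockerfile or compose file:
docker inspect --format '{{index .RepoDigests 0}}' python:3.11
```

The digest keeps working even if the tag is later re-pointed, though it obviously does not protect against the image being deleted outright.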
        
           | adolph wrote:
           | I can see that. A human specifiable name is important.
           | 
            | My proposal is that each time an image is pulled, the hash
            | is recorded and retained even if the underlying container
            | image is removed. When the same image is pulled again, if
            | the content no longer matches the previous hash, either
            | fail or warn the user.
           | 
           | I can see how pinning to a specific patch version is not a
           | great idea and that "python:3.11" keeps people from pinning
           | to an insecure version.
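A minimal sketch of that record-and-compare idea, as a trust-on-first-use check over a flat tag-to-digest lockfile. The function name and file format are made up for illustration; this is not an existing Docker feature:

```shell
# Sketch of the "record and compare" idea: a flat tag→digest lockfile,
# trust-on-first-use, much like SSH's known_hosts.
# Usage: check_digest IMAGE_REF NEW_DIGEST LOCKFILE
check_digest() {
  ref="$1"; new="$2"; lock="$3"
  [ -f "$lock" ] || : > "$lock"
  # Look up the digest recorded for this reference on a previous pull.
  prev=$(awk -v r="$ref" '$1 == r {print $2}' "$lock")
  if [ -z "$prev" ]; then
    status=first-pull            # first pull: record and trust
  elif [ "$prev" = "$new" ]; then
    status=unchanged             # tag still points at the same content
  else
    status=changed               # tag was re-pointed: warn or fail
  fi
  # Rewrite the lockfile with the latest digest for this reference.
  { awk -v r="$ref" '$1 != r' "$lock"; echo "$ref $new"; } > "$lock.tmp"
  mv "$lock.tmp" "$lock"
  echo "$status"
}
```

A real client would take the digest from the registry's `Docker-Content-Digest` response header and decide per policy whether "changed" is a hard failure (pinning) or, as with SSH host keys, just a loud warning.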
        
       | dbingham wrote:
       | As an SRE Manager, this is causing me a hell of a headache this
       | morning.
       | 
       | In 30 days a bunch of images we depend on may just disappear. We
       | mostly depend on images from relatively large organizations
       | (`alpine`, `node`, `golang`, etc), so one would want to believe
       | that we'll be fine - they're all either in the open source
       | program or will pay. But I can't hang my hat on that. If those
       | images disappear, we lose the ability to release and that's not
       | acceptable.
       | 
       | There's no way for us to see which organizations have paid and
       | which haven't. Which are members of the open source program and
       | which aren't. I can't even tell which images are likely at risk.
       | 
       | The best I can come up with, at the moment, is waiting for each
       | organization to make some sort of announcement with one of "We've
       | paid, don't worry", "We're migrating, here's where", or "We've
       | applied to the open source program". And if organizations don't
       | do that... I mean, 30 days isn't enough time to find alternatives
       | and migrate.
       | 
       | So we're just left basically hoping that nothing blows up in 30
       | days.
       | 
       | And companies that do that to me give me a _very_ strong
       | incentive to never use their products and tools if I can avoid
       | it.
        
         | yenda wrote:
          | Sounds like you could save yourself some time and budget by
          | offering to pay for those images you are using?
        
         | jayp1418 wrote:
          | That's why it's better to have a NetBSD + pkgsrc combo for
          | servers.
        
           | drdaeman wrote:
           | You misspelled Nixpkgs ;-)
           | 
            | I'm kidding, of course, but IIRC pkgsrc (and its kin, such
            | as APT) has a number of limitations, for example a very
            | limited ability to have multiple versions of the same
            | package installed, making it a less-than-optimal
            | replacement.
            | 
            | (I believe a lot of people depend on the ability to spin
            | up a new version while the old is running, then do the
            | cutover and shut down the old one after it's no longer in
            | use.)
        
             | anthk wrote:
              | And Guix, too. Especially Guix.
        
         | owaislone wrote:
         | Good time/opportunity to get your team/company to invest in a
         | registry+proxy to host all images you depend on.
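One off-the-shelf way to do that is the open source CNCF Distribution registry (`registry:2`) run as a pull-through cache. This is an infrastructure/configuration fragment; the host name and storage path below are examples:

```shell
# Run a registry that proxies and caches Docker Hub. Layers pulled
# through it stay in the local cache even if upstream later disappears
# (until the cache's own garbage collection removes them).
docker run -d --name hub-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v /srv/registry:/var/lib/registry \
  registry:2

# Point each Docker daemon at the mirror in /etc/docker/daemon.json:
#   { "registry-mirrors": ["http://hub-mirror.internal:5000"] }
# then restart dockerd; a plain `docker pull alpine` now goes through it.
```

This keeps Dockerfiles unchanged while insulating builds from upstream deletions and rate limits.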
        
         | TimWolla wrote:
         | The images you mention (alpine, node, golang) are all so-called
         | "Docker Official Images". Those are all the ones _without a
         | slash_ as the namespace separator in them:
         | https://hub.docker.com/search?q=&type=image&image_filter=off...
         | 
         | They are versioned and reviewed here:
         | https://github.com/docker-library/official-images
         | 
         | I don't expect them to go away.
         | 
         | Disclosure: I maintain two of them (spiped, adminer).
        
           | legohead wrote:
           | "don't expect" or "for certain"? Can't really plan ahead
           | without some kind of certainty.
        
             | Strom wrote:
             | There is no real distinction between those two phrases
             | here, because the person using those phrases isn't
             | ultimately in control.
        
               | stingraycharles wrote:
               | One could argue that only the person who is in control
               | could say "for certain", and as such, that is the
               | implicit differentiator between those two phrases.
        
               | zamnos wrote:
               | I mean yeah okay fine the phrase is then "according to
               | your understanding of the rules set forth by Docker, as
               | of today's edit of the linked PDF (2023-03-15), and in
               | accordance with the current (2023-03-15) configuration of
               | the three images, `alpine`, `node`, and `golang`; are
               | those three images covered by the open source program and
               | will continue to be accessible or will those images cease
               | to be accessible by non-paying members of the general
               | public in thirty (30) days?"
               | 
               | It's just that I'd thought we'd moved past the need for
               | that level of pedantry here, but apparently not.
        
               | Uvix wrote:
               | That's probably a good thing, because that makes clear
               | you missed the point about the Docker Official Images
               | program. Docker's support for open-source organizations
               | has nothing to do with the Official Images program; they
               | are generated by Docker themselves, rather than being
               | generated by an open source project and merely hosted by
               | Docker.
        
               | [deleted]
        
               | stingraycharles wrote:
               | I may sound pedantic, but in all honesty, Docker has been
               | quite hostile over the past few years in terms of
               | monetizing / saving costs, so nothing would surprise me
               | at this point. I would definitely not feel comfortable
               | saying "for certain". Phrased differently, if the person
               | who is in control says "for certain", vs. some random HN
               | user, I would attach a lot more value to the statement
                | made by the person in control.
        
               | rcme wrote:
                | Even if they were fully in control, there still would
                | not be a distinction, because whoever is controlling
                | this decision could change their mind at a later date.
        
               | chordalkeyboard wrote:
               | the difference is that the person in control would be
               | attesting to their state of mind, versus the person out
               | of control attesting to their understanding of the
               | relevant circumstances.
        
               | hosh wrote:
               | My analysis of this:
               | 
               | After Kubernetes became the de-facto container
               | orchestration platform, Docker sold a bunch of their
               | business to Mirantis. They shifted their marketing and
               | positioning from enterprise to developers. From public
               | sources, it sounds like their strategy is doing pretty
               | well.
               | 
               | The question then is, does Docker look like they are
               | committed to open-source and the open-source ecosystem?
               | 
                | 1. You would think that a developer-focused strategy
                | would involve open source, and that moves which
                | alienate the open-source world would reduce their
                | influence and branding and narrow their funnel. (But
                | maybe not. Are the people paying for Docker Desktop
                | also big open-source users and advocates?)
               | 
               | 2. It sounds like Docker has full-time internal teams
               | that maintain the official Docker images and accept PRs
               | from upstream.
               | 
               | 3. Docker rate-limited metadata access for public
               | repositories. Is that a signal for weakening support for
               | open-source?
               | 
               | 4. According to the article, the Docker Open Source
               | program is out-of-touch ...
               | 
               | 5. ... But they may still be paying attention to the big
               | foundations like CNCF and Apache. So the images people
               | depend upon for those may not be going away anytime soon
               | 
               | So I would look for other signals for diminishing
               | commitment to open-source:
               | 
               | - If several of the larger projects pull out of hosting
               | on Docker Hub
               | 
               | - If the internal Docker teams are getting let go
               | 
               | - If the rate at which PRs are accepted for the official
               | images is reduced
               | 
               | - If the official images are getting increasingly out of
               | sync with upstream
               | 
               | - Any other signals along these lines
        
               | TimWolla wrote:
               | Indeed. While I do maintain two of them, that maintenance
               | is effectively equivalent to being an open source
               | maintainer or open source contributor. I do not have any
               | non-public knowledge about the Docker Official Images
               | program. My interaction with the Docker Official Images
               | program can be summed up as "my PRs to docker-
               | library/official-images" (https://github.com/docker-
               | library/official-images/pulls/TimW...) and the #docker-
               | library IRC channel on Libera.Chat.
        
             | Wowfunhappy wrote:
             | Unless you're hosting the infrastructure yourself, you
             | can't ever be certain. No one can know for sure what Docker
             | will decide to do in the future. The entire company could
             | shut down tomorrow.
             | 
             | But it seems to me that Docker official images are no more
             | at risk of deletion today than they were a week ago.
        
             | LeifCarrotson wrote:
             | > Can't really plan ahead without some kind of certainty.
             | 
             | You can only plan ahead with uncertainty, because that's
             | the only way that humans interact with time. Nothing is
             | 100%. Even if you paid enterprise rates for the privilege
             | to run a local instance, and ran that on a physical server
             | on your site, and had backup hardware in case the
             | production hardware failed...the stars might be misaligned
             | and you might fail your build. You can only estimate
             | probabilities, and you must therefore include that
             | confidence level in your plans.
             | 
             | Sure, depending on free third-party sources is much more
             | risky than any of that, but no one knows the future (at
             | least for now, and ignoring some unreliable claims of some
             | mystics to the contrary, though I estimate with very high
             | confidence that those claims are false and that this state
             | of affairs is unlikely to change in the next 5 years).
        
           | karamanolev wrote:
           | Useful information, bad look for Docker - "Oh, no slash as
           | the namespace separator. Good and easy way to tell, that's
           | how I would've done it!".
        
             | cshimmin wrote:
             | I mean, it's not a terrible convention. On the website they
             | have a badge ("docker official image"), but devs aren't
             | usually looking at the website, they're looking at their
             | Dockerfile in vim or whatever. This is a straightforward
             | way to communicate that semantically through namespacing.
             | 
             | Still, shame on docker for the rug-pull.
        
               | zamnos wrote:
               | It's better than nothing, but explicit beats implicit. If
               | it were namespaced like _PULL
               | docker.org/official/alpine:latest_ that would be better,
               | imo.
        
         | KingLancelot wrote:
         | [dead]
        
         | ownagefool wrote:
         | > I mean, 30 days isn't enough time to find alternatives and
         | migrate.
         | 
         | Write a script to iterate the images and push them to your own
         | registry. This will buy you time in the event anything does
         | disappear.
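         | 
         | A sketch of what such a script might look like (the registry
         | hostname and image list are placeholders for whatever your org
         | actually uses):

```shell
#!/bin/sh
# Sketch: mirror a list of Hub images into your own registry.
# REGISTRY and the image list below are placeholders.
REGISTRY="${REGISTRY:-registry.example.com}"
DRY_RUN="${DRY_RUN:-1}"   # prints the commands by default; set to 0 to run them

mirror_image() {
    src="$1"
    dst="$REGISTRY/$src"
    if [ "$DRY_RUN" = "1" ]; then
        printf 'docker pull %s\ndocker tag %s %s\ndocker push %s\n' \
            "$src" "$src" "$dst" "$dst"
    else
        docker pull "$src" && docker tag "$src" "$dst" && docker push "$dst"
    fi
}

# Illustrative image list; iterate whatever your deployments reference.
for img in alpine:3.17 postgres:15 redis:7; do
    mirror_image "$img"
done
```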
        
         | phpisthebest wrote:
         | Any organization that has the means to pay, should pay for
         | another service that is not openly hostile to users...
        
         | SergeAx wrote:
         | Our organization currently caches each and every external
         | dependency we use: Go, Python, npm, and .NET packages, Docker
         | images, Linux deb packages, so everything is contained inside
         | our perimeter. We did that after our self-hosted GitLab runners
         | were one day throttled and then rate-limited by some package
         | repository and all CI pipelines halted.
        
         | salawat wrote:
         | Time for you to locally clone the dockerfiles you're reliant
         | on, build up your own in house repository, and then do what has
         | been done since time immemorial.
         | 
         | Mirror the important shit. No excuses, just do. Yes, it's work.
         | I guarantee though, you'll be less exposed to externally
         | created drama.
         | 
         | Making sure your org stays up to date though, that's on you.
        
         | eecc wrote:
         | > If those images disappear, we lose the ability to release and
         | that's not acceptable
         | 
         | It's your responsibility to ensure your own business
         | continuity. You should review how your build pipeline depends
         | on resources outside of your org perimeter, and deploy a
         | private registry under your own control.
         | 
         | btw, you could also contribute some mirroring bandwidth to the
         | community. You must've heard that the cloud is just someone
         | else's computer.
        
         | mc4ndr3 wrote:
         | That's a fair point, and when someone with a working brain
         | mentions the fallout throughout the Internet that would result,
         | I expect Docker Inc. will reverse course and embark on a PR
         | campaign pretending it was all a mere tawdry joke.
        
         | aprdm wrote:
         | You can vendor images. Never have your product depend on
         | something that is out on the internet. Spin up Harbor locally
         | and put it in the middle to cache, at the very least.
        
           | mc4ndr3 wrote:
           | Imagine if everyone actually did this. Then we would have a
           | myriad of base images hiding even more malware than we do
           | currently.
           | 
           | Not to mention vertically integrating the entire Docker layer
           | set defeats the whole point of using Docker in the first
           | place.
        
             | tehbeard wrote:
             | That's.... I don't know how you even arrived at the idea
             | that that's what would happen. Are you imagining some
             | kludged-together perl script that hackily saves the
             | tarballs, written by someone who is then immediately let
             | go?
             | 
             | What they're suggesting is basically setting up a cache for
             | it locally in-between them and the "main repo" and ensuring
             | the cache doesn't delete after x days and/or keep backups
             | of the images they depend on.
             | 
             | If the package disappears, or the main repo falls over (
             | _cough_ github, _cough_ ), your devs, CI & prod aren't sat
             | twiddling thumbs unable to work...
             | 
             | and if the package is nuked off the planet? You've got some
             | time then to find an alternate / see where they move to.
        
             | chaxor wrote:
             | What are you talking about? Malware and spyware are just as
             | likely (if not _very much *more* likely_ - depending on the
             | definition of malware or spyware*) to be in corporate
             | sponsored software than it is in foss software, and that
             | idea extends to software distribution.
             | 
             | I would expect the security and quality of images in a
             | decentralized system to be far superior to any centralized
             | system spun up by some for profit entity.
             | 
             | * malware and spyware could be defined here as software
             | that allows remote keylogging, camera activation,
             | installation of any executables, etc - i.e. root access -
             | which is precisely what most corporate entities make
             | software to do (e.g. "security solutions" that you have to
             | install on your work computers). This is also most web
             | services which are 90% tracking with an occasional desired
             | application or feature these days.
        
             | twblalock wrote:
             | I've never worked somewhere that didn't have an internal
             | Artifactory with copies of everything.
             | 
             | Not doing that is unusual, and actually less secure. Do you
             | think it's sane or secure for all of your builds to depend
             | on downloading packages from the public internet?
        
             | wlesieutre wrote:
             | They're internal mirrors of public images, if there's
             | something in your infrastructure installing malware on them
             | you've got bigger problems
        
             | aprdm wrote:
             | No, you're wrong. Everyone who wants to stay in business
             | and make money _actually_ does it. That has been my
             | experience at all the big companies; it's a business
             | continuity problem _not to do it_. You can and should run
             | security scans on the vendored images.
        
         | [deleted]
        
         | cpitman wrote:
         | Many of the responses here are talking about how to
         | vendor/cache images instead of depending on an online registry,
         | but remember that you also need access to a supply chain for
         | these images. Base images will continue to be patched/updated,
         | and you need those to keep your own images up to date. Unless
         | the suggestion is to build all images, from the bottom up, from
         | scratch.
        
           | sangnoir wrote:
           | It's a stop-gap measure. There are dozens of companies
           | chomping at the bit to replace Docker as THE docker registry:
           | I'd bet someone at Github is _very_ busy at this very moment.
        
             | hosh wrote:
             | The article talked about using the Github Container
             | Registry, which was launched in 2020.
        
               | dividedbyzero wrote:
               | Those very busy people at Github may well be in marketing
        
           | web3-is-a-scam wrote:
           | Typically when you "cache" something, you're gonna expire it
           | at some point... no? If the image is patched, it eventually
           | gets refreshed in the mirror. If the image _disappears_, at
           | least we still have it until we figure out where the heck it
           | went.
        
         | friendzis wrote:
         | > If those images disappear, we lose the ability to release and
         | that's not acceptable.
         | 
         | left-pad moment once again.
         | 
         | > I mean, 30 days isn't enough time to find alternatives and
         | migrate.
         | 
         | Maybe take control of mission critical dependencies and self-
         | host?
        
           | Woodi wrote:
           | > Maybe take control of mission critical dependencies and
           | self-host?
           | 
           | Last few years prove that this option is a no-go - they just
           | don't do such things ! Independence ? Self-sufficiency ?
           | Security ? Local, fast access ? Obviousness ? No payment
           | required ? Avoid at all costs !
        
             | dividedbyzero wrote:
             | > Last few years prove that this option is a no-go - they
             | just don't do such things !
             | 
             | Who are "they"?
        
               | ipaddr wrote:
               | Busy DevOps crews?
        
             | web3-is-a-scam wrote:
             | taking responsibility for our supply chain and the things
             | we depend on and use mostly for _free_? absolutely
             | preposterous, the business demands more features.
        
         | tyler33 wrote:
         | That's the bad thing about other people's computers: it could
         | happen to anybody. It's harder to run on your own machine, but
         | better long term.
        
         | jjav wrote:
         | > If those images disappear, we lose the ability to release and
         | that's not acceptable.
         | 
         | This shines light on why it is so risky (from both availability
         | and security perspectives) to be dependent on any third party
         | for the build pipeline of a product.
         | 
         | I have always insisted that all dependencies must be pulled
         | from a local source even if the ultimate origin is upstream. I
         | am continuously surprised how many groups simply rely on some
         | third party service (or a dozen of them) to be always and
         | perpetually available or their product build goes boom.
        
           | darkhelmet wrote:
           | Likewise. I've always insisted on building from in-house
           | copies of external dependencies for precisely this kind of
           | scenario. It astonishes me the number of people who didn't
           | get why. Having things like docker rate-limiting/shutdowns,
           | regular supply chain attacks, etc has been helping though.
           | 
           | Slightly related: actually knowing for sure that you've got a
           | handle on all of the external dependencies is sometimes
           | harder than it should be. Building in an environment with no
           | outbound network access turns up all sorts of terrible things
           | - far more often than it should. The kind that worry me are
           | supposedly self-contained packages that internally do a bunch
           | of "curl | sudo bash" type processing in their pre/post-
           | install scripts. Those are good to know about before it is
           | too late.
        
             | jjav wrote:
             | > Building in an environment with no outbound network
             | access turns up all sorts of terrible things
             | 
             | Yes, highly recommended to build on such a system, it'll
             | shake out the roaches that lie hidden.
             | 
             | In a small startup environment, the very least to do is at
             | least keep a local repository of all external dependencies
             | and build off that, so that if a third party goes offline
             | or deletes what you needed you're still good.
             | 
             | For larger enterprises with more resources, best is to
             | build _everything_ from source code kept in local
             | repositories and do those builds, as you say, in machines
             | with no network connectivity. That way you are guaranteed
             | that every bit of code in your product can be (re)built
             | from source even far in the future.
        
           | SoftTalker wrote:
           | I run a local Ubuntu mirror for the work systems I manage,
           | for this reason.
        
           | flyinghamster wrote:
           | Be sure to archive your development tools as well, just in
           | case that rug gets pulled. You don't want to be in the
           | position that you need v3.1415927 of FooWare X++ because
           | version 4 dropped support for BazQuux(tm), only to find that
           | it's no longer downloadable at any price.
        
           | dbingham wrote:
           | We can't go NIH for everything. If we do that we're back to
           | baremetal in our own datacenters and that's expensive and
           | (comparatively) low velocity. We have to pick and choose our
           | dependencies and take the trade off of risk for velocity.
           | 
           | This is the tradeoff we made with the move to cloud. We run
           | our workloads on AWS, GCP or Azure, use DataDog or New Relic
           | for monitoring, use Github or GitLab for repos and pipelines,
           | and so forth. Each speeds us up but is a risk. We hope they
           | are relatively low risks and we work to ameliorate those
           | risks as we can.
           | 
           | An organization like Docker _should_ have been low risk.
           | Clearly, it 's not. So now it's a strong candidate for
           | replacement with a local solution rather than a vendor to
           | rely on.
        
             | tetrep wrote:
             | It's less NIH and more "cache your dependencies." Details
             | will vary greatly depending on what your tech stack looks
             | like; if you're lucky you can just drop in a cache. I know
             | Artifactory is a relatively general commercial solution
             | although I can't speak personally about it.
             | 
             | If you can't easily use an existing caching solution, then
             | the only NIH you need to do is copying files that your
             | build system downloads. I know many build systems are "just
             | a bunch of scripts" so those would probably be pretty
             | amenable to this, I don't know if more opaque systems exist
             | that wouldn't give you any access like that. If so, I
             | suppose you could try to just copy the disk the build
             | system writes everything to, but then you're getting into
             | pretty hacky stuff and that's not ideal. Copying the files
             | doesn't give you the nice UX of a cache, but it does mean
             | that in the worst-case scenario you at least have all the
             | the dependencies you've used in recent builds, so you'll be
             | able to keep building your things.
        
             | MikePlacid wrote:
             | > that's expensive and (comparatively) low velocity
             | 
             | The problem with this approach begins when many people your
             | build depends upon start to share it.
        
             | theamk wrote:
             | "Free service which requires $$ to maintain" and "low risk"
             | are not compatible.
             | 
             | We moved to cloud as well, and we use AWS ECR for caching.
             | We have a script for "docker login to ECR" and a list of
             | images to auto-mirror periodically. There is a bit of
             | friction when adding a new, never-seen-before image, but
             | in general this does not slow developers down much. And we
             | never hit any rate limits, either!
             | 
             | We pay for those ECR accesses, so I am pretty confident
             | they are not going to go away. Unlike free docker images.
        
         | agilob wrote:
         | The very first thing you should have is a mirroring Docker Hub
         | proxy. I'm surprised an SRE manager doesn't have one already -
         | why not?
        
         | [deleted]
        
         | xyst wrote:
         | if you haven't mirrored the docker images your application
         | needs to a private registry, then you are doing it wrong.
        
         | softfalcon wrote:
         | First of all, want to say, that sounds deeply frustrating.
         | 
         | Secondly, if this is a serious worry, I would recommend
         | creating your own private docker registry.
         | 
         | https://docs.docker.com/registry/deploying/
         | 
         | Then I would download all current versions of the images you
         | use within your org and push them up to said registry.
         | 
         | It's not a perfect solution, but you'll be able to pull the
         | images if they disappear and, considering this will take only
         | a few minutes to set up somewhere, it could be a life saver.
         | 
         | As well, I should note that most cloud providers also have a
         | container registry service you can use instead of this. We use
         | the google one to back up vital images to in case Docker Hub
         | were to have issues.
         | 
         | Is this a massive pain in the butt? Yup! But it sure beats
         | failed deploys! Good luck out there!
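         | 
         | For reference, the registry from that doc can be stood up as a
         | small compose service - a sketch, with the port and volume path
         | purely illustrative (add TLS/auth before real use):

```yaml
# Sketch: minimal self-hosted registry; port and volume path are
# illustrative, and TLS/auth are omitted for brevity.
version: "3"
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - ./registry-data:/var/lib/registry
```

Once it's up, `docker tag alpine:3.17 localhost:5000/alpine:3.17 && docker push localhost:5000/alpine:3.17` copies an image in.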
        
           | bravetraveler wrote:
           | As someone who maintains the registries we use globally at
           | work, +1.
           | 
           | I know people groan at running infrastructure, but the
           | registry software is really well documented and flexible.
           | 
           | If you don't need to 'push', but only pull - configuring them
           | as pull through caches is nice for availability and
           | reliability -- while also saving you from nickel-and-diming.
           | 
           | They will get things from a configurable upstream,
           | _proxy.remoteurl_.
           | 
           | Contrary to what the documentation says, this can work with
           | anything speaking the API. Not just Dockerhub.
           | 
           | edit: My one criticism, it's not good from an HTTPS hardening
           | perspective. It's functional, but audits find non-issues.
           | 
           | You'll want _nginx_ or something in front to ensure good HSTS
           | header coverage for non-actionable requests, for example.
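           | 
           | A pull-through cache in that style is only a few lines of the
           | registry's config.yml - a sketch, with the storage path
           | illustrative and TLS/auth omitted:

```yaml
# Sketch: registry:2 as a pull-through cache via proxy.remoteurl,
# as described above. Storage path is illustrative; TLS/auth omitted.
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io
```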
        
             | tetha wrote:
             | That's good to hear. So I'll just have to spend an hour or
             | so tomorrow night ensuring our private pull-through
             | registry is used on everything prod and the biggest
             | explosion is averted. Images built by the company land in
             | internal registries already, so that's fine as well.
             | 
             | Means, it's mostly a question of, (a) checking for image
             | squatting on the hub after orgs get deleted, which I don't
             | know how to deal with just yet (could I just null-route the
             | docker hub on my registry until evaluated and we just don't
             | get new images?), and (b) ruffling through all of our
             | container systems to see where people use what image to
             | figure out which are verified, or paying, or obsoleted, and
             | where they went, or what is going on. That'll be great fun.
        
               | bravetraveler wrote:
               | The typical Docker registry software when configured as a
               | 'pull through' doesn't allow for pushes, if memory
               | serves. That may be an important consideration while
               | handling the situation
               | 
               | We run them in 'maintenance mode' just to be absolutely
               | sure nothing the upstream doesn't have (or didn't have at
               | one point) is permitted in!
               | 
               | Though, I don't think they'll allow pushes anyway with _'
               | proxy.remoteurl'_ defined.
               | 
               | I'm not sure I followed your setup properly, but with the
               | private registry defined as your _' proxy.remoteurl'_,
               | you shouldn't have to worry about the Hub in particular -
               | unless it's looking there, or people are pushing bad
               | things into it
        
               | tetha wrote:
               | > I'm not sure I followed your setup properly, but with
               | the private registry defined as your 'proxy.remoteurl',
               | you shouldn't have to worry about the Hub in particular -
               | unless it's looking there, or people are pushing bad
               | things into it
               | 
               | That is exactly the thing I am worried about, as we have
               | a pull-through mirror for the docker hub.
               | 
               | What happens if some goofus container from that chaotic
               | team pulls in knownOSS/component, but knownOSS got
               | deleted and - after 30 days of available recon by _all_
               | malicious teams on the planet - got squatted instantly
               | afterwards with rather vile malware? Spend some pennies
               | to make a dollar by getting into a lot of systems.
               | 
               | Obviously, you can throw a million shoulds at me,
               | shouldn't do that, should rename + vendor and such
               | (though how would you validate the image you mirror?),
               | but that's a messy thing to deal with and I am wondering
               | about a centralized way to block it without needing
               | anyone but the registry/mirror admins.
        
               | bravetraveler wrote:
               | Ah I see!
               | 
               | I misunderstood, didn't realize that it's pointing to the
               | Hub. I assumed the more strict sense of 'private' :)
               | 
               | The Docker-provided registry software is limited in terms
               | of "don't go here". You get all of upstream, essentially
               | 
               | Quay or Harbor are more configurable in that regard, but
               | I'm less familiar.
               | 
               | We're privileged, being already very-offline and
               | signature heavy... and that's someone else. I just run
               | the systems/services!
        
             | WirelessGigabit wrote:
             | The problem here is that the company I work at has started
             | building these golden images, full of cruft, and then no
             | team gets allocated to maintain them.
        
             | Yeroc wrote:
             | All good points but while this saves you from the docker
             | images disappearing it does nothing to solve the issue of
             | those images no longer receiving important security updates
             | and bug fixes going forward.
        
               | bravetraveler wrote:
               | Indeed, buying time at most :)
               | 
               | The situation just presented an opportunity for
               | improvement, I don't intend to suggest it as a cure - but
               | a good step!
               | 
               | Edit: For anyone curious, _our_ upstream is actually the
               | same software somewhere else, utilized by CICD.
               | 
               | That being the origin allows for pushes, with the pull-
               | through caches being read-only by nature
        
           | the_jeremy wrote:
           | I would not recommend doing it through Docker, though,
           | especially after this change. We use AWS's ECR, and you can
           | set it to do pull-through caching of public images, so images
           | you've already used will stick around even if Docker blows
           | up, and you don't have to pull the images yourself, you just
           | point everything in your environment to ECR and ~~ECR will
           | pull from docker hub~~ (EDIT: it only supports quay.io, not
           | docker hub) and start building its cache as you use the
           | images.
        
             | wavesquid wrote:
             | ECR pull through caching is only possible for other ECR
             | repos or quay.io. You cannot use it for Docker Hub.
        
               | the_jeremy wrote:
               | ah, sorry. We only use it for quay, I didn't realize that
               | was out of necessity rather than targeting.
        
             | dbingham wrote:
             | Knowing ECR has pull through caching is really helpful. I'm
             | sure we would have come across that in the course of
             | investigating our response, but this definitely saved us
             | some time!
             | 
             | Edit: Damn, looks like ECR's pull through caching only
             | works for ECR Public and Quay? It's a little unclear, but
             | maybe not a drop in solution for Docker Hub replacement.
             | 
             | https://docs.aws.amazon.com/AmazonECR/latest/userguide/pull
             | -...
        
         | BlueTemplar wrote:
         | How viable is it to fork Docker Hub?
        
         | amouat wrote:
         | Just to be clear, the official images are definitely not at
         | risk, and I say that as a Docker captain.
         | 
         | Official images are hugely important to Docker, now and going
         | forward.
        
         | websap wrote:
         | You probably wanna move to AWS Public ECR Gallery. They have a
         | notion of official images.
         | 
         | AWS is in a better position to offer long term coverage.
        
         | maxyurk wrote:
         | Use JFrog Artifactory. If you're OK with self-hosting, there's
         | a free JFrog Container Registry edition.
        
         | ohgodplsno wrote:
         | Or you could, you know, host a Docker registry and reupload
         | those images to something you control. Worst case scenario, in
         | 30 days, nothing is gone from Docker and you can just spin it
         | down.
         | 
         | Your job as an SRE is not to look at things and go "oh well,
         | nothing we can do lol".
        
           | dbingham wrote:
           | Yes, that involves ripping out Docker Hub everywhere. It's a
           | significant chunk of work, not something easily fit into 30
           | days on a team that is already strapped for resources with
           | more work than we can do.
        
             | Faaak wrote:
             | Setting up harbor as a docker proxy-cache is actually quite
             | simple
        
               | djbusby wrote:
               | The link
               | https://goharbor.io/docs/2.1.0/administration/configure-
               | prox...
        
               | pram wrote:
               | [flagged]
        
             | mysterydip wrote:
             | I'm not familiar with how Docker works, so forgive the
             | ignorance. I thought the point of docker images was
             | portability? Is it not just taking the references and
             | pointing to a new instance under your control?
        
               | creshal wrote:
               | I'm not too familiar with docker myself, but gitlab's
               | selfhosted omnibus includes a container registry that
               | Just Works(tm) for our small team.
        
               | friendzis wrote:
                | Most production workloads do not use docker directly,
                | but rather use it as a sort of "installation format"
                | that other services schedule (spin up, connect, spin
                | down, upgrade). One typical default is to always try to
                | pull a fresh image even if the requested version is
                | available in the node-local cache. On one hand that
                | prevents issues where services would start with stale
                | versions on certain nodes; on the other hand, in the
                | event of repository downtime, it blocks service startup
                | altogether. With such a setup, availability of the
                | registry is mission-critical for continuous operation.
               | 
               | Some people think it is a perfectly reasonable idea to
               | set up defaults to always pull, point to latest version
               | and not have local cache/mirror. Judging from the number
               | of upvotes on OP, depending on third party remote without
               | any SLA to be always available for production workloads
               | seems to be the default.
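                | In Kubernetes terms, the default described above shows
                | up as imagePullPolicy: for :latest tags the effective
                | policy is Always. A hedged sketch of the more resilient
                | pinning (all names illustrative):

```yaml
# Pod spec fragment: pin a specific tag and only contact the registry
# when the image is missing from the node-local cache.
spec:
  containers:
    - name: app
      image: docker.io/library/nginx:1.23.3   # pinned, not :latest
      imagePullPolicy: IfNotPresent           # registry outage won't block restarts
```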
        
           | bushbaba wrote:
           | That's unplanned work. There's other work needing to be done
           | as well.
        
             | ikiris wrote:
             | I'm sure docker will happily hold off on this work until
             | you can fit it into your OKR planning next quarter. /s
        
             | ohgodplsno wrote:
             | And a sudden fire is also unplanned work, but that's still
             | your work. If this is such a threat, then maybe shift
             | priorities around.
        
               | drivers99 wrote:
               | Yes, that's what unplanned work means.
        
               | taikahessu wrote:
               | You're missing the mark. It's about risk and expectation
                | management and one of the risks just blew up in an
               | unexpected way.
        
               | mynameisvlad wrote:
               | Are we not allowed to complain about unnecessary
               | unplanned work being foisted on us with 30 days notice?
               | 
               | That seems like an entirely relevant complaint for this
               | forum but from your first reply, you're acting like
               | somehow it's the greatest offense in the world that
               | someone pointed this out.
        
               | ohgodplsno wrote:
               | Come on, 30 days notice is a walk in the park.
                | Additionally, OP was the one complaining about changing
                | a few URLs and eventually spinning up a new server. It's
                | quite literally a one- or two-day job, unless you're at a
               | company the size of Amazon (in which case, luckily for
               | you, you're not the only SRE, so it's still just a few
               | days).
               | 
               | > The best I can come up with, at the moment, is waiting
               | for each organization to make some sort of announcement
               | with one of "We've paid, don't worry", "We're migrating,
               | here's where", or "We've applied to the open source
               | program". And if organizations don't do that... I mean,
               | 30 days isn't enough time to find alternatives and
               | migrate.
               | 
               | This is the original comment. The best they can come up
               | with is... do nothing and wait to see if the smoke turns
               | into a fire ? I've seen better uses of time. 30 days is
               | enough time to find an alternative, migrate _and_ get
               | regular coffee breaks too.
        
               | Twirrim wrote:
               | > Come on, 30 days notice is a walk in the park
               | 
               | Sure, maybe in a small business or startup, and even then
                | I'd contend it's not quite as easy as all that.
               | 
               | When you're dealing with anything larger, say involving
               | multiple teams, organisations, and priorities, 30 days is
                | insanely short notice for figuring out what
               | your actual route forwards is (and if you're provisioning
               | something new, making sure you're allowed to and have any
               | relevant sign-offs etc.)
               | 
               | This particular situation with Docker doesn't affect us,
               | but if it did this would have some serious knock on
               | implications. The teams in my org are already busy with
               | things that need to GA by certain dates or there will be
               | financial implications. It's not "tire fire" but in most
               | cases it's solid "don't waste time" territory. There's
               | always flex in the schedule, but the closer to a GA date
               | you get the more rigid the schedule has to be.
        
               | Zetice wrote:
               | If you're unable to take on a task that has a 30 day
               | deadline in your org, regardless of size, you're
               | experiencing a good amount of bloat.
        
               | foepys wrote:
               | At this size you should have a local registry that acts
               | as a transparent cache. If you don't, then get one right
               | now. What happens if Docker's servers are down for
               | whatever reason? Does your whole process break?
        
               | Twirrim wrote:
               | Sorry, didn't mean to imply that it is actually affecting
               | us or even a concern. It isn't. I was just calling out
                | that 30 days isn't as "simple" as the parent poster was
               | asserting.
        
               | mynameisvlad wrote:
                | 30 days is nowhere near enough time for people with real
               | jobs that have other things to do rather than drop
               | everything to do this. Once again, completely needlessly.
               | 
               | You're making a mountain out of an entirely valid
               | complaint.
               | 
               | Quoting your own profile, stay mad.
        
               | broast wrote:
               | What if you already have important planned and unplanned
                | urgent work occupying all your SREs for the month? On a
               | team or org that's already running thin? Surely you've
               | been there.
        
               | ohgodplsno wrote:
               | I have. And it was also my job to say to management "hey,
               | there's a very preoccupying fire right there, and it will
               | delay this less important thing. If you're unhappy about
                | it, send me an email explicitly telling me to drop the
                | very
               | preoccupying fire."
               | 
               | "Everybody has a plan until you get punched in the mouth"
               | also applies to tech.
        
               | broast wrote:
               | I think this thread was started by the manager that has
                | to hear that pushback, hence the headaches.
        
               | tsuujin wrote:
               | "Tell me you've never worked an enterprise tech job
               | without telling me you've never worked an enterprise tech
               | job."
               | 
               | My next 30 days are already accounted for, and will
               | already include disruptions that actually come from the
               | area of work.
        
               | ohgodplsno wrote:
               | So, you're 100% booked with no room for anything?
               | Congratulations on your management for understaffing your
               | team and expecting you to do 120% if something happens,
               | they're saving up quite a bit of money.
               | 
               | You might want to start respecting your free time though,
               | because they clearly don't give a shit about you.
        
               | tetha wrote:
               | Heh. "But shouldn't an enterprise have all of these
               | things figured out and mirrored and also pay money to
               | Docker Inc"?
               | 
               | Should? Certainly. But guess what kind of emergencies it
               | takes to get these things finally prioritized and what
               | kinda mad scramble ensues from there to kinda hold it
               | together.
        
               | layer8 wrote:
               | It's not like using cloud services without suitable
               | contractual agreements isn't a known risk?
        
               | mynameisvlad wrote:
               | Sure. It's a risk. But that doesn't somehow make this
                | work expected and planned, nor invalidate the original
               | comment.
               | 
               | It could have happened at any time. But it's also been
               | running for a decade now so there's an expectation that
               | things will continue rather than have the rug pulled with
               | 30 days notice.
        
               | pantalaimon wrote:
               | If someone were to set my server room on fire, I'd be
               | equally annoyed about them.
        
               | account42 wrote:
               | In this case it is entirely planned work: Anyone
               | depending on docker.io chose to make their processes
               | dependent on online endpoints with whose operators you
               | have no business relationship. An unpaid third-party
               | service going offline should be far from unexpected and
               | if you rely on it you better be ready to cope _without_
               | notice.
               | 
               | This is like complaining that you have to put out a fire
               | because rather than fixing the sparking cables you have
               | been relying on your neighbor to put them out before the
               | become noticeable and he only gave you a short notice
               | that he'd be going on vacation.
        
               | mynameisvlad wrote:
               | > Anyone depending on docker.io chose to make their
               | processes dependent on online endpoints with whose
               | operators you have no business relationship.
               | 
               | This does not somehow make the work "planned". That has a
               | specific definition and this ain't it.
               | 
               | Some people may have called it out as a risk when it was
               | implemented. But that still doesn't mean it's planned.
               | 
               | Someone may have included an explicit report on how to
               | deal with it at that time. That still doesn't make it
               | planned.
               | 
               | Also, just because it's known to be a risk and may have a
               | chance to happen in the future does not make it expected
               | either. Nor planned.
        
           | fieldcny wrote:
            | I didn't read it as that; they were stating the realities:
            | assumptions were made, those assumptions are now invalid,
            | they are working on alternatives, 30 days is a short
            | deadline for something like this, and Docker as an
            | organization is behaving poorly.
            | 
            | All of that seems pretty true, and frankly no one should
            | support a company that does something like this. I get they
            | need to figure out how to make money, but time has shown the
            | worst way to do that is to screw over customers or potential
            | customers.
            | 
            | Like the poster, I will never trust Docker, and will never
            | use their tooling or formats; Podman all the way.
        
             | unilynx wrote:
             | Earlier events already had us slowly switching out docker
              | for Podman, and the tooling is more similar than I had
             | expected. Half of the work is ensuring the images are
             | explicitly prefixed with docker.io/
             | 
             | And this week it turns out that makes the now problematic
             | spots a lot more greppable
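              | A rough sketch of that normalisation as a shell function
              | (a heuristic: it assumes Docker Hub's library/ convention
              | for official images, and the example references below are
              | hypothetical):

```shell
# Fully qualify an image reference so it no longer depends on the
# client's default registry. Heuristic: a first path component
# containing "." or ":" (or "localhost") is treated as a registry host.
qualify() {
  case "$1" in
    *.*/*|*:*/*|localhost/*) printf '%s\n' "$1" ;;   # already has a registry host
    */*) printf 'docker.io/%s\n' "$1" ;;             # user/image on Docker Hub
    *)   printf 'docker.io/library/%s\n' "$1" ;;     # official-image shorthand
  esac
}

qualify nginx            # -> docker.io/library/nginx
qualify rclone/rclone    # -> docker.io/rclone/rclone
qualify ghcr.io/foo/bar  # -> ghcr.io/foo/bar
```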
        
           | planetafro wrote:
           | This exactly. We have pipelines designed specifically for
           | this reason. We pull, patch, and perform minor edits to
           | images we use. We then version lock all-the-things for
           | consistency.
           | 
           | Not saying this is good news, but in the Enterprise, you have
           | to plan for shit like this.
        
           | hoherd wrote:
           | Imagine you shipped software that included references to
           | docker hub images. That software will no longer work if any
           | of the referenced images are deleted from docker hub. This
           | will be the case with any helm charts that reference images
           | that are deleted from docker hub.
           | 
           | Some of those charts will not have variables that let you
           | override the docker images and tags, so some of those will
           | not be usable without creating a new release.
           | 
           | This is one of the primary reasons to vendor your third party
           | docker images into a docker registry that you control.
        
             | aprdm wrote:
             | Yes vendor them all too.
        
             | account42 wrote:
             | Imagine if you had listened to people telling you to not
             | make your build systems dependent on online endpoints. But
              | I guess immediate convenience trumps resilience.
        
             | ohgodplsno wrote:
             | "Don't release software that can pull code from random
             | services on the internet then execute it without making
             | that configurable" has been standard since the internet was
             | available, just about.
             | 
             | Vendor your helm charts if they are production critical.
             | Vendor the docker images if they are production critical.
             | Vendor the libraries if they are critical.
             | 
              | As an added bonus, you even help make a saner internet
             | where you don't pull left-pad three billion times a month.
        
         | nickcw wrote:
         | > Which are members of the open source program and which
         | aren't.
         | 
         | You can tell which are members of the open source program if
         | you go to their docker hub page and you'll see a banner
         | "SPONSORED OSS"
         | 
         | Here is an example:
         | 
         | https://hub.docker.com/r/rclone/rclone
        
         | richardwhiuk wrote:
         | You should be escrowing any Docker images you depend on, I'd
         | have thought.
        
           | nine_k wrote:
           | Good thing is that setting up an image registry in AWS is so
           | simple! (Ha-ha, only serious.)
        
             | hellodanylo wrote:
             | There is also ECR Public Gallery, which mirrors many public
             | images from DockerHub. https://gallery.ecr.aws
        
             | politician wrote:
             | It's strikingly trivial to self-host docker images in AWS
             | ECR and to run your own CICD platform with safe deployments
             | using EC2, the AWS SDK and the Docker SDK. A super basic
             | process that monitors one GitHub repo is ~150 LOC.
             | 
             | EDIT: I just confirmed that GPT-4 can write this program.
             | Have fun!
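              | The ECR push side really is only a couple of commands
              | (the account ID, region, and image name below are
              | placeholders):

```shell
# Authenticate Docker to a private ECR registry, then push an image.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0.0
```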
        
               | hughw wrote:
               | Thank you if you can share this prompt.
        
               | stuff4ben wrote:
               | Literally just ask it to do it. I've been asking ChatGPT
               | just now to write me a bunch of bash scripts I've
               | procrastinated doing. Holy crap that thing is pretty
               | awesome!
        
               | Aperocky wrote:
               | ChatGPT has learned no less than 100000 snippets about
               | `rm -r *`, and how to trick people into accidentally
               | using it.
        
             | ZiiS wrote:
             | Is https://aws.amazon.com/ecr/ no good?
        
               | nine_k wrote:
               | It exactly is good.
        
               | web3-is-a-scam wrote:
               | It's great. We migrated our images to it months ago (not
               | because of this, bandwidth issues mainly, we also vendor
               | our base images on it) and it has given us exactly 0
               | problems.
        
           | petesergeant wrote:
           | > escrowing
           | 
           | Are you sure this is what you mean? Escrow is a type of
            | contractual arrangement, one type of which is agreeing with a
           | commercial partner that you get a copy of their source-code
           | if they go broke.
           | 
           | I feel like you mean vendoring.
        
             | tmountain wrote:
             | Maybe I'm old fashioned, but we used to call this
             | mirroring.
        
               | phillc73 wrote:
               | Mirroring is the best way. Do it before a service is
               | shuttered, which we used to call "closed".
        
             | richardwhiuk wrote:
             | Escrow the image in case the partner (Docker) stops being
             | able to provide it. Seems like a fair use of the word to
             | me.
             | 
             | "Vendoring" isn't a word.
        
               | [deleted]
        
               | jacobr1 wrote:
               | Things become a word when there is a critical mass of
                | people that use the word. In this case the term initially
                | referred to placing a copy of the source code of a
                | third-party library into a /vendor/ subdirectory, thus
                | "vendoring" it. It has since been extended to similar use
               | cases and has become part of the software developer
               | jargon.
        
               | mindcrime wrote:
               | "Vendoring" is 100% a word. It may not be in the OED or
               | MW, but those things are descriptive, not prescriptive.
               | Words become words when they are used as words, and
               | "vendoring" is used as such. See:
               | 
               | https://en.wiktionary.org/wiki/vendoring
               | 
               | https://www.google.com/search?q=vendoring
        
               | account42 wrote:
               | "Vendoring" is a term of art that is used to describe
               | incorporating third party dependencies into your (source
               | code) repository. While not a perfect fit it seems close
               | enough - closer than escrowing where typically a third
                | party that has no immediate use for the artifact is the
               | one holding it.
        
               | crdrost wrote:
               | Maybe this is just me being a physicist, but I would have
               | trouble applying the notion of escrow to anything that
               | does not obey a law of conservation...
               | 
               | "Put that idea in escrow"--I assume I have to write it
               | down first? "Put our incrementing page view count in
               | escrow"--uh...? "Put my time in escrow"--how on earth am
               | I going to get it back?
               | 
               | Similarly "escrowing your software dependencies", hard to
               | interpret if I didn't know the context. Whereas
               | "vendoring" is similarly opaque but immediately
               | recognizable as jargon and has made it into tools (`go
               | mod vendor` and `deno vendor` for example).
        
         | web3-is-a-scam wrote:
         | This is why our team vendors the images we depend on into AWS
         | Elastic Container Registry.
        
         | caeril wrote:
         | This whole thing is so weird. Why do so many organizations
         | _depend_ on the internet to function?
         | 
         | It wasn't too long ago that it was standard practice to vendor
         | your dependencies; that is, dump your dependencies into a
         | vendor/ directory and keep that directory updated and backed
         | up.
         | 
         | But now, you all think it's 100% acceptable to just throw your
         | hands up if github is down, or a maven repository is down, or
         | docker hub makes a policy change?
         | 
         | Every year that goes by it becomes clear that we are actually
         | _regressing_ as a profession.
        
           | phaedrus wrote:
           | There are some places that still work the old way, such as
            | where I work - and we're finding we're increasingly out of
            | touch with younger developers who grew up in a connected
           | world. We had a recent college grad engineer (developer) who
           | didn't work out as a hire. Some examples of the disconnect:
           | 
           | Try as I might, I couldn't get him to understand the
           | difference between "git" the tool and "Github" the website.
           | He kept making me nervous because he'd slip up and use the
           | two terms interchangeably. (We have sensitive data that
           | shouldn't be uploaded to the cloud.)
           | 
           | He didn't seem to completely understand files/folders and the
           | desktop metaphor. He didn't seem to understand the difference
           | between personal devices and work devices.
           | 
           | The last straw for our boss to let him go was he turned in a
           | project that used a free web service on the cloud to upload
           | data and get back the rows sorted. (Refer back to what I said
           | above about: sensitive data.)
           | 
           | It didn't appear he was being obstinate, it was a tech-
           | cultural difference. "Radical semantic disconnect" as I've
           | seen the term used in science fiction.
        
             | FearNotDaniel wrote:
             | Oh dear, sounds like a tricky one, but I'm not sure that
             | the "cultural" difference is what really mattered: the
             | question is, did he have a _willingness_ to understand
             | where his worldview was falling short; to see that his
             | limited experience of the world and college education was
             | only a tiny subset of human experience and technical
             | practices; to actively engage with those differences and
              | continue learning? Unfortunately over the years I've also
             | had some bad experiences with recent grads and the biggest
             | problems usually boiled down to arrogance rather than
             | ignorance... e.g. I can sort-of understand somewhere along
             | the line that someone could have mistakenly learned that
             | this new thing called "unicode" was invented for unix
             | machines (after all, the names sort of sound similar) and
             | that therefore we have absolutely no business trying to use
             | it on a Windows system. But to then absolutely insist until
             | you are blue in the face that you must be right about this
             | because you learned it in college and everyone else in the
             | team is just wrong, no matter what evidence is produced...
             | well that is a difficult situation that I have personally
             | encountered.
        
         | sergiotapia wrote:
         | If your business is depending on these open source projects to
         | exist, shouldn't you be paying them so they can then pay for
         | Docker?
        
           | dahdum wrote:
           | Not every open source project wants to deal with donations /
           | payments that could force incorporation, tax filings, bank
           | accounts, credit/debit cards, and other paperwork. I
           | certainly wouldn't want to deal with that for a side project.
        
             | BlueTemplar wrote:
             | If you are part of an organization, you already need to
              | deal with most of those?
        
               | tetha wrote:
               | You might need to deal with this on the receiving end.
        
         | the8472 wrote:
         | Why not use your own registry with a pull-through cache?
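          | The stock registry image supports exactly that; a minimal
          | config sketch (note it can only proxy a single upstream, and
          | the hostnames here are hypothetical):

```yaml
# config.yml for the official "registry" image running as a
# pull-through cache of Docker Hub (one upstream only).
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
proxy:
  remoteurl: https://registry-1.docker.io
http:
  addr: :5000
```

          | Clients then point at it via "registry-mirrors" in
          | /etc/docker/daemon.json, e.g.
          | {"registry-mirrors": ["https://mirror.example.com:5000"]}.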
        
         | leshenka wrote:
        | How hard is it to spin up your own registry and clone those
        | images
         | there? I'm not heavily invested in my company's infrastructure
         | but as far as I can tell we have our own docker and npm
         | registries
        
       | muhehe wrote:
       | Can someone recommend some simple "caching proxy for docker
       | images"? Something where instead of docker/podman pull
       | docker.io/something I would do docker/podman pull
        | my.cache/something and it would do the rest. The official
        | docker registry image is supposed to do this, but it cannot
        | mirror repositories outside of Docker Hub.
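        | For podman, this can also be done client-side: a mirror entry
        | in registries.conf rewrites docker.io pulls transparently, so
        | image references don't change. A sketch, with my.cache as the
        | hypothetical mirror host:

```toml
# /etc/containers/registries.conf (v2 format, used by podman/skopeo):
# pulls of docker.io/* are tried against the mirror first, falling
# back to the original location if the mirror is unavailable.
[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "my.cache"
```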
        
       | nottorp wrote:
       | Hmm.
       | 
       | > GitHub's Container Registry offers free storage for public
       | images.
       | 
       | But for how long?
        
         | pimterry wrote:
         | Unlike Docker Inc, GitHub (via Microsoft) do have very deep
         | pockets & their own entire cloud platform, so they can afford
         | to do this forever if they choose.
         | 
         | And their entire marketing strategy is built around free
         | hosting for public data, so it'd take a major shift for this to
         | disappear. Not to say it's impossible, but it seems like the
         | best bet of the options available.
         | 
         | Is it practical to set up a redirect in front of a Docker
         | registry? To make your images available at example.com/docker-
         | images/abc, but just serve an HTTP redirect that sends clients
         | to ghcr.io/example-corp/abc? That way you could pick a new host
         | now, and avoid images breaking in future if they disappear or
         | if you decide to change.
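          | One hedged sketch of that redirect idea in nginx (untested;
          | whether registry clients follow redirects on every /v2/
          | endpoint varies by client, and all names are placeholders):

```nginx
# Redirect Docker Registry v2 API calls for example.com/docker-images/*
# to the corresponding ghcr.io namespace.
server {
    listen 443 ssl;
    server_name example.com;

    location ~ ^/v2/docker-images/(.*)$ {
        return 307 https://ghcr.io/v2/example-corp/$1;
    }
}
```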
        
           | cauthon wrote:
            | > And their entire marketing strategy is built around free
            | hosting for public data
            | 
            |     1. Embrace
            |     2. Extend       <- you are here
            |     3. Extinguish
           | 
           | it's bonkers when people innocently trust Microsoft to do the
           | right thing
        
             | joshmanders wrote:
             | This would make sense if it wasn't a core feature of GitHub
             | LONG BEFORE Microsoft bought them.
             | 
             | Can we stop this madness already?
        
               | isatty wrote:
               | No, because Microsoft did already buy them?
        
               | [deleted]
        
           | nottorp wrote:
           | > so they can afford to do this forever if they choose.
           | 
           | If they choose. It's in fashion right now to fire people and
           | squeeze free tiers.
        
           | wvh wrote:
           | A simple vanity domain solution, like for Go packages, seems
           | like it could work. Just redirect the whole URL directly to
           | the actual registry's URL.
           | 
           | I don't know if container signature tools support multiple
           | registry locations though.
        
           | shaunn wrote:
           | I like where your head is at; I found this [1] and it makes a
           | case that an attack vector may be created.
           | 
           | [1] https://stackoverflow.com/a/67351972
        
             | pimterry wrote:
             | That's different - that's about changing the _client_
             | configuration. I'm looking to change the server instead, so
             | that the client can use an unambiguous reference to an
             | image, but end up at different registries depending on the
             | server configuration. In a perfect world, Docker Hub would
             | let you do this to migrate community projects away, but
             | even just being able to manually change references now to a
             | registry-agnostic URL would be a big help.
             | 
             | Shouldn't be any security risk there AFAICT. Just hard to
             | tell if it's functionally supported by typical registry
             | clients or if there are other issues that'd appear.
        
         | agumonkey wrote:
         | > But for how long?
         | 
         | the subtitle of the web era
        
           | preciousoo wrote:
           | The hippies were right
        
         | Kwpolska wrote:
         | They have a pretty good track record in code hosting (15
         | years), why would they ruin it for containers?
        
           | bflesch wrote:
           | Github shutting down is not on my risk matrix. Hopefully it
           | happens after I retire.
        
             | tankerkiller wrote:
             | It's on ours where I work, which is why we use Gitea to
             | clone the various open source projects we use to our own
             | servers. With binary level deduplication the amount of data
             | stored is actually incredibly small. I think the
             | unduplicated storage is like 50GB, and the deduplicated is
             | like 10GB?
        
           | jnwatson wrote:
           | The 6-8 orders of magnitude difference in storage required?
        
             | bmacho wrote:
              | We need a culture of not using resources just because
              | they are available and cheap. Use GitHub as if it weren't
              | free.
        
           | trey-jones wrote:
           | > But for how long?
           | 
           | applies very much to Github, and for all of their services,
            | not just containers. The elephant in the room (Microsoft)
            | hardly needs to be mentioned, I would think. We are only a
            | few years into that regime.
        
             | Sebb767 wrote:
             | You can say a lot of bad stuff about Microsoft, but they
                | are not known for randomly and suddenly killing off their
             | services.
        
               | trey-jones wrote:
               | It's not randomly and suddenly killing off services, but
               | rather suddenly (and not randomly) changing the pricing
               | structure when companies have been committing to the
               | point of lock-in for years on the free tier.
        
               | macintux wrote:
               | Serious question, though: how many services have they
               | offered over the years that were free to anyone?
               | 
               | Free to existing Windows users, perhaps, but free to the
               | world doesn't seem like something Microsoft was
               | historically in a position to offer, much less later
               | kill.
        
               | Kwpolska wrote:
               | Hotmail/outlook.com and OneDrive are two services that
               | Microsoft has been offering for decades, for free (with a
               | storage limit, but that's nothing weird), no Windows
               | required.
        
               | macintux wrote:
               | Not sure why those didn't occur to me, other than that
               | I've never used them. Thanks.
        
       | roughly wrote:
       | The ongoing Docker saga has convinced me I'm never, ever, ever
       | making a product for developers.
        
       | justusw wrote:
       | As long as we don't share ownership in these platforms, nothing
       | will ever truly belong to us. For Docker, the software, a Libre
       | alternative is Podman. Instead of GitHub, use Codeberg, an open
       | organization and service.
       | 
       | Now we need a Docker registry cooperative owned by everyone.
        
       | sam0x17 wrote:
       | They could probably have made more money from 1-line contextual
       | advertising in image pull messages than this shit
        
         | chatmasta wrote:
         | I think their goal, rather than making more money, is probably
         | to stop _spending_ money on resources belonging to  "customers"
         | who don't pay them any money.
         | 
         | As annoyed as I am with the change, I understand their
         | motivation. It seems rather entitled to demand Docker continue
         | offering me the free service of storing hundreds of gigabytes
         | of redundant, poorly optimized image layers. The complaints
         | seem to largely boil down to "This is outrageous! I will no
         | longer consume your resources without paying you! GOODBYE, good
         | sir!"
        
       | bombolo wrote:
       | > Cybersquat before a bad actor can
       | 
       | I don't see why the duty of preventing this falls on the
       | shoulders of the maintainers that are being kicked out to begin
       | with.
       | 
        | I don't publish on Docker Hub, but if they were in the process
        | of kicking me out, I'd let the chips fall and let them deal
        | with any eventual disaster.
        
       | abdellah123 wrote:
       | self hosting is the only alternative ... We cannot depend on the
       | cloud anymore. This is not the first nor the last time something
       | like this happens.
        
       | rvz wrote:
       | Tough. Should have self-hosted rather than get yourself locked
       | in. (Again)
       | 
       | > Start publishing images to GitHub
       | 
       | > GitHub's Container Registry offers free storage for public
       | images. It doesn't require service accounts or long-lived tokens
       | to be stored as secrets in CI, because it can mint a short-lived
       | token to access ghcr.io already.
       | 
        | Even after the montage of outages, still suggesting to re-
        | centralize back onto GitHub as the solution is really asking
        | for more disappointment. [0]
       | 
       | Once again, we have learned nothing.
       | 
       | [0] https://news.ycombinator.com/item?id=34843847
       | 
       | [1] https://news.ycombinator.com/item?id=22867803
        
       | martypitt wrote:
       | I think it's an interesting challenge with FOSS infrastructure in
       | general, and I'm surprised it isn't more of an issue.
       | 
       | Docker's storage is heavier than most, but what about other
       | repositories like maven central, and npm? There must be
       | significant costs associated with running those.
       | 
       | These tools are all the backbone of modern software dev, and need
       | a business model. It's reasonable that consumers should pay for
       | the benefit. I think Docker have screwed the execution of this
       | transition, but the overall pattern of "someone has to pay" is
       | one I support.
       | 
       | Personally, I pay for Docker. I use it every day, and get value
       | from it.
       | 
       | The argument that the OP makes is really valid though - OSS needs
       | distribution channels, which need to be funded - and expecting
       | the publisher to pay for this isn't always appropriate.
       | 
        | I'd like to see something like the equivalent of CNCF which I
        | can buy a subscription from, and it funnels money to the
        | companies and developers that keep me in a job -- almost a
        | Spotify model for OSS and its supporting infrastructure.
        
         | Symbiote wrote:
         | https://mvnrepository.com/repos/central
         | 
         | That has Maven Central as 34TB, which seems reasonable.
         | 
         | Docker is extremely inefficient in comparison, it was 15PB two
         | years ago: https://www.docker.com/blog/scaling-dockers-
         | business-to-serv...
        
         | schneems wrote:
          | RubyCentral pays the bills for rubygems.org. It's a non-profit
          | (501c3, not a 501c9). I believe you can buy an individual
          | subscription to Ruby Central. Profits from their conferences,
          | RailsConf and RubyConf, also go towards funding infra (and
          | paying people to wear the pager too).
        
           | jacques_chester wrote:
           | That said, a lot of the true cost of rubygems.org is borne by
           | Fastly, who provide the CDN.
           | 
           | Disclosure: I work at Shopify on a team that does work in and
           | around RubyGems.
        
       | rad_gruchalski wrote:
       | I just started paying for Docker. I use it daily, it makes my
       | life easier so I decided to pay for it.
        
       | sigwinch28 wrote:
       | This is incredibly frustrating to deal with because of how deeply
       | the registry name is baked into Dockerfiles and image names. We
        | end up "mirroring" our base images, but there's some disconnect
        | internally: "oh, yeah, our
        | harbor.company/library/debian:bullseye is just some random pull
        | of library/debian:bullseye from Docker Hub".
       | 
       | Imagine if you needed to change mirrors for `apt` and as part of
       | that process you had to change all of the names of installed
       | packages because the hostname of the mirror formed part of the
       | package's identifier.
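        | 
        | The renaming pain described above can at least be scripted; a
        | minimal sketch in shell, where harbor.example.com and the
        | helper name are placeholders:

```shell
# rewrite_registry: swap the registry host on a fully qualified image
# reference, e.g. docker.io/library/debian:bullseye ->
# harbor.example.com/library/debian:bullseye.
# (Hypothetical helper; the target registry is a placeholder.)
rewrite_registry() {
    ref="$1"; new_host="$2"
    # Drop everything up to the first '/' (the old registry host),
    # then prepend the new host.
    echo "${new_host}/${ref#*/}"
}

src=docker.io/library/debian:bullseye
dst="$(rewrite_registry "$src" harbor.example.com)"
echo "$dst"   # harbor.example.com/library/debian:bullseye

# Mirroring is then the usual pull/tag/push dance:
#   docker pull "$src" && docker tag "$src" "$dst" && docker push "$dst"
```
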
        
       | imhoguy wrote:
        | IPFS or torrents would be an ideal way to host and distribute
        | Docker images, as every layer is content-addressable and
        | therefore read-only. I wouldn't mind seeding any public images
        | I keep running on my server.
        
       | millerm wrote:
       | I suppose BitTorrent for Images should be a thing (again?)
       | 
       | Discussions of decentralization and redundancy always come up in
       | software/system design and development, but we seem to always
       | gravitate to bottlenecks and full dependency on single entities
       | for the tools we "need".
        
       | Animats wrote:
       | What do we do when Microsoft/Github pulls a similar trick?
        
       | unilynx wrote:
       | It seems to me that the only thing more devastating for a lot of
       | developers would be npmjs.com blowing up like this because they
       | desperately needed funding. But they got acquired by parties in
       | no rush to profit off them
       | 
       | Why didn't any of the big tech acquire Docker and their registry?
       | They don't appear to be less interesting than npmjs considering
       | technology or ecosystem. Did they resist being acquired and has
       | the opportunity passed ?
        
       | projectazorian wrote:
       | Sad but unsurprising; Docker has been slowly transforming into a
       | traditional enterprise software company for quite a while now.
       | They really squandered their potential.
        
       | cookiengineer wrote:
       | Does anybody know whether there could be something like an
       | open/libre container registry?
       | 
       | Maybe the cloud native foundation or the linux foundation could
       | provide something like this to prevent vendor lock-ins?
       | 
        | I was coincidentally trying out Harbor again over the last few
        | days, and it seems nice as a managed or self-hosted
        | alternative. [1] After some discussions we'll probably go with
        | that, because we want to prevent another potential lock-in with
        | Sonatype's Nexus.
       | 
       | Does anybody have similar migration plans?
       | 
       | The thing that worries me the most is storage expectations,
       | caching and purging unneeded cache entries.
       | 
       | I have no idea how large/huge a registry can get or what to
       | expect. I imagine alpine images to be much smaller than say, the
       | ubuntu images where the apt caches weren't removed afterwards.
       | 
       | [1] https://goharbor.io
        
         | jillesvangurp wrote:
          | It's all open source software, stupidly simple and easy to
          | host. It's a low-value commodity that anyone can trivially
          | self-host. All you need is a Docker-capable machine (any
          | Linux machine, basically), some disk space to host the
          | images, and a bit of operational stuff like monitoring,
          | backups, etc. So there's an argument to be made for using
          | something that's there, convenient, and available but not too
          | costly. Which until recently was Docker Hub. But apparently
          | they are happy to self-destruct and leave that to others.
         | 
         | They should take a good look at Github. If only for the simple
         | reason that it's a natural successor to what they are offering
         | (a free hub to host software for the world). Github actually
          | has a container registry (see above for why). And of course
          | the vast majority of software projects already use them for
         | storing their source files. And they have github actions for
         | building the docker images from those source files. Unlike
         | dockerhub, it's a complete and fully integrated solution. And
         | they are being very clever about their pricing. Which is mostly
         | free and subsidized by paid features relevant to those who get
         | the most value out of them.
         | 
         | I like free stuff of course. But I should point out that I was
         | actually a paying Github user before they changed their pricing
         | to be essentially free (for small companies and teams). I love
         | that of course but I was paying for that before and I think
         | they were worth the money. And yes, it was my call and I made
         | that call at the time.
         | 
          | Also worth pointing out that Github Actions builds on top of
          | the whole Docker ecosystem. It's a valuable service that is
         | built on top of Docker. Hosting the docker images is the least
         | valuable thing. And it's the only thing dockerhub was ever good
         | for. Not anymore apparently. Unlike dockerhub, Github figured
         | out how to create value here.
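          | 
          | The ghcr.io flow mentioned above can be sketched like this;
          | the image name is a placeholder and the standard GitHub
          | Actions variables (GITHUB_REPOSITORY, GITHUB_ACTOR,
          | GITHUB_TOKEN) are assumed:

```shell
# Sketch of pushing to GHCR from CI using the ephemeral GITHUB_TOKEN.
# Note: GHCR image names must be lowercase, while GITHUB_REPOSITORY
# may contain uppercase characters.
GITHUB_REPOSITORY="${GITHUB_REPOSITORY:-MyOrg/MyImage}"   # placeholder outside CI
IMAGE="ghcr.io/$(printf '%s' "$GITHUB_REPOSITORY" | tr '[:upper:]' '[:lower:]'):latest"
echo "$IMAGE"

# In an actual workflow you would then run:
#   echo "$GITHUB_TOKEN" | docker login ghcr.io -u "$GITHUB_ACTOR" --password-stdin
#   docker build -t "$IMAGE" . && docker push "$IMAGE"
```
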
        
       | nrvn wrote:
        | After Docker announced rate limiting for the hub, this was an
        | anticipated move. It was just a matter of time.
       | 
       | The only recommendation to everyone: move away or duplicate.
       | 
       | One of the strategies I am yet to test is the synchronization
       | between gitlab and github for protected branches and tags and
       | relying on their container registries. Thus (at least) you
       | provide multiple ways to serve public images for free and with
       | relatively low hassle.
       | 
       | And then for open source projects' maintainers: provide a one
       | command way to reproducibly build images from scratch to serve
       | them from wherever users want. In production I don't want to
       | depend on public registries at all and if anything I must be able
       | to build images on my own and expect them to be the same as their
       | publicly built counterparts. Mirroring images is the primary way,
       | reproducing is the fallback option and also helps to verify the
       | integrity.
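        | 
        | The duplicate-to-several-registries idea can be sketched like
        | this; the registry hosts and the image name are placeholders:

```shell
# Build the list of per-registry references for one image so it can be
# tagged and pushed to several hosts, so that no single registry is a
# single point of failure. (Hosts and image name are placeholders.)
IMAGE="myorg/mytool:1.2.3"
TARGETS=""
for REGISTRY in ghcr.io registry.gitlab.com; do
    TARGETS="$TARGETS $REGISTRY/$IMAGE"
done
echo "$TARGETS"

# For each target you would then run:
#   docker tag "$IMAGE" "$target" && docker push "$target"
```
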
        
         | phillebaba wrote:
         | Some self promotion but I have built a project that aims to
         | solve some of these issues in Kubernetes.
         | https://github.com/xenitAB/spegel
         | 
         | I have avoided a couple of incidents caused by images being
         | removed or momentarily not reachable with it. It would at least
         | mitigate any immediate issues caused by images being removed
         | from Docker Hub.
        
         | acdha wrote:
         | > Mirroring images is the primary way, reproducing is the
         | fallback option and also helps to verify the integrity.
         | 
         | I suspect the latter will become more common over time. I can
          | count on no fingers the number of open source projects I've
          | encountered that have production-grade container images.
         | Once you need to think about security you need to build your
         | own containers anyway and once you've done that you've also
         | removed the concern of a public registry having issues at an
         | inopportune moment.
        
       | matt3210 wrote:
       | Time to switch to Podman
        
         | tannhaeuser wrote:
         | Or, you know, just install the fscking app on a Linux VM using
         | the app's native installation method, and be done with it? Oh
         | no, we can't have that, must be using k8s, downloading the
         | Internet, and using a zoo of incidental tools and "registries"
         | to download your base images over http (which are STILL either
         | Debian- or RedHat-based, which is the entire reason of the
         | distro abstraction circus to begin with) is SO MUCH EASIER lol.
        
       | numbsafari wrote:
        | You don't go to war with the army you want, you go to war with
        | a menagerie of images built and generated by random strangers
        | on the internet that your team found in Stack Overflow posts.
        
         | pelasaco wrote:
          | and now with a layer of ChatGPT-generated
          | code/documentation/configuration on top, etc.
        
       | tolmasky wrote:
       | Could IPFS possibly be a good distributed (and free?) storage
       | backing for whatever replaces DockerHub for Open Source, as
       | opposed to using something like GitHub? We'd still need a
       | registry for mapping the image name to CID, along with
       | users/teams/etc., but that simple database should be much cheaper
       | to run than actually handling the actual storage of images and
       | the bandwidth for downloading images.
        
         | kevincox wrote:
         | Probably. You still need to store and serve the data somewhere
         | of course but for even moderately successful open source
         | organizations they will likely find volunteer mirrors. The nice
         | thing about IPFS is that new people can start mirroring content
         | without any risk or involvement, new mirrors are auto-
         | discovered, like bittorrent.
         | 
          | It seems like the docker registry format isn't completely
          | static, so I don't think you can just use a regular HTTP
          | gateway to access it, but there is
          | https://github.com/ipdr/ipdr which seems to be a docker
          | registry built on IPFS.
         | 
         | > We'd still need a registry for mapping the image name to CID,
         | along with users/teams/etc.
         | 
         | IPNS is fairly good for this. You can use a signing key to get
         | a stable ID for your images or if you want a short memorable
         | URL you can publish a DNS record and get
         | /ipns/docker.you.example/.
         | 
         | Of course now you have pushed responsibility of access control
         | to your DNS or by who has access to the signing key.
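          | 
          | The DNS option is a DNSLink TXT record; roughly (the domain
          | and the CID here are placeholders):

```
; Hypothetical zone entry mapping docker.you.example to IPFS content,
; resolvable as /ipns/docker.you.example/ (CID is a placeholder).
_dnslink.docker.you.example. 300 IN TXT "dnslink=/ipfs/bafy...exampleCID"
```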
        
         | verdverm wrote:
         | IPFS is the same free that Docker provides. Someone, somewhere
         | is paying for the storage and network. The public IPFS would
         | not likely support the bandwidth, volume, and most CSOs.
        
       | nikolay wrote:
        | They really seem to want to go out hated, leaving people happy
        | to finally be off their continuously degrading service and
        | attitude! I get it, they are a commercial entity, but it seems
        | that even they realize they are in their death throes!
        
       | jhoelzel wrote:
        | Please don't forget that you can cache all these images in your
        | own registry! You will still have to worry about how to get
        | updates, but set up a private registry and deal with this on
        | your own time!
        | 
        | As a side note, Rancher Desktop is good enough. Docker has
        | repeatedly demonstrated that they were just the first ones, not
        | by any means the best ones.
        
         | rustyminnow wrote:
         | Any tips to running your own registry? e.g. what registry
         | software/package do you use?
         | 
         | I think when I looked into this in the past, I couldn't find
         | anything suitable. A quick search now brings up
         | https://hub.docker.com/_/registry, but considering the content
         | of the article, not sure how I feel about it
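          | 
          | For what it's worth, that registry image can also run as a
          | pull-through cache of Docker Hub; a minimal sketch, where the
          | port and hostnames are arbitrary choices:

```shell
# Sketch of the official `registry` image acting as a pull-through
# cache of Docker Hub, plus the daemon.json mirror entry clients need.
# (Port, paths and hostnames are arbitrary/placeholder choices.)

# 1. The cache itself (shown for context, not executed here):
#   docker run -d -p 5000:5000 \
#     -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
#     registry:2

# 2. Point the Docker daemon at the mirror:
conf="$(mktemp)"
cat > "$conf" <<'EOF'
{
  "registry-mirrors": ["http://localhost:5000"]
}
EOF
cat "$conf"   # would be installed as /etc/docker/daemon.json
```
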
        
       | JonChesterfield wrote:
       | My first thought on this was good riddance. The dev model of
       | "we've lost track of our dependencies so ship Ubuntu and a load
       | of state" never sat well.
       | 
       | However it looks like the main effect is going to be moving more
       | of open source onto GitHub, aka under Microsoft's control, and
       | the level of faith people have in Microsoft not destroying their
       | competitor for profit is surreal.
        
         | hospitalJail wrote:
         | >The dev model of "we've lost track of our dependencies so ship
         | Ubuntu and a load of state" never sat well.
         | 
         | This was my first thought when I learned of Docker.
         | 
         | I have a hard time calling myself an 'Engineer' when there are
         | so many unknowns, that I'm merely playing around until
         | something works. I insist on being called a Programmer. It pays
         | better than 'real' engineering. Why not embrace it? (Credit
         | toward safety critical C and assembly though, that's
         | engineering)
         | 
         | EDIT: Programmer of 15 years here
        
           | Clubber wrote:
           | Developer/Programmer/Engineer titles are mostly meaningless
           | because they mean different things at different companies.
           | You can go wayback and call yourself a coder.
        
             | freedomben wrote:
             | what's old is new again. the teenagers nowadays call
             | themselves "coders" because "programmer" is for old people.
        
           | 0xDEF wrote:
           | I doubt the real(tm) engineers at NASA and SpaceX know
           | everything their proprietary closed-source Matlab-to-FPGA
           | tooling is actually doing under the hood.
        
             | ijaeifjzdi wrote:
             | can't say for spaceX, but most NASA glue is built in house.
             | close to zero proprietary ones.
             | 
             | maybe the vlsi is closed. but that is "industry standard" i
             | guess. rest is a bunch of mathy-language du jour held
             | together with python or something.
             | 
              | ...opaque docker containers going to prod don't have an
              | excuse other than inefficient orgs fueled by VC or ad
              | money. Or maybe they do, but you can't excuse them using
              | NASA as an example :)
        
           | at_a_remove wrote:
           | I remember back around 2009, our org had this horrible open
           | source program, which shall not be named, foisted upon us by
           | an excitable bloke who had heard wondrous things from the
           | university where it had been developed. Well, we had a bitch
           | of a time getting it running. Instructions and build scripts
           | were minimal. We thrashed around.
           | 
           | I noted to someone that this felt less like a product and
           | more like a website and set of scripts ripped from a working
           | system. A few of us were shipped up to the originating
            | university for a week to hobnob with the people in charge
           | of it. Toward the end, during the ritual inebriation phase, I
           | managed to find out that they had never actually attempted to
           | install it on a clean system. This had truly been ripped from
           | a working system. And I thought to myself, "How horrible."
           | 
           | Now, I am admittedly pants at Linux. No good at all. But
           | there is something about Docker and similar technologies that
           | says, "Yes, we threw our hands in the air and stopped trying
           | to make a decent installation system."
        
           | AlotOfReading wrote:
           | As someone that does what you consider "engineering", my
           | current project is containerized because I've lost too much
           | of my life to debugging broken dev environments. The person
           | who ships safety critical releases at most companies isn't a
           | developer that's deeply familiar with the code, it's usually
           | a distracted test engineer with other things to do that may
            | not be very tech-savvy. Anything I can do to help them get a
           | reproducible environment is great.
        
         | Sebb767 wrote:
         | > The dev model of "we've lost track of our dependencies so
         | ship Ubuntu and a load of state" never sat well.
         | 
         | Docker, the company, is failing. Docker, as in containerization
         | technology, is alive and very well.
         | 
         | (edited for clarity)
        
           | osigurdson wrote:
           | >> Docker, the company, is failing. Docker, the container
           | technology, is alive and very well.
           | 
            | Is it though? Podman is better liked (no daemon, non-root)
            | and Kubernetes doesn't support it directly any more. I
            | don't think it matters much that k8s uses CRI-O, but
            | Docker needs to be #1 for running a container on a single
            | machine. Yet they seem to be letting that slip away because
           | it is not directly monetizable. Software businesses need to
           | be creative - invest a lot into free things, which support
           | monetization of others. If you want low risk returns, buy
           | T-bills.
        
             | Sebb767 wrote:
             | > Is it though? Podman is more well liked (no daemon / non-
             | root) and Kubernetes doesn't have direct support for it any
             | more.
             | 
              | I've used "Docker" as in "containerization", since they
              | are often used synonymously, and the grandparent's intent
              | was definitely to criticize the latter. Docker itself will
             | quite likely stay around as a name, but I have no faith in
             | the company.
        
               | imdsm wrote:
               | The company has never been good though. The technology
               | was great, but they never found a way to monetise it
               | properly, and their whole approach to outreach,
               | community, and developer experience was terrible. Their
               | support, non-existent.
               | 
               | In fact, I'd go as far as to say that, given the ubiquity
               | of their product, I can't think of a worse way a company
                | could have performed. It's been about 10 years now
                | since it really took off, and in that time the
                | technology has been great, but dealing with the company
                | has always been difficult.
        
               | chaxor wrote:
               | It seems the company is slowly trying to make users pay
               | for it. Not too long ago it was free for companies, then
               | they made companies pay to use it. Now they're making
               | people pay to store images. In the next few years I would
               | be surprised if they didn't introduce a new way to
               | monetize it, leading up to removing the use of any docker
               | executable at all without payment.
               | 
               | Many will come to comment "that's absurd, and you could
               | just use an old executable you already downloaded prior
                | to them halting its circulation", etc. But I do think
               | the writing is on the wall here with Docker continually
               | getting greedy. If they don't monetize the use of Docker
               | containers in general by making users pay to run them,
               | they have other options like spyware and ads - e.g.
               | install telemetry in the base of the system somehow to
               | sell the personal data they receive from all images*,
               | etc.
               | 
               | * I know this may not work directly as I've stated it,
               | just giving the flavor of idea
        
             | Zensynthium wrote:
             | Tell that to svb
        
             | cybrox wrote:
              | The Docker runtime is not supported by k8s anymore, but
              | Docker-built containers still work and will very likely
              | continue to work for a long time.
        
         | CGamesPlay wrote:
         | What state are you thinking of? The containers are ephemeral
         | and the dependencies are well specified in it. You can complain
         | about shipping Ubuntu, but the rest of this doesn't make sense.
        
           | vilunov wrote:
            | Makes perfect sense to me, sadly. The dependencies are
            | specified excessively; that's why everyone is shipping
            | Ubuntu. This is caused by, and further facilitates, the
            | development style of "do not track what we use, just ship
            | everything". Also, the dependencies are specified in
            | container images, which themselves are derivative artifacts
            | and not the original source code, and these dependencies
            | often change between container builds with no explicit
            | relevant change.
            | 
            | There are three practical problems as a result:
            | 
            | - huge image sizes, with unused dependencies delivered as
            | part of the artifact;
            | 
            | - limited ability to share dependencies, due to the
            | inheritance-based model of layers instead of the
            | composition-based model of package managers;
            | 
            | - non-reproducibility of docker images (not containers),
            | due to loosely specified build instructions.
            | 
            | Predicting future comments: nix mostly fixes these issues,
            | but it has a bunch of issues of its own. Most importantly,
            | nix is incredibly invasive in the development process;
            | adopting it requires heavy time investments. Containers
            | also provide better isolation.
        
             | earthling8118 wrote:
             | It doesn't have to be a choice of containers or nix though.
             | You can put your nix built applications into a container
             | just fine. You can also pull an image from somewhere else
             | and shove nix stuff into it as well.
             | 
              | There is definitely a bit of a learning curve, but the
              | time investment is frequently exaggerated. I see it as
              | similar to the borrow checker in Rust: yes, you have to
              | spend some time and learn the rules, but it helps you
              | build software that is more robust and correct. Plus,
              | once you're into it, you save significant time not having
              | to deal with dependencies, especially when bringing on
              | new people.
        
       | ar9av wrote:
       | Rock and a hard place. These organizations have large expenses
       | and can't keep giving away so much for free.
       | 
       | OTOH there are many better ways this could have been handled.
       | More notice, discount for existing orgs etc ..
       | 
       | OpenVS for life
        
       | jon-wood wrote:
       | Docker the tool has been a massive benefit to software
       | development, every now and then I have a moan about the hassle of
       | getting something bootstrapped to run on Docker, but it's still
       | worlds better than the old ways of managing dependencies and
       | making sure everyone on a project is aligned on what versions of
       | things are installed.
       | 
       | Unfortunately Docker the company appears to be dying, this is the
       | latest in a long line of decisions that are clearly being made
        | because they can't work out how to build a business around what
        | is at its core a nice UI for Linux containers. My hope is that
        | before the inevitable shuttering of Docker Inc another
        | organisation (ideally a co-op of some variety, but that's
        | probably wishful thinking) pops up to take over the bits that
       | matter, and then hopefully we can all stop trying to keep up with
       | the latest way in which our workflows have been broken to try and
       | make a few dollars.
        
         | timcobb wrote:
         | > Unfortunately Docker the company appears to be dying
         | 
         | Docker the company is crushing 2022-2023... record revenue and
         | earnings
        
           | tyingq wrote:
           | Of course, I can do a lot of record revenue and earnings
           | selling five dollar bills for $4. I'm curious what a path to
           | profit would look like...is this kind of squeeze the only way
           | to get there?
        
             | anonymoushn wrote:
             | that would make you negative one dollar of earnings per
             | sale
        
               | cratermoon wrote:
               | unless you use dotcom accounting, in which case you can
               | say that you lose money on every sale, but make up for it
               | in volume.
        
               | tyingq wrote:
               | I agree given the usual definition of "net earnings", but
               | private companies often represent earnings in creative
               | ways that exclude obvious costs (hey Uber!).
        
               | windexh8er wrote:
               | OPs point is that revenue is easy, profit is not.
        
             | timcobb wrote:
             | You can't have positive earnings like Docker if you sell 5
             | for 4
        
               | tyingq wrote:
               | I'm assuming creative calculations. Like Uber's idea of
               | "earnings" changed when they went public.
        
               | mardifoufs wrote:
               | Based on what? How is that more likely than just them
               | being able to finally generate revenue, considering they
               | started focusing on that a few years ago? I don't get
               | your comment at all
        
               | tyingq wrote:
               | Based on observation of other companies. Fluffing your
               | earnings isn't rare for private companies looking for
               | investors.
        
               | [deleted]
        
             | mfer wrote:
             | Docker Hub is a massive expense when you consider the data
             | storage and egress. To do that for open source projects you
              | have to either (a) have a lot of income to cover such an
              | expense, (b) have a pile of VC funding to cover the
              | expense, or (c) pile on the debt paying for it while you
              | grow. (b) and (c) can only live on so long.
        
               | brodock wrote:
               | I wonder if they would have a much smaller bill if they
               | were running on physical hardware instead of renting
               | infrastructure from AWS.
               | 
               | This is really not much different from
               | https://news.ycombinator.com/item?id=35133510 case.
        
               | maccard wrote:
               | That's a self inflicted problem by docker hub squatting
               | the root namespace, though.
        
               | mfer wrote:
               | Getting out of a self inflicted problem isn't so easy.
               | They have spent a long time trying. For example, putting
               | distribution in the CNCF, working with the OCI on specs
               | (like the distribution spec), making it possible to use
               | other registries while not breaking the workflows for
               | existing folks, and even some cases of working with other
               | registries (e.g., their MCR integration with Docker Hub
               | that offloads some egress and storage).
               | 
               | The root namespace problem was created by an early stage
               | startup many years ago. I feel for the rough spot they
               | are in.
        
               | sitkack wrote:
               | > I feel for the rough spot they are in.
               | 
               | I don't. Because there is this pattern from VCs to fund
               | business models that involve dumping millions in
               | resources as Open Source on the world and then owning a
               | part of the ecosystem.
               | 
               | Docker originally wanted to "own" everything, if CoreOS
               | hadn't pushed for the OCI spec, debalkanizing containers,
               | Docker would have a near monopoly on the container
               | ecosystem.
               | 
               | At this point Docker is just the command, and it is a
                | tire fire of HCI gaffes.
        
               | maccard wrote:
               | It's a self inflicted problem they've doubled down on,
               | though, and that self inflicted problem is also the
               | reason for their success. If docker hub could be removed
               | in a config, the value add of docker the company is
               | significantly diminished. It's hard to feel sorry for a
               | company who actively pursued lock-in, and didn't make any
               | real attempts at avoiding it (you know what would help? A
               | setting to not use docker hub, or to use a specific repo
               | by default), and who have built an enormous company on a
               | value add that is a monkey's paw, and they've known that
               | all along.
               | 
               | edit: https://github.com/moby/moby/pull/10411 this is the
               | change that would _actually_ solve the problem of docker
               | squatting the root namespace, and they've decided against
               | it because it would make dockerfiles less portable (or
                | really, it would neuter docker.io's role as the default
                | place images live).
        
               | nine_k wrote:
               | I fail to follow. If DockerHub is the part that actively
               | _burns_ money, why stick with it? If, say, Docker Desktop
               | is the part that actively brings profits, why would it be
               | afflicted if the users used a different image registry?
               | Most companies, except the smallest ones, use their own
               | registry  / mirror anyway.
               | 
               | Even better, the registry may continue to exist, but
               | would (eventually) stop storing the images, and start
                | storing .torrent files for the actual downloads. Seeding
                | an image from the GitHub release page would be enough for
               | most smaller projects (yes, BT supports HTTP mirrors
               | directly).
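For reference, the mirror setup this comment alludes to already exists in the Docker daemon: a pull-through mirror can be listed in `/etc/docker/daemon.json`. A minimal sketch (the mirror URL below is hypothetical), shown as a snippet that just builds and prints the config:

```shell
# Hypothetical mirror URL; the daemon reads this from
# /etc/docker/daemon.json and tries the mirror before Docker Hub.
config='{
  "registry-mirrors": ["https://registry-mirror.example.internal"]
}'
echo "$config"
```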
        
               | robszumski wrote:
                | This was the initial pebble that led to Podman existing
               | via Red Hat. No Red Hat customer wanted to pull or push
               | to DockerHub by default due to a typo. No PRs would be
               | accepted to change it and after dealing with customer
               | frustration over and over...
        
               | cratermoon wrote:
               | I'm not familiar with the 'root namespace squatting' or
               | the typo issue. Do you mean the image namespace as
               | described here: https://www.informit.com/articles/article
               | .aspx?p=2464012&seq... or is there something else? What
               | sort of typo would cause problems?
        
                | maccard wrote:
                | Yeah, this is a good summary of the problem. If I write a
                | Dockerfile with
                | 
                |     FROM ubuntu:20.04
                |     WORKDIR /app
                |     ADD mySecretAppBinary .
                | 
                | it will pull the base image from docker.io, and there
                | is no way to stop it from doing so. If I run:
                | 
                |     image_tag=test-app
                |     docker build -t $image_tag .
                |     docker push $image_tag
                | 
                | it will push a container with my secret application to
                | the public Docker Hub, assuming I am logged in (which of
                | course I am, because Docker rate-limits you if you
                | don't). I don't ever want to do that, ever, under any
                | circumstances, and it's just not possible to opt out
                | while using Docker.
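The usual workaround, for what it's worth, is to fully qualify every image reference so nothing falls back to the implicit Docker Hub default. A sketch (the registry host and image names are made up; the docker invocations are left as comments since they need a daemon):

```shell
# Fully qualify both the base image and the push target so neither
# `docker build` nor `docker push` defaults to Docker Hub.
registry="registry.corp.example.com"          # assumed private registry
image_tag="${registry}/myteam/test-app:1.0"

# In the Dockerfile: FROM docker.io/library/ubuntu:20.04   (explicit)
# docker build -t "$image_tag" .
# docker push "$image_tag"
echo "$image_tag"
```

This only helps with discipline; it doesn't address the commenter's point that nothing *prevents* an unqualified tag from going to Docker Hub.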
        
               | robszumski wrote:
               | This was the proposed PR that is summarized in that
               | article: https://github.com/moby/moby/pull/10411
               | 
                | If you did `docker tag app:latest supersecret/app:latest
                | && docker push supersecret/app:latest` instead of `docker
                | tag app:latest registry.corp.com/supersecret/app:latest`,
                | guess where your code just went?
                | 
                | Same on the pull side, if you wanted your corp's ubuntu
                | base rather than just `docker pull ubuntu`.
        
         | throwawaymanbot wrote:
         | [dead]
        
         | college_physics wrote:
          | Can't comment specifically on this or that "dying company",
          | but it is a bit disappointing that after, what, four decades
          | of open source, and the obvious utility of that paradigm, it
          | still seems a major challenge to build sustainable open source
          | ecosystems. This means we can't really move on and imagine
          | grander things that might build on top of each other.
         | 
          | It's not clear if that is due to:
         | 
         | i) competition from proprietary business models
         | 
         | ii) more specifically the excessive _concentration_ of said
         | proprietary business models ( "big tech")
         | 
         | iii) confusion from conflicting objectives and monetisation
         | incentives (the various types of licenses etc)
         | 
         | iv) ill-adapted funding models (venture capital)
         | 
         | v) intrinsic to the concept and there is no solution
         | 
         | vi) just not having matured yet enough
         | 
         | What I am driving at is that building more complex structures
         | requires some solid foundations and those typically require
         | building blocks following some proven blueprint. Somehow much
         | around open source is still precarious and made up. Ideally
         | you'd want to walk into the chamber of commerce (or maybe the
         | chamber of open source entities), pick a name, a legal entity
         | type, a sector and get going. You focus on your solutions, not
         | on how to survive in a world that doesn't quite know what to
         | make of you.
         | 
         | Now, corporate structures and capital markets etc took hundreds
         | of years to settle (and are still flawed in many ways) but we
          | do live in accelerated times, so maybe it's just a matter of
          | getting our act together?
        
           | zelphirkalt wrote:
           | With lots of open source licenses, there is no copyleft.
            | Without copyleft, for-profit companies can simply take the
            | hard work, add a little on top, make it proprietary, and sell
            | it. Customer mentality is to use the most comfortable thing,
            | without paying attention to whom they depend on, often
            | choosing the proprietary offer, because of feature X.
           | 
           | There are healthy ecosystems, even some partially replacing
           | docker, some with more daily updates than I can process, but
           | they have copyleft licenses in place and are free software,
           | to ensure contributions flow back. Companies can still make
           | profit, but not from adding a minimalistic thing and making
           | it proprietary. They need to find other ways.
        
           | trasz3 wrote:
            | It's because the incentives to make money quickly end up
            | being stronger than the incentives to build a sustainable
            | open source ecosystem.
        
           | hot_gril wrote:
           | It's still doing better than it could be. Big tech companies
           | have played way nicer than they had to, focusing more on
           | vague long-term presence than on immediate profits, and imo
           | continue to do so to a lesser extent. There always comes a
           | point when the innovation is done and they lock things down
           | again, but even then they have to fight their own employees.
        
         | msla wrote:
         | > My hope is that before the inevitable shuttering of Docker
         | Inc another organisations (ideally a coop of some variety, but
         | that's probably wishful thinking)
         | 
         | Indeed. We should all be equal in that venture: Ain't nobody
         | here but us chickens.
        
         | spicyusername wrote:
         | ideally a coop of some variety
         | 
         | This is the role I feel like podman, the tool developed by Red
         | Hat, is filling.
        
           | jacooper wrote:
              | It's not as easy or as simple as docker + docker compose.
        
             | 5e92cb50239222b wrote:
             | You're right, it's both easier and simpler since no daemons
             | are involved. podman-compose has the same command-line
             | interface and has worked ok for me so far (maybe 3 or 4
             | years at this point).
        
               | jacooper wrote:
               | Podman-compose isn't fully compatible with the new
               | compose spec.
               | 
                | Also I really don't care if docker has a daemon or not;
                | for me it offers features like auto-starting containers
                | without bothering with SystemD, and auto-updates using
                | watchtower and the docker socket.
               | 
                | And since podman doesn't have an official distro package
                | repo like docker, you are stuck using whatever old
                | version ships in your distro, without recent
                | improvements, which matters for a very actively
                | developed project.
        
               | yamtaddle wrote:
               | > Also I really don't care if docker has a daemon or not,
               | for me it offers feature like auto starting containers
               | without bothering with SystemD
               | 
                | Bingo, the "pain" of the daemon (it's never caused a
                | single problem for me? Especially on Linux; on macOS I've
               | occasionally had to go start it because it wasn't
               | running, but BFD) saves me from having to touch systemd.
               | Or, indeed, from caring WTF distro I'm running and which
               | init system it uses at all.
        
               | jacooper wrote:
               | To be fair, every mainstream distro now uses Systemd
        
               | indymike wrote:
               | > And since podman doesn't have an official repo like
               | docker,
               | 
               | Hmm... https://github.com/containers/podman
               | 
               | I found that on: https://podman.io/ so, I'm pretty sure
               | it's official.
        
               | jacooper wrote:
                | I meant a repo for a distro package manager, so you can
               | get the latest version regardless of whatever version
               | your distro ships.
        
               | fisiu wrote:
                | Most major distros ship podman in their
                | repositories. Just use your package manager to install
               | podman.
        
               | jacooper wrote:
                | And these versions are often out of date, which matters
                | given that podman is in active development and you want
                | to be using the latest version.
        
               | tristan957 wrote:
               | I don't understand what the issue is. Don't use an LTS
               | distro if you want up to date software. Fedora and Arch
               | are up to date for Podman. Alpine seems to be one minor
               | version behind.
        
               | jacooper wrote:
               | I want stability for the system and a newer podman
               | version. I do this all the time with docker, install an
               | LTS distro and then add the official docker repos.
        
             | ilovecaching wrote:
             | It's literally OCI compatible, integrates with systemd and
             | LSM, and runs rootless by default. Podman is 100000% better
             | designed on the inside with the same interface on the
             | outside.
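As an illustration of the systemd integration mentioned above, podman can generate a unit file for an existing container via `podman generate systemd`. A sketch (the container name is made up, and the actual podman/systemctl calls are left as comments since they need podman installed):

```shell
# podman generate systemd --new --name mycontainer \
#   > ~/.config/systemd/user/container-mycontainer.service
# systemctl --user enable --now container-mycontainer.service
unit="container-mycontainer.service"
echo "$unit"
```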
        
               | dontlaugh wrote:
               | It's the lack of fully compatible compose that matters
               | most.
        
               | jacooper wrote:
               | Podman appears to support the compose v2 spec, and the
                | socket API, but it still doesn't fully support BuildKit.
               | 
               | https://www.redhat.com/sysadmin/podman-compose-docker-
               | compos...
        
               | jacooper wrote:
               | Rootless networking is still a mess with no IP source
               | propagation and much slower performance. So for most
               | users docker with userNS-remapping is actually a better
               | choice.
               | 
               | Also systemd integration isn't a plus for me, I don't
               | want to deal with SystemD just to have a container start
               | on startup.
        
               | yrro wrote:
               | I think --network=pasta: helps with source IP
               | preservation.
               | 
               | Regardless that has never bothered me since I'm only
               | using podman or docker for local development...
        
               | jacooper wrote:
               | Hmmm, pasta seems to solve all rootless networking
               | issues...
               | 
               | https://github.com/containers/podman/pull/16141
        
             | Anarch157a wrote:
             | podman + podman-compose is as easy.
        
               | jacooper wrote:
               | Not comparable to the full compose spec.
        
           | raesene9 wrote:
           | This is more about Docker hub than Docker.
           | 
           | Image hosting is expensive at scale, and someone's got to pay
           | for the compute/storage/network...
        
             | robertlagrant wrote:
             | I agree. The core devs should create a new company and
             | focus just on the tools, with a simple, scaling licence
             | model for them.
             | 
             | As far as DockerHub goes, the OSS hosting costs do need to
             | be solved, but surely they can be.
        
               | raesene9 wrote:
               | I'm not sure it's easy. We're seeing other open source
               | projects like Kubernetes struggle with hosting costs, and
               | that's just one project.
               | 
                | Ideally it'd be great to see the industry fund it, but
                | with budget cuts in tech, I'm not sure that'll happen...
        
               | robertlagrant wrote:
               | I haven't seen that, but I haven't been following along.
               | I'd assumed they would be very Google-funded still. Is it
               | a general CNCF problem?
        
             | phkahler wrote:
             | >> Image hosting is expensive at scale, and someone's got
             | to pay for the compute/storage/network..
             | 
             | Bit Torrent would beg to differ.
        
               | Karunamon wrote:
               | That's a neat idea but probably unworkable in practice.
               | Container images need to be reliably available quickly;
               | there is no appetite for the uncertainties surrounding
               | the average torrent download
        
               | rjmunro wrote:
               | People who need "reliably available quickly" can pay or
               | set up their own mirror. Everyone else can use the
               | torrent system.
        
               | nordsieck wrote:
               | > That's a neat idea but probably unworkable in practice.
               | Container images need to be reliably available quickly;
               | there is no appetite for the uncertainties surrounding
               | the average torrent download
               | 
               | Bittorrent seems to work quite well for linux isos, which
               | are about the same size as containers, for obvious
               | reasons.
               | 
               | IMO, the big difference is that, with bittorrent, it's
               | possible to very inexpensively add lots of semi-reliable
               | bandwidth.
        
               | Karunamon wrote:
               | Nobody is going to accept worrying about whether the
               | torrent has enough people seeding in the middle of a CI
               | run. And your usual torrent download is an explicit
               | action with an explicit client, how are people going to
               | seed these images and why would they? And what about the
               | long tail?
        
               | nine_k wrote:
               | Cache images locally. Docker has enough provisions for
               | image mirrors and caches.
               | 
                | Downloading tens or hundreds of megabytes of exactly the
                | same image, on every CI run, at someone else's expense,
                | is predictably unsustainable.
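One common form of the local cache mentioned here is the stock `registry:2` image running in pull-through proxy mode. A sketch (port and names are illustrative; the `docker run` is left as a comment since it needs a daemon):

```shell
# docker run -d --name hub-cache -p 5000:5000 \
#   -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
#   registry:2
# CI nodes then pull through the cache instead of Docker Hub:
cache="localhost:5000"
echo "${cache}/library/ubuntu:20.04"
```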
        
               | SSLy wrote:
               | > _enough people seeding_
               | 
               | the .torrent file format, and clients, include explicit
               | support for HTTP mirrors serving the same files that's
               | distributed via P2P.
        
               | yamtaddle wrote:
               | Archive.org does this with theirs. If there are no seeds
               | (super common with their torrents--IDK, maybe a few
               | popular files of theirs _do_ have lots of seeds and that
               | saves them a lot of bandwidth, but sometimes I wonder why
                | they bother) then it'll basically do the same thing as
               | downloading from their website. I've seen it called a
               | "web seed". Only place I've seen use it, but evidently
               | the functionality is there.
        
               | phkahler wrote:
               | Nobody needs to be seeding if only one download is
               | active. You could self host an image at home on a
               | Raspberry Pi and provide an image in a minute.
               | 
               | Nobody's CI should be depending on an external download
               | of that size.
        
               | Karunamon wrote:
               | We are talking about replacing the docker hub and the
               | like, what people "should" be doing and what happens in
               | the real world are substantially different. If this
               | hypothetical replacement can't serve basic existing use
               | cases it is dead at the starting line.
        
               | worksonmine wrote:
               | Not a bad idea. Have the users seed the cached images.
        
             | yamtaddle wrote:
             | Docker Hub's the part I care about the most.
             | 
             | If I can't use it as a daemon-focused package manager that
             | works more-or-less the same everywhere with minimal
             | friction without having to learn or recall the particulars
             | of whatever distro (hell, on my home server it even saves
             | me from having to fuck with systemd) and with isolation so
             | I can run a bunch of versions of anything, I'll probably
             | just stop using it.
             | 
             | Everything else about it is secondary to its role as the
             | _de facto_ universal package manager for open source server
             | software, from my perspective.
             | 
              | ... of course, this is exactly the kind of thing they
              | _don't_ want, because it costs money without making
              | any--but I
             | do wonder if this'll bite them in the ass, long-term, from
             | loss of mindshare. Maybe building in some kind of
             | transparent bandwidth-sharing scheme (bittorrent/DHT or
             | whatever) would have been a better move. I'd enable it on
             | my server at home, at least, provided I could easily set
             | some limits to keep it from going _too_ nuts.
        
             | justinclift wrote:
             | While that's true, for the amount of network traffic
             | they're likely moving around, I wonder where they're
             | placing their servers.
             | 
             | eg something like AWS with massive data transfer costs, vs
             | something else like carefully placed dedicated/colocation
             | servers at places which don't charge for bandwidth
        
               | yamtaddle wrote:
               | If it's AWS, they've surely got a huge discount. No way
               | they're paying 8+x normal big-fish CDN rates for
               | transfer. At their scale, it would have easily been worth
               | the effort to move to something cheaper than AWS long
               | ago, or else to negotiate a far lower rate.
        
               | nickstinemates wrote:
                | It is on S3.
                | 
                |     keeb@hancock [/home/keeb] > dig +short hub.docker.com
                |     elb-default.us-east-1.aws.dckr.io.
                |     prodextdefblue-1cc5ls33lft-b42d79a68e9f190c.elb.us-east-1.amazonaws.com.
        
               | justinclift wrote:
               | > No way they're paying 8+x normal big-fish CDN rates for
               | transfer.
               | 
                | While you're _probably_ right, I've seen dumber things
               | happen so I wouldn't completely rule out the possibility.
               | :wink:
        
           | sofixa wrote:
            | How is a tool developed and strongly pushed by a
            | corporation (to the point of strongarming customers into
            | transitioning to it, lacking features be damned),
            | especially one owned by IBM, filling the role of a
            | coop-developed tool?
        
           | quaintdev wrote:
            | Podman is great and is a first-class citizen on Fedora. It
            | also integrates nicely with SystemD. My only gripe with it
            | is that not many developers provide podman configuration on
            | their install pages like they do with docker compose.
        
             | regularfry wrote:
             | I'm using docker-compose with a podman VM for development
             | on a mac. Works ok so far. It wasn't _quite_ slick enough
             | when Docker pulled the licence switch last year, but the
             | experience in the last couple of months has been pretty
             | painless.
        
             | majewsky wrote:
             | Tangent: Why is the misspelling "SystemD" so common, when
             | it has always been "systemd"? I would understand "Systemd"
             | or "SYSTEMD" or something, but why specifically this weird
             | spelling?
        
               | kachnuv_ocasek wrote:
                | I've always thought of it as an analogy to System V.
        
               | cosmojg wrote:
               | Nah, it's French.
               | 
               | > System D is a manner of responding to challenges that
               | require one to have the ability to think quickly, to
               | adapt, and to improvise when getting a job done.
               | 
                | > The term is a direct translation of French Système D.
                | The letter D refers to any one of the French nouns
                | débrouille, débrouillardise or démerde (French slang).
                | The verbs se débrouiller and se démerder mean to make do,
                | to manage, especially in an adverse situation. Basically,
                | it refers to one's ability and need to be resourceful.
               | 
               | Source: https://en.wikipedia.org/wiki/System_D
        
               | yrro wrote:
               | Interestingly, https://www.freedesktop.org/wiki/Software/
               | systemd/#spelling says...
               | 
               | > But then again, if [calling it systemd] appears too
               | simple to you, call it (but never spell it!) System Five
               | Hundred since D is the roman numeral for 500 (this also
               | clarifies the relation to System V, right?).
        
               | robohoe wrote:
               | Probably to specifically call it out as "systemd" versus
               | autocorrected misspelling of "systems".
        
               | danieldk wrote:
               | People not familiar with tacking on a lowercased 'd' to
               | the name for daemons?
        
               | misterS wrote:
               | Instinctively applying Pascal case, maybe?
        
             | yrro wrote:
             | Fortunately you can use docker-compose with Podman these
             | days.
             | 
             | (There have been a few false starts so I'm specifically
             | referring to the vanilla unmodified docker-compose that
             | makes Docker API calls to a UNIX socket which Podman can
             | listen to).
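Concretely, the socket setup being described looks something like this (a sketch; it assumes a systemd user session with podman installed, so the actual systemctl/docker-compose calls are left as comments):

```shell
# systemctl --user enable --now podman.socket
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
# docker-compose up -d   # vanilla docker-compose, talking to Podman
echo "$DOCKER_HOST"
```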
        
         | londons_explore wrote:
         | Docker should have been a neat tool made by one enthusiast,
         | just like curl is.
         | 
         | Instead it has a multi-million dollar company behind it, and
         | VC's who demand profits from a thing that shouldn't have ever
         | had a business plan.
        
           | ihucos wrote:
            | Coming through: https://github.com/ihucos/plash 90% done and
           | useful
        
           | 0xbadcafebee wrote:
           | Docker was created by dotCloud for a different purpose than
           | it ended up as. I think they are owed credit for what has
           | become an incredibly elegant solution to many problems, and
           | how great the user experience has always been.
           | 
           | Compare it to other corporate-managed tools like Terraform
           | and Ansible. Both of them have _horrible_ UX and really bad
            | design decisions. Both make me hate doing my job, yet you
            | can't _not_ use them because they're so popular your company
           | will standardize on them anyway. Docker, on the other hand,
           | is a relative joy to use. It remains simple, intuitive,
           | effective, and full of features, yet never seems to suffer
           | from bloat. It just works well, on all platforms. There were
           | a few years of pain on different platforms, but now it's rock
           | solid.
           | 
           | And to be fair to them, their Moby project is pretty solidly
           | open-source, and if Docker Inc dies, the project will
           | continue.
        
           | voytec wrote:
           | > Docker should have been a neat tool made by one enthusiast,
           | just like curl is.
           | 
           | I have nothing but mad respect for Daniel Stenberg. 25 years
           | of development of great software, for which he had been
           | threatened[1] and had ridiculous US travel visa obtaining
           | issues[2].
           | 
           | [1] https://daniel.haxx.se/blog/2021/02/19/i-will-slaughter-
           | you/
           | 
           | [1] https://news.ycombinator.com/item?id=26192025
           | 
           | [2] https://daniel.haxx.se/blog/2020/11/09/a-us-visa-
           | in-937-days...
        
             | [deleted]
        
             | citizenpaul wrote:
              | There are lots of high-functioning but harmless crazy
              | people out there. I used to work a government job, and
              | I found one of the most common tells was exactly what this
              | "slaughter" person did. They love to list dozens of
              | agencies to you for no reason. They have no authority, so
              | they hope they can borrow it from your fear of a random
              | place. I cannot tell you how many emails/calls I have
              | received that fit this pattern, dozens at least.
             | 
             | >I have talked to now: FBI FBI Regional, VA, VA OIG, FCC,
             | SEC, NSA, DOH, GSA, DOI, CIA, CFPB, HUD, MS, Convercent
             | 
              | Bonus tell: they also love to say they are a doctor or
              | have a PhD in something, often PhDs in multiple subjects.
        
               | jamal-kumar wrote:
                | I remember someone abusing a ticketing system I had to
                | work with for reporting technical issues with a vast
                | computer network: they raised a ticket with an attachment
                | from some absolute nutcase, a multicolored, heavily
                | underlined .RTF just as you described, with "hate mail"
                | in the subject line, and the ticket got closed as "not
                | hate mail". Still makes me chuckle every time I think
                | about it.
        
             | elbigbad wrote:
             | I read the travel issues post you linked, but am not seeing
             | the causal link you're drawing between development of
             | software and visa issues. Was there more to the story?
        
               | voytec wrote:
                | I may have misremembered which post it was.
               | Here[1], in the paragraph titled "Why they deny me?"
               | (unlinkable), Daniel hints at the possibility that this
               | may have been due to development of (lib)curl which is
               | used for malware creation by 3rd parties. There was no
               | proof though.
               | 
               | [1]
               | https://daniel.haxx.se/blog/2018/07/28/administrative-
               | purgat...
        
               | kzrdude wrote:
               | The most superficial (and likely) reason to me seems to
               | be that he uses haxx.se. I really wonder what kind of
               | investigation they do. If they just start with Google,
               | this one might come up immediately.
        
               | elbigbad wrote:
               | Ah, that makes sense. I have no dog in the fight and am
               | far from the emotion of having a visa delayed in this
               | circumstance. I would say that it was much more likely to
               | be some level of incompetence than malice, having dealt
               | with large government bureaucracies myself.
        
             | darkwater wrote:
             | > [1] https://daniel.haxx.se/blog/2021/02/19/i-will-
             | slaughter-you/
             | 
              | Wow, that's clearly someone with serious mental issues
              | :( I hope he can find some help for his condition.
        
               | nine_k wrote:
               | Maybe it's a good thing that the guy affected hasn't been
               | awarded the defense contract as a result.
        
               | voytec wrote:
                | People suffering from psychosis can create "facts"
                | supporting their ideas and believe in them. Usually
                | it's stuff like "someone follows me" or "someone wants
                | to hurt me". Psychosis is the entry point to
                | schizophrenia, which is, more or less, an illness in
                | which the brain makes stuff up and the ill person
                | cannot differentiate facts from hallucinations.
               | 
               | Possibly there was no defense contract at all.
        
               | IncRnd wrote:
                | That sounds very much like how ChatGPT acts.
        
               | yreg wrote:
                | Is it? GPT just hallucinates the next words in a given
                | text.
        
               | IncRnd wrote:
               | Of course it is. What in the parent's post is different
               | from that? The parent post's first sentence is, "People
               | suffering from psychosis can create 'facts' supporting
               | their ideas and believe in them."
        
               | wpietri wrote:
               | It's not just people suffering from psychosis who do
               | that.
               | 
               | "29% believe aliens exist and 21% believe a UFO crashed
               | at Roswell in 1947. [...] 5% of respondents believe that
               | Paul McCartney died and was secretly replaced in the
               | Beatles in 1966, and just 4% believe shape-shifting
               | reptilian people control our world by taking on human
               | form and gaining power. 7% of voters think the moon
               | landing was fake." --
               | https://www.publicpolicypolling.com/wp-
               | content/uploads/2017/...
               | 
               | "Belief in both ghosts and U.F.O's has increased slightly
               | since October 2007, by two and five percentage points,
               | respectively. Men are more likely than women to believe
               | in U.F.Os (43% men, 35% women), while women are more
               | likely to believe in ghosts (41% women, 32% men) and
               | spells or witchcraft (26% women, 15% men)." --
               | https://www.ipsos.com/en-us/news-polls/belief-in-
               | ghosts-2021
               | 
               | "A new Associated Press-GfK poll shows that 77 percent of
               | adults believe [angels] are real. [...] belief in angels
               | is fairly widespread even among the less religious. A
               | majority of non-Christians think angels exist, as do more
               | than 4 in 10 of those who never attend religious
               | services." -- https://www.cbsnews.com/news/poll-
               | nearly-8-in-10-americans-b...
        
               | cma wrote:
               | "Belief" or stated belief to an anon survey?
        
               | ridgered4 wrote:
                | The other day someone mentioned that these surveys
                | consistently have about a 5% troll rate.
               | 
                | The 77% belief in angels is bizarre though. I believe
                | in the possibility of aliens; the universe is quite
                | large. Although I think all spacecraft sightings are
                | almost certainly just mundane stuff, from spy planes
                | to weather balloons, etc. I even believe in the
                | possibility of ghosts being real, or more likely some
                | strange phenomenon we can't explain that we might
                | misidentify as ghosts. But angels?
               | 
               | One man's angel is another man's ghost or alien though I
               | guess.
        
               | bombolo wrote:
               | If we go that way, a lot more believe god exists!
        
               | GuB-42 wrote:
                | If you have your name all over the place, I guess it's
                | bound to happen eventually. Curl is used by millions of
                | people, which makes Daniel Stenberg kind of a
                | celebrity. With so many users, there have to be some
                | crazies like the "I will slaughter you" guy.
               | 
                | It must be a common occurrence among famous software
                | people; I wonder how they deal with it. Do they
                | actively hide their real identity, for example by using
                | a proxy for licensing? Do they just ignore such
                | madness? Is it a burden, or on the contrary, do they
                | enjoy their fame?
        
               | vxNsr wrote:
               | Yea schizophrenia is no joke. Even the follow up apology
               | makes it clear he hasn't recovered.
        
               | milsorgen wrote:
               | The Terry A. Davis reference was bemusing.
        
               | boredumb wrote:
                | In the PDF there's a mention of Terry Davis, so I'm
                | tempted to think this is actually a bit of a troll.
        
               | striking wrote:
               | That PDF links to https://web.archive.org/web/20210223111
               | 850/https://www.nerve.... Would be quite the troll to go
               | to the effort of buying a domain just to mess with an
               | open source author.
        
               | boredumb wrote:
                | You're probably right - I assumed the name Terry Davis
                | being embedded in an email following a schizophrenic
                | rant about software was a ruse.
        
               | monetus wrote:
               | I think it was genuine admiration, at least that is how I
               | took it.
        
               | archon810 wrote:
               | https://daniel.haxx.se/blog/2021/08/09/nocais-apology/
               | allegedly schizophrenia.
        
             | shubb wrote:
             | Replying to child I can't reply to?
             | 
             | There was a period where the US was treating public key
             | encryption like arms exports, and involved in spreading the
             | technology outside the US as tools were in us.govs sht list
        
               | Clubber wrote:
               | https://en.wikipedia.org/wiki/Phil_Zimmermann
               | 
               |  _After a report from RSA Security, who were in a
               | licensing dispute with regard to the use of the RSA
               | algorithm in PGP, the United States Customs Service
               | started a criminal investigation of Zimmermann, for
               | allegedly violating the Arms Export Control Act.[5] The
               | United States Government had long regarded cryptographic
               | software as a munition, and thus subject to arms
               | trafficking export controls. At that time, PGP was
                | considered to be impermissible ("high-strength") for
               | export from the United States. The maximum strength
               | allowed for legal export has since been raised and now
               | allows PGP to be exported. The investigation lasted three
               | years, but was finally dropped without filing charges
               | after MIT Press published the source code of PGP_
               | 
               | They tried to ruin the man.
        
               | ncphil wrote:
               | Because he was competing with a private military
               | contractor, and the US government is a wholly owned
               | subsidiary of the MIC: or often acts like it is. Customs
               | should have told RSA "no", "this is a private contract
               | dispute", "hire a lawyer and file suit". Of course it was
                | much more than that. Zimmermann put real privacy
               | protecting encryption in the hands of the public, and the
               | Many Eyes (that included state allies and adversaries)
               | couldn't have that. But they needn't have worried:
               | decades on the public is still ignorant about encryption,
               | except as a marketing term, and most have no idea what a
               | key pair is or what to do with it. Fraud around
               | unauthorized access to government and commercial accounts
               | is rampant (you _have_ set up and secured your online
               | identity on your government's social security and revenue
               | collection sites, haven't you?). That could have been
               | prevented by early adoption and distribution of key
               | pairs, alongside a serious public education campaign.
               | Problem is, that would be at cross purposes with the goal
               | of keeping the public uneducatable. Better for them to
               | while away their time watching cable TV or delving into
               | the latest conspiracy theory (pro or con).
        
             | t344344 wrote:
             | > ridiculous US travel visa obtaining issues
             | 
              | Ridiculous? This is a pretty common issue for anyone who
              | travels to the US. A visa may be denied for whatever
              | reason, and tough luck on appeal. I am an EU citizen and
              | had a similar experience just for visiting Iran on a
              | tourist trip. Don't even ask about guys from India,
              | Pakistan or less fortunate countries.
              | 
              | And it got even worse with the pandemic. The US required
              | vaccination for a very long time, long after it was
              | relevant. Maybe they still do; frankly, I don't care to
              | look at this point!
              | 
              | I think the biggest WTF here is why an international
              | organization like Mozilla is organizing a company-wide
              | meetup in the US, and not in a country with a liberal
              | visa entry policy such as Mexico!
        
               | jdxcode wrote:
               | Ridiculous does not mean uncommon. Situations can be both
               | common and ridiculous (absurd).
               | 
               | I'm American, but I have enough friends and family from
               | other countries (my wife is an Iranian passport holder)
               | to know what you're talking about and how difficult it
               | can be.
        
               | voytec wrote:
               | I applied for US travel visa as a citizen of Poland in
               | 2012 and was denied travel due to "wrong type of visa". I
               | was planning to visit my employer and spend 1-2 weeks
               | traveling across the country. Apparently both business
               | and travel visas were inappropriate for these purposes.
               | To add, I was questioned in a US consulate/embassy (can't
               | remember) in Warsaw by a person who repeatedly refused to
               | speak in English, insisted on Polish and I, as a native
               | Polish speaker, had issues understanding them. Poor
               | experience.
               | 
                | This was not the case for Swedish citizens, as
                | mentioned at the beginning of Daniel's linked post.
                | Sweden is part of the US Visa Waiver Program
                | (ESTA[1]), and Daniel traveled to the US multiple
                | times before being denied travel (with a still-valid
                | ESTA) and only then applied for a visa.
               | 
               | [1] https://esta.cbp.dhs.gov/
        
               | pshirshov wrote:
               | I believe that B1/B2 should work just fine for these
               | purposes.
               | 
               | Probably you answered an officer (or airline worker) that
               | you were gonna "work" there, not just visit your employer
               | for an event?
        
               | voytec wrote:
               | Absolutely not. I had, and still have, my own small
               | business in Poland and I was clear (in writing) that I am
               | planning to visit my main client.
        
               | kobalsky wrote:
               | You mentioned both employer and client, are they the
               | same?
        
             | comprev wrote:
             | I consider him equally important to people like Tim
             | Berners-Lee for building the foundation of the web.
        
           | kajaktum wrote:
           | > Instead it has a multi-million dollar company behind it,
           | and VC's who demand profits from a thing that shouldn't have
           | ever had a business plan.
           | 
            | But you don't have to host curl? Who's gonna put up the
            | money to host all the images and bandwidth that tens of
            | thousands of companies use but never pay for?
        
             | lyind wrote:
             | THIS!
             | 
             | Alternatives:
             | 
             | - Virtual registry that builds and caches image chains on-
             | demand, locally
             | 
              | - Maybe a free protocol like BitTorrent to store and
              | transfer the images
        
             | londons_explore wrote:
             | It could have been designed with a self-host option or a
             | torrent/ipfs backend for near-zero hosting costs and still
             | be 'just works' for the user.
        
               | bmurphy1976 wrote:
               | Even dumber, it should have just been pointers to
               | encrypted files hosted on any arbitrary web server.
        
               | npn wrote:
               | pretty sure you haven't used ipfs before.
               | 
               | for users to download resources from ifps you either need
               | to install the client (which quite resource intensive) or
               | use the gateways (which is just a server and cost money
               | to run).
               | 
               | also the speed and reliability are nowhere good enough
               | for serious works.
        
           | wvh wrote:
           | In all fairness, curl is purely a software tool. Docker is
           | arguably more like a service. As such, it creates costs for
           | and direct dependency on the entity behind it.
        
             | bryanlarsen wrote:
             | Docker is a software tool. Docker Hub is a service. If
             | Docker didn't stand up Docker Hub the equivalent services
             | from GitHub, Google et al would have competed on a more
             | even playing field.
        
               | jehb wrote:
                | It's almost like they created intentional ambiguity
                | here when they renamed the company (dotCloud) to match
                | the name of the open source tool, then renamed the
                | open source project behind the tool to something else
                | (Moby), but kept the name for the command line tool,
                | while also working the name Docker into their product
                | offerings, including Engine and Desktop, which handle
                | completely different parts of managing containers.
                | That's not even including registries, Dockerfiles,
                | Compose, Swarm, etc., and the ambiguity around where
                | those sit in the Venn diagram.
                | 
                | That's some Google-level naming strategy there.
        
             | mikepurvis wrote:
              | Lots of orgs figure out how to piggyback the "service"
              | part of whatever they're doing on free or sponsored
              | infrastructure, though. Homebrew, for example, has been
              | doing a lot of the same stuff on Travis and GH Actions
              | since forever.
        
           | jzb wrote:
           | Indeed. Docker should've been plumbing. They could've had a
           | really nice business with developer tools on top of the core
           | bits, but they decided to try to jump straight to enterprise
           | and did a number of things to alienate partners and their
           | broader community.
           | 
            | Instead of adding value to Docker they're just trying to
            | find the right knobs to twist to force people into paying.
            | And I think people _should_ pay for value when they're
            | using Docker substantially for business. But it seems like
            | a very short-sighted play for cash, disregarding their
            | long-term relationship with users and customers.
           | 
           | All that said: They have to find revenue to continue
           | development of all the things people do like. I'd encourage
           | people to ask if the things they've gotten for free do in
           | fact have value, and if that's the case, maybe disregard the
           | ham-fistedness and pony up if possible.
        
           | steveBK123 wrote:
            | It seems like another symptom of ZIRP/cheap money.
            | 
            | Lots of ideas that could have been a neat feature or tool
            | somehow ended up raising $500M of funding with no viable
            | plan for ever monetizing.
            | 
            | The fact that the product is successful but after a decade
            | they barely make $50M/year of revenue against $500M of
            | lifetime funding is crazy. As a user, you can work at a
            | company with a billion in revenue and barely owe them a
            | few thousand/year. Or you might just use Podman for free,
            | and prefer it due to some of the design differences.
           | 
           | At the very least, a lot of these firms, with VC pressure,
           | overstayed their welcome as private enterprises and should
           | have sold themselves to a larger firm.
        
             | OnuRC wrote:
              | Do you also mean things like Uber, with double-digit $B
              | lost and no road to profitability? I agree.
        
               | 1propionyl wrote:
               | And Lyft... and Doordash... and GrubHub...
               | 
               | Pretty much the entire "gig economy" is full of hot air
               | and survives on regular influxes of VC money despite
               | massive losses every year. The business model doesn't
               | frickin work.
               | 
               | The hope from investors was that they would be investing
               | into what would ultimately become a monopoly that could
               | extract rents to repay them (not very competitive market
               | of them, but that's tipping the hat a little isn't it...)
               | but the funny bit is there's like 5-7 competitors in the
               | US alone doing the same thing.
               | 
               | Here's a take: maybe this is just a natural monopoly
               | situation, and if we like the convenience of gig delivery
               | but don't like the high prices per order or that gig
               | workers don't get sufficient pay, health insurance or
               | other benefits, how about we just nationalize it?
               | 
               | You know, the same way we did for everything that wasn't
               | food or groceries before? USPS Courier service sounds
               | like an idea to me.
        
             | [deleted]
        
             | bydlocoder wrote:
             | Some time ago I learned that Postman Labs that produces a
             | nice but not-a-rocket-science HTTP client raised $433M at
             | multi-billion valuation and has 500 employees. Isn't it
             | astonishing?
        
               | nine_k wrote:
                | Postman's strength is not in the HTTP client part. It
                | is in the SaaS part, and I think their valuation (even
                | though overblown) mostly reflects their corporate
                | penetration and the willingness of many companies to
                | pay a small amount for their services.
        
               | bydlocoder wrote:
               | The SaaS part being the offering for creating
               | developer.acme.com type pages?
        
               | nine_k wrote:
               | No.
               | 
               | Centralizing and sharing your API descriptions, test
               | suites and plans, the various ad-hoc queries people
               | usually keep in their notes or on Slack (and lose),
               | handling involved auth stuff which is a hassle with curl,
               | etc.
               | 
               | I think they gravitate towards the same area as
               | swagger.io or stoplight.io, but from the direction of
               | using the existing APIs.
        
               | bydlocoder wrote:
               | API schemas and test suites are usually stored as code in
               | some sort of SCM. I googled "postman maven" and "postman
               | gradle" and found nothing official so I guess they have
               | nothing except stand-alone workspaces.
               | 
                | An API registry is a useful tool given the modern love
                | for nanoservices, when a team of five somehow manages
                | ten of them, but I don't see anything similar done by
                | Postman. Two of the service registries I know of were
                | implemented in-house for obvious reasons.
        
           | wslh wrote:
            | Yes, it was obvious even when it launched, because they
            | packaged and configured existing solutions. It was like
            | having a company behind 'ls' (irony).
        
           | streetcat1 wrote:
            | What do you mean, "demand profit"?
            | 
            | Last time I checked, rent is not free, food is not free, a
            | bus ticket is not free. No reason why software should be
            | free.
            | 
            | Open source was invented by big co as a "marginalized
            | your complement" strategy, not the ideal that it is
            | marketed as. As evidence, I do not see any cloud vendor
            | open sourcing their code.
        
             | vorpalhex wrote:
             | > Open source was invented by big co as a "marginalized
             | your complement" strategy, not the ideal that is marketed
             | as.
             | 
             | > In 1983, Richard Stallman launched the GNU Project to
             | write a complete operating system free from constraints on
             | use of its source code. Particular incidents that motivated
             | this include a case where an annoying printer couldn't be
             | fixed because the source code was withheld from users.
             | 
             | from
             | https://en.m.wikipedia.org/wiki/History_of_free_and_open-
             | sou...
             | 
             | > Last time I check rent is not free, food is not free, bus
             | ticket is not free. No reason why software should be free.
             | 
              | You are welcome to sell your software. You are welcome
              | to be replaced if you can't compete. You don't have to
              | sell your software and we don't have to buy it. You can
              | and will be competed with.
              | 
              | Trying to build a multimillion-dollar venture off a UI -
              | even a good UI - is probably unwise. It does not seem to
              | be going well for Docker, which has gone from no
              | competitors to multiple, all of them open source.
        
               | IncRnd wrote:
               | From your very link, 1983's GNU Project was not the first
               | piece of Open Source software.
               | 
               | From your link: The first example of free and open-source
               | software is believed to be the A-2 system, developed at
               | the UNIVAC division of Remington Rand in 1953
        
               | vorpalhex wrote:
               | > Software was not considered copyrightable before the
               | 1974 US Commission on New Technological Uses of
               | Copyrighted Works (CONTU) decided that "computer
               | programs, to the extent that they embody an author's
               | original creation, are proper subject matter of
               | copyright"
               | 
                | FOSS before 1974 looks... funny. It existed! But it did
               | not look like the modern FOSS movement.
               | 
               | Even post 1974 and pre-GNU, FOSS-ish text editors and
               | such existed. This was still the era when licenses were
               | often non-standard and frequently did not exist. Handing
               | your friend a copy of a program was the norm, regardless
               | the actual legal situation (which itself was probably
               | vague and unspecified).
        
           | ollien wrote:
           | FWIW, Docker was not intended originally to be a tool for
           | commercialization; it grew out of dotCloud, which open
           | sourced the tool as a last-ditch-effort of sorts, if memory
           | serves.
        
           | aviramha wrote:
           | I think that Docker can have a viable business plan but they
           | had terrible execution. At my previous position, I wanted to
            | use DockerHub more heavily, but the entire experience was
            | like a bootstrap project someone did as a university
            | assignment. Many advanced features for organizations
            | (SSO/SAML) that we would have happily paid for were
            | missing.
        
             | MandieD wrote:
             | That, plus not being willing to accept Purchase Orders,
             | doomed them with my employer.
             | 
             | It's as if they had no idea how things work at large
             | enterprises that are older than most Docker employees.
        
           | codexb wrote:
           | Yeah, but curl is used to access and download all sorts of
           | data, which are all hosted by multi-million dollar companies.
           | Just like git downloads and uploads data to git repositories.
           | curl and git are valuable, but so is GitHub, and websites in
           | general. The problem is that they haven't found a way to
           | monetize docker hub.
        
           | pydry wrote:
           | I think they potentially could have made a decent business
           | out of it but they made a lot of bad business decisions.
           | 
           | I find myself shaking my head at a lot of their technical
           | decisions too.
           | 
           | Podman seems to me to be a case study for how to do this
           | right.
        
             | aviramha wrote:
             | I am usually an early adopter but I keep coming back to
             | Docker since Podman is still very rough around the edges,
             | especially in terms of "non-direct" usage (aka other tools)
        
               | throwawaymanbot wrote:
               | [dead]
        
               | vladvasiliu wrote:
                | As someone who's been bitten by this, I'm not sure
                | it's an issue with podman itself so much as with the
                | tools that expect docker. It could be argued that
                | podman is not a drop-in docker replacement, but I
                | expect more and more tools to pick it up.
        
               | Kye wrote:
               | Is this a matter of developers constantly relearning the
               | lesson of the folly of only supporting the current top
               | thing, or is it a lot harder to support more than one?
        
               | orthoxerox wrote:
               | The devil is in the details. For example, docker has a
               | DNS service at 127.0.0.11 it injects into
               | /etc/resolv.conf, while podman injects domain names of
               | your network into /etc/hosts. Nginx requires a real DNS
               | service for its `resolver` configuration.
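A hedged sketch of why that difference bites: nginx's `resolver` directive performs actual DNS queries and never consults /etc/hosts, so runtime name resolution that works against Docker's embedded DNS has nothing to query under podman's /etc/hosts injection ("backend" below is a hypothetical container name):

```nginx
server {
    # Docker's embedded DNS server; under podman there is no
    # equivalent server at this address to point at.
    resolver 127.0.0.11 valid=10s;

    location / {
        # Using a variable forces nginx to resolve "backend" at
        # request time via `resolver` -- /etc/hosts is never read.
        set $upstream http://backend:8080;
        proxy_pass $upstream;
    }
}
```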
        
               | vladvasiliu wrote:
                | I don't know how "hard" it is, but in my particular
                | case I wanted to use this from IntelliJ. It actually
                | works, but the issue is that the docker emulation
                | socket isn't where the IDE expects it, and I haven't
                | found a way to tell it where to look.
                | 
                | Once I symlinked the socket, everything worked.
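For reference, a sketch of that kind of workaround, assuming rootless podman's default socket path (`podman info` reports the actual location on a given system):

```shell
# Expose podman's Docker-compatible API over a socket (rootless).
systemctl --user enable --now podman.socket

# Either point Docker-expecting tools at it via the standard
# environment variable...
export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock

# ...or symlink it where tools look for the Docker socket
# (needs root).
sudo ln -sf /run/user/$UID/podman/podman.sock /var/run/docker.sock
```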
        
               | yrro wrote:
               | This worked for me:
               | 
               | Connect to Docker Daemon with -> TCP Socket -> Engine API
               | URL -> unix:///run/user/$UID/podman/podman.sock
        
               | laserlight wrote:
               | > It could be argued that podman is not a docker drop-in
               | replacement
               | 
               | This is an unfortunate part IMHO. podman is not a docker
               | drop-in replacement, but it is advertised as such.
        
               | evilduck wrote:
                | Besides the advertising, it's _very close_ to being a
                | drop-in replacement, but their pace isn't closing that
                | gap quickly enough (or maybe they don't want to, or it
                | isn't possible, idk, I'm just a user), so you get a
                | false sense of confidence doing the simple stuff
                | before you run into a parity problem.
        
               | actionfromafar wrote:
               | Worth remembering is that Docker supports Windows
               | containers. That's a hard requirement for many
               | enterprises.
        
             | windexh8er wrote:
             | Podman is interesting. I like the architecture problems it
             | solves with respect to Docker but the way they went about
             | it was typical big business Red Hat. Dan Walsh, Podman's
             | BDFL it seems, basically stood in front of RHEL / OpenShift
             | customers for years bashing Docker even when a majority of
             | the things he was claiming were less than half baked. RHEL
             | made sly moves like not supporting the Docker runtime, even
             | at a time when it put their customers in an awkward spot
             | before containerd won the k8s runtime war. Podman is backed
             | by much larger corporate machinery. If anyone thinks that
             | Podman "winning" is a good thing then you've played right
             | into Walsh's antics. RHEL wants nothing more than to have
             | no friction when selling all the "open source" tooling you
             | may need in your enterprise.
             | 
             | Podman wasn't built out of necessity but out of fiscal
             | competitive maneuvering. And it's working. I see so many
             | articles on the "risks" of Docker vs Podman. The root wars
             | are all over the place. Yet... The topic is blown way out
             | of proportion by RHEL for a reason: FUD all in the name of
             | sales. Is there merit to the claim? For sure. Docker's
             | architecture was originally built up as client/server for a
             | different purpose. That didn't play out and the
             | architecture ended up being a side effect of that. But we
             | don't see container escape nearly as much as Red Hat would
             | like us to believe. I keep paying Docker because I don't
             | want to live in Red Hat's world, with their tooling that
             | they can just lock out of other platforms once they feel
             | like it. No thanks.
        
               | nine_k wrote:
               | You may have a healthy dislike for the corporate behemoth
               | that is RH / IBM, but, to my mind, Docker, Inc is worse:
               | they keep more things closed, and they literally pressure
               | for money.
               | 
               | I mean, I wish guys like FSF would have produced a viable
               | Docker alternative, but this hasn't happened, at least
               | yet.
        
               | javajosh wrote:
               | _> I don't want to live in Red Hat's world, with their
               | tooling that they can just lock out of other platforms
               | once they feel like it_
               | 
               | Explain please. This sounds like you're accusing RH of
               | sabotaging Docker, or planning to. That's a very serious
               | accusation requiring proof.
        
               | mikepurvis wrote:
               | Some of it also sounds a bit like leftover angst from Red
               | Hat winning the systemd war too.
               | 
               | Turns out hanging out in someone else's cathedral can
               | have some pretty big benefits.
        
               | Gibheer wrote:
               | RedHat has not won any systemd war. From all the
               | distributions out there using systemd, RedHat is the one
               | that uses the least amount of systemd features. They are
               | even going so far as disabling features.
               | 
               | See:
               | * https://bugzilla.redhat.com/show_bug.cgi?id=1962257
               | * https://gitlab.com/redhat/centos-stream/rpms/systemd/-/blob/...
               | 
               | Sometimes they even backport systemd features from more
               | recent versions, disable them but leave man pages in the
               | original state. Even the /usr split isn't progressing at
               | all.
               | 
               | Meanwhile Fedora has implemented all these changes, which
               | according to
               | https://www.redhat.com/en/topics/linux/what-is-centos-stream,
               | should be the upstream for CentOS.
               | 
               | I would say RedHat dropped the ball on systemd and has no
               | intention of supporting any of the new features in any of
               | their systems.
        
               | dralley wrote:
               | Those are not "systemd features", they are components
               | within the systemd suite. Using systemd-init has never
               | required that you use every component within the systemd
               | suite (e.g. ntp, network management, etc.)
        
               | yrro wrote:
               | I too find Red Hat's poor documentation hygiene a pain in
               | the arse. But as for the disabled system features, I
               | think that they all fall into the category of
               | experimental/unproven sort of features that overlap with
               | other existing RHEL components. Every enabled feature has
               | a cost in the form of support burden.
        
               | antimba wrote:
               | Podman winning is good. Red Hat consistently does things
               | right, for example their quay.io is open source, unlike
               | Docker Hub and GitHub Container Registry. The risks of
               | not using rootless containers weren't blown way out of
               | proportion, because rootless containers really are much
               | more secure. Not requiring a daemon, supporting cgroup v2
               | early, supporting rootless containers well and having
               | them as the default, these are all good engineering
               | decisions with big benefits for the users. In this and
               | many other things, Red Hat eventually wins because they
               | are more open-source friendly and because they hire
               | better developers who make better engineering decisions.
        
               | sofixa wrote:
               | > In this and many other things, Red Hat eventually wins
               | because they are more open-source friendly and because
               | they hire better developers who make better engineering
               | decisions.
               | 
               | We must be talking about a different Red Hat here.
               | Podman, with breaking changes in every version, that is
               | supposedly feature and CLI complete with Docker, but
               | isn't actually, is winning because it's more open source
               | friendly or better technically? Or systemd, written in a
               | memory unsafe language (yes, that is a problem for
               | something so critical and was already exploited at least
               | a couple of times), using a weird special format for its
               | configuration, where the lead dev insults people and
               | refuses to backport patches (no, updating systemd isn't a
               | good idea) won "because it was more open source
               | friendly"? Or OpenShift that tries to supplant Kubernetes
               | stuff with Red Hat specific stuff that doesn't work in
               | some cases (e.g. TCP IngressRoutes lack many features),
               | is winning "because it was more open source friendly"?
               | 
               | No, Red Hat are just good at marketing, are an
               | established name, and know how to push their
               | products/projects well, even if they're not good or even
               | ready (Podman is barely ready but has been pushed _for
               | years_ by this point).
        
               | dralley wrote:
               | >Or systemd, written in a memory unsafe language (yes,
               | that is a problem for something so critical and was
               | already exploited at least a couple of times)
               | 
               | What memory safe language 1) existed in 2010 and 2) is
               | thoroughly portable to every architecture people commonly
               | run Linux on and 3) is suitable for software as low-level
               | as the init?
               | 
               | Rust is an option _now_ but it wasn't back then. And
               | Rust is being evaluated now, even though it's not quite
               | ready yet on #2.
        
               | yjftsjthsd-h wrote:
               | There's Ada.
        
               | dralley wrote:
               | Ada has no ecosystem, and a lot of the ecosystem that
               | does exist is proprietary, and it brings us back to point
               | #2.
        
               | yjftsjthsd-h wrote:
               | > Ada has no ecosystem, and a lot of the ecosystem that
               | does exist is proprietary,
               | 
               | Not _no_ ecosystem, but yes it's way smaller... probably
               | even smaller than Rust, yes.
               | 
               | > and it brings us back to point #2.
               | 
               | I seriously doubt it. Ada is supported directly in gcc;
               | why would it have any worse platform coverage than
               | anything else?
        
               | dralley wrote:
               | OTOH, Docker didn't want to support a lot of features
               | that enterprise customers wanted, like self-hosted
               | private registries, because they wanted people using
               | Dockerhub.
               | 
               | And wasn't the runtime problems because Docker was very
               | very late to adopting CGroups v2?
        
               | cozzyd wrote:
               | Yes cgroupsv2 was a big problem for docker on EL8 for a
               | long time.
        
               | freedomben wrote:
               | Yes exactly. GP is misinformed on history. Red hat didn't
               | sabotage anything. Docker took forever to update to
               | cgroups V2, and that broke it for distros like fedora
               | that are up to date. The user had to downgrade their
               | kernel in order to use docker, but if they did everything
               | else worked fine.
        
               | pydry wrote:
               | >Podman is backed by much larger corporate machinery. If
               | anyone thinks that Podman "winning" is a good thing then
               | you've played right in to Walsh's antics.
               | 
               | I'm not making a moral judgement. I'm just saying that
               | docker had serious technical problems and docker the
               | business _sucked_ at monetizing it.
               | 
               |  _Docker_ played into Red Hat's tactics. I've never
               | heard of Dan Walsh and frankly, I've wanted rootless
               | containers for years before I ever heard of podman.
               | 
               | >Podman wasn't built out of necessity but out of fiscal
               | competitive maneuvering.
               | 
               | Because Red Hat is a business, not a charity.
               | 
               | I doubt they would have built a better docker if docker
               | wasn't refusing to improve.
        
               | mmcdermott wrote:
               | I first found Podman when looking for alternatives when
               | Docker broke on my laptop in the midst of all the Docker
               | Desktop licensing changes. Frankly, I use it because it
               | has been more stable lately, not because of any long run
               | marketing campaign from Red Hat. I suspect a lot of its
               | userbase will be in a similar place as the experience
               | with Docker continues to degrade.
        
           | waynesonfire wrote:
           | Yeah! I should be able to get 50x value from software and not
           | pay for it /s
           | 
           | The open source community that carried Docker on its back
           | is now bending over. Let this be a lesson to you. If you're
           | building open source, maybe stick to open source solutions
           | in your tech stack, and if they're not there, build them.
           | This is what Apache does for the Java ecosystem.
           | 
           | I don't have sympathy, the writing was on the wall and this
           | isn't the first time it's happened to the community.
        
           | krmboya wrote:
           | The VCs offered free bandwidth and storage to gain market
           | share.
           | 
           | Bandwidth and storage are ultimately not free; they have to
           | be paid for.
        
         | osigurdson wrote:
         | I'd like to see Docker succeed. They invented / formalized the
         | space and deserve credit for that. They are probably doing the
         | right thing with some of their development tooling (though
         | maybe that should just be spun off to Microsoft) and ensuring
         | images do not contain badware is something companies will pay
         | for.
         | 
         | However, their core offering must be the leader if they want to
         | survive. Devs must want to use "docker run" instead of "podman
         | run" for example. Docker needs to be the obvious #1 for
         | starting a container on a single machine.
        
           | wongarsu wrote:
           | > their core offering must be the leader if they want to
           | survive. Devs must want to use "docker run" instead of
           | "podman run"
           | 
           | If their core offering is container hosting, they should be
           | able to make a company out of that even without the client.
           | After all, JFrog and Cloudsmith are more or less just that,
           | as is GitHub.
        
           | gorjusborg wrote:
           | > I'd like to see Docker succeed. They invented / formalized
           | the space and deserve credit for that.
           | 
           | If by succeed, you mean they deserve to have revenue, I
           | disagree.
           | 
           | They spun some cool work out of dotCloud when it failed. They
           | seemed to delay thinking about how they'd monetize the work,
           | and sort of fell into charging for developer tooling after
           | their orchestration play lost to kubernetes.
           | 
           | At this point, I think of Docker the company as a wannabe
           | Oracle. They are desperate for money, and are hoping they can
           | fool you into adopting their tech so they can ransom it from
           | you once you rely on it. If that sounds appealing to you, I'd
           | say go for it.
           | 
           | For me, that situation seems worse than what I do without
           | containers at my disposal. In other words, the solution is
           | worse overall than the problem.
        
             | nickstinemates wrote:
             | Time for a https://github.com/google/lmctfy revival.
        
               | coxley wrote:
               | I mean, OCI and containerd exist. You can have "Docker"
               | containers without the Docker just fine. Just need to
               | replace the user tooling, which I assume podman does?
               | (never used it)
        
               | nickstinemates wrote:
               | Forgot the /s
        
         | JustSomeNobody wrote:
         | > Unfortunately Docker the company appears to be dying, this is
         | the latest in a long line of decisions that are clearly being
         | made because they can't work out how to build a business around
         | what is at it's core a nice UI for Linux containers.
         | 
         | It should have been just a small company, doing this, and
         | making some money for their trouble instead of whatever it is
         | they're trying to be.
        
         | hot_gril wrote:
         | First time I saw Docker, I thought "that's great, but how do
         | they make money?" They're selling a cloud containers service
         | while also giving the software away to their direct competitors
         | for free. Maybe I was too ignorant to understand their business
         | model? But now I'm thinking I was right.
        
         | voytec wrote:
         | > they can't work out how to build a business around what is at
         | it's core a nice UI for Linux containers.
         | 
         | It's quite a shame (for lack of better wording) that the
         | better, simpler and more intuitive a free product is, the
         | harder it is to make money from it by selling support.
         | 
         | I think the best way to go from here would be building
         | companion products and supporting the whole ecosystem. By
         | companion products, I mean other standalone apps/services,
         | not just a GUI for an existing one.
        
         | bydlocoder wrote:
         | A co-op formed by big 3 cloud providers flush with cash and put
         | in maintenance mode.
        
         | user3939382 wrote:
         | I'd like to see something resembling the Linux model. In the
         | case of Docker, a foundation built around a suite of open
         | source tools that's contributed to by pledges from all the big
         | companies that use the tool. Maybe that means podman has a
         | reliable source of funds for maintenance and improvement.
         | 
         | What I don't like is having these critical tools directly in
         | the hands of a single for-profit corporation, at least where it
         | can be avoided.
        
           | Pet_Ant wrote:
           | If we want that I feel like there should be a community buy
           | out. Just so that they have something to return to investors.
           | Just so that they have incentive to play nice and not Hudson-
           | up the process. You shouldn't be able to build a critical
           | piece of infrastructure and have nothing to show for it.
           | Community buy-out should be a viable exit plan.
        
         | pjmlp wrote:
         | It packaged containers that we already knew from other UNIX
         | and mainframe/micro systems.
        
           | never_inline wrote:
           | Sir computers nothing but we already known from Alen Turing.
        
         | rendaw wrote:
         | Why didn't Docker ever offer managed container hosting? That
         | seems like the obvious logical next step when you create a tool
         | for easy deploys. Instead it's 2023 and we finally get that
         | with Fly.io.
         | 
         | I must be missing something obvious, because otherwise I feel
         | like I'm going insane.
        
         | alerighi wrote:
         | To me it's the opposite: Docker promotes bad software
         | development practices that will hurt you in the end. In fact,
         | most of the time when you hear that you need Docker to run a
         | piece of software, it's because that software is so badly
         | written that installing it on a system is too complex.
         | 
         | Another bad use of Docker I've seen is when people cannot
         | figure out how to write systemd units, which is damn simple
         | (just spend a day reading the documentation and learning the
         | tools you need). That makes administering the system much
         | more complex, because you lose the benefits systemd gives you
         | (and then you start using hyper-overengineered tools like
         | Kubernetes just to run a webserver and a database...).
         | 
         | I'm maybe oldschool, but I use Docker as a last resort, and
         | prefer to have all the software installed properly on a
         | server, with Ansible as the configuration management tool. To
         | me a system that uses Docker containers is much more
         | difficult to manage in the long run, while a system that
         | doesn't is simpler: fewer things that will break, and if I
         | need to make a fix in 10 years I can ssh into the system,
         | edit the program with vim, rebuild, and restart the service,
         | with no complex deploy pipeline that breaks, depends on
         | external sources that may be taken down (as is the case
         | here), and similar stuff.
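
For reference, a systemd unit at the level the comment describes really is short. A minimal sketch (the unit name, binary path, and user below are all placeholders, not from the thread):

```shell
# /etc/systemd/system/myapp.service -- name, ExecStart path, and User
# are hypothetical; substitute your own service
sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
[Unit]
Description=My web application
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --port 8080
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now myapp.service
```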
        
           | orf wrote:
           | I think you are letting your specific feelings and head-canon
           | about purity be mistaken for solid technical arguments.
           | 
           | If you're sshing to boxes, editing things by hand and
           | slinging ad-hoc commands around then your frame of reference
           | is so far away from understanding its value proposition that
           | it's probably pointless to discuss it.
        
         | Iolaum wrote:
         | Does that mean it would be a good idea to start moving to the
         | podman ecosystem? RedHat/IBM seem to have this figured out
         | better.
         | 
         | I'm doing that personally, but I'm very hesitant about
         | mentioning it to $job.
        
         | AtlasBarfed wrote:
         | So.... podman?
         | 
         | No, I don't work for redhat. I'm glad a ... ?less? corporate
         | entity / ?more? open source entity has pretty much gotten a
         | replacement up.
        
         | pmarreck wrote:
         | > worlds better than the old ways of managing dependencies and
         | making sure everyone on a project is aligned on what versions
         | of things are installed.
         | 
         | And Nix is worlds better than even _this_. Imagine!
        
           | djbusby wrote:
           | How to run Nix on Gentoo and Debian?
        
             | pmarreck wrote:
             | https://trofi.github.io/posts/196-nix-on-gentoo-howto.html
             | for gentoo, but
             | 
             | https://nixos.org/download.html for any linux, AFAIK
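
Concretely, the generic install from that page is a one-liner, and per-project environments then work the same on any distro (the `hello` invocation is just a smoke test; flags reflect the installer as documented on nixos.org):

```shell
# multi-user install via the official installer
sh <(curl -L https://nixos.org/nix/install) --daemon

# afterwards, ad-hoc environments behave identically on Gentoo, Debian, etc.
nix-shell -p hello --run hello
```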
        
           | jon-wood wrote:
           | I'm yet to be entirely sold on that, mostly because I think
           | Nix the language isn't anywhere near as accessible as
           | Dockerfiles, but I'll be the first one cheering if Nix does
           | manage to take over.
        
             | pmarreck wrote:
             | Completely agree on the complexity criticism, but this
             | interactive tutorial (that literally embeds a full nix
             | interpreter in the browser) went a looooooong way towards
             | making Nix files not just look like arcane incantations to
             | me, and doesn't take very long to do:
             | 
             | https://nixcloud.io/tour/
             | 
             | if at some point you realize "oh... this is just JSON with
             | a different syntax, some shorthands, and anonymous or
             | library functions," you're on the right path
        
             | archseer wrote:
             | https://www.mpscholten.de/docker/2016/01/27/you-are-most-
             | lik...
        
           | Kinrany wrote:
           | Does Nix have an equivalent of docker-compose yet?
           | 
           | nix-shell is amazing for installing binaries, but actually
           | wiring up and running the services doesn't seem like a solved
           | problem.
           | 
           | Unless Nix expects a separate tool to do this once binaries
           | are installed, of course.
        
             | juliosueiras wrote:
             | https://github.com/hercules-ci/arion, which allows
             | docker-compose-style definitions
        
               | pmarreck wrote:
               | oooh, I did not know of this, nice!
        
             | pmarreck wrote:
             | docker-compose seems necessary only because you have your
             | "official postgres dockerfile" and your self-built "web app
             | dockerfile" (and maybe other things like an ElasticSearch
             | dockerfile)
             | 
             | Docker files seem necessary only because... well put it
             | this way, think of a Docker image as "the cached result of
             | a build that just so happened to succeed even though it was
             | entirely likely not to, because Docker builds are NOT
             | deterministic."
             | 
                | Now enter Nix, where builds are more or less guaranteed
                | to work deterministically. You don't need to cache them
                | into an "image" (well, the artifacts do get cached
                | locally and online at places like
                | https://www.cachix.org/, and the only reason they can
                | do that is because _they too_ are deterministically
                | guaranteed to succeed, more or less), which means you
                | can just include any and all services you need. (Unless
                | they need to run as separate machines/VMs... in which
                | case I suppose you just manage 2 nix files, but yes,
                | "composing" in that case is not really fleshed out as a
                | thing in Nix, to my knowledge)
        
               | poolopolopolo wrote:
                | Except you can't deploy Nix files, and even if you
                | could, you'd better be sure that every employee is
                | using Nix and has the same configuration. The whole
                | point of Docker is to make builds reproducible
                | everywhere, not just on your computer.
        
               | pmarreck wrote:
               | > except you cant deploy Nix files
               | 
                | NixOps and nix-deploy: EXIST!
                | https://arista.my.site.com/AristaCommunity/s/article/Deploy-...
               | 
               | > better be sure that every employee is using Nix and
               | have the same configuration. The whole point of docker is
               | to make reproducible builds everywhere, not just your
               | computer.
               | 
               | lol, "tell me you never used Nix without telling me you
                | never used Nix" because it _literally guarantees that_,
               | each project is a pure environment with no outside
               | influences. THAT IS LITERALLY ITS ENTIRE PURPOSE OF
               | EXISTENCE lolol
               | 
               | I absolutely guarantee you that you will have more
               | reproducible builds with Nix than with Docker. I know,
               | because I've worked with both of them for months on end,
               | and I've noticed that it pains me to work with Docker
               | more than it pains me to work with Nix (hey, it's not
               | perfect either, but perfect is the enemy of good in this
               | case)
        
               | poolopolopolo wrote:
               | First you are tightly coupling your CI to your developers
               | machine, that in itself is already a pretty bad idea.
               | Second, if one employee wants to install htop on their
               | machine, then every employee will have to install it,
               | this can quickly become a problem when you have 500+
               | developers. Third, I think you missed the first part on
               | the second quote, you are FORCING every developer to not
               | only use linux but also to use one distribution that is
               | pretty niche.
        
         | frognumber wrote:
         | Do you remember SCO?
        
           | emeraldd wrote:
           | Honestly, this is reminding me of Oracle after buying Sun.
        
       | tyingq wrote:
       | The only real moat they seem to have here is that "FROM" in a
       | Dockerfile, "image:" in a docker-compose.yml file, and the
       | docker command line all default a bare image name like
       | "somestring" to Docker Hub, i.e. "docker.io/library/somestring".
       | 
       | They pushed that with the aggressive rate limiting first though,
       | which caused a lot of people to now understand that paragraph
       | above and use proxies, specify a different "hub", etc.
       | 
       | So this move, to me, has less leverage than they might have
       | intended, since the previous move already educated people on how
       | to work around docker hub.
       | 
       | At some point, they force everyone's hand and lose their moat.
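
One of those workarounds: the Docker daemon can be pointed at a pull-through cache so that unqualified names stop hitting Docker Hub directly. `registry-mirrors` is a real daemon.json option; the mirror URL below is a placeholder for your own proxy:

```shell
# /etc/docker/daemon.json -- mirrors are consulted before Docker Hub
# for unqualified image names
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "registry-mirrors": ["https://registry-mirror.example.com"]
}
EOF
sudo systemctl restart docker
```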
        
         | remram wrote:
         | x/y expands to docker.io/x/y and z expands to
         | docker.io/library/z
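
That expansion rule can be sketched as a small shell function — an approximation for illustration, not the CLI's actual implementation (the real client also handles digests and other edge cases):

```shell
# Approximate how the docker CLI qualifies a short image reference.
# A name's first path component is treated as a registry host only if
# it contains a dot or a colon, or is exactly "localhost".
expand_image() {
  ref="$1"
  first="${ref%%/*}"
  case "$first" in
    *.*|*:*|localhost)
      if [ "$first" = "$ref" ]; then
        # no slash at all: a bare name (possibly with a tag)
        echo "docker.io/library/$ref"
      else
        echo "$ref"                            # already fully qualified
      fi ;;
    *)
      case "$ref" in
        */*) echo "docker.io/$ref" ;;          # x/y -> docker.io/x/y
        *)   echo "docker.io/library/$ref" ;;  # z -> docker.io/library/z
      esac ;;
  esac
}

expand_image z            # docker.io/library/z
expand_image x/y          # docker.io/x/y
expand_image ghcr.io/x/y  # ghcr.io/x/y
```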
        
           | tyingq wrote:
           | Right, it's a little different than my summary, but the main
           | point was that they educated everyone that there's a way
           | around that with specific image names, or a proxy, etc. If
           | they push hard enough, the internet will route around them,
           | distros will ship a patched docker, preset environment
           | variables, or a docker->podman alias, etc. They will lose
           | control over the "root namespace".
        
       | roydivision wrote:
       | Docker should never have become a business. There's virtually
       | nothing there to make a business around, it's a suite of useful
       | utilities that should have remained a simple open source project.
       | I switched to podman a while ago and haven't looked back.
        
         | sirius87 wrote:
         | Docker Hub does host images running into several GBs for even
         | small hobby projects, and they also bear network transfer
         | costs. Even with podman, you're going to have to host your
         | images somewhere, right?
         | 
         | Right now, the internet infrastructure heavily relies on the
         | good graces of Microsoft (Github, npm), and storage space and
         | network transfer charges are taken for granted.
        
           | manquer wrote:
           | The design of docker distribution is poor because the company
           | backing it wants to retain control .
           | 
           | Torrent based distribution for open source projects and other
           | public initiatives were there long before docker .
           | 
           | Apt mirroring has also been there for a long long time .
           | Checksum integrity verification of mirrors have well
           | established workflows .
           | 
           | We don't need good graces of any company to distribute assets
           | .
        
       | jszymborski wrote:
       | Has there been any work on making these centralised public
       | repositories distributed?
        
         | bandrami wrote:
         | What work needs to be done? Provision a server somewhere and
         | host it. AWS has one-click "give me a docker hub" and "give me
         | a git hub" products.
        
         | angio wrote:
         | Would it be possible to build something on top of DHT+Torrent?
         | Main issue seems around image discovery.
        
         | alias_neo wrote:
         | This has been happening recently with Helm chart repositories
         | etc. Maybe it's time people started hosting their own Container
         | Registries?
         | 
         | The huge bandwidth requirements are an incentive to keep images
         | small.
        
         | qbasic_forever wrote:
         | Yes containerd supports pulling images from IPFS:
         | https://github.com/containerd/nerdctl/blob/main/docs/ipfs.md
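
From that document, usage is roughly as follows — a sketch based on the linked nerdctl IPFS docs; exact syntax may differ between versions, and the CID below is a placeholder:

```shell
# an IPFS daemon must be running locally (e.g. `ipfs daemon`)
nerdctl push ipfs://myimage:latest    # publish an image to IPFS; prints a CID
nerdctl pull ipfs://bafkreibm...      # pull an image back by CID (placeholder)
nerdctl run --rm ipfs://bafkreibm...  # run directly from an IPFS reference
```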
        
           | delfinom wrote:
           | Except you need to pay for pinning IPFS files for them to
           | actually persist. And therein lies the crux: people don't
           | have money to pay for their small-time docker containers.
           | 
           | Some of the IPFS hosts that will give you a dropbox of
           | sorts still end up costing roughly $20/month, similar to
           | Docker's $25/month for a team account.
        
         | RobotToaster wrote:
         | I was just thinking this seems like an ideal use case for IPFS
         | or bittorrent, since most users will already be running on a
         | server.
         | 
         | It doesn't seem unreasonable to have the client automatically
         | pin/seed the container it pulls.
        
       | bambam24 wrote:
       | [dead]
        
       | communism wrote:
       | socialize Docker!
        
       | technick wrote:
       | Docker is shooting itself in the foot. Oddly I decided to put on
       | the docker shirt from one of their 2016 hackathons today before
       | reading the news. I'm embarrassed to own this shirt and will
       | throw it away after today.
       | 
       | RIP Docker, your former self will be missed while your current
       | self will be loathed.
        
       | pmlnr wrote:
       | > if we don't pay up, systems will break for many free users.
       | 
       | Yep. These things happen, which is why hosting a copy on your own
       | gitea, website, etc is so important.
       | 
       | > Start publishing images to GitHub
       | 
       | Why, to have this repeated in a few years time?
        
         | judge2020 wrote:
         | > Yep. These things happen, which is why hosting a copy on your
         | own gitea, website, etc is so important.
         | 
          | ...which involves an ongoing cost anyway. Docker is tired of
          | providing free hosting to everyone (unless they're a vetted
          | Open Source project), so you're going to see projects either
          | move to the next free solution or solicit donations (or more
          | donations) specifically to support hosting an open-access
          | registry.
        
       | 54656g3 wrote:
       | Well I am thankful because I don't use Docker in my professional
       | life but do in my personal life. I guess I will rebuild my system
       | without Docker. It made things easy but I have no interest in
       | trying to track bullshit like this as an end user.
       | 
       | I guess Docker is as good as dead.
        
       | irrational wrote:
       | What is the best replacement or alternate to Docker?
        
         | akho wrote:
         | The most Docker-shaped one is Podman. "Best" depends on what
         | you do with it.
         | 
         | I think the bigger issue here is not the lack of replacements,
         | but the end of oh-so-convenient centralization in DockerHub.
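          | 
          | Podman's CLI tracks Docker's closely enough that many
          | workflows only need an alias (a sketch; assumes Podman is
          | installed):

```shell
# Podman is daemonless; the CLI is intentionally Docker-compatible.
alias docker=podman

# Podman does not assume Docker Hub as the default registry, so
# fully qualified image names avoid ambiguity.
podman pull docker.io/library/alpine:3.17
podman run --rm docker.io/library/alpine:3.17 echo "hello from podman"
```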
        
       | numbsafari wrote:
       | Docker have a responsibility to _their_ customers, not just
       | OpenFaas and other open source projects. _Docker's_ customers
       | rely on them to provide a safe and reliable service. If Docker
       | allows these projects to be taken over by nefarious actors, then
       | the risk falls to _their_ customers, not the Open Source projects
       | that they've broken with.
        
       | VadimBauer wrote:
       | Many people are quite upset. But on the other hand, how many
       | years could this work? Petabytes of data and traffic.
       | 
       | When we started to offer an alternative to Docker Hub in
       | 2015-2016 with container-registry.com, everyone was laughing at
       | us. Why are you doing that, you are the only one, Docker Hub is
       | free or almost free.
       | 
       | Owning your data and having full control over the distribution is
       | crucial for every project, event open source.
        
       ___________________________________________________________________
       (page generated 2023-03-15 23:00 UTC)