[HN Gopher] LXC vs. Docker
       ___________________________________________________________________
        
       LXC vs. Docker
        
       Author : lycopodiopsida
       Score  : 145 points
       Date   : 2022-02-18 13:45 UTC (9 hours ago)
        
 (HTM) web link (earthly.dev)
 (TXT) w3m dump (earthly.dev)
        
       | p0d wrote:
        | I've been running my SaaS on LXC for years. I love that the
        | container is a folder to be copied. Combined with git to push
        | changes to my app, all is golden.
        | 
        | I tried Docker but stuck with LXC.
        
       | ruhrharry wrote:
        | LXC is quite different from Docker. Docker is used most of the
        | time as a containerized package format for servers and as such
        | is comparable to snap or flatpak on the desktop. You don't have
        | to know Linux administration to use Docker; that is why it is
        | so successful.
        | 
        | LXC, on the other hand, is lightweight virtualization, and one
        | would have a hard time using it without basic knowledge of
        | administering Linux.
        
       | melenaboija wrote:
       | I use LXC containers as my development environments.
       | 
        | When I changed my setup from expensive MacBooks to an expensive
        | workstation with a cheap laptop as a front end for working
        | remotely, this was the best configuration I found.
        | 
        | It took me a few hours to get everything running, but I love it
        | now. Starting a new project means creating a new container,
        | adding a rule to iptables, and I have it ready in a few seconds.
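        | 
        | Roughly, the flow is something like this (assuming the LXD
        | "lxc" CLI; the container name, IP and port are made up):
        | 
        |     lxc launch ubuntu:20.04 newproject
        |     lxc list    # note the container's IP, e.g. 10.0.3.15
        |     sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 \
        |         -j DNAT --to-destination 10.0.3.15:8080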
        
         | dijit wrote:
         | FWIW I do the same thing but with docker.
         | 
          | By exposing the docker daemon on the network and setting
          | DOCKER_HOST, I'm able to use the remote machine as if it were
          | local.
         | 
         | It's hugely beneficial, I've considered making mini buildfarms
         | that load balance this connection in a deterministic way.
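          | 
          | For example (the host is made up; 2375 is the conventional
          | plaintext port):
          | 
          |     export DOCKER_HOST=tcp://buildbox.example.com:2375
          |     docker ps      # now talks to the remote daemon
          |     docker build . # the build context is sent over the wire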
        
           | botdan wrote:
           | Do you have any more information about how you're doing this?
           | Whenever I've tried to use Docker as a remote development
           | environment the process felt very foreign and convoluted.
        
             | dijit wrote:
              | I know what you mean; it depends a little bit on your
              | topology.
             | 
             | If you have a secure network then it's perfectly fine to
             | expose the docker port on the network in plaintext without
             | authentication.
             | 
              | Otherwise you can use port forwarding over SSH.
             | 
             | To set up networked docker you can follow this:
             | https://docs.docker.com/engine/security/protect-access/
             | 
              | I'm on the phone so can't give a detailed guide, but the
              | SSH variant is roughly this (the host is made up; it
              | needs an OpenSSH new enough to forward unix sockets):
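              | 
              |     ssh -nNT -L 2375:/var/run/docker.sock me@remote &
              |     export DOCKER_HOST=tcp://localhost:2375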
        
           | stickyricky wrote:
           | Is there a benefit to this over SSH or VSCode remote?
        
             | zelphirkalt wrote:
              | Neither SSH nor VSCode offers any kind of isolation out
              | of the box.
        
               | stickyricky wrote:
                | I mean running docker on the remote machine and just
                | sshing into it. I assume changing the docker host on
                | OSX just means a command is being sent over the
                | network. Just wondering why prioritize "local"
                | development if it's all remote anyway.
        
       | theteapot wrote:
       | > Saying that LXC shares the kernel of its host does not convey
       | the whole picture. In fact, LXC containers are using Linux kernel
       | features to create isolated processes and file systems.
       | 
       | So what is Docker doing then??
        
       | dottedmag wrote:
       | Apples to oranges.
       | 
        | LXC can be directly compared with a small, and quite
        | insignificant, part of Docker: the container runtime. Docker
        | became popular not because it can run containers; many tools
        | before Docker could do that (LXC included).
       | 
       | Docker became popular because it allows one to build, publish and
       | then consume containers.
        
         | tyingq wrote:
         | The confusion is because LXD is more comparable
         | (build/publish/consume) to Docker, but the command you use to
         | run it is called "lxc", so some people call LXD "LXC".
        
         | drewcoo wrote:
         | Isn't that where LXD is supposed to fit in?
         | 
         | https://linuxcontainers.org/lxd/
        
         | fulafel wrote:
          | LXD has done all of that for a long time; e.g. here's a
          | tutorial from 2015: https://ubuntu.com/blog/publishing-lxd-images
        
         | sarusso wrote:
          | I agree on the apples to oranges, but LXC does not directly
          | compare to a container runtime IMHO. It is a proper engine,
          | to be fair, even if it provides far fewer features than the
          | Docker engine.
        
         | m463 wrote:
         | I would say docker's killer feature is the Dockerfile. It makes
         | it understandable, reproducible and available to a broad range
         | of people.
         | 
         | At least they mentioned it in their apple:oranges comparison.
         | 
          | There's also the global namespace thing. "FROM ubuntu:18.04"
          | is pretty powerful.
          | 
          | I run a Proxmox server with LXC, but if I could use a
          | Dockerfile or equivalent, my containers would be much, much
          | more organized. I wouldn't like to pull images from the
          | internet, however.
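          | 
          | For anyone who hasn't seen one, the whole appeal fits in a
          | few lines (the contents are just an example):
          | 
          |     FROM ubuntu:18.04
          |     RUN apt-get update && apt-get install -y nginx
          |     COPY nginx.conf /etc/nginx/nginx.conf
          |     CMD ["nginx", "-g", "daemon off;"]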
        
           | richardwhiuk wrote:
           | Understandable? Yes.
           | 
           | Reproducible? No. Most Dockerfiles are incredibly
           | unreproducible.
        
             | [deleted]
        
             | zelphirkalt wrote:
             | Most are not, that's true.
             | 
             | It is possible to get to reproducibility though, by
             | involving lots of checksums. For example you can use the
             | checksum of a base image in your initial FROM line. You can
             | download libraries as source code and install them. You can
             | use package managers with lock files to get to reproducible
             | environments of the language you are using. You can check
             | checksums of other downloaded files. It takes work, but
             | sometimes it is worth it.
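              | 
              | Concretely, the FROM line and a download check might look
              | like this (the digests are placeholders, not real
              | values):
              | 
              |     FROM ubuntu@sha256:<digest-of-the-audited-image>
              |     RUN curl -fsSLO https://example.com/lib.tar.gz && \
              |         echo "<sha256>  lib.tar.gz" | sha256sum -c - && \
              |         tar -xzf lib.tar.gz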
        
         | leecarraher wrote:
          | I agree. Docker added version control and an online
          | repository with management and search, and they also fixed
          | some of the networking headaches in LXC. Prior to developing
          | containerd, Docker was based on LXC containers.
        
         | itslennysfault wrote:
         | More like Apples to Apple Core, but sure.
        
         | chriswarbo wrote:
         | > Docker became popular because it allows one to build, publish
         | and then consume containers.
         | 
         | True, but Docker is an awful choice for those things (builds
         | are performed "inside out" and aren't reproducible, publishing
         | produces unauditable binary-blobs, consumption bypasses
         | cryptographic security by fetching "latest" tags, etc.)
        
           | tra3 wrote:
           | Can you expand on how builds aren't reproducible?
           | 
            | I thought Dockerfile ensured that builds are indeed
            | reproducible?
        
             | Ajedi32 wrote:
             | Running the same script every time doesn't necessarily
             | guarantee the same result.
             | 
              | Lots of docker build scripts have the equivalent of
              | 
              |     date > file.txt
              | 
              | or
              | 
              |     curl 'https://www.random.org/integers/?num=1&min=1&max=1000&col=1&base=10&format=plain&rnd=new' > file.txt
              | 
              | buried deep somewhere in the code.
             | 
             | But yeah I don't see any reason why you couldn't
             | theoretically make a reproducible build with Docker.
        
               | tra3 wrote:
               | If you have dynamism like this in your build, doesn't
               | that imply that no build system is reproducible?
        
               | MD87 wrote:
                | Docker images actually contain the timestamp each layer
                | was built at, so they are de facto non-reproducible.
               | 
               | Buildah from Red Hat has an argument to set this
               | programmatically instead of using the current date, but
               | AFAIK there's no way to do that with plain old docker
               | build.
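                | 
                | If I remember right, it's something like:
                | 
                |     buildah bud --timestamp 0 -t myimage .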
        
               | gmfawcett wrote:
               | The nixpkgs workaround is to build images with build-date
               | == 1970-01-01. Annoying but reproducible.
        
           | zapita wrote:
           | > _builds are performed "inside out"_
           | 
            | Docker supports multi-stage builds. They are quite powerful
            | and allow you to go beyond the "inside out" model (which
            | still works fine for many use cases).
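            | 
            | A rough sketch of the shape (the details are made up):
            | 
            |     # build stage: the full toolchain
            |     FROM golang:1.17 AS build
            |     WORKDIR /src
            |     COPY . .
            |     RUN CGO_ENABLED=0 go build -o /app .
            | 
            |     # final stage: just the artifact
            |     FROM alpine:3.15
            |     COPY --from=build /app /usr/local/bin/app
            |     CMD ["app"]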
           | 
           | > ... _and aren 't reproducible_
           | 
           | You can have reproducible builds with Docker. But Docker does
           | not _require_ your build to be reproducible. This allowed it
           | to be widely adopted, because it meets users where they are.
           | You can switch your imperfect build to Docker now, and
           | gradually improve it over time.
           | 
           | This is a pragmatic approach which in the long run improves
           | the state of the art more than a purist approach.
        
           | KronisLV wrote:
           | > builds are performed "inside out" and aren't reproducible
           | 
           | This is probably a good argument because of how hard it is to
           | do anything in a reproducible manner, if you care even about
           | timestamps and such matching up.
           | 
            | Yet, I'd like to disagree that it's because of inherent
            | flaws with Docker; it's merely how most people choose to
            | build their software. Nobody wants to use their own Nexus
            | instance as storage for a small set of audited
            | dependencies, configure it to be the only source for all
            | of them, build their own base images, seek out
            | alternatives for all of the web integrated build plugins,
            | etc.
           | 
           | Most people just want to feed the machine a single Dockerfile
           | (or the technology specific equivalent, e.g. pom.xml) and get
           | something that works out and thus the concerns around
           | reproducibility get neglected. Just look at how much effort
           | the folks over at Debian have put into reproducibility:
           | https://wiki.debian.org/ReproducibleBuilds
           | 
            | That said, a decent middle ground is to use a package cache
            | for your app dependencies, specific pinned versions of base
            | images (or build your own on top of the common ones, e.g. a
            | customized Alpine base image) and multi-stage builds. With
            | that, you are probably 80% of the way there, since if need
            | be, you could just dump the image's file system and diff it
            | against a known copy.
           | 
           | Nexus (some prefer Artifactory, some other solutions):
           | https://www.sonatype.com/products/repository-oss
           | 
            | Multi stage builds:
            | https://docs.docker.com/develop/develop-images/multistage-bu...
           | 
            | The remaining 20% might take a decade until reproducibility
            | is as user friendly as Docker currently is; just look at
            | how slowly Nix is being adopted.
           | 
           | > publishing produces unauditable binary-blobs
           | 
            | It's just a file system that consists of a bunch of layers,
            | isn't it? What prevents you from doing:
            | 
            |     docker run --name dump-test \
            |         alpine:some-very-specific-version sh -c exit
            |     docker export -o alpine.tar dump-test
            |     docker rm dump-test
            | 
            | You get an archive that's the full file system of the
            | container. Of course, you still need to check everything
            | that's actually inside of it and where it came from (at
            | least the image normally persists the information about how
            | it was built), but to me it definitely seems doable.
           | 
           | > consumption bypasses cryptographic security by fetching
           | "latest" tags
           | 
           | I'm not sure how security is bypassed if the user chooses to
           | use whatever is the latest released version. That just seems
           | like a bad default on Docker's part and a careless action on
           | the user's part.
           | 
           | Actually, there's no reason why you should limit yourself to
           | just using tags, since something like "my-image:2022-02-18"
           | might be accidentally overwritten unless your repo
           | specifically prevents this from being allowed. If you want,
           | you can actually run images by their hashes, for example,
           | Harbor makes this easy to do by letting you copy those values
           | from their UI, though you can also do so manually.
           | 
            | For example, let's say that we have two Dockerfiles:
            | 
            |     # testA.Dockerfile
            |     FROM alpine:some-very-specific-version
            |     RUN mkdir /test && echo "A" > /test/file
            |     CMD cat /test/file
            | 
            |     # testB.Dockerfile
            |     FROM alpine:some-very-specific-version
            |     RUN mkdir /test && echo "B" > /test/file
            |     CMD cat /test/file
           | 
            | If we use just tags to refer to the images, we can
            | eventually have them be overwritten, which can be
            | problematic:
            | 
            |     # Example of using version tags, possibly problematic
            |     docker build -t test-a -f testA.Dockerfile .
            |     docker run --rm test-a
            |     docker build -t test-a -f testB.Dockerfile .
            |     docker run --rm test-a
           | 
            | In the second case we get the "B" output even though the
            | tag is "test-a", because of a typo, user error or something
            | else. Yet, we can also use hashes:
            | 
            |     # Example of using hashes, more dependable
            |     docker build -t test-a -f testA.Dockerfile .
            |     docker image inspect test-a | grep "Id"
            |     docker run --rm "sha256:93ee9f8e3b373940e04411a370a909b586e2ef882eef937ca4d9e44083cece7c"
            |     docker build -t test-a -f testB.Dockerfile .
            |     docker image inspect test-a | grep "Id"
            |     docker run --rm "sha256:8dd9ba5f1544c327b55cbb75f314cea629cfb6bbfd563fe41f40e742e51348e2"
            |     docker build -t test-a -f testA.Dockerfile .
            |     docker image inspect test-a | grep "Id"
            |     docker run --rm "sha256:93ee9f8e3b373940e04411a370a909b586e2ef882eef937ca4d9e44083cece7c"
           | 
            | Here we see that if your underlying build is reproducible,
            | then the resulting image hash for the same container will
            | be stable. Furthermore, someone overwriting the test-a tag
            | doesn't stop you from running the first correctly built
            | image, because the tags are just a convenience; you'll
            | still be able to run the previous one.
           | 
           | Of course, that loops back to the reproducible build
           | discussion if you care about hashes matching up, rather than
           | just tags not being overwritten.
        
           | yjftsjthsd-h wrote:
           | It's still superior to everything that came before it (at
           | least, that I'm aware of), and cleared the "good enough" bar.
           | Actually, I'm _still_ not aware of anything that solves those
           | issues without making it way harder to use - ex. nix
           | addresses your points but has an awful learning curve.
        
       | password4321 wrote:
       | Is it accurate to say LXC is to Docker as git is to GitHub, or
       | vim/emacs vs. Visual Studio Code?
       | 
       | I haven't seen many examples demonstrating the tooling used to
       | manage LXC containers, but I haven't looked for it either. Docker
       | is everywhere.
        
         | throwawayboise wrote:
         | lxc launch, lxc list, lxc start, lxc stop, etc....
         | 
         | That's all I've ever needed. Docker is overkill if you just
         | need to run a few containers. There is a point where it makes
         | sense but running a few containers for a small/personal project
         | is not it.
        
           | mrweasel wrote:
            | Funnily enough, I view it the other way around. LXC is a
            | bit overkill; I just wanted to run a container, not an
            | entire VM.
            | 
            | In my mind you need to treat LXC containers as VMs in terms
            | of management. They need to be patched, monitored and
            | maintained the same way as a VM. Docker containers still
            | need to be patched of course, many seem to forget that bit,
            | but generally they seem easier to deal with. Of course,
            | that depends on what has been stuffed into the container
            | image.
            | 
            | LXC is underrated though; for small projects and businesses
            | it can be a great alternative to VM platforms.
        
         | haolez wrote:
         | In the first months of Docker, yes. Nowadays, they are
         | different beasts.
        
         | sarusso wrote:
          | I recently wrote something to clarify my mind around all this
          | [1]. If we assume that by Docker we mean the Docker engine,
          | then I think you might compare them as you said (though more
          | like vim/emacs vs. Visual Studio Code, since Git is a
          | technology while GitHub is a platform).
         | 
         | But Docker is many things: a company, a command line tool, a
         | container runtime, a container engine, an image format, a
         | registry...
         | 
         | [1]
         | https://sarusso.github.io/blog_container_engines_runtimes_or...
        
       | junon wrote:
       | LXC/LXD being the clear winner.
        
       | sarusso wrote:
        | Interesting read; not sure why you compared only these two,
        | though.
        | 
        | There are plenty of other solutions, and Docker is actually
        | many things. You can use Docker to run containers using Kata,
        | for example, which is a runtime providing full HW
        | virtualisation.
       | 
        | I wrote something similar, though much less detailed on Docker
        | and LXC and more of a bird's-eye overview to clarify
        | terminology, here:
        | https://sarusso.github.io/blog_container_engines_runtimes_or...
        
       | bamboozled wrote:
        | One major limitation of LXC is that there is no way to easily
        | self-host images. Often the official images for many
        | distributions are buggy. For example, the official Ubuntu
        | images seem to come with a raft of known issues.
        | 
        | Based on my limited interactions with it, I'd recommend staying
        | away from LXC unless absolutely necessary.
        
         | Lifelarper wrote:
         | > there is no way to easily self host images
         | 
          | When you run lxd init there's an option to make the server
          | available over the network (default: No); if enabled, you can
          | host images from there.
          | 
          |     lxc remote add myimageserver images.bamboozled.com
          |     lxc publish myimage
          |     lxc image copy myimage myimageserver
          |     lxc launch myimageserver:myimage
        
         | fuzzy2 wrote:
         | If you feel that the existing images (of lxc-download?) have
         | too many bugs for your liking, you could also try the classic
         | templates, which use _debootstrap_ and the like to create the
         | rootfs.
        
         | pentium166 wrote:
          | I've only played around with LXC/LXD a little bit; what are
          | some of the Ubuntu image issues? I did a quick google, but
          | the first results seemed to be questions about hosting on
          | Ubuntu rather than about the images themselves.
        
           | silverwind wrote:
           | In my experience, most issues are related to kernel
           | interfaces which LXC disables inside unprivileged containers,
           | paired with software that does not check if those interfaces
           | are there/work before attempting to use them.
           | 
            | These issues can be observed in the official Ubuntu image
            | and seem to get worse over time. I would recommend just
            | using VMs instead.
        
         | unixhero wrote:
          | In Proxmox, "self hosting" in the sense of having a folder
          | with LXC images is part of the distribution. You can download
          | templates from online sources and use them as images, or
          | create your own templates from already-running LXCs. Or maybe
          | you mean self hosting in another way?
        
       | fuzzy2 wrote:
       | I've been using LXC as a lightweight "virtualization" platform
       | for over 5 years now, with great success. It allows me to take
       | existing installations of entire operating systems and put them
       | in containers. Awesome stuff. On my home server, I have a VNC
       | terminal server LXC container that is separate from the host
       | system.
       | 
       | Combined with ipvlan I can flexibly assign my dedicated server's
       | IP addresses to containers as required (MAC addresses were locked
       | for a long time). Like, the real IP addresses. No 1:1 NAT. Super
       | useful also for deploying Jitsi and the like.
       | 
       | I still use Docker for things that come packaged as Docker
       | images.
        
         | heresie-dabord wrote:
          | > It allows me to take existing installations of entire
          | operating systems and put them in containers
         | 
         | Friend, do you have documentation for this process? Please
         | share your knowledge. ^_^
        
       | n3storm wrote:
        | Love to hear I am not the only one enjoying LXC rather than
        | Docker.
        
       | synergy20 wrote:
        | I think Docker grew out of LXC initially (to make LXC easier to
        | use). LXC is lightweight, but it is not portable; Docker can
        | run on all OSes. I think that's the key difference: cross-
        | platform apps. LXC remains a Linux-only thing.
        
         | deadbunny wrote:
          | Just as long as you ignore the Linux VM all those Docker
          | containers are running in.
        
       | yokem55 wrote:
       | LXD (Canonical's daemon/API front end to lxc containers) is great
       | -- as long as you aren't using the god awful snap package they
       | insist on. The snap is probably fine for single dev machines, but
        | it has zero place in anything production. This is because
        | Canonical insists on auto-updating and refreshing the snap at
        | random intervals, even when you pin to a specific version
        | channel. Three times I had to manually recover a cluster of
        | LXD systems that broke during a snap refresh because the
        | cluster couldn't cope with the snaps all refreshing at once.
       | 
       | Going forward we built and installed lxd from source.
        
         | CSDude wrote:
          | I had a huge argument in 2015 with a guy who wanted to move
          | every one of our custom .deb packages (100+) to Snap, because
          | they had talked with Canonical and it would be the future;
          | Docker would be obsolete. The main argument was to make
          | distribution easier to worker/headless/server machines. Not
          | that Docker is a direct replacement, but Snap is an
          | abomination. Snaps are mostly out of date, most of them
          | require system privileges, they are unstable, and the way
          | they mount the compressed rootfs makes startup very slow,
          | even on a good machine.
          | 
          | That all being said, LXD is a great way to run non-ephemeral
          | containers that behave more like a VM. Also check out
          | Multipass, by Canonical, which makes spinning up Ubuntu VMs
          | as easy as Docker.
        
         | alyandon wrote:
         | I got so annoyed with snapd that I finally patched the auto-
         | update functionality to provide control via environment
         | variable. It's ridiculous that this is what I have to
         | personally go through in order to maintain control of when
         | updates are applied on my own systems.
         | 
         | If enough people were to ever decide to get together and
         | properly fork snapd and maintain the patched version I'd
         | totally dedicate time to helping out.
         | 
         | https://gist.github.com/alyandon/97813f577fe906497495439c37d...
        
           | agartner wrote:
            | We blocked the annoying snapd autoupdate behavior by
            | setting an http proxy to a nonexistent server. Whenever we
            | had a maintenance window we would unset the proxy, allow
            | the update, then set the nonexistent proxy server again.
           | 
           | Very annoying.
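            | 
            | Something like this, if I recall (the port is made up):
            | 
            |     sudo snap set system proxy.http="http://127.0.0.1:9999"
            |     sudo snap set system proxy.https="http://127.0.0.1:9999"
            |     # maintenance window: clear them, refresh, set again
            |     sudo snap set system proxy.http="" proxy.https=""
            |     sudo snap refresh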
        
             | djbusby wrote:
              | This feels both clever and stupid at the same time - not
              | you, but the software games you have to play.
        
             | alyandon wrote:
             | That certainly works too but with my approach you can run
             | "snap refresh" manually whenever you feel like updating.
        
         | baggy_trough wrote:
         | Yeah, it's truly terrible. I've had downtime from this as well.
        
         | whitepoplar wrote:
         | Just curious--how do you use LXD in production? It always
         | struck me as something very neat/useful for dev machines, but I
         | had trouble imagining how it would improve production
         | workloads.
        
         | _448 wrote:
         | > The snap is probably fine for single dev machines
         | 
         | It is not good even on single dev machines.
        
         | stingraycharles wrote:
         | Makes you wonder whether Canonical has any idea about operating
         | servers. Auto-updating packages is the last thing you want.
          | Doing that for a container engine, without building in some
          | jitter to avoid the scenario you described, is absolutely
          | insane.
         | 
         | Who even uses snap in production? If I squint my eyes I can see
         | the use for desktops, but why insist on it for server
         | technologies as well?
        
           | curt15 wrote:
            | Canonical would gladly hand back full control of updates if
            | you pay them for an "enterprise edition" snap store.
            | https://ubuntu.com/core/docs/store-overview#:~:text=Brand%20....
        
             | stingraycharles wrote:
              | And even then, controlling the package versions is only
              | one of the problems. The bigger problem, which isn't
              | solved by this (as far as I can tell), is stopping the
              | machines from automatically updating at all, due to how
              | the snap software works.
        
         | pkulak wrote:
         | Not even kidding, a huge part of what made me move to Arch was
         | that it's one of the few distros that packages LXD. Apparently
         | it's a pain, but I'm forever grateful!
        
           | sreevisakh wrote:
           | Alpine is another distro that packages LXD. I have Arch on my
           | workstation, but I'm not confident about securing Arch on a
           | server. Alpine, on the other hand, is very much suited to be
           | an LXD host. It's tiny, runs entirely from RAM and can
            | tolerate disk access failures. Modifications to the host
            | filesystem won't persist after reboot, unless the admin
           | 'commits' them. The modifications can be reviewed before a
           | commit - so it's easy to notice any malicious modifications.
           | I also heard a rumor that they are integrating ostree.
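            | 
            | The commit flow is roughly this, from memory:
            | 
            |     lbu status    # show uncommitted changes (/etc etc.)
            |     lbu commit    # persist them to the boot media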
           | 
           | The only gripe I have with Alpine is its installation
           | experience. Like Arch, Alpine has a DIY type installation
           | (but a completely different style). But unlike Arch, it isn't
           | easy to properly install Alpine without a lot of trial and
           | error. Alpine documentation felt like it neglects a lot of
           | important edge cases that trip you up during installation.
           | Arch wiki is excellent on that aspect - they are likely to
           | cover every misstep or unexpected problem you may encounter
           | during installation.
        
         | khimaros wrote:
          | it looks like there has been considerable progress packaging
          | it in debian bookworm:
          | https://blog.calenhad.com/posts/2022/01/lxd-packaging-report...
        
         | warent wrote:
         | On my Ubuntu 20 server, I tried setting up microk8s with juju
         | using LXD and my god the experience was horrendous. One bug
         | after another after another after another after another. Then I
          | upgraded my memory and somehow snap/LXD got permanently stuck
          | in an invalid state. The only solution was to wipe and purge
         | everything related to snap/LXD.
         | 
         | After that I setup minikube with a Docker backend. It all
         | worked instantly, perfectly aligned with my mental model, zero
         | bugs, zero hassle. Canonical builds a great OS, but their
         | Snap/VM org is... not competitive.
        
         | rajishx wrote:
          | I am a simple man with simple needs; I am perfectly happy
          | with a distro as long as I have my editor, my terminal and my
          | browser.
          | 
          | I could not bear the snaps on Ubuntu always coming back and
          | being hard to disable on every update, so I gave up and just
          | switched to Arch, and I'm happy to have control of my system
          | again.
          | 
          | I had a lot of crashes on Ubuntu when running a huge
          | Rust-based test suite doing a lot of IO (on btrfs); I never
          | had that issue on Arch. Not sure why, and not sure how I
          | could even debug it (full freeze, nothing in the systemd
          | logs), so I guess I just gave up...
        
         | rlpb wrote:
         | > This is because canonical insists on auto-updating and
         | refreshing the snap at random intervals, even when you pin to a
         | specific version channel.
         | 
         | You can control snap updates to match your maintenance windows,
         | or just defer them. Documentation here:
         | https://snapcraft.io/docs/keeping-snaps-up-to-date#heading--...
         | 
          | What you cannot do without patching is defer an update for
          | more than 90 days. [Edit: well, you sort of can, by
          | bypassing the store and "sideloading" instead:
          | https://forum.snapcraft.io/t/disabling-automatic-refresh-for...]
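          | 
          | For reference, the knobs from that documentation look
          | something like this (the window and date are made up):
          | 
          |     sudo snap set system refresh.timer=sat,02:00-04:00
          |     sudo snap set system refresh.hold=2022-05-18T00:00:00Z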
        
           | alyandon wrote:
           | Why would any sysadmin that maintains a fleet of servers with
           | varying maintenance windows (that may or may not be dependent
           | on the packages installed) want to feed snapd a list of dates
           | and times to basically ask it pretty please not to auto-
           | update snaps?
           | 
           | It is much easier to give sysadmins back the power they've
           | always had to perform updates when it is reasonable to do so
           | on their own schedule without having to inform snapd of what
           | that schedule is.
           | 
           | I really don't appreciate being told by Canonical that I have
           | to jump through additional hoops and spend additional time
           | satisfying snapd to maintain the systems under my control.
        
             | donmcronald wrote:
              | My initial reaction was that having scheduled upgrades
              | like that would be great, but then 5 seconds later I
              | realized it's much better suited to cron, systemd,
              | Ansible, etc.
             | 
             | I think the reason for auto-updates like this is because
             | selling the control back to us is the business plan. It's
             | the same thing Microsoft does.
        
         | conradfr wrote:
         | Maybe two years ago I wanted to use LXD on a fresh Ubuntu
         | server (after testing it locally).
         | 
         | First they had just moved it to Snap which was not a great
         | install experience compared to good old apt-get, and then all
         | my containers had no IPv4 because of systemd for a reason I
         | can't remember.
         | 
         | After two or three tries I just gave up, installed CapRover
         | (still in use today) and have not tried again since.
        
       | micw wrote:
        | A while ago, I spent some time making LXC run in a Docker
        | container. The idea is to have a stateful system managed by LXC
        | run in a Docker environment, so that management features from
        | K8S (e.g. Volumes, Ingress and Load Balancers) can be used for
        | the LXC containers. With it, I still run a few desktops on my
        | Kubernetes instances, accessible via x2go.
       | 
       | https://github.com/micw/docker-lxc
        
       | istoica wrote:
       | The perfect pair
       | 
       |  _Containerfile_ vs _Dockerfile_ - Infra as code
       | 
       |  _podman_ vs _docker_ - https://podman.io
       | 
       |  _podman desktop companion_ (author here) vs _docker desktop ui_
       | - https://iongion.github.io/podman-desktop-companion
       | 
        |  _podman-compose_ vs _docker-compose_ - there should be no vs
        | here; _docker-compose_ itself can use the podman socket for
        | its connection OOB, as the APIs are compatible, but it is an
        | alternative worth exploring nevertheless.
       | 
       | Things are improving at a very fast pace, the aim is to go way
       | beyond parity, give it a chance, you might enjoy it. There is
       | continuous active work that is enabling real choice and choice is
       | always good, pushing everyone up.
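        | 
        | The docker-compose-over-podman trick is roughly this (the
        | rootless socket path may vary):
        | 
        |     systemctl --user enable --now podman.socket
        |     export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
        |     docker-compose up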
        
         | 2OEH8eoCRo0 wrote:
         | I enjoy podman. It supported cgroups v2 before Docker and is
         | daemonless.
        
         | nijave wrote:
          | Is networking any better with Podman on Docker Compose files?
          | Last time I tried, most docker-compose files didn't actually
          | work because they created networks that Podman doesn't have
          | privileges to set up unless run as root.
          | 
          | Afaik, the kernel network APIs are pretty complicated, so
          | it's fairly difficult to expose them to unprivileged users
          | safely.
        
       | adamgordonbell wrote:
       | I like the docker way of one thing, one process, per container.
       | LXC seems a bit different.
       | 
        | However, an exciting thing to me is the Cambrian explosion of
        | alternatives to docker: podman, nerdctl, even lima for creating
        | a Linux VM and using containerd on macOS looks interesting.
        
         | umvi wrote:
            | Docker can have N processes per container though; it just
            | depends on how you set up your image.
        
           | trulyme wrote:
           | Yes, and it makes sense in some cases. Supervisord is awesome
           | for this.
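            | 
            | A minimal supervisord.conf sketch (the program names and
            | paths are made up):
            | 
            |     [supervisord]
            |     nodaemon=true
            | 
            |     [program:web]
            |     command=python3 /app/web.py
            | 
            |     [program:worker]
            |     command=python3 /app/worker.py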
        
         | wanderr wrote:
          | That seems weird for some stacks though, like nginx, php-fpm,
          | php. At least I still haven't wrapped my head around what the
          | right answer is for the number of containers involved there.
        
         | merlinscholz wrote:
         | I recently started using containerd inside Nomad, a breath of
         | fresh and simple air after failed k8s setups!
        
           | adamgordonbell wrote:
           | Oh, Nomad looks interesting. Why should someone reach for it
           | vs K8S?
        
             | merlinscholz wrote:
              | Cloudflare recently posted a great blog article on
              | how/why they use Nomad:
              | https://blog.cloudflare.com/how-we-use-hashicorp-nomad/
        
             | orthecreedence wrote:
             | I have used nomad at my work for a few years now. I'd say
             | where it shines is running stateless containers simply and
             | easily. If you're trying to run redis, postgres, etc...do
             | it somewhere else. If you're trying to spin up and down
             | massive amounts of queue workers hour by hour, use it as a
             | distributed cron, or hell just run containers for your
             | public api/frontend and keep them running, nomad is great.
             | 
             | That said, you're going to be doing some plumbing for
             | things like wiring your services together (Fabio/Consul
             | Connect are good choices), detecting when to add more host
             | machines, etc.
             | 
             | As far as how it compares to k8s, I don't know, I haven't
             | used it materially yet.
        
             | dijit wrote:
             | Nomad can run a lot more things but it's not so batteries
             | included.
             | 
             | Nomad is trying to be an orchestrator.
             | 
             | Kubernetes is trying to be an operating system for cloud
             | environments.
             | 
              | Since they aim to be different things, they make
              | different trade-offs.
        
       | gerhardhaering wrote:
       | This would have been an ok article in 2013-2015. Nothing really
       | has changed wrt. these two technologies since.
        
       | sickygnar wrote:
       | I never hear systemd-nspawn mentioned in these discussions. It
       | ships and integrates with systemd and has a decent interface with
       | machinectl. Does anyone use it?
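        | 
        | For anyone curious, the usual flow is roughly this (assuming a
        | systemd-based guest so machinectl can boot it):
        | 
        |     sudo debootstrap stable /var/lib/machines/demo
        |     sudo systemd-nspawn -D /var/lib/machines/demo  # a shell
        |     sudo machinectl start demo                     # boot it
        |     machinectl list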
        
         | numlock86 wrote:
         | > I never hear systemd-nspawn mentioned in these discussions.
         | It ships and integrates with systemd and has a decent interface
         | with machinectl.
         | 
         | I couldn't have said it better. And yes, I use it. Also in
         | production systems.
        
         | goombacloud wrote:
          | The big missing feature is being able to pull Docker images
          | and run them without resorting to hacks.
        
           | goombacloud wrote:
            | searched and found this:
            | https://raw.githubusercontent.com/moby/moby/master/contrib/d...
        
       | unixhero wrote:
       | LXC has been so stable and great to work with for many years. I
       | have had services in production on LXC containers and it has been
       | a joy. I can not say the same about things I have tried to
       | maintain in production with Docker, in which I had similar
       | experiences to [0], albeit around that time and therefore
       | arguably not recently.
       | 
       | For a fantastic way to work with LXC containers I recommend the
       | free and open Debian based hypervisor distribution Proxmox [1].
       | 
        | [0], https://thehftguy.com/2016/11/01/docker-in-production-an-his...
       | 
       | [1], https://www.proxmox.com/en/proxmox-ve
        
       | buybackoff wrote:
       | LXC via Proxmox is great for stateful deployments on baremetal
       | servers. It's very easy to backup entire containers with the
       | state (SQLite, Postgres dir) to e.g. NAS (and with TrueNAS then
       | to S3/B2). Best used with ZFS raid, with quotas and lazy space
       | allocation backups are small or capped.
       | 
        | Nothing stops one from running Docker inside LXC. For
        | development I usually just make a dedicated privileged LXC
        | container with nesting enabled, to avoid some known issues and
        | painful config. LXC containers can be on a private network, and
        | a reverse proxy on the host can map to the only required ports,
        | without thinking about what ports Docker or oneself could have
        | accidentally made public.
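        | 
        | On Proxmox that container would be created with something like
        | this (the ID and template name are made up):
        | 
        |     pct create 200 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz \
        |         --hostname dev --features nesting=1 \
        |         --net0 name=eth0,bridge=vmbr0,ip=dhcp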
        
         | lostlogin wrote:
         | Good comment. It was a revelation to me when I used Proxmox and
         | played with LXCs. Getting an IP per container is really nice.
        
         | petre wrote:
          | It's annoying that you can only make snapshots of a stopped
          | container. With VMs it works while the VM is running.
        
           | razerbeans wrote:
           | Weird, I just tested on my proxmox instance and I was able to
           | create a snapshot of a running container (PVE 7.1-10)
        
           | nullwarp wrote:
           | Also can't do live migrations or backups and moving storage
           | around is a headache.
           | 
           | We've pretty much stopped using LXC containers in Proxmox
           | because of all the little issues.
        
           | aaronius wrote:
           | That highly depends on the underlying storage. If it is
           | something that supports snapshots (ZFS, Ceph, LVM thin) then
           | it should work fine, also backups will be possible without
           | any downtime as they will be read from a temporary snapshot.
        
             | buybackoff wrote:
              | Even with ZFS you still have to wait for the RAM to be
              | dumped, don't you? And it will freeze at least for the
              | duration of the dump write. Do they have CoW for
              | container memory?
              | 
              | But even if they had, the RAM snapshot would need to be
              | written without freezing the container. I would
              | appreciate an option where I could ignore everything
              | that was not fsynced, e.g. the Postgres use case. In
              | that case a normal ZFS snapshot should be enough.
        
               | aaronius wrote:
               | RAM and other state can be part of a snapshot for VMs, in
               | which case the VM will continue right where it was.
               | 
               | The state of a container is not part of the snapshot
               | (just checked), as it is really hard to capture the state
               | of the container (CPU, network, all kinds of file and
                | socket handles) and restore it, because all an LXC
                | container is, is local processes in their separate
                | cgroups. This is also the reason why a live migration is
                | not really possible right now, as all of that would need
                | to be cut out of the current host machine and restored
                | on the target machine.
               | 
               | This is much easier for VMs as Qemu offers a nice
               | abstraction layer.
        
         | ansible wrote:
          | We do something similar with btrfs as the filesystem. There
          | have been some issues with btrfs itself, but the LXC side of
          | this has worked pretty well. Any significant storage (such as
          | project directories) is done with a bind mount into the
          | container, so that it is easy to separately snapshot the data
          | or have multiple LXC containers on the same host access the
          | same stuff. That was more important when we were going to run
          | separate LXC containers for NFS and Samba fileservers, but we
          | ended up combining those services into the same container.
        
         | ignoramous wrote:
         | > LXC via Proxmox is great for stateful deployments on
         | baremetal
         | 
         | Reminds me of (now defunct?) flockport.com
         | 
          | They had some interesting demos up on YouTube, showcasing
          | what looked like a sandstorm.io-esque setup.
        
       ___________________________________________________________________
       (page generated 2022-02-18 23:00 UTC)