[HN Gopher] Harbormaster: The anti-Kubernetes for your personal ...
       ___________________________________________________________________
        
       Harbormaster: The anti-Kubernetes for your personal server
        
       Author : stavros
       Score  : 356 points
       Date   : 2021-08-19 08:59 UTC (14 hours ago)
        
 (HTM) web link (gitlab.com)
 (TXT) w3m dump (gitlab.com)
        
       | hardwaresofton wrote:
       | Other uncomplicated pieces of software that manage dockerized
       | workloads:
       | 
       | - https://dokku.com
       | 
       | - https://caprover.com
        
         | stavros wrote:
         | I use Dokku and love it! The use case is a bit different, as
         | it's mostly about running web apps (it does ingress as well and
         | is rather opinionated about the setup of its containers), but
         | Harbormaster is just a thin management layer over Compose.
        
         | wilsonfiifi wrote:
         | I really wish Dokku would embrace docker swarm like caprover.
         | Currently they have a scheduler for kubernetes but the docker
         | swarm scheduler is indefinitely delayed [0]. It's like the
         | missing piece to making Dokku a real power tool for small
         | teams.
         | 
         | Currently, if you want to scale Dokku horizontally and aren't
         | ready to take the kubernetes plunge, you have to put a load
         | balancer in front of your multiple VMs running Dokku and that
          | comes with its own headaches.
         | 
         | [0] https://github.com/dokku/dokku/projects/1#card-59170273
        
           | proxysna wrote:
            | You should give Nomad a try; Dokku has a Nomad backend:
            | https://github.com/dokku/dokku-scheduler-nomad
        
         | conradfr wrote:
         | I use CapRover and it mostly works.
         | 
         | My biggest complaint would be the downtime when the docker
         | script runs after each deployment.
        
       | aae42 wrote:
       | this is nice, this should help a lot of people in that in-between
       | space
       | 
       | i just recently decided to graduate from just `docker-compose up`
       | running inside tmux to a more fully fledged system myself...
       | 
       | since i know Chef quite well i just decided to use Chef in local
       | mode with the docker community cookbook
       | 
       | i also get the nice tooling around testing changes to the
       | infrastructure in test kitchen
       | 
        | if this had existed before i made that switch, i may have
        | considered it. nice work!
        
       | dneri wrote:
       | This seems like a neat project! I run a homelab and my container
       | host runs Portainer & Caddy which is a really clean and simple
        | docker-compose deployment stack. This tool seems like it does
       | less than Portainer, so I am not clear on why it would be
       | preferable - just because it is even simpler?
        
       | earthboundkid wrote:
       | Python is sort of a non-starter for me. If I had a reliable way
       | of running Python on a VPS, I wouldn't need Docker, now would I?
        
         | franga2000 wrote:
          | What do you mean? Python comes pre-installed on every distro
          | I know, virtual environment support is built in, and
          | alternative version packages are readily available on distros
          | (though backwards compatibility is pretty good in Python
          | anyway).
         | 
         | If anything, Python in Docker is more of a pain than bare-metal
         | since the distro (usually alpine or debian) ships its own
         | Python version and packages that are almost entirely
         | disconnected from the one the image provides.
        
       | mradmin wrote:
       | My anti-kubernetes setup for small single servers is docker
       | swarm, portainer & traefik. It's a setup that works well on low
       | powered machines, gives you TLS (letsencrypt) and traefik takes
       | care of the complicated network routing.
       | 
       | I created a shell script to easily set this up:
       | https://github.com/badsyntax/docker-box
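        | 
        | As a taste of the approach, exposing a service through Traefik
        | is mostly labels on the service (a sketch, not taken from
        | docker-box; the domain and certresolver name are made up and
        | assume a resolver configured on the Traefik side):
        | 
        |     services:
        |       whoami:
        |         image: traefik/whoami
        |         deploy:
        |           labels:
        |             - traefik.enable=true
        |             - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
        |             - traefik.http.routers.whoami.tls.certresolver=letsencrypt
        |             - traefik.http.services.whoami.loadbalancer.server.port=80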
        
         | kawsper wrote:
         | I have a similar setup, but with Nomad (in single server mode)
         | instead of docker swarm and portainer. It works great.
        
           | GordonS wrote:
           | Don't suppose you're able to point to a simple Nomad config
           | for a dockerised web app, with a proxy and Let's Encrypt?
        
             | kawsper wrote:
             | I will see if I can write up a simple example, do you have
             | anywhere I can ping you?
        
               | GordonS wrote:
               | That would be great, thanks!
               | 
               | I'm at: gordon dot stewart 333 at gmail dot com
        
           | stavros wrote:
           | What does Nomad do for you, exactly? I've always wanted to
           | try it out, but I never really got how it works. It runs
           | containers, right? Does it also do networking, volumes, and
           | the other things Compose does?
        
             | heipei wrote:
             | What I like about Nomad is that it allows scheduling non-
             | containerized workloads too. What it "does" for me is that
             | it gives me a declarative language to specify the
             | workloads, has a nice web UI to keep track of the workloads
             | and allows such handy features as looking at the logs or
             | exec'ing into the container from the web UI, amongst other
             | things. Haven't used advanced networking or volumes yet
             | though.
        
               | stavros wrote:
               | So do you use it just for scheduling commands to run?
               | I.e. do you use `docker-compose up` as the "payload"?
        
               | kawsper wrote:
               | You send a job-specification to the Nomad API.
               | 
                | There are different kinds of workloads; I use Docker
                | containers the most, but jobs can also run at a system
                | level. There are also different operating modes: some
                | jobs can be scheduled like cron, while other jobs just
                | expose a port and want to be registered in Consul's
                | service mesh.
               | 
               | A job can also consist of multiple subtasks, an example
               | could be nginx + django/rails subtasks that will be
               | deployed together.
               | 
               | You can see an example of a Docker job here:
                | https://www.nomadproject.io/docs/job-specification#example
               | 
               | With a few modifications you can easily allow for
               | blue/green-deployments.
        
               | stavros wrote:
               | This is very interesting, thanks! I'll give it a go.
        
           | ianlevesque wrote:
           | Nomad is so perfect for this. I've been meaning to blog about
           | it somewhere.
        
         | GordonS wrote:
         | This is exactly how I deployed my last few projects, and it
         | works great!
         | 
         | The only things I'd change are switching to Caddy instead of
         | Traefik (because Traefik 2.x config is just so bewilderingly
         | complex!), and I'm not convinced Portainer is really adding any
         | value.
         | 
         | Appreciate you sharing your setup script too.
        
           | mradmin wrote:
           | Agree the traefik config is a little complex but otherwise it
           | works great for me. About using portainer, it's useful for
           | showing a holistic view of your containers and stacks, but I
            | also use it for remote deployment of services (e.g. as part of
           | CI/CD). I'll push a new docker image version then I'll use
           | the portainer webhooks to redeploy the service, then docker
           | swarm takes over.
        
             | GordonS wrote:
             | Ah, I wasn't aware of the web hooks, that sounds useful :)
        
               | mradmin wrote:
               | Here's an example using GitHub Actions:
                | https://github.com/badsyntax/docker-box/tree/master/examples...
        
           | dneri wrote:
           | Absolutely agree, I switched to Caddy recently and the
           | configuration is considerably easier than Traefik. Very
           | simple TLS setup (including self signed certificates).
        
             | e12e wrote:
             | After some struggle I've managed to set up traefik with
             | tags/docker socket so that services can "expose themselves"
             | via tags in their service definitions - is there anything
             | similar for caddy?
        
         | mrweasel wrote:
         | That's still a bit more than I feel is required.
         | 
         | My problem is in the two to eight server space, but networking
         | is already externally managed and I have a loadbalancer. It's
          | in this space that I feel we're lacking a good solution. The
          | size is too small to justify taking out nodes for a control
          | plane, but big enough that Ansible feels weird.
        
       | thiht wrote:
        | Do people actually use k8s on a personal server? What's the point?
       | Surely just Docker with restart policies (and probably even just
       | systemd services if it's your thing) is enough?
       | 
       | K8s seems way overused in spheres without a real need for it.
        | That would explain the "k8s is overcomplicated" I keep reading
        | everywhere. It's not overcomplicated, you just don't need it.
        
       | corndoge wrote:
       | Is there software like Compose in terms of simplicity, that
       | supports multiple nodes? I use k8s for an application that really
       | needs to use multiple physical nodes to run containerized jobs
       | but k8s feels like overkill for the task and I spend more time
       | fixing k8s fuckups than working on the application. Is there
       | anything in between compose and k8s?
        
         | maltalex wrote:
         | Docker Swarm?
        
         | heipei wrote:
         | Hashicorp Nomad
        
         | stavros wrote:
         | I hear Nomad mentioned a lot for this, yeah.
        
       | selfhoster11 wrote:
       | I have a similar DIY solution with an Ansible playbook that
       | automatically installs or restarts docker-compose files. I am
       | considering switching to Harbormaster, since it's much closer to
       | what I wanted from the start.
        
       | GekkePrutser wrote:
       | Cool name. Reminds me of dockmaster which was some ancient NSA
       | system (It was mentioned in Clifford Stoll's excellent book "The
       | Cuckoo's Egg"). It was the one the German KGB hacker he caught
       | was trying to get into.
       | 
        | It sounds like a good option too; I don't want all the complexity
        | of Kubernetes at home. If I worked on the cloud team at work I
        | might use it at home, but I don't.
        
       | sgentle wrote:
       | I unironically solved this problem by running docker-compose in
       | Docker. You can build an image that's just the official
       | docker/compose image with your docker-compose.yml on top, mount
       | /var/run/docker.sock into it, and then when you start the docker-
       | compose container, it starts all your dependencies. If you run
       | Watchtower as well, everything auto-updates.
       | 
       | Instead of deploying changes as git commits, you deploy them as
       | container image updates. I'm not going to call it a good
       | solution, exactly, but it meant I could just use one kind of
       | thing to solve my problem, which is a real treat if you've spent
       | much time in the dockerverse.
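        | 
        | A rough sketch of the idea as a compose file (mounting the
        | stack's compose file instead of baking it into an image; file
        | names made up):
        | 
        |     services:
        |       deployer:
        |         image: docker/compose:1.29.2
        |         volumes:
        |           - /var/run/docker.sock:/var/run/docker.sock
        |           - ./stack.yml:/stack/docker-compose.yml:ro
        |         working_dir: /stack
        |         command: up -d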
        
         | zrail wrote:
          | This is legitimately a fantastic idea. Would you be willing
          | to publish a bit more detail about it? Even just a gist of the
         | Dockerfile would be great.
        
         | stavros wrote:
         | Hmm, that's interesting, do you run Docker in Docker, or do you
         | expose the control socket?
        
       | awinter-py wrote:
       | so useful. I have the same use case as the author
       | 
       | also kind of want RDS for my machine -- backup/restore + upgrade
       | for databases hosted locally
        
       | stavros wrote:
       | Hey everyone! I have a home server that runs some apps, and I've
       | been installing them directly, but they kept breaking on every
       | update. I wanted to Dockerize them, but I needed something that
       | would manage all the containers without me having to ever log
       | into the machine.
       | 
       | This also worked very well for work, where we have some simple
       | services and scripts that run constantly on a micro AWS server.
       | It's made deployments completely automated and works really well,
       | and now people can deploy their own services just by adding a
       | line to a config instead of having to learn a whole complicated
       | system or SSH in and make changes manually.
       | 
       | I thought I'd share this with you, in case it was useful to you
       | too.
        
         | dutchmartin wrote:
         | Interesting case. But did you look at other systems before
         | this? I myself use caprover[1] for a small server deployment.
         | 1: https://caprover.com/
        
           | c17r wrote:
           | I use caprover on my DO instance and it works great. Web
           | apps, twitter/reddit bots, even ZNC.
        
           | stavros wrote:
           | I have used Dokku, Kubernetes, a bit of Nomad, some Dokku-
           | alikes, etc, but none of them did things exactly like I
           | wanted (the single configuration file per server was a big
           | requirement, as I want to know exactly what's running on a
           | server).
        
         | fenollp wrote:
         | > I needed something that would manage all the containers
         | without me having to ever log into the machine.
         | 
         | Not saying this would at all replace Harbormaster, but with
         | DOCKER_HOST or `docker context` one can easily run docker and
         | docker-compose commands without "ever logging in to the
         | machine". Well, it does use SSH under the hood but this here
         | seems more of a UX issue so there you go.
         | 
         | Discovering the DOCKER_HOST env var (changes the daemon socket)
         | has made my usage of docker stuff much more powerful. Think
         | "spawn a container on the machine with bad data" a la Bryan
         | Cantrill at Joyent.
        
           | thor_molecules wrote:
           | What is the "Bryan Cantrill at Joyent" you're referring to?
        
             | e12e wrote:
              | Not (I think) the exact talk/blog post GP was thinking of -
              | but worth watching IMHO:
             | 
             | "Debugging Under Fire: Keep your Head when Systems have
             | Lost their Mind * Bryan Cantrill * GOTO 2017"
             | https://youtu.be/30jNsCVLpAE
             | 
             | Ed: oh, here we go I think?
             | 
              | > Running Aground: Debugging Docker in Production -
              | Bryan Cantrill. Talk originally given at DockerCon '15,
              | which (despite being a popular presentation and still
              | broadly current) Docker Inc. has elected to delist.
             | 
             | https://www.youtube.com/watch?v=AdMqCUhvRz8
        
             | zdragnar wrote:
              | The technology analogy is Manta, which Bryan covers in at
              | least one if not several popular talks on YouTube, in
              | particular about containerization.
             | 
             | He has a lot to say about zones and jails and chroot
             | predating docker, and why docker and co. "won" so to speak.
        
           | stavros wrote:
           | Hmm, doesn't that connect your local Docker client to the
           | remote Docker daemon? My goal isn't "don't SSH to the
           | machine" specifically, but "don't have state on the machine
           | that isn't tracked in a repo somewhere", and this seems like
           | it would fail that requirement.
        
             | nanis wrote:
             | > "don't have state on the machine that isn't tracked in a
             | repo somewhere"
             | 
             | https://docs.chef.io/chef_solo/
        
               | revscat wrote:
               | > chef-solo is a command that executes Chef Infra Client
               | in a way that does not require the Chef Infra Server in
               | order to converge cookbooks.
               | 
               | I have never used Chef. This is babble to me.
        
               | nanis wrote:
               | > Chef Infra is a powerful automation platform that
               | transforms infrastructure into code. Whether you're
               | operating in the cloud, on-premises, or in a hybrid
               | environment, Chef Infra automates how infrastructure is
               | configured, deployed, and managed across your network, no
               | matter its size.
               | 
               | In an imprecise nutshell: You specify what needs to exist
               | on the target system using Chef's DSL and Chef client
               | will converge the state of the target to the desired one.
        
               | redis_mlc wrote:
               | Normally the chef server listener is on a remote machine.
               | 
               | But there's another way to run the chef server locally,
               | called chef solo, which developers commonly use when
               | writing cookbooks and recipes.
               | 
               | Chef is one of the most complicated devops tools ever, so
               | don't expect to understand it without effort.
        
               | tremon wrote:
               | chef-solo is a command that applies configuration
               | statements on a host without using a separate metadata
               | server.
        
             | inetknght wrote:
             | What do you think isn't getting tracked?
             | 
             | You could put your SSH server configuration in a repo. You
             | could put your SSH authorization key in a repo. You could
             | even put your private key in a repo if you really wanted.
        
               | stavros wrote:
               | How do you track what's supposed to run and what's not,
               | for example? Or the environment variables, or anything
               | else you can set through the cli.
        
               | inetknght wrote:
               | What do you mean?
               | 
               | You run what's supposed to run the same way you would
               | anything else. It's the same for the environment
               | variables.
               | 
               | How would you track what's supposed to run and what's not
               | for Docker? Using the `DOCKER_HOST` environment variable
               | to connect over SSH is the exact same way.
        
               | stavros wrote:
               | I wouldn't. That's why I wrote Harbormaster, so I can
               | track what's running and what isn't.
        
               | electroly wrote:
               | Docker Compose is designed for this.
        
               | hckr1292 wrote:
               | The killer feature of harbormaster is watching the remote
               | repository. Can docker-compose do that? If it can, I
               | should just leverage that feature instead of
               | harbormaster!
               | 
               | The nicety here on harbormaster seems to be that there
               | are some ways to use the same code as a template in which
               | specific differences are dynamically inserted by
               | harbormaster. I'm not aware of how you could use docker-
               | compose (without swarm) to accomplish this, unless you
               | start doing a lot of bash stuff.
               | 
               | I also appreciate that harbormaster offers opinions on
               | secrets management.
        
               | stavros wrote:
               | Yep, that's why Harbormaster uses it.
        
               | gibs0ns wrote:
                | For me, I don't define any variables via the CLI; I put
                | them all in the docker-compose.yml or the accompanying
                | .env file, that way it's a simple `docker-compose up` to
               | deploy. Then I can track these files via git, and deploy
               | to remote docker hosts using docker-machine, which
               | effectively sets the DOCKER_HOST env var.
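                | 
                | E.g. something like this (a sketch; the image and
                | variable names are made up), so the only per-host state
                | is the .env file:
                | 
                |     services:
                |       app:
                |         image: ghcr.io/example/app:${APP_TAG:-latest}
                |         env_file: .env
                |         restart: always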
               | 
               | While I haven't used it personally, there is [0]
               | Watchtower which aims to automate updating docker
               | containers.
               | 
               | [0] https://github.com/containrrr/watchtower
        
             | mixedCase wrote:
             | Have you tried NixOS?
        
               | stavros wrote:
               | I have, and it's really good, but it needs some
               | investment in creating packages (if they don't exist) and
               | has some annoyances (eg you can't talk to the network to
               | preserve determinism). It felt a bit too heavy-handed for
               | a few processes. We also used to use it at work
               | extensively for all our production but migrated off it
               | after various difficulties (not bugs, just things like
               | having its own language).
        
               | mixedCase wrote:
               | You can talk to the network, either through the escape
               | hatch or provided fetch utilities, which tend to require
               | checksums. But you do have to keep the result
               | deterministic.
               | 
               | Agreed on it being a bit too heavy-handed, and the
               | tooling isn't very helpful for dealing with it unless
               | you're neck-deep into the ecosystem already.
        
       | sandGorgon wrote:
        | You should check out k3s or k0s - single-machine Kubernetes.
        
         | stavros wrote:
         | I did, but even that was a bit too much when I don't really
         | need to be K8s-compatible. Harbormaster doesn't run any extra
         | daemons at all, so that was a better fit for what I wanted to
         | do (I also want to run stuff on Raspberry Pis and other
         | computers with low resources).
        
           | sandGorgon wrote:
           | fair point. I have generally had a very cool experience
           | running these single daemon kubernetes distros.
        
             | stavros wrote:
             | They look very very interesting for development and things
             | like that, and I'm going to set one up locally to play
             | with, they just seemed like overkill for running a bunch of
             | Python scripts, Plex, etc.
        
       | debarshri wrote:
        | I have been knee-deep in the deployment space for the past 4
        | years. It is a pretty hard problem to solve to the n-th level.
        | Here's my 2 cents.
        | 
        | Single-machine deployments are generally easy; you can do them
        | DIY. The complexity arises the moment you have another machine
        | in the setup: scheduling workloads, networking, and setup, to
        | name a few, start becoming complicated.
        | 
        | From my perspective, Kubernetes was designed for multiple teams
        | working on multiple services and jobs, making operations kind
        | of self-serviced. So I can understand the anti-Kubernetes
        | sentiment.
        | 
        | There is a gap in the market between VM-oriented simple
        | deployments and Kubernetes-based setups.
        
         | SOLAR_FIELDS wrote:
         | IMO the big draw of running K8S on my home server is the
         | unified API. I can take my Helm chart and move it to whatever
         | cloud super easily and tweak it for scaling in seconds. This
         | solution from the post is yet another config system to learn,
         | which is fine, but is sort of the antithesis of why I like K8S.
         | I could see it being theoretically useful for someone who will
         | never use K8S (eg not a software engineer by trade, so will
         | never work a job that uses K8s), but IMO those people are
          | probably running VMs on their home servers instead, since how
          | many non-software engineers are going to learn and use
          | docker-compose but not K8s?
         | 
         | Anecdotal, but anyone I know running home lab setups that
         | aren't software guys are doing vSphere or Proxmox or whatever
         | equivalent for their home usecases. But I know a lot of old
         | school sysadmin guys, so YMMV.
        
           | debarshri wrote:
            | I agree with you. It is an antithesis; that is why it is
            | marketed as an anti-Kubernetes toolset.
            | 
            | You cannot avoid learning k8s; you will end up encountering
            | it everywhere, whether you like it or not. It has been the
            | tech buzzword of the past few years, along with cloud
            | native and devops.
            | 
            | I really think that if you wish to be a great engineer and
            | truly appreciate new tools generally, you have to go through
            | the route of setting up a Proxmox cluster, loading images,
            | building those VM templates, etc. Jumping directly to
            | containers and the cloud, you kind of skip steps. That is
            | not bad, but you do miss out on a few foundational concepts
            | around networking, operating systems, etc.
            | 
            | The way I would put it is: a chef who also farms their own
            | vegetables (a.k.a. setting up your own clusters and
            | deploying your apps) versus a chef who goes to a high-end
            | wholesaler to buy premium vegetables and does not care how
            | they are grown (a.k.a. developers using Kubernetes,
            | container orchestration, PaaS).
        
           | thefunnyman wrote:
           | I've been working on using k3s for my home cluster for this
           | exact reason. I run it in a vm on top of proxmox, using
           | packer, terraform, and ansible to deploy. My thought process
           | here is that if I ever want to introduce more nodes or switch
           | to a public cloud I could do so somewhat easily (either with
           | a managed k8s offer, or just by migrating my VMs). I've also
           | toyed with the idea of running some services on public cloud
           | and some more sensitive services on my own infra.
        
             | skrtskrt wrote:
             | I have been doing k3s on a Digital Ocean droplet and I
             | would say k3s has really given me an opportunity to learn
             | some k8s basics without truly having to install and stand
             | up every single component of a usable k8s cluster (ingress
             | provider, etc) on my own.
             | 
             | It took a bit to figure out setting up an https cert
             | provider but then it was pretty much off to the races
        
               | thawkins wrote:
               | I use kind with podman running rootless, it only works on
               | systems with cgroup2 enabled. But it's very cool.
               | Conventional k8s with docker has a number of security
                | gotchas that stem from it effectively running the
               | containers as root. With rootless podman k8s, it is easy
               | to provide all your devs with local k8s setups without
               | handing them root/sudo access to run it. This is
               | something that has only recently started working right as
               | more container components and runtimes started to support
               | cgroup2.
        
         | globular-toast wrote:
          | > There is a gap in the market between VM-oriented simple
          | deployments and Kubernetes-based setups.
         | 
         | What's wrong with Ansible? You can deploy docker containers
         | using a very similar configuration to docker-compose.
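          | 
          | A minimal sketch of that (host and image names made up; uses
          | the community.docker collection):
          | 
          |     - hosts: myserver
          |       tasks:
          |         - name: Run the app container
          |           community.docker.docker_container:
          |             name: myapp
          |             image: ghcr.io/example/myapp:latest
          |             restart_policy: always
          |             published_ports:
          |               - "8080:8080"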
        
           | jozzy-james wrote:
            | team ansible as well, tho our 100-some-odd servers probably
            | don't warrant much else.
        
         | KronisLV wrote:
          | > There is a gap in the market between VM-oriented simple
          | deployments and Kubernetes-based setups.
         | 
         | In my experience, there are actually two platforms that do this
         | pretty well.
         | 
         | First, there's Docker Swarm (
         | https://docs.docker.com/engine/swarm/ ) - it comes preinstalled
         | with Docker, can handle either single machine deployments or
         | clusters, even multi-master deployments. Furthermore, it just
         | adds a few values to Docker Compose YAML format (
         | https://docs.docker.com/compose/compose-file/compose-file-v3...
         | ) , so it's incredibly easy to launch containers with it. And
         | there are lovely web interfaces, such as Portainer (
         | https://www.portainer.io/ ) or Swarmpit ( https://swarmpit.io/
         | ) for simpler management.
         | 
         | Secondly, there's also Hashicorp Nomad (
         | https://www.nomadproject.io/ ) - it's a single executable
         | package, which allows similar setups to Docker Swarm,
         | integrates nicely with service meshes like Consul (
         | https://www.consul.io/ ), and also allows non-containerized
         | deployments to be managed, such as Java applications and others
          | ( https://www.nomadproject.io/docs/drivers ). The only
          | serious downsides are having to use the HCL DSL (
          | https://github.com/hashicorp/hcl ) and their web UI being
          | read-only in the last versions that I checked.
         | 
          | There are also some other tools, like CapRover (
          | https://caprover.com/ ), available, but many of those use
          | Docker Swarm under the hood and I personally haven't used
          | them. Of course, if you still want Kubernetes but implemented
          | in a slightly simpler way, then there's also the Rancher K3s
          | project ( https://k3s.io/ ), which packages the core of
          | Kubernetes into a smaller executable and uses SQLite by
          | default for storage, if I recall correctly. I've used it
          | briefly and the resource usage was indeed far more reasonable
          | than that of full Kubernetes clusters (like RKE).
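          | 
          | For illustration, the Swarm additions to a Compose file live
          | mostly under the `deploy` key (a minimal sketch):
          | 
          |     services:
          |       web:
          |         image: nginx:alpine
          |         deploy:
          |           replicas: 2
          |           placement:
          |             constraints:
          |               - node.role == worker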
        
           | hamiltont wrote:
           | Wanted to second that Docker Swarm has been an excellent
           | "middle step" for two different teams I've worked on. IMO too
           | many people disregard it right away, not realizing that it is
           | a significant effort for the average dev to learn
           | containerization+k8s at the same time, and it's impossible to
           | do that on a large dev team without drastically slowing your
           | dev cycles for a period.
           | 
            | When migrating from a non-containerized deployment process
            | to a containerized one, there are a lot of new skills the
            | employees have to learn. We've had 40+ employees, all of
            | whom are basically full of work, and the mandate comes down
            | to containerize, and all of these old-school RPM/DEB folks
            | suddenly need to start doing docker. No big deal, right?
            | Except... half the stuff does not dockerize easily and
            | requires some slightly-more-than-beginner docker skills.
            | People will struggle and be frustrated.
            | 
            | Folks start with running one container manually, and quickly
            | outgrow that to use compose. They almost always eventually
            | use compose to run stuff in prod at some point, which works,
            | but eventually that one server is full. _This_ is the value
            | of swarm - letting people expand to multi-server and get a
            | taste of orchestration, without needing them to install new
            | tools or learn new languages. Swarm adds just one or two
            | small new concepts (stack and service) on top of everything
            | they have already learned. It's a godsend to tell a team
            | they can just run swarm init, use their existing yaml
            | files, and add a worker to the cluster.
            | 
            | Most folks start to learn about placement constraints,
            | deployment strategies, dynamic infrastructure like reverse
            | proxies or service meshes, etc. After a bit of comfort and
            | growth, a switch to k8s is manageable and the team is
            | excited about learning it instead of overwhelmed. A lot
            | (all?) of the concepts in swarm are readily present in k8s,
            | so the transition is much simpler.
        
             | e12e wrote:
             | We currently have one foot in Docker swarm (and single node
             | compose), and considering k8s. One thing I'm uncertain of,
             | is the state of shared storage/volumes in swarm - none of
             | the options seem well supported or stable. I'm leaning
             | towards trying nfs based volumes, but it feels like it
             | might be fragile.
        
           | proxysna wrote:
            | Nomad also scales really well. In my experience swarm had a
            | lot of issues with going above 10 machines in a cluster:
            | stuck containers, containers that are there but swarm can't
            | see them, and more. But still, I loved using swarm with my
            | 5-node ARM cluster; it is a good place to start when you
            | hit the limit of a single node.
           | 
           | > The only serious downsides is having to use the HCL DSL (
           | https://github.com/hashicorp/hcl ) and their web UI being
           | read only in the last versions that i checked.
           | 
            | 1. IIRC you can run jobs directly from the UI now, but IMO
            | this is kinda useless. Running a job is as simple as 'nomad
            | run jobspec.nomad'. You can also run a great alternative UI
            | ( https://github.com/jippi/hashi-ui ).
           | 
            | 2. IMO HCL > YAML for job definitions. I've used both
            | extensively and HCL always felt much more human-friendly.
            | The way K8s uses YAML looks to me like stretching it to its
            | limits, and it's barely readable at times with templates.
           | 
           | One thing that makes nomad a go-to for me is that it is able
           | to run workloads pretty much anywhere. Linux, Windows,
           | FreeBSD, OpenBSD, Illumos and ofc Mac.
        
         | imachine1980_ wrote:
          | I'd ask if you know Nomad (I didn't use it myself), but
          | co-workers say it was easier to deploy.
        
           | debarshri wrote:
            | Yes, I did look into Nomad. Again, the specification of the
            | application to deploy is much simpler than with Kubernetes.
            | But I think from an operational point of view you still have
            | the complexity: it has similar concepts and abstractions to
            | Kubernetes when you operate a Nomad cluster.
        
             | zie wrote:
             | For a single machine, you don't need to operate a nomad
             | cluster: `nomad agent -dev` instantly gives you a 1-node
             | cluster ready to go.
             | 
              | If you decide to grow past 1 node, it's a little more
              | complex, but not by a lot (unlike k8s).
        
         | rcarmo wrote:
          | I have been toying with the notion of extending Piku
          | (https://github.com/piku) to support multiple machines (i.e.,
          | a reasonable number) behind the initial deploy target.
         | 
         | Right now I have a deployment hook that can propagate an app to
         | more machines also running Piku after the deployment finishes
         | correctly on the first one, but stuff like green/blue and
         | database migrations is a major pain and requires more logic.
        
         | willvarfar wrote:
         | Juju perhaps?
        
           | debarshri wrote:
           | Are you talking about this[1]?
           | 
           | [1] https://juju.is/
        
           | FunnyLookinHat wrote:
           | I think Juju (and Charms) really shine more with bare-metal
           | or VM management. We looked into trying to use this for
           | multi-tenant deployment scenarios a while ago (when it was
           | still quite popular in the Ubuntu ecosystem) and found it
           | lacking.
           | 
           | At this point, I think Juju is most likely used in place of
           | other metal or VM provisioning tools (like chef or Ansible)
           | so that you can automatically provision and scale a system as
           | you bring new machines online.
        
             | werewolf wrote:
              | Sadly there is very little activity aiming at bare metal
              | and VMs nowadays. If you look at the features presented
              | during the past couple of months, you will find mainly
              | Kubernetes, and switching from charms to operators. But
              | kudos to the OpenStack charmers holding on and doing great
              | work.
        
         | stavros wrote:
         | Agreed, but I made this because I couldn't find a simple
         | orchestrator that used some best practices even for a single
         | machine. I agree the problem is not hard (Harbormaster is
         | around 550 lines), but Harbormaster's value is more in the
         | opinions/decisions than the code.
         | 
         | The single-file YAML config (so it's easy to discover exactly
         | what's running on the server), the separated data/cache/archive
         | directories, the easy updates, the fact that it doesn't need
         | built images but builds them on-the-fly, those are the big
         | advantages, rather than the actual `docker-compose up`.
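          | 
          | Roughly, the config is one YAML file listing every app on the
          | server (a sketch; exact field names may differ from the
          | README, and the repo URLs are made up):
          | 
          |     apps:
          |       homeassistant:
          |         url: https://gitlab.com/example/homeassistant-compose.git
          |       oldservice:
          |         url: https://gitlab.com/example/oldservice.git
          |         enabled: false  # shut down; reason in the commit message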
        
           | debarshri wrote:
            | What is your perspective on multiple docker compose files,
            | where you can do `docker-compose -f <file name> up`? You
            | could organise it in a way that all the files are in the
            | same directory. Just wondering.
        
             | stavros wrote:
             | That's good too, but I really like having the separate
             | data/cache directories. Another issue I had with the
             | multiple Compose files is that I never knew which ones I
             | had running and which ones I decided against running
             | (because I shut services down but never removed the files).
             | With the single YAML file, there's an explicit `enabled:
             | false` line with a commit message explaining why I stopped
             | running that service.
        
               | debarshri wrote:
                | I understand your problem. I have seen people solve
                | that with docker_compose_$ENV.yaml. You could set the
                | ENV variable and then the appropriate file would be
                | used.
        
               | stavros wrote:
               | Hmm, what did you set the variable to? Prod/staging/etc?
               | I'm not sure how that documents whether you want to keep
               | running the service or not.
        
               | GordonS wrote:
               | Might be I'm missing something, but I often go the route
               | of using multiple Compose files, and haven't had any
               | issue with using different data directories; I just mount
               | the directory I want for each service, e.g.
               | `/opt/acme/widget-builder/var/data`
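                | 
                | I.e. (a sketch; the image name and container path are
                | made up):
                | 
                |     services:
                |       widget-builder:
                |         image: acme/widget-builder:latest
                |         volumes:
                |           - /opt/acme/widget-builder/var/data:/var/data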
        
               | stavros wrote:
               | Harbormaster doesn't do anything you can't otherwise do,
               | it just makes stuff easy for you.
        
       | reddec wrote:
        | Looks nice. I did something similar not long ago:
        | https://github.com/reddec/git-pipe
        
         | uniqueuid wrote:
         | Wow this has a lot of great features baked in.
         | 
          | Especially the backup and Let's Encrypt elements are great. And
         | it handles docker networks, which makes it very flexible.
         | 
         | Will definitely check it out.
        
       | nixgeek wrote:
        | Any chance this will get packaged up as a container instead of
        | "pipx install", so that all the timers can just be in the
        | container, and it can control Docker via the socket exposed to
        | the container?
       | 
       | Simple one-time setup and then everything is a container?
       | 
        | If that's interesting to OP then I might look into it one
        | weekend soon.
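        | 
        | Hypothetically, the run side could look something like this (no
        | such image exists today; the image name and paths are made up):
        | 
        |     services:
        |       harbormaster:
        |         image: harbormaster:latest  # hypothetical image
        |         volumes:
        |           - /var/run/docker.sock:/var/run/docker.sock
        |           - ./harbormaster.yml:/config/harbormaster.yml:ro
        |           - ./data:/data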
        
         | jlkuester7 wrote:
         | +1 for this! One of the things that I like most about my Docker
         | setup is that I am basically agnostic to the setup of the host
         | machine.
        
         | stavros wrote:
         | Oh yeah, that's very interesting! That would be great, I forgot
         | that you can expose the socket to the container. I'd definitely
         | be interested in that, thanks!
        
       | devmor wrote:
       | If this ever gets expanded to handle clustering, it'd be perfect
       | for me. I use k8s on my homelab across multiple raspberry pis.
        
       | rcarmo wrote:
       | This looks great. But if you don't need containers or are using
       | tiny hardware, consider trying out Piku:
       | 
       | https://github.com/piku
       | 
       | (You can use docker-compose with it as well, but as a deployment
       | step -- I might bake in something nicer if there is enough
       | interest)
        
         | stavros wrote:
         | That looks nice, isn't it kind of like Dokku? It's a nice
         | option but not a very good fit if you don't need ingress/aren't
         | running web services (most of my services were daemons that
         | connect to MQTT).
        
           | rcarmo wrote:
           | You can have services without any kind of ingress. It's
           | completely optional to use nginx, it just gets set up
           | automatically if you want to expose a website.
           | 
           | My original use case was _exactly_ that (MQTT services).
        
         | uniqueuid wrote:
          | +1 for Piku, which is one of my favorite examples of "right
         | abstraction, simple, just works, doesn't re-invent the
         | architecture every 6 months".
         | 
         | Thanks for that, Rui!
        
           | rcarmo wrote:
            | Well, I am thinking of reinventing around 12 lines of it to
            | add explicit Docker/Compose support, but it's been a year or
            | so since any major changes other than minor tweaks :)
            | 
            | It has also been deployed on all top 5 cloud providers via
            | cloud-init (and I'm going back to AWS plain non-Ubuntu AMIs
            | whenever I can figure out the right packages).
        
       | pmlnr wrote:
        | The anti-Kubernetes is rpm/yum/apt/dpkg/pkg and all the other
        | old-school package managers.
        
       | uniqueuid wrote:
       | This looks awesome!
       | 
       | What I couldn't immediately see from skimming the repo is:
       | 
       | How hard would it be to use a docker-based automatic https proxy
       | such as this [1] with all projects?
       | 
        | I've had a handful of docker-based services running for many
        | years and love the convenience. What I'm doing now is simply
        | wrapping the images in a bash script that stops the containers,
        | snapshots the ZFS volume, pulls newer versions and re-launches
        | everything. That's then run via cron once a day. Zero issues
        | across at least five years.
       | 
       | [1] https://github.com/SteveLTN/https-portal
        
         | stavros wrote:
         | Under the hood, all Harbormaster does is run `docker-compose
         | up` on a bunch of directories. I'm not familiar with the HTTPS
         | proxy, but it looks like you could just add it to the config
         | and it'd auto-deploy and run.
         | 
         | Sounds like a very good ingress solution, I'll try it for
         | myself too, thanks! I use Caddy now but configuration is a bit
         | too manual.
        
           | uniqueuid wrote:
           | Thanks!
           | 
           | One thing to note is that you'll need to make sure that all
           | the compose bundles are on the same network.
           | 
            | I.e. add this to all of them:
            | 
            |     networks:
            |       default:
            |         external:
            |           name: nginx-proxy
        
             | stavros wrote:
             | Ah yep, thanks! One thing that's possible (and I'd like to
             | do) with Harbormaster is add configuration to the upstream
             | apps themselves, so to deploy, say, Plex, all you need to
             | do is add the Plex repo URL to your config (and add a few
             | env vars) and that's it!
             | 
             | I already added a config for Plex in the Harbormaster repo,
             | but obviously it's better if the upstream app itself has
             | it:
             | 
              | https://gitlab.com/stavros/harbormaster/-/blob/master/apps/p...
        
         | 3np wrote:
          | FWIW Traefik is pretty easy to get running and configured based
          | on container labels, which you can set in compose files.
         | 
         | Traefik can be a bit hairy in some ways, but for anything you'd
         | run Harbormaster for it should be a good fit.
         | 
         | Right now I have some Frankenstein situation with all of
         | Traefik, Nginx, HAProxy, Envoy (though this is inherited from
         | Consul Connect) at different points... I keep thinking about
         | replacing Traefik with Envoy, but the docs and complexity are a
         | bit daunting.
        
       | gentleman11 wrote:
       | How am I supposed to know whether to jump on the kubernetes
       | bandwagon when all these alternatives keep popping up?
       | Kidding/not kidding
        
         | debarshri wrote:
         | Depends upon which job interview you are going to.
         | 
          | If it is a startup, use some buzzwords like cloud native,
          | devops, etc. Check their sentiments towards Kubernetes.
          | 
          | On a serious note, you might have to jump on the Kubernetes
          | bandwagon whether you like it or not, as many companies are
          | seriously investing their resources in it. Having spoken to
          | various companies from series A to enterprise, I see that
          | Kubernetes adoption is actually not as high as I would have
          | imagined based on the hype.
         | 
          | P.S. The discussion of Kubernetes or not Kubernetes was
          | recently accelerated by a post from Ably [1].
         | 
         | [1] https://ably.com/blog/no-we-dont-use-kubernetes
        
           | p_l wrote:
            | What's missing from the conversation is that said blog post
            | can be summarised as "we have money to burn".
        
         | proxysna wrote:
         | This is not an alternative, just a small personal project.
         | Learn docker, basics of kubernetes and maybe nomad.
        
       | tkubacki wrote:
        | My simple solution for smaller projects is to SSH with a port
        | forward to a docker registry - I wrote a blog post on that
        | topic:
       | 
       | https://wickedmoocode.blogspot.com/2020/09/simple-way-to-dep...
        
       | pdimitar wrote:
       | This looks super, I'll try it on my NAS.
        
       | zeckalpha wrote:
       | If it can pull from git, why not have the YAML in a git repo,
       | too?
        
         | stavros wrote:
         | That is, in fact, the recommended way to deploy it! If you look
         | at the systemd service/timer files, that's what it does, except
         | Harbormaster itself isn't aware of the repo.
         | 
         | I kind of punted on the decision of how to run the top layer
         | (ie have Harbormaster be a daemon that auto-pulls its config),
         | but it's very simple to add a cronjob to `git pull;
         | harbormaster` (and is more composable) so I didn't do any more
         | work in that direction.
        
       | nijave wrote:
        | At a previous place I worked, someone set up something similar
        | with `git pull && ansible-playbook` on a cron.
       | 
       | It was using GitHub so just needed a read-only key and could be
       | bootstrapped by connecting to the server directly and running the
       | playbook once
       | 
       | In addition, it didn't need any special privileges or
       | permissions. The playbook setup remote logging (shipping to
       | CloudWatch Logs since we used AWS heavily) along with some basic
       | metrics so the whole thing could be monitored. Plus, you can get
       | a cron email as basic monitoring to know if it failed
       | 
       | Imo it was a pretty clever way to do continuous deploy/updates
       | without complicated orchestrators, management servers, etc
        
       | adamddev1 wrote:
        | I guess I'm one of those people mentioned in the rationale who
        | keeps little servers ($5-10 Droplets) and runs a few apps on
        | them (like a couple of Node/Go apps, a CouchDB, a Verdaccio
        | server). I also haven't had issues with things breaking as I do
        | OS updates. Seems like it would be nice though just to have a
        | collection of dockerfiles that could be used to deploy a new
        | server automatically. My current "old fashioned" way has been
        | very doable for me, but my big question before jumping to some
        | Docker-based setup is: does running everything on Docker take a
        | huge hit on the performance/memory/capabilities of the machine?
        | Could I still comfortably run 4-5 apps on a $5 Droplet,
        | assuming I would have separate containers for each app? I'm
        | having trouble finding info about this.
        
         | jrockway wrote:
         | "Docker containers" are Linux processes with maybe a
         | filesystem, cpu/memory limits, and a special network; applied
         | through cgroups. You can do all of those things without Docker,
         | and there is really not much overhead.
         | 
         | systemd has "slice units" that are implemented very similarly
         | to Docker containers, and it's basically the default on every
         | Linux system from the last few years. It's underdocumented but
         | you can read a little about it here:
         | https://opensource.com/article/20/10/cgroups
        
           | adamddev1 wrote:
           | Cool stuff with the "slice units." I use systemd to keep apps
           | running but didn't know all this. And yes I understand the
           | basics of what Docker containers are. It just seems logical
           | to me that it would be a lot more taxing on the system
           | running that overhead. Like is it exponentially harder to fit
           | the same amount of apps on a droplet if they're all
           | containerized? Or is it still easy to run 4-5 modest
           | containerized apps on a $5 droplet?
        
         | stavros wrote:
         | I haven't noticed any performance degradation (though granted,
         | these are small apps), and my home server is 10 years old (and
         | was slow even then).
        
       | mnahkies wrote:
       | Interestingly this seems like a pretty popular problem to solve.
       | 
        | I made a similar thing recently as well, although with the goal
        | of handling ingress and monitoring out of the box too, whilst
        | still being able to run comfortably on a small box.
       | 
       | I took a fairly similar approach, leveraging docker-compose
       | files, and using a single data directory for ease of backup
       | (although it's on my to-do list to split out conf/data).
       | 
        | If there were a way to get a truly slim and easy-to-set-up k8s-
        | compatible environment I'd probably prefer that, but I couldn't
        | find anything that wouldn't eat most of my small server's RAM.
       | 
       | https://github.com/mnahkies/shoe-string-server if you're
       | interested
        
         | debarshri wrote:
          | It is quite slim and easy to set up a k8s environment now,
          | thanks to MicroK8s and k3s. MicroK8s comes with newer versions
          | of Ubuntu; k3s is a single-binary installation.
        
           | mnahkies wrote:
           | Last I checked k3s required a min of 512mb of ram, 1gb
           | recommended. Is this not the case?
        
             | debarshri wrote:
              | Yes, it is. Docker's minimum requirement is 512mb with
              | 2gb recommended. Containerd + k8s has almost the same
              | requirements.
        
         | stavros wrote:
          | Huh, nice! I think the main problem your project and mine
          | have is that they're difficult to explain, because they're
          | more about the opinions they hold than about what they do.
         | 
         | I'll try to rework the README to hopefully make it more
         | understandable, but looking at your project's README I get as
         | overwhelmed as I imagine you get looking at mine. It's a lot of
         | stuff to explain in a short page.
        
       | mafro wrote:
       | So far I've found that "restart: always" in the compose.yml is
       | enough for my home server apps. In the rare case that one of the
       | services is down, I can SSH in and have a quick look - after all
       | it's one of my home servers, not a production pod on GKE :p
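        | 
        | I.e. just this per service (a trivial sketch; image made up):
        | 
        |     services:
        |       myapp:
        |         image: ghcr.io/example/myapp:latest
        |         restart: always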
       | 
       | That said, the project looks pretty good! I'll have a tinker and
       | maybe I'll be converted
        
         | uniqueuid wrote:
         | Just to add: It's definitely a bad practice to never update
         | your images, because the docker images _and their base images_
          | will accumulate security holes. There aren't many solutions
         | around for automatically pulling and running new images.
        
           | NortySpock wrote:
            | I've heard about Watchtower (auto-update) and Diun (Docker
            | Image Update Notifier), but I haven't quite found something
           | that will "just tell me what updates are available, on a
           | static site".
           | 
           | I want to "read all available updates" at my convenience, not
           | get alerts reminding me to update my server.
           | 
            | Maybe I need to write some sort of plugin for Diun that
            | appends to a text file or web page or SQLite db... Hm.
        
             | andrewkdinh wrote:
             | Looks like https://crazymax.dev/diun/notif/script/ would be
             | useful for that.
             | 
             | Personally, since I'm a big fan of RSS, I'd set up email in
              | Diun and send it to an email generated by
              | https://kill-the-newsletter.com/
        
           | scandinavian wrote:
           | >There aren't many solutions around for automatically pulling
           | and running new images.
           | 
           | Isn't that exactly what watchtower does?
           | 
           | https://github.com/containrrr/watchtower
           | 
           | It works great on my mediacenter server running deluge, plex,
           | sonarr, radarr, jackett and OpenVPN in docker.
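            | 
            | The setup is roughly just Watchtower as one more container
            | with the Docker socket mounted (a sketch):
            | 
            |     services:
            |       watchtower:
            |         image: containrrr/watchtower
            |         volumes:
            |           - /var/run/docker.sock:/var/run/docker.sock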
        
             | Aeolun wrote:
             | My experience with watchtower is that it kept breaking
             | stuff (or maybe just pulling broken images?)
             | 
             | My server was much more stable after it didn't try to
             | update all the time any more.
             | 
             | I wonder if I can set a minimum timeout.
        
             | uniqueuid wrote:
             | Well, yes. Curiously enough, (IIRC) watchtower started out
             | automatically pulling new images when available. Then the
             | maintainers found that approach to be worse than proper
             | orchestration and disabled the pulling. Perhaps it's
             | different now.
        
             | stavros wrote:
             | Watchtower runs the images if they update, but AFAIK it
             | doesn't pull if the base image changes.
             | 
             | Then again, Harbormaster doesn't do that either unless the
             | upstream git repo changes.
        
         | stavros wrote:
         | Agreed about restarting, but I hated two things: Having to SSH
         | in to make changes, and having a bunch of state in unknowable
         | places that made it extremely hard to change anything or
         | migrate to another machine if something happened.
         | 
         | With Harbormaster, I just copy one YAML file and the `data/`
         | directory and I'm done. It's extremely convenient.
        
       | nonameiguess wrote:
       | Beware that harbormaster is also the name of a program for adding
       | RBAC to docker: https://github.com/kassisol/hbm
       | 
       | It's kind of abandonware because it was the developer's PhD
       | project and he graduated, but it is rather unfortunately widely
       | used in one of the largest GEOINT programs in the US government
       | right now because it was the only thing that offered this
       | capability 5 years ago. Raytheon developers have been begging to
       | fork it for a long time so they can update and make bug fixes,
       | but Raytheon legal won't let them fork a GPL-licensed project.
        
         | aidenn0 wrote:
         | It's also the CI component of (the now unmaintained)
         | Phabricator
        
         | stavros wrote:
         | Yeah, there were a few projects named that :/ I figured none of
         | them were too popular, so I just went ahead with the name.
        
         | ThaJay wrote:
          | One of them should fork it on their personal account and work
          | on it during business hours. No liability and all the
          | benefits. Don't tell legal, obviously.
         | 
         | "Someone forked it so now our fixes can get merged! :D"
        
           | nonameiguess wrote:
           | I've honestly considered this since leaving. Why not do my
           | old coworkers a solid and fix something for them, but then I
           | consider I'd be doing free labor for a company not willing to
           | let its own workers contribute to a project if they can't
           | monopolize the returns from it.
        
             | vonmoltke wrote:
             | > I consider I'd be doing free labor for a company not
             | willing to let its own workers contribute to a project if
             | they can't monopolize the returns from it
             | 
             | I don't think that is the reason. When Raytheon or other
             | contractors perform software work under a DOD contract
             | (i.e., they charge the labor to a contract) the government
             | generally gets certain exclusive rights to the software
             | created. Raytheon is technically still the copyright
             | holder, but effectively is required to grant the US
             | government an irrevocable license to do whatever they want
             | with the source in support of government missions if the
             | code is delivered to the government. Depending on the
             | contract, such code may also fall under blanket non-
             | disclosure agreements. I believe both of these are
             | incompatible with the GPL, and the latter with having a
             | public fork at all.
             | 
             | The company could work this out with the government, but it
             | would be an expensive and time-consuming process because
             | government program offices are slow, bureaucratic, and hate
             | dealing with small exceptions on large contracts. They
             | might even still refuse to make the contract mods required
             | at the end simply because they don't understand it or they
             | are too risk averse. Legal is likely of the opinion that it
             | isn't worth trying, and the Raytheon program office likely
             | won't push them unless they can show a significant benefit
             | for the company.
        
       | 3np wrote:
       | For users who are fine with the single-host scope, this looks
       | great. Definitely easier than working with systemd+$CI, if you
       | don't need it (and for all the flame it gets, systemd is very
       | powerful if you just spend the time to get into it, but then
       | again if you don't need it you don't)
       | 
       | I could also see this being great for a personal lab/playground
       | server. Or for learning/workshops/hackathons. Super easy to get
       | people running from 0.
       | 
       | If I ever run a class or workshop that has some server-side
       | aspect to it, I'll keep this in mind for sure.
        
       ___________________________________________________________________
       (page generated 2021-08-19 23:00 UTC)