[HN Gopher] Managing my personal servers in 2020 with K3s
       ___________________________________________________________________
        
       Managing my personal servers in 2020 with K3s
        
       Author : shamallow
       Score  : 129 points
       Date   : 2020-11-05 20:47 UTC (2 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | twiclo wrote:
       | I'll never understand people who run kubernetes, or even docker
       | for that matter, on their personal servers
        
         | k__ wrote:
         | I don't even understand 99% of the people who do it for their
         | business workloads, but who am I to judge.
        
         | yoyohello13 wrote:
         | For fun?
        
         | p_l wrote:
         | The amount of work I have to do on k3s vs the amount of work
         | I have to do on a "classic" setup... well, k3s is greatly
         | simpler. And I can throw out some problematic distro
         | idiosyncrasies.
        
       | dsr_ wrote:
       | "When you try to CRAM everything (mail, webserver, gitlab, pop3,
       | imap, torrent, owncloud, munin, ...) into a single machine on
       | Debian, you ultimately end-up activating unstable repository to
       | get the latest version of packages and end-up with conflicting
       | versions between softwares to the point that doing an apt-get
       | update && apt-get upgrade is now your nemesis."
       | 
       | This is not my experience.
       | 
       | My main house server runs:
       | 
       | mail: postfix, dovecot, clamav (SMTP, IMAP)
       | 
       | web: nginx, certbot, pelican, smokeping, dokuwiki, ubooquity,
       | rainloop, privoxy (personal pages, blog, traffic tracking, wiki,
       | comic-book server, webmail, anti-ad proxy)
       | 
       | git, postgresql, UPS monitoring, NTP, DNS, and DHCPd.
       | 
       | Firewalling, more DNS and the other part of DHCPd failover is on
       | the router.
       | 
       | Package updates are a breeze. The only time I pay the overhead
       | of a virtual machine is when I'm testing out new configurations
       | and don't want to break what I have.
       | 
       | "just having the Kubernetes server components running add a 10%
       | CPU on my Intel(R) Atom(TM) CPU C2338 @ 1.74GHz."
       | 
       | Containerization is not a win here. Where's the second machine to
       | fail over to?
        
         | Narkov wrote:
         | Sounds like a dream to compromise! Rather than one attack
         | surface, there's like 25. Crack one and you take the lot.
        
           | toyg wrote:
           | It's a home server, not an enterprise network. Security is a
           | trade-off.
        
           | dsr_ wrote:
           | It provides 25 services. If I were going to deploy 25
           | machines to provide 25 services, that would not reduce the
           | attack surface... of my house server.
        
         | rektide wrote:
         | > Containerization is not a win here. Where's the second
         | machine to fail over to?
         | 
         | It would probably take an hour or less to add another node to
         | this setup. The author made some choices that block scaling,
         | & changing that (installing service-lb, a local provisioner)
         | would take a chunk of that time. Finding a resilient
         | replacement for the local provisioner is something I wish we
         | were better at, but the folks at Rook.io have a pretty good
         | start on this.
         | 
         | To me, the real hope is that we move more and more of our
         | configuration into Kubernetes state. Building a container with
         | a bunch of baked in configuration is one thing, but I hope we
         | are headed towards a more "cloud native" system, where the
         | email server is run not as containers, but as an operator,
         | where configuration is kept in Kubernetes, and the operator
         | goes out & configures the containers to run based on that.
         | 
         | I agree that running a bunch of services on a Debian box with
         | a couple of different releases (testing/unstable) pinned into
         | apt is
         | not really that hard. But I am very excited to stop managing
         | these pets. And I am very hopeful that we can start moving more
         | and more of our configuration from /etc/whateverd/foo.conf
         | files into something centrally & consistently managed. The
         | services themselves all require special unique management
         | today, & the hope, the dream, is that we get something more
         | like big-cloud-style dashboards, where each of these services
         | can be managed via common Kubernetes tools that apply across
         | all our services.
        
           | dsr_ wrote:
           | "But I am very excited to stop managing these pets."
           | 
           | When you have a herd of cattle which is of size 1, it's a
           | pet. You don't get any efficiencies from branding them all
           | with laser-scannable barcodes, an 8-place milking machine, or
           | an automatic silage manager. You still need to call the vet,
           | and the vet needs to know what they are doing.
        
         | shamallow wrote:
         | It is because at the time I was doing a lot of Python
         | development, and I was (and still am) using my server as a
         | dev workstation. Isolation with virtualenv was not great, and
         | many projects needed conflicting versions of system packages,
         | or newer versions than what Debian stable had.
         | 
         | A lot of the issue was me messing around \o/
         | 
         | > "just having the Kubernetes server components running add a
         | > 10% CPU on my Intel(R) Atom(TM) CPU C2338 @ 1.74GHz."
         | > Containerization is not a win here. Where's the second
         | > machine to fail over to?
         | 
         | I think it is worth it in order to get a centralized control
         | plane for everything, and automatic build and deployment for
         | everything.
         | 
         | But I agree with you: some apps (postfix, dovecot) don't feel
         | great inside a container (sharing data across UIDs is meh,
         | and postfix's multi-process design doesn't help either...).
         | 
         | I just wanted to have everything managed in containers, and
         | since they were the last ones left, I moved them in too.
        
         | [deleted]
        
         | znpy wrote:
         | > "When you try to CRAM everything (mail, webserver, gitlab,
         | pop3, imap, torrent, owncloud, munin, ...) into a single
         | machine on Debian, you ultimately end-up activating unstable
         | repository to get the latest version of packages and end-up
         | with conflicting versions between softwares to the point that
         | doing an apt-get update && apt-get upgrade is now your
         | nemesis."
         | 
         | I use Proxmox to avoid that. Some things I run in VMs (often
         | with Docker containers); other things I run in LXC containers
         | (persistent containers that behave like VMs).
         | 
         | I can then use automation (mostly Proxmox templates and
         | Ansible) to make deployments repeatable.
         | 
         | I'm interested in k3s, though; I'll give it a closer look :)
         | 
         | The next addition will be some form of NAS, either a
         | qnap/synology or a custom build using FreeNAS or Unraid
         | (probably FreeNAS).
        
         | gavinray wrote:
         | Containerization and container orchestration platforms are only
         | partly about scalability.
         | 
         | The primary appeal for me is ease of deployment and
         | reproducibility. This is why I develop everything in Docker
         | Compose locally.
         | 
         | Maybe the equivalent here would be something like Guix or Nix
         | for declaratively writing the entire state of all the desired
         | system packages, services, and versions, but honestly
         | (without personal experience using these) they seem harder
         | than containers.
        
           | ralmeida wrote:
           | Exactly! Other non-scalability concerns it addresses
           | (specifically talking about Kubernetes here) are built-in
           | primitives for monitoring/observability; no-downtime
           | updates (rolling updates); liveness/readiness probes; basic
           | service discovery and load balancing; and resiliency to any
           | single host failing (even if the total compute power could
           | easily fit onto a single bigger server).
        
             | dsr_ wrote:
             | Which of these are things you want on your house server?
             | That's what the article author is writing about, and what I
             | am writing about.
             | 
             | I do not need an octopus conducting a herd of elephants.
        
               | p_l wrote:
                | I recently ended up setting up a "classic" server
                | again, after a significant time keeping mostly
                | containerized infrastructure on k8s.
                | 
                | Never again; the number of things that are simply
                | _harder_ in comparison is staggering.
        
               | Scramblejams wrote:
               | Sorry to sound pedantic, but what was harder?
               | Containerized infra or a classic server? I assume the
               | former but wanted to be sure.
        
               | gavinray wrote:
               | I can agree that the idea of reaching for Kubernetes to
               | set up a bunch of services on a home server sounds a bit
               | absurd.
               | 
               | "How did we get here?"
               | 
               | I'm not an inexperienced codemonkey by any definition
               | of the term, but I am a shitty sysadmin. And despite
               | being a Linux user since my early teens, I'm not a
               | greybeard.
               | 
               | As sorry a state as it may sound, I have more faith in my
               | ability to reliably run and maintain a dozen containers
               | in k8s than a dozen standard, manually installed apps +
               | processes managed by systemd.
               | 
               | Whether this is a good thing or a bad thing, you can
               | likely find solid arguments both ways.
        
           | dsr_ wrote:
           | I'm not deploying; this is _the server_. I do backups, and
           | I keep config in git.
           | 
           | Reproducibility? This is _the server_. I will restore from
           | backups. There is no point in scaling.
           | 
           | If you want to argue that containerization and VMs are
           | portable and deployable and all that, I agree. This is not a
           | reasonable place to do that extra work.
        
           | tomberek wrote:
           | Nix/NixOS for this purpose is very nice.
        
         | polote wrote:
         | I don't believe it; if you have PostgreSQL, package updates
         | can't be a breeze. Every major version, you need to manually
         | convert the database.
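         | 
         | On Debian, that conversion is typically done with the
         | postgresql-common tooling; roughly (a sketch only, the
         | cluster name "main" and version numbers are examples):
         | 
         |     pg_lsclusters               # list old and new clusters
         |     pg_upgradecluster 11 main   # dump/restore into the new version
         |     pg_dropcluster 11 main      # remove the old cluster once happy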
        
       | bndw wrote:
       | Great write-up! I run a similar setup and documented the
       | high-level architecture here:
       | https://bdw.to/personal-infrastructure.html
        
       | candiddevmike wrote:
       | I use Debian stable and systemd-nspawn, which gives me the
       | "virtual machine" experience (separate filesystem,
       | init/systemd, network
       | address, etc) via lightweight containers that are really easy to
       | start, stop, and share files between. All managed by ansible.
       | Once a month I bump versions, run ansible, and forget about it.
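       | 
       | For reference, a minimal sketch of that kind of nspawn setup
       | (the machine name "web" and the paths are just examples):
       | 
       |     apt install systemd-container debootstrap
       |     debootstrap stable /var/lib/machines/web http://deb.debian.org/debian
       |     systemd-nspawn -D /var/lib/machines/web passwd   # set a root password
       |     machinectl start web    # boot it as a managed container
       |     machinectl shell web    # get a shell inside it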
        
         | packetlost wrote:
         | I really wanted to like systemd-nspawn, but ran into massive
         | issues with poor to non-existent documentation, bugs (in
         | particular with DNS between 'containers'), and usability
         | issues.
         | 
         | Also, the inability to reasonably run non-systemd distros
         | such as Alpine further killed my interest in it. Even distros
         | like Ubuntu, which use systemd, had to be modified to use the
         | systemd network stack in order to function properly.
        
       | Sodman wrote:
       | I love the k3s control-plane in the cloud with the Raspberry Pi
       | worker nodes running from home (connected over a VPN). Not a
       | bad use for all of those Raspberry Pis we developers seem to
       | accumulate over the years!
        
       | rektide wrote:
       | This is such a great write-up. I hope we continue to evolve the
       | modern ops setup on the metal, and make it easy for folks to
       | onboard into something that scales both big & high, and small &
       | low. This is, imo, enormously good tech to learn, and I feel
       | like many people are wasting their time learning "things that
       | work for them" or that "aren't as complex", when those personal
       | choices & investments will, in all likelihood, not pay off
       | elsewhere and will not be things other people are as likely to
       | know or use or enjoy. And k3s is so easy to use, and works so
       | well, that I think many folks kind of cheat themselves out of a
       | better experience when they pick something a little more
       | legacy, like Docker-compose.
       | 
       | Also notable: this is just a first step, & we can get better &
       | better at creating a wider system of services for the personal
       | server from Kubernetes. For example, I use the zalando
       | postgres-operator, which lets me just ask for/apply a
       | kubernetes object, and presto-chango, I have a postgres
       | database with as many replicated instances as I want. The
       | author here similarly enjoys having Let's Encrypt ambiently
       | available. Managing more and more systems within Kubernetes
       | will continue to scale the operational network effect of
       | choosing tech like Kubernetes, tech that doesn't just run
       | containers, but is an overarching cloud. Kubernetes is Desired
       | State Management, a repository for all of your (small) cloud's
       | state.
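       | 
       | As an illustration, asking for a database is roughly one
       | manifest (field names follow the zalando operator's examples;
       | the cluster name, size and version here are made up):
       | 
       |     kubectl apply -f - <<EOF
       |     apiVersion: acid.zalan.do/v1
       |     kind: postgresql
       |     metadata:
       |       name: acid-home-cluster
       |     spec:
       |       teamId: acid
       |       numberOfInstances: 2
       |       volume:
       |         size: 5Gi
       |       postgresql:
       |         version: "12"
       |     EOF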
       | 
       | I'd consider maybe replacing some of the hand-made WireGuard
       | work done here with Kilo[1], which can either run as a
       | Container Network Interface (CNI) plugin or wrap your existing
       | Kubernetes CNI provider (by default K3s uses Flannel). This
       | will automate the process nicely and let you manage things
       | like peers in Kubernetes easily. When the author connected the
       | RPi to
       | their existing cluster, that's exactly the sort of multi-cloud
       | topology that Kilo is there to help you run & manage, and from
       | inside of Kubernetes itself! Kilo rocks.
       | 
       | Also worth noting that some of the latter half of this write-up
       | is optional. Switching K3s's own ingress out for nginx is the
       | author's preference, for example. You may or may not need a
       | mail server.
       | The write-up is pretty long; I think it's worth highlighting that
       | the core of what's happening here, what others would need to do,
       | is pretty short.
       | 
       | I do enjoy that the author started the steps by running gpg &
       | sops, to make keys to secure this all. This is pretty rigorous.
       | It's good to see! I don't think all operators have to do this,
       | but it showed that the author was taking it fairly seriously.
       | 
       | For reference, I run a 3 node K3s cluster at home, a separate
       | single K3s instance, and am planning on trying to convert my
       | laptops & workstations over so that operationally I get the
       | same kind of great observability & metrics & manageability on
       | them that I enjoy on the cluster. I'd like to cloud-nativize
       | more of my day-to-day computing experience, for consistency's
       | sake, & because
       | I think uplifting many of the local things on my machine into
       | pieces of a larger Cloud body of state will give me more
       | flexibility & capabilities that I can enjoy playing with. I look
       | forward to becoming less machine-centric, and more cross-machine
       | cloud-centric.
       | 
       | [1] https://github.com/squat/kilo
        
         | e12e wrote:
         | > planning on trying to convert my laptops & workstations over
         | 
         | Eh, ok? So set up a mesh VPN, like ZeroTier, so that when you
         | close your notebook, Slack migrates to your workstation?
         | 
         | (I know, you highlighted monitoring - but nothing stops you
         | from running statsd or something on your laptop).
        
       | swsieber wrote:
       | > When you try to CRAM everything (mail, webserver, gitlab, pop3,
       | imap, torrent, owncloud, munin, ...) into a single machine on
       | Debian, you ultimately end-up activating unstable repository to
       | get the latest version of packages and end-up with conflicting
       | versions between softwares to the point that doing an apt-get
       | update && apt-get upgrade is now your nemesis.
       | 
       | Has anyone here taken a look at Bedrock Linux? It lets you have
       | multiple Linux installations coexist and interoperate
       | (different distros mainly, but different copies of Debian are
       | probably possible too).
       | 
       | I've been fascinated by it but never actually given it a try.
       | 
       | From the bedrock linux introduction page:
       | 
       | > Given someone already expended the effort to package the
       | specific version of the specific piece of software a given user
       | desires for one distro, the inability to use it with software
       | from another distro seems wasteful.
       | 
       | > Bedrock Linux provides a technical means to work around cross-
       | distro compatibility limitations and, in many instances, resolve
       | this limitation.
       | 
       | https://bedrocklinux.org/introduction.html
        
       | francis-io wrote:
       | k3s looks like it removes some of the "moving parts" from
       | Kubernetes, but for a single node setup, docker-compose might be
       | simpler to manage.
        
         | rektide wrote:
         | Docker-compose isn't going to help you with Let's Encrypt;
         | you're going to need to keep re-solving that problem for each
         | app you have, or find some other way to tackle it, because
         | you've picked a way to deploy containers but don't have any
         | kind of centralized cloud system at your back.
         | 
         | In my comments, I mention that the author could have used Kilo,
         | which would have been a Kubernetes-native way to manage their
         | WireGuard system, and to connect the Pi & their other systems
         | to their existing K3S system.
         | 
         | I agree that docker-compose might be simpler, but there's a
         | very, very limited realm of concerns that it will ever serve,
         | whereas Kubernetes's / the Cloud Native ambition is to manage
         | everything you would need in your cloud. Whatever you need
         | should, ideally, be manageable within the same framework.
         | 
         | DNS is another decent example, where Kubernetes will help you
         | manage domain names, somewhat. There's still work to be done
         | there, but there are some good starts. There are so many
         | operators, all of which purport to let you manage these
         | services in a "cloud native" way. We're still learning,
         | getting better at it, but being able to manage all these
         | things semi-consistently, via the same tools, is a
         | superpower.
         | https://github.com/operator-framework/awesome-operators
         | 
         | There's also the question of short-term wins vs. long-term
         | use. You will not use docker-compose at your job. More and
         | more people are going to be using Kubernetes to manage a
         | wider and wider variety of systems & services, bringing more
         | and more capabilities under Kubernetes management.
        
           | henryfjordan wrote:
           | If you run a single server, getting a Let's Encrypt cert is
           | as easy as running a cron script. Then you just run a single
           | instance of Nginx with the cert directory mounted as a
           | volume. You will have to do a little extra work to maintain
           | the nginx configuration to point to all your other
           | containers, but it's generally just copy/pasting a block and
           | changing the port to add a new service.
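            | 
            | Concretely, that pattern is roughly the following (the
            | domain, port and paths are placeholders, and certbot's
            | packages often install a renewal timer for you anyway):
            | 
            |     # cron entry to renew certs and reload nginx
            |     0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
            | 
            |     # one server block per service: copy/paste and change the port
            |     cat > /etc/nginx/conf.d/newapp.conf <<'NGINX'
            |     server {
            |         listen 443 ssl;
            |         server_name newapp.example.com;
            |         ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
            |         ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
            |         location / { proxy_pass http://127.0.0.1:8081; }
            |     }
            |     NGINX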
           | 
           | Kubernetes is cool but the only reason to run it on a single
           | instance is because you want to.
           | 
           | Also docker-compose isn't used to host stuff in production
           | very often but it is used to manage running local instances
           | quite a bit. I wouldn't write it off as not worth learning.
        
           | francis-io wrote:
            | A Traefik reverse proxy can be hosted using docker-compose,
            | and it will deal with fronting each container and
            | offloading SSL.
            | 
            | You can communicate between containers using hostnames. I
            | personally have a container keep DDNS updated for my home
            | setup. I'm not sure what else you mean by DNS.
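            | 
            | A rough sketch of that setup (flags and labels as in the
            | Traefik v2 docs; the domain and the whoami service are
            | placeholders):
            | 
            |     cat > docker-compose.yml <<'EOF'
            |     version: "3"
            |     services:
            |       traefik:
            |         image: traefik:v2.3
            |         command:
            |           - --providers.docker=true
            |           - --entrypoints.websecure.address=:443
            |           - --certificatesresolvers.le.acme.tlschallenge=true
            |           - --certificatesresolvers.le.acme.email=you@example.com
            |           - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
            |         ports: ["443:443"]
            |         volumes:
            |           - ./letsencrypt:/letsencrypt
            |           - /var/run/docker.sock:/var/run/docker.sock:ro
            |       whoami:
            |         image: traefik/whoami
            |         labels:
            |           - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
            |           - traefik.http.routers.whoami.entrypoints=websecure
            |           - traefik.http.routers.whoami.tls.certresolver=le
            |     EOF
            |     docker-compose up -d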
        
         | Sodman wrote:
         | I've recently moved my "personal infrastructure" from a docker-
         | compose setup to a k3s setup, and ultimately I think k3s is
         | better for most cases here.
         | 
          | FWIW, my docker-compose setup used
          | https://github.com/nginx-proxy/nginx-proxy and its
          | letsencrypt companion image, which "automagically" handles
          | adding new apps, new domains, and all SSL cert renewals,
          | which is awesome. It was also relatively easy to start up a
          | brand-new machine and re-deploy everything with a few
          | commands.
         | 
          | I started down the route of using kubeadm, but then quickly
          | switched to k3s and never looked back. It's now trivial to
          | add more horsepower to my infrastructure without having to
          | re-create _everything_ (spin up a new EC2 machine, run one
          | command to install k3s & attach to the cluster as a worker
          | node). There's also some redundancy there: if any of my tiny
          | EC2 boxes crashes, the apps will be moved to healthy boxes
          | automatically. I'm also planning on digging out a few old
          | Raspberry Pis to attach as nodes from home (over a VPN) just
          | for funsies.
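          | 
          | For anyone curious, the join really is about one command
          | (per the k3s docs; the server address and token here are
          | placeholders):
          | 
          |     # on the new node; the token lives at
          |     # /var/lib/rancher/k3s/server/node-token on the server
          |     curl -sfL https://get.k3s.io | \
          |         K3S_URL=https://my-k3s-server:6443 K3S_TOKEN=<node-token> sh -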
         | 
          | Ultimately k8s certainly has a well-earned reputation for a
          | steep learning curve, but once you get past that curve,
          | managing a personal cluster using k3s is pretty trivial.
        
           | jeffstephens wrote:
           | I found k3s to be VERY noisy in logs - I definitely recommend
           | log2ram if you want your SD card to last very long! (Or use
           | different external storage). I had two Pi nodes with
           | corrupted filesystems until I made the switch.
           | 
           | https://mcuoneclipse.com/2019/04/01/log2ram-extending-sd-
           | car...
        
       | an_opabinia wrote:
       | A refreshing and straightforward, opinionated writeup.
       | 
       | Cuts through the noise of the big cloud providers, who are
       | ironically incentivized to keep things pretty complicated.
       | 
       | Traefik isn't that complicated though, definitely worth learning.
        
       | jeromenerf wrote:
       | Very nice post indeed, even though I chose years ago to stop at
       | just running Debian stable, for personal stuff.
       | 
       | Someone, maybe 10 years ago, said that at scale one should
       | treat servers like cattle, not pets. At home, I feel it's OK to
       | treat your server as a pet. One self-hosted server that does it
       | all is not a 2020 anomaly; it's just boring and effective :)
       | 
       | I have spent a small part of my imposed 2020 free time
       | simplifying my family's digital cattle down to a few
       | self-hosted digital pets. I liberated myself from both gear and
       | services I was maintaining for sentimental value or out of some
       | professional distortion: the OpenBSD systems I love but don't
       | use enough, the Apple rMBP I never use, the VPSes for my
       | personal services, ... donated, sold, retired. It feels great.
        
       | midrus wrote:
       | LOL. I want to go back 10 years then. Or just use dokku. We've
       | gone crazy guys.
        
         | rektide wrote:
         | What seems crazy to me is that, a decade or two ago,
         | 
         | * We had few standard ways of doing things. Everyone was
         | cobbling together their own stack, figuring out their own ways
          | to run a plethora of services needed to run & operate an
          | internet-connected box.
         | 
          | * Each service or daemon needed to run your systems had its
         | own stand-alone management interfaces & systems. Nothing worked
         | with anything else. There was no apiserver there to store the
         | configurations you wanted. You had whatever scripts you wrote,
         | checked in to source, then a big "do the scripts" button, and
         | then you got a bunch of files written all over kingdom come to
         | various hosts to do all the various tasks that supposedly would
         | run your system, you hoped.
         | 
          | * People with bare metal had few tools to auto-scale or get
          | resiliency for their systems. Even if you did have three
          | machines, most operators managed them like pets, not cattle.
        
           | xorcist wrote:
           | Ten years ago was 2010. VMs had already taken over everything
           | by then. Some were convinced that VM images would take over
            | application distribution, because they solved all dependency
           | problems. Others weren't convinced and pointed to problems
           | with insight and integration.
           | 
           | Resiliency or scaling wasn't any easier or harder than now.
           | It's not 1985 we're talking about.
        
         | tdeck wrote:
         | I actually still use dokku for my side projects and it works
          | great. Very little learning curve, minimal headaches. Sure,
          | it doesn't scale out, but that hasn't been a problem so far.
        
       | renewiltord wrote:
       | Very cool. I use GKE with nginx as my ingress controller. The
       | Google LB ingresses are too expensive for this sort of thing.
       | 
       | Also appreciate the cert manager advice. Thank you!
        
       ___________________________________________________________________
       (page generated 2020-11-05 23:00 UTC)