commented: Cool! There is another way as well in recent versions of weechat: the new weechat api relay protocol allows you to connect a local weechat client to another weechat instance running somewhere else. You could thus run weechat-headless in Kubernetes and connect to it with a local client, as long as both ends run the same version. (A rough sketch of that setup follows the thread.)

commented: My first impression was that this was (characteristically) absurd, but it’s not that different from running an IRC bouncer on a VPS. I had a VPS to hand when I started doing that, and OP has a k8s cluster to hand. Using what’s within reach is rational.

commented: I’m still not really understanding the need for a VM. Just boot systemd in a container? It’s a bit unorthodox, but less unorthodox than running a VM on k8s.

commented: What’s unorthodox about using k8s to start a VM? GCE runs on Borg:

“VMs and security sandboxing techniques are used to run external software by Google’s AppEngine (GAE) [38] and Google Compute Engine (GCE). We run each hosted VM in a KVM process [54] that runs as a Borg task.”

https://pdos.csail.mit.edu/6.824/papers/borg.pdf

commented: But Borg and Kubernetes are not the same thing.

commented: They may be apples and oranges, but they’re not apples and fish. I am an SRE for Borg and have a homelab Kubernetes cluster, so I’m not completely ignorant about either.

commented: If I’m going to SSH into it, run tmux, and update weechat at runtime (it’s written in C), I might as well have a normal Linux VM that I understand instead of something else that I don’t understand as well. In a way, the things that make it exciting also make it boring.

commented: Overengineered solution to keep a log file?

commented: There is some state there as well, like unread messages and such, but that would probably be lost if the pod gets rescheduled.
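
A rough sketch of the relay setup from the first comment, for the curious. Everything here is an assumption rather than OP’s actual config: the weechat/weechat image name, the port, the password, the home-directory path, and the `/relay add api` / `/remote` commands (which I believe arrived in weechat 4.4) should all be checked against the weechat and container docs before copying.

```yaml
# Hypothetical sketch only: image, paths, port, password and the exact
# weechat relay/remote commands are assumptions -- verify against the
# weechat (>= 4.4) docs before using.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weechat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: weechat
  template:
    metadata:
      labels:
        app: weechat
    spec:
      containers:
        - name: weechat
          image: weechat/weechat:latest        # assumed image name
          command: ["weechat-headless"]
          args:
            - "--stdout"                       # log to stdout instead of a log file
            - "--run-command"
            # enable the new "api" relay protocol on port 9000 with a password;
            # illustrative commands, not taken from the article
            - "/set relay.network.password s3cret; /relay add api 9000"
          ports:
            - containerPort: 9000
              name: relay
          volumeMounts:
            - name: weechat-home               # persist config, logs, buffers
              mountPath: /home/user/.config/weechat   # path depends on the image
      volumes:
        - name: weechat-home
          persistentVolumeClaim:
            claimName: weechat-home
---
apiVersion: v1
kind: Service
metadata:
  name: weechat-relay
spec:
  selector:
    app: weechat
  ports:
    - name: relay
      port: 9000
      targetPort: relay
```

Locally you would then reach it with something like `kubectl port-forward svc/weechat-relay 9000`, followed by `/remote add homelab http://localhost:9000 -password=s3cret` and `/remote connect homelab` in your local weechat (again, assuming the 4.4+ `/remote` syntax). As the last comment notes, state such as unread messages would otherwise be lost when the pod is rescheduled, which is why this sketch puts the weechat home directory on a PersistentVolumeClaim.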