[HN Gopher] I host this blog from my garage
       ___________________________________________________________________
        
       I host this blog from my garage
        
       Author : throwaway894345
       Score  : 103 points
       Date   : 2021-12-07 16:09 UTC (6 hours ago)
        
 (HTM) web link (eevans.co)
 (TXT) w3m dump (eevans.co)
        
       | midasuni wrote:
       | If only he'd hosted it on AWS he too could be offline
        
       | mro_name wrote:
       | wouldn't a raspberry pi do it from within the drawer?
       | 
       | Not only does https://solar.lowtechmagazine.com do something
       | similar, but
       | https://cheapskatesguide.org/articles/raspberry-pi-3-4-web-s...
       | has some numbers.
       | 
       | Seems feasible, unless you are the NYT.
        
         | shosti wrote:
         | Yeah, that's what I would do if I were just hosting a blog,
         | but I already had the k8s lab set up for other reasons and
         | wanted to get more use out of it.
        
         | probotect0r wrote:
         | He clearly says:
         | 
         | To stave off the inevitable haters: I know this is over-
         | engineered. That's kind of the point! By trying out "flashy"
         | software in a low-stakes environment...
        
       | treesknees wrote:
       | Heh, I'm actually more interested in the "garage" piece of the
       | blog which is barely touched on. I also run a home lab, but from
       | my climate-controlled relatively clean basement. I've thought
       | about moving it to the garage, but I'm worried about temperature
       | and dust/dirt ingress. Do you have a sealed cabinet or air
       | filters or any other environmental controls around the servers?
       | Or are you using industrial machines rated for dusty locations?
       | Otherwise a garage actually doesn't seem like a great place to
       | put servers.
        
         | chomp wrote:
         | I live in the Southern US and it's extremely humid and hot in
         | the summers.
         | 
         | Only certain systems are fine with being in the garage. For
         | instance, a Supermicro chassis I put out there began rusting in
         | a few spots after a few months, but my custom system that I
         | built was perfectly fine. I suspect the metals Supermicro uses
         | are treated in some way and then cut, but the edges aren't re-
         | treated, allowing rust to form on all of the edges of the
         | chassis.
         | 
         | I never put spinners out there, only SSDs. My bulk storage
         | lives inside my house.
         | 
         | I have an HP Sandy Bridge Xeon box that lives out there and
         | it's always super quiet, but I don't push it hard either.
         | Raspberry Pis are super okay with being outside. I'd say if you
         | can make a small Pi cluster, it's a good fit.
         | 
         | I use an open air rack next to my water heater. I don't do any
         | woodworking or anything, so the dust isn't too bad.
        
           | cma wrote:
           | There was a talk from comma.ai with a part on what they did
           | to prevent corrosion running a datacenter in their garage to
           | avoid the nvidia datacenter tax or something:
           | 
           | https://www.youtube.com/watch?v=D0JO5dr8cPE&t=15m12s
        
             | chomp wrote:
             | Fascinating, thanks for this!
        
         | jmnicolas wrote:
         | I have seen production servers run for ten years upstairs of
         | a dirty bus repair shop. The room is never cleaned, so there
         | are a few millimeters of black dust on everything.
         | 
         | In fact, you can tell the age of a server by the amount of
         | dust on it.
         | 
         | There's a defective A/C unit that sometimes throws water all
         | around.
         | 
         | There are power cuts 2 or 3 times a year, and the batteries
         | are old, so the servers get a hard reboot.
         | 
         | As far as reliability is concerned, sometimes a disk fails,
         | but that's about it.
         | 
         | Now if you think I'm describing something from the third world,
         | you're wrong. It's in France and I'm not exaggerating about it.
         | 
         | So what I'm saying is that for a homelab, don't fret over
         | cleanliness or optimal conditions.
        
           | themadturk wrote:
           | Yes, I've run an enterprise server stack on an old closet,
           | with a home air conditioner venting into the warehouse. Even
           | when the A/C unit died for three days one summer (we had to
           | keep the door open and run several fans), the servers,
           | ranging from 3-7 years old, kept running. Far from optimal,
           | but the systems stay up and the company stays in business.
        
           | Lhiw wrote:
           | I spent some time with a computer refurbisher back in the
           | day. He used to buy old computers from mining operations;
           | these things were literally filled to the brim with dust and
           | weird shit.
           | 
           | Computers are amazingly resilient if you don't need them for
           | much more than office work.
        
         | shosti wrote:
         | (Author here) I'm kind of cruel to my hardware lol (they're all
         | frankenservers hacked together with random parts from eBay).
         | I'm not doing anything fancy to protect from dust and whatnot,
         | but it's pretty much always chilly in the garage (and I'm using
         | low-power parts) so overheating hasn't been an issue.
        
         | BrandoElFollito wrote:
         | My "datacenter" is on the top shelf of a closed closet. This is
         | in an apartment, and the ceiling just above the server is
         | heated (by the neighbor's heating above me). There is a fiber
         | ONT, a switch, a router, a Raspberry Pi and a tower computer
         | ("the server").
         | 
         | The ambient temperature is 32-35degC
         | 
         | The temperature of the disks is 38degC (ssd) to 49degC (hdd),
         | the CPU cores are 27 to 34degC (not sure why there is a
         | difference, it is one CPU).
         | 
         | When I put the computer and other equipment there, I was
         | concerned that the temperature would be unbearable. I was
         | positively surprised.
        
         | pengaru wrote:
         | It's not too difficult or costly to frame+finish a corner
         | closet if you have the garage space to spare, especially if the
         | garage interior isn't already finished.
        
       | whoknowswhat11 wrote:
       | I also got a lot of mileage out of a homelab.
       | 
       | I experimented with proxmox and ESXi on the homelab front. Now
       | ESXi is at the business.
       | 
       | Same thing with different server setups (Dell / HP / SuperMicro).
       | Gives you a good feel for updates, KVM options etc.
       | 
       | Same thing with networking - I'm wasting time on Mikrotik and
       | it's pretty fun.
       | 
       | It can be handy as a pool of additional resources. I had a very
       | built-out homelab server (with new SSD drives etc). Lead times
       | got long for some Dell server configs, and we needed something
       | in 1-2 days, so I just grabbed a box from home. With ESXi, once
       | the real machine comes in it's not that hard to move stuff over.
       | 
       | The big win is experimenting with low stakes. I find I'm learning
       | 2-3x faster in that setup. I can try something and, worst case,
       | reinstall; I can reboot, use the KVM, and literally plug in very
       | easily (I have a hardware KVM setup in addition to the
       | iDRAC-type stuff).
        
       | hinkley wrote:
       | If you wanted to 'publish' a more interactive website to the
       | internet while still keeping all of the publishing tools safe
       | inside an enclave/intranet somewhere, then you probably need a
       | read-only copy of a database. But any time I look at read-replica
       | configurations, it looks like they expect the replica to contact
       | the master. You can't set up a proper bastion server if you have
       | to have a secret door back into your intranet to replicate data.
       | This has left me stuck on the starting blocks for a couple of pet
       | projects I might like to work on.
       | 
       | Anybody know how to solve this problem, short of roll-your-own?
        
         | int0x2e wrote:
         | How much latency and inconsistency can you live with? Could you
         | dump full or incremental database backups from your secure
         | enclave, upload them to blobs and ingest on the cloud? Could
         | you publish a change feed of sorts to Kafka and apply changes?
        
           | hinkley wrote:
           | It would have to be incremental because a full dump does not
           | scale up. Of course if you scale down you could probably get
           | away with shipping snapshots of a sqlite database, and I
           | understand some people do just that.
           | 
           | But it would be nice to have streaming WAL logs with the
           | instigator flipped. Probably I need some sort of middleware
           | to dis-intermediate the pipe.
        
         | teraflop wrote:
         | With Postgres, you can do log-shipping replication using
         | whatever mechanism you like to deliver the WAL files, with no
         | direct network connection between the primary and standby
         | servers.
         | 
         | It's still _kind of_ roll-your-own, because you need to provide
         | shell scripts to handle archiving, fetching and cleaning up the
         | log files on either end, but it can be done.
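[Editor's note] A minimal sketch of that configuration in Postgres itself (the shared drop directory is hypothetical; any transport — rsync, scp, even a USB drive — can replace `cp`, so no connection from standby back to primary is needed):

```
# postgresql.conf on the primary (inside the enclave):
archive_mode = on
archive_command = 'cp %p /mnt/wal-drop/%f'   # %p = WAL file path, %f = file name

# postgresql.conf on the read-only standby (outside):
restore_command = 'cp /mnt/wal-drop/%f %p'

# Plus an empty standby.signal file in the standby's data directory,
# which keeps it in continuous recovery, serving read-only queries.
```

The shell scripts the comment mentions slot into `archive_command` and `restore_command`; Postgres only cares that they exit 0 on success.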
        
       | rektide wrote:
       | Very nice set up. Thanks for sharing, thanks for submitting. Very
       | great technology to be able to set up & have running, have at
       | your back. A lot of haters, but this is such a vast leap beyond a
       | personal server, to having real cloud infrastructure, to having
       | one and only one control plane for all your systems.
       | 
       | Recap of tech choices: Cilium networking, MetalLB load balancing,
       | nginx ingress, Rook/Ceph storage, Prometheus/Grafana
       | monitoring/alerting, Loki logs, Harbor container registry, Flux
       | for GitOps.
       | 
       | Also a huge shout out to the Ingress controller comparison
       | spreadsheet[1] the author made! Really nice being able to compare
       | these feature sets. Neat to see a couple folks already working on
       | HTTP3 & QUIC load balancing, who does Maglev routing, who has the
       | best offload capabilities. This definitely brought my interest in
       | Apache APISIX way way up.
       | 
       | The author digs at their own infrastructure some, calling it a
       | learning exercise. But to me it's bad/dangerous to have a
       | conservative outlook that personal users should have to learn/use
       | inferior/lesser tools (dozens of hours); it's bad to bifurcate
       | the computing world into high and low computing. It took the
       | author a lot of effort I'm sure to cobble together this system,
       | but my hope is, over time, we start to make works like this much
       | more paved road, and we start to make them much more accessible &
       | documented & supported. We still haven't seen personal-use-
       | focused Kubernetes distributions arrive, but I'm hoping that
       | happens. Here's the author's description:
       | 
       | > _By trying out "flashy" software in a low-stakes environment, I
       | can have some idea of how it will work in production-critical
       | ones, which helps me make better choices. (As an example, my
       | attempts to run Istio in a homelab setting have convinced me that
       | it should be avoided in most circumstances.)_
       | 
       | Again I think this is undershooting the real value, of using real
       | tools. The hill to climb right now seems big. But imo the effort
       | one's going to invest in selfhosting is probably going to be
       | significant, and I very much like the plan of shooting for good.
       | 
       | [1]
       | https://docs.google.com/spreadsheets/d/191WWNpjJ2za6-nbG4ZoU...
        
         | shosti wrote:
         | Thanks, glad you liked it! Minor clarification, I didn't write
         | the ingress comparison, that's from https://learnk8s.io (who
         | I've been working with recently). They have a bunch of awesome
         | resources on their site.
        
       | bob446 wrote:
       | Impressive but makes me want to use AWS more than ever
        
       | superkuh wrote:
       | >Well, hosting a blog from home is probably not a great idea from
       | a practical perspective.
       | 
       | I'm not a hater, but maybe he perceives it as not practical here
       | because of all the fun unnecessary complexity. Keeping something
       | like this going for more than a couple years would require
       | complex sysadmin maintenance for updates (which is fun till it
       | isn't).
       | 
       | But if you just install nginx from your system repositories, have
       | your hugo generated .html and media files in your www dir, and
       | forward port 80 on your router to your server's LAN IP:80, it's
       | going to keep working, without input and without security
       | issues, until your distro reaches end of life. For most people,
       | for most of the time,
       | it works great. And it's okay if it doesn't work some of the
       | time.
       | 
       | Hosting your blog from home, in whatever room, is a great idea
       | and very practical.
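[Editor's note] The whole setup described above fits in a few lines of stock nginx config (domain and paths are illustrative):

```nginx
# /etc/nginx/conf.d/blog.conf -- serve the hugo output directory
server {
    listen 80;
    server_name blog.example.com;   # placeholder domain
    root /var/www/blog;             # hugo's ./public copied here
    index index.html;
}
```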
        
         | throwaway894345 wrote:
         | If all you're ever doing is hosting a static blog, then yeah,
         | you don't need something like Kubernetes; however, if you want
         | to host other services (e.g., database, authentication,
         | comments, etc) then it quickly behooves you to build on some
         | higher-level platform or else you'll end up building your own
         | (poorly) and missing out on all of the experience,
         | documentation, and tooling that are publicly available for
         | Kubernetes, Docker, etc.
        
           | btilly wrote:
           | Sorry, but we did all of that 20 years ago, and those
           | approaches remain a lot simpler for basic use cases. You
           | know, like serving a 100k dynamic pages/hour database-backed
           | site. (That's what _I_ was doing 20 years ago, how about
           | you?)
           | 
           | See https://pythonspeed.com/articles/dont-need-kubernetes/
           | for some of the reasons NOT to use Kubernetes for this.
           | 
           | With, of course, a giant exception for two cases. The first
           | is, as with the author, if the purpose of using Kubernetes is
           | to learn how to use Kubernetes. The second is if you are
           | aware of the tradeoffs and have specific reason to say that
           | Kubernetes really is appropriate for your circumstance.
           | 
           | And if you think that Kubernetes is always the right
           | approach, then you clearly are NOT aware of the tradeoffs.
        
             | throwaway894345 wrote:
             | > Sorry, but we did all of that 20 years ago, and those
             | approaches remain a lot simpler for basic use cases.
             | 
             | Yeah, we did, and they were complicated then and they're
             | complicated now. The difference is some of us have decades
             | of experience which make them feel simpler.
             | 
             | > And if you think that Kubernetes is always the right
             | approach, then you clearly are NOT aware of the tradeoffs.
             | 
             | I was very explicit that Kubernetes isn't always the right
             | approach. Allow me to quote myself: "If all you're ever
             | doing is hosting a static blog, then yeah, you don't need
             | something like Kubernetes". My point is that things like
             | Kubernetes and Docker Compose are increasingly viable
             | defaults for nontrivial cases. In other words, if you know
             | you're going to have a bunch of services to manage, it's a
             | lot easier for most people (i.e., those lacking decades of
             | experience) to manage them with the aforementioned tools
             | rather than trying to build an equivalent "platform" from
             | scratch.
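[Editor's note] As a sketch of that middle ground, a hypothetical Docker Compose file for a static blog plus a comments service and its database might look like (service names, images, and credentials are all placeholders):

```yaml
# docker-compose.yml -- illustrative only
services:
  blog:
    image: nginx:alpine
    volumes:
      - ./public:/usr/share/nginx/html:ro   # static site output
    ports:
      - "80:80"
  comments:
    image: example/comments:latest          # hypothetical app image
    environment:
      DATABASE_URL: postgres://app:secret@db/comments
    depends_on:
      - db
  db:
    image: postgres:14
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: comments
    volumes:
      - dbdata:/var/lib/postgresql/data     # persist the database
volumes:
  dbdata:
```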
        
               | btilly wrote:
               | Kubernetes wraps several layers of abstraction around the
               | old way of doing things. There is no world in which the
               | operation of the site is in any way clarified by
               | involving Kubernetes. Kubernetes also adds performance
               | overhead. There are things which are easier with
               | Kubernetes. But only after you've climbed a long learning
               | curve.
               | 
               | From what I've seen, even people without "decades of
               | experience" find the non-abstracted system easier to
               | understand and reason about than the Kubernetes version.
               | And the difference in ease is dramatic.
               | 
               | Examples where it makes sense include:
               | 
               | 1. You need to deploy multiple independent systems with
               | similar configurations. (Even then, consider ansible.)
               | 
               | 2. You have to deploy different clusters of connected
               | systems with related, but different, components.
               | 
               | 3. You need to scale up and down what needs to be
               | deployed. (Except don't try to use autoscale. That only
               | works in marketing blurbs.)
               | 
               | 4. Somebody else has set it up and you never need to
               | actually understand it. (Good luck if you need to debug.)
               | 
               | But there is no reason to introduce Kubernetes because
               | you need a database, a few webservers, front end proxy,
               | failover, etc. And if you are using Kubernetes for that,
               | odds are that you'll save yourself a world of headaches
               | (and potential security holes!) by migrating away. No
               | matter how much the "chief architect" may claim
               | otherwise. I've seen how this plays out in practice.
        
           | silverfox17 wrote:
           | Running a server to do database, authentication, comments,
           | etc. is relatively simple. If you're trying to do scaling or
           | whatever, sure. But to do all of that on a single vanilla
           | server is pretty trivial, especially if you're using a CMS or
           | some other software on it.
        
           | kayodelycaon wrote:
           | Running Wordpress on a single server is relatively painless.
           | Document the setup and configure automatic backups of the
           | relevant configs, www folder, and database.
           | 
           | I have another server ssh in every night and drop everything
           | to a folder in my dropbox account. Replicates to computers
           | that have incremental backups and less granular off-site
           | backups.
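[Editor's note] A runnable sketch of that nightly job, using throwaway paths under /tmp so it works anywhere (the real paths, the mysqldump, and the off-site pull are hedged in comments):

```shell
#!/bin/sh
# Nightly backup sketch. Paths are stand-ins for the real
# /var/www and config locations the parent comment describes.
set -eu

WWW_DIR=/tmp/demo-www
BACKUP_DIR=/tmp/demo-backup
mkdir -p "$WWW_DIR" "$BACKUP_DIR"
echo '<h1>hello</h1>' > "$WWW_DIR/index.html"   # stand-in content

STAMP=$(date +%Y-%m-%d)

# 1. Archive the www folder (add wp-config.php etc. in real life).
tar -czf "$BACKUP_DIR/www-$STAMP.tar.gz" -C /tmp demo-www

# 2. Dump the database; uncomment on a real host:
# mysqldump --single-transaction wordpress | gzip > "$BACKUP_DIR/db-$STAMP.sql.gz"

# 3. The other server then pulls $BACKUP_DIR into Dropbox, e.g.:
# rsync -a backupuser@blog:/tmp/demo-backup/ ~/Dropbox/blog-backups/

ls "$BACKUP_DIR"
```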
        
         | [deleted]
        
         | nonameiguess wrote:
         | I wouldn't host anything I cared about at home because
         | residential broadband is unreliable as heck, with low caps on
         | egress that I'm already hitting half of in an average month
         | thanks to screen sharing and video conferencing while working
         | from home. I don't want Spectrum throttling me so that
         | suddenly I can't work any more because someone found my blog
         | and put it on Hacker News.
        
         | shosti wrote:
         | (Author here) I mostly worry about security for this. If you
         | have nothing private on your network it's probably fine, but if
         | you have, say, a NAS that isn't using proper authentication
         | (pretty common), an os/nginx vulnerability could end up
         | exposing stuff.
         | 
         | Of course there are much simpler ways to lock things down also
         | :)
        
           | iso1210 wrote:
           | Obviously your server would be on a DMZ vlan, probably on its
           | own. Set it to automatically take security updates every
           | night and aside from some zero days I'm not sure what
           | security issues you'd have.
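[Editor's note] On Debian/Ubuntu, the nightly-security-updates part is a two-line apt config once the `unattended-upgrades` package is installed:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```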
        
           | scubbo wrote:
           | Thank you for a great article! I recently took the plunge of
           | building-and-hosting a blog too - but, due to security
           | concerns, I took the entirely opposite approach of making it
           | fully cloud-based (Git repos for infra and for content -> AWS
           | CodePipeline, Hugo during CodeBuild -> S3 and CloudFront).
           | This was sadly ironic since I'd mostly wanted to blog about
           | my experiences with homelabbing, but I didn't trust myself to
           | open a port to the outside world. Thanks to your blog I might
           | finally learn Kubernetes and use a Cloudflare tunnel to
           | implement a similar truly-selfhosted blog!
        
             | hellojesus wrote:
             | I've done something similar to the author but with only ufw
             | and port forwarding.
             | 
             | My closet server is set up with a cron job that runs daily
             | and updates my domain's dns on Cloudflare to my currently
             | allocated dynamic ip.
             | 
             | Port forwarding sends the 80/443 requests to my closet
             | server.
             | 
             | Closet server only accepts 80/443 requests from
             | Cloudflare's published ip addresses via ufw rules so that
             | all traffic must pass through Cloudflare to be accepted.
             | 
             | Nginx on closet server routes it to the appropriate
             | internal port for that service.
             | 
             | Maybe someone has broken into my home network, but I hope
             | this solution works relatively well!
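[Editor's note] A sketch of the daily cron job described above. The token, zone, and record IDs are placeholders, and the real network calls are left commented out so the sketch runs offline:

```shell
#!/bin/sh
# Daily dynamic-DNS update sketch (placeholder credentials).
set -eu

CF_TOKEN="example-token"
ZONE_ID="example-zone-id"
RECORD_ID="example-record-id"

# Current public IP; in the real cron job this would be e.g.:
#   IP=$(curl -fsS https://ipv4.icanhazip.com)
IP="203.0.113.7"

# Point the A record at it via Cloudflare's v4 API (commented out):
# curl -fsS -X PUT \
#   "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
#   -H "Authorization: Bearer $CF_TOKEN" \
#   -H "Content-Type: application/json" \
#   --data "{\"type\":\"A\",\"name\":\"blog.example.com\",\"content\":\"$IP\"}"

echo "would update blog.example.com -> $IP"
```

The ufw side is then a loop over Cloudflare's published ranges, roughly `ufw allow proto tcp from <range> to any port 80,443`.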
        
               | shosti wrote:
               | Yeah this is basically what I was planning to do before I
               | learned about cloudflared.
        
             | shosti wrote:
             | I would say you don't really need Kubernetes for this sort
             | of setup (I already was running all the K8s stuff which is
             | why I went with it, but docker compose or even just running
             | things in systemd without containers would work too).
             | 
             | I think the main thing is to have some sort of network
             | isolation (like a separate VLAN or a server that blocks
             | outbound traffic) between stuff that's exposed to the
             | internet and stuff that's private on the network.
        
               | hogFeast wrote:
               | I use wireguard/iptables for this.
               | 
               | I have one small VPS with access to wireguard network,
               | wireguard rule to forward certain traffic to a virtual
               | machine running on my desktop, fairly easy to set up tbh
               | (and I add/remove devices constantly). I am not a
               | networking person, my understanding of iptables is shaky
               | but I also ran a similar setup with Nginx. Could also use
               | TailScale, but I found the wireguard CLI very easy.
               | Straightforward to add more networks and isolate stuff
               | from each other (tbh, I only run one network that doesn't
               | isolate my web-facing stuff from other stuff I run
               | privately...as I said, I am not a networking guy so have
               | no idea how bad of an idea this is given that the only
               | way in is traffic on certain ports being forwarded).
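[Editor's note] A minimal version of that topology in WireGuard config (keys and addresses are placeholders): the VPS DNATs web traffic into the tunnel, and the home VM answers.

```
# /etc/wireguard/wg0.conf on the VPS
[Interface]
PrivateKey = <vps-private-key>
Address = 10.0.0.1/24
ListenPort = 51820
# Forward incoming web traffic to the home VM over the tunnel
# (also requires net.ipv4.ip_forward=1 and a FORWARD accept rule):
PostUp = iptables -t nat -A PREROUTING -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 10.0.0.2
PostDown = iptables -t nat -D PREROUTING -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 10.0.0.2

[Peer]
PublicKey = <home-vm-public-key>
AllowedIPs = 10.0.0.2/32
```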
        
               | scubbo wrote:
               | Ah, I see - I misread and got the impression that
               | `cloudflared` could only connect to Kubernetes pods, but
               | I see from reading the docs[1] that it can connect to
               | traditional apps-on-ports as well. I'll have a poke
               | around - thanks again!
               | 
               | [1] https://developers.cloudflare.com/cloudflare-
               | one/connections...
        
       | mkasberg wrote:
       | It's unfortunate that there aren't better solutions for securely
       | exposing a part of your home network to the cloud. Imagine if it
       | were easy to set up a secure solution to access your home lab
       | from anywhere - you could use it as your own private cloud. And
       | perhaps then it would also be easy to expose part of that
       | publicly, like a blog.
        
         | rubatuga wrote:
         | You could try our service https://hoppy.network
         | 
         | It tunnels a public IPv4 and IPv6 (/56) over WireGuard. We have
         | a good IP reputation, so you can use it as a mail server. We
         | use the service ourselves, and overall it's pretty reliable.
        
         | shosti wrote:
         | I feel like Tailscale gets you pretty close to an "easy private
         | cloud", it's just the "public" part that's still a bit tricky.
        
         | charcircuit wrote:
         | >Imagine if it were easy to set up a secure solution to access
         | your home lab from anywhere
         | 
         | It is. You just need to port forward the ports you want exposed
         | to the "cloud" (aka the internet)
        
       ___________________________________________________________________
       (page generated 2021-12-07 23:01 UTC)