[HN Gopher] A tiny Docker image to serve static websites
       ___________________________________________________________________
        
       A tiny Docker image to serve static websites
        
       Author : nilsandrey
       Score  : 208 points
       Date   : 2022-04-12 14:42 UTC (8 hours ago)
        
 (HTM) web link (lipanski.com)
 (TXT) w3m dump (lipanski.com)
        
       | 0xb0565e487 wrote:
       | I don't know why there is a big fish at the top of your website,
       | but I like it a lot.
        
         | ludwigvan wrote:
         | Me too!
         | 
         | Also the other blog posts have different big fishes, so check
         | them out as well.
        
         | adolph wrote:
         | Agreed. GIS says at least some are from the NYPL:
         | 
         | https://nypl.getarchive.net/media/salmo-fario-the-brown-trou...
        
       | pojzon wrote:
        | Tbh, the moment the author considered self-hosting anything
        | to serve static pages, it was already too much effort.
        | 
        | There are free ways to host static pages, and extremely
        | inexpensive ways to host static pages that are visited millions
        | of times per month, simply by using services built for that.
        
         | [deleted]
        
         | krick wrote:
         | So, the best free or extremely inexpensive way to host static
         | pages that are visited a lot would be...?
        
           | riffic wrote:
           | netlify, amplify, cloudflare pages, vercel, et cetera
           | 
           | It's a crowded field now
        
       | 0xbadcafebee wrote:
       | If you use "-Os" instead of "-O2", you save 8kB!
       | 
       | However, Busybox also comes with an httpd... it may be 8.8x
       | bigger, but you also get that entire assortment of apps to let
       | you troubleshoot, run commands in an entrypoint, run commands
        | from the httpd/cgi, etc. I wouldn't run it in production... but
       | it does work :)
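
For illustration, a minimal sketch of that busybox approach — the image
tag, site path, and port here are assumptions, not from the comment:

```dockerfile
# Hypothetical sketch: serve a static site with busybox's built-in httpd.
FROM busybox:stable
# Copy the built site into the image.
COPY ./public /www
EXPOSE 3000
# -f = stay in the foreground, -p = listen port, -h = document root.
CMD ["busybox", "httpd", "-f", "-p", "3000", "-h", "/www"]
```

Because httpd ships inside the busybox binary, the same image also gives
you a shell and the usual applets for troubleshooting, as the comment
notes.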
        
       | souenzzo wrote:
       | Is it smaller than darkhttpd?
       | 
       | https://unix4lyfe.org/darkhttpd/
        
       | nitinagg wrote:
        | For static websites, wouldn't hosting them directly on S3 with
        | CloudFront, or on Cloudflare, be a better option?
        
         | MuffinFlavored wrote:
         | or https://pages.github.com/ maybe?
        
       | amanzi wrote:
       | I assume the author would then publish this behind a reverse
       | proxy that implements TLS? Seems like an unnecessary dependency,
       | given that Docker is perfect for solving dependency issues.
        
         | EnigmaCurry wrote:
          | That's certainly what I would do. I think it's great that
          | thttpd does not include a TLS dependency itself. Every once
          | in a while I find a project that forces its own TLS, and it's
          | annoying to undo.
        
       | KronisLV wrote:
       | > My first attempt uses the small alpine image, which already
       | packages thttpd:                 # Install thttpd       RUN apk
       | add thttpd
       | 
       | Wouldn't you want to use the --no-cache option with apk, e.g.:
       | RUN apk add --no-cache thttpd
       | 
       | It seems to slightly help with the container size:
       | REPOSITORY       TAG       IMAGE ID       CREATED          SIZE
       | thttpd-nocache   latest    4a5a1877de5d   7 seconds ago    5.79MB
       | thttpd-regular   latest    655febf218ff   41 seconds ago   7.78MB
       | 
       | It's a bit like cleaning up after yourself with apt based
       | container builds as well, for example (although this might not
       | _always_ be necessary):                 # Apache web server
       | RUN apt-get update && apt-get install -y apache2 libapache2-mod-
       | security2 && apt-get clean && rm -rf /var/lib/apt/lists
       | /var/cache/apt/archives
       | 
        | But hey, that's an interesting goal to pursue! Even though
        | personally I just gave up on Alpine and similar slim solutions
        | and decided to base all my containers on Ubuntu instead:
        | https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...
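
Putting the tip together, a complete Dockerfile for the Alpine/thttpd
variant might look like this — the base tag, site path, and port are
assumptions:

```dockerfile
FROM alpine:3.15
# --no-cache fetches the index on the fly instead of baking it into the
# layer, which is what shaves the ~2MB seen above.
RUN apk add --no-cache thttpd
COPY ./public /var/www
EXPOSE 80
# -D = don't daemonize, -h = bind host, -p = port, -d = document root.
CMD ["thttpd", "-D", "-h", "0.0.0.0", "-p", "80", "-d", "/var/www"]
```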
        
       | kristianpaul wrote:
       | Why do we need this when you can run a web server inside systemd?
        
       | tadbit wrote:
       | I _love_ stuff like this.
       | 
       | People will remark about how this is a waste of time, others will
       | say it is absolutely necessary, even more will laud it just for
       | the fun of doing it. I'm in the middle camp. I wish
        | software/systems engineers would spend more time optimising for
       | size and performance.
        
         | memish wrote:
         | Wouldn't removing Docker entirely be a good optimization?
        
           | kube-system wrote:
           | Docker adds other value to the lifecycle of your deployment.
           | An "optimization" where you're removing value is just a
           | compromise. Otherwise we'd all run our static sites on UEFI.
        
             | ar_lan wrote:
             | This is a really good point, and something I think a lot of
             | people forget. It's true, the most secure web app is one
             | written with no code/no OS/does nothing.
             | 
              | Adding value is a compromise of _some_ increased security
              | risk - and it's our job to mitigate that as much as
              | possible by writing quality software.
        
             | vanviegen wrote:
             | What value is that, for running such a simple piece of
             | software?
        
               | vorticalbox wrote:
               | A few off the top of my head.
               | 
                | The ability to pull the image onto any machine without
                | needing to clone the source files and build it.
               | 
               | Smaller images mean faster pod starts when you auto
               | scale.
        
               | spicybright wrote:
                | You have to log in to some Docker registry anyway and
                | know the series of commands to actually run it. Cloning
                | a repo and running a shell script is probably a lot
                | easier and faster than that.
               | 
               | What kind of work are you doing that requires really fast
               | auto scaling? Is a few minutes to spin up a new instance
               | really that cumbersome? Can you not signal for it to spin
               | up a new instance a tiny bit earlier than when it's
               | needed when you see traffic increases?
        
               | kube-system wrote:
               | > You have to login to some docker repository anyways and
               | know the series of commands to actually run it. Cloning a
               | repo and running a shell script is probably a lot easier
               | and faster than that.
               | 
               | In isolation, yes. But if, for instance, you're already
               | running a container orchestration tool with hundreds of
               | containers, and have CI/CD pipelines already set up to do
               | all of that, it's easier just to tack on another
               | container.
        
             | jamal-kumar wrote:
             | yeah see some of us still do this on OSes that haven't
             | turned into a giant bloated hodgepodge of security theatre
             | and false panacea software.
             | 
             | docker has dead whale on the beach vibes. what value does
             | it offer to those of us who have moved on from the mess
             | linux is becoming?
        
               | kube-system wrote:
               | I'm not suggesting it has value to everyone. I'm
               | suggesting it has value to the people who see value in
               | it.
        
               | jamal-kumar wrote:
                | I'm super curious to know what that value happens to be
                | for the people who see it. It's serving static websites;
                | why do I need to wrap THAT of all things in a container?
                | 
                | Really, enlighten me.
        
               | mise_en_place wrote:
               | Because you want a reproducible environment/runtime for
               | that static server. Nix/NixOS takes it a step further, in
               | that it provides not only a reproducible runtime
               | environment, but a reproducible dev and build environment
               | as well.
        
               | kube-system wrote:
               | > why do I need to wrap THAT of all things in a
               | container?
               | 
               | If you can't see a reason why, then you probably don't
               | need to. You probably have different needs than other
               | people.
               | 
               | Many people use Docker not because of what they're doing
               | inside of the container, but because it is convenient for
               | tangential activities. Like lifecycle management,
               | automation, portability, scheduling, etc.
               | 
               | I have several static sites in Docker containers in
               | production. We also have dozens of other microservices in
               | containers. We could do everything the same way, or we
               | can one-off an entirely separate architecture for our
               | static sites. The former makes more sense for us.
        
               | deberon wrote:
               | I actually found myself needing something like this a
               | couple weeks ago. I use a self-hosted platform
               | (cloudron.io) that allows for custom apps. I wanted to
               | host a static blog on that server. Some people are happy
               | to accept "bloat" if it does, in fact, make life easier
               | in some way.
        
               | rektide wrote:
               | If you literally ONLY ever need to run a single static
               | website, then yeah, containers might not be helpful to
               | you.
               | 
               | But once you start wanting to run a significant number of
               | things, or a significant number of instances of a thing,
                | it becomes more helpful to have an all-purpose tool
               | designed to manage images & run instances of them. Having
               | a common operational pattern for all your systems is a
               | nice, overt, clean, common practice everyone can adapt &
               | gain expertise in. Rather than each project or company
                | defining its own deployment/operationalization/management
               | patterns & implementations.
               | 
               | The cost of containers is also essentially near zero
               | (alas somewhat less true with regards to local FS
               | performance, but basically equal for many volume mounts).
               | They come with great features like snapshots & the
               | ability to make images off images- CoW style
               | capabilities, the ability to mix together different
               | volumes- there's some really great operational tools in
               | containers too.
               | 
               | Some people just don't have real needs. For everyone
               | else...
        
               | audleman wrote:
               | Once you've gone the container route you no longer even
               | need to think about virtual servers. You can just deploy
               | it to a container service, like ECS.
        
             | jart wrote:
              | Redbean supports UEFI too, although we haven't added a
              | bare metal implementation of Berkeley sockets yet. It's on
              | the roadmap, though.
        
           | throwanem wrote:
           | In terms of CPU cycles and disk space, maybe. In terms of
           | engineer cycles, absolutely not. Which costs more?
        
             | danuker wrote:
              | Hmm, an SCP shell script on my laptop, prompting my SSH
             | key's password and deploying the site to the target
             | machine?
             | 
             | Or a constantly-updating behemoth, running as root,
             | installing packages from yet another unauditable repository
             | chain?
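
The kind of deploy script described could be as small as the following
sketch — the host, user, and paths are placeholders, not from the
comment:

```
#!/bin/sh
# Hypothetical one-step deploy: copy the statically built site to the
# server's web root over SSH.
scp -r ./public/* deploy@example.com:/var/www/html/
```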
        
               | somenewaccount1 wrote:
               | You forgot the step where you had to provision that
                | server to run the software and maintain all the system's
               | security updates on the live running server, and that
               | server requires all the same maintenance, with or without
               | docker. And if you fuck it up, better call the wife and
                | cancel Sunday plans because you forgot how it all gets
                | installed and... yeah, just use docker :p
        
               | danuker wrote:
               | Debian offers unattended upgrades:
               | https://wiki.debian.org/UnattendedUpgrades
               | 
               | And security updates, as you said, are needed regardless
               | of whether you run Docker on top. I think Docker is a
               | needless complexity and security risk.
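
For reference, the feature amounts to a two-line config: on Debian,
running `dpkg-reconfigure -plow unattended-upgrades` writes roughly this
to /etc/apt/apt.conf.d/20auto-upgrades:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```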
        
               | lolinder wrote:
               | Security updates are only needed on the OS level if
               | you're running Docker on bare metal or a VPS. If you're
               | running Docker in a managed container or managed
               | Kubernetes service such as ECS/EKS, you only need to
               | update the Docker image itself, which is as simple as
               | updating your pip/npm/maven/cargo/gem/whatever
               | dependencies.
               | 
               | I see two main places where Docker provides a lot of
               | value: in a large corp where you have massive numbers of
               | developers running diverse services on shared
               | infrastructure, and in a tiny org where you don't have
               | anyone who is responsible for maintaining your
               | infrastructure full time. The former benefits from a
               | standardized deployment unit that works easily with any
               | language/stack. The latter benefits from being able to
               | piggy-back off a cloud provider that handles the physical
               | and OS infrastructure for you.
        
               | throwanem wrote:
               | And you're welcome to think so, but if you intend to make
               | a case for removing Docker as optimization, you still
               | have yet to start.
        
               | danuker wrote:
               | I am arguing against Docker for maintainability reasons,
               | not CPU cycles.
               | 
               | Relying on hidden complexity makes for a hard path ahead.
               | You become bound by Docker's decisions to change in the
               | future.
               | 
                | For example, SSLPing's reliance on a lot of complex
                | software (among which NodeJS and Docker) led it to shut
                | down, and it made the front page of HN recently.
               | 
               | https://sslping.com/
               | 
               | https://news.ycombinator.com/item?id=30985514
               | 
               | Keeping dependencies to a minimum will extend the useful
               | lifespan of your software.
        
               | throwanem wrote:
               | Docker Swarm isn't Docker; it's an orchestration service
                | on top of Docker that happens to have originated with
                | the same organization as Docker - hence the name.
               | A few years ago Swarm looked like it might be competitive
               | in the container orchestration space, but then Kubernetes
               | got easy to use even for the small deployments Swarm also
               | targeted, and Swarm has withered since. It wouldn't be
               | impossible or probably even all that difficult to switch
               | over to k8s if that were the only blocker, but as the
               | sunset post notes and you here ignore, that wasn't the
               | only or the worst problem facing SSLping - as, again, the
                | sunset post notes to your apparent disinterest, it had
                | been losing money for quite a while before it fell
                | apart.
               | 
               | (Has it occurred to you that it losing money for a while
               | might have _contributed_ to its eventual
               | unmaintainability, as the dev opted sensibly to work on
               | more sustainably remunerative projects? If so, why ignore
               | it? If not, why not?)
               | 
               | Similarly for the Node aspect - that's very much a corner
               | use case related to this specific application (normally
                | SSLv3 support is something you actively _don't_ want!),
               | and not something that can fairly be generalized into an
               | indictment of Node overall. Not that it's a surprise to
               | see anyone unjustly indict Node on the basis of an
               | unsupportable generalization from a corner case! But that
               | it's unsurprising constitutes no excuse.
               | 
               | Other than that you seem here to rely on truisms, with no
               | apparent effort to demonstrate how they apply in the
               | context of the argument at which you gesture. And even
               | the truisms are misapplied! Avoiding dependencies for the
               | sake of avoiding dependencies produces unmaintainable
               | software because you and your team are responsible for
                | _every_ aspect of _everything_, and that can work for a
               | team of Adderall-chomping geniuses, but also _only_ works
               | for a team of Adderall-chomping geniuses. Good for you if
                | you can arrange that, but it's quite absurd to imagine
               | that generalizes or scales.
        
               | danuker wrote:
               | Just because a project has a larger budget, doesn't mean
               | any of it should be spent on Docker, Docker Swarm, or
               | Kubernetes or whatever other managers of Docker that you
               | can mention here.
               | 
               | Fact is, for 3 servers, it would be hard to convince me
               | of any use of Docker compared to the aforementioned
               | deployment shell script + Debian unattended-upgrades.
               | 
               | What problem does Kubernetes address here? So what if it
               | is "easy to use"? I prefer "not needed at all".
               | 
               | > but also only works for a team of Adderall-chomping
               | geniuses
               | 
               | Of course, not everything should be implemented by
               | yourself. Maybe this project wouldn't have been possible
               | at all without offloading some complexity (like the
               | convenient NodeJS packages).
               | 
                | But in particular, Docker and its ecosystem are only
                | worth it once you have enough machines to justify it -
                | when things become difficult to manage with a simple
               | shell script everyone understands: when you have a lot of
               | heterogeneous servers, or you want to deploy to the Cloud
               | (aka Someone Else's Computers) and you have no SSH
               | access.
               | 
               | > truisms
               | 
               | I don't have any experience with Kubernetes nor Docker
               | Swarm. The reason is that the truisms have saved me from
               | it. If you don't talk me into learning Kubernetes, I
               | won't, unless a customer demands it explicitly.
               | 
               | > Has it occurred to you that it losing money for a while
               | might have contributed to its eventual unmaintainability
               | 
               | It absolutely has. Maybe if the service hadn't used
               | Docker Swarm or Docker at all, it would have lasted
                | longer, since updating Docker would not have broken
                | everything - this was named a factor in the closure. And
                | therefore, the time and money would have gone further.
        
               | mountainriver wrote:
               | Exploring technologies can give you great insight into
               | your current practices. You are living by assumption
               | which is a pretty weak position
        
               | throwanem wrote:
               | And maybe if my grandmother had wheels she'd be a
               | bicycle. But - over a so far 20-year career of which
               | "devops" work has been sometimes a fulltime job, and
               | always a significant fraction, since back when we called
               | that a "sysadmin" - I've developed sufficiently intimate
               | familiarity with both sorts of deployment workflows that,
               | when I say I'll always choose Docker where feasible even
               | for single-machine targets, that's a judgment based on
               | experience that in your most recent comment here you
               | explicitly disclaim:
               | 
               | > I don't have experience with Kubernetes nor Docker
               | Swarm. The reason is that the truisms have saved me from
               | it.
               | 
               | Have they, though? It seems to me they may have "saved"
               | you from an opportunity to significantly simplify your
               | life as a sysadmin. Sure, your deployment shell scripts
               | are "simple" - what, a hundred lines? A couple hundred?
               | You have to deal with different repos for different
               | distros, I expect, adding repositories for deps that
               | aren't in the distro repo, any number of weird edge cases
               | - I started writing scripts like that in 2004, I have a
               | pretty good sense of what "simple" means in the context
               | where you're using it.
               | 
               | Meanwhile, my "simple" deployment scripts average about
               | _one_ line. Sure, sometimes I also have to write a
                | Dockerfile if there isn't an image in the registry that
               | exactly suits my use case. That's a couple dozen lines a
               | few times a year, and I only have to think about
               | dependencies when it comes time to audit and maybe update
               | them. And sure, it took me a couple months of intensive
               | study to get up to speed on Docker - in exchange for
               | which, the time I now spend thinking about deployments is
               | a more or less infinitesimal part of the time I spend on
               | the projects where I use Docker.
               | 
               | Kubernetes took a little longer, and manifests take a
               | little more work, but the same pattern holds. And in both
               | cases, it's not only my experience on which I have to
               | rely - I've worked most of the last decade in
               | organizations with dozens of engineers working on shared
               | codebases, and the pattern holds for _everyone_.
               | 
               | I don't know, I suppose. Maybe there's another way for
               | twenty or so people to support a billion or so in ARR,
               | shipping new features all the while, without most months
               | breaking a sweat. If so, I'd love to know about it. In
               | the meantime, I'll keep using those same tools for my
               | single-target, single-container or single-pod stuff,
               | because they're really not that hard to learn, and quite
               | easy to use once you know how. And too, maybe it's worth
               | your while to learn just a little bit about these tools
               | you so volubly dislike - if nothing else, in so doing you
               | may find yourself better able to inform your objections.
               | 
               | All that said, and faint praise indeed at this point, but
               | on this one point we're in accord:
               | 
               | > If you don't talk me into learning Kubernetes, I won't,
               | unless a customer demands it explicitly.
               | 
               | I did initially learn Docker and k8s because a customer
               | demanded it - more to the point, I went to work places
               | that used them, and because the pay was much better there
               | I considered the effort initially worth my while. That's
               | paid off enormously for me, because the skills are much
               | in demand; past a certain point, it's so much easier to
               | scale with k8s especially that you're leaving money on
                | the table if you _don't_ use it - we'd have needed 200
               | people, not 20, to support that revenue in an older
               | style, and even then we'd have struggled.
               | 
               | I still think it's likely worth your while to take the
               | trouble, for the same reasons I find it to have been
               | worth mine. But extrinsic motivation can be a powerful
               | factor for sure. I suppose, if anything, I'd exhort you
               | at least not to actively _flee_ these technologies that
               | you know next to nothing about.
               | 
               | Sure, you might investigate them and find you still
               | dislike them - but, one engineer to another, I hope
               | you'll consider the possibility that you might
                | investigate them and find that you _don't_.
        
               | danuker wrote:
               | > what, a hundred lines? A couple hundred? You have to
               | deal with different repos for different distros, I
               | expect, adding repositories for deps that aren't in the
               | distro repo, any number of weird edge cases
               | 
               | Well, here is where I must thank you. Thank you for
               | replying to me, and giving me a real reason to look at
               | this ecosystem.
               | 
               | My personal deployment script is really just one SCP
               | command - it copies the new version of my statically-
               | built blog to my server. The web server comes with the
               | OS, and that's all I need.
               | 
               | But when I read "hundred lines? A couple hundred?" I
               | realized my company has a script fitting that bill. There
               | might be an opportunity for improving it. While I am
               | still somewhat skeptical, because using Kubernetes
               | instead of that script might still not be worth it long-
               | term for a 7-person company (of which 3 devs and one
               | sysadmin), I will check out its capabilities.
               | 
               | > in so doing you may find yourself better able to inform
               | your objections.
               | 
               | Thank you for the patience to follow up, in spite of my
               | arrogance. I might just come up with an improvement
               | somewhere. Certainly we're a long way from a billion in
               | ARR - I am thankful for your valuable time, and wish you
               | continued and further success!
        
               | throwanem wrote:
               | Don't worry about the arrogance - I was much the same
               | myself, once upon a time. :) It's worth taking thought to
               | ensure you don't let it run too far away with you, of
               | course, but I'd be a fool to judge harshly in others a
               | taste of which I once drank deep myself. And hey, what
               | the hell - did anyone ever change the world _without_
               | being at least a little arrogant? There are other ways to
               | dare what seems impossible to achieve, I suppose, but
               | maybe none so effective.
               | 
               | One thing, I'd suggest looking to Docker before
               | Kubernetes unless you already know you need multi-node
               | (ie, multi-machine) deployments, and maybe as a pilot
               | project even if you do. Kubernetes builds upon many of
               | the same concepts (and some of the same infrastructure)
               | as Docker, so if you get to grips with Docker alone at
               | first, you'll likely have a much easier and less
               | frustrating time later on than if you come to Kubernetes
               | cold. (And when that time does come, definitely start
               | with k3s if you're doing your own ops/admin work - it's
               | explicitly designed to work on a smaller scale than other
               | k8s distributions, but it also pretty much works out of
               | the box with little admin overhead. As with starting on
               | Docker alone vs k8s, it's all about managing your
               | frustration budget so you can focus on learning at a
                | near-optimal rate.)
               | 
               | But hey, thanks for the well-wishes, and likewise taking
               | the time in this thread! It's been of real benefit to me
               | as well. If we're to be wholly honest, as an IC in my own
               | right I've never been above mid-second quartile at
               | absolute best, and at my age I'll never be better than I
               | am today - or was yesterday. But that also means I'm at a
               | point in my career where I best spend my time helping
               | other engineers develop; if I can master the skill of
               | making my accumulated decades of experience and
               | knowledge, and whatever little wisdom may be found in
               | that, of use to engineers who still have the time and
               | drive to make the most of it in ways I maybe failed to do
               | - well, I'll take it, you know? It's not the kind of work
               | I came into this field to do, I suppose, but I've done
               | enough of it by now to know both that I _can_ do it, and
               | that it is very much worth doing.
               | 
               | So, in that spirit and quite seriously meant - I might be
               | off work sick this afternoon, one peril I'm finding
               | attends ever more frequently upon advancing age, but
               | evidently that's no barrier to improving the core skill
               | that I intend to build the rest of my career around.
               | Thank you for taking the time and trouble to help make
               | that possible today, and here's likewise hoping you find
               | all the success you desire!
        
               | danuker wrote:
               | Thanks again. Get well quick!
               | 
               | BTW, your CV is down, again due to relying on hidden
               | complexity (Stack Exchange Jobs is extinct). You made me
               | curious so I stalked you a bit :D
               | 
               | https://aaron-m.com/about
        
               | throwanem wrote:
                | Hahaha, that's _perfect_. I'll fix it when I get the
                | chance, thanks again!
        
             | marginalia_nu wrote:
             | Building simpler systems allows you to save on all three.
        
               | krick wrote:
               | Does it? Or, rather, is it even simpler?
               | 
               | To host something as a docker container I need 2 things:
               | to know how to host docker, and a docker image. In fact,
                | not even an image, just a
                | dockerfile/docker-compose.yaml in my source code. If I
                | need to host 1000 apps as docker containers, I need
                | 1000 dockerfiles and still need to know (and remember)
                | 1 thing: how to host docker. That's 1 piece of
                | knowledge I need to keep in my head, and 1000 I keep
                | on a hard drive, most of the time not even caring what
                | instructions are inside them.
               | 
               | If I need to host 1000 apps without dockerfiles, I need
               | to keep 1000 pieces of knowledge in my head. thttpd here,
               | nginx to java server there, very simple and obvious
               | postgres+redis+elastic+elixir stack for another app...
               | Yeah, sounds fun.
        
               | throwanem wrote:
               | That's true, but in my experience there is nothing
               | mutually exclusive in systems being simple and systems
               | running Docker.
               | 
               | Granted, you do need to learn how Docker works, and be
               | ready to help others do likewise if you're onboarding
               | folks with little or no prior experience of Docker to a
               | team where Docker is used. That's certainly a tradeoff
               | you face with Docker - just as with literally every other
               | shared tool, platform, codebase, language, or
               | technological application of any kind. The question that
               | wants asking is whether, in exchange for that increased
               | effort of pedagogy, you get something that makes the
               | increased effort worthwhile.
               | 
               | I think in a lot of cases you do, and my experience has
               | borne that out; software in containers isn't materially
               | more difficult to maintain than software outside it if
               | you know what you're doing, and in many cases it's much
               | easier.
               | 
               | I get that not everyone is going to agree with me here,
               | nor do I demand everyone should. But it would be nice if
               | someone wanted to take the time to _argue_ the other side
               | of my claim, rather than merely insisting upon it with no
               | more evident basis than arbitrarily selected first
               | principles given no further consideration in the context
               | of what I continue to hope may develop into a discussion.
        
               | marginalia_nu wrote:
                | Docker absolutely ups the complexity.
               | 
                | Whatever setup your application needs is still a
                | necessary step in the process. But now you've not only
                | added more software (Docker itself, with its registry)
                | and Docker's state on top of the application's state,
               | you've also introduced multiple virtual filesystems and a
               | layer of mapping between those and locations on the host,
               | mappings between the container's ports and the host's
               | ports. There is no longer a single truth about the host
               | system. The application may see one thing and you, the
               | owner, another. If the application says "I wrote it to
               | /foo/bar", you may look in "/foo/bar" and find that /foo
               | doesn't even exist.
               | 
               | All of that is indirection and new ways things can be
               | that did not exist if you just ran your code natively.
               | What is complexity if not additional layers of
               | indirection and the increase of ways things can be?
        
               | throwanem wrote:
               | Okay, and in exchange for that, I've gained single-
               | command deployments of containers that already include
               | all the dependencies their applications require, and at
               | most I only have to think about that when I'm writing a
               | deployment script or doing an update audit.
               | 
               | It's rare that I need to find out _de novo_ where a given
               | path in a container is mapped on the host. When I do need
               | to do that, I can usually check a deployment script, or
               | failing that inspect the container directly and see what
               | volume mounts it has.
               | 
               | I don't need to worry about finding paths very often -
               | much less frequently than I need to think about
               | deployments, which at absolute minimum is once per
               | project.
               | 
               | So, sure, by using Docker I've introduced a little new
               | complexity, that's true. But you overlook that this
               | choice does not exist in a vacuum, and that that added
               | complexity is more than offset by the _reduction_ of
               | complexity in tasks I face much more often than the one
               | you describe.
               | 
               | And that's just _me!_ These days I have a whole team of
               | engineers on whose behalf, as a tech lead, I share
               | responsibility for maintaining and improving developer
                | experience. Do you think I'd do them more of a favor by
               | demanding they all comprehend a hundred-line _sui
               | generis_ shell script for deployments, or by saying
               | "here's a single command that works in exactly the same
               | way everyone you'll work with in the next ten years does
               | it, and if it breaks there's fifty people here who all
               | know how to help you fix it"?
        
             | [deleted]
        
         | encryptluks2 wrote:
         | The difference between a systems engineer and a software
         | engineer is that to a systems engineer a half functioning 5MB
         | docker image is okay but to a software engineer a fully
         | functional 5GB Node image is fine.
        
         | qbasic_forever wrote:
         | I think the real value is just focusing on the absolute minimum
         | necessary software in a production docker/container image. It's
         | a good practice for security with less surface area for
         | attackers to target.
        
       | bachmitre wrote:
       | How many requests can thttpd handle simultaneously, compared to,
        | say, nginx? It's a moot point being small if you then have to
       | instantiate multiple containers behind a load balancer to handle
       | simultaneous requests.
        
       | uoaei wrote:
       | Nail, meet hammer.
        
       | mg wrote:
       | For static websites, is there any reason not to host them on
       | GitHub?
       | 
       | Since GitHub Pages lets you attach a custom domain, it seems like
       | the perfect choice.
       | 
       | I would expect their CDN to be pretty awesome. And updating the
       | website with a simple git push seems convenient.
        
         | throwaway894345 wrote:
         | I'm sure their CDN is great, and I've used it in the past;
         | however, I like to self-host as a hobby.
        
         | coding123 wrote:
         | Well, not everything is open source.
        
         | jason0597 wrote:
         | > is there any reason not to host them on GitHub?
         | 
         | Because some people may not want to depend even more on Big
         | Tech (i.e. Microsoft) than they already do
        
         | marban wrote:
         | Netlify FTW -- For the rewrite rules alone.
        
         | _-david-_ wrote:
         | >For static websites, is there any reason not to host them on
         | GitHub?
         | 
         | One reason would be if your site violates the TOS or acceptable
         | use policy. GitHub bans "excessive bandwidth" without defining
         | what that is for example. For a small blog about technology you
         | are probably fine.
        
         | tekromancr wrote:
         | Can you do SSL?
        
           | dewey wrote:
           | Yes, since 2018.
        
         | enriquto wrote:
         | > For static websites, is there any reason not to host them on
         | GitHub?
         | 
         | I don't like github pages because it's quite slow to deploy.
         | Sometimes it takes more than a couple of minutes just to update
         | a small file after the git push.
        
         | marginalia_nu wrote:
         | Wanting to own your own web presence is reason not to host them
         | on GitHub.
         | 
         | For static websites, CDNs are largely unnecessary. My potato of
         | a website hosted from a computer in my living room has been on
         | the front page of HN several times without as much as
         | increasing its fan speed.
         | 
         | It took Elon Musk tweeting a link to one of my blog posts
         | before it started struggling to serve pages. I think it ran out
         | of file descriptors, but I've increased that limit now.
        
           | lostlogin wrote:
            | Are you able to describe how you run yours? I skimmed
            | your blog but didn't see anything about it.
        
             | marginalia_nu wrote:
             | The static content is just nginx loading files straight off
             | a filesystem. The dynamic content (e.g. the search engine)
             | is nginx forwarding requests to my Java-based backend.
        
         | naet wrote:
          | Once you've used a couple more static hosts you'll find
          | that gh pages is a second-tier host at best. It lacks some
          | basic configuration options and tooling, can be very slow
          | to update or deploy, the cdn actually isn't as good as
          | others, etc. Github pages is great for hobby projects, and
          | if you're happy with it by all means keep using it... but
          | I wouldn't ever set up a client's production site on it.
          | 
          | If you're curious, Netlify is one popular alternative that
          | is easy to get into even without much experience. I would
          | say even at the free tier Netlify is easily a cut above
          | Github for static hosting, and it hooks into github near
          | perfectly straight out of the box if that is something you
          | value.
        
         | qbasic_forever wrote:
         | I don't think you can set a page or URL on github to return a
         | 301 moved permanently response or similar 3xx codes. This can
         | really mess up your SEO if you have a popular page and try to
         | move off github, you'll basically lose all the clout on the URL
         | and have to start fresh. It might not matter for stuff you're
         | just tossing out there but is definitely something to consider
         | if you're putting a blog, public facing site, etc. there.
        
           | nobodywasishere wrote:
            | I have a few 301 redirects set up on github pages:
            | 
            |     $ curl https://nobodywasishere.github.io  # moved to https://blog.eowyn.net
            |     <html>
            |     <head><title>301 Moved Permanently</title></head>
            |     <body>
            |     <center><h1>301 Moved Permanently</h1></center>
            |     <hr><center>nginx</center>
            |     </body>
            |     </html>
            | 
            |     $ curl https://blog.eowyn.net/vhdlref-jtd  # moved to https://blog.eowyn.net/vhdlref
            |     <html>
            |     <head><title>301 Moved Permanently</title></head>
            |     <body>
            |     <center><h1>301 Moved Permanently</h1></center>
            |     <hr><center>nginx</center>
            |     </body>
            |     </html>
        
             | qbasic_forever wrote:
             | Is that coming back with a HTTP 200 response though and the
             | made up HTML page? That doesn't seem right... at least, I
             | dunno if google and such would actually index your page at
             | the new URL vs. just thinking "huh weird looks like
             | blog.eowyn.net is now called '301 Moved Permanently',
             | better trash that down in the rankings".
        
               | cptskippy wrote:
               | It shows up as a proper 301 when I load up the URL in
               | Firefox. The question is, how?
        
               | MD87 wrote:
               | One of the Jekyll plugins that GH Pages supports[0] is
               | jekyll-redirect-from, which lets you put a `redirect_to`
               | entry in a page's front matter.
               | 
               | [0]: https://pages.github.com/versions/
        
               | nobodywasishere wrote:
               | For the latter, `<meta http-equiv="refresh" content="0;
               | URL=https://blog.eowyn.net/vhdlref" />` in the `<head>`
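A meta refresh like the one above is interpreted client-side by parsing the HTML body, not by reading an HTTP `Location` header. As a rough sketch (assuming the exact tag format quoted above), this is how a client could extract the target:

```python
# Sketch: pulling the target URL out of a <meta http-equiv="refresh">
# tag like the one quoted above. Note this is HTML body content, not
# an HTTP 301 status line with a Location header, which is why
# crawlers may treat the two differently.
from html.parser import HTMLParser

class MetaRefreshParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.target = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("http-equiv", "").lower() == "refresh":
            # The content attribute looks like: "0; URL=https://example.com/"
            _, _, url = a.get("content", "").partition("URL=")
            if url:
                self.target = url.strip()

def meta_refresh_target(html: str):
    """Return the refresh target URL, or None if no such tag exists."""
    parser = MetaRefreshParser()
    parser.feed(html)
    return parser.target
```

For the tag quoted in the comment, `meta_refresh_target` returns `https://blog.eowyn.net/vhdlref`; for a page with no refresh tag it returns `None`.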
        
               | qbasic_forever wrote:
               | Wow, nice yeah I'd love to know how github supports
               | configuring a URL to 301 redirect!
        
             | chabad360 wrote:
             | Yea, no.
             | 
             | A 301 (or 302) redirect means setting the status code
             | header to 301 and providing a location header with the
             | place to redirect to. Last I checked GitHub doesn't allow
             | any of this, or setting any other headers (like cache-
             | control). To work around this, I've been putting cloudflare
             | in front of my site which lets me use page rules to set
             | redirects if necessary.
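For contrast with the meta-refresh workaround, here is a hedged sketch of what a real 301 involves, using only Python's standard library: the 301 goes in the status line and the destination goes in a `Location` header, the two things the comment above says GitHub Pages doesn't let you set. The path and target in the mapping are made-up examples:

```python
# Minimal sketch of a server that issues real 301 redirects: status
# code in the status line, destination in a Location header. The
# REDIRECTS mapping below is a hypothetical example.
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECTS = {
    "/old-page": "https://example.com/new-page",  # made-up mapping
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(301)               # 301 Moved Permanently
        self.send_header("Location", target)  # where clients should go
        self.end_headers()

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

def make_server(port=0):
    # Port 0 asks the OS for any free port; use a fixed port in practice.
    return HTTPServer(("127.0.0.1", port), RedirectHandler)
```

Running `make_server(8080).serve_forever()` would serve these redirects; a CDN or reverse proxy in front (as the comment describes with Cloudflare page rules) achieves the same effect without controlling the origin.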
        
       | riffic wrote:
       | there are services specifically for static site hosting. I'd let
       | them do the gritty devops work personally.
       | 
       | Netlify, Amplify, Cloudflare Pages, etc.
        
         | nilsandrey wrote:
          | I use them too. Sometimes I like to keep the static
          | content in repos, which get deployed by a CD tool to those
          | services. When debugging or testing locally on my PC or
          | LAN, it's common for me to include a docker build in those
          | repos, which I don't use in production but do use locally.
          | Maybe it's not a big problem at all, but I work that way,
          | especially when the CDN used in a project is not a free
          | one. Makes sense?
        
       | wereHamster wrote:
       | I used this as a base image for a static site, but then needed to
       | return a custom status code, and decided to build a simple static
       | file server with go. It's less than 30 lines, and image size is
       | <5MB. Not as small as thttpd but more flexible.
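The commenter's Go server isn't shown; as a rough equivalent under the same constraints (a tiny static file server with one custom status code bolted on), here is a sketch using Python's standard library. The `/health` endpoint and its 204 status are invented for illustration:

```python
# Sketch of a tiny static file server that can also return a custom
# status code for one path - the flexibility thttpd alone doesn't
# give you. `/health` returning 204 is a hypothetical endpoint.
import functools
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class StaticHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self.send_response(204)  # custom status code, no body
            self.end_headers()
            return
        super().do_GET()             # fall back to plain file serving

    def log_message(self, fmt, *args):  # keep request logging quiet
        pass

def make_server(directory, port=0):
    # Port 0 picks any free port; pass a fixed port for real use.
    handler = functools.partial(StaticHandler, directory=directory)
    return ThreadingHTTPServer(("127.0.0.1", port), handler)
```

Any single-binary language (Go, as the commenter chose, or Rust) gets the image smaller still, since a static binary plus the assets is all the container needs.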
        
       | kissgyorgy wrote:
       | Redbean is just 155Kb without the need for alpine or any other
       | dependency. You just copy the Redbean binary and your static
       | assets, no complicated build steps and hundred MB download
       | necessary. Check it out: https://github.com/kissgyorgy/redbean-
       | docker
        
         | tyingq wrote:
         | And it does https/tls, where thttpd does not.
        
           | somenewaccount1 wrote:
            | I'm confused how the author considers thttpd more 'battle
            | tested' if it doesn't support https.
            | 
            | Either way, it's a great article that I'm glad the author
            | took the time to write. His docker practices are
            | wonderful; I wish more engineers would use them.
        
             | cassandratt wrote:
              | "Battle tested" typically means that the code has been
              | running for a long time, bugs have been found and
              | squashed, and stability has been maintained for a long
              | time. Its usage predates the "information wars", back
              | when we really didn't think about security that much
              | because nothing was connected to anything outside the
              | company, so there were no hackers or security battles
              | back then. So I suspect this is the author's frame of
              | reference.
        
             | SahAssar wrote:
              | The term 'battle tested' has nothing to do with the
              | number of features; it's about how proven the stability
              | and/or security of the included features are. The term
              | also usually carries a heavy weight towards older
              | systems that have been used in production for a long
              | time, since those have had more time to weather bugs
              | that are only caught in real-world use.
        
               | bornfreddy wrote:
               | Also, https is often dealt with on a different server
               | (load balancer for example).
        
         | mrweasel wrote:
         | There's also the 6kB container, which uses asmttpd, a webserver
         | written in assembler.
         | 
         | https://devopsdirective.com/posts/2021/04/tiny-container-ima...
        
         | danuker wrote:
          | Wow! This is Redbean, an "Actually Portable Executable":
          | a binary that can run on a range of OSes (Linux, Windows,
          | MacOS, BSDs).
         | 
         | http://justine.lol/ape.html
        
           | adolph wrote:
           | Well worth a read:
           | 
           |  _I believe the best chance we have of [building binaries "to
           | stand the test of time with minimal toil"], is by gluing
           | together the binary interfaces that've already achieved a
           | decades-long consensus, and ignoring the APIs. . . .
           | Platforms can't break them without breaking themselves._
        
       | jandeboevrie wrote:
        | But why would you prefer Docker like this over, for example,
        | running thttpd directly? Wouldn't that save you a lot of RAM
        | and indirection?
        
         | qbasic_forever wrote:
         | Run this on a linux host and it isn't that much different from
         | running thttpd directly. There's just some extra chroot,
         | cgroups, etc. setup done before launching the process but none
         | of that gets in the way once it's running. Docker adds a bit of
         | networking complexity and isolation, but even that is easily
         | disabled with a host network CLI flag.
         | 
         | It's really only on windows/mac where docker has significant
         | memory overhead, and that's just because it has to run a little
         | VM with a linux kernel. You'd have the same issue if you tried
         | to run thttpd there too and couldn't find a native mac/windows
         | binary.
        
         | somenewaccount1 wrote:
          | For one, because his home server provides multiple
          | utilities, not just this one project, and without docker
          | he starts to have dependency conflicts.
          | 
          | He also likes to upgrade that server close to the edge,
          | and if that goes south, he wants to rebuild and bring his
          | static site back up quickly, along with his other projects.
        
           | gotaquestion wrote:
           | I serve several sites off an AWS EC2 instance, all are
           | dynamic REST endpoints with DBs in their own `tmux` instance.
           | I also have a five line nodeJS process running on another
           | port for just my static page. All of this is redirected from
           | AWS/r53/ELB. The only pain in the arse is setting up all the
           | different ports, but everything runs in its own directory so
           | there are no dependency issues. I've tried to ramp up with
           | docker, but I always end up finding it faster to just hack
           | out a solution like this (plus it saves disk space and memory
           | on my local dev machine). In the end my sol'n is still a hack
           | since every site is on one machine, but these are just sites
           | for my own fun. Perhaps running containers directly would be
           | easier, but I haven't figured out how to deal with disk space
           | (since I upload lots of stuff).
        
           | Yeroc wrote:
           | Well in the article he ended up compiling thttpd statically
           | so he wouldn't have dependency conflicts if he ran it
           | directly. Funny how there's overlap in docker solutions that
           | solve different but related issues for non-docker deploys as
           | well...
        
       | mr-karan wrote:
        | While this is a remarkably good hack and I did learn quite
        | a bit from reading the post, I'm simply curious about the
        | motivation
       | behind it? A docker image even if it's a few MBs with Caddy/NGINX
       | should ideally be just pulled once on the host and sit there
       | cached. Assuming this is OP's personal server and there's not
       | much churn, this image could be in the cache forever until the
       | new tag is pushed/pulled. So, from a "hack" perspective, I
       | totally get it, but from a bit more pragmatic POV, I'm not quite
       | sure.
        
         | rektide wrote:
         | I feel like there's a lot of low-hanging fruit on the table for
         | containers, and it's weird we don't try to optimize loading. I
         | could be wrong! This seems like a great sample use case-
         | wanting a fast/low-impact simple webserver for any of a hundred
         | odd purposes. Imo there's a lot of good strategies available
         | for making starting significantly larger containers very fast!
         | 
         | We could be using container snapshots/checkpoints so we don't
          | need to go through as much initialization code. This would
          | imply, though, that we configure via the filesystem or
          | something we can attach late, instead of 12-factor
          | configuration via env vars, as is the standard/accepted
          | convention these days. Actually, I suppose environment
          | variables are writable, but the webserver would need to be
          | able to re-read its config, accept a SIGHUP or whatever.
         | 
          | We could try to pin some specific snapshots into memory.
          | Hopefully Linux will keep any frequently booted-off
          | snapshot cached, but we could go further and make sure
          | hosts have the snapshot image in memory at all times.
         | 
         | I want to think that common overlay systems like overlayfs or
         | btrfs or whatever will do a good job of making sure, if
         | everyone is asking for the same container, they're sharing some
          | caches effectively. Validating that would be great to see.
          | To be honest I'm actually worried the need-for-speed
         | attempt to snapshot/checkpoint a container & re-launch it might
         | conflict somewhat- rather than creating a container fs from
         | existing pieces & launching a process, mapped to that fs, i'm
         | afraid the process snapshot might reencode the binary? Maybe?
         | We'd keep getting to read from the snapshot I guess, which is
         | good, but there'd be some duplication of the executable code
         | across the container image and then again in the snapshotted
         | process image.
        
         | throwaway894345 wrote:
         | It gets pulled once per host, but with autoscaling hosts come
         | and go pretty frequently. It's a really nice property to be
         | able to scale quickly with load, and small images tend to help
         | with this in a variety of ways (pulling but also instantiating
          | the container). Most sites won't need to scale like this,
          | however, because one or two hosts are almost always
          | sufficient for all the traffic the site will ever receive.
        
           | mr-karan wrote:
           | I did mention that it's the OP's server which I presume isn't
           | in an autoscale group.
           | 
            | Even then, saving a few MBs in image size is, in devops
            | parlance, premature optimisation.
           | 
           | There's so much that happens in an Autoscale group before the
           | instance is marked healthy to serve traffic, that an image
           | pull of few MBs in the grand scheme of things is hardly ever
           | any issue to focus on.
        
             | throwaway894345 wrote:
             | Yeah, like I said, I'm not defending this image in
             | particular--most static sites aren't going to be very
             | sensitive to autoscaling concerns. I was responding
             | generally to your reasoning of "the host will just cache
             | the image" which is often used to justify big images which
             | in turn creates a lot of other (often pernicious) problems.
             | To wit, with FaaS, autoscaling is highly optimized and tens
             | of MBs can make a significant difference in latency.
        
               | mr-karan wrote:
               | Noted, that makes sense. Thanks!
        
         | marginalia_nu wrote:
          | The fewer resources you use from your system, the more
          | things you can do with your system.
        
           | spicybright wrote:
            | That only matters if you're actually using those extra
            | cycles. The majority of web servers hover at <10% CPU,
            | just waiting for connections.
        
             | munk-a wrote:
             | I don't know if that's really true - if you're renting the
             | server from a cloud provider chances are you can bump down
             | the instance size if you don't need the extra processing
             | capacity... and if it's a server you manually maintain I
             | think lighter usage generally decreases part attrition,
             | though the other factors in that are quite complex.
        
       ___________________________________________________________________
       (page generated 2022-04-12 23:00 UTC)