[HN Gopher] Oasis: Small statically-linked Linux system
       ___________________________________________________________________
        
       Oasis: Small statically-linked Linux system
        
       Author : ingve
       Score  : 285 points
       Date   : 2022-08-14 12:35 UTC (10 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | tyingq wrote:
        | I'm curious how it handles the client-side DNS resolver. That's
        | one of the sticking points for "fully static linking".
       | 
        | Edit: Looks like it uses musl's resolver. Which is better than I
        | remember it being when I last used it, a long time ago. It does
        | still lack some things like IDN/punycode support. And things
        | like this:
       | https://twitter.com/richfelker/status/994629795551031296?lan...
       | 
        | Pluggable authentication modules (PAM) may also be an issue with
        | this setup; it probably requires at least a lot of manual
        | research and twiddling.
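        | 
        | For anyone curious what the resolver actually has to do: a
        | minimal lookup that exercises the libc resolver looks like the
        | sketch below (the hostname and the musl cross-compiler
        | invocation are just illustrative assumptions).
        | 
        |   /* dns.c -- everything here goes through the libc resolver,
        |    * which is why a fully static binary needs one (like musl's)
        |    * built in.
        |    * Build: x86_64-linux-musl-gcc -static dns.c -o dns */
        |   #include <stdio.h>
        |   #include <string.h>
        |   #include <sys/socket.h>
        |   #include <netdb.h>
        |   
        |   int main(void) {
        |       struct addrinfo hints, *res, *p;
        |       char host[256];
        |       memset(&hints, 0, sizeof hints);
        |       hints.ai_family = AF_UNSPEC;      /* A and AAAA records */
        |       hints.ai_socktype = SOCK_STREAM;
        |       int err = getaddrinfo("example.com", "80", &hints, &res);
        |       if (err) {
        |           fprintf(stderr, "%s\n", gai_strerror(err));
        |           return 1;
        |       }
        |       for (p = res; p; p = p->ai_next)
        |           if (!getnameinfo(p->ai_addr, p->ai_addrlen, host,
        |                            sizeof host, NULL, 0, NI_NUMERICHOST))
        |               printf("%s\n", host);
        |       freeaddrinfo(res);
        |       return 0;
        |   }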
        
         | suprjami wrote:
         | The musl resolver also lacks "no-aaaa" which glibc grew a week
         | or two ago.
        
       | gravypod wrote:
       | I'd love to see someone port this to Bazel and build a ruleset to
        | generate OS images for KVM/VirtualBox/flashing to bare metal.
       | This would be great for having a generic platform for building
       | embedded systems from source.
        
         | benreesman wrote:
         | I'm working on a Nix/NixOS based polyglot Bazel build
         | infrastructure (that we're using in production) and I've
         | _almost_ got `glibc` and `libstdc++` squeezed all the way out.
          | They're like fucking Kudzu (which is not an accident), but
         | I'll get 'em eventually. Once `glibc` and `libstdc++` are all
         | the way gone it will be a hop, skip, and a jump to doing what
         | you're describing.
         | 
         | I anticipate it being ready for open source as a standalone
         | flake by the end of 2022.
        
         | dijit wrote:
          | I literally had that conversation in their IRC channel and the
          | resounding consensus was "why would you want that?"
          | 
          | I was asking if anybody had done it yet, but since there are
          | people who want that I will submit some BUILD files to oasis.
          | :)
        
       | hulitu wrote:
       | > Minimal bootstrap dependencies. Any POSIX system with git, lua,
       | curl, a sha256 utility, standard compression utilities, and an
       | x86_64-linux-musl cross compiler can be used to bootstrap oasis.
       | 
       | So you need a custom POSIX system to bootstrap oasis.
       | 
       | > All software in the base system is linked statically,
       | 
        | This could be a good thing. However:
       | 
       | > including the display server (velox) and web browser (netsurf).
       | 
       | No X server and an obscure web browser. Useless.
        
         | sophacles wrote:
         | Name me an OS that doesn't need an OS to build it.
        
           | hulitu wrote:
           | It needs a very special OS: "git, lua, curl, a sha256
           | utility, standard compression utilities, and an x86_64-linux-
           | musl cross compiler"
        
             | sophacles wrote:
              | You mean besides just installing the relevant packages with
              | "pacman -S git lua52 gzip coreutils gcc musl" on a standard
              | Arch install? (presumably similar for other distros)
        
             | yjftsjthsd-h wrote:
             | Of those, only an x86_64-linux-musl cross compiler strikes
             | me as even remotely unusual, and that's still widely
             | available.
        
         | castlec wrote:
         | um, it says if you have the libs and configured compiler, you
         | can build it.
         | 
         | there's also nothing stopping you from doing static builds of
         | whatever application you want to include in your OS.
        
         | sleepydog wrote:
         | Velox appears to be a Wayland compositor, and supports
         | Xwayland.
        
       | jancsika wrote:
       | Would dlopen be disallowed on this system?
       | 
       | E.g., technically speaking most (all?) audio plug-ins are shared
       | libs...
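        | 
        | For illustration, this is the pattern such plug-in hosts rely on
        | (the plugin path and entry-point name below are made up); a
        | system without dlopen() can't do this at all:
        | 
        |   /* host.c -- sketch of a dlopen()-based plugin host.
        |    * Build: cc host.c -ldl */
        |   #include <stdio.h>
        |   #include <dlfcn.h>
        |   
        |   int main(void) {
        |       void *h = dlopen("./reverb.so", RTLD_NOW);
        |       if (!h) {
        |           fprintf(stderr, "dlopen: %s\n", dlerror());
        |           return 1;
        |       }
        |       /* Resolve the (hypothetical) entry point by name. */
        |       void (*process)(float *, int) =
        |           (void (*)(float *, int))dlsym(h, "plugin_process");
        |       if (process) {
        |           float buf[64] = {0};
        |           process(buf, 64);
        |       }
        |       dlclose(h);
        |       return 0;
        |   }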
        
       | waynecochran wrote:
       | Death to shared libraries. The headaches they cause are just not
        | worth the benefit. I have been dreaming of someone creating a
       | system devoid of them.
        
         | goodpoint wrote:
         | Absolutely not. Dynamic libraries are crucial for security.
         | 
          | Static linking has proved again and again to be a security
          | nightmare. It makes security updates dependent on upstream
          | releases or full rebuilds, making things difficult.
        
         | onlyrealcuzzo wrote:
         | What are the headaches with shared libraries?
         | 
         | Assuming that you pin the version of the library you need, two
         | applications using the same library /shouldn't/ cause issues.
         | 
          | In most *nix systems, each instance of a shared library has
         | its own data.
        
           | waynecochran wrote:
            | I am assuming you know the answer to the question if you have
            | been developing code for a while. What makes the problem
            | worse is that it is transitive. I link against Colmap libs,
            | which link against Ceres libs, which link against OpenMP, ...
            | these are ever-changing code bases and I have to create
            | executables that run on local machines, Azure cloud machines,
            | AWS cloud machines, ... chasing down shared lib mismatches
            | becomes a nightmare.
        
           | rattlesnakedave wrote:
           | " Assuming that you pin the version of the library you need,
           | two applications using the same library /shouldn't/ cause
           | issues."
           | 
           | Then why bother?
        
         | encryptluks2 wrote:
         | I used to feel the same way. In an ideal world all packages
         | would be automated and developers would have a way to alert
          | packaging tools that a new version is released, so that it can
         | be run through CI/CD and users would get new packages almost
         | instantaneously without any breaking changes.
         | 
         | However, until that happens there is nothing wrong with shared
         | libraries. They exist for good reason. They save bandwidth and
         | ensure that you're not getting old libraries.
        
           | waynecochran wrote:
            | They save bandwidth? Do you mean memory? You want to link
            | against the _exact_ same code you developed and tested your
            | program with. Ensuring you get an exact match is critical for
            | robust code.
        
             | encryptluks2 wrote:
              | Yes, dynamic libraries save a decent amount of bandwidth
              | when installing packages. Also, for the most part, no, I
              | don't always want the exact libraries. If library creators
              | have a security fix then I want the one with the fix, and
              | there are plenty of times I've seen developers create
              | something and barely maintain it afterwards. Package
              | maintainers are really good at keeping packages for
              | necessary libraries up to date.
        
         | ksec wrote:
          | I don't disagree. But I'm wondering if I'm missing anything
          | other than the usual cons, such as memory usage, code (storage)
          | size, and longer compile times.
        
           | waynecochran wrote:
           | I understand the motivation for shared libs is the reduced
           | memory footprint for large common libs. I have an alternative
           | idea that avoids the headaches of shared lib mismatches and
           | still can reduce the memory footprint. Have relocatable code
           | be statically linked but stored with a hash. When loading a
            | binary, see if common relocatable code is already loaded; if
           | it has the same hash then it is exactly the same and can be
           | safely shared. No version mismatch headaches.
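            | 
            | A toy sketch of that loader-side lookup (the names and the
            | FNV hash choice are mine, purely illustrative; a real loader
            | would use mmap, verify bytes on hash match, and handle
            | relocations):
            | 
            |   #include <stdint.h>
            |   #include <string.h>
            |   #include <stdlib.h>
            |   
            |   static struct { uint64_t hash; void *region; } reg[256];
            |   static int nreg;
            |   
            |   static uint64_t fnv1a(const unsigned char *p, size_t n) {
            |       uint64_t h = 1469598103934665603ULL;
            |       while (n--) h = (h ^ *p++) * 1099511628211ULL;
            |       return h;
            |   }
            |   
            |   /* Return the existing region for identical code,
            |    * or register a new one. */
            |   void *load_blob(const unsigned char *code, size_t len) {
            |       uint64_t h = fnv1a(code, len);
            |       for (int i = 0; i < nreg; i++)
            |           if (reg[i].hash == h)
            |               return reg[i].region;  /* share it */
            |       void *r = malloc(len);  /* stand-in for mmap */
            |       memcpy(r, code, len);
            |       reg[nreg].hash = h;
            |       reg[nreg].region = r;
            |       nreg++;
            |       return r;
            |   }
            |   
            |   int main(void) {
            |       unsigned char ret[] = { 0xc3 };  /* stub "code" */
            |       /* identical blobs come back as one region */
            |       return load_blob(ret, 1) == load_blob(ret, 1) ? 0 : 1;
            |   }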
        
             | eru wrote:
             | Wouldn't you want to randomise some stuff to make attacks
             | harder? (Sorry, I forgot the details.)
        
             | cesarb wrote:
             | > Have relocatable code be statically linked but stored
             | with a hash. When loading a binary see if common
             | relocatable code is already loaded .. if it has the same
             | hash then it is exactly the same and can be safely shared.
             | 
             | It's not actually "code" which is being shared, but instead
             | the memory pages containing that code (the sharing
             | granularity is a single page, since sharing works by
             | pointing the page table entries on each process to the same
             | physical page). So either you'd have to force page
             | alignment for each and every function (highly wasteful,
             | since the page size is at least 4096 bytes), or you're
             | reinventing shared libraries but using a hash instead of a
             | version.
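              | 
              | To put a number on the "highly wasteful" part (the 128-byte
              | average function size below is just an assumption):
              | 
              |   #include <stdio.h>
              |   #include <unistd.h>
              |   
              |   int main(void) {
              |       long page = sysconf(_SC_PAGESIZE); /* often 4096 */
              |       long fn = 128;  /* assumed avg function size */
              |       long padded = ((fn + page - 1) / page) * page;
              |       printf("%ld of %ld bytes used (%.0f%% waste)\n",
              |              fn, padded,
              |              100.0 * (padded - fn) / padded);
              |       return 0;
              |   }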
        
               | waynecochran wrote:
               | This is not like shared libraries in that my program is
               | guaranteed to load and run regardless of what libs are on
               | the system. Also, I am guaranteed my program is running
               | with the exact same library I tested it with.
        
               | waynecochran wrote:
               | That's finer grained than I imagined. I am assuming
               | hashes for large pieces of code that are completely
               | relocatable in memory.
        
             | 8organicbits wrote:
             | I think often the binaries will have been compiled against
             | slightly different versions. With dynamic linking, I can
             | swap 1.0.1 for 1.0.2 or 1.0.3 without issue. The hash would
             | require an exact match, which is likely rare.
        
               | waynecochran wrote:
               | I want an exact match. That is what I developed and
               | tested against. A change, however small, can have
               | unintended consequences. Changing versions is a huge
               | issue from a reliability point of view.
        
         | hlsnqp wrote:
         | So next time a library such as OpenSSL has a critical security
         | vulnerability that needs to be patched, you'd rather every
         | single binary that has it statically linked is updated, rather
         | than a single shared library file?
        
           | flohofwoe wrote:
           | Considering the complexity that shared libraries add over
           | static linking: yes absolutely.
        
             | palata wrote:
              | Is it only that good libraries should correctly use
             | semver (i.e. be backward-compatible unless they change the
             | major version), or is there complexity somewhere else?
             | 
             | My feeling is that dynamic linking is a pain because
             | sometimes one needs, say, 1.2.5 for some part of the code
             | and 1.3.1 for another part, and they are not compatible.
             | Which hints towards the library author having messed up
             | something, right?
        
           | rattlesnakedave wrote:
           | Google "OpenSSL dependency conflict" and view the results.
           | There is significantly more work going on when OpenSSL
            | releases an update than "just update the single shared
           | library file." Applications often need to be updated as well,
           | so you may as well just statically link the new binary.
        
             | itvision wrote:
             | Haven't seen a single OpenSSL issue over the past 25 years
             | of using Linux.
             | 
             | libstdc++ errors? Seen them.
        
               | pfarrell wrote:
               | Libstdc++ errors notwithstanding, what about heartbleed?
               | 
               | https://en.m.wikipedia.org/wiki/Heartbleed
        
               | RealStickman_ wrote:
               | I think the parent was talking about dependency
               | conflicts, not issues in OpenSSL.
        
             | nicce wrote:
              | It really depends on whether the bug is in the API or not.
              | If it is not, it is likely that upgrading only the shared
             | library is enough.
        
               | convolvatron wrote:
               | How lovely to live in a world where api contracts are
               | absolute and complicated internal behaviors never
               | conflict in unforeseen ways. We've really gotten on top
               | of the software complexity problem
        
               | hedora wrote:
               | That's exactly the problem deb files and apt solve (via
                | integration testing and documenting antidependencies).
        
               | arinlen wrote:
               | > _How lovely to live in a world where api contracts are
               | absolute and complicated internal behaviors never
               | conflict in unforeseen ways._
               | 
               | Some libraries managed to preserve API contracts
               | throughout the years just fine.
               | 
                | Even C++, a language which does not establish a standard
                | ABI, has frameworks which have not broken the ABI across
                | multiple releases, such as Qt.
        
           | glowingly wrote:
           | We just didn't update Log4j since it broke a critical, old,
           | closed source offline program. Vendor has long dropped
            | support for this old program. Chalk that up to the usual
            | executive missteps that we get from on high.
           | 
           | Yay, we get to keep on using old Log4j now because one
           | program holds it back.
        
             | cesarb wrote:
             | If the "old log4j" is 1.2.x, you can update to reload4j
             | (https://reload4j.qos.ch/), which is a fork of the latest
             | log4j 1.2.x by its initial author; you don't have to update
             | to log4j 2.x (which changed the API). And you might not
             | even need to update, since "old log4j" (1.2.x) wasn't
             | affected by that vulnerability in the first place (it has
             | its own old vulnerabilities, but they're all in code which
             | is usually not used by real world deployments).
        
           | Ericson2314 wrote:
           | If you keep around the .o / .a files this isn't so bad.
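            | 
            | i.e. something like this (file and library names are
            | placeholders; the point is that only the final link step is
            | repeated when the .a is patched):
            | 
            |   /* app.c
            |    * cc -c app.c -o app.o        # compile once, keep app.o
            |    * cc -static app.o libtls.a -o app  # rerun per patch */
            |   #include <stdio.h>
            |   
            |   int tls_handshake(void);  /* provided by libtls.a */
            |   
            |   int main(void) {
            |       printf("handshake: %d\n", tls_handshake());
            |       return 0;
            |   }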
        
           | ori_b wrote:
           | Sure. Networks are fast enough.
        
             | hlsnqp wrote:
             | What about closed-source binaries where a patched binary
             | can't be obtained from the vendor?
        
               | ori_b wrote:
               | The same thing I do if the vulnerability is in the binary
               | itself, and not in a library.
               | 
               | Surely, if you care, you have a mitigation plan for this?
        
               | vbezhenar wrote:
               | Binary patch it.
        
               | IshKebab wrote:
               | Closed source programs almost universally bundle all
               | dependencies (except system ones) irrespective of whether
                | they are using static or dynamic linking. They have to,
                | to ensure that they actually run reliably.
        
               | Beltalowda wrote:
                | With closed-source binaries the argument for static
                | linking is the strongest, IMHO, or at least shipping
                | _all_ the required shared libraries and pointing
                | LD_LIBRARY_PATH at them. The number of times I've had to
                | muck about to make a five-year-old game work on a current
                | system is somewhat bonkers, and I would never expect
                | someone unfamiliar with shared library details (i.e. a
                | normal non-tech person, or even a tech person not
                | familiar with C/Linux) to figure all of that out.
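                | 
                | The usual workaround is a tiny launcher like this (paths
                | and binary name are illustrative) that points the loader
                | at the bundled libs before starting the real game:
                | 
                |   /* launcher.c */
                |   #include <stdlib.h>
                |   #include <unistd.h>
                |   
                |   int main(int argc, char **argv) {
                |       (void)argc;
                |       /* Prefer the shipped .so files over the
                |        * system's. */
                |       setenv("LD_LIBRARY_PATH", "./lib", 1);
                |       execv("./game.bin", argv);
                |       return 1;  /* only reached if execv failed */
                |   }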
        
               | eru wrote:
               | You could link locally, or you could opt not to use them.
               | Or you could restrict dynamic linking to these special
               | cases.
        
             | eru wrote:
             | Or you can link locally.
        
           | marssaxman wrote:
           | Oh, yes, I certainly would. Shared libraries are an
           | expensive, trouble-prone solution for that relatively
           | uncommon problem.
           | 
           | Anyway that's not why we have shared libraries. They were
           | invented so that memory managers could load the same library
           | once, then map it into the address space for every process
           | which used it. This kind of savings mattered back when RAM
           | was expensive and executable code represented a meaningful
           | share of the total memory a program might use. Of course that
           | was a very long time ago now.
        
             | arinlen wrote:
             | > _Oh, yes, I certainly would. Shared libraries are an
             | expensive, trouble-prone solution for that relatively
             | uncommon problem._
             | 
              | This personal assertion has no bearing on the real world.
              | Please do shine some light on what exactly you interpret
              | as "expensive", and in the process explain why you feel
              | that basic software design principles like modularity and
              | the ability to fix security vulnerabilities in the whole
              | system by updating a single shared library are "trouble-
              | prone".
              | 
              | I mean, the whole world never saw a real-world problem, not
              | the UNIX community nor Microsoft and its Windows design nor
              | Apple and its macOS design, not anyone ever, but somehow
              | here you are claiming the opposite.
        
               | jen20 wrote:
               | > the whole world never saw a real world problem, not the
               | UNIX comunity nor Microsoft and it's Windows design
               | 
               | "DLL hell" [1] was absolutely a thing. It was solved only
               | by everyone shipping the world in their application
               | directories.
               | 
               | [1]: https://en.wikipedia.org/wiki/DLL_Hell
        
               | naasking wrote:
               | Not to mention dynamic linking is brutally slow. I don't
               | think people even realize how many cycles are wasted. The
               | notion that dynamic linking never had any real problems
               | and still doesn't is totally false.
        
               | jiggawatts wrote:
               | > basic software design principles like modularity
               | 
               | A single product can have a modular design, but
               | modularity across _unrelated_ products designed by
               | different designers is a different concept that should
               | have a different name.
               | 
               | In my opinion it's just not something that really works.
               | This is similar to the "ability" of some software to have
               | multiple pluggable database engines. This always yields a
               | product that can use only the lowest common
               | denominator... badly.
               | 
               | But now imagine having to coordinate the sharing of
               | database engines across multiple relying products! So
               | upgrading Oracle would break A, B and fix C, but
                | switching A and B to MySQL would require an update to
               | MySQL that would break D, E and F.
               | 
               | That kind of thing happens all the time and people run
               | away screaming rather than deal with it.
               | 
               | Shared libraries are the same problem.
        
             | benreesman wrote:
             | Also: text pages are shared in the typical case. It was
             | largely a spurious argument to begin with. If it didn't
              | fuck over `musl` et al. and `libstdc++` it would have been
             | relegated to corner cases years ago.
        
               | comex wrote:
               | Is your first sentence missing a negation, or can you
               | elaborate?
        
               | benreesman wrote:
               | I should have been more precise, but that can become a
               | treatise by HN comment standards easily so consider there
               | to be an asterisk next to all of this for myriad
               | platform/toolchain corner cases.
               | 
               | To the extent that shared libraries economize on memory,
               | or are claimed to, it's when a lot of different processes
               | link the same version of the same library, e.g. your
               | `libc` (or the original motivating example of X). The
               | tradeoff here is that the kernel has an atomic unit of
               | bringing data into RAM (AKA "core" for the old-timers),
                | which is usually a 4kb page, and those pages are
                | indivisible: you use `memcpy` and you get it and whatever
               | else is in the 4kb chunk somebody else's linker put it
               | in. Not a bad deal for `memcpy` because it's in
                | _everything_, but a very bad deal for most of the
               | symbols in e.g. `glibc`.
               | 
               | The counter-point is that with static linking, the linker
               | brings in _only the symbols you use_. You do repeat those
               | symbols for each `ELF`, but _not for each process_. This
               | is often (usually?) a better deal.
               | 
               | A little poking around on my system to kind of illustrate
               | this a bit more viscerally: https://pastebin.com/0zpVqA0r
               | (apologies for the clumsy editing, it would be too long
               | if I left in every symbol from e.g. `glibc`).
               | 
               | Admittedly, that's a headless machine, and some big
               | clunky Gnome/Xorg thing might tell a different story, but
               | it would have to be 100-1000x worse to be relevant in
               | 2022, and almost certainly wouldn't offset the (heinous)
                | costs. And while this is speculation, I suspect the
                | difference would actually be negative, because there just
                | aren't that many distinct `ELF`s loaded on any typical
                | system even with a desktop environment. Chromium or
                | whatever is going to be like, all your RAM even on KDE.
               | 
                | There are a couple of great CppCon talks that really go
               | deep here (including one by Matt Godbolt!):
               | https://www.youtube.com/watch?v=dOfucXtyEsU,
               | https://www.youtube.com/watch?v=xVT1y0xWgww.
               | 
               | Don't be put off by the fact it's CppCon, all your GNU
               | stuff is the same story.
        
             | Groxx wrote:
              | Uncommon relative to _all_ uses of shared libraries maybe,
              | but certainly not uncommon for security-oriented libraries.
              | A couple times per year is WAY more than often enough to be
              | worth shared libraries, as otherwise you're talking about
              | rebuilding much of the world multiple times per year _per
              | security-oriented library_.
             | 
             | And nearly all libraries eventually become "security-
             | oriented libraries", given enough time. Everything is a
             | hacking target eventually.
        
           | alkonaut wrote:
           | This argument comes up every time this discussion comes up
           | (on flatpaks etc). And for _almost_ all systems I'd say "yes,
           | absolutely".
           | 
           | One headache over another.
           | 
           | What's to guarantee applications haven't statically linked a
           | library (in whatever version) rather than linked a "system"
           | version of a library?
           | 
           | With the dynamic one you have both problems...
        
             | arinlen wrote:
             | > _And for almost all systems I'd say "yes, absolutely"._
             | 
             | Do you even understand that you are advocating in favour of
             | perpetually vulnerable systems?
             | 
              | Let's not touch the fact that there are vague and fantastic
              | claims about shared libraries being this never-ending
              | source of problems (which no one actually experiences),
              | where, when asked to elaborate, the best answer is hand-
              | waving and hypotheticals and non-sequiturs.
              | 
              | With this fact out of the way, what is your plan to update
             | all downstream dependencies when a vulnerability is
             | identified?
        
               | csande17 wrote:
               | Before you decry statically linked systems as
               | "perpetually vulnerable", I'd advise you take a look at
               | the number of privilege escalation vulnerabilities caused
               | by LD_PRELOAD alone.
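                | 
                | For anyone who hasn't seen the mechanism: any ordinary
                | dynamically linked binary will happily have libc calls
                | interposed. This sketch is the classic demo, not a real
                | exploit (setuid binaries restrict LD_PRELOAD):
                | 
                |   /* shim.c
                |    * cc -shared -fPIC shim.c -o shim.so
                |    * LD_PRELOAD=./shim.so id   # uid appears as 0 */
                |   #include <unistd.h>
                |   
                |   /* Every getuid() call in the target returns 0. */
                |   uid_t getuid(void) {
                |       return 0;
                |   }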
        
               | hedora wrote:
               | That's like claiming that a leaky sump pump sank the
                | Titanic.
               | 
               | Linux has never been anywhere close to resistant to
               | privilege escalation bugs.
        
               | arinlen wrote:
               | > _Before you decry statically linked systems as
               | "perpetually vulnerable"_
               | 
               | But statically linked systems are perpetually vulnerable
               | by design, aren't they?
               | 
               | I mean, what's your plan to fix a vulnerable dependency
               | once a patch is available?
               | 
               | Are you tracking each and every dependency that directly
               | or indirectly consumes the vulnerable library? Are you
               | hoping to have access to their source code, regardless of
               | where they came from, and rebuild all libraries and
               | applications?
               | 
               | Because with shared libraries, all it takes to patch a
               | vulnerable library is to update just the one lib, and all
               | downstream consumers are safe.
               | 
               | What's your solution for this problem? Do you have any at
               | all?
        
               | csande17 wrote:
               | What's your plan if fixing a vulnerability requires
               | changing the size of a struct?
        
               | arinlen wrote:
               | > _What 's your plan if fixing a vulnerability requires
               | changing the size of a struct?_
               | 
               | Do you actually have an answer any of the points I
               | presented you, or are you going to continue desperately
               | trying to put up strawmen?
               | 
                | If any of your magical static lib promises had any bearing
               | in the real world, you wouldn't have such a hard time
               | trying to come up with any technical justification for
               | them.
        
               | csande17 wrote:
               | The argument that you seem to be making is that under
               | dynamic linking, there is never any reason to need to
               | recompile an application, and thus the fact that static
               | linking forces you to recompile applications makes it
               | non-viable from a security perspective.
               | 
               | Unfortunately, this is very far from being true in the
               | real world.
               | 
               | And even if all libraries did maintain 100% ABI
               | compatibility forever, and even if there were never any
               | compiler or processor bugs that needed to be mitigated by
               | recompiling applications, dynamic linking would still add
               | runtime complexity and thus surface area for security
               | vulnerabilities.
               | 
               | > Are you tracking each and every dependency that
               | directly or indirectly consumes the vulnerable library?
               | Are you hoping to have access to their source code,
               | regardless of where they came from, and rebuild all
               | libraries and applications?
               | 
               | Yes? I mean, are you just throwing random untrusted
               | binaries onto your servers, not keeping track of what
               | their dependencies are, and hoping to God that you will
               | never ever need to recompile them? (And if your
               | explanation for that is "I use proprietary software from
               | vendors who refuse to share source code", have you chosen
               | incompetent vendors who cannot respond to security
               | vulnerabilities in a timely fashion?)
        
               | AlphaCenturion wrote:
               | What is your plan?
        
               | csande17 wrote:
               | Four-part plan is this:
               | 
               | 1. For each application and library in the system,
               | maintain a copy of the source code and the scripts
               | necessary to rebuild it.
               | 
               | 2. Keep track of the dependency tree of each application
               | and library in the system.
               | 
               | 3. If there's a problem with a dependency, update it and
               | rebuild all dependent applications.
               | 
               | 4. If I'm using binaries from a vendor (whether that's a
               | Linux distribution or a proprietary software company),
               | the vendor needs to be responsible for (1) to (3).
               | 
               | In practice, your package manager will generally
               | implement almost all of (1) to (3) for you. (Newfangled
               | systems like NPM or Cargo usually do it using lockfiles
               | and automated tools like Dependabot.)
               | 
               | If you're using dynamic linking, guess what? _You still
               | have to do these four things_ , because even with dynamic
               | linking, there are still problems that can only be solved
               | by recompiling your software. Even if you think reasons
               | like ABI incompatibility or processor bugs aren't
               | compelling, there might be, y'know, bugs in the
               | applications themselves that you've gotta patch.
        
             | LtWorf wrote:
             | Flatpaks won't be automatically recompiled though. It might
             | take years before they bump whatever vulnerable thing they
             | are using.
             | 
             | > What's to guarantee applications haven't statically
             | linked a library (in whatever version) rather than linked a
             | "system" version of a library?
             | 
              | If you have the sources, it's easy to see which commands
              | are given to the compiler and whether a library has been
              | copied in.
        
               | alkonaut wrote:
               | Yes but in general a system can safely be assumed to
                | contain at least some software which was built elsewhere.
               | Especially in more proprietary worlds like on Windows and
               | MacOS.
        
               | gravypod wrote:
               | > Flatpaks won't be automatically recompiled though. It
               | might take years before they bump whatever vulnerable
               | thing they are using.
               | 
               | If we made all builds reproducible (like oasis) we could
                | set up an automated trusted system that does builds
               | verifiably automatically (like what Nix has).
        
               | goodpoint wrote:
               | There are already trusted sources of software packages
               | that provide security updates: traditional Linux
               | distributions.
               | 
               | Doing automated, local patching and rebuilding is far
               | from enough. You need experts to correctly backport
               | patches and test them thoroughly.
        
               | LtWorf wrote:
               | People bashing distributions seem to ignore that the
               | alternative is the google play store.
        
               | bayindirh wrote:
               | No, no... We need to move very fast, and break all the
               | things. Tradition and conservatism is not the way. We
               | need to compile everything and anything. Change the
               | paradigm, break the status quo!
               | 
               | Oh, just like Gentoo, Slackware and LFS. ;)
               | 
               | Seriously, maybe we shouldn't ignore history and learn
               | why things are like this today.
        
               | bayindirh wrote:
                | Which is what the Reproducible Builds project has been
                | striving to do for quite some time.
               | 
               | It's not just a simple switch you flick and recompile.
        
           | enriquto wrote:
           | > you'd rather every single binary that has it statically
           | linked is updated, rather than a single shared library file?
           | 
           | Yes! A thousand times yes. I'd rather reinstall every single
           | compromised program than deal with the ridiculous complexity
           | of dynamic linking.
        
           | worldshit wrote:
            | You only need to relink, not recompile. I think oasis caches
            | artifacts, so there's no overhead.
        
           | waynecochran wrote:
            | Yes. I can think of worse problems w/ shared libs. Should we
           | start a list?
        
           | indy wrote:
           | Yes
        
           | kortex wrote:
           | In the majority of cases: Absolutely. That is basically the
           | direction things have gone with "just docker build the image
           | and redeploy".
           | 
            | The amount of frustration dynamic linking causes is just not
            | worth it these days.
           | 
            | I think that calculus looked very different 20, 10, maybe even
           | 5 years ago, when getting packages rebuilt and distributed
           | from a zillion different vendors was much harder.
        
             | Wowfunhappy wrote:
             | This. If we'd just statically link executables, we wouldn't
             | need Docker. Docker is a solution for a problem that only
             | exists because of dynamic linking, and the tool's
             | popularity should be a wakeup call.
        
               | icedchai wrote:
               | Not really, you'd still have a problem of building with
               | possibly conflicting system dependencies, packaging (a
               | dev team that can build its own .debs or .rpms is a
               | rarity), ease of deployment / distribution, and running
               | multiple instances. Developers don't want to deal with
               | that crap.
        
               | maccard wrote:
               | That's not true.
               | 
               | Docker/containers do far more than just bundle shared
               | libraries. We use a container for almost all of our CI
               | builds. Our agents are container images with the
                | toolchains, etc. installed. It also means it's trivially
               | easy to reproduce the build environment for those
               | occasional wtf moments where someone has an old version
               | of <insert tool here>.
        
               | Wowfunhappy wrote:
               | But if the tools were all statically linked, couldn't you
               | ship a zip with the binaries needed for building?
        
               | maccard wrote:
               | How do you update those tools and make sure you've
               | deleted all the old versions? What do you do to avoid the
               | problem of a developer SSH'ing and apt-get installing a
               | build dependency?
               | 
               | Tools aren't the only part of the environment though.
               | Setting the JAVA_HOME environment variable for a build
               | can dramatically change the output, for example. Path
               | ordering might affect what version of a tool is launched.
               | 
               | They also mean that our developers don't need to run
                | CentOS 7; they can develop on Windows or Linux, and test
               | on the target platform locally from their own machine.
        
               | josephg wrote:
               | Setting JAVA_HOME matters, again, because the Java
               | runtime environment is essentially dynamically linked to
               | the Java program you're running.
               | 
                | The nice thing about fully statically linked executables
                | is that they can just be copied between machines and run.
                | If Java worked this way, the compiler would produce a
                | (large) binary file with no runtime dependencies, and you
                | could just copy that between computers.
               | 
               | Rust and Go don't give you a choice - they only have this
               | mode of compilation.
               | 
               | Docker forces all software to work this way by hiding the
               | host OS's filesystem from the program. It also provides
               | an easy to use distribution system for downloading
               | executables. And it ships with a Linux VM on windows and
               | macos.
               | 
               | But none of that stuff is magic. You could just run a VM
               | on macos or windows directly, statically link your Linux
               | binaries and ship them with scp or curl.
               | 
               | The killer feature of docker is that it makes anything
               | into a statically linked executable. You can just do that
               | with most compilers directly.
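                | 
                | Concretely, something like this (musl target assumed;
                | glibc's static mode is patchier):
                | 
                |   /* hello.c
                |    * x86_64-linux-musl-gcc -static hello.c -o hello
                |    * ldd ./hello   # "not a dynamic executable" */
                |   #include <stdio.h>
                |   
                |   int main(void) {
                |       puts("hello from a fully static binary");
                |       return 0;
                |   }
                | 
                | The resulting file runs on any machine with a compatible
                | kernel, no Docker required.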
        
               | maccard wrote:
                | JAVA_HOME is one example. CC is another for most C build
                | systems; GOROOT, too.
               | 
               | > But none of that stuff is magic.
               | 
               | I never said it was, I simply said that docker does more
               | than statically link binaries.
               | 
               | >You could just run a VM on macos or windows directly,
               | statically link your Linux binaries and ship them with
               | scp or curl.
               | 
               | https://news.ycombinator.com/item?id=9224
               | 
               | > The killer feature of docker is that it makes anything
               | into a statically linked executable.
               | 
               | The killer feature of docker is it gives you an
               | immutable*, versionable environment that handles more
               | than just scp'ing binaries.
        
               | benreesman wrote:
               | Indeed, you could. Linus takes backward compatibility
               | more than a little seriously.
        
               | vbezhenar wrote:
                | Docker provides much more isolation than dynamic
                | libraries do.
                | 
                | Static linking is not a new thing. It has been there
                | since the beginning; dynamic linking is the newer thing.
        
               | goodpoint wrote:
               | > Docker provides much more isolation
               | 
               | The very opposite. You can do all the sandboxing you want
               | without needing docker and you will avoid docker's big
               | attack surface.
        
               | reegnz wrote:
               | This is painfully reductionist. The mount namespace is
               | only one of multiple namespaces (eg. pid, network, user,
               | etc.) that containers utilize. Security doesn't stop at
               | linking and shipping binaries.
        
               | [deleted]
        
             | goodpoint wrote:
              | No: docker, static linking, and flatpak all share the same
              | issue of being unable to provide reliable and tested
              | security updates.
             | 
             | Once you bundle multiple libraries into blobs you always
             | have the same security nightmare:
             | 
             | https://www.researchgate.net/publication/315468931_A_Study_
             | o...
             | 
             | (and on top of that docker adds a big attack surface)
        
         | benreesman wrote:
         | When it comes to mandatory dynamic linking, it's always a
         | Beautiful Day for an auto-da-fe: death is too good for
         | mandatory `glibc` and `libstdc++`.
         | 
         | As for a system devoid of it, I have grand plans, but for now:
          | people do it with Docker every day! That's what Docker _is_.
          | It's not broadly technically feasible right now to get a low-
         | friction static link on a mainstream distro with mainstream
         | libraries, but life, erm, technology Finds a Way. For now
         | that's Docker, but the modest </s> enthusiasm around it shows
         | that people want a way out pretty fucking bad and there will be
         | better options soon.
        
           | arinlen wrote:
           | > _As for a system devoid of it, I have grand plans, but for
           | now: people do it with Docker every day! That 's what Docker
           | is._
           | 
           | No, Docker really isn't what you think it is.
           | 
           | Docker isolates processes, and provides a convenient platform
           | independent way to deliver stand alone applications, complete
           | with software defined networking and file system abstraction,
           | at the expense of shipping the whole tech stack, including
           | everything plus the kitchen sink like full blown web servers.
           | 
           | Thus a .DEB package which otherwise would be 10MB ends up
           | being a Docker image often weighing over 100MB.
           | 
           | And pray tell what's Docker's solution to software updates?
           | Well, it's periodic builds that force downstream dependencies
           | and users to re-download payloads of well over 100MB a pop.
           | 
           | And you advocate this as an alternative to shared libraries?
           | 
           | Who in their ri
        
             | benreesman wrote:
             | I advocate `musl`/etc. and LLVM as an alternative to
             | `fuck.this.so.0.0.4 -> ...`. I switched from Docker to Nix
             | some time ago (because of many of the pain points you
             | mention).
             | 
             | But in a world where native applications worked on any
             | kernel >= the one it was built on, you'd rapidly find out
             | that bin-packing the webserver and webcrawler into a finite
             | list of SKUs (which is where all this container mania
              | originated) via namespaces and cgroups is really not a
              | problem that most Docker users have, and there are much
             | simpler ways to get two versions of `node.js` looking at
             | two different sets of libraries.
             | 
             | Because you can't reliably know what devilry just stomped
             | all over your `/usr/lib/x86_64/...` or whatever, you kind
             | of have to eat that 100MB image if you want software that
             | worked on Tuesday to still work on Friday.
             | 
             | It's a sad state of affairs, but This Too Will Pass.
             | Getting the word out that _the reason Docker saves your ass
             | on reproducibility is emulation of static linking behind
              | the FSF's back_ will accelerate that process.
        
               | matheusmoreira wrote:
               | > in a world where native applications worked on any
               | kernel >= the one it was built on
               | 
                | Isn't that the world we live in? The Linux kernel has a
                | stable binary interface. Problems almost always arise in
                | user space.
        
               | benreesman wrote:
               | Ah, I phrased that clumsily. Linux/Linus are _fanatical_
                | about not breaking userspace (God bless them). When your
                | stuff doesn't run on a different system, it's because
                | some asshat clobbered your `/usr/lib/...`.
        
               | akiselev wrote:
               | _> the reason Docker saves your ass on reproducibility is
                | emulation of static linking behind the FSF's back_
               | 
               | Can you expand on this? It's an interesting perspective I
               | haven't heard before.
               | 
               | I get the gist but even without a Dockerfile, a container
               | image is much less opaque and much more deterministic
               | than a binary with a statically linked library where the
               | compiler might inline entire functions and perform
               | optimizations across module boundaries with LTO. It's not
               | as simple as setting LD_LIBRARY_PATH but it's not like
               | trying to reverse the information lost during
               | compilation.
        
               | matheusmoreira wrote:
               | > Can you expand on this?
               | 
                | Bundling dependencies is essentially static linking that
               | doesn't violate the GPL. Static linking means all
               | dependencies are compiled into the binary. Modern
               | technology like docker, appimages, snaps, etc. all
               | emulate that by creating a static image with all the
               | dependencies and then dynamically linking to those
               | specific libraries. So now we have a secondary linker and
               | loader on top of the operating system's.
        
               | benreesman wrote:
               | There's a big subthread on this post about it:
               | https://news.ycombinator.com/item?id=32459755.
               | 
               | Hermetic builds (Bazel, Nix, etc.) are in some sense
               | orthogonal to packaging and delivery via e.g. a Docker
               | container.
               | 
               | It's true that it takes great care to get bit-for-bit
               | identical output from a C/C++ toolchain, and it's also
               | true that very subtle things can send the compiler and
               | linker into a tailspin of "now I inline this and now I
               | don't have room to inline that and now other optimization
               | passes do something different" and whoa, my program just
               | got a lot faster or slower, yikes. I forget the title
               | now, but there was a great Strange Loop talk about how if
                | you do good statistics on repeated builds, or even just
                | put more or less stuff into `argv`, the difference
               | between `-O2` and `-O3` doesn't hold up (on either `gcc`
               | or `clang`).
               | 
               | But whether the final executable is delivered to the
               | target system via Docker or `scp` doesn't change any of
               | that. If you want identical builds the Bazel and Nix
               | people have a high-investment/high-return sales pitch
               | they'd like you to hear :).
               | 
               | And it's _also_ true (as another commenter pointed out)
                | that it's at least a little bit reductionist of me to
               | say "Docker gives you static linking". Docker does a
               | zillion things, and for the people using the other stuff,
               | more power to them if it's working well.
               | 
               | I would contend that most of them (especially since QEMU
                | emulating x86_64 on your ARM Mac or vice versa isn't
               | really a great story anymore) are not the real selling
               | point though: you already had `virtualenv` and `nvm` and
               | `rustup` and just countless ways to isolate/reproduce
               | _everything except_ the contents of your friendly
                | neighborhood `/usr/lib/...`. If the native compiled code
                | underpinning all of this stuff had all its symbols
               | linked in, and your kernel is the same or newer, you
               | could use a tarball (and I've worked places where we did
               | just that)!
               | 
               | `git` was probably the tool that really sold everyone on:
               | I want a goddamned cryptographic hash of everything in
               | the thing, otherwise I Don't Trust Like That. We don't
               | accept mystery meat from our source code anymore, why we
               | feel any differently about _what actually runs_ is
                | Stockholm syndrome IMHO.
        
         | SkyMarshal wrote:
         | _> I have been dreaming of someone cresting a system devoid of
         | them._
         | 
         | Doesn't that pretty much already exist now with immutable
         | system builds, eg Fedora Silverblue, GUIX, NixOS? Each package
         | pulls in its own dependencies with the specified version and
         | config, and then no changes can be made once the system is
         | built. Only way to update is to rebuild the system with the
         | updated versions. Costs more disk space but that is cheap these
         | days, and solves shared dependency hell.
        
         | speed_spread wrote:
         | You're just embracing another set of problems. Shared libraries
          | were introduced for a reason. They are complex to handle, but
         | that's a long standing tooling problem rather than an inherent
         | issue of the model. I think we're actually just getting the
         | hang of it with stuff like Nix and containers.
        
           | flohofwoe wrote:
           | > Shared libraries were introduced for a reason.
           | 
           | Those reasons were floppy disks and RAM that was measured in
           | kilobytes though.
        
           | rattlesnakedave wrote:
           | "Shared libraries were introduced for a reason."
           | 
           | I'm pretty sure the reason is "X11 was large and badly
           | factored, and people who should have known better thought it
           | would save disk space."
           | 
           | The problems they claim to solve are updates across
           | applications at once, but in the enterprise how often does
           | that actually happen? Application authors still have to test
           | against the new version, and sysadmins still have to roll out
           | the changes required to support the update. In practice they
           | may as well statically link.
           | 
           | Let's not forget the problems they introduce are significant.
           | Implementations are complex. Applications become less
           | portable. ABI versioning is a pain (versioned symbols are
           | gross). Semantic changes in the shared lib cause chaos. Exec
           | is slower because of the linking done at load time. Dynamic
           | libs are bigger than static ones, AND they're loaded into
           | memory. More difficult to verify the correctness of a given
           | executable.
           | 
           | The alleged "benefit" here really does not outweigh the cost.
           | If we were doing static linking from the beginning, it's
           | difficult to say if we would have wound up with things like
           | docker or nix in the first place.
        
             | cesarb wrote:
             | > If we were doing static linking from the beginning, it's
             | difficult to say if we would have wound up [...]
             | 
             | We were doing static linking from the beginning, at least
             | on Unix. Dynamic linking came later.
        
               | IshKebab wrote:
               | And on Mac.
        
             | guenthert wrote:
             | > Exec is slower because of the linking done at load time.
             | 
             | Oh, cry me a river. 30 years ago the performance impact was
             | known and was very well worth the trade-off. Machines are
             | orders of magnitude faster now and consequently the impact
             | less significant. Next thing, you're telling me X11 is
             | slow.
        
             | edflsafoiewq wrote:
             | Surely the main reason is that the linked library may
             | actually be different on different machines. Stuff like
              | OpenGL.
        
             | a9h74j wrote:
             | Plenty of support for static linking, I am seeing.
             | 
             | Would static linking conceivably make sandboxing, jailing,
             | etc easier?
             | 
             | Compared to all the effort which goes into dynamic linking,
             | I suspect it would be better spent on sandboxing (both
             | mechanism and UX).
        
           | alkonaut wrote:
           | Yes. But the other set of problems exists anyway. Even if you
           | drop in a new system version of a shared library, you still
           | need to consider what other versions are used apart from the
           | system library. Any app (other than those provided by a
           | system package manager of course) can have a local version
           | different from the system version and any app can have linked
            | it statically.
        
           | shrimp_emoji wrote:
           | > _Shared libraries were introduced for a reason._
           | 
           | Lack of storage and bandwidth in the 1970s
        
             | goodpoint wrote:
             | False. The main reason was to avoid having to maintain and
             | update multiple copies.
        
           | arccy wrote:
           | nix and containers actually eliminate all the benefits of
           | shared libraries while still having all their problems
        
         | arinlen wrote:
         | > _Death to shared libraries. The headaches they cause are just
         | not worth the benefit._
         | 
         | Completely disagree. Even though one size does not fit all,
         | anyone who makes sweeping statements about static libraries is
         | just stating to the world how they are completely oblivious
         | regarding basic software maintenance problems such as tracking
          | which software package is updated, especially those which are
          | not kept up to date on a daily basis.
        
           | benreesman wrote:
           | Dynamic linking by default has been vocally criticized by
           | everyone from Carmack to Bell Labs/Plan9/golang people. Many
           | if not most of the big scale companies have historically
           | avoided it like the plague. It has horrific costs in
           | performance, security, reproducibility, and ultimately
           | comprehensibility of any system.
           | 
           | People pay out the ass in droves via Docker (huge-ass images,
           | clunky composability, cgroup/namespace complexity) and Nix
           | (horror-movie learning curve) and I'm sure other things to
           | get around it / emulate static linking. `sudo apt-whatever
           | upgrade` and pray is pretty much DOA in even low-complexity
           | settings. People are voting with their feet.
           | 
           | You can disagree, and if you're not affiliated with Debian or
           | GNU I might even believe that you come by that opinion
           | honestly, but this gigantic bloc of people who _hate_ it are
           | not all  "oblivious".
        
             | pjmlp wrote:
             | Bell Labs introduced shared objects into UNIX, maybe they
             | should have learned better from other platforms.
        
               | benreesman wrote:
               | They realized that they had goofed like 25-30 years ago
               | and have been battling their own monster ever since:
               | http://harmful.cat-v.org/software/dynamic-
               | linking/versioned-....
               | 
                | I take the authors of a disaster _more_ seriously
                | when they say it's a disaster, not less.
        
           | waynecochran wrote:
           | I like the old school days of just building everything from
           | source. Similarly the Linux Kernel was statically built --
           | before modules. I knew the whole system from top to bottom.
        
           | mariusor wrote:
            | I'd say this is a solved problem. It's easy to get all
            | the build dependencies of a package, and for every
            | library that gets updated you rebuild the world.
            |
            | More computing power and bandwidth are required, but the
            | maintenance burden is by now mostly solved by modern
            | package managers, or by tooling on top of not-so-modern
            | ones.
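            |
            | On a source-based distro this is a one-liner, if an
            | expensive one (a sketch using Gentoo's emerge; -e /
            | --emptytree rebuilds everything from source):
            |
            |       # rebuild the whole system after a library update
            |       $ emerge --emptytree @world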
        
           | nicoburns wrote:
           | If there was a standard for specifying which static library a
           | binary has linked, this would be a non-issue.
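            |
            | Some toolchains already record something like this. A Go
            | binary, for example, embeds the versions of the modules
            | it was built from, and the stock tooling can read them
            | back (a sketch; ./myserver is a hypothetical binary):
            |
            |       # list the module versions baked into a Go binary
            |       $ go version -m ./myserver
            |
            | There's no equivalent convention for plain ELF binaries,
            | which is arguably the point.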
        
             | arinlen wrote:
             | > _If there was a standard for specifying which static
             | library a binary has linked, this would be a non-issue._
             | 
              | You just gave a non-sequitur answer that rings as
              | rational and true as stating that if the Pope could fly
              | he would be an Airbus A330.
              |
              | Specifying which static library you can link to means
              | absolutely nothing for the problems created by static
              | linking, and shared libraries solve all of them without
              | any meaningful tradeoff at all.
        
             | jacoblambda wrote:
             | In a sense there kinda is.
             | 
             | Pre-linking is a way to statically specify the lookup for a
             | given shared library in an ELF executable. It's still kinda
             | dynamic linking because the file is loaded at runtime but
             | there's no library lookup, it's just "load this file at
             | this path into address space while we load the rest of the
             | main executable". Before PIC there would have been a bit
             | more of a distinction but nowadays static and dynamic
             | libraries are really only distinguished by where the
             | symbols are (unless you want to be able to dynamically load
             | and unload after startup).
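              |
              | The difference is visible in the ELF dynamic section (a
              | sketch; /bin/ls is just a convenient dynamically linked
              | example, and the static path is hypothetical):
              |
              |       # libraries the loader must find at startup
              |       $ readelf -d /bin/ls | grep NEEDED
              |
              |       # a fully static binary has no dynamic section
              |       $ readelf -d /path/to/static-binary
              |       There is no dynamic section in this file.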
        
         | icedchai wrote:
         | You could try DEC Ultrix. (Oh, did you mean a modern system?)
        
       | rishav_sharan wrote:
       | As someone who clearly doesn't understand this well enough -
       | can I run a desktop environment on top of it?
        
         | pas wrote:
         | it has Velox, which is a Wayland compositor, so you can
         | start running statically linked Wayland programs.
         |
         | and of course you can add things like Firefox, LibreOffice
         | or whatever, but you either need to spend many many cursed
         | nights and dark days to get them to compile statically ...
         | or just add a dynamic linker (ld.so) and the dependencies...
         | but at that point you've built a second independent system
         | on the same filesystem.
        
       | numlock86 wrote:
       | > BearSSL is incredibly small and well written, but is not widely
       | adopted.
       | 
       | Well, probably because the author himself discourages you from
       | using it in anything serious. Crypto libraries need a lot of
       | testing and reviews by experts, which BearSSL doesn't have yet.
       | Thomas Pornin is an incredibly smart guy when it comes to crypto
       | and BearSSL is my crypto library of choice for small (mostly
       | embedded) projects, too. Projects like this might help to get it
       | the attention it needs.
        
       | snvzz wrote:
       | >and is probably better compared to a BSD.
       | 
       | That's a very unfortunate comment in the second paragraph.
        
         | Sakos wrote:
         | Should probably clarify in what ways (or for what) it's
         | supposedly better. That's quite the inflammatory statement.
        
           | camgunz wrote:
           | I think the meaning here is "better <to be> compared to a
           | BSD", as in it's closer to a BSD than a Linux in terms of
           | design and ethos.
        
             | Sakos wrote:
             | Ah, you're right.
        
           | projektfu wrote:
           | I don't read that sentence to mean it's better than BSD,
           | rather that Oasis is more like BSD than a typical Linux
           | distribution.
           | 
           | "It is better compared to" is an idiom meaning "the
           | comparison is more apt to".
        
           | [deleted]
        
         | speed_spread wrote:
         | phrasing should be changed to "similar to BSD in design".
        
         | mforney wrote:
         | Sorry for the confusing wording. I meant that oasis has more
         | similarities to a BSD than a typical Linux distribution.
         | 
         | I've updated the README to be a bit clearer.
        
         | rafram wrote:
         | You're misreading it. The author means that it's more similar
         | to a BSD than to a typical Linux system, as the previous
         | sentence makes clear.
        
         | unsafecast wrote:
         | ((better compared) (to BSD)), not (better (compared to BSD)) :)
        
           | endorphine wrote:
           | Or:
           | 
           | "[...] is better compared to BSD", not "[...] is better,
           | compared to BSD".
        
             | unsafecast wrote:
             | Yeah, that's definitely a better way to put it.
        
       | its_bbq wrote:
       | Is this meant more to be used on a small embedded Linux
       | system? As a serverless OS for fast startup?
        
       | jrexilius wrote:
       | This line is a bit concerning to me: "and libtls-bearssl, an
       | alternative implementation of libtls based on BearSSL". It
       | sounds like a custom-rolled variant of a low-adoption SSL lib
       | that isn't itself very well tested... It may be better... it
       | may be horribly flawed or backdoored in a non-obvious way. But
       | I also get how horrible OpenSSL is...
        
         | synergy20 wrote:
          | same here; in fact the last bearssl release was 4 years
          | ago. Very inactive, if still alive.
          |
          | mbedtls might be better here, but it does not have tls1.3
          | fully ready (I don't think bearssl has 1.3 either).
          |
          | wolfssl could be the best: small and robust with new
          | features. It is dual-licensed if you want to use it in
          | commercial releases.
        
       | badrabbit wrote:
       | Debian has reproducible builds going for it. I would be very
       | interested in a static Gentoo-like source distro though.
        
       | mro_name wrote:
       | The core values are very appealing to me.
       | 
       | what is the typical update routine a la 'sudo apt-get update &&
       | sudo apt-get dist-upgrade'?
        
         | bragr wrote:
         | From the readme:
         |
         |       No package manager. Instead, you configure a set of
         |       specifications of what files from which packages to
         |       include on your system, and the build system writes
         |       the resulting filesystem tree into a git repository.
         |       This can then be merged into /, or pulled from
         |       another machine.
         |
         | If you're looking for a package manager, you're in the wrong
         | place.
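         |
         | So updating is a git operation rather than a package-manager
         | one. Very roughly, and with hypothetical remote and branch
         | names (the README describes the real procedure):
         |
         |       # rebuild the system tree, then merge it into /
         |       $ git -C / fetch origin
         |       $ git -C / merge origin/master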
        
       | denton-scratch wrote:
       | Sounded good to me, until I found out there's no repository; you
       | just pull everything from github. Then I stopped reading. What
       | could go wrong?
        
       | gkhartman wrote:
       | I was ready to give this a try, but the use of BearSSL is
       | concerning. It goes on to say that it's not a widely adopted SSL
       | implementation, so how can it be trusted? I suppose adoption has
       | to start somewhere, but I'll watch from a safe distance.
        
         | baby wrote:
         | I used to audit TLS libraries for a living, and BearSSL is
         | the library I would choose today if I had to make a choice.
        
           | ComplexSystems wrote:
           | Can you elaborate?
        
       | mmargerum wrote:
       | Now if I could just boot this from a cartridge I'd approach the
       | last time programming was truly joyful for me.
        
         | projektfu wrote:
         | I played around with the Qemu image and I really liked it. I
         | wasn't even aware of some of the software they chose, like
         | Velox.
        
       | benreesman wrote:
       | "I tend to think the drawbacks of dynamic linking outweigh the
       | advantages for many (most?) applications." - John Carmack
       | 
       | <rant>
       | 
       | I don't like to be a parrot for the Plan9 people [1] (they've
       | been winning their own fights for a while now), but they've
       | assembled a better set of citations than I could in the time I'm
       | willing to devote to an HN comment.
       | 
       | Dynamic linking is fine, in _very specific_, _very limited_
       | cases. Python extensions. Maybe you want to do a carve-out for
       | like, some small hardened core of an SSL implementation or a
       | browser because you've got actual hot-deploy in your (massive)
       | installation/infrastructure.
       | 
       | As a _basically mandatory default_ it's a chemical fire next
       | to an elementary school. It started gaining serious traction
       | an elementary school. It started gaining serious traction as a
       | horrific hack to get a not-remotely-finished X Windows to run on
       | Sun boxes in like, +/- 1990. It wasn't long before the FSF had
       | figured out that they could bio-weaponize it against
       | interoperability. The `glibc` docs are laughing in their beer
       | while they explain why static linking on "GNU" systems is
       | impossible, and also makes you a criminal. They don't even
       | pretend to make a cogent argument (they said the quiet part loud
       | on the mailing lists). It's to kill other `libc` and `libc++`
       | (fuck you small-but-vocal anti-interoperability FSF people!).
       | Don't want LLVM to finish you off? Start interoperating better.
       | There are a lot of us who think "Free Software" has to mean that
       | I can use it in legal ways that you philosophically disagree
       | with, and if it happens to be better than your Win32-style lock-
       | in? Well that's just groovy.
       | 
       | Dynamic linking by (mandatory) default: Just Say No.
       | 
       | Myths:
       | 
       | - People want it: Docker. QED.
       | 
       | - Secure: Dynamic linking is the source of _tons_ of security
       | vulnerabilities. More generally, the best way to not trust
       | your system is to not be able to audit the _literal code
       | you're running_ without jumping through insane, sometimes
       | practically impossible hoops. If you want "father knows best"
       | security, you need self-patching like Chromium and co. do now.
       | And if you're willing to trade performance for security by the
       | gross, Flatpak or something.
       | 
       | - Performant: Dynamic linking forces address patch-ups,
       | indirect calls, and defeats many optimizations. It thrashes
       | the TLB. It slows startup of even simple programs.
       | 
       | - Necessary: Text pages are shared. I don't care if you've got 9,
       | 90, or 90k processes using `printf`: it's going to be in the same
       | text page unless something else went badly, badly wrong.
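       |
       | (If you want to measure the loader's work yourself, glibc's
       | ld.so can report it; a sketch, and the output format varies by
       | version:
       |
       |       # print relocation counts and time spent in the
       |       # dynamic loader during startup
       |       $ LD_DEBUG=statistics /bin/true
       |
       | A statically linked binary simply has none of this work to
       | do.)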
       | 
       | </rant>
       | 
       | [1] http://harmful.cat-v.org/software/dynamic-linking/
        
         | int_19h wrote:
         | > The `glibc` docs are laughing in their beer while they
         | explain why static linking on "GNU" systems is impossible, and
         | also makes you a criminal.
         | 
         | Can you provide a reference? I skimmed through the glibc manual
         | but couldn't find anything like this.
         | 
         | > It's to kill other `libc` and `libc++`
         | 
         | How does this follow from dynamic linking?
        
           | benreesman wrote:
           | Good primary sources are the `glibc` documentation,
           | especially the FAQ [1], and the `musl` docs, especially the
           | "differences from glibc" section [2] and the FAQ [3], which
           | is basically a long list of ways that `glibc`/`gnulib`
           | developers have leaned into ever-more-broken/nonstandard
            | behavior. If you dig through the GNU mailing lists (which
            | is where I draw the line for an HN comment), you'll find
            | megabytes of admission that this is an effort to make
            | "GNU/Linux" (or as I've taken to calling it, GNU "plus"
            | Linux, Linux being merely the kernel...), to quote
            | Fassbender as Steve Jobs, a "Closed system! Completely
            | incompatible with anything!"
           | 
           | But the short story is that both NSS (DNS lookup), and
           | `iconv` (locale handling) don't work without `glibc`-specific
            | dynamic linking shit. Ostensibly this is because things
            | like merely being able to _read_ decades-obsolete
            | multibyte character encodings, without being able to
            | _write_ them, are a bridge too far to support _at all_,
            | let alone as a default.
           | They've also got a bridge they'd like to sell you (see
           | mailing lists).
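            |
            | The NSS case is easy to reproduce: statically link
            | anything that calls getaddrinfo and glibc's own toolchain
            | warns you that the "static" binary still needs the shared
            | libraries at runtime (a sketch; resolve.c is a
            | hypothetical file calling getaddrinfo):
            |
            |       $ cc -static resolve.c -o resolve
            |       warning: Using 'getaddrinfo' in statically linked
            |       applications requires at runtime the shared
            |       libraries from the glibc version used for linking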
           | 
           | It's just 90s Microsoft vendor lock-in: "embrace, extend,
           | extinguish" we used to call it. Stallman was a hero for
           | getting the free software movement rolling in the 80s. But in
           | this business, you either die a hero or live long enough to
           | become the villain, and Bill Gates himself would be proud.
           | 
            | [1] https://sourceware.org/glibc/wiki/FAQ
            |
            | [2] https://wiki.musl-libc.org/functional-differences-
            | from-glibc...
            |
            | [3] https://wiki.musl-libc.org/faq.html
        
         | medoc wrote:
         | Your last point is a bit off. Text pages are shared iff they
         | come from the same file. So your point works if the 90k
         | processes are the same program. Else: shared libs...
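          |
          | You can see the per-file sharing in the process maps (a
          | sketch; the PIDs are hypothetical, and the point is that
          | both map the same device:inode read-execute):
          |
          |       # the r-xp line is the shared text mapping
          |       $ grep 'r-xp .*/bin/bash' /proc/1234/maps
          |       $ grep 'r-xp .*/bin/bash' /proc/5678/maps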
        
           | benreesman wrote:
           | Yeah I was typing too fast in ranty mode, thanks for calling
           | it. I did a much more detailed answer that gets it (mostly)
           | right replying to another commenter:
           | https://news.ycombinator.com/item?id=32461766
        
       | KSPAtlas wrote:
       | I used this before, pretty impressive how much it can pull off
        
       | logdahl wrote:
       | The discussions on static vs dynamic linking piqued my
       | interest. Are there any resources I should read about the
       | history and concrete pros & cons of linking techniques?
        
       | dijit wrote:
       | I ran a really non-scientific test a while back on startup
       | speed: Oasis vs a slimmed-down, direct-kernel-booted Debian
       | with systemd.
       |
       | Obviously I'm writing this because Oasis absolutely trounced
       | it.
       |
       | So Oasis is currently my go-to for small deployed VPSes.
       |
       | (here's the video if you actually care:
       | https://www.youtube.com/watch?v=X2Aw5SSqYFo - you'll note that
       | the systemd one was delayed by the network on start. Some
       | people are going to argue that this is an artificial delay,
       | but this is the truth of a startup, and oasis had already
       | booted before the delay anyway)
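       |
       | For a rough breakdown of where the Debian side spends its
       | time, systemd ships its own profiler (on the systemd guest
       | only, of course; Oasis has no equivalent):
       |
       |       # total kernel + userspace startup time
       |       $ systemd-analyze time
       |       # the chain of units on the critical path
       |       $ systemd-analyze critical-chain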
        
         | itvision wrote:
         | I don't know man.
         | 
         | I see a display manager (lxdm under Xorg) password prompt 3
         | seconds after the GRUB menu.
         | 
         | And that includes quite a lot of stuff:
         |
         |       $ systemd-analyze blame | wc -l
         |       72
        
         | herpderperator wrote:
         | If you want super fast boot, go with Arch Linux.
        
         | LtWorf wrote:
         | Normally for me on debian, just compiling the kernel to remove
         | most modules and get rid of initrd gave great results. But the
         | kernel config is not the same on all the machines of course.
        
         | zamadatix wrote:
         | FWIW I get a faster boot time than that slimmed Debian VM
         | with a physical Celeron-class box booting a stock kernel,
         | even including the real BIOS and hardware initialization
         | time. I don't think systemd really has much to do with why
         | that VM boots slow.
        
           | raggi wrote:
            | Debian is why Debian boots slow.
        
             | teaearlgraycold wrote:
             | Oof
        
           | dijit wrote:
           | I don't think that's true.
           | 
           | That machine is a threadripper 3970x with 256G 3.2GHz DDR4,
           | with a 4xNVMe RAID0; it's a disturbingly fast glass-cannon.
           | 
           | I mean, it's not an accurate test for sure; but I seriously
           | doubt your VMs can boot faster stock and with the bios, on a
           | slower machine.
        
             | MobiusHorizons wrote:
              | Machines with more cores and RAM actually take longer
              | to boot. I believe some of that is due to the DRAM
              | training that must occur on startup, but the OS must
              | also discover all of the cores and build up the
              | internal state to manage them. I believe some Xeons
              | take on the order of minutes to boot due to some of
              | these considerations.
        
               | dijit wrote:
               | Even if that were true (which it's not) then those points
               | would be nullified by the fact that QEMU is exposing very
               | little hardware (cores) and devices to the guest.
        
               | SoftTalker wrote:
               | "Enterprise" class servers do a lot more in POST than
               | consumer/desktop machines. On an older server, this can
               | take minutes. Once the OS actually starts to boot, I have
               | not noticed an appreciable difference.
        
             | zamadatix wrote:
             | Boot speeds on the timescales in the video end up having
             | little to do with horsepower once you get to having "some
             | sort of SSD to load the boot image from" and "some sort of
             | CPU that can at least decompress that in a reasonable
             | time". It's more the "Wait here for 1 second for this
             | hardware to probe", "Wait here for 5 seconds to see if the
             | user wants to go into setup", "Wait here for network to
             | timeout" type configuration items. Having a ton of high end
             | hardware just makes this worse, more cores to initialize,
             | more memory to initialize, more devices to probe.
             | 
             | I picked the low end fanless box as example because of this
             | - it boots significantly faster than my high end box after
             | all. It still loses out to how little a VM has to do while
             | booting but not by much. It defaults to assuming a very
             | short "wait on user" time for BIOS/UEFI, it has a mostly
             | static and very simple hardware topology so doesn't need to
             | dynamically load much during BIOS/UEFI, and systemd isn't
             | told to do all those boot time loads/delays the Debian
              | default config seems to have. Whatever you have
              | "slimmed down", it is still trying to load nvidia GPU
              | drivers, run cloud-init scripts on each boot, and wait
              | for whatever networked service you mentioned before
              | letting the box finish booting. Those kinds of things
              | are what make the Debian VM boot slow, not systemd, and
              | not really the size of the kernel until the more
              | noticeable things are fixed. Systemd is just what
              | Debian uses to do those boot-time things, and if you
              | did the same things with OpenRC or SysVinit or your own
              | init system without changing what was being done, it'd
              | be a pretty slow boot too.
             | 
              | As for VPS use: if "not systemd" and small/fast/static
              | are high on your list, I'd highly recommend Alpine
              | Linux over Oasis. It's known as the gold standard for
              | containers, but it makes a nice lightweight VM box too.
              | I don't have anything against systemd, but I ran Alpine
              | as my standard VM image for a number of years and was
              | very happy with it on the server side, plus you get to
              | keep convenient things like a package manager if you so
              | desire. Their "Virtual" pre-built image has the kernel
              | image and firmware files slimmed down to the point that
              | you might not even care about rolling your own changes
              | into it.
        
             | bee_rider wrote:
             | The one on the left gets caught on raising network
             | interfaces. I don't think your beefy machine will speed
             | that operation up much. It wouldn't be that surprising if
             | somebody can beat that boot time by tweaking that step
             | (depends on the configuration -- in the extreme you don't
             | even need a network interface or can delay it until the
             | desktop environment is started and the user starts thinking
             | about web browsers).
        
               | dijit wrote:
               | > The one on the left gets caught on raising network
               | interfaces
               | 
               | I mentioned in the parent that Oasis had already booted
               | before the delay even begins.
               | 
                | I also mentioned that it's not really fair to compare
                | against a greatly optimised Debian: I was measuring
                | the practical loading time of a stock VM (one that
                | was optimised for being a VM, in Debian's case, and
                | which bypassed the BIOS).
               | 
               | The images I used were the cloud optimised ones that have
               | a slimmed down/optimised kernel for QEMU:
               | https://cdimage.debian.org/images/cloud/
        
               | bee_rider wrote:
               | But still:
               | 
               | > I don't think that's true.
               | 
               | There's no reason not to believe zamadatix, right? It is
               | probably the case that they are making an irrelevant
               | comparison, but not an untrue one.
        
               | dijit wrote:
                | I think there's sufficient reason not to believe him
                | without evidence.
                |
                | Oasis took under a second to get to a login prompt;
                | Debian took about 1.5-2s.
                |
                | UEFI BIOS takes longer than that on every system I've
                | ever used, even little fanless ones (they weren't
                | ever faster, actually). So this claim alone makes me
                | suspicious; but so does booting faster than Oasis, on
                | worse hardware, with more steps, than my system
                | running the slower of the two OSes.
                |
                | I mean, there's a point past which you can't just
                | suspend disbelief without evidence.
        
       ___________________________________________________________________
       (page generated 2022-08-14 23:00 UTC)