[HN Gopher] Buck2: Our open source build system
       ___________________________________________________________________
        
       Buck2: Our open source build system
        
       Author : mfiguiere
       Score  : 202 points
       Date   : 2023-04-06 16:05 UTC (6 hours ago)
        
 (HTM) web link (engineering.fb.com)
 (TXT) w3m dump (engineering.fb.com)
        
       | rektide wrote:
       | I was reading on & on, going, yeah sounds great, but when are you
       | going to mention that it runs on Mercurial, which just puts such
       | massive distance between FB & the rest of the planet?
       | 
       | They do mention that it supports virtual file systems through
       | Sapling, which now encompasses their long known EdenFS. I'd like
       | to know more but right off the bat Sapling says it is:
       | 
       | > _Sapling SCM is a cross-platform, highly scalable, Git-
       | compatible source control system._
       | 
        | Git compatible eh? That's a lot less foreboding. (It is however
        | a total abstraction over git with its own distinct cli tools.)
       | 
       | I hope there are some good up to date resources on
       | Sapling/EdenFS. I have heard some really interesting things about
       | code kind of auto getting shipped up to the mothership &
        | built/linted proactively, not just at commit points, which always
       | sounded neat, and there seem to be some neat transparent thin
       | checkout capabilities here.
       | 
       | https://github.com/facebook/sapling
        
         | krallin wrote:
         | (engineer working on Buck2 here)
         | 
         | Buck2 is actually used internally with Git repositories, so
         | using Sapling / hg is definitely not a requirement
        
         | autarch wrote:
         | I'm not sure how Mercurial is relevant. From reading the Buck2
         | getting started docs, it looks like it works just fine with Git
         | repos.
        
           | rektide wrote:
            | It indeed is not the primary concern of build systems. For
            | many folks, there are CI/CD systems checking stuff out &
            | feeding their build tools.
           | 
           | Buck2 notably though tries to be a good persistent
           | incremental build system, and it needs much more awareness of
           | things changing. It does integrate with Sapling for these
           | kind of reasons.
           | 
            | So the boundaries between SCM/build/CI/CD tools are a bit
            | blurrier than they used to be.
        
             | fanzeyi wrote:
             | My understanding is that Buck2 uses Watchman (disclaimer:
             | I'm one of the maintainers) so it can work with both Git
             | and Mercurial repositories efficiently, without
             | compromising performance.
        
               | ndmitchell wrote:
               | It can use watchman, but for open source we default to
               | inotify (and platform equivalent versions on Mac/Windows)
               | to avoid you having to install anything.
        
       | phendrenad2 wrote:
       | > Build systems stand between a programmer and running their
       | code, so anything we can do to make the experience quicker or
       | more productive directly impacts how effective a developer can
       | be.
       | 
       | How about doing away with the build system entirely? Build
       | systems seem like something that shouldn't exist. When I create a
       | new C# .NET app in Visual Studio 2019, what "build system" does
       | it use? You might have an academic answer, but that's beside the
       | point. It doesn't matter. It just builds. Optimizing a build
       | system feels like a lack of vision, and getting stuck in a local
       | maxima where you think you're being more productive, but you're
       | not seeing the bigger picture of what could be possible.
        
         | palata wrote:
         | How do you think VS builds your code?
        
         | humanrebar wrote:
         | Visual Studio is a build system. And about eleven other things.
        
         | [deleted]
        
         | lanza wrote:
         | https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
        
         | sangnoir wrote:
          | How does Visual Studio convert a solution to an executable?
        
         | 0xcafefood wrote:
         | I guess you could write machine code by hand if you don't want
         | to build it. Otherwise, what specifically do you propose to do
         | away with build systems?
        
         | kristjansson wrote:
         | If you look around the toolbench, and you can't figure out who
         | the build system is ...
        
         | stonemetal12 wrote:
         | I am not sure I understand your point. I click build in VS,
         | whatever VS does it takes 2 minutes.
         | 
          | Supposedly the C# compiler can compile millions of lines per
          | second, but my project, which is nowhere near a million lines,
          | takes a minute to compile, so it must be wasting time on
          | something and could use some optimization.
        
       | jeffbee wrote:
       | Hrmm, it makes performance claims with regard to Buck1 but not to
        | Bazel, the obvious alternative. Hardly anyone uses Buck1, so
        | you'd think Bazel would be the relevant comparison.
        
         | dtolnay wrote:
         | I have a non-toy multi-language project in
         | https://github.com/dtolnay/cxx for which I have both Buck2 and
         | Bazel build rules.
         | 
         | On my machine `buck2 clean && time buck2 build :cxx` takes 6.2
         | seconds.
         | 
         | `bazel clean && time bazel build :cxx` takes 19.9 seconds.
        
           | jeffbee wrote:
           | Anyway regardless of the fact that my local Rust environment
           | isn't cut out to repro your result, how much of that speedup
           | is due to parallelism that Buck2 offers and Bazel does not?
           | When I build your cxx repo with bazel and load the invocation
           | trace, the build was fully serialized.
        
           | jeffbee wrote:
            | That's cool. I was not able to repro due to the buck2
            | instructions not working for me, two different ways:
            | 
            |     Compiling gazebo v0.8.1 (/home/jwb/buck2/gazebo/gazebo)
            |     error[E0554]: `#![feature]` may not be used on the
            |     stable release channel
            |       --> gazebo/gazebo/src/lib.rs:10:49
            | 
            | Then with the alternate instructions:
            | 
            |     error: no such command: `+nightly-2023-01-24`
            |     Cargo does not handle `+toolchain` directives.
            |     Did you mean to invoke `cargo` through `rustup` instead?
        
             | fanzeyi wrote:
              | It looks like you don't have rustup. You will need to
              | install it, since Buck2 depends on nightly Rust features.
        
         | kajecounterhack wrote:
         | I wonder if it's just because they don't have the same scale of
         | data, since FB as a company uses Buck1/Buck2 but not Bazel?
         | 
         | They've clearly learned from Bazel though! I like the idea of
         | not needing Java to build my software, and Starlark is battle
         | tested / might make transitioning off Bazel easier.
        
           | rajman187 wrote:
           | The author of Bazel came over to FB and wrote Buck from
            | memory. In Google it's called Blaze. Buck2 is a rewrite in
            | Rust and gets rid of the JVM dependence, so it builds
            | projects faster, but it's slow to build Buck2 itself (Rust
            | compilation).
        
             | bhawks wrote:
             | I believe this is an over simplification. Engineers who had
             | used Blaze at Google reimplemented it at Facebook based on
             | what they knew of how it worked.
             | 
              | Even Facebook's Buck launch blog does not offer this story
              | of Buck's lineage, and although the author worked on the
              | Closure compiler at Google, that is not all of Blaze.
             | 
             | https://engineering.fb.com/2013/05/14/android/buck-how-we-
             | bu...
        
         | krschultz wrote:
         | It's honestly hard to measure at the scale of Meta. Just making
         | everything compatible with Bazel would be a non-trivial
         | undertaking.
         | 
         | Also that seems an interesting thing an independent person
         | could write about, but whatever claims Meta made on a topic
         | like that would be heavily scrutinized. Benchmarking is
         | notoriously hard to get right and always involves compromises.
          | It's probably not worth making a claim vis-à-vis a "competitor"
         | and triggering backlash. If it's significantly faster than
         | Bazel that will get figured out eventually. If not the tool
         | really is aimed at Buck1 users upgrading to Buck2 so that is
         | the relevant comparison.
        
       | candiddevmike wrote:
       | For folks that are using these kinds of tools, any regrets? How
       | much more complexity do they add vs make or shell scripts?
        
         | optymizer wrote:
         | I've used Buck at Meta for years and while it is technically
         | impressive and does an excellent job with speed, remote
         | compilation and caching, I am not a fan of all of the
         | undocumented, magic rules used all over the place in BUCK and
         | .bzl files.
         | 
         | I've yet to try BUCK on small projects though - I personally
         | default to Makefiles in that case.
         | 
          | One thing I definitely wouldn't use it for is Android
          | development. The Android Studio integration is much worse than
          | Gradle's, and adding external dependencies means you have to
          | make BUCK modules for each one.
         | 
         | I would however use it for large-scale projects or projects
         | with more than a dozen developers.
        
         | numbsafari wrote:
          | Speaking of the "tools" generically, they are totally worth it
         | because of their ability to aggressively cache and parallelize,
         | but also because you end up with more declarative definitions
         | of common build targets in a way that is more or less type
         | safe. I personally think that makes these kinds of tools a win
         | over make. Beyond that, they make it trivial to implement
         | repeatable builds that run build steps in "hermetic" sandboxes.
         | You can do all that with make, but you are abusing the hell out
         | of the tool to get there, and it will look foreign to anyone
         | familiar with using make "the traditional way".
         | 
         | That said, bazel's set of prepublished rules, reliance on the
         | jdk, etc, make it not worth the burden, imo/e.
         | 
         | I think less ambitious, but similar tools are where it's at. We
         | use please for this reason, and are generally quite happy with
         | how it balances between pragmatism and theory.
         | 
         | In any event, having your build tool be a single binary is a
         | major win. I'd rather use make than anything written in python
         | or Java just because I don't have to worry about the overhead
         | that comes with those other tools and their "ecosystems".
        
       | yurodivuie wrote:
       | Do smaller companies (smaller than Meta and Google) use these
       | kinds of build tools much? It seems like a system that rebuilds
        | everything whenever a dependency changes is more suited to an
        | environment that has very few, if any, external dependencies.
       | 
       | Is anyone using Buck/Bazel and also using frameworks like Spring,
       | or React, for example?
        
         | chucknthem wrote:
          | Uber adopted Bazel a few years ago for their Go and Java
          | monorepos, which were the majority of their code at the time.
          | I don't know the state of their UI repos.
        
       | LegNeato wrote:
       | Congrats to the team! Very excited to finally get to use this.
        
       | RcouF1uZ4gsC wrote:
       | > Buck2 is an extensible and performant build system written in
       | Rust
       | 
        | I really appreciate tooling written in Rust or Go that produces
        | single binaries with minimal runtime dependencies.
        | 
        | Getting tooling written in, for example, Python to run reliably
        | can be an exercise in frustration due to runtime environment
        | dependencies.
        
         | crabbone wrote:
          | Your problem is that Python sucks, especially its dependency
          | management. It sucks not because it ought to suck, but because
          | of the incompetence of PyPA (the people responsible for
          | packaging).
         | 
         | There are multiple problems with Python packaging which ought
         | not exist, but are there and make lives of Python users worse:
         | 
         | * Python doesn't have a package manager. pip can install
         | packages, but installing packages iteratively will break
         | dependencies of packages installed in previous iterations. So,
         | if you call pip install twice or more, you are likely to end up
         | with a broken system.
         | 
         | * Python cannot deal with different programs wanting different
         | versions of the same dependency.
         | 
          | * Python versions iterate very fast. It's even worse for most
          | Python packages. To stand still you need to update all the
          | time, because everything goes stale very fast. In addition,
         | this creates too many versions of packages for dependency
         | solvers to process leading to insanely long installation times,
         | which, in turn, prompts the package maintainers to specify very
         | precise version requirements (to reduce the time one has to
         | wait for the solver to figure out what to install), but this,
         | in turn, creates a situation where there are lots of allegedly
         | incompatible packages.
         | 
         | * Python package maintainers have too many elements in support
         | matrix. This leads to quick abandonment of old versions,
         | fragmented support across platforms and versions.
         | 
         | * Python packages are low quality. Many Python programmers
         | don't understand what needs to go into a package, they either
         | put too little or too much or just the wrong stuff altogether.
         | 
         | All of the above could've been solved by better moderation of
         | community-generated packages, stricter rules on package
         | submission process, longer version release cycles, formalizing
          | package requirements across different platforms, and creating
          | tools such as a package manager to aid in this process... PyPA
          | simply doesn't care. That's why it sucks.
        
           | androidbishop wrote:
            | Most of this is false. You are ignoring the best practice
            | of using Python virtual environments to manage a project's
            | binary and package versions.
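A minimal sketch of the virtualenv workflow described in this subthread (the project layout and version pin are hypothetical):

```shell
# Create a per-project virtual environment so the Python binary and
# package versions are isolated from the system and other projects.
python3 -m venv .venv
. .venv/bin/activate

# After activation, installs land inside .venv/ rather than the system
# site-packages, so two projects can pin different versions of the
# same dependency, e.g.:
#   pip install 'requests==2.28.2'   # pin is illustrative
python -c 'import sys; print(sys.prefix)'   # shows the .venv prefix
```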
        
           | zdw wrote:
           | s/Python/NodeJS/ and everything in this statement is
           | multiplied by 10x
        
             | IshKebab wrote:
             | Some of it is also true for Node (e.g. poor package
             | quality), but I think it would be hard to argue that the
             | actual package management of Node is anywhere near as bad
             | as Python.
             | 
             | Node basically works fine. You get a huge node_modules
             | folder, sure. But it works.
             | 
             | Python is a complete mess.
        
               | zdw wrote:
               | I've tended to have the exact opposite experience - Node
               | projects have 10x (or more) the dependencies of Python
               | ones, and the tooling is far worse and harder to isolate
               | across projects.
               | 
               | A well engineered virtualenv solves most Python problems.
        
               | nextaccountic wrote:
               | > You get a huge node_modules folder, sure. But it works.
               | 
               | pnpm and other tools deduplicates that
        
         | TikolaNesla wrote:
         | Yes, just what I thought when I installed the Shopify CLI
         | (https://github.com/Shopify/cli) a few days ago because they
         | force you to install Ruby and Node
        
         | rektide wrote:
         | Personally it seems like a huge waste of memory to me. It's the
         | electron of the backend. It's absolutely done for convenience &
         | simplicity, with good cause after the pain we have endured. But
         | every single binary bringing the whole universe of libraries
         | with it _offends_.
         | 
         | Why have an OS at all if every program is just going to package
         | everything it needs?
         | 
          | It feels like we cheaped out. Rather than get good & figure
          | out how to manage things well, rather than drive harder,
          | we're punting the problem. It sucks & it's lo-fi & a huge
          | waste of resources.
        
           | crabbone wrote:
            | Absolutely. As soon as it started to seem like even a couple
            | hundred JARs wouldn't put a significant strain on the
            | filesystem housing them, the typical deployment switched to
            | Docker images and, on top of the hundreds of JARs, started
            | to bundle in the whole OS userspace. Which also,
            | conveniently, makes memory use explode, because shared
            | libraries are no longer shared.
           | 
           | This would definitely sound like a conspiracy theory, but I'm
           | quite sure that hardware vendors see this technological
           | development as, at least, a fortunate turn of events...
        
           | bogwog wrote:
           | I don't think that matters so much. For building a system,
           | you definitely need dynamic linking, but end user apps being
           | as self contained as possible is good for developers, users,
           | and system maintainers (who don't have to worry about
           | breaking apps). As long as it doesn't get out of hand, a few
           | dozen MBs even is a small price to pay IMO for the
           | compatibility benefits.
           | 
           | As a long time Linux desktop user, I appreciate any efforts
           | to improve compatibility between distros. Since Linux isn't
           | actually an operating system, successfully running software
           | built for Ubuntu on a Fedora box, for example, is entirely
           | based on luck.
        
             | rektide wrote:
             | There's also the issue that if a library has a
             | vulnerability, you are now reliant on every static binary
             | updating with the fix & releasing a new version.
             | 
             | Where-as with the conventional dynamic library world one
             | would just update openssl or whomever & keep going. Or if
             | someone wanted to shim in an alternate but compatible
             | library, one could. I personally never saw the binary
             | compatibility issue as very big, and generally felt like
             | there was a while where folks were getting good at
             | packaging apps for each OS, making extra repos, that we've
             | lost. So it seems predominantly to me like downsides, that
             | we sell ourselves on, based off of outsized/overrepresented
             | fear & negativity.
        
               | preseinger wrote:
               | the optimization you describe here is not valuable enough
               | to offset the value provided by statically linked
               | applications
               | 
               | the computational model of a fleet of long-lived servers,
               | which receive host/OS updates at one cadence, and serve
               | applications that are deployed at a different cadence, is
               | at this point a niche use case, basically anachronistic,
               | and going away
               | 
               | applications are the things that matter, they provide the
               | value, the OS and even shared libraries are really
               | optimizations, details, that don't really make sense any
               | more
               | 
               | the unit of maintenance is not a host, or a specific
               | library, it's an application
               | 
               | vulnerabilities affect applications, if there is a
               | vulnerability in some library that's used by a bunch of
               | my applications then it's expected that i will need to
               | re-deploy updated versions of those applications, this is
               | not difficult, i am re-deploying updated versions of my
               | applications all the time, because that is my deployment
               | model
        
               | lokar wrote:
               | Indeed. I view Linux servers/vms as ELF execution
               | appliances with a network stack. And more and more the
               | network stack lives in the NIC and the app, not Linux.
        
               | rektide wrote:
               | Free software has a use beyond industrial software
               | containers. I don't think most folks developing on Linux
               | laptops agree with your narrow conception of software.
               | 
               | Beyond app delivery there's dozens of different utils
               | folks rely on in their day to day. The new statically
               | compiled world requiring each of these to be well
               | maintained & promptly updated feels like an obvious
               | regression.
        
               | howinteresting wrote:
               | Again, _there is no alternative_. Dynamic linking is an
               | artifact of an antiquated 70s-era programming language.
               | It simply does not and cannot work with modern language
               | features like monomorphization.
               | 
               | Linux distros are thankfully moving towards embracing
               | static linking, rather than putting their heads in the
               | sand and pretending that dynamic linking isn't on its
               | last legs.
        
               | PaulDavisThe1st wrote:
               | Whoa, strong opinions.
               | 
               | Dynamic linking on *nix has nothing to do with 70s era
               | programming languages.
               | 
               | Did you consider the possibility that the incompatibility
               | between monomorphization (possibly the dumbest term in
               | all of programming) and dynamic linking should perhaps
                | say something about monomorphization, instead?
        
               | howinteresting wrote:
               | > Dynamic linking on *nix has nothing to do with 70s era
               | programming languages.
               | 
               | Given that dynamic linking as a concept came out of the C
               | world, it has everything to do with them.
               | 
               | > Did you consider the possibility that the
               | incompatibility between monomorphization (possibly the
               | dumbest term in all of programming) and dynamic linking
                | should perhaps say something about monomorphization,
               | instead?
               | 
               | Yes, I considered that possibility.
        
               | PaulDavisThe1st wrote:
               | The design of dynamic linking on most *nix-ish systems
               | today comes from SunOS in 1988, and doesn't have much to
               | do with C at all other than requiring both the compiler
               | and assembler to know about position-independent code.
               | 
               | What elements of dynamic linking do you see as being
               | connected to "70s era programming languages"?
               | 
               | > Yes, I considered that possibility.
               | 
               | Then I would urge you to reconsider.
        
               | preseinger wrote:
               | dynamic linking is an optimization that is no longer
               | necessary
               | 
               | there is no practical downside to a program including all
               | of its dependencies, when evaluated against the
               | alternative of those dependencies being determined at
               | runtime and based on arbitrary state of the host system
               | 
               | monomorphization is good, not bad
               | 
               | the contents of /usr/lib/whatever should not impact the
               | success or failure of executing a given program
        
               | PaulDavisThe1st wrote:
               | Dynamic linking wasn't an optimization (or at least, it
               | certainly wasn't _just_ an optimization). It allows for
               | things like smaller executable sizes, more shared code in
               | memory, and synchronized security updates. You can, if
               | you want, try the approach of  "if you have 384GB of RAM,
               | you don't need to care about these things", and in that
               | sense you're on quicksand with the "just an
               | optimization". Yes, the benefits of sharing library code
               | in memory are reduced by increasing system RAM, but we're
               | seeing from a growing chorus of both developers and
               | users, the "oh, forget all that stupid stuff, we've got
               | bigger faster computers now" isn't going so well.
               | 
               | There's also the problem that dynamic loading relies on
               | almost all the same mechanisms as dynamic linking, so you
               | can't get rid of those mechanisms just because your main
               | build process used static linking.
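The tradeoff both sides are arguing can be seen directly with any C toolchain (a sketch; the `-static` build assumes static libc archives, e.g. glibc-static, are installed):

```shell
# Build the same trivial program with dynamic and static linkage.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF

cc hello.c -o hello_dynamic   # default: libc resolved at runtime
ldd ./hello_dynamic           # lists the shared libraries it depends on

# The static binary bundles libc: much larger, but indifferent to the
# contents of /usr/lib. May fail if static libc archives are absent.
cc -static hello.c -o hello_static || echo "static libc not installed"
```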
        
               | preseinger wrote:
               | > Free software has a use beyond industrial software
               | containers. I don't think most folks developing on Linux
               | laptops agree with your narrow conception of software.
               | 
               | the overwhelming majority of software that would ever be
               | built by a system like buck2 is written and deployed in
               | an industrial context
               | 
               | the share of software consumers that would use this class
               | of software on personal linux laptops is statistically
               | zero
               | 
               | really, the overwhelming majority of installations of
               | distros like fedora or debian or whatever are also in
               | industrial contexts, the model of software lifecycles
               | that their maintainers seem to assume is wildly outdated
        
           | stu2b50 wrote:
           | Sometimes things just don't have good solutions in one space.
           | We solved in another space, as SSD and ram manufacturers made
           | memory exponentially cheaper and more available over the last
           | few decades.
           | 
           | So we make the trade off of software complexity for hardware
           | complexity. Such is how life goes sometimes.
        
           | howinteresting wrote:
           | Dynamic linking is an artifact of C, not some sort of
           | universal programming truth.
        
           | maccard wrote:
            | We've had decades to figure this out, and none of the
            | "solutions" work. Meanwhile, the CRT for Visual Studio is
            | 15MB. If every app I installed grew by 15MB I don't think I
            | would notice.
        
           | preseinger wrote:
           | when someone writes a program and offers it for other people
           | to execute, it should generally be expected to work
           | 
           | the size of a program binary is a distant secondary concern
           | to this main goal
           | 
           | static compilation more or less solves this primary
           | requirement, at the cost of an increase to binary size that
           | is statistically zero in the context of any modern computer,
           | outside of maybe embedded (read: niche) use cases
           | 
           | there is no meaningful difference between a 1MB binary or a
           | 10MB binary or a 100MB binary, disks are big and memory is
           | cheap
           | 
           | the optimization of dynamic linking was based on costs of
           | computation, and a security model of system administration,
           | which are no longer valid
           | 
           | there's no reason to be offended by this, just update your
           | models of reality and move on
        
             | rektide wrote:
             | I never had a problem before. The people saying we need
             | this for convenience felt detached & wrong from the start.
             | 
             | It's popular to be cynical & conservative, to disbelieve.
             | That has won the day. It doesn't do anything to convince me
             | it was a good choice or actually helpful, that we were
             | right to just give up.
        
               | preseinger wrote:
               | "wrong" or "a good choice" or "actually helpful" are not
               | objective measures, they are judged by a specific
               | observer, what's wrong for you can be right for someone
               | else
               | 
               | i won't try to refute your personal experience, but i'll
               | observe it's relevant in this discussion only to the
               | extent that your individual context is representative of
               | consumers of this kind of software in general
               | 
               | that static linking provides a more reliable end-user
               | experience vs. dynamic linking is hopefully not
               | controversial, the point about security updates is true
               | and important but very infrequent compared to new
               | installations
        
           | thangngoc89 wrote:
           | > with minimal runtime dependencies
           | 
            | You're probably thinking of a static binary. I believe the
            | OP is comparing a single binary vs installing the whole
            | toolchain of Python/Ruby/Node and fetching the dependencies
            | over the wire.
        
             | crabbone wrote:
              | If it's not a statically linked binary, then the problem
              | is just as bad as with Python dependencies: now you need
              | to find the shared libraries it was linked against.
        
       | orthoxerox wrote:
       | I really hope the team responsible for it is called Timbuktu.
        
       | 1MachineElf wrote:
       | There are a few references to NixOS on the code/issues.[0] I
       | wonder what Meta's use case is for NixOS.
       | 
       | [0] https://github.com/facebook/buck2/search?q=nixos&type=issues
        
         | ndmitchell wrote:
          | These were from an open source contributor - out of the box,
          | Buck2 doesn't really have Nix support. But because the rules
          | are flexible, you can write your own Nix-aware rules.
        
       | thefilmore wrote:
       | > In our internal tests at Meta, we observed that Buck2 completed
       | builds 2x as fast as Buck1.
       | 
       | Interesting, so twice the bang for your buck.
        
         | faitswulff wrote:
         | But if you need Buck2 then you're back to one bang per buck
        
       | mgaunard wrote:
       | > Written in Rust
       | 
       | stopped reading there.
        
       | ethicalsmacker wrote:
       | Kind of upset they didn't leverage a corny phrase like "the buck
       | stops here".
        
       | 0cf8612b2e1e wrote:
       | I like to use Makefiles for project automation. Does Buck make it
       | straightforward to run phony target tasks? I have been
       | considering transitioning to Justfiles, but Buck will obviously
        | be getting significantly more exposure and mindshare.
        
         | ndmitchell wrote:
         | There's no such thing as a phony target, but there is both
         | `buck2 build` and `buck2 run` - each target can say how to run
         | and how to build it separately. So you can have a shell script
         | in the repo, write an export_file rule for it, then do `buck2
         | run :my_script` and it will run.
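
A minimal sketch of what ndmitchell describes (the `export_file` rule and the `buck2 run` invocation are from the comment; the file name `my_script.sh` is made up):

```python
# BUCK (Starlark) -- hypothetical package file.
# export_file exposes a file already in the repo as a buildable target;
# per the comment above, `buck2 run` will then execute it.
export_file(
    name = "my_script",
    src = "my_script.sh",
)
```

With that in place, `buck2 run :my_script` executes the script, while `buck2 build :my_script` only materializes it as an output.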
        
           | 0cf8612b2e1e wrote:
           | Nuts. Possible, but I would be fighting the platform a bit.
            | Especially if I might want something like phony1 to depend
            | on phony2.
        
       | ihnorton wrote:
       | The fact that Buck2 is written in a statically-compilable
       | language is compelling, compared to Bazel and others. It's also
       | great that Windows appears to be supported out of the box [1,1a]
       | -- and even tested in CI. I'm curious how much "real world" usage
       | it's gotten on Windows, if any.
       | 
       | I don't see many details about the sandboxing/hermetic build
       | story in the docs, and in particular whether it is supported at
       | all on Linux or Windows (the only mention in the docs is Darwin).
       | 
       | It's a good sign that the Conan integration PR [2] was warmly
       | received (if not merged, yet). I would hope that the system is
       | extensible enough to allow hooking in other dependency managers
       | like vcpkg. Using an external PM loses some of the benefits, but
       | it also dramatically reduces the level of effort for initial
       | adoption. I think bazel suffered from the early difficulties
       | integrating with other systems, although IIUC rules_foreign_cc is
       | much better now. If I'm following the code/examples correctly,
       | Buck2 supports C++ out of the box, but I can't quite tell if/how
       | it would integrate with CMake or others in the way that
       | rules_foreign_cc does.
       | 
       | (one of the major drawbacks of vcpkg is that it can't do parallel
       | dependency builds [3]. If Buck2 was able to consume a vcpkg
       | dependency tree and build it in parallel, that would be a very
       | attractive prospect -- wishcasting here)
       | 
       | [1] https://buck2.build/docs/developers/windows_cheat_sheet/ [1a]
       | https://github.com/facebook/buck2/blob/738cc398ccb9768567288...
       | [2] https://github.com/facebook/buck2/pull/58 [3]
       | https://github.com/microsoft/vcpkg/discussions/19129
        
         | fanzeyi wrote:
         | One side effect of all the Metaverse investment is that Meta
         | now has a lot more engineers working on Windows. You bet there
         | will be real world usage. ;)
        
         | e4m2 wrote:
         | > There are also some things that aren't quite yet finished:
         | 
         | > There are not yet mechanisms to build in release mode (that
         | should be achieved by modifying the toolchain).
         | 
         | > Windows/Mac builds are still in progress; open-source code is
         | mostly tested on Linux.
         | 
         | Source: https://buck2.build/docs/why.
        
       | bogwog wrote:
       | I feel so lucky that I found waf[1] a few years ago. It just...
       | solves everything. Build systems are notoriously difficult to get
       | right, but waf is about as close to perfect as you can get. Even
       | when it doesn't do something you need, or it does things in a way
       | that doesn't work for you, the amount of work needed to
       | extend/modify/optimize it to your project's needs is tiny (minus
       | the learning curve ofc, but the core is <10k lines of Python with
       | zero dependencies), and doesn't require you to maintain a fork or
       | anything like that.
       | 
       | The fact that the Buck team felt they had to do a from scratch
       | rewrite to build the features they needed just goes to show how
       | hard it is to design something robust in this area.
       | 
       | If there are any people in the Buck team here, I would be curious
       | to hear if you all happened to evaluate waf before choosing to
       | build Buck? I know FB's scale makes their needs unique, but at
       | least at a surface level, it doesn't seem like Buck offers
       | anything that couldn't have been implemented easily in waf.
       | Adding Starlark, optimizing performance, implementing remote task
       | execution, adding fancy console output, implementing hermetic
       | builds, supporting any language, etc...
       | 
       | [1]: https://waf.io/
        
         | jsgf wrote:
         | I don't know if they considered waf specifically, but the team
         | is definitely very familiar with the state of the art:
         | https://www.microsoft.com/en-us/research/uploads/prod/2018/0...
         | 
         | One of the key requirements is that Buck2 had to be an (almost)
         | drop-in replacement for Buck1 since there's no way we could
         | reasonably rewrite all the millions of existing build rules to
         | accommodate anything else.
         | 
         | Also Buck needs to support aggressive caching, and doing that
         | reliably puts lots of other constraints on the build system (eg
         | deterministic build actions via strong hermeticity) which lots
         | of build systems don't really support. It's not clear to me
         | whether waf does, for example (though if you squint it does
         | look a bit like Buck's rule definitions in Starlark).
        
         | PaulDavisThe1st wrote:
         | And the best part about waf? The explicit design intent that
         | you _include the build system with the source code_. This gets
          | rid of all the problems with build systems becoming
          | backwards/forwards incompatible, and with the issues that
          | arise when a developer works on one project using build
          | system v3.9 and another that uses build system v4.6.
         | 
         | With waf, the build system is trivially included in the source,
         | and so your project always uses the right version of waf for
         | itself.
        
         | softfalcon wrote:
          | I could be wrong as I haven't dug into the waf docs too
         | much, but I think the major difference between waf and Buck is
         | the ability to handle dependency management between various
         | projects in a large org.
         | 
         | The documentation and examples for waf seem to be around
         | building one project, in one language, with an output of
         | statistics and test results. I am sure this is a simplification
         | for education and documentation purposes, but it does leave a
         | vague area around "what if I have more than 1 or 2 build
         | targets + 5 libs + 2 apps + 3 interdependent helper libraries?"
         | 
         | Buck seems to be different in that it does everything waf does
         | but also has clear `dep` files to map dependencies between
         | various libraries within a large repository with many, many
         | different languages and build environments.
         | 
         | The key thing here being, I suspect that within Meta's giant
         | repositories of various projects, they have a tight inter-
         | linking between all these libraries and wanted build tooling
         | that could not only build everything, but be able to map the
         | dependency trees between everything as well.
         | 
         | Pair that with a bunch of consolidated release mapping between
         | the disparate projects and their various links and you have a
         | reason why someone would likely choose Buck over waf purely
         | from a requirements side.
         | 
         | As for another reason they likely chose Buck over waf. It would
         | appear that waf is a capable, but lesser known project in the
         | wider dev community. I say this because when I look into waf, I
          | mostly see it compared against CMake. Its mindshare resides
          | mostly among C++ devs. Either because of NIHS (not
         | invented here syndrome) or fear that the project wouldn't be
         | maintained over time, Meta may have decided to just roll their
         | own tooling. They seem to be really big on the whole "being the
         | SDK of the internet" as of late. I could see them not wanting
         | to support an independent BSD licensed library they don't have
         | complete control over.
         | 
         | These are just my thoughts, I could be completely wrong about
         | everything I've said, but they're my best insights into why
         | they likely didn't consider waf for this.
        
           | bogwog wrote:
           | It's true that Waf doesn't come with dependency management
           | out of the box (EDIT: unless you count pkg-config), so maybe
           | that's why (besides NIHS). The way I handle it is with
           | another excellent project called Conan (https://conan.io/)
           | 
           | However, if you're going to build a custom package management
           | system anyways, there's no reason you couldn't build it on
           | top of waf. Again, the core is tiny enough that one engineer
           | could realistically hold the entire thing in their head.
           | 
           | But I don't think we're going to get it right speculating
           | here lol. I'm sure there was more to it than NIHS, or being
           | unaware of waf.
        
             | joshuamorton wrote:
             | A number of things like being written in python start to
             | matter at big scale. I love python, but cli startup time in
             | python is actually a concern for apps used many times daily
             | by many engineers.
             | 
             | Fixing that or moving to a daemon or whatever starts to
             | take more time than just redoing it from scratch, and if
             | the whole thing is 10k lines of python, it's something a
             | domain expert can mostly reimplement in a week to better
             | serve the fb specific needs.
        
               | bogwog wrote:
               | I've been using Waf for a couple of years, including on
               | retro thinkpads from ~08. I've never run into issues with
               | the startup time for waf and/or Python. Even if the
               | interpreter were 100x slower to start and execute than it
               | currently is, that time would be negligible next to the
               | time spent waiting for a compiler or other build task to
               | complete.
               | 
               | And if it is too slow, there's profiling support for
               | tracking down bottlenecks, and many different ways to
               | optimize them. This includes simply optimizing your own
               | code, or changing waf internal behavior to optimize
               | specific scenarios. There's even a tool called
               | "fast_partial" which implements a lot more caching than
               | usual project-wide to reduce time spent executing Python
               | during partial rebuilds in projects with an obscene
               | number of tasks.
               | 
               | > Fixing that or moving to a daemon or whatever starts to
               | take more time than just redoing it from scratch, and if
               | the whole thing is 10k lines of python, it's something a
               | domain expert can mostly reimplement in a week to better
               | serve the fb specific needs.
               | 
               | Well, considering Buck just went through a from-scratch
               | rewrite, I would argue otherwise. Although, to be fair,
               | that 10k count is just for the core waflib. There are
               | extra modules to support compiling C/C++/Java/etc for
               | real projects.
               | 
               | (also, waf does have a daemon tool, but it has external
               | dependencies so it's not included by default)
        
               | joshuamorton wrote:
               | > Well, considering Buck just went through a from-scratch
               | rewrite, I would argue otherwise
               | 
               | Based on what, the idea that waf fits their needs better
               | than the tool they wrote and somehow wouldn't need to be
               | rewritten or abandoned?
               | 
               | > Even if the interpreter were 100x slower to start and
               | execute than it currently is, that time would be
               | negligible next to the time spent waiting for a compiler
               | or other build task to complete.
               | 
               | This wrongly assumes that clean builds are the only use
               | case. Keep in mind that in many cases when using buck or
               | bazel, a successful build can complete without actually
               | compiling anything, because all of the artifacts are
               | cached externally.
               | 
               | > There's even a tool called "fast_partial" which
               | implements a lot more caching than usual project-wide to
               | reduce time spent executing Python during partial
               | rebuilds in projects with an obscene number of tasks
               | 
               | Right, the point that this is a concern to some people,
               | and that there's clearly some tradeoff here such that it
               | isn't the default immediately rings alarm bells.
        
               | bogwog wrote:
               | No offense, but I think you're reading too much into my
               | casual comments here to guide your understanding of waf,
               | rather than the actual waf docs. Waf isn't optimized for
               | clean builds (quite the contrary), and neither you nor I
               | know whether the waf defaults are insufficient for
               | whatever Buck is being used for. I just pointed out the
               | existence of that "fast_partial" thing to show how deep
               | into waf internals a project-specific optimization effort
               | could go.
               | 
               | But discussions about optimization are pointless without
               | real world measurements and data.
        
         | klodolph wrote:
         | > If there are any people in the Buck team here, I would be
         | curious to hear if you all happened to evaluate waf before
         | choosing to build Buck?
         | 
         | There's no way Waf can handle code bases as large as the ones
         | inside Facebook (Buck) or Google (Bazel). Waf also has some
         | problems with cross-compilation, IIRC. Waf would simply choke.
         | 
         | If you think about the problems you run into with extremely
         | large code bases, then the design decisions behind
         | Buck/Bazel/etc. start to make a lot of sense. Things like how
         | targets are labeled as //package:target, rather than paths like
         | package/target. Package build files are only loaded as needed,
         | so your build files can be extremely broken in one part of the
         | tree, and you can still build anything that doesn't depend on
         | the broken parts. In large code bases, it is simply not
         | feasible to expect all of your build scripts to work all of the
         | time.
         | 
         | The Python -> Starlark change was made because the build
         | scripts need to be completely hermetic and deterministic.
         | Starlark is reusable outside Bazel/Buck precisely because other
         | projects want that same hermeticity and determinism.
         | 
         | Waf is nice but I really want to emphasize just how damn large
         | the codebases are that Bazel and Buck handle. They are large
         | enough that you cannot load the entire build graph into memory
         | on a single machine--neither Facebook nor Google have the will
         | to load that much RAM into a single server just to run builds
         | or build queries. Some of these design decisions are basically
         | there so that you can load subsets of the build graph and cache
         | parts of the build graph. You want to hit cache as much as
         | possible.
         | 
         | I've used Waf and its predecessor SCons, and I've also used
         | Buck and Bazel.
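
To illustrate the //package:target labeling klodolph mentions, a hedged sketch (package layout, rule, and target names here are made up):

```python
# cpp/server/BUCK (Starlark) -- hypothetical package file
cxx_binary(
    name = "server",
    srcs = ["main.cpp"],
    # A label, not a file path: only the cpp/logging package's build
    # file needs to load and evaluate for this target to build, even if
    # build files elsewhere in the tree are currently broken.
    deps = ["//cpp/logging:log"],
)
```

Because packages are loaded lazily by label, the build graph for `//cpp/server:server` touches only the packages it transitively depends on.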
        
           | nextaccountic wrote:
           | > They are large enough that you cannot load the entire build
           | graph into memory on a single machine
           | 
            | You mean, multiple gigabytes of build metadata that just
            | says things like "X depends on Y, and to build Y you run
            | command Z"?
        
           | bogwog wrote:
           | I get that, but again, there's no reason Waf can't be used as
           | a base for building that. I actually use Waf for cross
           | compilation extensively, and have built some tools around it
           | with Conan for my own projects. Waf can handle cross
           | compilation just fine, but it's up to you to build what that
           | looks like for your project (a common pattern I see is custom
           | Context subclasses for each target)
           | 
           | Memory management, broken build scripts, etc. can all be
           | handled with Waf as well. In the simplest case, you can just
            | wrap a `recurse` call in a try/except block, or you can build
           | something much more sophisticated around how your projects
           | are structured.
           | 
           | Note, I'm not trying to argue that Google/Facebook "should
           | have used X". There are a million reasons to pick X over Y,
           | even if Y is the objectively better choice. Sometimes,
           | molding X to be good enough is more efficient than spending
           | months just researching options hoping you'll find Y.
           | 
            | I'm just curious to know _if they did evaluate Waf_, why did
           | they decide against it.
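
The "wrap a `recurse` call in a try/except" idea from the comment above can be sketched in plain Python. This is not waf's actual API (waf's mechanism is `ctx.recurse()` inside a wscript); `build_all` and the callback are made-up names illustrating the pattern of tolerating broken sub-project build scripts:

```python
# Generic sketch: keep building sub-projects even when one build script
# raises, collecting failures instead of aborting the whole run.
def build_all(subprojects, build_one):
    """Call build_one(name) for each sub-project; collect failures."""
    built, failed = [], []
    for name in subprojects:
        try:
            build_one(name)
            built.append(name)
        except Exception as exc:
            failed.append((name, str(exc)))
    return built, failed
```

A real waf integration would do this around `ctx.recurse(subdir)` so one broken wscript doesn't block unrelated targets.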
        
         | xxpor wrote:
         | I truly believe any build system that uses a general-purpose
         | language by default is too powerful. It lets people do silly
         | stuff too easily. Build systems (for projects with a lot of
         | different contributors) should be easy to understand, with few,
         | if any, project specific concepts to learn. There can always be
         | an escape hatch to python (see GN, for example), but 99% of the
         | code should just be boring lists of files to build.
        
           | sangnoir wrote:
           | You cannot magick away complexity. Large systems (think
           | thousands of teams with hundreds of commits per minute)
           | require a way to express complexity. When all is said and
           | done, you'll have a turing-complete build system anyway - so
           | why not go with something readable
        
             | xxpor wrote:
             | I seriously doubt there's a single repo on the planet that
              | averages hundreds of commits _per minute_. That's
             | completely unmanageable for any number of reasons.
        
           | pjmlp wrote:
           | They are the bane of any DevOps/Build Engineer when trying to
           | fix build issues.
        
           | bogwog wrote:
           | I agree, that's also pretty much why Starlark exists.
           | However, there are many cases where you do need complex build
           | logic.
           | 
           | Personally, I always go for _declarative_ CMake first, then
           | waf as soon as I find my CMakeLists looking like something
           | other than just a list of files.
           | 
           | I've considered before creating a simple declarative language
           | to build simple projects like that with waf, but I don't like
           | the idea of maintaining my own DSL for such little benefit,
           | when CMake works just fine, and everyone knows how to use it.
           | I feel like I'd end up with my own little TempleOS if I
           | decided to go down that rabbit hole.
        
           | DrBazza wrote:
            | The problem with build systems is their users. For exactly the
           | reason you say. For a man with a hammer every problem is a
           | nail. Developers don't think of build systems in the right
           | way. If you're doing something complex in your build it
           | should surely be a build task in its own right.
        
           | baby wrote:
           | I think I would agree as well. So I'm not sure how that makes
           | me feel about nix.
        
             | xxpor wrote:
             | Nix is different because no one's smart enough to figure
              | out how to do silly things ;)
        
       | mgaunard wrote:
       | No one seems to know how to do practical and useful build
       | systems, so I write my own.
       | 
       | In particular the idea of writing something entirely generic that
       | works for everything is a waste of time. The build system should
       | be tailored to building your application in the way that matters
       | to you and making the best of the resources that you have.
        
         | Shish2k wrote:
         | This sounds like a nightmare if you're dealing with even
         | single-digit numbers of projects - even just in my personal
         | spare-time hobby projects, I have ~5 nodejs projects, ~5 php
         | projects, ~5 rust projects, ~5 python projects - and I find
         | myself wishing for a common build tool because right now, even
         | if I only have one build system _per language_, that still
         | means that eg migrating from travis to github actions meant
         | that I needed to rewrite four separate build/test/deploy
         | workflows...
        
           | mgaunard wrote:
           | I have 500 projects.
           | 
            | They're all made to follow the same conventions so none of
           | them specifies anything redundant.
        
       | born-jre wrote:
        | Wow, nobody has dropped the obligatory xkcd 927.
       | 
       | https://xkcd.com/927/
        
         | PaulDavisThe1st wrote:
         | Build systems are not standards.
        
       | evmar wrote:
       | How do the "transitive-sets (tsets)" mentioned here compare to
       | Bazel depsets[1]? Is it the same thing with a different name, or
       | different in some important way?
       | 
       | [1] https://bazel.build/rules/lib/depset
        
         | cjhopman wrote:
         | tsets are described in more detail here:
         | https://buck2.build/docs/rule_authors/transitive_sets/. Bazel's
         | depsets were one of the influences on their design. To users,
         | they will seem fairly similar and would be used for solving
         | similar problems, there's some differences in the details of
         | the APIs.
         | 
         | I'm not well-versed on the internal implementation details of
         | bazel's depsets, but one interesting thing about tsets that may
         | further differentiate them is how they are integrated into the
         | core, specifically that we try hard to never flatten them
         | there. The main two places this comes up are: (1) when an
         | action consumes a tset projection, the edges on the DICE graph
         | (our incremental computation edges) only represent the direct
         | tset roots that the action depends on, not the flattened full
         | list of artifacts it represents and (2) when we compute the
        | input digest merkle tree for uploading an action's inputs to RE,
         | that generally doesn't require flattening the tset as we cache
         | the merkle trees for each tset projection node and can
         | efficiently merge them.
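
A pure-Python sketch of the "never flatten" property cjhopman describes (this is not Buck2's actual tset API; `TSet` and `digest` are made-up names). The transitive set is a DAG, and each distinct node's result is computed once and cached, so shared subtrees are merged rather than expanded into a flat list:

```python
import hashlib

class TSet:
    """A node in a transitive-set DAG; children are shared, never copied."""
    def __init__(self, value, children=()):
        self.value = value
        self.children = tuple(children)

def digest(node, cache=None):
    """Merkle-style digest; each distinct node is hashed exactly once."""
    if cache is None:
        cache = {}
    if id(node) in cache:
        return cache[id(node)]
    h = hashlib.sha256(node.value.encode())
    for child in node.children:
        # Reuses the cached child digest instead of re-walking the subtree.
        h.update(digest(child, cache).encode())
    cache[id(node)] = h.hexdigest()
    return cache[id(node)]
```

The analogy to (2) above: the cached per-node digests play the role of the cached per-projection merkle trees, which can be merged without ever enumerating every artifact underneath.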
        
       | oggy wrote:
        | Great to see this. I hope it takes off - Bazel is useful, but I
        | really like the principled approach behind this one (see the
        | Build Systems a la Carte paper), and Neil is scarily good from my
       | experience of working with him so I'd expect that they've come up
       | with something awesome.
       | 
       | One thing I find annoying with all of these general, language-
       | agnostic build systems though is that they break the
       | "citizenship" in the corresponding language. So while you can
       | usually relatively easily build a Rust project that uses
       | crates.io dependencies, or a Python project with PyPi
       | dependencies, it seems hard to make a library built using
       | Bazel/Buck available to non-Bazel/Buck users (i.e., build
       | something available on crates.io or PyPi). Does anyone know of
       | any tools or approaches that can help with that?
        
         | marcyb5st wrote:
         | Regarding bazel, the rules_python has a py_wheel rule that
         | helps you creating wheels that you can upload to pypi (https://
         | github.com/bazelbuild/rules_python/blob/52e14b78307a...).
         | 
         | If you want to see an approach of bazel to pypi taken a bit to
         | the extreme you can have a look at tensorflow on GitHub to see
         | how they do it. They don't use the above-mentioned building
         | rule because I think their build step is quite complicated
          | (C/C++ stuff, CUDA/ROCm support, python bindings, and multi-OS
         | support all in one before you can publish to pypi).
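
For reference, a minimal `py_wheel` target might look like this. This is a sketch based on my reading of the rules_python docs linked above; the attribute values and the `:mylib` target are made up:

```python
# BUILD.bazel (Starlark) -- hypothetical
load("@rules_python//python:packaging.bzl", "py_wheel")

py_wheel(
    name = "mylib_wheel",
    distribution = "mylib",   # name on PyPI
    version = "0.1.0",
    deps = [":mylib"],        # an existing py_library target
)
```

`bazel build :mylib_wheel` then produces a `.whl` artifact you can upload to PyPI.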
        
         | dnsco wrote:
         | If I'm understanding, for the rust specific case, this
         | generates your BUCK files from your Cargo.toml:
         | 
         | https://github.com/facebookincubator/reindeer
        
         | kccqzy wrote:
         | I have a lot of respect for Neil, but I've been burned by the
         | incompleteness and lack of surrounding ecosystem for his
         | original build system Shake (https://shakebuild.com/). This was
         | in a team where everyone knows Haskell.
         | 
         | I'm cautiously optimistic with this latest work. I'm glad at
         | least this isn't some unsupported personal project but
         | something official from Meta.
        
         | jpdb wrote:
         | Bazel now has a module system that you can use.
         | 
         | https://bazel.build/external/module
         | 
         | This means your packages are just Git repos + BUILD files.
        
         | lopkeny12ko wrote:
         | > One thing I find annoying with all of these general,
         | language-agnostic build systems though is that they break the
         | "citizenship" in the corresponding language
         | 
         | I mean, this is kind of the whole point. A language agnostic
         | build system needs a way to express dependencies and
         | relationships in a way that is _agnostic_ to, and abstracts
         | over, the underlying programming language and its associated
         | ecosystem conventions.
        
       | lopkeny12ko wrote:
       | I'm missing some historical context here. This article goes out
       | of its way to compare and contrast with Bazel. Even the usage
       | conventions, build syntax (Starlark), and RBE API are the same as
       | in Bazel.
       | 
       | Did FB fork Bazel in the early days but retain basically
       | everything about it except the name? Why didn't they just...adopt
       | Bazel, and contribute to it like any other open source project?
        
         | 0xcafefood wrote:
         | One thing you might be missing is that this is Buck2.
         | 
         | Buck (https://github.com/facebook/buck) has been open sourced
         | for nearly 10 years now.
         | 
         | The lore I've heard is that former Googlers went to Facebook,
         | built Buck based on Blaze, and Facebook open sourced that
         | before Google open sourced Blaze (as Bazel).
         | 
         | The first pull to the Buck github repo was on May 8, 2013 (http
         | s://github.com/facebook/buck/pulls?q=is%3Apr+sort%3Acrea...).
         | The first to Bazel was Sep 30, 2014 (https://github.com/bazelbu
         | ild/bazel/pulls?q=is%3Apr+sort%3Ac...).
        
           | dheera wrote:
           | Smells like what FB did with Caffe vs. Caffe2, the two of
           | which have nothing to do with each other.
        
         | ynx wrote:
         | Buck far predates Bazel, and was built by ex-googlers
         | replicating Blaze.
         | 
         | Skylark was a later evolution, after the python scripts grew
         | out of control, and a cue that fb took from Google long after
         | Buck had been near-universally deployed for several years.
        
         | krschultz wrote:
         | At the time that FB started writing Buck, Bazel was not open
         | source. I believe it did exist as Blaze internally at Google
         | before FB started writing Buck. Facebook open sourced Buck
         | before Google open sourced Blaze as Bazel.
         | 
         | Over time Facebook has been working to align Buck with Bazel,
         | e.g. the conversion to Starlark syntax so tools such as
         | Buildozer work on both systems. I believe Buck2 also now uses
         | the same remote execution APIs as Bazel, but don't quote me on
         | that.
        
       | thomasahle wrote:
       | It's probably more about better caching, but using buck2
        | internally at Meta reduced my build times from minutes to seconds.
       | A very welcome upgrade.
        
       | rockwotj wrote:
       | Does anyone know how IDE support for Buck2 is? I couldn't find
       | anything except some xcode config rules. Half the battle with
        | Bazel/Buck/etc is that getting an IDE or LSP to work for
       | C++/Java/Kotlin/Swift/etc is always a pain because those tools
       | don't really work out of the box.
        
       ___________________________________________________________________
       (page generated 2023-04-06 23:00 UTC)