[HN Gopher] Launch HN: Depot (YC W23) - Fast Docker Builds in th...
       ___________________________________________________________________
        
       Launch HN: Depot (YC W23) - Fast Docker Builds in the Cloud
        
       Hey HN! We're Kyle and Jacob, the founders of Depot
       (https://depot.dev), a hosted container build service that builds
       Docker images up to 20x faster than existing CI providers. We run
       fully managed Intel and Arm builders in AWS, accessible directly
       from CI and from your terminal.  Building Docker images in CI today
       is slow. CI runners are ephemeral, so they must save and load the
       cache for every build. They have constrained resources, with
       limited CPUs, memory, and disk space. And they do not support
       native Arm or multi-platform container builds, requiring
       emulation instead.  Over 4 years of working together, we spent countless
       hours optimizing and reoptimizing Dockerfiles, managing layer
       caching in CI, and maintaining custom runners for multi-platform
       images. We were working around the limitations of multi-platform
       builds inside of GitHub Actions via QEMU emulation when we thought
       "wouldn't it be nice if someone just offered both an Intel and Arm
       builder for Docker images without having to run all that
       infrastructure ourselves". Around January of 2022 we started
       working on Depot, designed as the service we wished we could use
       ourselves.  Depot provides managed VMs running BuildKit, the
       backing build engine for Docker. Each VM includes 16 CPUs, 32GB of
       memory, and a persistent 50GB SSD cache disk that is automatically
       available across builds--no saving or loading of layer cache over
       the network. We launch both native Intel and native Arm machines
       inside of AWS. This combination of native CPUs, fast networks, and
       persistent disks significantly lowers build time -- we've seen
       speedups ranging from 2x all the way to 20x. We have customers with
       builds that took three hours before that now take less than ten
       minutes.  We believe that today we are the fastest hosted build
       service for Docker images, and the only hosted build service
       offering the ability to natively build multi-platform Docker images
       without emulation.  We did a Show HN last September:
       https://news.ycombinator.com/item?id=33011072. Since then, we have
       added the ability to use Depot in your own AWS account; added
       support for Buildx bake; increased supported build parallelism;
       launched an eu-central-1 region; switched to a new mTLS backend for
       better build performance; simplified pricing and added a free tier;
       and got accepted into YC W23!  Depot is a drop-in replacement for
       `docker buildx build`, so anywhere you are running `docker build`
       today, you replace it with `depot build` and get faster builds. Our
       CLI wraps the Buildx library, so any parameters you pass to
       your Docker builds today are fully compatible with Depot. We also
       have a number of integrations that match Docker integrations inside
       of CI providers like GitHub Actions.  We're soon launching a public
       API to programmatically build Docker images for companies that need
       to securely build Docker images on behalf of their customers.  You
       can sign up at https://depot.dev/sign-up, and we have a free tier
       of 60 build minutes per month. We would love your feedback and look
       forward to your comments!
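        
       As a rough sketch of the swap (the project ID below is a
       placeholder; everything else is a standard Buildx flag):
        
           # before: a multi-platform build in CI
           docker buildx build --platform linux/amd64,linux/arm64 \
             -t org/app:latest --push .
        
           # after: same flags, routed to Depot's native builders
           depot build --project <project-id> \
             --platform linux/amd64,linux/arm64 \
             -t org/app:latest --push .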
        
       Author : jacobwg
       Score  : 137 points
       Date   : 2023-02-22 16:40 UTC (6 hours ago)
        
       | abatilo wrote:
       | I've had a few chats with Kyle and Jacob over the last few
       | months.
       | 
       | They're incredibly knowledgeable about the subject and are making
       | amazing strides for build speeds. I'd encourage anyone who
       | doesn't believe these results and benchmarks to just try it out.
       | They're completely real and it's delightful.
        
       | blueicelake121 wrote:
       | [flagged]
        
       | lxe wrote:
       | > Building Docker images in CI today is slow. CI runners are
       | ephemeral, so they must save and load the cache for every build.
       | 
       | >...persistent disks significantly lowers build time
       | 
       | Does this mean your solution places specific caches, like bazel,
       | node_modules, .yarn, and other intermediary artifacts onto a
       | shared volume and reuses them among jobs?
        
         | jeremy_k wrote:
         | Yes, Buildkit allows for this with the `--mount=type=cache`
         | functionality. [0]
         | 
         | Now as an end user you still have to add this to your
         | Dockerfile, but if subsequent builds are able to continually
         | use this cache, build times will drastically improve.
         | 
         | - [0] https://docs.docker.com/build/cache/
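         | 
         | A minimal Dockerfile sketch of the idea (the yarn cache path
         | here is illustrative):
         | 
         |     # syntax=docker/dockerfile:1
         |     FROM node:18
         |     WORKDIR /app
         |     COPY package.json yarn.lock ./
         |     # the cache mount persists yarn's download cache between builds
         |     RUN --mount=type=cache,target=/usr/local/share/.cache/yarn \
         |         yarn install --frozen-lockfile
         |     COPY . .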
        
           | jacobwg wrote:
           | --mount=type=cache is awesome!
           | 
           | One of the reasons we created Depot is that cache mounts
           | aren't persisted in GitHub Actions: each CI run is
           | entirely ephemeral, so the files saved in a cache mount
           | aren't saved across runs. BuildKit doesn't export those cache
           | types via cache-to. There are some manual workarounds
           | creating tarballs of the BuildKit context directory, but we
           | wanted something that just works without needing to save/load
           | tarballs, which can be quite slow.
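           | 
           | For reference, the usual GHA save/restore pattern looks
           | roughly like this (paths and keys are placeholders, and it
           | still doesn't cover cache mounts):
           | 
           |     - uses: actions/cache@v3
           |       with:
           |         path: /tmp/.buildx-cache
           |         key: buildx-${{ github.sha }}
           |         restore-keys: buildx-
           |     - run: >
           |         docker buildx build
           |         --cache-from type=local,src=/tmp/.buildx-cache
           |         --cache-to type=local,dest=/tmp/.buildx-cache,mode=max .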
        
             | [deleted]
        
         | jacobwg wrote:
         | Yes, all the layer cache is saved on a persistent volume that's
         | reused between jobs. In that respect, it's very similar to
         | running `docker build` on your local laptop, where each build
         | is incremental. But in Depot's case, that incremental
         | experience is shared for all CI builds, and between all users
         | running `depot build` on their local devices.
        
           | lxe wrote:
           | Awesome. How does this work in a distributed sense? For
           | example, for multiple parallel builds where each build is
           | building a different git sha?
        
             | jacobwg wrote:
             | Today we're vertically scaling builders, so multiple builds
             | run on the same large EC2 instance. The instances support
             | processing multiple Docker builds at once, and thanks to
             | BuildKit can even deduplicate work between multiple
             | parallel builds, so like if two simultaneous builds share
             | some steps, it can handle coordinating so those steps run
             | just once.
             | 
             | We have plans to expand to more horizontal scaling with
             | tiered caching, so we can keep the speedups we see today
             | but further increase potential parallelism.
        
               | lxe wrote:
               | Yes! Love the vertical scaling approach. Pinning
               | similar builds to the same host doesn't get enough
               | love.
        
           | jeremy_k wrote:
           | It sounds like the SSD provides the layer cache with
           | `--cache-to=type=local`? Does Depot support connecting
           | additional VMs to the volume if someone wants to scale up?
           | I've been meaning to look into the new S3 cache to solve this
           | issue.
        
             | jacobwg wrote:
             | Exactly, we effectively persist BuildKit's state
             | directory, so it doesn't even need to tar and export any
             | of the internal state or layer cache; it gets the same
             | state volume for the next build.
             | 
             | Today we're using vertical scaling, so we run BuildKit on
             | larger EC2 instances, tune `max-parallelism`, and let
             | BuildKit handle processing multiple builds / deduplicating
             | steps across builds / etc. What our website calls a
             | "project" basically equates to a new EC2 instance (so two
             | different projects are fully isolated from each other).
             | 
             | We'd like to expand into horizontal scaling, probably with
             | some kind of tiered caching, so the builders would be able
             | to use local cache on local SSDs if available, but fall
             | back to copying cache from S3 if not available locally.
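             | 
             | For reference, that knob lives in buildkitd.toml (the
             | value here is illustrative):
             | 
             |     # buildkitd.toml
             |     [worker.oci]
             |       max-parallelism = 4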
        
       | mmastrac wrote:
       | If ARM continues to take off, this will be a pretty useful tool.
       | I'm building Rust native binaries for one of my projects using
       | buildx, but it's 1) way too slow using buildx emulation and 2)
       | way too slow to build on the Pi itself.
       | 
       | In the end I created a hacky build process where I use a single
       | container to build both the x64 and ARM versions serially, and
       | then create multi-arch containers in a separate step. It was very
       | painful to get the right native libraries installed, and it's not
       | terribly easy to build these two platforms in parallel.
       | 
       | In short, having access to real ARM builders would be great, and
       | persistent disks would probably boost my build performance quite
       | a bit.
       | 
       | The dockerfile that I had to use:
       | https://github.com/mmastrac/progscrape/blob/master/Dockerfil...
       | 
       | Example build run (~20 mins):
       | https://github.com/mmastrac/progscrape/actions/runs/42285298...
        
         | kylegalbraith wrote:
         | Hey there! I'm the other half of Depot. This is an awesome tool
         | you have built, I have actually been looking for something like
         | this for myself as well.
         | 
         | Couldn't agree more with the pain points you mention; they
         | were the biggest things that led us to start Depot, and
         | we're really excited about what we can do next.
         | 
         | If you ever want to do away with your hacky build process
         | and try out Depot, we have a free tier now.
        
           | mmastrac wrote:
           | FWIW setup was pretty easy, but I got an RPC error halfway
           | through my build. Not sure what happened here, though it
           | looks like things were definitely building quicker!
           | 
           |     Error: failed to receive status: rpc error: code =
           |     Unavailable desc = closing transport due to: connection
           |     error: desc = "error reading from server: EOF",
           |     received prior goaway: code: NO_ERROR
           |     Error: failed with: Error: failed to receive status:
           |     rpc error: code = Unavailable desc = closing transport
           |     due to: connection error: desc = "error reading from
           |     server: EOF", received prior goaway: code: NO_ERROR
           | 
           | I didn't have a chance to clean up the hacks yet but I'll
           | give that a shot and see if it clears up the error. It might
           | just be how intense the Rust build process is for this
           | project.
        
             | jacobwg wrote:
             | Hey, I'll look into this one - this kind of error usually
             | means that the BuildKit server believes the build has
             | finished, but the CLI missed the fact that the build ended.
             | It _might_ be related to your Rust build, but I want to
             | make sure there's not something happening on our end.
        
               | mmastrac wrote:
               | Great! My email is in my profile if you need to reach
               | out about anything. The project is open-source, so
               | feel free to dig around if needed.
        
       | neuronexmachina wrote:
       | Do you have a timeline for supporting Google Cloud and their Arm-
       | based Tau VMs?
        
         | jacobwg wrote:
         | We'd like to expand to support GCP as well, yes. Are you
         | looking to run the Depot runners in your own GCP account?
        
       | kashnote wrote:
       | This looks interesting! Congrats on the launch. Question, though,
       | why is it asking for credit card information for the free tier?
       | Would be nice to try it first without giving my card info.
        
         | jacobwg wrote:
         | Hey! This is cryptominer protection - we used to have an option
         | to access a trial without a credit card, and the cryptominers
         | did indeed sign up. :) Requiring a credit card for the free
         | tier is a speed bump for that, on the free plan it subscribes
         | you to a $0 product in Stripe.
        
         | atonse wrote:
         | I have to imagine that _any_ service that is providing
         | CPU/compute (even if it is for building things) can be abused
         | by cryptominers, hence the additional verification.
        
       | programmarchy wrote:
       | Sounds like this could solve the woes I've been having building
       | x86 images on my M1. Docker emulation via qemu is still really
       | buggy.
        
         | kylegalbraith wrote:
         | Yup, we can definitely help with this. It was one of the
         | pain points that inspired Depot: we found that QEMU was just
         | too slow inside of things like GitHub Actions, where we
         | wanted to build for both architectures for local development
         | and production.
        
       | mynameisvlad wrote:
       | I saw this while checking out Moon yesterday and it looked great. I
       | spun up my own GH runner because arm64 builds were taking up so
       | much time and eating into my credit.
       | 
       | But the pricing hurts for people that need more than 60 minutes
       | of build time (which is pretty easy to go through in one month)
       | but would be using it for personal projects like myself.
       | 
       | I could certainly see myself paying X/min over 60, but $49+5c/min
       | for personal stuff is a hard no.
        
         | kylegalbraith wrote:
         | Hey! Thank you for sharing this feedback, it's incredibly
         | helpful and we are so grateful for it. We thought a lot about
         | this when we decided to add a free tier and have some different
         | ideas on what we want to do next in this realm.
         | 
         | For now, if you want to sign up for our free tier, we can
         | flip your org to the structure you're describing: 1 project,
         | 1 user, first 60 build minutes free, and 5 cents/minute
         | after that.
        
       | user3939382 wrote:
       | My immediate reaction is: does this offer enough of a feature
       | incentive when I already have this in AWS
       | CodePipeline+CodeBuild?
        
         | kylegalbraith wrote:
         | Hey thank you for the great question! I think the short answer
         | is that if you already have all of this wired up in
         | CodePipeline+CodeBuild and it works for you, then you're
         | probably set.
         | 
         | But there are a few steps to get there in that setup. I
         | believe you have to have three CodeBuild projects, one for
         | each architecture and then one for the manifest merge. So it
         | works, but it's a bit of config to stitch together.
         | 
         | With Depot, you would just install our `depot` CLI in your
         | config and run `depot build --platform linux/amd64,linux/arm64`
         | instead of `docker build`. We handle building the image on both
         | architectures in parallel and can push the merged result to
         | your registry. We can even run the builders in your own AWS
         | account so you maintain total control of the underlying infra
         | building your image.
         | 
         | We are working on other features for Depot that would go beyond
         | the speed & multi-platform capabilities. We want to surface
         | insights and telemetry about image builds that could help them
         | be smaller, faster, and smarter. We are also thinking about
         | things in the container security space such as container
         | signing, sboms, etc. Happy to answer more questions about any
         | of this!
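         | 
         | A rough buildspec sketch of that swap (the install URL,
         | project ID, and repo variable here are placeholders; it also
         | assumes a DEPOT_TOKEN env var for auth; see our docs for the
         | exact install command):
         | 
         |     # buildspec.yml
         |     version: 0.2
         |     phases:
         |       build:
         |         commands:
         |           - curl -fsSL https://depot.dev/install-cli.sh | sh
         |           - depot build --project <project-id> --platform linux/amd64,linux/arm64 -t $IMAGE_REPO:latest --push .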
        
       | mnapoli wrote:
       | Happy Depot user here, our builds are now 10 to 20 times faster:
       | https://twitter.com/matthieunapoli/status/162009074440824422...
        
       | eatonphil wrote:
       | > And they do not support native Arm or multi-platform container
       | builds
       | 
       | What's the issue with
       | https://docs.docker.com/build/building/multi-platform/? I only
       | just learned about this today but I've already got it building
       | cross-platform images correctly in Github Actions.
        
         | jacobwg wrote:
         | Hey, we support that exact thing; the `depot` CLI wraps
         | buildx, so we support the same `--platform` flag.
         | 
         | The difference between Depot and doing it directly in GitHub
         | Actions is the "native CPU" part - on GitHub Actions, the Arm
         | builds will run with emulation using QEMU. It works, but it's
         | often at least 10x slower than it could be if the build was
         | running on native Arm CPUs. Builds that should take a few
         | minutes can take hour(s).
         | 
         | For multi-platform, if you want to keep things fast, you need a
         | simultaneous connection to both an Intel and an Arm machine,
         | and the two work in concert to build the multi-platform image.
         | 
         | There are workarounds with emulation, or pushing individual
         | images separately and merging the result after they're in a
         | registry. But if you just want to `build --platform
         | linux/amd64,linux64` and it be fast, we handle all the
         | orchestration for that.
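         | 
         | The push-then-merge workaround usually looks something like
         | this (the per-arch tags are placeholders):
         | 
         |     docker buildx imagetools create -t org/app:latest \
         |       org/app:amd64 org/app:arm64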
        
         | ellisv wrote:
         | How do you deal with so many base images only supporting a
         | single architecture? We'd love to build multi-arch containers
         | but don't want to maintain all of our base images either.
        
           | eatonphil wrote:
           | What images support only a single architecture? The ones I
           | use (alpine, ubuntu) work fine.
           | 
           | If you're already building images (because that's what we're
           | talking about) what difference does it make at which base
           | image you start?
        
       | DAlperin wrote:
       | This is super cool! Do you have any plans to support on-demand
       | image building like the now defunct https://ctr.run? It seems
       | like with your speed it's kind of a perfect match.
        
         | kylegalbraith wrote:
         | Thank you for the very kind comments! It would be really
         | cool to do something like ctr.run with what we have in Depot
         | today. Was there a part you really enjoyed that would be a
         | must-have for you?
        
       | 121watts wrote:
       | Congrats y'all! Great bunch of humans building cool things.
        
       | caleblloyd wrote:
       | Satisfied Depot customer here! Jacob was super responsive when I
       | had questions about how to integrate with "docker buildx bake".
       | The product works great, it has cut our Docker image build times
       | from 15 minutes previously on GitHub Actions down to 3 minutes on
       | Depot.
        
       | jensneuse wrote:
       | It's great to see others invest in fast CI as well! Amazing
       | product, quite similar to our stack. We're using Firecracker
       | (fly.io Machines) and Podman to build docker containers. Our
       | baseline is 13 seconds to deploy a change, including git clone
       | and docker push (4s usually). Here's a link to a short video:
       | https://wundergraph.com/blog/wundergraph_cloud_announcement#...
       | 
       | We're soon going to post a blog on how we built this pipeline.
       | Lots of interesting details to share on how to make Docker
       | builds fast.
        
         | jacobwg wrote:
         | Super cool, if you'd ever like to chat, let me know, my email
         | is in my profile. We have an API in private beta to dynamically
         | manage namespaces and acquire BuildKit connections within those
         | namespaces, designed for platforms that need to build images on
         | behalf of their customers. Any feedback you might have would be
         | awesome!
        
       | paulgb wrote:
       | Congrats on the launch!
       | 
       | We've been using Depot with Plane (https://plane.dev/). Prior
       | to Depot, I had to disable arm64 builds because they slowed
       | the build down so much (30m+) on GitHub's machines. With Depot, we
       | get arm64 and amd64 images in ~2m.
        
       | fidgewidge wrote:
       | Congrats on hitting the front page. As someone perhaps one step
       | away from your target market, can you please explain how the
       | value prop here really works? You charge $49/month for 50GB of
       | disk cache, 32GB of RAM and 16 vCPUs along with a bit of custom
       | tooling on top. For 44 EUR/month I can get a dedicated Hetzner
       | machine with a terabyte of disk, 64 GB of RAM and 6 dedicated
       | cores (12 with SMT), and no user or project limits. Because it
       | has so much more disk and RAM, and because builds tend to be
       | disk/ram/bandwidth limited, performance is probably competitive.
       | 
       | You say that this is meant to solve problems with CI being
       | ephemeral. Maybe I'm old fashioned, but my own CI cluster uses
       | dedicated hardware and nothing is ephemeral. It could also use
       | dedicated VMs and the same thing would apply. We run TeamCity and
       | the agents are persistent, builds run in dedicated workspaces so
       | caching is natural and easy. This doesn't cost very much.
       | 
       | When you add more features then I can see there's some value
       | there (SBOM etc) but then again, surely such features are more
       | easily done by standalone tools rather than having to rent
       | infrastructure from you.
        
         | jacobwg wrote:
         | So we have the concept of a "project", which in retrospect
         | isn't the best name and is way too vague. :) But on our end, a
         | "project" equates to one or two cache SSDs + one or two EC2
         | instances we're running, depending on whether you've asked for
         | a single platform build or an Intel+Arm multi-platform build.
         | 
         | We do charge $0.05 per minute of build time used, but in theory
         | that $49/mo plan gives you access to up to 20 build machines,
         | if you're building 10 projects at once.
         | 
         | That said, if you already have your own dedicated build cluster
         | / CI setup, you may prefer to just use that! Depot is
         | effectively doing that kind of thing for you if you don't
         | already have your own hosted CI system or would prefer not to
         | orchestrate Docker layer cache.
         | 
         | We will be expanding to more things like SBOMs, container
         | signing, insights and analytics about what's happening inside
         | the builds, but hopefully in more integrated ways, since we
         | control the execution environment itself.
        
           | fidgewidge wrote:
           | I see. So it's $50/month plus .05 per minute of build time
           | after 60 minutes is up in that month. Let's say you need 10
           | hours of building per week, so that'd be another $120/month
           | on top. I'm not sure how fast these builds would go with your
           | setup, maybe 10 hours a week is a lot. But we're still
           | talking like $170/month for something with user limits and
           | fairly restricted resources. For less than that I can get a
           | 16 core AMD Rome machine with nearly 8 TB of flash split
           | across two drives, which should eat image builds for
           | breakfast. The extra cost is a bit of Linux sys admin which
           | can be fully automated (apt-get install unattended-upgrades
           | and a little more on first install).
           | 
           | Clearly from other responses in this thread there are people
           | who feel this is a good deal, so best of luck to you. But I'm
           | kinda reminded here of 37signals saying they can save $7M
           | over 5 years by leaving the cloud. It seems the goal here is
           | to dig people out of performance problems they get by using
           | one type of cloud service, by selling them another type of
           | cloud service!
        
             | ascendantlogic wrote:
             | >But we're still talking like $170/month
             | 
             | Rounding error for most companies.
             | 
             | >The extra cost is a bit of Linux sys admin which can be
             | fully automated
             | 
             | You are overweighting hard dollar costs and underweighting
             | the value of engineering time. Maybe you're the world's
             | greatest devops/platform engineer/sysadmin and once you
             | wire up everything in under 5 minutes it will never need
             | maintenance ever again, but for most everyone else,
             | speeding up image builds by using a service that someone
             | else thinks about and does maintenance on is absolutely
             | worth it for $170/mo.
        
               | fidgewidge wrote:
               | Yes, maybe. I do know Linux pretty well and don't
               | consider sysadmin costs a big drain on my own company or
               | time. I can see that it'd be much more expensive if you
               | hire people who don't have much UNIX experience.
               | 
               | On the other hand, I've experienced first hand how cloud
               | costs can explode uncontrollably in absurd ways. One
               | company I worked at had a cloud cost crisis and they
               | weren't even serving online services, just shovelling
               | money into Azure for generic dev services like VMs for
               | load tests, DBs for testing, super-slow CI agents, etc.
               | They never managed to properly fix this because of the
               | mentality you express here: a few hundred bucks a month
               | here, a few hundred there, everyone gets access to spin
               | up resources and it's all worth it because we're all
               | soooo valuable. Then one day you realize you're
               | inexplicably burning millions on subscription services
               | and cloud spend, yet nobody can identify quite why or on
               | what, or how to push costs down. Death by a thousand
               | cuts, it was quite the revelation. Free cloud credits are
               | murder, because they embed a culture of profligacy and
               | "my time is too valuable to optimize this". By the time
               | the startup credits run out it's too late.
        
           | nasmorn wrote:
           | As someone who is not a user yet, $49 sounds like a
           | rounding error in our dev budget, and if all I have to do
           | is change a line in my CI script, that does sound very
           | tempting. Standing up another machine and registering it
           | as a runner sounds like way more work. It also incurs a
           | recurring task to run updates there or cycle the VM to get
           | a new base image. An hour of dev time is really very
           | expensive.
        
       | nodesocket wrote:
       | Congratulations on the launch. I am currently using GitHub
       | actions and docker/setup-qemu-action@master. Then just call
       | docker buildx with the platform arg. This works, except the
       | builds do take a while since it's running on Intel emulated arm64
       | (Microsoft booo).
       | 
       | What happens when GitHub adds native arm support though? It
       | seems like a big value-add of your service would be
       | immediately displaced, and additionally one could use
       | self-hosted runners with GitHub to solve caching.
        
         | jacobwg wrote:
         | > What happens when GitHub adds native arm support though?
         | 
         | That will make it much faster to build arm images on GHA
         | natively - in that scenario, Depot should still be several
         | times faster like we are on Intel today, primarily due to how
         | we're managing layer cache to avoid needing to save it or load
         | it between builds (cache-to / cache-from), as well as just
         | having larger runners and more sophisticated orchestration. We
         | can take advantage of BuildKit's ability to share cache and
         | deduplicate steps between concurrent builds for instance.
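         | 
         | For context, a sketch of the cache-to/cache-from dance on
         | stock GHA that Depot skips (the tag is a placeholder):
         | 
         |     docker buildx build \
         |       --cache-from type=gha \
         |       --cache-to type=gha,mode=max \
         |       -t org/app:latest --push .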
         | 
         | We're also expanding Depot in a few different directions,
         | including along the security path with container signing and
         | SBOM support, as well as some upcoming build insights and
         | analytics features. The goal is that it's always super easy to
         | `depot build` from wherever you `docker build` today, and that
         | Depot provides the best possible experience for container
         | builds.
        
       | vignesha wrote:
       | Congrats on the launch!
        
       | whatsu wrote:
       | I registered using Google auth and still have to confirm my
       | email. Is that correct?
        
       | markdrew74 wrote:
       | This has been an amazing tool and has been speeding up our
       | builds. What was normally a long process is done in seconds.
       | We wait longer for the repo clone (side-eyes at Bitbucket)
       | than for Depot! Great work Kyle and Jacob!
        
       | qrush wrote:
       | Big fans of Depot here at Wistia, keep it up!!
        
       | jiggawatts wrote:
       | AFAIK most orgs that use something like Azure DevOps pipelines
       | for builds will deploy a VM Scale Set Agent Pool with the runner
       | image. This provides layer caching and incremental builds. Ref:
       | https://learn.microsoft.com/en-us/azure/devops/pipelines/age...
       | 
       | What's the advantage of your platform over this?
        
         | jacobwg wrote:
         | I haven't used Azure DevOps scale sets, but it looks similar,
         | assuming it is orchestrating a persistent cache disk.
         | 
         | If you're not using Azure DevOps, or even if you are, you can
         | use `depot build` from any CI provider or from your local
         | machine, anywhere you'd run `docker build` today, and the speed
         | and cache are shared everywhere. And you don't need to
         | configure any infrastructure. We also orchestrate both Intel
         | and Arm machines depending on what type of build you need to do
         | (or both for multi-platform). That can be important if you need
         | to run on Arm in production, or if you have developers using
         | new MacBooks.
         | 
         | We're working on more Docker-specific features as well; we
         | plan to directly integrate things like SBOMs, container
         | signing, and build analytics and insights for what happens
         | inside the Docker build into Depot. There are some
         | interesting things we can do having control of the execution
         | environment.
        
       | MuffinFlavored wrote:
       | Congrats on the launch!
       | 
       | Sorry to ask this silly question, but since your team is an
       | expert in the "fast Docker images" area, could somebody avoid the
       | traditional `docker build` with say NixOS or Bazel and achieve
       | the same results as Depot (aka, fast building with the output
       | being an OCI/Docker image)? Is that what Depot is doing at a high
       | level? Was this considered?
       | 
       | > Our CLI is wrapping the Buildx library
       | 
       | I'm surprised you're able to build Docker images faster than
       | Docker using their code/libraries?
        
         | jacobwg wrote:
         | > could somebody avoid the traditional `docker build` with say
         | NixOS or Bazel
         | 
         | Yes! You can think of an OCI image as a special kind of
         | tarball, so things like NixOS and Bazel are able to construct
         | that same tarball, potentially fairly quick if it just has to
         | copy prebuilt artifacts from the store.
         | 
         | Today we're running BuildKit, so we support all the typical
         | Docker things as well as other systems that use BuildKit, e.g.
         | Dagger, and I believe there are nix frontends for BuildKit. In
         | that sense, we can be an accelerated compute provider for
         | anything compatible with BuildKit.
         | 
         | > build Docker images faster than Docker
         | 
         | Today the trick is in the hosting and orchestration. We're
         | using fast machines, launching Graviton instances for Arm
         | builds (no emulation) or multiple machines for multi-platform
         | build requests, orchestrating persistent volumes, etc. It's
         | more advanced than what hosted CI providers give you today, and
         | closer to something you'd need to script together yourself with
         | your own runners. There's also some Docker build features (e.g.
         | cache mounts) that _only_ work with a persistent disk.
        
         | jlhawn wrote:
         | > I'm surprised you're able to build Docker images faster than
         | Docker using their code/libraries?
         | 
         | It's not a code/library problem. Knowing what Buildkit options
         | to use is the easy part. It's almost entirely a storage
         | infrastructure and networking problem as it has huge
         | implications on whether or not you'll be able to easily cache
         | build layers.
        
           | MuffinFlavored wrote:
           | > It's almost entirely a storage infrastructure and
           | networking problem as it has huge implications on whether or
           | not you'll be able to easily cache build layers.
           | 
           | On a single machine, would NixOS/Bazel handle this better
           | than Dockerfile/Docker/BuildKit?
        
             | jacobwg wrote:
             | Potentially - if Nix or Bazel has already built the
             | binaries, and just needs to construct an OCI-compliant
             | image tarball with them, that can be quite quick, similar
             | to a Dockerfile with only COPY instructions. Nix and Bazel
             | can also give you deterministic / reproducible images that
             | take more effort to construct with Dockerfiles.
             | 
             | I've also seen people use Nix or Bazel inside their
             | Dockerfile, like ultimately the build has to execute
             | somewhere, be that inside or outside a Dockerfile.
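             | 
             | A minimal sketch of the "assembly-only" case (the path
             | is illustrative, e.g. a Nix result symlink):
             | 
             |     # nothing to compile here, only layers to assemble
             |     FROM scratch
             |     COPY ./result/bin/server /server
             |     ENTRYPOINT ["/server"]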
        
               | lewo wrote:
               | FYI, the nix2container [1] project (author here) aims
               | to speed up the standard Nix container workflow
               | (dockerTools.buildImage) by basically skipping the
               | tarball step: it directly streams the layers that
               | haven't already been pushed.
               | 
               | [1] https://github.com/nlewo/nix2container
        
       | mxcrbn wrote:
       | Thanks guys for building Depot, it makes things easier and
       | faster for us (AWS Graviton users).
        
       | rubenfiszel wrote:
       | We've been very happy customers at
       | https://github.com/windmill-labs/windmill, all of our Docker
       | builds are on Depot and it replaced our fleet of GitHub
       | runners on Hetzner :)
        
         | kylegalbraith wrote:
         | Thanks for the very kind words, we're super excited to be
         | working with you all at Windmill!
        
       | amitizle wrote:
       | Congrats on the launch.
       | 
       | We've been using Depot for the past two months, and without
       | changing anything the builds became faster (compared to our CI).
       | 
       | Good luck Depot team and keep up the good work!
        
       | chasemgray wrote:
       | Been using this for several months now and it's helped improve
       | our build pipelines significantly! We are planning to integrate
       | it with more of our tooling soon.
        
       | moabuaboud wrote:
       | We, at https://github.com/activepieces/activepieces, have been
       | using Depot.
       | 
       | Before Depot, we faced extremely slow builds for ARM-based
       | images on GitHub machines. Using Depot helped us reduce the
       | image build time from 2 hours (with an emulator) to just 3
       | minutes.
       | 
       | Emulator:
       | https://github.com/activepieces/activepieces/actions/runs/39...,
       | 
       | Depot:
       | https://github.com/activepieces/activepieces/actions/runs/40....
        
         | hummus_bae wrote:
         | [dead]
        
       ___________________________________________________________________
       (page generated 2023-02-22 23:00 UTC)