[HN Gopher] Show HN: Run globally distributed full-stack apps on...
___________________________________________________________________
Show HN: Run globally distributed full-stack apps on
high-performance MicroVMs

Hi HN! We're Yann, Edouard, and Bastien from Koyeb
(https://www.koyeb.com/). We're building a platform to let you deploy
full-stack apps on high-performance hardware around the world, with
zero configuration. We provide a "global serverless feeling" without
the hassle of rewriting all your apps or managing k8s complexity [1].

We built Scaleway, a cloud service provider where we designed ARM
servers and offered them as cloud servers. During our time there, we
saw customers struggle with the same issues while trying to deploy
full-stack applications and APIs resiliently. As it turns out,
deploying applications and managing networking across a
multi-data-center fleet of machines (virtual or physical) requires an
overwhelming amount of orchestration and configuration. At the time,
that complexity meant multi-region deployments were simply out of
reach for most businesses.

When thinking about how to solve those problems, we tried several
approaches. We briefly explored offering a FaaS experience [2], but
early user feedback made us reconsider whether it was the right
abstraction. In most cases, functions simply added complexity and
required learning to engineer with provider-specific primitives. In
many ways, developing with functions felt like abandoning all the
benefits of frameworks.

Another popular option these days is Kubernetes. From an engineering
perspective, Kubernetes is extremely powerful, but it also involves
massive amounts of overhead. Building software, managing networking,
and deploying across regions means integrating many different
components and maintaining them over time.
It can be tough to justify that level of effort and investment rather
than working on building out your product. We believe you should be
able to write your apps and run them without modification, with
simple scaling, global distribution transparently managed by the
provider, and no infrastructure or orchestration management.

Koyeb is a cloud platform where you come with a git repository or a
Docker image, we build the code into a container (when needed), run
the container inside Firecracker microVMs, and deploy it to multiple
regions on top of bare-metal servers. There is an edge network in
front to accelerate delivery and a global networking layer for
inter-service communication (service mesh/discovery) [3].

We took a few steps to get the Koyeb platform to where it is today:
we built our own serverless engine [4]. We use Nomad and Firecracker
for orchestration, and Kuma for the networking layer. In the last
year, we launched six regions (Washington, DC; San Francisco;
Singapore; Paris; Frankfurt; and Tokyo) and added support for native
workers, gRPC, HTTP/2 [5], WebSockets, and custom health checks. We
are working next on autoscaling, databases, and preview environments.

We're super excited to show you Koyeb today, and we'd love to hear
your thoughts on the platform and what we're building in the
comments. To make getting started easy, we provide $5.50 in free
credits every month so you can run up to two services for free.

P.S. A payment method is required to access the platform to prevent
abuse (we had some hard months last year dealing with that). If you'd
like to try the platform without adding a card, reach out at
support@koyeb.com or @gokoyeb on Twitter.

[1] https://www.koyeb.com/blog/the-true-cost-of-kubernetes-peopl...
[2] https://www.koyeb.com/blog/the-koyeb-serverless-engine-docke...
[3] https://www.koyeb.com/blog/building-a-multi-region-service-m...
[4] https://www.koyeb.com/blog/the-koyeb-serverless-engine-from-...
[5] https://www.koyeb.com/blog/enabling-grpc-and-http2-support-a...

Author : edouardb
Score  : 27 points
Date   : 2023-08-17 10:45 UTC (12 hours ago)

(HTM) web link (www.koyeb.com)
(TXT) w3m dump (www.koyeb.com)

| emschwartz wrote:
| My understanding is that Fly.io also started out using Nomad but
| ended up running into big reliability issues at scale across many
| regions. I'm curious whether you're using it differently or just
| haven't reached that scale yet.
| yann_eu wrote:
| I'd say we don't use it exactly the same way: we don't have a
| single global Nomad cluster, which is a critical difference.
|
| We have one Nomad cluster per region, which we "federated"
| ourselves using our own orchestrator. This reduces latencies
| between agents and each cluster, shrinks the failure domains,
| and avoids encoding all the constraints in a single Nomad job
| definition.
|
| I'm not so worried about scaling with our setup, but the
| performance of the autoscaler might be a concern in the future.
| crabmusket wrote:
| Source: https://fly.io/blog/carving-the-scheduler-out-of-our-
| orchest...
| AntonCTO wrote:
| Congratulations on the launch! It seems very intriguing! I have
| some open questions:
|
| - Where is the company based and what is the jurisdiction?
| Perhaps you forgot to add the imprint :o)
| - Is there a difference between edge and non-edge locations?
| - Can data storage be tied to a location?
| - Is it tied to GitHub, or can it be used with self-hosted
| GitLab?
| - Is there a rough ETA for databases, especially postgres(-like)?
|
| Thanks in advance!
| Palmik wrote:
| This looks quite interesting, congrats on the launch!
|
| Reminiscent of fly.io. Is it a direct competitor, or is there a
| major twist to it?
|
| How do you handle apps composed of multiple services, if there
| isn't a configuration?
|
| The pricing is a bit confusing, by the way: the free tier says
| "16GB of RAM & 16 vCPU per service" while in reality it seems you
| only get the 512MB RAM instance.
| yann_eu wrote:
| Thanks! Hope you'll like it :)
|
| We have similarities with fly.io (Firecracker MicroVMs on top of
| bare metal) and also some key differences:
|
| - we integrate directly with GitHub to automatically build your
| application on push. We support building native code with
| Buildpacks or from a Dockerfile, in addition to pre-built
| containers.
|
| - we put a CDN in front of all your services to provide caching
| and edge TLS termination
|
| - technically, our internal network is a service mesh built with
| Kuma and Envoy
|
| - overall, we aim to sit a bit higher in the stack: instead of
| providing low-level virtual machines, we want to focus on
| productivity features like preview environments
|
| We actually meant zero _infrastructure_ configuration. At this
| stage, there is some basic setup to do for a multi-service app:
| you need to configure the HTTP routes. We aim to add as much
| automatic discovery of the codebase as possible.
|
| Thanks for the feedback on the pricing. $0 is actually the price
| of the plan, and we provide $5.50 of free credit with the plan.
| It seems the "Up to" was somehow dropped from "16GB & 16 vCPU per
| service", which is indeed confusing.
| amerine wrote:
| I'm so excited for your launch. I can't wait to get in and play
| with it.
___________________________________________________________________
(page generated 2023-08-17 23:00 UTC)
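The thread above mentions custom health checks and per-service HTTP
routes. As a rough illustration of what a deployed service typically
exposes for a platform-level check (the port, path, and handler names
here are assumptions for the sketch, not Koyeb's actual API), a plain
HTTP health endpoint is usually all that's needed:

```python
# Minimal HTTP service with a /health endpoint of the kind a
# platform's custom health check would poll. Port 8000 and the
# /health path are illustrative assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any 2xx response on /health marks the instance healthy;
        # every other path serves the application itself.
        body = b"ok" if self.path == "/health" else b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port=8000):
    # Bind 0.0.0.0 so the platform's edge/router can reach the
    # instance from outside the microVM.
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

The platform is then pointed at the `/health` route with a check
interval; the service needs no provider-specific code at all, which
is the "no rewrite" property discussed in the announcement.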