[HN Gopher] Launch HN: Seed (YC W21) - A Fully-Managed CI/CD Pip...
       ___________________________________________________________________
        
       Launch HN: Seed (YC W21) - A Fully-Managed CI/CD Pipeline for
       Serverless
        
        Hi HN, we are Jay and Frank from Seed (https://seed.run). We've
        built a service that makes it easy to manage a CI/CD pipeline
        for serverless apps on AWS. There are no build scripts, and our
        custom deployment infrastructure can speed up your deployments
        almost 100x by incrementally deploying your services and Lambda
        functions.

        For some background, serverless is an execution model where you
        send a cloud provider (AWS in this case) a piece of code, called
        an AWS Lambda function. The cloud provider is responsible for
        executing it and scaling it to meet traffic demands, and you are
        billed for the exact number of milliseconds of execution.

        Back in 2016 we were really excited to discover serverless and
        the idea that you could just focus on your code. So we wrote a
        guide to show people how to build full-stack serverless
        applications -- https://serverless-stack.com. But once we
        started using serverless internally, we started hitting all the
        operational issues that come with it.

        Serverless Framework apps are typically made up of multiple
        services (20-40), where each service might have 10-20 Lambda
        functions. To deploy a service, you need to package each Lambda
        function (generate a zip of the source). This can take 3-5
        minutes, so the entire app might take over 45 minutes to deploy!

        To fix this, people write scripts to deploy services
        concurrently. But some services might need to be deployed after
        others, or in a specific order. And if a large number of
        services are deployed concurrently, you tend to run into
        rate-limit errors (at least in the AWS case), meaning your
        scripts need to handle retries. Your services might also be
        deployed to multiple environments in different AWS accounts or
        regions. It gets complicated! Managing a CI/CD pipeline for
        these apps can be difficult, and the build scripts can get
        large and hard to maintain.

        We spoke to folks in the community who were using serverless in
        production and found that this was a common issue, so we decided
        to fix it. We've built a fully-managed CI/CD pipeline
        specifically for Serverless Framework and CDK apps on AWS. We
        support deploying to multiple environments and regions, using
        the most common git workflows. There's no need for a build
        script. You connect your git repo, point to the services, add
        your environments, and specify the order in which you want your
        services to be deployed. And Seed does the rest: it'll deploy
        all your services concurrently and reliably (handling any
        retries), and it'll also reliably remove the services when a
        branch is removed or a PR is closed.

        Recently we launched incremental deploys, which can really
        speed up deployments. We do this by checking which services
        have been updated, and which of the Lambda functions in those
        services need to be deployed. We internally store the checksums
        for the Lambda function packages and run these checks
        concurrently, then deploy only the Lambda functions that have
        been updated. We've also optimized the way the dependencies
        (node_modules) in your apps are cached and installed: we
        download and restore them asynchronously, so they don't block
        the build steps.
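
        The gist of the incremental check is simple. Here's a rough
        sketch in Node (illustrative only, not our actual
        implementation; the checksum file and names are made up):

            // Skip any Lambda package whose checksum matches the one
            // recorded for the last successful deploy of this stage.
            import { createHash } from "crypto";
            import { existsSync, readFileSync } from "fs";

            // Hypothetical store of checksums from the previous
            // deploy, keyed by function name.
            const prevFile = ".previous-checksums.json";
            const previous: Record<string, string> = existsSync(prevFile)
              ? JSON.parse(readFileSync(prevFile, "utf8"))
              : {};

            const checksum = (zipPath: string) =>
              createHash("sha256")
                .update(readFileSync(zipPath))
                .digest("hex");

            // `packages` maps function name -> path of its zip artifact.
            function functionsToDeploy(packages: Record<string, string>) {
              return Object.entries(packages)
                .filter(([name, zip]) => previous[name] !== checksum(zip))
                .map(([name]) => name);
            }
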
        Since our launch in 2017, hundreds of teams have come to rely
        on Seed every day to deploy their serverless apps. Our pricing
        plans are based on the number of build minutes you use, and we
        don't charge extra for concurrent builds. We also have a great
        free tier -- https://seed.run/pricing

        Thank you for reading about us. We would love to hear what you
        think and how we can improve Seed, or serverless in general!
        
       Author : jayair
       Score  : 104 points
       Date   : 2021-01-19 16:44 UTC (6 hours ago)
        
       | seanemmer wrote:
       | Do you plan on supporting Google Cloud Functions?
        
         | jayair wrote:
         | It's definitely on our roadmap, a little bit further down the
         | road.
         | 
         | But I'd love to connect and learn more about the specifics of
         | Google Cloud.
         | 
         | jay@seed.run
        
       | davmar wrote:
       | I use seed.run and it is absolutely outstanding. The UI is
       | incredibly easy to use and I have so much more confidence in my
       | deployments.
       | 
       | These guys have done an outstanding job, definitely take a look.
       | It's an indispensable tool.
        
         | jayair wrote:
         | Wow thank you! Really appreciate your support!
        
           | davmar wrote:
           | But honestly, thank you. What you built makes my life better.
        
       | dbnoch wrote:
       | Any plans for supporting BAA/HIPAA companies? Congrats on the
       | launch!
        
         | jayair wrote:
         | Yup it is on our roadmap. Feel free to get in touch if you'd
         | like to talk further jay@seed.run
        
       | dataminded wrote:
       | Nice. Very real problem. I'm excited to check this out.
        
         | jayair wrote:
         | Awesome, would love to hear what you think once you get a
         | chance to take a look. jay@seed.run
        
       | jedberg wrote:
        | This looks really great! But how do you differ from/complement
       | what serverless.com offers?
        
         | jayair wrote:
         | Thank you! They offer something similar but there are a couple
         | of differences.
         | 
         | - The big one is the focus on speed, the incremental deploys at
         | a service and Lambda function level
         | (https://seed.run/blog/speeding-up-serverless-
         | deployments-100...).
         | 
          | - We are also focused on reliability: setting up and tearing
          | down environments while handling all the AWS rate-limit
          | errors and other timing-related errors. We do this by
          | connecting directly to CloudFormation.
         | 
         | - We also allow you to configure a deployment order for your
         | services (https://seed.run/docs/configuring-deploy-phases).
         | 
          | On the alerts, logs, and metrics side, the critical
          | difference is that we query directly against CloudWatch Logs
          | Insights or subscribe to your CloudWatch log groups, instead
          | of ingesting all your logs on our side (rough sketch below).
          | This allows us to:
         | 
         | - Provide real-time Lambda alerts basically for free
         | (https://seed.run/docs/issues-and-alerts)
         | 
         | - And you don't need to configure anything on your side. You
         | connect your AWS credentials and it works out of the box.
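          | 
          | The querying approach mentioned above is roughly this (a
          | simplified sketch, not our actual query):
          | 
          |     // Illustrative only: run a Logs Insights query against
          |     // a Lambda log group in the customer's account, rather
          |     // than ingesting the logs on our side.
          |     import { CloudWatchLogs } from "aws-sdk";
          | 
          |     const logs = new CloudWatchLogs();
          | 
          |     async function recentErrors(logGroupName: string) {
          |       const now = Math.floor(Date.now() / 1000);
          |       const { queryId } = await logs
          |         .startQuery({
          |           logGroupName,
          |           startTime: now - 3600,
          |           endTime: now,
          |           queryString:
          |             "filter @message like /ERROR/" +
          |             " | sort @timestamp desc | limit 20",
          |         })
          |         .promise();
          | 
          |       // Insights queries are async; poll for the results.
          |       return logs
          |         .getQueryResults({ queryId: queryId! })
          |         .promise();
          |     }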
         | 
         | As always feel free to reach out if you need further details
         | jay@seed.run
        
           | jedberg wrote:
           | Awesome, thanks for the info! Do you have more info on what
           | you mean by "We do this by connecting directly to
           | CloudFormation"?
        
             | jayair wrote:
              | Yeah for sure. Previously, we relied on the Serverless
             | Framework CLI output. But now we directly monitor the
             | CloudFormation events to figure out the root cause of the
             | failure, then decide if we should retry the deployment, and
             | how long to wait to retry.
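              | 
              | Conceptually it's something like this (a simplified
              | sketch, not our actual code):
              | 
              |     // Classify the most recent CloudFormation failure;
              |     // throttling-style errors are worth retrying.
              |     import { CloudFormation } from "aws-sdk";
              | 
              |     const cfn = new CloudFormation();
              | 
              |     async function shouldRetry(stackName: string) {
              |       const { StackEvents = [] } = await cfn
              |         .describeStackEvents({ StackName: stackName })
              |         .promise();
              | 
              |       const failures = StackEvents.filter((e) =>
              |         (e.ResourceStatus || "").endsWith("FAILED")
              |       );
              | 
              |       // Rate limits are transient, so retry after a
              |       // back-off; most other failures are real errors.
              |       return failures.some((e) =>
              |         /Rate exceeded|Throttling/i.test(
              |           e.ResourceStatusReason || ""
              |         )
              |       );
              |     }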
        
       | gazzini wrote:
       | I loved reading serverless-stack a couple of years ago; it was
       | really helpful & convinced me to use serverless for a side-
       | project that's still going (with almost no expenses!).
       | 
       | I'm surprised to hear how many separate lambda functions each
       | service in your example had. I understand the need to deploy each
        | service independently... but to have 10+ deployments within each
       | service seems crazy to me. Is there a reason each service needs
       | so many lambdas (vs deploying the service code as a single lambda
       | function with different branches)?
       | 
       | Fwiw, I found it possible to get quite far with a single
       | monolithic lambda function that defined multiple "routes" within
       | it, similar to how an Express server would define routes &
       | middleware.
       | 
       | Anyways, thanks for writing that PDF, and good luck with Seed!
        
         | jayair wrote:
         | Thank you for the kind words about Serverless Stack. Frank and
         | I poured ourselves into creating it. So it makes me really
         | happy when I hear that it ended up being helpful.
         | 
          | On the Lambdas-per-service front, the Express server inside a
          | Lambda function does work. But a lot of our customers (and
          | Seed itself) have APIs that need lower response times, and
          | individually packaging the functions using Webpack or esbuild
          | ends up being the best way to do that. So you'll split each
          | endpoint into its own Lambda.
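          | 
          | To make the two patterns concrete, here are simplified
          | sketches (the routes are just made up):
          | 
          |     // app.ts -- the "monolithic" pattern: one Lambda serves
          |     // every route, with serverless-http wrapping Express.
          |     import express from "express";
          |     import serverless from "serverless-http";
          | 
          |     const app = express();
          |     app.get("/notes", (_req, res) => res.json([{ id: "1" }]));
          |     app.post("/notes", (_req, res) => res.status(201).end());
          | 
          |     export const handler = serverless(app);
          | 
          |     // list.ts -- the per-endpoint pattern: each route is its
          |     // own handler, bundled on its own by Webpack/esbuild, so
          |     // a cold start only loads what this one route needs.
          |     import { APIGatewayProxyHandler } from "aws-lambda";
          | 
          |     export const main: APIGatewayProxyHandler = async () => ({
          |       statusCode: 200,
          |       body: JSON.stringify([{ id: "1" }]),
          |     });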
         | 
         | I just think the build systems shouldn't limit the
         | architectural choices.
        
         | anfrank wrote:
         | Frank here from Seed. Just wanted to add that when you have a
          | monolithic Lambda, multiple routes share a CloudWatch log
          | group and metrics, and share a common node in X-Ray. On the
          | flip side, having separate Lambda functions handle each route
          | lets you leverage other AWS services better.
        
         | erikerikson wrote:
         | One problem with monolithic functions is that you must grant
         | them a union of all the rights required by every code branch in
         | the monolith.
         | 
         | Obviously this can expand the blast radius of any vulnerability
          | and tends to encourage coarser-grained privilege grants.
        
           | f6v wrote:
           | This is getting out of hand. Are there "monolithic" and
           | "micro" functions now?
        
             | jayair wrote:
             | That made me chuckle. But to be fair, in this case
             | "monolithic" function is just a way to describe this
             | pattern of moving your entire app (express in this case),
             | inside a Lambda function. When Lambda started to become
             | popular, this was the most common way to migrate to it.
             | Just move your monolithic app to a function, hence
             | "monolithic" functions.
        
       | f6v wrote:
        | I built my first Lambda 4 years ago and it was great: no servers,
        | no complicated tools. Just one function that I upload and it
        | works. The amount of tooling that exists now is just daunting.
        | At this point, is it still worth it if the technology is so
        | complex that people are building whole SaaS products to manage it?
       | 
       | PS YC is still bullish on selling shovels I see.
        
         | jayair wrote:
         | I think that's fair. When we started back in 2016 with Lambda,
         | it was similar to how you describe it.
         | 
         | Now we've got a ton of companies that just use Lambda. So you
          | can imagine a team of 50 developers, working on 40 or so
          | separate services, with 500 or so Lambda functions. It can be
          | hard to manage the tooling for all of this internally.
        
       | mavbo wrote:
       | This looks great! I've been using Serverless Framework for a
       | project and have not been too satisfied with the experience.
       | Could you explain the integration with that framework a little
       | more? I see the two options for services with Seed are the
       | Serverless Framework or Serverless Stack (which I have no
       | experience with, but looks like a compelling alternative). Is
       | Seed just compatible with existing Serverless Framework yml
       | configurations, or does it integrate with your Serverless
       | Framework account somehow? I see you offer an integration with
       | Serverless Pro, which confused me as this appeared (to me) to be
       | a full replacement for Serverless Framework.
        
         | jayair wrote:
          | Yeah, so if you have a Serverless Framework (the open source
          | project) app in a git repo, you can add that to Seed, and
          | it'll deploy it for you to the environments you configure on
          | Seed.
         | 
         | It doesn't connect to your Serverless Pro (their SaaS offering)
         | account. Serverless Pro offers some similar features to Seed
         | but most of our users just use Seed.
         | 
         | If you want to deploy using Seed, while viewing logs or metrics
         | on Serverless Pro, you'll need to follow those docs you
         | mentioned to create an access key
         | (https://seed.run/docs/integrating-with-serverless-pro). We
         | should clarify the integration in our docs to make it less
         | confusing.
         | 
         | I hope that makes sense!
        
         | garethmcc wrote:
         | I am curious what made you unsatisfied. As a member of the
         | Serverless team I'd love to hear the feedback so we can
         | potentially improve the experience for you and others.
        
         | jayair wrote:
         | Just made a quick edit to that doc, I hope it helps:
         | 
         | https://github.com/seed-run/homepage/commit/e5fdd3fb41fedb2b...
        
       | _0o6v wrote:
       | Well done, and thanks for Serverless Stack! Awesome tutorial!
       | 
       | I completed it and it was excellent, and a lot of fun.
       | 
       | The only thing I would say is that a section on public user
       | uploads would be amazing (e.g. avatars) as the perms and CDK
       | stuff is a bit knotty for that (I eventually figured it out but
       | it took a bit of trial and error).
        
         | jayair wrote:
         | Thank you for the kind words!
         | 
          | That's a good point on the avatars idea. We'll need to create
          | a version of the notes app that has a public aspect to it,
          | maybe the ability to publish notes.
        
       | becausepc wrote:
       | Weird to see "folks" everywhere in the text. Why not "people"? Or
       | "developers"?
       | 
       | I'm sure it's not on purpose, but it made me think about Social
       | Justice language war: https://newdiscourses.com/tftw-folks/
       | 
       | (I even had to switch to my anonymous HN account to post this.)
       | 
       | I would prefer to think about this (very cool!) startup instead.
        
         | jayair wrote:
         | Honestly, it's a weird quirk I developed as I started writing
         | more publicly. Hadn't thought too much of it!
        
         | dang wrote:
         | Just in case anyone's wondering, I made some fine-grained edits
         | to the text above after it was posted, and that included some
         | de-folkification. This was before I saw your comment.
         | 
         | Since I'm now going to get asked what the hell I'm doing
         | mucking with people's text:
         | 
         | I help YC startups with their Launch HN blurbs. Mainly I coach
         | them to take out anything that sounds like marketing or PR, and
         | to add things that the community tends to find interesting.
         | Usually we'll agree on a final draft by email, but sometimes we
         | skip the fine-tuning step, and in that case I sometimes do it
         | live, because I'm a compulsive editor. Part of the intention is
         | to sand off sharp edges that might get things snagged in
         | offtopicness, so I'm glad to see your comment as a sort of
         | natural experiment demonstrating that this is useful :)
         | 
         | By the way, I'm happy to help anyone else with this too. That
         | is, if any of you want to present your startup or some other
         | major piece of work to HN, in the style of this post and
         | https://news.ycombinator.com/launches, you can email a draft to
         | hn@ycombinator.com and I'll try to look it over and give you
         | feedback. The only catch is that I can't always reply quickly,
         | and my worst case latency is abominable because the HN inbox
         | undergoes periodic overwhelm. Still, it does mostly work. If
         | you want to do this, you can look at the advice I give YC
         | startups here: https://news.ycombinator.com/yli.html. The
         | logistical aspects only apply to YC startups, but the
         | communication aspects are more important and they are
         | universal.
        
           | becausepc wrote:
            | Thank you dang! I appreciate your service, even after having
            | my comment above flagged :)
           | 
           | Is there a way to let you know about similar sharp edges, in
           | order to avoid writing offtopic comments like mine?
        
             | [deleted]
        
           | [deleted]
        
         | jack_riminton wrote:
          | Come out of the rabbit hole! Most people are not SJWs.
        
           | becausepc wrote:
           | > most people are not SJW
           | 
            | This is true. However, a few active people are enough to
            | poison the discussion. Examples:
           | 
           | * erasure of gendered language from source code comments
           | 
           | * "master" branch controversy
           | 
            | * BLM banners in numerous open source projects
           | 
           | * Python PEP8 English controversy: https://github.com/python/
           | peps/pull/1470/commits/89b72cf7261...
        
         | vincentmarle wrote:
         | I noticed this too, I personally prefer the usage of _folx_
        
           | becausepc wrote:
           | At least "folks" is a real word.
           | 
           | "Folx" is unquestionably the result of the language war:
           | https://newdiscourses.com/tftw-folx/
        
             | [deleted]
        
               | becausepc wrote:
               | > it literally costs you nothing
               | 
                | There is a cost. It's hard enough to communicate as it is,
               | even without weird unpronounceable terms certain
               | academics come up with.
               | 
               | > language that is inclusive
               | 
               | Unfortunately, "inclusive" is among those weird terms,
               | now with double, almost opposite, meaning:
               | 
               | https://newdiscourses.com/tftw-inclusion/
        
               | [deleted]
        
             | vincentmarle wrote:
              | What's wrong with trying to include other marginalized
              | groups, including people of color and trans people?
        
       | PaywallBuster wrote:
       | tbh, didn't run into this problem yet.
       | 
        | Half of my project is being developed as serverless
        | microservices that complement the big monolith application.
        | 
        | I've basically implemented a "monorepo CI/CD" which mostly works
        | fine for our needs (with some limitations/bugs in GitLab CI due
        | to the monorepo design).
        | 
        | For the most part we probably don't get that many functions
        | bundled together, thus avoiding the deployment limitations
        | referred to above.
        | 
        | Only one serverless app is reaching any kind of limit (200
        | resources per CloudFormation template, if I remember correctly).
       | 
       | https://pedrogomes.medium.com/gitlab-ci-cd-serverless-monore...
        
         | jayair wrote:
         | Yeah that makes sense. That's basically how Seed started.
         | Thanks for sharing.
         | 
          | What we started noticing with the teams we were talking to
          | (and from our own experience) was that the build process
          | started limiting our architecture choices. For example, we
          | want functions packaged individually because it reduces cold
          | starts. But because the builds took so long, we had to make a
          | trade-off. And that didn't make sense to us.
        
       | freeqaz wrote:
       | Do you have any plans to open source this?
       | 
       | I'm thinking about lock-in -- what if you suddenly deprecated the
       | product? Will my deploys suddenly break?
       | 
       | Are you planning to maintain 1:1 feature parity with
       | Serverless/CDK long-term? Could I fall back to those deployment
       | tools, albeit slower, worst case?
       | 
       | Either way, this is awesome and congrats on the launch!
        
         | jayair wrote:
         | Yeah we've definitely talked about open sourcing this and it is
         | a long term goal of ours. I think if we were starting over, we
         | would've open sourced it right from the beginning.
         | 
         | > Could I fall back to those deployment tools, albeit slower,
         | worst case?
         | 
         | Yup, that's how we've designed Seed. We deploy it on your
         | behalf. So if we were to go down, you could still deploy your
         | app just as before.
        
       | jayair wrote:
       | Just to add, if you have any questions about Seed or need some
       | help with your serverless apps, send me an email: jay@seed.run or
       | just put something on my calendar: https://calendly.com/jayair
        
       | zackmorris wrote:
       | I wish there was something like this for Docker rather than
       | Lambda functions.
       | 
       | I'm new to all of it, but the security groups, route tables,
       | internet gateways and other implementation details of AWS left me
       | feeling overwhelmed and insecure (literally, because roles and
       | permissions are nearly impossible for humans to reason about).
       | AWS also suffers from the syndrome of: if you want to use some of
       | it, you have to learn all of it.
       | 
       | Basically what I need is a sandbox for running Docker containers
       | with any reasonable scale (under 100? what's big these days?).
       | Then I just want to be able to expose incoming port 443 and one
       | or two others for a WebSocket or an SSL port so admins can get to
       | the database and filesystem (maybe). Why is something so
       | conceptually trivial not offered by more hosting providers?
       | 
       | I researched Heroku a bit but am not really sure what I'm looking
       | at without actually doing the steps. I'm also not entirely
       | certain why CI/CD has been made so complicated. I mean
       | conceptually it's:
       | 
       | 1) Run a web hook to watch for changes at GitHub and elsewhere
       | 
       | 2) Optionally run a bunch of unit tests and if they pass, go to
       | step 3
       | 
       | 3) Run a command like "docker-compose --some-option-to-make-this-
       | happen-remotely up"
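        | 
        | i.e. conceptually something like this toy version (obviously
        | missing the auth, queuing, isolation, logging, and the "remote"
        | part, which is where all the real work hides):
        | 
        |     // Toy webhook-driven pipeline: pull, test, redeploy.
        |     import express from "express";
        |     import { execSync } from "child_process";
        | 
        |     const app = express();
        |     app.use(express.json());
        | 
        |     // 1) Webhook that GitHub hits on every push
        |     app.post("/webhook", (_req, res) => {
        |       res.sendStatus(202);
        |       try {
        |         // 2) Run the tests
        |         execSync("git pull && npm test", { stdio: "inherit" });
        |         // 3) Rebuild and restart the containers (locally here)
        |         execSync("docker-compose up -d --build", {
        |           stdio: "inherit",
        |         });
        |       } catch (err) {
        |         console.error("pipeline failed", err);
        |       }
        |     });
        | 
        |     app.listen(9000);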
       | 
        | So why is a 3 step thing a 3000 step thing? Full disclosure, I did
       | the 3000 steps with Terraform and while I learned a lot from the
       | experience, I can't say that I see the point of most of it. I
       | would not recommend the bare-hands way on any cloud provider to
       | anyone, ever (unless they're a big company or something).
       | 
       | I guess what I'm asking is, could you adapt what you've done here
       | to work with other AWS services like ECS? It's all of the same
       | configuration and monitoring stuff. I've already hit several bugs
       | in ECS where you have to manually run docker prune and other
       | commands in the EC2 instance because the lifetimes are in hours
       | and they haven't finished the rough edges around their cleanup
       | commands. So I've hit problems where even though I've spun down
       | the cluster, the new one won't spin up because it says the Nginx
       | container is still using the port. I can't tell you how
       | infuriating it is to have to work around issues like that which
       | ECS was supposed to handle in the first place. And I've hit
       | similar gotchas on the other AWS services too, to the point where
       | I'm having trouble seeing the value in what they're offering, or
       | even understanding why a service exists in the first place, when
       | I might have done it a different way if I was designing it.
       | 
       | TL;DR: if you could make deploying Docker as "easy" as Lambda,
       | you'd quickly run out of places to store the money.
        
         | pongogogo wrote:
         | Have you tried cloud run on GCP? It sits in the niche you're
         | describing between a serverless platform and some managed
         | container orchestration platform like kubernetes (GKE or EKS).
        
         | jayair wrote:
         | Yeah I feel your pain in regards to AWS. It was a big reason
         | why we wrote https://serverless-stack.com.
         | 
         | We run some ECS clusters internally and have run into some of
         | the issues you mentioned. We use Seed to deploy them but the
         | speed and reliability bit that I talked about in the post
         | mainly applies to Lambda. So Seed can do the CI/CD part but it
         | can't really help with the issues you mentioned.
         | 
         | Btw, have you tried Fargate?
        
         | leetrout wrote:
         | > docker-compose --some-option-to-make-this-happen-remotely
         | 
         | Some of this exists- you can do remote operations like that
         | with contexts but that doesn't solve the infrastructure issue.
         | 
         | Custom docker images on heroku is closer...
        
         | safeerm wrote:
          | Hey Zack, we have a prototype of this that we'd love to have
          | you (and anyone else) try out. We just helped a couple of
          | customers migrate their Docker code repos from DigitalOcean to
          | AWS and save $2K a month with our template. It gives you a
          | CI/CD pipeline and deploys on ECS/Fargate.
         | 
         | Please reach out safeer [at] tinystacks.com
        
         | colinchartier wrote:
         | We're building something like what you describe (YC S20) -
         | https://layerci.com - it's similar to OP but meant for standard
         | containers instead of serverless.
         | 
         | TL;DR:
         | 
         | 1. Install on GitHub
         | https://github.com/apps/layerci/installations/new
         | 
         | 2. Create files called 'Layerfile' to configure the pipeline
         | 
         | Docker Compose example for step 3:
         | https://layerci.com/docs/examples/docker-compose
         | 
         | Then just point it at a docker swarm cluster or run the
         | standard docker/ecs integration:
         | https://docs.docker.com/cloud/ecs-integration/
        
       | simoncrypta wrote:
       | Thank you for making serverless easy and accessible! I really
       | enjoy using Seed for some of my projects.
        
         | jayair wrote:
         | I really appreciate the kind words and support!
        
       | whalesalad wrote:
        | I have achieved this with AWS CloudFormation/SAM, a template.yml
        | and a makefile. Polyglot too, with a mix of Python and JS
        | backends across multiple functions.
        | 
        | I'm trying to think of how a service would help me here.
        | However, I do think this is a frontier space where there is a
        | lot of room for improvement. It looks polished though; I'll take
        | it for a spin on a hobby project soon.
        
         | jayair wrote:
         | Yeah makes sense. Adding SAM support is on our roadmap.
         | 
         | Looking forward to hearing your feedback when you give it a
         | try! I should've clarified in the post, we support all the
         | runtimes, not just Node.
        
           | leetrout wrote:
           | Do you have any rough plans for how you would support SAM?
           | Would you be transforming the YAML in some proprietary way or
            | just calling out to CloudFormation on the user's behalf?
        
             | jayair wrote:
              | Yeah, at our core we do CloudFormation deployments,
              | whether that's through Serverless Framework or CDK (using
              | SST https://github.com/serverless-stack/serverless-stack).
              | So in the case of SAM it would be similar: deploying the
              | CF stack on the user's behalf. The deployment process
              | roughly looks like: install dependencies > package
              | functions > generate CF stack > deploy it > monitor
              | progress. We do some optimizations along those steps but
              | that's the gist of how it works.
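              | 
              | Per service the steps are roughly this (an illustrative
              | sketch, not our actual pipeline code):
              | 
              |     import { execSync } from "child_process";
              | 
              |     const run = (cmd: string, cwd: string) =>
              |       execSync(cmd, { cwd, stdio: "inherit" });
              | 
              |     function deployService(dir: string, stage: string) {
              |       // install dependencies
              |       run("npm ci", dir);
              |       // package functions + generate the CF template
              |       run(`npx serverless package --stage ${stage}`, dir);
              |       // deploy the stack
              |       run(`npx serverless deploy --stage ${stage}`, dir);
              |       // progress/failures are then monitored through the
              |       // CloudFormation stack events, as mentioned above
              |     }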
             | 
             | Hope that helps. Feel free to get in touch if you want to
             | know more jay@seed.run
        
       | AlphaWeaver wrote:
       | It's been a while since I touched anything serverless, but it
       | looks like Seed supports incremental deployments, which was a
       | major pain point when I last worked with the Serverless Framework
       | (an open source library for deploying Lambdas, one of the first
       | ones.) Nice job team!
        
         | jayair wrote:
         | Thank you! We do these checks on the service level
         | (https://seed.run/docs/incremental-service-deploys) and the
         | Lambda level too (https://seed.run/docs/incremental-lambda-
         | deploys).
        
       | abd12 wrote:
       | Wow! Congrats to you, Jay and Frank. I've been a fan of your work
       | on both Seed.run & Serverless Stack for a while. Best of luck,
       | and I'm excited to see Seed grow :)
        
         | jayair wrote:
         | Thank you! I really appreciate the support!
        
       | astuyvenberg wrote:
       | Congrats on the launch!
        
         | jayair wrote:
         | Thank you!
        
       | davecap1 wrote:
       | How does this compare to something like AWS CodePipeline with CDK
       | (https://docs.aws.amazon.com/cdk/latest/guide/cdk_pipeline.ht...)
       | ?
        
         | jayair wrote:
         | Most of my post was about Serverless Framework but we support
         | CDK as well (with SST https://github.com/serverless-
         | stack/serverless-stack).
         | 
         | A couple of things that we do for CDK that's different from
         | CodePipeline:
         | 
         | - Setting up environments is really easy, we support PR and
         | branch based workflows out of the box.
         | 
         | - We automatically cache dependencies to speed up builds.
         | 
         | - And we internally use Lambda to deploy CDK apps, which means
         | it's basically free on Seed (https://seed.run/docs/adding-a-
         | cdk-app#pricing-limits)!
        
       | jack_riminton wrote:
       | Looks great. For someone who's not taken the plunge into
       | Serverless yet, how would the costs compare to the more
       | traditional options of hosting an app? i.e. a Rails/React app on
       | Heroku
       | 
       | Of course 'it depends', but roughly speaking?
        
         | jayair wrote:
          | Yeah, it does depend. But the numbers that get touted are
          | around 70-80%.
          | 
          | But here are the caveats: if your usage patterns are 24/7 and
          | very predictable, you can design your infrastructure to be
          | cheaper than Lambda.
          | 
          | However, for most other cases, including us at Seed (we use
          | serverless extensively), it's so much cheaper that we wouldn't
          | do it any other way.
         | 
         | If you have a hobby project, it'll be in the free tier.
         | 
         | Some more details here -- https://serverless-
         | stack.com/chapters/why-create-serverless-...
        
           | jack_riminton wrote:
           | Great reply, thanks will give it a go once I learn how!
        
         | jayair wrote:
         | Oh I'll add, Seed is heavily influenced by Heroku. It's a
         | little like Heroku but for Serverless.
        
           | loosescrews wrote:
           | Isn't Heroku serverless? It is a PaaS offering similar to
           | Lambda and Google's various PaaS offerings that generally get
           | branded as serverless.
        
             | jayair wrote:
             | I should clarify, when I mentioned serverless, I really
             | meant serverless on AWS.
             | 
              | Broadly speaking, PaaS is similar to serverless. The main
              | things I look for as a user are per-millisecond billing,
              | and the ability to scale up instantly and scale all the
              | way down to zero.
        
       | girfan wrote:
       | Cool product! Any plans to support Azure Functions?
        
         | jayair wrote:
         | Thanks! We do but it's a bit further down the roadmap.
        
       ___________________________________________________________________
       (page generated 2021-01-19 23:00 UTC)