[HN Gopher] Managed Kubernetes Price Comparison
       ___________________________________________________________________
        
       Managed Kubernetes Price Comparison
        
       Author : spalas
       Score  : 92 points
       Date   : 2020-03-07 15:54 UTC (7 hours ago)
        
 (HTM) web link (devopsdirective.com)
 (TXT) w3m dump (devopsdirective.com)
        
       | showerst wrote:
       | Does anyone have experience with OVH's managed k8s offering? I've
       | had good experiences with them in the past on pricing/quality.
        
       | neop1x wrote:
        | Egress costs at those major clouds are ridiculous. Once you start
        | having some traffic, it can easily make up 50% of all costs or
        | more. That is not justifiable! Meanwhile, hardware costs are
        | going down. We need more k8s providers with more reasonable
        | pricing. Unfortunately, neither Digital Ocean nor Oracle Cloud
        | has a proper network load balancer implementation, which is a
        | must for elastic regional clusters: forwarding TCP in a way that
        | preserves the client IP and lets you add nodes without downtime
        | or TCP resets. OVH cloud doesn't implement the LoadBalancer
        | service type at all. So the choice in 2020 is really just Google,
        | Amazon, and Azure with their Rolls-Royce pricing. The cost
        | difference between them is negligible. And then there are
        | confidential free credits for startups. So sad...
        
         | ikiris wrote:
          | You seem to think there's some kind of disconnect between
          | egress pricing and whether the network has a functional
          | backbone with load-balancing infrastructure.
          | 
          | If anything, egress pricing is too _low_ in the vast majority
          | of cases, and it really shows when customers start using it.
        
         | atombender wrote:
         | For HTTP, an external load balancer/CDN such as CloudFlare or
         | Fastly could easily fulfill that role, though you obviously
         | have to jump through a few more hoops to get there, since it's
         | not built in. But with some of them you also get things like
         | TLS termination, DNS hosting, advanced rewriting, caching, and
        | DDoS protection.
        
       | based2 wrote:
       | https://aws.amazon.com/en/blogs/aws/amazon-eks-on-aws-fargat...
        
       | almostdigital wrote:
       | Scaleway gives you a k8s cluster starting from 40 EUR per month
        
       | mcdoker18 wrote:
        | Only Azure doesn't charge for the k8s control plane; that's the
        | most surprising thing for me.
        
         | petilon wrote:
          | We don't know how long that's going to last. Google didn't
          | charge for the control plane either, until recently. So Azure
          | couldn't charge either, in order to stay competitive. Now that
          | Google has started charging, Azure may start too.
        
           | fernandotakai wrote:
           | Azure also doesn't have an SLA, so that's why they don't
           | charge.
           | 
           | Google started charging after they added an SLA.
        
             | dodobirdlord wrote:
              | Makes sense: the point of an SLA is that you agree to pay
              | back the customer's money if you don't meet it. What does
              | it mean to have an SLA for a free product?
        
       | spectramax wrote:
        | Just curious - what's wrong with buying a large instance (24
        | cores) and running it for < 10,000 users? Kubernetes feels like
        | insane complexity that doesn't need to be taken on and managed.
        | You're gonna spend more time managing Kubernetes than writing
        | _actual_ software. Also, it feels like if something goes wrong
        | in prod with your cluster, you're gonna need external help to
        | get back on your feet.
       | 
       | If you're not going to build the next Facebook, why would you
       | need so much complexity?
        
         | the_other_b wrote:
          | > If you're not going to build the next Facebook, why would
          | you need so much complexity?
          | 
          | You don't. I think this is a point people have been trying to
          | make recently. Kubernetes makes sense at a certain scale, but
          | for smaller startups it maybe shouldn't be the go-to.
        
           | spectramax wrote:
            | Can you educate me: at what point does Kubernetes actually
            | make sense? Is there a rule of thumb?
        
             | the_other_b wrote:
              | I don't think I'm in a position to answer this (I have
              | plenty of experience outside the control plane, but not
              | inside it), but if I had to answer I'd say:
             | 
             | Once you don't personally have your hand in every
             | application your company is running (in addition to the
             | points the other comments have brought up).
        
             | p_l wrote:
             | When you need to run more than 1-2 applications, especially
             | in HA, and you're cost-conscious so just throwing tons of
             | machines in autoscaling groups doesn't work for you.
             | 
              | Said applications don't have to be applications that
              | you're writing as part of your startup. It can be an
              | Elasticsearch cluster, Redis, tooling to run some kind of
              | ETL that happens to be able to reuse your k8s cluster for
              | task execution, CI/CD, etc.
        
           | DelightOne wrote:
            | So if Kubernetes is too complex, then Terraform is a no-no
            | too?
            | 
            | I don't find them complex at all. You just tell the tool the
            | state you want to be in and it applies the necessary changes.
            | Server templates. Provisioning. Orchestration. Etc.
        
             | the_other_b wrote:
              | I don't think there's a comparison there (or I'm just
              | unsure of the point you're making with that statement). I
              | agree, they aren't conceptually complex, but Kubernetes is
              | a large scheduler that _definitely_ benefits from having a
              | dedicated team managing it.
              | 
              | That being said, I always recommend using a tool like
              | Terraform to back up infrastructure and the like.
        
               | DelightOne wrote:
                | Maybe I didn't do enough with Kubernetes to need a
                | dedicated team, hmm.
                | 
                | The point I wanted to make is that my opinion is a bit
                | different. Being able to declare what the state should
                | be, instead of doing it imperatively or with
                | configuration management, is just something I enjoy, and
                | I don't think it costs much more in comparison.
                | 
                | That's why I wondered: why not use it as a small startup?
        
         | crypt1d wrote:
          | I don't know why you are being down-voted; you are not wrong -
          | Kubernetes is not something I would roll out at the start of a
          | project. I think people are just excited to try it out, so
          | they often overlook the operational side of things.
        
           | p_l wrote:
            | Considering that once you get through the possibly high
            | upfront cost it _greatly_ simplifies the operational side, I
            | often get the feeling that the "you are not Google" crowd
            | misses the operational side completely, or looks at it
            | through rose-tinted glasses.
        
             | petilon wrote:
              | Absolutely agree. The point of Kubernetes is to simplify
              | the operational side. I use it for my hobby projects. There
              | is some learning investment needed, but after that it
              | _simplifies_ things so much. You can use Kubernetes for
              | simple projects that don't need to scale to the size of
              | Google.
        
               | p_l wrote:
                | In another recent thread, I mentioned running ~62 apps
                | on Kubernetes, and people asked if it could be
                | simplified to fewer.
                | 
                | They are mostly plain old PHP webapps, a non-trivial
                | number of them _WordPress_ (shudder), some done in
                | random frameworks, some with ancient dependencies, some
                | in node.js, one in Ruby, etc. They are the equivalent of
                | good old shared hosting stuff.
                | 
                | With Kubernetes, we generally manage them in a much
                | simplified form, and definitely much more cheaply than
                | hosting multiple VMs to work around conflicting
                | dependencies at the distro level or keeping track of our
                | own builds of PHP.
                | 
                | We also run CI/CD on the same cluster. Ingress mechanics
                | mean we have vastly simplified routing. Yes, we cheat a
                | lot by using GKE instead of our own deployment of k8s,
                | but we could manage that too; it's just cheaper this way.
                | 
                | Pretty much smooth sailing, with few operational worries
                | except the fact that Google's Stackdriver IMO sucks :)
        
         | halfmatthalfcat wrote:
         | Because it removes both provisioning and deployment concerns
         | when you can build a container and then just tell Kubernetes to
          | deploy it onto the cluster. There's not much that goes into
          | spinning up these managed Kubernetes clusters. Most of it is
          | telling them what classes of instances you want created.
         | 
         | When you buy a large instance, you still need to set up the
         | instance and tweak it to your application's needs. You then
         | need to babysit this node.
        
         | eknkc wrote:
         | It is a single point of failure. I want multiple smaller
         | instances even if I'm running something trivial. It is not
         | about scale but about reliability for us. And the moment I go
         | with multiple instances I need something to manage that mess.
         | Kube handles it well.
        
           | spectramax wrote:
           | I've used DigitalOcean's load balancer + multiple instances
           | that run app containers. It works fine and without
           | Kubernetes.
           | 
            | I wasn't saying to just use a single 24-core instance.
            | Perhaps I should have worded it better.
        
             | closeparen wrote:
             | How do you obtain the right docker images and launch them
             | with the right settings?
             | 
             | How do you coordinate rolling deployments?
             | 
              | (I'm not saying you _need_ Kubernetes to do these things.
              | But if you've written something to handle them, it is very
              | probably Kubernetes-shaped.)
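              | 
              | For reference, the Kubernetes answer to both questions is
              | roughly one Deployment manifest (a sketch - the name,
              | image, and port below are made-up placeholders):
              | 
              |     apiVersion: apps/v1
              |     kind: Deployment
              |     metadata:
              |       name: web
              |     spec:
              |       replicas: 3
              |       selector:
              |         matchLabels:
              |           app: web
              |       strategy:
              |         type: RollingUpdate
              |         rollingUpdate:
              |           maxUnavailable: 0  # keep capacity while rolling
              |           maxSurge: 1        # one new pod at a time
              |       template:
              |         metadata:
              |           labels:
              |             app: web
              |         spec:
              |           containers:
              |           - name: web
              |             # pinned image tag = "the right docker image"
              |             image: registry.example.com/web:1.2.3
              |             ports:
              |             - containerPort: 8080
              | 
              | Change the image tag, run kubectl apply, and the rolling
              | deployment is coordinated for you.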
        
             | eknkc wrote:
              | Well, a managed Kubernetes cluster is not any more complex
              | than the other solutions.
              | 
              | You can use the GCP console to launch clusters, add nodes,
              | and launch containers, expose them and autoscale them,
              | without touching a single line of config or a terminal if
              | that's what you like. If you launch / manage your own
              | cluster though... it's a pain.
        
         | mjfisher wrote:
         | > Kubernetes feels like an insane complexity
         | 
          | There are two things there - the complexity of engineering the
          | cluster itself, and the complexity of using it.
         | 
         | The former is where the pain is. If you can remove that pain by
         | using a reliable managed offering, it changes the perspective a
         | bit.
         | 
          | The complexity of using it is also non-trivial - you have to
          | learn its terminology and primitives, understand the
          | deployment and scheduling model, know how volumes are
          | provisioned, aggregate logs, etc.
         | 
         | However, the ROI for learning that complexity can be pretty
         | good. If you get comfortable with it (maybe a month or so of
         | learning and hacking?) you get sane rolling deployment
         | processes by default, process management and auto healing,
         | secrets and config management, scheduled tasks, health checks,
         | and much, much easier infrastructure as code. Which means if
         | things do go really sideways, it's usually not that hard just
         | to stand up a replacement cluster from scratch. With a bit more
         | reading and some additional open source services, you get fully
          | automatic SSL management with Let's Encrypt too.
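          | 
          | To give a flavour of the above, a scheduled task is just one
          | more small manifest - a rough sketch, with a made-up name,
          | schedule, and image:
          | 
          |     apiVersion: batch/v1beta1
          |     kind: CronJob
          |     metadata:
          |       name: nightly-report
          |     spec:
          |       schedule: "0 3 * * *"        # every day at 03:00
          |       jobTemplate:
          |         spec:
          |           template:
          |             spec:
          |               restartPolicy: OnFailure
          |               containers:
          |               - name: report
          |                 image: registry.example.com/report:1.0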
         | 
         | All that said, I absolutely agree with you in principle. No one
         | should overcomplicate their deployments for no benefit. It's
          | just worth reflecting on what those benefits are. Kubernetes
          | has a bunch of them - and whether they're worth the effort of
         | getting to know it depends very much on the system(s) you're
         | running.
        
           | spectramax wrote:
            | I used to work for a company that had fewer than 130
            | employees and none of the software services were exposed to
            | the public - the total user count was 130 users. And we had
            | a Kubernetes cluster.
            | 
            | Thanks for an insightful response. Don't you think that a
            | lot of whizbang software developers just want to use the
            | latest and greatest buzzword thing, whatever that might be,
            | to get some cool points? As I grow older, I see an
            | increasing lack of objectivity and decreasing attention to
            | KISS principles.
        
             | mjfisher wrote:
              | > Don't you think that a lot of whizbang software
              | developers just want to use the latest and greatest
              | buzzword thing
              | 
              | Yes, absolutely, and it drives me up the wall. I've seen
              | some incredibly unsuitable technology choices made, the
              | complexity of which has absolutely sunk productivity on
              | projects.
             | 
             | That said, it's easy to become cynical and associate
             | anything that's become recently popular with hype-driven
             | garbage. But that can blind you to some really great new
             | stuff too. I tend to hang behind the very early adopters
             | and wait to see how useful new tech becomes in the wild -
             | the "a year with X" style blog posts tend to be really
             | informative.
        
             | coryrc wrote:
             | If you switch companies every two years for your first ten
             | years, you make 50% more than someone who changed once.
             | Companies have set it up so you _must_ optimize your resume
              | to get a competitive salary. Now, start giving a 10%
              | yearly raise as standard, and maybe employees can afford
              | to work on simple solutions.
        
         | p_l wrote:
          | It all depends on how many applications you _really_ have to
          | run. When you have one application, can depend on vendored
          | services for some of the infra, or have sufficiently small
          | requirements for extra services, it all works pretty well.
          | 
          | It helps if you also skip due diligence on various things
          | (like actually caring about logging and monitoring), so you
          | can scratch out those concerns. This is not even half
          | sarcastic; people have gone pretty far with that.
          | 
          | You'll probably still spend more time than you think on actual
          | deployment and management due to the lack of automation and
          | standardization, but if you're sufficiently small on the
          | backend, it works.
         | 
          | When the number of concerns you have to manage rises and you
          | want to optimize the infrastructure costs, things get weirder.
          | Kubernetes' main offering is _NOT_ scaling. It's providing a
          | standardized control plane you can use to reduce the
          | operational burden of managing the often wildly different
          | things that are components of your total infrastructure build:
          | things like infrastructure metrics and logging, _your
          | business_ metrics and logging, and various extra software you
          | might run to take care of stuff. It makes it easier to do HA,
          | and it abstracts away the underlying servers so you don't have
          | to remember which server hosted which files or how the volumes
          | were mounted (it can be as easy as a classic old NFS/CIFS
          | server with a rock-solid disk array; you just don't have to
          | care about mounting the volumes on individual servers). It
          | makes it easier to manage HTTPS routing in my experience (plop
          | an nginx-ingress-controller with a host port, do a basic
          | routing setup to push external HTTP/HTTPS traffic to the nodes
          | that run it, and get the bliss of forgetting about configuring
          | individual Nginx configs - or use your cloud's Ingress
          | controller, with little configuration difference!).
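          | 
          | To make that concrete, the "basic routing setup" per app is
          | just a tiny Ingress object - a sketch, with made-up host and
          | service names:
          | 
          |     apiVersion: networking.k8s.io/v1beta1
          |     kind: Ingress
          |     metadata:
          |       name: shop
          |     spec:
          |       rules:
          |       - host: shop.example.com
          |         http:
          |           paths:
          |           - backend:
          |               serviceName: shop
          |               servicePort: 80
          | 
          | The ingress controller watches these objects and rewrites its
          | Nginx config for you.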
         | 
         | In my experience, k8s was a force multiplier to any kind of
         | infrastructure work, because what used to be much more complex,
         | even with automation tools like Puppet or Chef, now had a way
         | to be nicely forced into packages that even self-arranged
         | themselves on the servers, without me having to care about
         | which server and where. Except for setting up a node on-prem,
         | or very rare debug tasks (that usually can be later rebuilt
         | using k8s apis to get another force-multiplier), I don't have
         | to SSH to servers except maybe for fun. Sometimes I login to
         | specific containers, but that falls under debugging and
         | tweaking the applications I run, not the underlying
         | infrastructure.
         | 
          | That's the offering - whether it is _right_ for you is another
          | matter. Especially if you're in a cash-rich SV startup, things
          | like Heroku might be better. For me, the costs of running on a
          | PaaS are higher than getting another full-time engineer... and
          | we manage to run k8s part-time.
        
         | saber6 wrote:
         | While I see a lot of derision about Kubernetes these days, if I
         | am starting a Greenfield platform/design/product, why wouldn't
         | I use it?
         | 
         | There are tremendous benefits to K8S. It isn't just hype.
         | 
          | On the flip side, starting out as a monolithic (all-in-one-VM)
          | app will take significant effort to transition to
          | microservices / K8S.
         | 
         | If I think I might end up at microservices/K8S, I think I might
         | as well plan for it (abstractly) initially.
        
           | allset_ wrote:
           | I believe this is the right way to think about it. You can
           | start off with a relatively monolithic architecture, and then
           | break that out into smaller microservices as needed with a
           | much easier transition.
        
       | oroup wrote:
       | At the low end it's worth considering Fargate distinct from EKS.
       | You don't need to provision a whole cluster (generally 3 machines
       | minimum) and can just run as little as a single Pod.
        
         | petilon wrote:
         | I tried Fargate and found it to be crappy. It is very hard to
         | use. It is proprietary, so your app will not be portable, and
         | your knowledge and experience will not be portable either. If
            | you use Kubernetes there are tons of tutorials, your app
            | becomes portable across clouds, and your knowledge is
            | portable from cloud to cloud too. GKE only costs around $60
            | per month for a single-machine "cluster".
        
           | txcwpalpha wrote:
           | What is proprietary about Fargate? It's containers. I did not
           | find any experience/knowledge (other than the basic knowledge
           | of navigating the AWS console) that wouldn't transfer to any
           | other container service.
        
             | petilon wrote:
             | AWS console is the crappy part. Azure and Google have much
              | better GUIs. And here's the proprietary part:
              | https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...
             | 
             | For contrast, you can manage a Kubernetes deployment using
             | _standardized_ yaml and kubectl commands, regardless of
             | whether the application is running on localhost (minikube),
             | on Azure or on GKE.
             | 
              | BTW, AWS Lightsail has a decent GUI. Alas, it doesn't
              | support containers out of the box. The best support for
              | Docker image-based deployment is Azure App Service.
        
               | bdcravens wrote:
                | > here's the proprietary part:
                | https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...
               | 
               | That's ECS, not EKS. Two different products.
               | 
                | The EKS documentation is at
                | https://docs.aws.amazon.com/eks/latest/userguide/fargate.htm...
               | 
               | > For contrast, you can manage a Kubernetes deployment
               | using standardized yaml and kubectl commands, regardless
               | of whether the application is running on localhost
               | (minikube), on Azure or on GKE.
               | 
               | Likewise for EKS.
        
               | petilon wrote:
               | I was replying to this:
               | 
               | > _At the low end it's worth considering Fargate distinct
               | from EKS._
        
               | txcwpalpha wrote:
               | Right, and you linked to the documentation for ECS on
               | Fargate rather than the documentation for Kubernetes
               | Fargate, which is what was being talked about. Again, two
               | different products.
        
               | txcwpalpha wrote:
               | I'm still not seeing the difference. As pointed out, what
               | you linked is for ECS. That has nothing to do with
               | Kubernetes, so I'm not sure why you're comparing the
               | things on that page to kubectl commands on GKE or Azure.
               | Of course you cannot use kubectl on ECS, because ECS has
               | nothing to do with kube.
               | 
               | When you are using actual EKS (with or without Fargate),
               | you certainly can use standardized kubectl commands.
               | 
               | The only "proprietary" things I see in your link is the
               | specific AWS CLI commands used to set up the cluster
               | before you can use kubectl, but both Azure and GCP
               | require using the Azure CLI and gcloud CLI for cluster
               | deployment, too. There's also setting up AWS-specific
               | security groups and IAM roles, but you have to do those
               | same things on GCP or Azure, too, and both of those have
               | their own "proprietary" ways of setting up networking and
               | security, so I don't see the differentiating factor.
        
           | thoraway1010 wrote:
            | I use Fargate and am pretty happy with it. I don't need big
            | scale-out - it supports $1M/year in revenue, so not huge -
            | but I LOVE the simplicity.
           | 
           | I just have the CLI commands in my dockerfiles as comments,
           | so once I get things sorted locally using docker I update the
           | task with some copy / paste. I only update occasionally when
           | I need to make some changes (locally do a lot more).
           | 
            | The one thing: I'd love to get my Docker image sizes down -
            | they seem way too big for what they do, but it's just easier
            | to start with full-fat images. I tried Alpine images and
            | couldn't get stuff to install / compile, etc.
        
       | madjam002 wrote:
       | Too bad AKS is just terrible.
       | 
        | Slow provisioning times, slow PVCs, slow LoadBalancer
        | provisioning, slow node pool management, plus a node pool
        | implementation that isn't production-ready.
        
       | 0x1221 wrote:
       | I'm not familiar with the space so my question might not be that
       | relevant - where does OpenShift fit in all of this (I still
       | struggle to differentiate it from Kubernetes) and is there any
       | merit to IBM trying to sell it so hard?
        
         | symfrog wrote:
          | OpenShift _is_ Kubernetes, just like RHEL is a Linux
          | distribution with support for enterprises. OpenShift makes
          | opinionated choices about what it bundles (distributes) with
          | vanilla Kubernetes. For example, Istio was chosen as the
          | service mesh distributed with OpenShift 4.
        
         | p_l wrote:
          | OpenShift _wraps around_ Kubernetes, with some of Red Hat's
          | own special offering stuff on it. Generally, plain K8s is a
          | building block - Red Hat made OpenShift with a bunch of
          | opinionated choices geared towards enterprise deployments.
          | Some of them migrated later to Kubernetes itself (OpenShift's
          | _Route_ inspired K8s' _Ingress_), and some OpenShift cribs
          | from K8s (Istio becoming part of OpenShift by default in OS 4).
         | 
          | Generally OpenShift heavily targets enterprises as an
          | "All-in-One" package. Some of that works, some doesn't, but
          | honestly it's often more a case of the IT dept that manages
          | the install ;)
         | 
         | Except installing OpenShift. That's horrific. Someone should
         | repent for the install process, seriously.
        
       | david-s wrote:
       | It doesn't seem to include Digital Ocean in the comparison.
        
         | based2 wrote:
          | They are also not listed in
         | https://kubernetes.io/docs/concepts/cluster-administration/c...
         | 
         | https://github.com/ramitsurana/awesome-kubernetes#cloud-prov...
         | 
         | https://community.hetzner.com/tutorials/install-kubernetes-c...
         | 
         | https://www.ovhcloud.com/fr/public-cloud/kubernetes/
        
         | axaxs wrote:
         | DO is my absolute favorite. I really think they could be a long
         | term winner. Their interface is so much nicer than the
         | competitors, in my opinion. I'm not even currently a customer,
         | let alone a shill.
        
           | steve_adams_86 wrote:
            | I agree that DO is awesome. I'd argue, though, that they can
            | make a better UI because they offer less. Everything is a
            | little simpler. It would be hard to condense AWS into a
            | similar type of interface.
           | 
           | Having said that, DO is enough for virtually everything I've
           | ever worked on, and the user experience and price are so much
           | better. They're a clear winner for almost everything I do
           | these days.
        
             | axaxs wrote:
              | I agree, but that's part of the charm to me. I only use
              | what, 4 or 5 things in AWS, but each login is information
              | overload. Having to Ctrl-F for what you are looking for is
              | not an ideal experience.
             | 
             | Whether a conscious decision or not, I think offering what
             | the 80ish percent (just a guess) actually use, and
             | streamlining it, is the right decision.
        
         | BinaryArcher wrote:
         | Because DO is crap and they treat their customers like crap.
         | Constantly breaking SLAs and revoking enterprise accounts.
        
         | dindresto wrote:
          | Yeah, I was very disappointed to see that this is limited to
          | the "regular" three...
        
         | mjfisher wrote:
         | Digital Ocean is still significantly cheaper (unsurprisingly).
         | They don't charge for the control plane, so you just pay the
         | normal prices for the droplets and resources you use. It's well
          | integrated, allowing K8s to provision load balancers and
         | volumes, and the Terraform provider for it works well.
         | 
          | My (admittedly small) cluster of 3x 4GB droplets, an external
          | load balancer, and enough volumes for logs, databases and
          | filesystems costs about 70 USD/month. It's been absolutely rock
         | solid too. I have very few minor gripes and a lot of positive
         | things to say about it.
        
           | tmpz22 wrote:
            | What's the story been like for automatically provisioning
            | TLS certificates for your load balancer?
        
             | DelightOne wrote:
              | I don't know about terminating at the load balancer level,
              | but it works fine at the ingress level (HTTP router) with
              | cert-manager, nginx-ingress-controller and an Ingress
              | definition.
        
               | mjfisher wrote:
               | That's exactly how I manage it too. It means there only
               | needs to be one load balancer per cluster, and adding a
               | new SSL cert is just a matter of adding a couple of lines
               | to the ingress config.
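                | 
                | Concretely, the "couple of lines" are an annotation plus
                | a tls section merged into the existing Ingress - a
                | sketch, with the issuer and host names standing in for
                | whatever you have configured:
                | 
                |     metadata:
                |       annotations:
                |         cert-manager.io/cluster-issuer: letsencrypt-prod
                |     spec:
                |       tls:
                |       - hosts:
                |         - shop.example.com
                |         secretName: shop-example-com-tls
                | 
                | cert-manager spots the Ingress, completes the ACME
                | challenge, and keeps the certificate in that Secret
                | renewed.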
        
               | status_quo69 wrote:
                | Load balancer certs via annotations are supported, but
                | they're a bit iffy when paired with controllers like
                | Ambassador, since Ambassador expects to own TLS
                | termination (although the Ambassador docs do say this is
                | configurable).
                | https://www.digitalocean.com/docs/kubernetes/how-to/configur...
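                | 
                | From memory the Service ends up looking roughly like
                | this (a sketch - double-check the annotation names
                | against that doc, and the certificate ID is a
                | placeholder):
                | 
                |     apiVersion: v1
                |     kind: Service
                |     metadata:
                |       name: web
                |       annotations:
                |         service.beta.kubernetes.io/do-loadbalancer-certificate-id: "cert-uuid"
                |         service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
                |     spec:
                |       type: LoadBalancer
                |       selector:
                |         app: web
                |       ports:
                |       - name: https
                |         port: 443
                |         targetPort: 8080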
        
               | bndw wrote:
               | aside: ambassador definitely supports external TLS
               | termination (tested with AWS ELB).
        
               | DelightOne wrote:
               | Ah good to know, thank you!
        
           | gingerlime wrote:
           | Isn't it more limited though, e.g. with auto-scaling not
           | available for nodes, but only for pods?
        
             | mjfisher wrote:
             | Yes, it is absolutely more limited. That, single-IP load
             | balancers, and no direct equivalent of VPCs spring to mind
             | as the biggest differences. AWS still makes a lot of sense
             | in a lot of cases. It is worth noting DO has a decent API,
              | so it wouldn't be _that_ hard to implement autoscaling
             | yourself if you needed it.
        
             | photonios wrote:
              | DigitalOcean now has node auto-scaling as well [1]. It was
              | released very recently; it was not available in the first
              | general release.
              | 
              | [1] https://www.digitalocean.com/docs/kubernetes/how-to/autoscal...
        
         | spalas wrote:
         | Author here -- I can add DO to the comparison today, I'll ping
         | here once I have done so!
         | 
         | ---
         | 
         | EDIT: Done!
         | 
          | I have updated the notebook with the Digital Ocean offering
          | using their General Purpose (dedicated CPU) droplets.
          | 
          | The major takeaways for DO are:
          | 
          |     - They also do not charge for the control plane resources
          |     - $/vCPU is less expensive than the other providers
          |     - $/GB memory is more expensive than the other providers
          |     - No preemptible or committed-use discounts available
         | 
         | For smaller clusters and/or clusters running CPU bound
         | workloads, DO looks like the most affordable option!
        
           | patrickaljord wrote:
           | How about https://www.packet.com/solutions/kubernetes/ ?
        
       ___________________________________________________________________
       (page generated 2020-03-07 23:00 UTC)