[HN Gopher] Google Kubernetes Engine is introducing a cluster management fee on June 6
       ___________________________________________________________________
        
       Google Kubernetes Engine is introducing a cluster management fee on
       June 6
        
       EDIT: Link to Google's official documentation here:
       https://cloud.google.com/kubernetes-engine/pricing  Just received
       an email from Google Cloud -  "On June 6, 2020, your Google
       Kubernetes Engine (GKE) clusters will start accruing a management
       fee, with an exemption for all Anthos GKE clusters and one free
       zonal cluster.  Starting June 6, 2020, your GKE clusters will
       accrue a management fee of $0.10 per cluster per hour, irrespective
       of cluster size and topology.  We're making some changes to the way
       we offer Google Kubernetes Engine (GKE). Starting June 6, 2020,
       your GKE clusters will accrue a management fee of $0.10 per cluster
       per hour, irrespective of cluster size and topology. We're also
       introducing a Service Level Agreement (SLA) that's financially
       backed with a guaranteed availability of 99.95% for regional
       clusters and 99.5% for zonal clusters running a version of GKE
       available through the Stable release channel. Below, you'll find
       additional details about the new SLA and information to help you
       reduce your costs."
        
       Author : agoell
       Score  : 313 points
       Date   : 2020-03-04 17:20 UTC (5 hours ago)
        
 (HTM) web link (cloud.google.com)
 (TXT) w3m dump (cloud.google.com)
        
       | d_watt wrote:
       | $72/month per cluster, regardless of the size. It's interesting
       | that they're not charging per managed node (beyond the regular
       | machine cost); it makes it steep if you want to keep a small
       | cluster up.
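       |
       | Rough math (a sketch, assuming the announced $0.10/cluster/hour;
       | the $72 vs $73 figures in this thread are just 720- vs 730-hour
       | months):
       |
       |     # GKE management fee, per cluster
       |     fee_per_hour = 0.10
       |     print(fee_per_hour * 24 * 30)   # 72.0  (30-day month)
       |     print(fee_per_hour * 730)       # 73.0  (GCP billing month)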
        
         | tssva wrote:
         | Each billing account gets 1 free zonal cluster so the cost for
         | keeping a small cluster up won't change.
        
           | mleonard wrote:
           | Currently I have a small cluster with 3 nodes running, each
           | in a different zone (data centres next to each other, e.g. in
           | London). Sadly this will now be charged.
        
           | enos_feedler wrote:
           | How do I know if my cluster is zonal? Does it mean all
           | nodes are VMs provisioned in the same zone?
        
             | zaat wrote:
             | >Single-zone clusters
             | 
             | >A single-zone cluster has a single control plane (master)
             | running in one zone. This control plane manages workloads
             | on nodes running in the same zone.
             | 
             | https://cloud.google.com/kubernetes-
             | engine/docs/concepts/typ...
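             |
             | A quick way to check from a script (a sketch; assumes an
             | authenticated gcloud CLI; zone names carry a letter suffix
             | like europe-west2-a, while region names do not):
             |
             |     import subprocess
             |
             |     # One "name<TAB>location" line per cluster.
             |     out = subprocess.run(
             |         ["gcloud", "container", "clusters", "list",
             |          "--format=value(name,location)"],
             |         capture_output=True, text=True, check=True,
             |     ).stdout
             |     for line in out.splitlines():
             |         name, location = line.split()
             |         kind = "zonal" if location.count("-") == 2 else "regional"
             |         print(name, location, kind)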
        
       | lain wrote:
       | That's about $72 a month, which matches Amazon EKS's lowered
       | pricing. I guess that rules out my hopes of EKS not charging for
       | the management plane in the near future.
        
         | state_less wrote:
         | It looks like Google's offering one free cluster per billing
         | account if I read that right. If that's not the case, I'll be
         | turning off my cluster before this kicks in.
        
           | dman wrote:
           | One zonal cluster per billing account is free
        
         | numbsafari wrote:
         | I mean... if I were Amazon, I'd eliminate those management fees
         | this afternoon just to spite them. I'm sure it's a rounding
         | error as far as AWS is concerned.
        
       | miguelmota wrote:
       | The good ol' bait and switch tactic; lure new customers in with
       | zero fees and switch out the fee structure once they're locked
       | in.
        
       | agoell wrote:
       | Just received an email from Google Cloud -
       | 
       | "On June 6, 2020, your Google Kubernetes Engine (GKE) clusters
       | will start accruing a management fee, with an exemption for all
       | Anthos GKE clusters and one free zonal cluster.
       | 
       | Starting June 6, 2020, your GKE clusters will accrue a management
       | fee of $0.10 per cluster per hour, irrespective of cluster size
       | and topology.
       | 
       | We're making some changes to the way we offer Google Kubernetes
       | Engine (GKE). Starting June 6, 2020, your GKE clusters will
       | accrue a management fee of $0.10 per cluster per hour,
       | irrespective of cluster size and topology. We're also introducing
       | a Service Level Agreement (SLA) that's financially backed with a
       | guaranteed availability of 99.95% for regional clusters and 99.5%
       | for zonal clusters running a version of GKE available through the
       | Stable release channel. Below, you'll find additional details
       | about the new SLA and information to help you reduce your costs."
        
       | whatakaruvad wrote:
       | This is seriously such a bummer. This was the main reason for us
       | moving out of AWS.
        
         | MightySCollins wrote:
         | We also jumped over for this reason and had to deal with a
         | large number of gotchas from GCP. I kinda wish I never spent
         | the long nights on this...
        
       | wereHamster wrote:
       | > One zonal cluster per billing account is free
       | 
       | For hobby projects nothing will change.
        
       | rcconf wrote:
       | Oh wow, one of the biggest reasons we picked Google Cloud was
       | that you did not have to pay a flat fee for their managed
       | Kubernetes service. Luckily there is Kubernetes support across
       | all cloud providers, so we're happy we're not locked in to a
       | vendor (the biggest reason we picked Kubernetes in the first
       | place).
       | 
       | We were thinking of using Stackdriver for logging, but we were
       | scared of vendor lock-in due to price increases or other
       | changes that we've been warned about with Google. In this case, I
       | think it's safe to say we'll be using Prometheus + Grafana + Loki
       | instead since there may be a random Stackdriver flat fee
       | introduced or some other weird fee and we may need to migrate out
       | of Google.
        
         | thockingoog wrote:
         | https://news.ycombinator.com/item?id=22487110
        
         | alasdair_ wrote:
         | The last place I worked, with a couple of petabytes of monthly
         | Stackdriver logs and a full embrace of almost every GKE/GCP
         | tool, also switched to Prometheus + Grafana due to a lack of
         | functionality within Stackdriver. I think you're making a good
         | choice.
        
         | atombender wrote:
         | Stackdriver is terrible, and super expensive for what it is. We
         | ran a dedicated Fluentd in GKE for a long time to work around
         | its shortcomings (GKE also uses/used Fluentd to shuffle logs
         | into Stackdriver), then switched to using Loki + Promtail +
         | Grafana, which has been excellent.
        
           | physicles wrote:
           | +1 for Loki + Promtail + Grafana. Really low maintenance once
           | it's set up.
        
           | batter wrote:
           | Stackdriver is indeed horrible.
        
       | msohn wrote:
       | Try project Gardener.
       | 
       | It's fully open source and uses Kubernetes to run Kubernetes
       | control planes and manage underlying infrastructure across many
       | infrastructure providers.
       | 
       | Manage homogeneous Kubernetes clusters across Azure, AWS, GCP,
       | Alicloud, OpenStack, VMWare at scale. Kubernetes on bare metal
       | with Packet Cloud or using open source metal-stack coming soon.
       | 
       | Extensible for other infrastructures, contribute support for your
       | favorite infrastructure.
       | 
       | Automation of day 2 operations e.g. etcd management including
       | automated backup/restore.
       | 
       | Choose Kubernetes version, DNS provider, operating system,
       | network plugins, container runtimes.
       | 
       | Extended cluster services like DNS management and TLS certificate
       | services.
       | 
       | https://gardener.cloud/ https://github.com/gardener
       | https://kubernetes.io/blog/2018/05/17/gardener/
       | https://kubernetes.io/blog/2019/12/02/gardener-project-updat...
       | https://landscape.cncf.io/selected=gardener
       | 
       | https://github.com/gardener?q=extension-provider
       | https://github.com/gardener?q=gardener-extension-os
       | https://github.com/gardener?q=gardener-extension-networking
       | https://github.com/metal-stack/gardener-extension-provider-m...
       | 
       | https://www.packet.com/developers/integrations/container-man...
       | https://github.com/metal-stack
       | https://knative.dev/docs/install/knative-with-gardener/
        
       | jeroenvisser101 wrote:
       | > Last modified: November 27, 2018 | Previous Versions
       | 
       | > As of November 28, 2017, Google Kubernetes Engine no longer
       | charges a flat fee per hour per cluster for cluster management,
       | regardless of cluster size, as provided at
       | https://cloud.google.com/kubernetes-engine/pricing. Accordingly,
       | Google no longer offers a financially-backed service level
       | agreement for the Google Kubernetes Engine service. The service
       | availability of nodes in a Google Kubernetes Engine-managed
       | cluster is covered by the Google Compute Engine SLA at
       | https://cloud.google.com/compute/sla.
       | 
       | > Uptime for Google Kubernetes Engine is nevertheless highly
       | important to Google, and Google has an internal goal to keep the
       | monthly uptime percentage at 99.5% for the Kubernetes API server
       | for zonal clusters and 99.95% for regional clusters regardless of
       | the applicability of a financially-backed service level
       | agreement.
       | 
       | https://cloud.google.com/kubernetes-engine/sla
       | 
       | Interesting that they're now walking back from this...
        
         | developerdylan wrote:
         | Google, walking back their commitment to software? _gasp_
        
           | sneak wrote:
           | https://twitter.com/SwiftOnSecurity/status/12341533883188715.
           | ..
        
       | partingshots wrote:
       | The guarantees for uptime with 99.5% for regional and 99.5% for
       | zonal clusters seem pretty good. Is this above the average in
       | terms of cloud providers?
        
         | seslattery wrote:
         | Small typo: 99.95% for regional. It's similar to AWS with a
         | 99.9% for EKS clusters. Also not sure what GKE's new SLA
         | reimbursement structure will be going forwards.
        
         | zzzcpan wrote:
         | 99.95% availability is something a single-datacenter
         | infrastructure could deliver, it's even typical, but
         | 99.5% is actually pretty low; that's worse than running a
         | service from home with a single home internet connection.
        
       | jwegner wrote:
       | There actually was a $0.15/hour fee a few years back:
       | 
       | https://www.forbes.com/sites/janakirammsv/2017/11/29/google-...
       | 
       | YMMV, but I think the value prop of using a managed GKE cluster
       | vs the raw costs and engineering time to run your own control
       | plane is still strong.
        
       | londons_explore wrote:
       | Are there any other companies doing managed Kubernetes in GCP?
       | 
       | If there was interest I could try to build a third party clone of
       | GKE running in all GCP regions and still managing GCP VMs, and I
       | could run it for a lot less than $0.10 per hour (although
       | obviously it would take many months of refinement till I could
       | offer a decent SLA)
        
         | [deleted]
        
         | the_duke wrote:
         | Azure and AWS both have managed Kubernetes ...
        
           | londons_explore wrote:
           | But a lot of businesses have a reason to stick to GCP. For
           | example if you need access to TPUs... You can't easily use
           | Azure Kubernetes with the actual VMs running in GCP...
        
         | fcantournet wrote:
         | Canonical has an offer
        
       | agoell wrote:
       | After this, I've been exploring other places to host our team's
       | clusters... copied pricing below
       | 
       | - EKS: $0.10/hour/cluster
       | 
       | - Digital Ocean: Free (only charges for the nodes)
       | 
       | - Azure: Free (only charges for the nodes)
       | 
       | In the long run we'll probably try to build our stack on vendor-
       | agnostic tools:
       | 
       | - Rancher - https://rancher.com/products/rancher/
       | 
       | - Infra.app - https://infra.app (mentioned a few weeks back on
       | the Kubernetes podcast)
       | 
       | - Prometheus https://prometheus.io/ - metrics
       | 
       | The cloud providers all include their own tooling (logging,
       | monitoring) built in, but I'm worried this will only lock us in
       | to further price increases... has anyone found a good vendor-
       | neutral logging system? We don't really want to use the ELK
       | stack right now since it's really heavy and costly to run...
        
         | erdii wrote:
         | Shameless plug: https://www.kubermatic.io Disclaimer: I work at
         | Loodse (the company behind kubermatic)
        
         | thockingoog wrote:
         | https://news.ycombinator.com/item?id=22487110
        
           | rcconf wrote:
           | Honestly I understand the hard work it takes to manage all
           | the clusters, but this was a total bait and switch and hurts
           | the reputation that everyone has of Google Cloud. Telling
           | us to DIY because we cannot pay $71 just sounds like
           | something someone who works at Google would say, and you do
           | work at Google.
           | 
           | The sentiment with my clients before was that Google Cloud
           | was a great choice because of the security and expertise with
           | GKE. It's also free!
           | 
           | Meanwhile, in the back of my head I've always had this fear
           | because of your reputation that you do not keep your promises
           | and that you do not care about your users. Because of this
           | fear, we have tried to make every infrastructure decision
           | avoid managed Google services, even though it may be easier
           | to do so short-term.
           | 
           | For the product I'm working on, we decided to use Kubernetes
           | just in case you baited and switched us with the reputation
           | you have. In terms of monitoring, we really wanted to use
           | Stackdriver, but now we're 100% using fluent-bit + prometheus
           | + loki + grafana. It's the only way to protect ourselves from
           | your reputation, which is becoming a reality.
           | 
           | So yeah, this is pretty sad and a bad decision. Should have
           | priced GKE at $70 / month to begin with and we would have
           | been fine with it. Now we're (actually) looking at EKS since
           | Amazon doesn't seem to have this reputation and you've
           | spooked us. We never would have thought about using any other
           | provider until today.
        
             | thockingoog wrote:
             | I understand the emotional response here, but I don't think
             | it's rational. GKE has to work as a business, or else the
             | whole thing is in trouble.
             | 
             | I think GKE provides tons of value, but people tend to
             | underestimate that. In order to keep providing that value,
             | we need to make sure it is sustainable.
             | 
             | I'm really, truly sad that you perceive it as bait-and-
             | switch, but I disagree with that characterization. If you
             | want to move off GKE, I'll go out of my way to help you,
             | but I urge you to take a big-picture look at the TCO.
        
               | ohyeshedid wrote:
               | I think part of the optics issue is that your peers seem
               | to be offering similar services for free, while being
               | sustainable.
        
               | thockingoog wrote:
               | EKS has always had a fee.
               | 
               | AKS, well, I don't have any insight into their business,
               | but I have my suspicions.
        
               | solidasparagus wrote:
               | This kind of mentality is why Google is struggling. You
               | forget that your customers are human and make emotion-
               | driven decisions. This price increase proves that you are
               | not making sustainable long-term decision and you are
               | willing to dump the cost of that mistake on your
               | customers.
               | 
               | We already don't trust Google to provide long-term,
               | stable, reliable infrastructure and each time something
               | like this happens, we become more convinced that Google
               | isn't trustworthy.
        
         | erulabs wrote:
         | One more shameless plug for my startup, https://kubesail.com
         | (YCS19)!
        
         | slovenlyrobot wrote:
         | OVH also offer a free control plane, but their service is
         | relatively beta so far.
        
       | whatakaruvad wrote:
       | On one side EKS is slashing its prices, and on the other GKE is
       | increasing them :(
        
       | praveenperera wrote:
       | Good time for people to look into DigitalOcean managed Kubernetes
       | (DOKS). I've been using it since it was in pre-release and it's
       | been great so far. Their support has been very responsive as
       | well.
       | 
       | https://www.digitalocean.com/products/kubernetes/
        
       | cbushko wrote:
       | One of the reasons I recommend GKE over EKS is because of the
       | lack of fees on the control plane.
       | 
       | I guess that one advantage is gone now...
       | 
       | Bad move Google Cloud. Bad move.
        
       | rolleiflex wrote:
       | This is awful - I don't think GCP is fully aware of their
       | position in the market as the second, inferior choice. I took a
       | bet on the underdog by using GCP and they bit me in return.
       | Especially considering their 'default' Kubernetes config
       | automatically sets you up with three(!) replicated control
       | planes, that's, as far as I understand, ~$300 added to our
       | monthly bill, for nothing.
       | 
       | Oh, and per their docs, the three-control-plane decision is _not_
       | reversible - I cannot in fact shut two of those down without
       | shutting down my production cluster and starting a new one.
       | https://cloud.google.com/kubernetes-engine/docs/concepts/typ...
       | 
       | Awful. Just so awful.
       | 
       | Edit: To answer some questions below - we have a single-tenant
       | model where we run an instance of our async discussion tool
       | (https://aether.app) per customer for better isolation, that's
       | why we had bought into Kubernetes / GCP. Since we have our own
       | hypervisor inside the clusters, it makes me wonder whether we can
       | just deploy multiple hypervisors into the same cluster, or remove
       | the Kubernetes dependency and run this on a Docker runtime in a
       | more classical environment.
        
         | trilinearnz wrote:
         | Who would be the first choice: Amazon or Microsoft?
        
         | dhsuieeh wrote:
         | Dumb plan, underdogs don't have billion dollar connections.
         | Worse plan because Google has a reputation for bad service.
        
         | [deleted]
        
         | tmpz22 wrote:
         | Too many people drank the cloud kool-aid. The move from day one
         | was to create provider-agnostic cloud architectures and
         | renounce the use of provider-specific services.
         | 
         | That said they do make it damn hard. Our k8s cluster is as
         | basic as it comes, no databases, simple deployments, but we do
         | still have a dependency on Google Cloud Loadbalancer (which we
         | hate).
         | 
         | If pricing goes up too much from this we'll move, but the GCL
         | dependency will be a PITA :/
        
           | pojzon wrote:
           | How about managing own k8s running on VMs / bare-metal?
           | 
           | Pretty much anyone who has worked in ops for a while
           | understood from the get-go that it's impossible to be totally
           | provider-agnostic. K8s is just a nice API on top of the
           | provider API that still requires provider-specific
           | configuration.
        
             | takeda wrote:
             | From what I've seen, managing k8s on your own often ends
             | up requiring a dedicated team just to keep up with its
             | insane release cycle.
        
               | freedomben wrote:
               | Can confirm. Depending on your cluster size you will need
               | at least 2 dedicated people on the "Kubernetes" team.
               | You'll probably also end up rolling-your-own deployment
               | tools because the K8s API is a little overwhelming for
               | most devs.
               | devs.
        
               | mleonhard wrote:
               | I started learning Kubernetes and was overwhelmed. The
               | biggest problem was the missing docs. I filed a Github
               | issue asking for missing Kubernetes YAML docs:
               | 
               | https://github.com/kubernetes/website/issues/19139
               | 
               | Google will ignore it like all of the tickets I file. The
               | fact is that Google is in the business of making money
               | and they are focused on enterprise users. Enterprise
               | users are not sensitive to integration difficulty since
               | they can just throw people at any problems. So eventually
               | everything Google makes will become extremely time-
               | consuming to learn and difficult to use. They're becoming
               | another Oracle.
        
             | bcheung wrote:
             | I'm running my own on bare metal dedicated servers. You
             | will need to install a few extra things (MetalLB for
             | LoadBalancer, CertManager for SSL, an ingress controller
             | (nginx, Ambassador, Gloo), and one of the CSI plugins for
             | your preferred storage method). It is extra work but as a
             | personal cluster for hobby work, I'm paying $65/mo total
             | for the cluster. Same specs would probably be $1000/mo at a
             | public cloud provider.
        
             | freedomben wrote:
             | _Disclaimer: I work for Red Hat and am very biased, but
             | this is my own honest opinion._
             | 
             | If you're going to run on bare-metal or in your own VMs,
             | OpenShift is very much worth a look. There are hundreds,
             | maybe _thousands_ of ways to shoot yourself in the foot,
             | and OpenShift puts up guard rails for you (which you can
             | bypass if you want to). OpenShift 4 runs on top of RHCOS
             | which makes node management much simpler, and allows you to
             | scale nodes quickly and easily. Works on bare metal or in
             | the cloud (or both, but make sure you have super low
             | latency between data centers if you are going to do that).
             | It's also pretty valuable to be able to call Red Hat
             | support if something goes wrong. (I still shake my head
             | over the number of days I spent debugging arcane networking
             | issues on EKS before moving to OpenShift, which would have
             | paid for a year or more of support just by itself).
        
               | stas2k wrote:
               | Don't want to sound snarky, but how about an upgrade path
               | from 3.11 to 4.x? I am a heavy Openshift user and it
               | seems that RH just dumped whatever architecture they had
               | with pre-4 clusters and switched to Tectonic-like 4.x
               | installations without any way to upgrade other than a new
               | installation. This makes it hard to migrate with physical
               | nodes.
        
             | jrockway wrote:
             | The big problem with running your own cluster is the extra
             | machines you need for a high-availability control plane,
             | which is expensive. That is why Amazon and now Google feel
             | like they can charge for this; you can't really do it any
             | better yourself.
        
           | Glyptodon wrote:
           | Doing load balancing, DNS, and egress has been way uglier in
           | Google Cloud K8s than I expected. It pushes projects towards
           | doing it themselves in-cluster, IMO.
        
           | rolleiflex wrote:
           | We're in the same situation -- we've engineered for minimum
           | provider-specific dependencies but GKE LoadBalancers were
           | where they got us via arm twisting as well. There is no way
           | to expose a cluster to the outside world in a production
           | environment otherwise.
        
             | uberduper wrote:
             | There are ways to expose your cluster to public and/or run
             | your own load balancers on GKE (or any other cloud k8s
             | deployment).
        
               | [deleted]
        
             | tmpz22 wrote:
             | Do you also have occasional outages because the load
             | balancer gets into a confused state and changes take 10+
             | minutes to propagate, with no recourse other than to
             | destroy and re-create the entire resource?
        
               | [deleted]
        
             | lvh wrote:
             | It's kind of ridiculous internal load balancers can't get
             | automatic certs. We've had to do a stupid dance just to get
             | certs via the LE DNS challenge out of band, and then
             | regularly install them on internal LBs.
        
               | rolleiflex wrote:
               | We paid for long-lasting wildcard certs because of that.
               | Which Apple killed a few days ago. It's going to be fun
               | when they are close to expiry.
        
               | zomglings wrote:
               | Does cert-manager not work for your needs?
               | 
               | https://github.com/jetstack/cert-manager
        
               | tmpz22 wrote:
               | Same! I still _manually_ provision some certificates
               | just because LEGO etc. don't work with GCP + Google
               | Cloud Load Balancer! And the docs for the entire subject
               | are useless...
        
         | darau1 wrote:
         | > their position in the market as the second, inferior choice
         |
         | Who's the superior choice? EKS?
        
         | bloblaw wrote:
         | GCP is actually more like the 3rd inferior option, behind
         | Azure.
         | Gartner lists Azure as just behind AWS for IaaS providers, and
         | GCP a more distant 3rd:
         | 
         | https://pages.awscloud.com/Gartner-Magic-Quadrant-for-Infras...
        
           | Spooky23 wrote:
           | I don't have a horse in the race, but I work with Gartner
           | _a lot_, and would encourage you to actually read their
           | guidance carefully and read about the Magic Quadrant
           | methodology carefully. Gartner analysts go through the
           | features and functions very closely, but the magic quadrant
           | ratings are heavily weighted by Gartner customer and other
           | peer feedback.
           | 
           | The magic quadrant isn't a Good Housekeeping seal of
           | approval. It's a screener for an architect at a Fortune 500
           | or .gov to show social proof that their product selection
           | isn't insane.
           | 
           | The "cautions" for GCP are about the nascent state of their
           | field sales and solution architects, enterprise relationship
           | management, and limited partner community.
           | 
           | The "cautions" for Azure are poor reliability, poor support,
           | and sticker shock.
           | 
           | My takeaway was very different from yours. When you read the
           | analysis, it was reflective of a mature, dominant player
           | (AWS) and two highly capable challengers with different
           | issues.
           | 
           | Google is a newer business that is missing some services (i.e.
           | anything user-facing) and is transitioning from a weird sales
           | model to a more conventional enterprise one. Microsoft has an
           | established business process and best in class sales org, but
           | they tend to use sticks of various types to force adoption
           | and are organizationally poorly equipped to support
           | customers.
        
           | pojzon wrote:
           | Azure AKS is pretty terrible TBH in comparison to GKE.
           | 
           | Also the lack of an SLA / a shady SLA does not help.
           |
           | PS: Speaking as someone with hands-on experience.
           |
           | PS2: Azure support is terrible and their response times are
           | constantly breaking SLA...
        
             | nfg wrote:
             | I'm keen to hear what problems you've had with AKS if
             | you've got time to share them here.
        
             | GordonS wrote:
             | Counterpoint to your PS2 - I've used Azure support at both
             | my enterprise job and my microISV - each time the responses
             | have been quite quick, and each time they have been
             | helpful.
             | 
             | Honestly, I've been pleasantly surprised.
        
           | ygggvg wrote:
           | GCP is not GKE. In my opinion the GKE offering from GCP is
           | the best right now.
        
             | fernandotakai wrote:
             | I kind of agree, after trying AKS, EKS and GKE. GKE blows
             | everyone out of the water.
        
             | transect wrote:
             | I agree. Though I will say that IAP and G Suite group-
             | backed IAM is nice.
        
           | lvh wrote:
           | Anecdotal, I know, but for prospective Latacora customers
           | this is absolutely not reflected in market share. It's AWS
           | first, GCP second, Azure very distant third. I'd happily
           | believe Azure is dominating in some segments where MS showers
           | prospective customers with millions of dollars in credit, but
           | IMO a blind person can see Azure does not have the product
           | offering to warrant a "completeness of vision" that is right
           | on the heels of AWS.
        
             | alasdair_ wrote:
             | According to Canalys, GCP is at 6% cloud market share (in
             | dollars), Azure at 17.4% and AWS at a bit over 32%.
             | 
             | https://www.canalys.com/static/press_release/2020/Canalys--
             | -...
             | 
             | AliCloud and Rackspace are very close to GCP as well.
             | 
             | That being said, if you're planning on running Kubernetes,
             | I'd choose GCP over any other offering - the tooling and
             | support just seems better, in my entirely subjective
             | opinion.
        
             | simonkafan wrote:
             | What essential product offerings is Azure missing that AWS
             | have?
             | 
             | My experience is that once AWS offers a new service that
             | gets attention, a few months later Azure also offers it,
             | and vice versa.
        
             | sl1ck731 wrote:
             | Anecdotally and in my opinion, Azure is more complete than
             | GCP. Between stuff like this and their product-dropping
             | stigma, most of my customers (in the cloud consulting
             | space) are trying to get into Azure. This is across every
             | industry we work in (retail especially). I've come across 2
             | customers in 3 years of consulting that want anything to do
             | with GCP.
        
               | outworlder wrote:
               | > Azure is more complete than GCP
               | 
               | It has more features, yes. How well those features work
               | is another matter entirely.
        
         | lvh wrote:
         | For what it's worth, we're seeing GKE being the main reason
         | prospective clients use GCP at Latacora, to the point where
         | I'd be surprised if someone was on GCP but _not_ using GKE.
         | Obviously that's a small subset of all companies, but GKE does
         | seem like the goose that lays the golden eggs for them, at
         | least insofar as they care about startup market share.
         | 
         | I also think of them as the second, inferior cloud, but they're
         | almost certainly the better k8s hoster. If you're serious about
         | running k8s on AWS, there's a good chance you're doing
         | something like CloudPosse's Terraform-based setup, not EKS.
        
         | sethvargo wrote:
         | Thank you for the feedback. The management fee is per cluster.
         | You are not billed for replicated control planes. You can use
         | the pricing calculator at
         | https://cloud.google.com/products/calculator#tab=container to
         | model pricing, but it should work out to $73/mo regardless of
         | nodes or cluster size (again, because it's charged per-
         | cluster).
         | 
         | There's also one completely free zonal cluster for hobbyist
         | projects.
        
           | halbritt wrote:
           | > There's also one completely free zonal cluster for hobbyist
           | projects.
           | 
           | Nice.
        
           | dcolkitt wrote:
           | Hi Seth,
           | 
           | What about clusters that are used for lumpy workloads? Like
           | data science pipelines? For example, our org has a few dozen
           | clusters being used like that.
           | 
           | Each pipeline gets its own cluster instance as a way to
           | enforce rough-and-ready isolation. Most of the time the
           | clusters sit unused. To keep them alive we keep a small,
           | cheap, preemptible node alive on the idle cluster. When a new
           | batch of data comes in, we fire up kube jobs which then
           | triggers GKE autoscaling that processes the workload.
           | 
           | This pricing change means we're looking at thousands of
           | dollars more in billing per month. Without any tangible
           | improvement in service. (The keepalive node hack only costs
           | $5 a month per cluster.) We could consolidate the segmented
           | cluster instances into a single cluster with separate
           | namespaces, but that would also cost thousands in valuable
           | developer time.
           | 
           | I don't know how common our use pattern is, but I think we
           | would be a lot better served by a discounted management fee
           | when the cluster is just being kept alive and not actually
           | using any resources. At $0.01, maybe even $0.02, per hour we
           | could justify it. But paying $0.10 to keep empty clusters
           | alive is just egregious.
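           |
           | Back-of-the-envelope for a fleet like ours (a sketch; 36
           | clusters stands in for "a few dozen", and the fee assumes a
           | 730-hour billing month):
           |
           |     clusters = 36
           |     keepalive_node = 5.0   # $/month for the keepalive hack
           |     mgmt_fee = 0.10 * 730  # ~$73 per cluster per month
           |     print(clusters * keepalive_node)               # ~$180/mo today
           |     print(clusters * (keepalive_node + mgmt_fee))  # ~$2808/mo after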
        
             | zomglings wrote:
             | On GKE, you can use a single cluster with multiple node
             | pools to achieve a similar effect. Just set the right
             | affinity on your job resources.
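             |
             | A sketch with the Python client (pool, image and namespace
             | names here are made up; GKE labels every node with its
             | pool name under cloud.google.com/gke-nodepool):
             |
             |     from kubernetes import client, config
             |
             |     config.load_kube_config()
             |     job = client.V1Job(
             |         api_version="batch/v1", kind="Job",
             |         metadata=client.V1ObjectMeta(name="batch-run"),
             |         spec=client.V1JobSpec(
             |             template=client.V1PodTemplateSpec(
             |                 spec=client.V1PodSpec(
             |                     restart_policy="Never",
             |                     # Pin the job to one autoscaled pool.
             |                     node_selector={
             |                         "cloud.google.com/gke-nodepool": "pipeline-a",
             |                     },
             |                     containers=[client.V1Container(
             |                         name="worker",
             |                         image="gcr.io/my-proj/etl:latest",
             |                     )]))))
             |     client.BatchV1Api().create_namespaced_job("pipelines", job)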
        
             | thockingoog wrote:
             | Those empty clusters that you get for free cost Google
             | money. Perhaps it never should have been free, because that
             | skewed incentives towards models like this.
        
           | rolleiflex wrote:
           | Seth -- I appreciate you being here to take feedback, and for
           | the clarification as well. The very surprising email I've
           | received this morning is very hazy on the details, and the
           | docs linked from the email are not updated yet.
           | 
           | The main issue is that _not charging for the control plane_
           | and _charging for the control plane_ leads to two very
           | different Kubernetes architectures, and as per your docs,
           | those decisions made at the start are very much set in stone.
           | You cannot change your cluster from a regional cluster to a
           | single zone cluster for example. So you have customers who
           | built their stacks taking into account your free control
           | plane, and you're turning the screws by adding a cost for
           | it -- but they cannot change the type of their cluster to
           | optimise their spend, since, per your docs, those decisions
           | are set in stone. That's entrapment.
           | 
           | You should keep existing clusters in the pricing model
           | they've been built in, and apply this change for clusters
           | created after today.
           | 
           | That said, many of us made a bet on GCP. For us in
           | particular, we made a bet to the point that our SQL servers
           | are on AWS, but we still switched to GCP for 'better'
           | Kubernetes and for _not nickel and diming_ , since AWS had a
           | charge that looked like it was designed to convey that they'd
           | much rather have you use their own stuff than Kubernetes. It
           | is a relatively trivial amount, but it makes a world of
           | difference in how it feels and you guys know more than anyone
           | how much of these _GCP vs AWS_ decisions are made based not
           | on data sheets but on the 'general feel', for lack of a
           | better word.
           | 
           | AWS' message is that they're the staid, sort of old
           | fashioned, but reliable business partner. GCP's message, as
           | of this morning, is _stop using GCP._
        
             | dward wrote:
             | GKE can't offer financially backed SLOs without charging
             | for the service. This is something that, I assume,
             | significant
             | the service. This is something that, I assume, significant
             | customers want and that competitors already have:
             | 
             | https://aws.amazon.com/eks/sla/
        
               | outworlder wrote:
               | Workers are not free and never were. So they were already
               | charging.
        
               | sethvargo wrote:
               | Correct, but the control plane nodes _were_ free and had
               | no SLA. This changes that. [edit: spelling]
        
               | gowld wrote:
               | _were_ free. (Emphasis yours.)
        
             | lilbobbytables wrote:
             | > and the docs linked from the email are not updated yet.
             | 
             | That about sums up most things Google does for developers.
        
               | alasdair_ wrote:
               | I thought the standard advice for Google stuff was "there
               | are always two systems - the undocumented one, and the
               | deprecated one"
        
               | andybak wrote:
               | That's a wonderful quote that applies to many companies.
               | (I think that will resonate with the Unity developer
               | community right now.)
        
             | cmhnn wrote:
             | Sorry for the technical tangent, but I'm curious. Your
             | decision-making on GCP appears to be driven by best-of-
             | breed + cost.
             | But you put SQL Server on AWS? If you are saying SQL Server
             | is better on AWS than on Azure it would be interesting to
             | learn why.
        
               | rolleiflex wrote:
               | We need MySQL 8 because of window functions, which GCP
               | does not offer. That is available on AWS.
        
               | cmhnn wrote:
               | My bad. A clever marketing decision made me see the
               | capital SQL as SQL Server since I am used to people
               | saying Postgres, MySQL or SQL.
        
               | carterehsmith wrote:
               | I see. Curious about the latency between your GCP apps
               | and the database on AWS - is it like 1 ms or 100ms? Does
               | it affect the product?
        
             | andrewmutz wrote:
             | If this cost bothers you a great deal, why not just deploy
             | a new cluster?
        
             | ones_and_zeros wrote:
             | I agree the rollout is a little bumpy but I'm curious what
             | workloads you are using k8s for where a $74/mo (or $300/mo)
             | bill isn't a rounding error in your capex?
        
               | ssmw wrote:
               | Think about any medium-sized dev agency managing 3x
               | environments for 20x customers. That's 60 clusters, or
               | roughly $50k/year out of the blue.
               | 
               | My problem is that this fee doesn't look very "cloud"
               | friendly. Sure the folks with big clusters won't even
               | notice it, but others will sweat it.
               | 
               | The appeal of cloud is that costs increase as you go, and
               | flat rates are typically there to add predictability (see
               | BigQuery flat rate). This fee does the opposite.
        
               | olafure wrote:
               | $3600/year is significant for a startup on a shoestring
               | budget.
        
               | nimish wrote:
               | So apply for Google Cloud for Startups? One free cluster
               | is enough to get started.
               | enough to get started.
        
               | jorams wrote:
               | > Google Cloud for Startups is designed to help companies
               | that are backed by VCs, incubators, or accelerators, so
               | it's less applicable for small businesses, services,
               | consultancies, and dev shops.[1]
               | 
               | This makes it seem like Google Cloud for Startups is
               | aimed at startups that aren't really on a shoestring
               | budget.
               | 
               | [1]: https://cloud.google.com/developers/startups/
        
             | sethvargo wrote:
             | Thank you <3. I apologize the email was hazy on details. I
             | can't un-send it, but I'll work with the product teams to
             | make sure they are crystal clear in the future. I'm
             | interested to learn more about what you mean by outdated
             | docs; the documentation I'm seeing appears to have been
             | updated. Can you drop me a screenshot, maybe on Twitter
             | (same username, DMs are open)?
             | 
             | These changes won't take effect until June - customers
             | won't start getting billed immediately. I'm sorry that you
             | feel trapped, that's not our intention.
             | 
             | > You should keep existing clusters in the pricing model
             | they've been built in, and apply this change for clusters
             | created after today.
             | 
             | This is great feedback, but clusters should be treated like
             | cattle, not pets. I'd love to learn more about why your
             | clusters must be so static.
        
               | Allower wrote:
               | >I'm sorry that you feel trapped, that's not our
               | intention.
               | 
               | Bull-fucking-shit.
        
               | rolleiflex wrote:
               | > This is great feedback, but clusters should be treated
               | like cattle, not pets. I'd love to learn more about why
               | your clusters must be so static.
               | 
               | What's inside our clusters are indeed cattle, but the
               | clusters themselves do carry a lot of config that is set
               | via GCP UI for trivial things like firewall rules. Of
               | course we could script it and automate, but your CLI tool
               | also changes fast enough that it becomes an ongoing
               | maintenance burden shifted from DevOps to engineers to
               | track. In other words, it will likely incur downtime due
               | to unforeseen small issues.
               | 
               | It's also in you guys' interest that we don't do this and
               | clusters are as static as possible right now, since if we
               | are risking downtime and moving clusters, we're
               | definitely moving that cluster back to AWS.
        
               | sethvargo wrote:
               | Hmm - have you considered a tool like Terraform or
               | Deployment manager for creating the clusters? In general,
               | it's best practice to capture that configuration as code.
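               |
               | If you'd rather stay in Python than HCL, the same idea
               | as a Pulumi sketch (resource and location names here are
               | placeholders):
               |
               |     import pulumi_gcp as gcp
               |
               |     # Cluster config captured as code, not UI clicks,
               |     # so "pulumi up" can recreate it from scratch.
               |     cluster = gcp.container.Cluster(
               |         "dev-cluster",
               |         location="us-central1-a",  # zonal
               |         initial_node_count=1,
               |     )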
        
               | Aeolun wrote:
               | I think it's a bit optimistic to assume that your
               | customers will just change their deployment model because
               | you introduced a fee.
               | 
               | You provide a web interface, so it's reasonable to assume
               | people will use it.
        
               | te_chris wrote:
               | Even still, it's not like it's trivial to just bring
               | up and drop clusters. Just setting up peering with Cloud
               | SQL or HTTPS certs with GKE ingress can be fraught with
               | timing issues that can torpedo the whole provisioning
               | process.
        
               | rolleiflex wrote:
               | We use Skaffold and it's great. I'm talking about very
               | minor unforeseen stuff that causes outages, not that we
               | do it manually.
        
               | rexreed wrote:
               | Can someone explain to me the cattle vs. pets analogy?
               | I'm not sure I get it.
        
               | gowld wrote:
               | > I'm sorry that you feel trapped, that's not our
               | intention.
               | 
               | Please don't do this. You can apologize for your actions
               | and work to improve in the future, but you cannot apologize
               | for how someone feels as a result of your actions.
               | 
               | Also, intent doesn't matter unless you plan to change
               | your behavior to undo or mitigate the unintended result.
        
               | BossingAround wrote:
               | > but clusters should be treated like cattle, not pets
               | 
               | Ha. They should, but they are absolutely not. Customers
               | typically ask "why should we spend time on automating
               | cluster deployment when we are going to do it just once?"
               | and when I explain that it's for when the cluster goes
               | away, if it goes away, they say it's an acceptable risk.
               | 
               | The truth of the matter is, even in some huge
               | international companies, they don't have the resources to
               | keep up with the development of tools to have truly
               | phoenix servers. They just want to write automation and
               | have it work for the next 10 years, and that's definitely
               | not the case.
        
               | geerlingguy wrote:
               | > clusters should be treated like cattle, not pets
               | 
               | Heh... how many teams _actually_ treat their clusters
               | like cattle, though? Every time I advocate automation
               | around cluster management, people start complaining that
               | "you don't have to do that anymore, we have Kubernetes!"
               | 
               | Some people get it, yes, but even of that group, few have
               | the political will/strength to make sure that automation
               | is set up on the cluster level--especially to a point
               | where you could migrate running production workloads
               | between clusters without a potentially large outage /
               | maintenance window.
        
               | skboosh wrote:
               | > clusters should be treated like cattle, not pets
               | 
               | Sugarkube is designed to do exactly that.
               | 
               | [1] https://docs.sugarkube.io
        
               | yongjik wrote:
               | > clusters should be treated like cattle, not pets.
               | 
               | Off-topic, but is this really how people do k8s these
               | days? Years ago when I was at Google, each physical
               | datacenter had at most several "clusters", which would
               | have fifty thousand cores and run every job from every
               | team. A single k8s cluster is already a task management
               | system (with a lot of complexity), so what do people gain
               | by having many clusters, other than more complexity?
        
               | slovenlyrobot wrote:
               | The most common thing I've heard is "blast radius
               | reduction", i.e. the general public are not yet smart
               | enough to run large shared infrastructures. That seems
               | like something that should be obviously true.
               | 
               | People had exactly the same experiences with Mesos and
               | OpenStack, but k8s has decent tooling for turning up many
               | clusters, so there is an easy workaround.
        
           | xur17 wrote:
           | We currently spin up dev clusters with a single node. $73/mo
           | is going to basically double the cost of all of these...
        
             | lvh wrote:
             | This highlights a sorta-weird consequence of this pricing
             | change: suddenly pricing incentivizes you to use
             | namespacing instead of clusters for separating
             | environments.
             | 
             | (As a security person: ugh.)
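             |
             | For the record, env-per-namespace is only a few lines with
             | the Python client (a sketch; quota numbers are arbitrary):
             |
             |     from kubernetes import client, config
             |
             |     config.load_kube_config()
             |     v1 = client.CoreV1Api()
             |     for env in ("dev", "staging"):
             |         v1.create_namespace(client.V1Namespace(
             |             metadata=client.V1ObjectMeta(name=env)))
             |         # Quotas keep one env from starving another, but
             |         # it's still one control plane, one blast radius.
             |         v1.create_namespaced_resource_quota(env, client.V1ResourceQuota(
             |             metadata=client.V1ObjectMeta(name=env + "-quota"),
             |             spec=client.V1ResourceQuotaSpec(
             |                 hard={"requests.cpu": "8",
             |                       "requests.memory": "16Gi"})))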
        
               | outworlder wrote:
               | Assuming you can do that, and your system is not using
               | namespacing for its own purposes.
        
               | aludwin wrote:
               | I know kubeflow can use namespaces for its own purposes,
               | but otherwise I thought that was quite rare. Namespaces
               | are intended to be used for exactly this usecase
               | (isolating teams and/or workloads).
               | 
               | What kind of system have you seen where this isn't true?
        
               | rolleiflex wrote:
               | That's interesting - I think you're right. We might move
               | our staging cluster into our main production deployment.
               | 
               | More likely though, AWS or OpenShift running on bare
               | metal on a beefy ATX tower in the office. We want to have
               | production and staging as close to each other as
               | possible, so this is an additional reason and a p0 flag
               | on reducing the dependency on Google-specific bits of
               | Kubernetes as much as possible, hopefully useful for
               | our exit strategy as well.
        
               | lazyier wrote:
               | Kubespray works well for me for setting up a bare bones
               | kubernetes cluster for the lab.
               | 
               | I'll use helm to install MetalLB for the load balancer,
               | which you can then tie into whatever ingress controller
               | you like to use.
               | 
               | For persistent storage a simple NFS server is the bees
               | knees. Works very well, and an NFS provisioner is a helm
               | install. Very nice, especially over 10GbE. Do NOT
               | dismiss NFSv4. It's actually very nice for this sort of
               | thing. I just use a small separate Linux box with
               | software raid on it for that.
               | 
               | If you want to have the cluster self-host storage or need
               | high availability then GlusterFS works great, but it's
               | more overhead to manage.
               | 
               | Then you just use normal helm install routines to install
               | and setup logging, dashboards, and all that.
               | 
               | Openshift is going to be a lot better for people who want
               | to do multi-tenant stuff in a corporate enterprise
               | environment. Like you have different teams of people,
               | each with their own realm of responsibility. Openshift's
               | UI and general approach is pretty good about allowing
               | groups to self-manage without impacting one another. The
               | additional security is a double-edged sword. Fantastic
               | if you
               | need it, but annoying barrier to entry for users if you
               | don't.
               | 
               | As far as AWS goes... EKS recently lowered their cost
               | from 20 cents per hour to 10 cents. So costs for the
               | cluster is on par with what Google is charging.
               | 
               | Azure doesn't charge for cluster management (yet), IIRC.
        
               | lazyier wrote:
               | > Have you used NFS for persistent storage in prod much?
               | 
               | I think NFS is heavily underrated. It's a good match for
               | things like hosting VM images on a cluster and for
               | Kubernetes.
               | 
               | In the past I really wanted to use things like iSCSI for
               | hosting VM images and such things, but I've found that
               | NFS is actually a lot faster for a lot of things. There
               | are complications to NFS, of course, but they haven't
               | caused me problems.
               | 
               | I would be happy to use it in production, and have
               | recommended it, but it's not unconditional. It depends on
               | a number of different factors.
               | 
               | The only problem with NFS is how do you manage the actual
               | NFS infrastructure? How much experience does your org
               | have with NFS? Do you already have an existing file
               | storage solution in production you can expand and use
               | that with Kubernetes?
               | 
               | Like if your organization already has a lot of servers
               | running ZFS, then that is a nice thing to leverage for
               | NFS persistent storage. Since you already have expertise
               | in-house it would be a mistake not to take advantage of
               | it. I wouldn't recommend this approach for people not
               | already doing it, though.
               | 
               | If you can afford some sort of enterprise-grade storage
               | appliance that takes care of dedupe, checksums,
               | failovers, and all that happy stuff, then that's great.
               | Use that and it'll solve your problems. Especially if
               | there is some sort of NFS provisioner that Kubernetes
               | supports.
               | 
               | The only place where I would say it's a 'Hard No' is if
               | you have some sort of high scalability requirements. Like
               | if you wanted to start some web hosting company or needed
               | to have hundreds of nodes in a cluster. In that case then
               | distributed file systems is what you need... Self-hosted
               | storage aka "Hyper Converged Infrastructure". The cost
               | and overhead of managing these things is then relative
               | small to the size of the cluster and what you are trying
               | to do.
               | 
               | It's scary to me to have a cluster self-host storage
               | because storage can use a huge amount of ram and cpu at
               | the worst times. You can go from a happy low-resource
                | cluster, then a node fails or another component takes a
                | shit, and then while everything is recovering and
                | checksumming (and lord knows what) the resource usage
                | goes through the roof right during a critical time. The
                | 'perfect storm' scenarios.
        
               | freedomben wrote:
               | Have you used NFS for persistent storage in prod much? I
               | know people do it, but numerous solutions architects have
               | cautioned against it.
        
               | yetanotherme wrote:
                | My experience with NFS over the years has taught me to
                | avoid it. Yes, it mostly works. And then every once in a
                | while you have a client that either panics or hangs.
                | This has held true across versions of Linux, BSD,
                | Solaris, and Windows changing over the decades. The
                | server end is usually a lot more stable. But that's of
                | little to no comfort - knowing that yes, the other
                | clients are fine.
               | 
               | However, if you can tolerate client side failure then go
               | for it.
        
               | geerlingguy wrote:
               | (replying to freedomben): NFS has worked fairly well for
               | persistent file storage that doesn't require high
               | performance for reads/writes (e.g. good for media storage
               | for a website with a CDN fronting a lot of traffic, good
               | for some kinds of other data storage). It would be a
               | terrible solution for things like database storage or
               | other high-performance needs (clustering and separate PVs
               | with high IOPS storage would be better here).
        
               | lazyier wrote:
               | It's good to have multiple options if you want to host
               | databases in the cluster.
               | 
               | For example you could use NFS for 90% of the storage
               | needs for logging and sharing files between pods. Then
               | use local storage, FCOE, or iSCSI-backed PVs for
               | databases.
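                | 
                | A sketch of the database side of that split, using the
                | built-in local volume support (node name and disk path
                | are made up):
                | 
                |   apiVersion: storage.k8s.io/v1
                |   kind: StorageClass
                |   metadata:
                |     name: local-db
                |   provisioner: kubernetes.io/no-provisioner  # static local PVs only
                |   volumeBindingMode: WaitForFirstConsumer    # bind when the pod schedules
                |   ---
                |   apiVersion: v1
                |   kind: PersistentVolume
                |   metadata:
                |     name: db-pv
                |   spec:
                |     capacity:
                |       storage: 200Gi
                |     accessModes: ["ReadWriteOnce"]
                |     storageClassName: local-db
                |     local:
                |       path: /mnt/ssd1  # hypothetical local SSD
                |     nodeAffinity:      # local PVs must be pinned to a node
                |       required:
                |         nodeSelectorTerms:
                |           - matchExpressions:
                |               - key: kubernetes.io/hostname
                |                 operator: In
                |                 values: ["node-1"]  # hypothetical node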
               | 
               | If you are doing bare hardware and your requirements for
               | latency are not too stringent then not hosting databases
                | in the cluster is also a good approach. Just use
                | dedicated systems.
               | 
               | If you can get state out of the cluster then that makes
               | things easier.
               | 
               | All of this depends on a huge number of other factors, of
               | course.
        
               | bluhbi wrote:
               | What? Shouldn't you try to make the creation and deletion
               | of your staging cluster cheap instead of moving it to
               | somewhere else?
               | 
               | And if that is your central infrastructure, shouldn't it
               | be worth the money?
               | 
                | I do get the issue with having cheap and beefy hardware
                | somewhere else - I do that as well, but only for private
                | projects. The hourly salary I spend (or waste) on stuff
                | like that costs the company more than just paying for an
                | additional cluster with the same settings but perhaps
                | with far fewer nodes.
                | 
                | If more than one person is using it, the multiplied cost
                | of suddenly unproductive people is much higher. It also
                | decreases the per-head cost.
        
               | yoshiat wrote:
                | Yes, namespaces alone aren't sufficient for isolation.
                | Would you be able to look at our latest multi-tenancy
                | best practices?
               | 
               | https://cloud.google.com/kubernetes-engine/docs/best-
               | practic...
               | 
                | It's a living product which comes with Terraform
                | modules. We've introduced various features to enable
                | multi-tenancy as well (and more are on the way!)
        
               | mrbrowning wrote:
               | I suspect I'm in the minority on this, but I would love
               | for k8s to have hierarchical namespaces. As much as they
               | add complexity, there are a lot of cases where they're
               | just reifying complexity that's already there, like when
               | deployments are namespaced by environment (e.g.
               | "dev-{service}", "prod-{service}", etc.) and so the
               | hierarchy is already present but flattened into an
               | inaccessible string representation. There are other
               | solutions to this, but they all seem to extract their
               | cost in terms of more manual fleet management.
        
               | aludwin wrote:
               | Hey - I'm a member of the multitenancy working group (wg-
               | multitenancy). We're working on a project called the
               | Hierarchical Namespace Controller (aka HNC - read about
               | it at http://bit.ly/38YYhE0). This tries to add some
               | hierarchical behaviour to K8s without actually modifying
               | k/k, which means we're still forced to have unique names
               | for all namespaces in a cluster - e.g., you still need
               | dev-service and prod-service. But it does add a
               | consistent way to talk about hierarchy, some nice
               | integrations and builtin behaviours.
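                | 
                | For the curious, creating a subnamespace boils down to a
                | small custom resource, roughly like this (the API
                | version may differ by HNC release; the names are made
                | up):
                | 
                |   apiVersion: hnc.x-k8s.io/v1alpha2  # check your HNC release
                |   kind: SubnamespaceAnchor
                |   metadata:
                |     name: team-a-dev   # child namespace to create
                |     namespace: team-a  # parent namespace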
               | 
               | Do you want to mention anything more about what you're
               | hoping to get out of hierarchy? Is it just a management
               | tool, is it for access control, metering/observability,
               | etc...?
               | 
               | Thanks, A
        
               | sah2ed wrote:
               | Any reason why you put your link behind a URL shortener
               | besides tracking number of clicks?
               | 
               | Since there are no character limits to worry about here
               | unlike Twitter, better to put up the full URL so the
               | community can decide for themselves if the domain linked
               | to is worth clicking through or not.
        
               | aludwin wrote:
               | Nope (other than that Google Docs URLs are looong),
               | sorry.
               | 
               | Friendly docs:
               | https://docs.google.com/document/d/1R4rwTweYBWYDTC9UC-
               | qThaMk...
               | 
               | Code: https://github.com/kubernetes-sigs/multi-
               | tenancy/tree/master...
        
               | lokar wrote:
                | You can dedicate nodes by namespace, at which point the
                | isolation is pretty strong.
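                | 
                | One sketch of that, assuming the PodNodeSelector
                | admission plugin is enabled (label and names are made
                | up):
                | 
                |   apiVersion: v1
                |   kind: Namespace
                |   metadata:
                |     name: team-a
                |     annotations:
                |       # every pod in this namespace is forced onto matching nodes
                |       scheduler.alpha.kubernetes.io/node-selector: "team=team-a"
                | 
                | Pair it with taints on those nodes so other namespaces'
                | pods can't land there.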
        
               | dharmab wrote:
               | * Assuming you also configure strong RBAC, network
               | isolation and don't let persistent volumes cross-talk
        
               | sethvargo wrote:
               | As also a security person (:wave:), you can use dedicated
               | node pools and workload identity to isolate workloads in
               | the same cluster.
        
               | lvh wrote:
               | Workload identity is a GCP-specific beta feature for
               | mapping to GCP IAM, right?
        
               | bluhbi wrote:
               | yes
        
             | bluhbi wrote:
              | It's still billed by the minute. If you run your dev
              | clusters 24x7 then they apparently are critical
              | enough.
        
             | rad_gruchalski wrote:
              | Genuinely curious, isn't Docker Kubernetes an option?
        
               | xur17 wrote:
                | It is - especially on OSX, though, it is very CPU and
                | memory intensive.
        
             | dahfizz wrote:
             | For a dev environment, why not host your own hardware?
             | Especially if cost is a concern, it seems like a no
             | brainer.
        
         | taleodor wrote:
         | My approach with any Google b2b product - always have a plan to
         | migrate out of Google and never agree to anything that locks
         | you to Google.
         | 
          | After seeing what they did to Google Maps, and Api.AI /
          | Dialogflow jumping from free to $5k overnight - I just can't
          | trust them.
        
           | tmpz22 wrote:
           | There is a whole generation of future CTO / VP of Engineering
           | types who are coming up on these reputations for GCP, AWS,
           | Azure, etc. and it'll be interesting to see how the biases
            | play out over the next 5-10 years. I predict a strong move
            | back to self-hosting once the pains of e.g. self-managing a
            | bare-metal K8s cluster come down and storage/RAM/CPU prices
            | continue to drop. I for one welcome it.
           | 
           | There is a billion dollar company on the horizon for whoever
           | can best commoditize bare metal with an apple-esque usability
           | model.
        
           | riyadparvez wrote:
           | > never agree to anything that locks you to Google.
           | 
            | How is Google different from other cloud providers in terms
            | of vendor lock-in?
        
             | taleodor wrote:
              | Fair point - yes, I kind of try to avoid it for any
              | provider. But the special thing about Google is that they
              | raise prices or, worse, cancel or modify services at will.
              | 
              | For AWS or Azure I developed way more trust over time -
              | could be subjective - but it could also be that there is a
              | reason Google is a distant 3rd in the game.
        
             | notyourday wrote:
             | Being locked into a provider that does not increase prices
             | or modify services in a non-compatible way ( cancel etc )
             | works much better than being locked into a provider that
             | does.
        
         | ygggvg wrote:
          | It's $72 ($0.10/hour x ~720 hours/month), not $300, and the
          | first zonal cluster is free.
          | 
          | I'm not sure what your use case is that you would choose GKE
          | and be worried about $300 per month in infra costs.
          | 
          | For corp we use GKE. For private use I self-host k3s, and for
          | our startup a super cheap Digital Ocean cluster.
        
           | maest wrote:
           | > worried about 300$ per month infra costs
           | 
           | That feels like the wrong attitude.
        
             | bluhbi wrote:
              | The salary of the people working with and using these
              | 'tools' - this infrastructure - is higher than $300.
              | 
              | If your Kubernetes cluster is part of your core
              | infrastructure, then $300 more or less should not be an
              | issue at all (not to say that I think $300 is nothing).
              | 
              | That's not to say you should waste money, but often
              | enough, if you buy cheap and your hardware breaks, and
              | your time & material costs much more than what better
              | hardware would have cost, then you wasted money by buying
              | cheap.
              | 
              | Unfortunately, with IT products there are certain things
              | which are not directly visible, like how secure your
              | product is. GCP offers 2FA, Digital Ocean does not. How
              | much money is it worth to you to have your whole
              | infrastructure protected by 2FA? For me, in a business
              | context, no 2FA would be a no-go.
        
               | jorams wrote:
               | > GCP offers 2FA, Digital Ocean does not.
               | 
               | Digital Ocean definitely supports 2FA[1]
               | 
               | [1]:
               | https://www.digitalocean.com/docs/accounts/security/2fa/
        
         | endymi0n wrote:
          | Interesting. As a mid-level GCP customer, it won't make a big
          | dent in our bill specifically, but in the end, I'm not sure
          | this pricing move is a smart strategy.
         | 
         | With this fixed fee model, the change will barely make a
         | difference (== Google revenue) for the large customers who can
         | spare the money, but will create a significant entry barrier to
         | that side project / super-early stage that considers getting
         | hooked on GCP, specifically GKE.
         | 
         | Then again, not my decision to make.
        
           | bluhbi wrote:
            | That's, for me, the most frustrating thing with GCP, AWS
            | and Azure. I would never use them as a very early, small,
            | 3-person startup or for private projects.
            | 
            | There is no billing protection (which could make you very
            | poor very fast) and every service has a certain cost and
            | quality which is just not feasible in the beginning.
           | 
            | Even GKE with its free Kubernetes master reserves a lot of
            | resources on the nodes: https://cloud.google.com/kubernetes-
            | engine/docs/concepts/clu...
            | 
            | There are also a ton of great features on GKE you will
            | probably never use if you are too small. It is so much
            | cheaper to just get cheap hardware somewhere and put your
            | own k8s onto it if you have more time than money.
           | 
            | Even on Digital Ocean you have the load balancer problem:
            | you need to use the provided (and also costly) LoadBalancer
            | service. The only hacky way to avoid it is to expose your
            | ingress on the host and map that one IP, but then you lose
            | all the self-healing and load-balancing capability.
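            | 
            | For reference, the hacky variant usually amounts to running
            | the ingress controller on the host network, e.g. with
            | ingress-nginx chart values roughly like this (the field
            | layout varies by chart version):
            | 
            |   controller:
            |     kind: DaemonSet    # one controller per node
            |     hostNetwork: true  # bind on the node's IP, skipping the LoadBalancer
            |     service:
            |       enabled: false   # may be a different toggle in your chart version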
        
         | whatsmyusername wrote:
         | It's considerably cheaper than EKS. Looks like $75-80 a month
         | vs I believe around $200 per EKS cluster.
         | 
         | Everyone's having EKS cost problems while I'm just sitting over
         | here paying nothing for ECS control planes.
        
           | ones_and_zeros wrote:
           | It's the same price as EKS...
           | https://aws.amazon.com/eks/pricing/
        
             | [deleted]
        
           | maishsk wrote:
           | I am not sure how you get to $200/month for an EKS cluster -
           | it is $0.1/hr
           | 
           | https://aws.amazon.com/eks/pricing/
        
             | whatsmyusername wrote:
             | I might be thinking of the numbers I put together for
             | dev/stage/prod.
             | 
             | Turned out to be irrelevant. Following their instructions I
             | couldn't get EC2 runtime hosts to attach after a couple
             | hours of fucking around (which is my standard for, 'is this
             | mature enough to use'), while with ECS I hit one button and
              | was up and running. It wasn't a hard choice when we
              | started dockerizing (especially since I could simplify
              | most jobs even further by having dev use Fargate, albeit
              | at a premium).
             | 
             | EKS struck me as a feature parity product, not something
             | you'd actually use.
        
         | geerlingguy wrote:
         | The 'cluster control plane is free' selling point was basically
         | the _only_ thing I saw from all the different groups I worked
         | with which was in GKE's favor. Yes you can get one free cluster
         | but anyone serious about using Kubernetes would have _at least_
         | two clusters (a prod and non-prod staging cluster), so unless
         | you're a true hobbyist (and the use case for K8s in that realm
         | is pretty slim unless it's to backstop work projects) this
          | effectively means you're going to pay as much for GKE cluster
          | control planes as you do for EKS.
        
           | sethvargo wrote:
           | Can you help me understand how these changes would be _more_
           | than EKS?
        
             | geerlingguy wrote:
             | Sorry, misread the original post, it would be the same.
        
           | bogomipz wrote:
            | Maybe I'm misunderstanding your comment, but at $0.10 an
            | hour wouldn't GKE pricing be half of the EKS pricing for a
            | managed control plane at $0.20 an hour?
           | 
           | https://aws.amazon.com/blogs/containers/cost-optimization-
           | fo...
        
         | buzzkillington wrote:
         | The projects I've been on that used GCP over AWS fell into two
         | categories.
         | 
         | 1). CEO with delusions of grandeur who thought that Amazon was
         | a direct competitor to their business and should not be given
         | money.
         | 
         | 2). Projects that used Kubernetes.
         | 
          | The second is the only type of project which doesn't result
          | in tens of millions of dollars wasted.
        
         | dastx wrote:
          | For comparison, running an HA master node in London on
          | n1-standard-1 will set you back ~93 dollars per month. On top
          | of that, obviously, you'd be figuring out how Kubernetes
          | works and what the best configuration is, among other things.
          | I don't agree with the blatant bait and switch, but it still
          | works out way better.
        
         | yoshiat wrote:
         | Thank you for your feedback and we understand this was a
         | surprise to you and many.
         | 
         | For cluster per customer architecture, would you be able to
         | look into https://cloud.google.com/kubernetes-engine/docs/best-
         | practic... to see if there is anything useful? We understand
         | changing the architecture isn't easy at all and we'd love to
         | know how we can help.
        
           | andybak wrote:
           | I'm curious whether you're comfortable with this move or not.
           | Something about your tone gives the impression you think this
           | was a strategic error.
        
             | yoshiat wrote:
              | thockingoog's responses below articulate my feelings very
              | well.
             | 
             | https://news.ycombinator.com/item?id=22487726
             | https://news.ycombinator.com/item?id=22487110
             | 
              | What I can do is lower the bar for aggregating clusters
              | with the investments we've made so far. Hope that makes
              | sense.
        
       | marcinzm wrote:
       | I don't offhand remember AWS increasing prices for a service
       | before but I might be wrong. How often does Google increase
       | prices?
       | 
       | For a business I prefer a company that starts with higher prices
       | and then only lowers them to one that may increase them at any
       | time.
        
         | blibble wrote:
         | they added IPv4 address rental costs earlier this year, nearly
         | doubling the cost of a small VM
         | 
         | with no IPv6 option (of course)
        
         | nerdjon wrote:
          | I was wondering the same thing. Nothing comes to mind, but
          | there are enough services I don't use that it's easily
          | possible something slipped through.
          | 
          | AWS lowering compute costs is fairly widely publicized, but I
          | am curious if anyone has compiled a list of the cloud
          | providers (AWS, Azure, and GCP) increasing the costs of
          | services.
        
         | saltysugar wrote:
          | Having worked for AWS, one of the things we got a lot of push
          | back on was offering something free and then walking it back.
          | 
          | AWS really strongly focuses on gaining customer trust, and
          | they will only lower prices, never increase them. They won't
          | turn off things until the last customer stops using them
          | (they might stop new customers from onboarding).
         | 
         | I did not enjoy working for AWS so I left pretty fast, but some
         | of the customer obsession there really impressed me.
        
           | pdelgallego wrote:
            | For an AWS product manager, pricing is a one-way door: once
            | you cross it, there is no way back.
        
         | pojzon wrote:
          | Google Maps, GCE - this is the second time I've seen Google
          | increase prices on highly important parts of a business. A
          | third case will make it a trend.
         | 
         | Makes me rethink whether I want to do any business with Google
         | anymore.
        
       | hughpeters wrote:
       | This makes GKE pretty much impossible to use for side projects.
       | Given that it will cost $100 per cluster per month without
       | including the instance costs.
       | 
       | Edit: A single zonal cluster per GCP account is exempt from this
       | fee, so this comment is inaccurate.
        
         | thockingoog wrote:
         | Free (zonal) cluster per account, regardless of size, should
         | cover a lot of this, no?
        
           | hughpeters wrote:
           | Great point! Didn't catch that one. I'll edit my comment
        
       | larosalia wrote:
       | That's it, I'm out of GCP.
       | 
       | They could at least start working on fixing their horrendous
       | documentation.
        
       | youngdynasty wrote:
       | I run a hobby project in multiple zones because low latency is
       | important, and Kubernetes makes it easy to do so.
       | 
       | There's no way I'm going to pay an extra $73/mo -- I already pay
       | for the computing resources, this should be free.
       | 
       | Looks like I'll be moving away from GKE. It's a shame, I _was_ a
       | big advocate.
        
         | sethvargo wrote:
         | We offer one free zonal cluster, which is specifically designed
         | for your use case :)
        
           | sofaofthedamned wrote:
           | I have personal experience of a few companies in the UK where
           | GCP are offering 90+% discounts to onboard. GCP are spending
           | hundreds of millions to do this. K8S control-planes are a
           | rounding error compared to this.
           | 
           | You could have grandfathered in current deployments but -
           | nope. In the tech world this is up there with killing Google
           | Reader.
        
           | youngdynasty wrote:
           | I use more than one zone because I run TCP services which
           | need low latency. I'll probably just switch to Digital Ocean.
        
       | etchalon wrote:
       | Time to start looking into DigitalOcean more seriously.
       | 
       | G Cloud is already unreasonably expensive and nearly impossible
       | to price manage. It's cool to see them double-down on that.
        
         | Thaxll wrote:
         | DO is for garage projects. If you need anything serious it's
         | AWS / GCP or MS.
        
         | croh wrote:
          | For the last 7 years I have been running on DO and never had
          | any issues. I never understood why it is looked down on. In
          | fact I have faced so many issues with AWS (particularly their
          | old hardware). In one case, our EC2 instance was rebooting
          | frequently. The AWS team didn't accept any issue on their end
          | and after a few weeks asked us to upgrade the instance
          | because of bad health.
          | 
          | In my experience, AWS is a very expensive cloud with a clunky
          | UI and a big brand name. During consulting gigs, I have seen
          | many customers who want to go with AWS only because of the
          | brand. And later they cry when the bills start to hit the
          | roof, with vendor lock-in on top.
        
           | asdfman123 wrote:
            | I recently tried to spin up a VM for my own use in AWS, but
            | I had to request a limit increase because I wanted a
            | beefier machine. Easy peasy, right? My experience was
            | comically bad.
           | 
           | ====================== First email from AWS (several days
           | after my request): ======================
           | 
            | Thank you for submitting your Limit Increase request.
           | 
            | I'm contacting you to inform you that we've received your
           | Workspaces Application Manager - Total Products limit
           | increase request, for a max of 5 in the Oregon region. I will
           | be more than happy to submit this request on your behalf.
           | 
           | Please note that for a limit increase of this type, I will
           | need to collaborate with our Service team to get approval.
           | This process can take some time as the Service team must
           | review your request first in order to proceed with the
           | approval. This is to ensure that we can meet your needs while
           | keeping existing infrastructure safe.
           | 
           | You may rest assured I will push towards expediting your
           | request to be addressed as soon as possible. As soon as the
           | Service team contacts me I will definitely let you know by
           | email.
           | 
           | In the meantime, please feel free to let me know if you have
           | any additional questions or concerns and I'll be happy to
           | help!
           | 
           | I appreciate your patience while we evaluate your request.
           | 
           | ====================== Second email: ======================
           | 
            | Thank you for your kind patience while we continue to
            | evaluate your Workspaces Application Manager - Total
            | Products limit increase request.
           | 
            | I apologize for the time it is taking to provide you with a
           | resolution as we've always aimed to provide our customers
           | with a rewarding experience that meets and goes beyond
           | expectations. Unfortunately, from time to time there are
           | cases where the final outcome is handled by another
           | department and the time they take is completely out of our
           | hands.
           | 
           | We certainly understand the sense of urgency that you have
           | for this particular request and therefore, we have spent time
           | communicating with the service team to let them know about
           | it. Rest assured that your case is active, being looked into
           | and the sense of priority has been transferred. As soon as we
           | have an update from their end we'll be touching base with you
           | immediately.
           | 
           | I am committed ensuring that you will get the help that you
           | need as fast as possible, so we can ensure everything is
           | being handled to your satisfaction, please feel free to let
           | us know if you have any further questions or concerns through
           | this case, so we can address them as soon as possible.
           | 
           | ============= My response: =============
           | 
           | You can go ahead and cancel my request -- I've decided to not
           | go forward with my project.
           | 
           | ============= Their reply: =============
           | 
           | Greetings from Amazon Web Services.
           | 
           | We're sorry. You've written to an address that cannot accept
           | incoming e-mail.
           | 
           | If you need to contact us, please visit
           | http://www.aws.amazon.com/contact-us .
           | 
           | Thank you for your business.
        
             | [deleted]
        
             | croh wrote:
             | I always suspect, do they still copy configuration manually
             | ? Is this delay because of that ? There was article where
             | in early days AWS was doing it. Even amazon.com was not
             | running on AWS those days. Hope it is not the case.
        
         | itake wrote:
         | DO is pretty famous for losing data.
         | 
         | https://news.ycombinator.com/item?id=20064169
         | https://news.ycombinator.com/item?id=17225665
        
           | whalesalad wrote:
           | DO is famous for being a pain in the ass. They're great for
           | tiny hobby things but honestly I'd never run a
           | prod/serious/client workload there. Too many issues. I've had
           | _multiple_ clients lose a droplet due to a simple credit card
           | expiration.
           | 
           | It's a race to the bottom on price so this doesn't surprise
           | me. They chose this life.
        
             | geerlingguy wrote:
             | To be fair, what is the best way to handle expiring credit
             | cards? For one of my SaaS products, I give a 30 day grace
             | period, then delete the data. If they didn't have a backup,
             | that's on them...
             | 
             | If they delete the droplet the second a single CC payment
             | fails, that's one thing, but I don't believe that's how
             | their system works.
        
               | whalesalad wrote:
               | If you are literally _in the business_ of enabling,
               | storing, and protecting production workloads, data, etc..
                | then catastrophic data loss should be an absolute last
                | resort.
               | 
               | In both of these instances I am referring to a balance of
               | less than $20.
               | 
               | So for less than $20 (a few weeks late) DO says, welp
               | fuck this customer we are going to terminate all of their
               | resources immediately.
               | 
               | This is what DO and others need to do: Put it in your
               | terms that you will keep racking up charges and then send
               | it to collections. Charge interest, charge fees, do
               | whatever you want. Turn $20 into $40. Why? Because
               | businesses do not give a shit... if it is between losing
                | everything or a slap on the wrist (a monetary fee) they
                | will choose the latter every time.
               | 
                | One of my clients had to painstakingly trudge through
                | archive.org to recreate their missing blog posts. How
                | fucking miserable is that? Over a few hundred megabytes
                | of disk that DO could have kept around...
               | 
                | Also, actually _make an effort_ to reach out before doing
               | anything serious. Call phone numbers, email other members
               | on the team to alert them to the issue, etc...
               | 
               | Too many times I have seen some script kiddie throw
               | together a client's WP site and toss it on DO because it
               | is 'so cheap and cool' and yet they forget about
               | everything else: backups, security, managing the box,
               | etc... and inevitably shit will hit the fan.
               | 
               | I was really rootin' for DO in the beginning. I even
               | applied to work there when they were first starting out
               | but did not want to relo to NY. Now I am moving three
               | clients OFF of DO because they are all very unhappy with
               | the level (or lack) of service they've received.
        
               | rstupek wrote:
                | I think the saying "you get what you pay for" applies
                | in this case. People who don't want to pay for things
                | don't get the things.
        
               | mping wrote:
               | I keep backups of my cloud data whenever possible. Mostly
               | a couple hundred mb for small projects, I have been
               | bitten by the same situation in the past
        
               | nuggien wrote:
               | Or...if it was important enough to you that losing it
               | hurts, then maybe pay attention to your emails and don't
               | let things expire and pay your shit on time. And of
               | course, a sane person would backup anything important.
        
               | whalesalad wrote:
               | Yep, and that is something I have since instituted since
               | taking the reigns. Still... DO could turn this lemon of a
               | situation into lemonade by increasing revenue and
               | preventing unnecessary headache for their customers.
        
             | toohotatopic wrote:
             | Why would you use the cloud but have that single point of
             | failure? Billing is also a network activity. Why not have
             | the backup infrastructure linked to another credit card?
        
           | riffic wrote:
            | If you knew how these storage services work under the hood,
            | you would understand that durability is not guaranteed. It
            | is the responsibility of the customer to ensure their data
            | is backed up.
        
           | Sebguer wrote:
            | Except no data was actually lost in that case? They got
            | full access back to their account, and additionally, it
            | wasn't a technical issue - they were accidentally flagged
            | as a fraudulent / abusive account.
        
             | itake wrote:
             | I linked two incidents. The first one required a trending
             | HN post to get resolved. The second, the developer never
             | got their data back.
        
           | etchalon wrote:
           | Goddammit.
        
             | Sebguer wrote:
             | https://blog.digitalocean.com/an-update-on-last-weeks-
             | custom...
        
         | dcow wrote:
          | DO minimum for a cluster is $20/mo. That sure beats $72,
          | although gcloud is offering the first zonal cluster for free.
          | It might still be cheaper to use gcloud for small things. I
          | do really like DO though; I have a few personal projects
          | hosted there.
        
         | partiallypro wrote:
         | Outside of just hating Microsoft...why not Azure?
        
           | etchalon wrote:
           | Azure is as expensive as G Cloud, and less robust.
        
         | jazoom wrote:
         | Look at Vultr too. My last 2 support tickets were responded to
         | within 2 minutes and solved within 10 minutes. Their support
         | has always been good, but unlike almost all other companies, it
         | seems to get better as they grow.
         | 
         | I run a Kubernetes cluster on Vultr and I haven't had any
         | problems.
        
       | jjoonathan wrote:
       | Oh, so now in addition to Google's reputation for killing
       | services, GCP wants a reputation for raising prices?
        
         | justaguyhere wrote:
         | They have done it before. Remember Google Maps?
        
         | dchest wrote:
         | They already had this reputation.
        
       | [deleted]
        
       | vhmaster wrote:
       | Those who have had a chance to look deeper into the k8s control
       | plane structure can understand such a move. There's no free
       | lunch. I mean, master nodes, etcd, and all that stuff in HA has
       | its costs. That's it. Surprisingly, AWS announced a 50% price
       | reduction for the EKS control plane in January this year :)
        
       | pnathan wrote:
       | If the trade is an SLA for a management fee, that is a
       | reasonable business decision, and largely a rounding error for a
       | decently sized company with a well-designed clustering system.
       | Lack of SLAs is a major issue IME with cloud providers.
        
       | rb808 wrote:
       | What resources would the cluster mgmt require? I always thought
       | k8s was the perfect cloud platform, as the management overhead
       | was minimal and you're actually paying for the pod resources you
       | require.
        
       | marvinblum wrote:
       | We moved our cluster into the Hetzner Cloud for Emvi [1]. Yes,
       | they don't offer a managed solution and it took us about two
       | weeks to set up and test the cluster properly. But the cost is
       | less than 1/6 of what gcloud costs us. If you have the resources
       | and knowledge to maintain your own cluster, check it out. They
       | have insanely good pricing (2.96 EUR/month for the cheapest
       | instance, which is faster than Google's $15/month VM).
       | 
       | Here is a very good tutorial on how to set up your own cluster:
       | https://community.hetzner.com/tutorials/install-kubernetes-c...
       | 
       | [1] https://emvi.com/
        
         | dindresto wrote:
         | Do you use any special solution like Rancher or kubeadm?
        
           | marvinblum wrote:
           | kubeadm
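            | 
            | For reference, a kubeadm-based setup like ours is typically
            | driven by a small config file along these lines - the
            | version and CIDR here are made up, and the podSubnet must
            | match your CNI plugin:
            | 
            |   apiVersion: kubeadm.k8s.io/v1beta2  # kubeadm config API, ~1.17 era
            |   kind: ClusterConfiguration
            |   kubernetesVersion: v1.17.4          # hypothetical version
            |   networking:
            |     podSubnet: 10.244.0.0/16          # hypothetical; must match CNI
            | 
            | passed via `kubeadm init --config cluster.yaml`.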
        
         | sphix0r wrote:
          | Hetzner is solid from a perf/price perspective. Mind that the
          | network peering outside Europe is not that great, so it could
          | be a very good choice depending on your user base's location.
        
           | marvinblum wrote:
            | True. In our experience it's fast enough. Additionally,
            | traffic is free unless you hit the 20 TB outgoing traffic
            | limit per node (which will probably never happen to us). In
            | contrast, gcloud charges about 12ct per GB of outgoing
            | traffic (!!!).
        
       | kwils wrote:
       | Note: Azure does not charge a master/cluster management fee (bias
       | disclosure I work for Microsoft)
        
         | cmhnn wrote:
         | I'm not sure any provider appearing to capitalize on a
         | momentary pricing decision is a great idea. All are too big and
         | things change too fast for any crowing as if anything is a long
         | term decision.
         | 
         | Replies on another thread by alleged Google employees appear to
         | indicate they are learning on the fly that doing support
         | matters and costs money.
         | 
         | There are a lot of comments from people wanting free things.
         | You may get those things for a while but it won't last. Free
         | only works for so long in anything no matter how hard that is
         | for some people to figure out across all aspects of life.
        
           | kwils wrote:
            | Not looking to capitalize, just looking to state a fact
            | that has been in place since the service launched. In the
            | earlier comments people were discussing/suggesting
            | alternatives and I believe AKS is worth considering.
        
         | minimaxir wrote:
         | Given this news, I suspect that won't last.
        
           | fernandotakai wrote:
           | yup. also, AKS does not have an SLA --
           | https://azure.microsoft.com/en-
           | in/support/legal/sla/kubernet...
           | 
           | >As a free service, AKS does not offer a financially-backed
           | service level agreement. We will strive to attain at least
           | 99.5% availability for the Kubernetes API server. The
           | availability of the agent nodes in your cluster is covered by
           | the Virtual Machines SLA. Please see the Virtual Machines SLA
           | for more details.
           | 
           | google didn't have one either, so they added it + a price for
           | said SLA.
        
         | jchiu1106 wrote:
         | It wouldn't surprise me if Azure charges for control plane at
         | some point. Seems like EKS and GKE can get away with charging
         | so why not...
        
         | sethvargo wrote:
         | Attempting to capitalize on a competitor's announcement isn't
         | cool and doesn't look great.
        
           | nimos wrote:
           | Seems fine to me. Frankly, what isn't cool and doesn't look
           | great is you not disclosing you work for Google in this
           | comment.
        
           | jchiu1106 wrote:
            | I don't work for Azure, and having used both GKE and AKS I
            | can say GKE is superior. However, I don't understand why
            | this comment isn't cool. It's not like they're capitalizing
            | on your misery. It's a decision you made, and it's fair
            | game IMO that they want to emphasize their advantage over
            | you.
        
           | GKE_Greed wrote:
            | It's actually appreciated. When a vendor makes a change to
            | screw customers, those customers - like me - appreciate
            | hearing from the vendor's competitors.
        
       | [deleted]
        
       | _jezell_ wrote:
       | Ah, the sound of former Oracle executives counting their GCP
       | bonuses.
        
         | toomuchtodo wrote:
          | Why people think cloud providers are benevolent providers of
          | infra is beyond me. Their margin exists because people are
          | willing to pay it. Either run your own metal (k8s arguably
          | makes this easier than VMs alone did in the past) or form a
          | cloud co-op, but absolutely don't be shocked when a business
          | takes money off the table because it can.
         | 
         | Don't marry yourself to a provider, stay portable, it's just
         | good risk management. Pricing changes? Spin up a cluster
         | elsewhere, migrate data, migrate traffic, profit. The terms of
         | the agreement can change at any time.
        
           | m0xte wrote:
           | Yes I agree. But this industry is driven by fashion not
           | engineering or risk analysis both of which are changed to fit
           | the scenery. There are very few purists left who understand
           | this.
        
           | marcinzm wrote:
           | Different companies make money using different approaches and
           | it's perfectly valid to be upset at the approach a particular
           | company is taking. Just because they provide a service
            | doesn't mean they'll try to f*** you over at every chance.
            | For some it's bad long-term business to do that.
           | 
           | AWS, for example, begins with high prices and then lowers
           | them over time. It costs money but you know the maximum.
           | 
           | Google seems to be grabbing you with cheap prices and then
            | jacking them up when you're committed (Google Maps is
            | another example offhand). Maybe not on purpose, but bad
            | initial pricing and ill intent have the same impact
            | externally.
        
             | the_duke wrote:
             | > AWS, for example, begins with high prices and then lowers
             | them over time. It costs money but you know the maximum.
             | 
             | That's only true for official pricing though.
             | 
             | I know of multiple cases where AWS had initially given
             | significant discounts, only to stop doing so once they
             | believed the customer to be firmly tied to the platform.
        
             | toomuchtodo wrote:
             | > AWS, for example, begins with high prices and then lowers
             | them over time. It costs money but you know the maximum.
             | 
             | Past performance is no guarantee of future benevolence or
             | reasonable behavior.
        
               | matwood wrote:
               | > Past performance is no guarantee of future benevolence
               | or reasonable behavior.
               | 
               | Sure, but pushing prices down and eating competitors
               | margin to the benefit of the customer has been Amazon's
               | MO since inception.
               | 
               | Google seems to make a lot of missteps wrt pricing and
               | the cloud. Remember the geo pricing change that put a lot
               | of projects out of business??
        
               | toomuchtodo wrote:
               | Amazon's bandwidth charges are still extortionate. Have
               | those ever gone down?
        
               | marcinzm wrote:
               | Same goes for metal, your data center can suddenly
               | increase your prices massively or go bankrupt.
        
               | toomuchtodo wrote:
               | For sure! That's why I said, "Don't marry yourself to a
               | provider, stay portable". Compute is a commodity, treat
               | it as such.
        
               | marcinzm wrote:
               | Except that costs time and money, and for many people
               | isn't worth it. Good business is not about eliminating
               | risk but understanding and managing it.
               | 
               | For a startup, the risk of AWS doing something is tiny
               | compared to all other risks so not worth spending effort
               | to mitigate.
               | 
               | For a moderately sized company, true, being on multiple
               | clouds may have an advantage.
               | 
               | For a large company, you get long term contracts with AWS
               | that mitigate the risk.
        
       | ljsmith93 wrote:
       | "Anthos GKE clusters are exempt from this fee"
       | 
       | Adopt Anthos or pay, basically.
        
       | pcj-github wrote:
       | Thomas Kurian is going to be the Steve Ballmer of Google, and
       | he's currently tearing up Google from the inside. Google
       | leadership need to wise up and give this guy his walking papers.
        
         | GKE_Greed wrote:
         | he is like GKE, once you start him, you can't stop :)
        
       | johnvega wrote:
       | So what makes anyone think this is the last?
        
       | [deleted]
        
       | mchiang wrote:
       | I haven't done the cost calculations yet, but this might
       | actually make me want to explore EKS pricing. I do love the
       | experience of GKE.
        
         | lain wrote:
         | It's the same as the new pricing that EKS has at $0.10/hour.
         | Guess that rules out my hopes of EKS not charging for the
         | management plane any time in the near future. :'(
        
           | sethvargo wrote:
           | It's slightly different than other cloud pricing because we
           | include a free zonal cluster as a hobby tier.
        
         | sethvargo wrote:
         | You don't need to do the calculations by hand :). We've updated
         | our pricing tool to account for these changes:
         | https://cloud.google.com/products/calculator#tab=container.
        
       | tebruno99 wrote:
       | Now I'm not sure what we were paying for when we paid for our
       | cluster? Isn't the price we pay to use/build the cluster the
       | payment for the cluster?
        
       | MightySCollins wrote:
       | I am really disappointed by this. One of the reasons we moved to
       | GCP from AWS was the ability to create multiple clusters at no
       | extra charge. Now it looks like the pricing matches EKS.
        
       | sethvargo wrote:
       | Hey everyone - Seth from Google here. Please let us know if you
       | have any questions! You can learn more about the pricing changes
       | at https://cloud.google.com/kubernetes-engine/pricing.
        
         | Glyptodon wrote:
         | This change is pretty huge for non-revenue units & small teams
         | at institutions and SMBs. These smaller teams often seem to run
         | two clusters rather than try to split their production and dev
         | environments within one cluster (I think this is even widely
         | recommended for smaller, less experienced outfits). For many
         | the management fee will probably be a large percentage cost
         | increase for units that are very cost sensitive and require
         | significant re-engineering to avoid for units where engineer
         | hours a scarce resource.
         | 
         | Seems weird, given that GKE is basically the main reason people
         | seem to use Google Cloud. These kinds of users aren't big fish,
         | but I suspect a lot of them are going to run.
        
           | donmcronald wrote:
            | Yeah. Having 1 free cluster will just encourage SMBs and
            | hobbyists to resort to the bad practice of commingling dev
            | and production, won't it? It's the same attitude that
            | GitLab has with a lot of their CI stuff. It's not only huge
            | enterprises that want to follow best practices.
        
           | JMTQp8lwXL wrote:
           | It's one free cluster per billing account. Have separate
           | billing accounts for dev and prod usage of GCP. Probably a
           | good idea to follow separate accounts on any cloud, for that
           | matter.
        
         | future_i_snow wrote:
         | Hi Seth - I am a LONG time lurker here on HN, but this news
         | just forced me to create an account.
         | 
         | I am part of a small company which has separated our deployment
         | into a number of sub projects, some of which are: dev, staging,
         | production, ci, etc.
         | 
         | The difference for us will be several hundred dollars per
         | month, and that will make an actual (negative) difference for
         | us. We didn't need a "financially backed SLA" before and we
         | don't need it now.
         | 
         | You asked for a question and here it is: Why isn't a
         | financially backed SLA a part of a billing negotiation? I mean,
         | there are some really cool features in "Anthos" but I am not
         | picking up a phone to find out how much that is going to cost.
         | 
         | If a really useful feature like "Cloud Run for GKE" is
         | awkwardly placed in the "Anthos" box, then why isn't the SLA
         | part of "Anthos" too?
         | 
          | Free clusters were a huge part of why we selected GCP. If
          | this SLA nonsense isn't made optional, our next project is
          | not landing on GCP.
        
           | sethvargo wrote:
           | Thank you for the feedback. I'll relay this to the product
           | team. I feel your frustration and, unfortunately, I do not
           | have much to offer beyond my promise to relay this feedback
           | and the items I've expressed in other responses.
        
         | numbsafari wrote:
         | Hey Seth,
         | 
         | Thanks for being the recipient of everyone's (justifiable)
         | frustrations. They probably don't pay you enough.
         | 
          | I think what is especially frustrating about this is that we
         | do already pay for resources that are provisioned by our K8S
         | clusters. We pay for the network traffic, the storage, the
         | compute. I saw you mention StackDriver... we pay for that as
         | well.
         | 
         | I can appreciate that actually setting up and managing GKE
         | backplanes is a non-trivial expense, but I generally assumed
         | that that cost was amortized out, just like I don't pay for the
         | backplane that runs GCE and the rest of GCP's service suite.
         | 
         | I also appreciate that you mention some customers are perhaps
          | taking advantage of this "free" resource. But isn't that what
          | quotas are for?
         | 
         | Frankly, more concerning than the fact that now I have a new
          | $73/mo. fee attached to my account (which is not the end of
          | the world) is that this really comes out of left field, in
          | the context of concerns about the nature of GCP's new
          | leadership, and reports of Google leadership debating GCP as a
         | going concern. I realize a lot of that isn't well founded, but
         | it's surprises like this one that keep that narrative alive.
         | AWS ain't no saint, but they are pretty consistently who they
         | are: not full of bad surprises.
         | 
         | This just leaves a bad taste in the mouth, and makes me wonder
         | if I can expect other surprising cost increases, or perhaps, if
         | these don't "work", worse surprises like deprecation notices.
         | Is this the precursor to you all discontinuing GKE because, as
         | the DevRel class likes to tweet, nobody should be using
         | Kubernetes if they can use (more expensive) services like Cloud
         | Run?
         | 
         | Are we about to get Oracled?
        
           | sethvargo wrote:
           | > They probably don't pay you enough.
           | 
           | Can confirm :)
           | 
           | > ...we do already pay for resources that are provisioned by
           | our K8S clusters
           | 
           | Customers are charged for worker nodes, but until this point,
           | the control plane ("master") nodes have been free. In
           | addition to the raw compute costs for those nodes, there's
           | the SRE overhead for managing, upgrading, and securing them.
           | 
           | > ...but I generally assumed that that cost was amortized out
           | 
            | <googlehat>I'm not really sure.</googlehat> <civilian>My
            | guess would be that, initially, this was the case. However,
            | over time, people have created many zero-node clusters, and
            | now the amortization no longer holds. Again, pure
            | speculation.</civilian>
           | 
           | > But, isn't that quotas are for?
           | 
           | See my comment above about zero-node clusters.
           | 
           | > I have a new $73/mo. fee attached to my account (which, is
           | not the end of the world) is that this really comes out of
           | left field...
           | 
            | Acknowledged, but I do want to highlight that the changes
            | take effect a few months from now (June 2020), not
            | immediately. Furthermore, each billing account gets one
            | zonal cluster with no management fee.
           | 
           | > Is this the precursor to you all discontinuing GKE because,
           | as the DevRel class likes to tweet, nobody should be using
           | Kubernetes if they can use (more expensive) services like
           | Cloud Run?
           | 
           | 100% no. Also, Cloud Run is almost always cheaper than
           | running a Kubernetes cluster.
           | 
           | > Are we about to get Oracled?
           | 
           | I'm not sure what you mean by that verb.
        
             | sofaofthedamned wrote:
              | So couldn't you charge for the control plane just on
              | zero-node clusters?
        
             | JMTQp8lwXL wrote:
             | > Furthermore, each billing account gets one zonal cluster
             | with no management fee.
             | 
             | This seems like a really important detail. For hobby
             | projects, one cluster in one zone should be enough. Per
             | your statement, those people will not be impacted. With
             | this knowledge, I'm experiencing much less FUD.
        
             | Glyptodon wrote:
              | I think the reason a lot of people create zero-node
             | clusters is that they want to "turn off" their cluster
             | without destroying its current configuration or state,
             | which otherwise doesn't seem possible.
             | 
             | I may be missing something here, but my guess is that a lot
             | of people turn to GKE to learn how to use K8s, and then are
             | like "wait, I'm in the middle of this
             | project/tutorial/etc., but I don't want to be billed
             | overnight when it's literally just going to be doing
             | nothing, what do I do?" and find Stack Overflow or
             | something recommending you just scale it to zero. See
             | questions like this:
             | https://serverfault.com/questions/877619/turn-off-a-
             | cluster-...
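              | 
              | For reference, the usual way to "pause" a cluster like
              | this is to scale its node pool(s) to zero, something
              | like (cluster and pool names here are placeholders):
              | 
              |   gcloud container clusters resize my-cluster \
              |     --node-pool default-pool --num-nodes 0
              | 
              | The control plane keeps the cluster's configuration and
              | state around while no worker VMs are billed.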
        
             | numbsafari wrote:
             | Thank you for the detailed response.
             | 
             | > In addition to the raw compute costs for those nodes,
             | there's the SRE overhead for managing, upgrading, and
             | securing them.
             | 
             | By that logic, can we expect to see charges for GCP
             | Projects and the GCP Console? Cloud IAM?
             | 
             | > people have created many zero-node clusters
             | 
             | I'd be really curious what is driving folks to do that. Are
             | they using the backplane for CRDs and custom controllers
             | and no compute?
             | 
             | This feels like it could be addressed the same way as
             | alpha clusters, or with a quota, e.g.: clusters with 0
             | nodes for > 24 hours get terminated?
             | 
             | Separately, it seems like handing everyone 3 months to
             | figure out what to do about a new $73 * X fee isn't the
             | best plan. Including some kind of estimate in the emails
             | that were sent out would have been helpful. There was a
             | change in pricing for StackDriver a while back that did
             | this. It was very helpful to understand how we would be
             | impacted.
             | 
             | > Furthermore, each billing account gets one zonal cluster
             | with no management fee.
             | 
             | My feedback is that you would probably be getting _way_
             | less blowback if that free tier didn't come across as
             | inadequate. I can appreciate that there are use-cases where
             | it makes sense for you all to be charging. But one zonal
             | cluster... It makes the whole thing feel punitive.
             | 
             | > I'm not sure what you mean by that verb.
             | 
             | I have a feeling we're all about to go on a journey of
             | discovery together.
        
               | ssmw wrote:
               | If abuse of zero-node clusters is an issue, wouldn't it
               | be better to introduce a zero-node cluster fee the same
               | way you charge for unused reserved IP addresses?
        
               | remram wrote:
               | Also, how much resource does a 0-node cluster actually
               | use on the control plane?
        
               | jonas21 wrote:
               | > I'd be really curious what is driving folks to do that.
               | 
               | I was one of those people. I got an email from Google
               | this morning and thought "that's weird. I didn't even
               | know I was running a Kubernetes cluster." I think I
               | created it years ago to work through a Kubernetes
               | tutorial and, since it was free, never bothered to delete
               | it.
               | 
               | So, I can imagine this being a problem. Though it seems
               | like having a minimum hourly charge per cluster would
               | have been a better way to handle this (i.e. if your
               | cluster is using less than $0.10/hr in resources, you get
               | charged the difference).
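                | 
                | A quick sketch of that proposed rule in Python (the
                | 10-cent floor and the usage figures are just
                | assumptions for illustration):
                | 
                |   def hourly_charge(usage_dollars, floor=0.10):
                |       # Bill actual usage, topped up to the floor.
                |       return max(usage_dollars, floor)
                | 
                |   hourly_charge(0.00)  # idle cluster -> 0.10
                |   hourly_charge(2.50)  # busy cluster -> 2.50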
        
               | foota wrote:
               | That seems like a really good idea, maybe they should
               | look at doing that? As noted, $73 should be a trivial
               | charge both from Google's perspective and the customer's
               | for an actual cluster.
        
             | defen wrote:
             | >> Are we about to get Oracled?
             | 
             | > I'm not sure what you mean by that verb.
             | 
             | The CEO of Google Cloud is the former President of Product
             | Development at Oracle Corporation. Oracle Corporation has a
             | reputation for being incredibly hostile to their customers,
             | which includes things like "finding creative new ways to
             | charge our customers more money. I mean, what are they
             | going to do, switch to Postgres? lol"
             | 
             | I think the fundamental problem is that Google Cloud's
             | reputation is irreparably harmed by Google's overall
             | reputation among developers. Treating this like a tactical
             | or technical problem ("our solution is the best and
             | cheapest!") is missing the forest for the trees.
        
         | jerendy92 wrote:
         | Why is this change coming in? I can hardly see Google's
         | costs to provision and 'manage' K8S having increased over
         | the past 3 years, especially given that it's used in
         | production there. Also, given that no-cost K8S clusters were
         | pushed by your sales and marketing teams back in 2018 as a
         | significant benefit for switching to GCP, it doesn't really
         | inspire confidence in GCP if we're just going to be shafted
         | further on down the line. Lastly, $0.10/hour is
         | expensive given that a 'managed' k8s cluster can be rolled out
         | using Terraform and Ansible with a bunch of GCE nodes with
         | minimal effort. This frankly just feels like a cash grab for
         | those that are either inexperienced/unfamiliar with cluster
         | management/provisioning, or for those that are in too deep
         | with GCP and won't have another option other than to pay the
         | piper, so to speak.
        
           | sethvargo wrote:
           | Thank you for the question. While I can't go into deep
           | detail... as with most free things, people find a way to
           | abuse the system. While we've invested significant effort to
           | curtail such abuse, this is the road we've landed on.
           | 
           | To your point about running your own K8S cluster - two
           | things:
           | 
           | 1. That's something you have always been (and still are)
           | entitled to do.
           | 
           | 2. Having personally run large-scale K8S clusters, the
           | challenge isn't provisioning, it's maintenance, security
           | patches, upgrades, etc.
        
             | EpicEng wrote:
             | So...
             | 
             | You guys roll out a free service. You tell your sales
             | people to hype it up as a benefit over other providers. You
             | somehow don't anticipate that some users will "abuse" the
             | free service, so you hike up rates for everyone?
             | 
             | Sorry, I don't think you're likely to find much empathy on
             | this one.
        
               | geodel wrote:
               | > You guys roll out a free service. You tell your sales
               | people to hype it up as a benefit over other providers.
               | 
               | Strange argument. It is basically what the whole world
               | does: give away some free or heavily discounted product
               | or service in the hope of gaining market share, then
               | later raise the price / start charging for that thing.
        
         | GKE_Greed wrote:
         | Hi Seth, two messages this change is sending: 1) GCP can
         | arbitrarily add additional fees to services we consume whenever
         | a PM is under pressure to increase revenue. 2) GCP pricing only
         | goes one direction: UP
         | 
         | I know you are just the messenger here, and I send my sincere
         | sympathies that you have to work with a product manager there
         | who can't compute the strategic impact of this change :)
        
         | jchiu1106 wrote:
         | This is really disappointing. I've been a big proponent of GKE
         | not only to my employer but to my friends as well. I think it's
         | the best Kubernetes implementation available. The justification
         | for a management fee b/c there are abusers just feels like an
         | excuse for making some extra revenue. Truly, with Google's
         | prowess, you can detect and deal with abusers without having
         | to raise costs for everybody. I'm worried this is going to
         | dampen the momentum of Kubernetes adoption, unfortunately...
        
           | sethvargo wrote:
           | It's not _just_ abuse. It's not _just_ the new SLA. It's
           | also the additional functionality we've built beyond just
           | Kubernetes, and how simple we have made the offering
           | (auto-scaling, etc.).
        
         | awslattery wrote:
         | Hey Seth, thanks for taking to the comments here; sad I
         | won't be able to catch one of your talks at Next this year
         | in person.
         | 
         | I'd like to share some feedback that echoes that of other
         | commentators, from a different perspective.
         | 
         | I run a local cloud developer community with regional pull for
         | attendees, as well as working directly with local early-stage
         | startups looking to become cloud-native.
         | 
         | GCP has always been my go-to recommendation for our
         | attendees (a mix of developers and technical founders, and
         | some enterprise technology folks) given the affordability
         | factor; the pathways to additional credits to flesh out
         | ideas, learn new technologies, or stretch the limited runway
         | of a new organization; and ultimately my belief that GCP is
         | one of, if not the best, clouds for developers given the
         | investment in documentation and engaged DevRel channels.
         | 
         | With the rollback of open-enrollment into a smaller plan of
         | Google Cloud for Startups, and price changes like this, I'm
         | fearing I've chosen the wrong hill to die on when talking with
         | these new customers.
         | 
         | I appreciate the inclusion of a free zonal cluster per
         | account, which will still afford me the opportunity to
         | demonstrate k8s at meetups and to end users without taking
         | more of an out-of-pocket hit, and for folks to learn on
         | their own or maintain hobbyist projects on the same budgets
         | they are accustomed to.
         | 
         | My fear with this announcement is that the negative
         | repercussions will not be felt on the bottom line or the
         | figures that, it seems, are more and more the priority of
         | Google Cloud's leaders. Rather, they will be felt hardest by
         | the smaller customers: the hobbyist developer or technical
         | co-founder looking to learn new technologies and to scale up
         | their operations, who, at least in my experience, are
         | driving growth in the mindshare around GCP in their
         | communities.
         | 
         | Put another way, moves like this will further tarnish
         | Google's reputation with those to whom the sales engineers
         | have, for the last two years, heavily promoted the lack of
         | cluster management fees "like the other guys," and in the
         | eyes of the many starting out in these areas (of whom I
         | recognize most will never become the big customers that
         | satisfy the requirements of executives).
         | 
         | I hope that when the dust settles, this does not lead to a
         | retraction of what makes Google Cloud great in my mind, which
         | is specifically its developer experience and outreach.
         | 
         | With that said, I would suggest really driving home this
         | change through dismissable in-console communication at the
         | point of cluster creation, on the dashboard, and in email
         | communication to folks who this will impact, with a clear
         | picture of the impact on them. No one wants another large
         | disruption of thousands of small organizations and users, as
         | was the case with the Google Maps pricing change.
         | 
         | Personally, I'd love to see it increased to one free regional
         | or zonal cluster per account for the remainder of 2020, and
         | then make only one zonal cluster free per account from 2021.
         | Given the uncertainty around engineering capacity and
         | scheduling amid the ongoing human malware crisis affecting
         | companies
         | large and small, I think this could be a good middle ground to
         | satisfy most customers affected by these changes, while still
         | achieving the objective of moving this away from being a loss
         | leader of sorts.
        
           | sethvargo wrote:
           | This feedback is super valuable - thank you for sharing. I'll
           | be sure to relay it to the product team.
        
         | sethvargo wrote:
         | See also: https://news.ycombinator.com/item?id=22485679
        
         | mleonard wrote:
         | Very disappointed. Not by the price increase per se... but by
         | the lack of a reasonable 'always free tier'. I think you should
         | strongly consider tweaking the pricing to provide one or two or
         | three multi-zone clusters for free instead of one single-zone
         | cluster. Let us see the power of GKE without the extra charge
         | and grow on your platform. This would allow new companies to
         | choose gcp over aws/azure and start out with a proper highly
         | available cluster or two/three in different regions and grow to
         | more clusters over time. With the new pricing you're forcing
         | them to choose between a single-zone cluster or $70 per
         | month per cluster (or another cloud). Please consider
         | tweaking the new pricing to enable a lower price ramp-up for
         | newer companies... why not offer three multi-zone
         | same-region clusters for free and then charge more
         | established enterprises using
         | more than 3 clusters? I appreciate the money is in the big
         | customers... but why scare away the small customers who want
         | 1-3 highly available clusters behind a gclb to provide higher
         | availability and lower global latency. The mindshare of
         | developers will move away from gke if you're not careful...
         | both to aws/azure and others like DO kubernetes.
         | 
         | I believe the community is keen to engage with you on this,
         | based on the comments in this thread. If your team would
         | like to talk to a disappointed (very small but hoping to
         | grow) customer, I'd be happy to jump on a call. I hope
         | others here would be happy to do the same.
        
           | sethvargo wrote:
           | I think you'd be interested in our Google Cloud for Startups
           | program: https://cloud.google.com/developers/startups
        
             | mleonard wrote:
             | It's a good program. I'm currently at the stage one step
             | before that program. Would your team consider tweaking the
             | pricing as I mentioned, with the goal of helping early
             | stage startups choose GCP? GKE/kubernetes is increasingly
             | not just for big enterprise. Personally I find GKE as easy
             | as App Engine or Cloud Run but much more future-proof and
             | more flexible/powerful... the real heart of a GCP to rival
             | AWS. Just this week I set up Config Connector to provision
             | a global load balancer and other GCP resources used by two
             | clusters. An always free tier of two or three (ideally
             | multi zone clusters) would I think go a long way to earn
             | the trust and belief of many devs and early stage startups.
             | As would coming back in the next few days with tweaked
             | pricing based on community feedback.
        
               | mleonard wrote:
               | Additional comments: You could limit the number of nodes
               | in the always-free-tier clusters. Above n nodes the free
               | tier clusters aren't free.
               | 
               | With the new pricing, I can't choose to use GKE instead
                | of App Engine/Cloud Run and get the same availability (by
               | this I mean multiple zones) without having to pay for
               | both the nodes and the new control plane cost. Those
               | managed products run over multiple zones in a region.
               | It's disappointing that even just one multi-zone cluster
               | is charged. I'd be very happy to see you include at least
               | a single multi-zone cluster control plane in the free
               | tier.
        
               | sethvargo wrote:
               | > Would your team consider tweaking the pricing as I
               | mentioned, with the goal of helping early stage startups
               | choose GCP?
               | 
               | To be clear, it's not my team. I'm relaying feedback, but
               | I can't make any guarantees or promises.
               | 
               | All this feedback is super valid and important, and it's
                | being synthesized and passed on to the product team.
        
               | mleonard wrote:
               | I appreciate that. Thanks for being available on
               | hackernews and helping relay feedback.
        
       | reilly3000 wrote:
       | AKS still has a free control plane. GCP won my business for a
       | bit, but quickly lost it based on some features. I still love
       | StackDriver and BigQuery, but don't love doing business with
       | GCP - the sales/support experience was pretty lacking,
       | networking for
       | serverless was immature for multi-region, and what they are doing
       | with Anthos feels like Oracle (it is).
        
       | MuffinFlavored wrote:
       | How many businesses really need Kubernetes? Can't you orchestrate
       | infrastructure + rolling deploys with Terraform + Docker
       | containers?
        
       | pirate_dev wrote:
       | So happy I went with AWS. Sorry to the Google guys, this bites.
        
       | kwils wrote:
       | Note: Azure does not charge a master/cluster management fee
       | (bias disclosure: I work for Microsoft)
        
       | stevencorona wrote:
       | Wow, this is a huge bummer. A lot of our infrastructure
       | assumptions have been based around having several small GKE
       | clusters.
        
         | thockingoog wrote:
         | I think this trend is not good overall, and people will
         | eventually be very unhappy with it. I'd rather help you figure
         | out how to use fewer clusters.
        
           | seneca wrote:
           | Tim, would you be willing to elaborate on why you dislike the
           | "many small clusters" pattern?
        
             | DevKoala wrote:
             | Actually, could you elaborate on the benefits of your
             | approach? edit: I am asking because this is counter
             | intuitive to anything I'd want to solve with K8. Specially
             | when it comes as a managed service.
        
             | thockingoog wrote:
             | Many small clusters just do not deliver on a lot of the
             | value of Kubernetes. Clusters are still hard boundaries to
             | cross (working to fix that). Utilization and efficiency are
             | capped. OpEx goes up quickly.
             | 
             | There are reasons to have multiple clusters, but I think
             | the current trend takes that too far.
             | 
             | TO BE SURE - there's more work to do in k8s and in GKE.
        
           | tamale wrote:
           | Our cloud service here at Confluent is designed around
           | giving customers their own infrastructure. A lot of the
           | time, that means giving them their own k8s cluster. The
           | management overhead there isn't the issue, however.
           | 
           | The real issue comes into play when you try to make developer
           | environments.
           | 
           | To give our developers any semblance of a "real production-
           | like" workload, they need to work with an entire kubernetes
           | cluster - maybe even a couple - to simulate what's happening
           | in production.
           | 
           | This means at any given time, we have hundreds of GKE
           | clusters because each developer needs a place to try things.
           | Yes, these are ephemeral and can be tossed aside, and yes
           | they cost a tiny bit in VM prices, but adding a per-cluster
           | management fee is going to skyrocket this expense and push us
           | towards trying to figure out ways to share these clusters
           | between developers, which defeats the entire purpose of the
           | project.
           | 
           | We'll have to seriously consider abandoning GKE for this use-
           | case now and that sucks, because it's by far the fastest
           | managed k8s solution we've found so far.
        
             | thockingoog wrote:
             | Try KIND. Much better devex.
        
         | sethvargo wrote:
         | I'm sorry to hear that. You can use namespaces and separate
         | node pools to isolate workloads. We'd love to hear more about
         | your use case for having many small GKE clusters.
        
           | jchiu1106 wrote:
           | Hey Seth, I know you used to work at HashiCorp on Vault. I
           | think Vault recommends that if you want to deploy it on
           | Kubernetes, it should have the cluster to itself.
        
             | sethvargo wrote:
             | That's correct. Vault Enterprise (at my last math) was
             | ~$125k/yr, so that management cost is negligible :)
        
               | [deleted]
        
       | outime wrote:
       | Not that I invested much in GCP, but this was just what I
       | needed to keep myself and my clients completely away from it.
       | Really awful decision. I'm sorry for those affected who
       | trusted them.
        
       | sladey wrote:
       | This is really disappointing. GKE was a staple of Kubernetes
       | adoption, not only for the feature-set but also because there
       | were no overhead costs.
       | 
       | I hope GCP re-thinks this.
        
         | thockingoog wrote:
         | For folks just trying it out, 1 cluster is still free.
        
           | mleonard wrote:
           | For folks just trying it out, 1 cluster is still free...
           | in a single physical data centre. Sadly you'll be charged
           | for running a cluster across two or three data centres
           | (zones) in the same region (eg London).
        
             | thockingoog wrote:
             | This is a fair point. We don't have an HA (multi-master)
             | zonal offering either, because mostly people don't want
             | that.
        
       | maktouch wrote:
       | We're a little bit too locked into GCP to migrate out... But I'll
       | definitely stop evangelizing GCP to people.
       | 
       | I have now lost faith in you, GCP.
        
         | GKE_Greed wrote:
         | same here - losing face now with my team when I argued for GCP
         | (and won) against AWS.
        
       | JMTQp8lwXL wrote:
       | I've been mentioning Google's penny-pinching for a while here.
       | Simple things like Chrome's address bar showing google searches
       | before my bookmarks. It's all part of the monetization. Are fees
       | like this because Google is struggling to continue to grow?
       | That's probably the most concerning thing for Google's future,
       | rather than a $73 fee.
        
       | yashap wrote:
       | I know the HN crowd hates this sort of thing, but seems
       | reasonable to me. $70/month is a very reasonable price, and most
       | businesses likely have well under 10 clusters (I'd think many
       | just have 2-3, a single cluster for prod and then 1-2 dev/test
       | type environments). This is probably mostly to cut down on
       | edge-case users who are spinning up crazy numbers of clusters
       | for weird reasons, and costing Google a bunch of $$$.
        
         | sneak wrote:
         | Furthermore, anyone spending enough on compute to warrant k8s
         | shouldn't balk at all at $70/mo. I think the threshold for
         | introducing the complexity and overhead of k8s probably
         | isn't until at least $5-10k/mo of spend (and probably 3-10x
         | that in the normal case). Less than that, and k8s is a whole
         | lot of
         | overkill.
        
           | briffle wrote:
           | We use Google Cloud projects to isolate customers and
           | environments. (Some of our clients are old school and VERY
           | scared of cloud and multi-tenancy.) So for our pretty
           | small company, we have >60 projects, each with a k8s
           | cluster. That is a pretty good bump in costs come this
           | summer.
           | 
           | Historically, we have powered down all the compute in a
           | project that isn't needed, but left the k8s cluster in place
           | (with its compute nodes powered down) because we could then
           | bring it back up in 2 min or less. That dropped our cost for
           | each project that was powered down to ~$25-$50/month,
           | depending on how much disk space, etc., they were using.
           | This will more than double (or triple) the cost of those
           | projects.
           | If we have to rebuild a full cluster from scratch, we then
           | have to wait for the global load balancer to build our
           | ingress, and then go authorize a TLS cert. This adds 20-30
           | min to re-activating our projects, which will suck.
        
             | zomglings wrote:
             | Perhaps the right thing for GKE to do is introduce a
             | cluster snapshot. Sounds like a great feature.
        
               | briffle wrote:
               | That would actually be pretty awesome.
        
       | jerendy92 wrote:
       | Time to dump GCP then. It's not even that the fee is that
       | large, but rather that this is once again Google failing on a
       | long-term commitment and shafting those on their platform. This
       | was one of the benefits that was pushed by their sales team when
       | they called us up to market GCP over AWS and their EKS offering.
       | Doesn't matter that they are price matching, Google's inability
       | to actually commit to long term support, servicing, pricing or
       | features across any of their products is tiresome. Time to move
       | business elsewhere to AWS or Azure. They may be more expensive,
       | but at least we know what we are paying for, and that it's going
       | to stay that way for a significant length of time.
        
         | simplecto wrote:
         | Never trust free without an escape hatch.
        
         | tpetry wrote:
         | What? A $73 price difference from AWS was their main selling
         | point?
        
           | jerendy92 wrote:
           | It depends entirely on how a firm has their infrastructure
           | set up - if you have small cluster(s) per client for
           | isolation/compliance purposes, you end up with, for
           | example, 250 clients, each one using, say, 1000 billable
           | cluster-hours/month:
           | 
           | 250 * 0.1 * 1000 = $25,000/month.
           | 
           | This is quite the price hike for something that was a) free
           | until now. b) Not a service that warrants such a fee given
           | that it uses existing (GCE) resources and can frankly be done
           | manually by one of the DevOps engineers for a few hours/month
           | and some scripting. It's just a charge for convenience, it
           | seems.
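           | 
           | The same estimate in Python, with the assumptions spelled
           | out (note that a single always-on cluster accrues at most
           | ~730 billable hours in a month, so 1000 hours implies a
           | bit more than one cluster per client):
           | 
           |   FEE_PER_CLUSTER_HOUR = 0.10   # announced rate
           | 
           |   clients = 250
           |   cluster_hours_per_client = 1000  # rough assumption
           | 
           |   monthly_fee = (clients * cluster_hours_per_client
           |                  * FEE_PER_CLUSTER_HOUR)
           |   print(monthly_fee)  # 25000.0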
        
             | sethvargo wrote:
             | You keep noting how "easy" it is to provision and manage a
             | Kubernetes cluster. From experience, properly securing and
             | maintaining a Kubernetes cluster is a multi-person full-
             | time job.
        
               | jerendy92 wrote:
               | We already do provision clusters, using the
               | aforementioned tools. There is some setup involved,
               | but once done, provisioning and upgrading are
               | relatively simple.
               | Indeed, we used to exclusively provision and upgrade via
               | Terraform/Ansible. When we started using GCP, any data
               | that could be stored by a US company without causing
               | compliance issues was offloaded to GCP over other
               | providers due to the auto provisioning/management at no
               | cost.
               | 
               | If you guys find it hard to maintain/upgrade clusters,
               | that's your business. All I am saying is that as a
               | company giving you business, with this change, you are
               | now no longer the cheapest, most reliable or most
               | convenient. As a result, we will be moving to provision
               | instances with other providers from now on.
        
         | thockingoog wrote:
         | If the main value of GKE over DIY is $73, you should totally
         | DIY.
         | 
         | I mostly try not to be too Google-focused here, but I have to
         | say...
         | 
         | I'm pretty proud of GKE, and I think it offers a lot of value
         | other than just being cheap. Managing clusters is not always
         | easy. GKE handles all of that for you - including integrations,
         | qualifications, upgrades, and patching clusters transparently
         | BEFORE public security disclosures happen.
         | 
         | We have a large team of people who deal with making GKE the
         | industry-leading Kubernetes experience that it is. They are on-
         | call and active in every stage of the GKE product lifecycle,
         | adding value that you maybe can't see every day, but I promise
         | you is there. When things go sideways, there isn't a better
         | team on the planet to field the tickets.
         | 
         | I don't understand the anger here - you're literally saying
         | you'd rather pay more for a service of lower quality because...
         | why? Because they will continue to charge you more? Does not
         | compute.
         | 
         | For those people who use a large numbers of small clusters, I
         | understand this may make you reconsider how you operate. As a
         | Kubernetes maintainer, I WANT to say that a smaller number of
         | larger clusters is generally a better answer. I know it's not
         | always true, but I want to help make it true. GKE goes beyond
         | pure k8s here, too. Things like NodePools and sandboxes give
         | you even more robust control.
         | 
         | GKE is the best managed Kubernetes you can get. And we're
         | always making it better. Those clusters actually DO have
         | overhead for Google, and as we make GKE better, that overhead
         | tends to go up. As someone NOT involved in this decision, it
         | seems reasonable to me that things which are genuinely valuable
         | have a price.
         | 
         | Also, keep in mind that a single (zonal) cluster is free, which
         | covers a notable fraction of people using GKE.
        
           | je42 wrote:
           | I believe everything that you say. The value it provides is
           | very good.
           | 
           | If Google Cloud had charged $73 from the start (or after
           | beta), I think there wouldn't be so much anger.
           | 
           | The anger comes from a product that was free and now is
           | not. A lot of people made architectural choices that
           | depended on a price of 0. (You mentioned these cases in
           | your post.)
           | 
           | However, I believe the bigger issue is that Google Cloud
           | essentially broke a promise.
           | 
           | As a customer I need to be able to trust my cloud
           | provider, because I am literally helpless without it.
           | 
           | Can I trust an entity that breaks promises? No, I can't. I
           | need to worry, especially if I cannot follow the reasoning
           | behind it.
           | 
           | If it is true that Google's overhead went up because of
           | improvements, then it would have been better to offer two
           | kinds of clusters (better and paid, old-school and free).
           | You would not have broken the promise. People could choose
           | at their own pace to upgrade if they need to.
           | 
           | Also keep in mind that you carry the Google brand. Hence,
           | if other Google teams break promises (e.g. Stadia), this
           | will also reflect on the Google Cloud team. Unless you
           | keep a crystal-clear track record, I need to assume it can
           | get worse than what you have done right now.
           | 
           | My conclusion is that I will design the cloud architecture I
           | am responsible for, such that it has minimal dependencies on
           | Google Cloud specifics.
        
           | zzzcpan wrote:
           | _> As a Kubernetes maintainer, I WANT to say that a smaller
           | number of larger clusters is generally a better answer._
           | 
           | I'm not trying to nitpick here, but that justification is
           | awful. It goes against reliability engineering on a deeply
           | fundamental level, and pretty much guarantees that already
           | not-so-reliable things become even less reliable.
           | Generally, the more isolated entities you have and the
           | smaller they are, the less
           | they affect each other and the environment when something bad
           | happens, the faster they can be recovered, the fewer end
           | users they affect, etc. If I remember correctly, this is even
           | how some Kubernetes people justified ideas behind Kubernetes
           | itself that you want to drop now.
        
             | thockingoog wrote:
             | That is not absolute truth. If it were, you would eschew
             | Kubernetes altogether and just use VMs.
             | 
             | Everything is a tradeoff. If you want total isolation, you
             | pay for it. If you don't want to pay for it, you make more
             | value-based tradeoffs.
             | 
             | Concretely, Google runs "a handful" of "pretty reliable"
             | services on a relatively small number of clusters.
        
           | solatic wrote:
           | > I don't understand the anger here - you're literally saying
           | you'd rather pay more for a service of lower quality
           | because... why? Because they will continue to charge you
           | more? Does not compute.
           | 
           | This response, right here, is everything you need to
           | understand about why Google Cloud is failing to sell to the
           | enterprise market.
           | 
           | The enterprise market only really cares about one thing: rock
           | solid stability. It doesn't care about features, and it
           | doesn't (really) care about price. It wants a product that it
           | can forget is there.
           | 
           | What's really sad is, technically, GKE is that product. It
           | just works. It is solid. You do get to forget that it's
           | there. Until you get a random email telling you that you get
           | to explain to your boss that your bill is going up next month
           | and your project might end up running over budget as a
           | result.
           | 
           | If you can understand why a large segment of the market
           | prefers to pay a higher but stable charge over a lower but
           | undependable charge, then you can understand why Google Cloud
           | is failing at selling to enterprise.
        
             | Aeolun wrote:
             | In addition, AWS charges only ever go down. I don't think
             | I've ever seen a price increase.
        
           | atombender wrote:
           | Just a data point:
           | 
           | I'm the CTO at a very small company. All our stuff is running
           | on GKE. Our monthly bill tends to be a lot less than $10,000.
           | We're currently in the process of splitting our stack into
           | separate projects and clusters, because co-locating projects
           | in a single cluster has gotten messy. We'll probably end up
           | with 4-5. That will increase our bill by $292/mo, worst case,
           | assuming the first cluster is free. For a company our size,
           | it's not a huge expense. But _these things add up_.
           | 
           | Since moving from DigitalOcean, our Google Cloud setup has
           | more than doubled our monthly bill. We're paying for more
           | compute, but certainly not twice the amount, as we've only
           | gone from 14-15 nodes to around 20; it's just more expensive
           | across the board, both node cost and ingress/egress. We're
           | even cost-cutting by using self-hosted services instead of
           | Google's; for example, we use Postgres instead of Cloud SQL.
           | I ran the numbers earlier today; the equivalent on Cloud SQL
           | would be 3.4 times more expensive.
           | 
           | In short, Google Cloud is expensive, and it's not like the
           | bill is getting smaller over time.
           | 
           | Developments like these factor into my choice of cloud
           | provider for future projects.
        
             | TheFiend7 wrote:
             | For sure, though I will say if you're trying to cut
             | costs, it's my understanding serverless is quite cheap.
             | So if you can turn some of your services into serverless
             | containers/functions, I'd highly recommend it.
        
             | echelon wrote:
             | How much would it cost for you to provision and colocate
             | your own hardware, run a k8s cluster, and manage upgrades?
             | 
             | You might not be at the scale where this is feasible yet
             | since that's probably multiple full-time engineers, but
             | eventually the cost functions intersect.
        
               | atombender wrote:
               | Well, we don't even have a dedicated ops person.
        
             | hn_throwaway_99 wrote:
             | Not sure what size a "very small company" size is, but I'm
             | just curious as to why you chose GKE. I make tech decisions
             | for a (probably) much smaller company, and I found things
             | like App Engine Flexible Environment, Cloud Run and Cloud
             | Functions let me do much of the stuff I can do with k8s but
             | with much, much less complexity (at least on my side of
             | things). The main factor is that I don't have a full-time
             | infrastructure expert, and my experience in the past is
             | that k8s essentially requires that.
        
               | atombender wrote:
               | Less than 15 employees. Several products, two teams, <=
               | 20 nodes.
               | 
               | We migrated our stuff from DigitalOcean around 2018. At
               | the time, we briefly toyed with the notion of self-
               | hosting Kubernetes on DO, but it's complex to manage, and
               | we don't have any dedicated ops staff. GKE is
               | significantly easier to manage.
               | 
               | At the time we migrated, the things you mentioned
               | weren't available/mature, I think. Even today, I'd
               | choose
               | Kubernetes over a complex mishmash of different systems.
               | I like the unified, extensible ops model. In fact, I'd go
               | so far as to say that I wish all of GCP could be managed
               | as Kubernetes objects.
        
               | praveenperera wrote:
               | But DigitalOcean has managed K8s now:
               | https://www.digitalocean.com/products/kubernetes/
        
               | atombender wrote:
               | DigitalOcean did not have Kubernetes then. Are you
               | suggesting we should spend 6-12 man months migrating
               | back?
        
               | praveenperera wrote:
                | No, my mistake - DOKS came out in late 2018.
               | 
               | I had been using it since May 2018 but it didn't come out
               | of early access till December.
        
               | tie_ wrote:
                | How about contracting an ops-oriented person for a
                | month who would do the migration for you? Where do
                | those cost
               | functions intersect?
        
               | atombender wrote:
               | Would never happen. Just the amount of time needed to
               | dedicate to onboarding a temporary contractor would be
               | really disruptive to the developers, not to mention the
               | disruptive effect of the technical migration -- databases
               | to move over, persistent volumes to copy, DNS to repoint,
               | lots of downtime, etc. There's a good reason companies
               | don't switch clouds often.
        
               | briangrant wrote:
               | Re. managing GCP as Kubernetes resources:
               | https://cloud.google.com/config-connector/docs/overview
        
               | atombender wrote:
               | That's very cool, thanks! Note that this allows
               | selectively _creating_ Kubernetes resources backed by GCP
               | resources. Looks like it will not automatically sync
               | everything that already exists, which seems like a missed
               | opportunity.
        
               | atombender wrote:
               | We have about 200 long-running pods right now. On Cloud
               | Run, that would cost us more than $12,000 in CPU alone,
               | and that's excluding memory and request costs.
               | 
               | That also excludes stateful apps like Elasticsearch that
               | would not be able to run on Cloud Run. Not sure what
               | Google product is appropriate there.
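                | 
                | Rough check, assuming Cloud Run's list price at the
                | time of about $0.000024 per vCPU-second and one vCPU
                | per pod, always on:
                | 
                |   PRICE_PER_VCPU_SECOND = 0.000024  # assumed rate
                |   SECONDS_PER_MONTH = 60 * 60 * 24 * 30
                | 
                |   pods = 200
                |   cpu_cost = (pods * SECONDS_PER_MONTH
                |               * PRICE_PER_VCPU_SECOND)
                |   print(round(cpu_cost))  # ~12442, CPU only
                | 
                | Memory and per-request charges would come on top of
                | that.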
        
           | polskibus wrote:
           | Shouldn't the investment in Kubernetes bring its running cost
           | down instead of increasing it?
        
           | alasdair_ wrote:
           | >If the main value of GKE over DIY is $73, you should totally
           | DIY.
           | 
           | It's not the fee itself, it's the worry that GKE will do what
           | Google Maps did and _massively_ increase fees with very
           | little notice, causing people to scramble to migrate.
           | 
           | Google has a really bad reputation right now when it comes to
           | cancelling projects that people have built their businesses
           | upon, or jacking up fees quickly. The $73 is irrelevant on
           | its own - the issue is (a lack of) customer trust.
        
             | thockingoog wrote:
             | Google and Google Cloud are largely different businesses,
             | though I understand it's hard to keep that in mind in the
             | context of things like this.
             | 
             | I encourage everyone to always stay nimble and keep your
             | eyes on portability. I also encourage you to try to assess
             | the REAL costs of doing things yourself. It's rarely as
             | cheap as you think it is.
             | 
             | As a Kubernetes maintainer, I am fanatical about
             | portability.
             | 
             | As part of the GKE team, I think we provide tremendous
             | value that people tend to under-estimate.
             | 
             | NOTE: I was NOT involved in this decision, but I understand
             | it, and I want to help other people understand it.
        
               | GKE_Greed wrote:
               | sorry that you have to work with such low-IQ people that
               | made this decision. Followed your work on k8s, thank you,
                | it's better because of people like you and others.
        
               | thockingoog wrote:
               | I don't think that is at all a fair characterization, you
               | just don't have the same data available to you.
               | 
               | Thanks for the props. It means a lot to me personally.
        
               | Aeolun wrote:
               | Google and Google Cloud are not largely different
               | businesses to the world at large. That may be true
               | internally, but what you do reflects on the other.
               | 
               | Aside from that, Google has a reputation for pulling this
               | shit, and now Google Cloud does too.
               | 
               | The value you provide is irrelevant to how you make
               | people feel with decisions like this.
        
           | sah2ed wrote:
           | > _I don 't understand the anger here ... Does not compute._
           | 
           | The saying _"the market's perception is your reality"_ is
           | especially apt here. Google's decision makers tend to
           | forget that in the end they are dealing with human
           | customers, not machines. Contrary to the concept of
           | economic rationality, humans are notorious for exhibiting
           | behavior that, to the untrained eye, appears irrational.
           | 
           | A commenter helpfully explained their perception of the new
           | pricing change:
           | 
           |  _"The anger comes from, a product was free and now it is
           | not. A lot of people made architectural choices that depended
           | on the price of 0. (You mentioned these cases in your post).
           | However, i believe the bigger issue is, that Google Cloud
           | broke essentially a promise."_
           | 
           | IOW, from their perspective, the pricing change was framed
           | as a loss [1], which opened up the host of negative
           | emotions (anger, mistrust, etc.) that come with mitigating
           | an imminent loss.
           | 
           | Google as an engineering company may look down on fields
           | like psychology or behavioral economics, but if they
           | genuinely want a fighting chance against AWS and Azure,
           | they will need to court sales leaders with a strong
           | humanities tinge, to avoid these kinds of decisions that
           | achieve the opposite of the intended effect -- eroding
           | people's trust in GCP.
           | 
           | [1] https://en.wikipedia.org/wiki/Loss_aversion
        
         | sethvargo wrote:
         | Thank you for the feedback.
         | 
         | > Google's inability to actually commit to long term support...
         | 
         | This is _exactly_ what Google is doing in this case. We are
         | providing an SLA - a legally binding commitment to
         | availability and support. These changes introduce guaranteed
         | availability for the management control plane.
        
           | sladey wrote:
           | Shouldn't that be opt-in? The management control plane is not
           | something we consider critical to operations. I'd happily
           | accept it being unavailable for a minute and a half a day
           | versus these additional costs.
        
             | sethvargo wrote:
             | That's great feedback. I'll relay that to the product team.
             | IANAL, but I think it would be legally challenging.
        
               | sladey wrote:
               | IANAL either, but I don't see why it would be? Just have
                | a separate cluster type, e.g. SLA Zonal, SLA Regional. The
               | SLA already differentiates the current cluster types.
               | Anthos Clusters are also not subject to any additional
               | fees?
               | 
               | And having it opt-in will save face with those users of
               | GKE where an additional $73/m is significant.
        
               | waffle_ss wrote:
               | Hard to understand how it would be legally challenging.
               | ISP's do it all the time when differentiating their
               | business plans from residential. Both services run over
               | the same infrastructure and you typically get the
               | same/similar speeds, but a key difference is an SLA with
               | the business plan.
        
               | bavell wrote:
               | Opt-in for the SLA and additional cluster cost would be
               | fantastic. We run pretty small clusters but don't need
               | any additional SLAs on top of what's already provided.
               | Frankly, we couldn't care less about the control plane
               | SLA.
        
           | notyourday wrote:
           | > We are providing an SLA - a legal agreement of availability
           | and support.
           | 
           | Do I still have to pay the bill first, fill out forms, get
           | account managers involved, at some point receive a partial
           | credit, and repeat this until the delta between what I
           | expected as the SLA credit and what I got as the SLA credit
           | is less than the cost of the time to fight for another cycle?
        
           | jerendy92 wrote:
           | Sure, in this case I can see that. I was referring to those
           | four points with respect to Google services in general. I'm
           | sure I don't need to dig up a list of features and services
           | that have been merged, shuttered, price hiked or moved into a
           | different product suite over the years. Admittedly a lot of
           | the issues are with the GSuite side of things, but it's sad
           | to see this coming to GCP as well.
           | 
           | On a hopefully more constructive note, if this is the way
           | it's going to be from now on, I would at least expect to see
           | an exemption on such a management fee/SLA on preemptible
           | nodes - having an SLA and management fee on the cluster
           | whereby nodes can be killed in a 30-second window without
           | prior warning seems more than a little pointless.
        
             | sethvargo wrote:
             | Even if your worker nodes are pre-emptible, the master
             | nodes are not. The management fee covers running those
             | master nodes and many other core GCP integrations (like
             | Stackdriver logging and other advanced functionality).
             | Billing is computed on a per-second basis for each cluster.
             | The total amount will be rounded to the nearest penny at
             | the end of the month.
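             | 
             | A minimal sketch of that accrual, assuming the announced
             | $0.10/cluster/hour rate:
             | 
             |   FEE_PER_HOUR = 0.10
             |   fee_per_second = FEE_PER_HOUR / 3600
             | 
             |   # e.g. a cluster that existed for 10 days this month
             |   seconds = 10 * 24 * 3600
             |   charge = round(seconds * fee_per_second, 2)
             |   print(charge)  # 24.0, rounded to the penny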
        
           | txomon wrote:
           | He means that the sales pitch from all the GCP sales folks
           | was that there was no charge for this. 99.95% is not
           | enough, IMO, to justify $73/mo.
           | 
           | As someone else noted, it breaks a lot of recommended
           | architectures where you would have auto provisioning and a
           | lot of clusters to separate concerns and keep costs down.
           | 
           | Finally, the pricing changes are starting to look like a
           | pattern, every time Google deems the usage of a product is
           | good enough, they will increase the price.
           | 
           | They are the Ryanair of the cloud.
           | 
           | Edit 1: moreover, it will increase the cost of Composer,
           | and on top of that, of the recommended pattern where
           | Composer is paired with a Kubernetes cluster for executing
           | the workloads.
        
             | allendoerfer wrote:
             | > They are the Ryanair of the cloud.
             | 
             | Isn't Ryanair literally the Ryanair of the cloud(s)?
        
             | Niksko wrote:
             | EKS only gives you 99.9% uptime, and I'm uncertain as to
             | whether you could achieve more than 99.9% uptime on your
             | own by DIYing your cluster in a public cloud provider
             | without doing multi-region.
        
               | physicles wrote:
               | To put that in perspective, three 9's allows you about 9
               | hours of downtime a year, which will certainly require
               | multi-region and a dedicated ops team.
               | 
               | Two and a half 9's is a whole different story. We
               | achieved about 20 hours of downtime last year even
               | without HA on k8s bare metal in Alibaba cloud. But I'm
               | uncertain whether that's a feat we can repeat this year.
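                | 
                | For anyone converting SLA percentages into downtime
                | budgets, the arithmetic is simply:
                | 
                |   HOURS_PER_YEAR = 24 * 365
                | 
                |   for sla in (0.999, 0.9995, 0.995):
                |       allowed = (1 - sla) * HOURS_PER_YEAR
                |       print(sla, round(allowed, 1), "h/yr of downtime")
                | 
                |   # 0.999  ->  8.8 h/yr
                |   # 0.9995 ->  4.4 h/yr
                |   # 0.995  -> 43.8 h/yr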
        
             | bartread wrote:
             | > Finally, the pricing changes are starting to look like a
             | pattern, every time Google deems the usage of a product is
             | good enough, they will increase the price.
             | 
             | To be fair this is hardly new and by no means limited to
             | Google. Any number of SaaS startups that have survived to
             | at least moderate success have done similar things.
             | 
             | Look at UserVoice as an example: started out with a free
             | tier plus some reasonable paid tiers with transparent
             | pricing, then a year or two back killed the free tier and
             | moved to a non-transparent "enterprise" pricing model with
              | absolutely exorbitant fees.
             | 
              | Plenty of other companies offer free tiers to build
              | their user base and reach, then either water down the
              | free tier or remove it entirely. It's practically _the_
              | SV modus operandi of the last decade.
        
       | GKE_Greed wrote:
       | This is penny-wise, pound-foolish, GCP.
        
       | minimaxir wrote:
       | I think this is _fair_: for hobbyists, a free zonal cluster
       | is sufficient and you probably wouldn't use more than one
       | cluster. For businesses/revenue drivers, the $7.30/mo/cluster
       | is nothing (EDIT: actually $73/mo/cluster, which may be a
       | tougher sell, but if the business is at the point where it
       | benefits from Kubernetes, it's still likely insignificant
       | relative to the cost of actually running the VMs on it).
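       | 
       | Back-of-the-envelope for the correction (plain Python; my
       | own sanity check):
       | 
       |   fee_per_hour = 0.10
       |   hours_per_month = 24 * 365 / 12        # ~730
       |   print(fee_per_hour * hours_per_month)  # 73.0, not 7.30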
        
         | EpicEng wrote:
         | What's unfair about the situation is that Google's sales
         | people hyped this up as an advantage over other providers
         | for years.
        
         | sethvargo wrote:
         | Just doubling down on the free zonal cluster as a hobby
         | tier. Folks seem to be missing that in the announcement.
        
         | sladey wrote:
         | If I'm not mistaken, it should be $73.00+/mo
        
           | minimaxir wrote:
           | Oops, I can't math. Fixed.
        
       | dmitryminkovsky wrote:
       | Just got this email. $0.10 per hour is so much for something
       | that was $0 per hour before. Wow, these guys. The chutzpah!
       | This isn't just offensive monetarily; it's offensive the way
       | price gouging is offensive. It's emotionally offensive. Gotta
       | look somewhere else. Shame.
        
       | cmhnn wrote:
       | This entire kerfuffle is a microcosm of so many things wrong
       | with the industry. First, so many people have decided to use
       | k8s for the wrong reasons that it is mind-boggling. Those are
       | mainly customers. Customers are often led into decisions by
       | their own developers, who are more interested in learning new
       | things or in making themselves attractive for the next job.
       | Yes, sales people from the providers also help make this
       | happen, but more often than not the customer walks themselves
       | into walls.
       | 
       | The providers hawk shit like it's magic and overstate their
       | capabilities ad nauseam. _Everything_ is sold like fucking
       | magic. AI has been so fucking overhyped that it is mind-
       | numbingly hard to even deal with peers who should know
       | better. Because it involves a whole new class of jackasses
       | who specialize in statistics, it is being mass-introduced
       | into an ecosystem where programmers with years of experience
       | feel out of their depth and are acting like rubes when
       | they're fed pure crap.
       | 
       | Then there is the "I want free or cheap" guy. No sympathy. Go
       | to DO or whatever you think is the magical unicorn out there.
       | Or dig in and put up your own damn control plane. Actually
       | learn what the spaghetti hell of k8s is (all things that try
       | to solve hard distributed problems are some level of
       | spaghetti). 90% of the free-and-cheap guys want to work on
       | the next dick pic app so fuck em.
       | 
       | But most annoying is the degree of almost political knee-jerk
       | behavior by a community filled with people who routinely
       | sneer at the religious or emotional. Same fucking community
       | that fights over shit like this, fights over which language
       | is "better" based on nonsensical criteria, and hasn't figured
       | out that the fewer moving parts the better in a distributed
       | system. Wonder how many tanked elections or other horseshit
       | people in this industry have to pull before folks are like
       | "Hey, all those arrogant fucksteins who want free food and 10
       | times everyone's salary need to start being accountable for
       | the crap they do."
        
         | the_duke wrote:
         | > dick pic app so fuck em.
         | 
         | > fucking
         | 
         | > shit
         | 
         | > horseshit
         | 
         | > arrogant fucksteins
         | 
         | > crap
         | 
         | I'm not usually one to flaunt the HN guidelines, but I would
         | suggest a read-through, and perhaps exploring different venues
         | for venting frustrations.
        
           | cmhnn wrote:
            | Which axe are you grinding? If you took this personally,
            | then maybe something stuck because it should have? If
            | you mean there is no cussing allowed, then I missed that
            | for sure.
           | 
            | Edit: ah, checked your history. So you say things like
            | "Amazon reviews are about as trustworthy as the selling
            | points of a slimy salesman", thus impugning Amazon and
            | salespeople in general, but are coming after me because
            | you are no doubt personally miffed by either previous
            | comments I have made or the ones above, because they
            | struck a chord. The guidelines also say, basically,
            | downvote or flag, don't whine; and they say don't get
            | political, which is an unenforced joke, no doubt only
            | meant to apply to those the current moderators dislike.
           | 
            | Bottom line: I found your posts and style criminally
            | boring and your tone here hypocritical, and I am glad we
            | both get to have an opinion. HAND.
        
             | erulabs wrote:
             | Welcome to HN - this is not reddit - please take a look at
             | the comment rules:
             | https://news.ycombinator.com/newsguidelines.html
        
               | cmhnn wrote:
               | LOL! Did you read them??
               | 
               | "Please don't submit comments saying that HN is turning
               | into Reddit. It's a semi-noob illusion, as old as the
               | hills."
               | 
                | Likening HN to reddit and posting that like
                | you did is the definition of "snarky", IMO.
                | 
                | Please grind your axe in the open, over the
                | content, instead of appealing to alleged
                | community norms you are fine with violating.
        
       | ratherbefuddled wrote:
       | I don't like the direction this is heading; it seems like the
       | SLA and the accompanying charge could easily have been made
       | optional.
       | 
       | We have a couple of dozen clusters, two per client, and can't
       | change the architecture. We use Helm and Terraform and can
       | build new clusters quickly, but we can't treat them entirely
       | like cattle because we don't own all the DNS. Our clients are
       | not the sort to do things quickly - or even slowly.
       | 
       | Does anybody have any good, up-to-date resources comparing
       | the current options for K8s providers? I'd like to get a feel
       | for what it would take to switch.
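       | 
       | For concreteness, the management fee alone for a fleet like
       | ours (rough Python; assumes all ~24 clusters are regional,
       | so the free-zonal exemption doesn't apply):
       | 
       |   n_clusters = 24                    # two per client, approx
       |   fee = n_clusters * 0.10 * 24 * 365 / 12
       |   print(f"${fee:,.0f}/mo")           # $1,752/mo, ~$21k/yr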
        
         | vpEfljFL wrote:
         | As many have mentioned here already, $72/mo is most likely
         | a rounding error on the workloads Kubernetes is designed
         | for.
         | 
         | I think most customers will love the change because of the
         | SLA: even one minute of downtime per year can be orders of
         | magnitude more costly than 10 years of the cluster
         | management fee.
         | 
         | This also shows a commitment from Google to provide a
         | reliable service.
         | 
         | If you're looking to run k8s "for free", Digital Ocean
         | looks like the way to go, but again, these are two
         | completely different sets of offerings, and if you chose
         | Google Cloud in the first place then DO doesn't look like a
         | suitable alternative.
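         | 
         | To put numbers on the SLA point (Python sketch; the outage
         | cost per minute is a made-up figure for a revenue-critical
         | system):
         | 
         |   fee_10_years = 0.10 * 24 * 365 * 10       # $8,760
         |   outage_cost_per_min = 10_000.0            # hypothetical
         |   print(outage_cost_per_min > fee_10_years) # True
         | 
         | For a business whose downtime costs far less per minute,
         | the comparison flips, of course.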
        
       | klingonopera wrote:
       | "Cloud computing is a trap, warns GNU founder Richard Stallman"
       | [2008]
       | 
       | https://www.theguardian.com/technology/2008/sep/29/cloud.com...
        
         | klingonopera wrote:
         | I know this is kinda against the guidelines, but just to
         | highlight what an ideological war is being fought at this
         | moment: my parent post has received 5 upvotes and 6
         | downvotes, with the last 4 downvotes all occurring in the
         | last minute.
         | 
         | On the content: RMS chose colorful (insulting) language,
         | yes, and do hold that against him, because _that_ is wrong,
         | but in my opinion his statements are, at their core, quite
         | legit.
        
           | cmhnn wrote:
            | I am grateful for Stallman's efforts, but claiming the
            | cloud is a nefarious plot, no matter the style, is not
            | "legit".
            | 
            | As for the voting, join the club. Like all online
            | communities, HN is an often hypocritical cliquefest
            | where the mob will have its way. But at least it's not
            | r __ __ ;)
        
       | caleblloyd wrote:
       | I think that CRDs are partially to blame for this. CRDs can
       | tax the API server and backing data store without directly
       | mapping to a revenue-generating activity.
       | 
       | I've noticed a trend where teams spin up new clusters for
       | each application. Since CRDs are installed at the cluster
       | level, it is not possible to namespace resource versions. It
       | is easier for teams to take the cluster-per-application
       | approach than to mandate a specific version of cluster
       | tooling.
       | 
       | More small clusters means more control planes, and more
       | subsidizing if a cloud provider is giving away the control plane.
       | 
       | I just finished a blog post that goes into more detail on
       | this opinion:
       | https://caleblloyd.com/software/crds-killed-free-kubernetes-...
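       | 
       | For a rough sense of what the cluster-per-application
       | pattern now costs in management fees alone (my own Python
       | sketch; node costs excluded, the one free zonal cluster
       | ignored):
       | 
       |   fee_per_cluster = 0.10 * 24 * 365 / 12   # ~$73/mo
       |   for n_apps in (1, 5, 20):
       |       print(n_apps, f"${n_apps * fee_per_cluster:,.0f}/mo")
       |   # 1 $73/mo
       |   # 5 $365/mo
       |   # 20 $1,460/mo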
        
       | simonkafan wrote:
       | It's not so much about an additional fee (honestly, $0.10 per
       | hour is nothing); it's more about Google's practice of
       | suddenly charging for services, shutting down services as
       | they like, and not giving a shit about us customers. Azure
       | and AWS are much more customer-friendly here.
        
       | nimos wrote:
       | "Let's trade goodwill for short term profits." I suppose not
       | really surprising given the maps fiasco and Oracle appointment
       | but this really comes off poorly to me.
       | 
       | Flat fee is gonna suck for people running a lot of clusters. I
       | bet there are some people out there spinning up a cluster per x
       | who are going to be real unhappy about this.
        
       | jb775 wrote:
       | I'm not sure why everyone is so surprised. Part of Google's
       | monetization model is to offer developer-friendly software for
       | free, then charge a small fee once it crosses the headache-to-
       | replace threshold.
        
       ___________________________________________________________________
       (page generated 2020-03-04 23:00 UTC)