Shouldn't that be opt-in? The management control plane is not something we consider critical to operations. I'd happily accept it being unavailable for a minute and a half a day rather than pay these additional costs.
Hard to understand how it would be legally challenging. ISPs do it all the time when differentiating their business plans from residential. Both services run over the same infrastructure and you typically get the same/similar speeds, but a key difference is an SLA with the business plan.
IANAL either, but I don't see why it would be? Just have a separate cluster type, e.g. SLA Zonal and SLA Regional. The SLA already differentiates the current cluster types. Anthos clusters are also not subject to any additional fees?
And having it opt-in will save face with those users of GKE where an additional $73/m is significant.
Opt-in for the SLA and additional cluster cost would be fantastic. We run pretty small clusters but don't need any additional SLAs on top of what's already provided. Frankly, we couldn't care less about the control plane SLA.
btw, I would prefer something like a cost reduction if the cluster runs 24/7. We don't need that amount of SLA either. We have a single cluster and are a really small customer that chose GKE specifically because there was no management fee (the nodes are really expensive compared to non-cloud providers). We've never used more than one REGIONAL cluster (we also never spun up a new one, we only change workers), and now it will cost us money. What a shame.
For folks just trying it out, 1 cluster is still free... in a single physical data centre. Sadly you'll be charged for running a cluster across two or three data centres in the same region (e.g. London).
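The split comes down to the location you target when creating the cluster: a zone gives you a zonal (still free) cluster, a region gives you a regional one. A rough sketch, assuming the @google-cloud/container Node client (project id and cluster names here are placeholders):

```typescript
// Sketch only: zonal vs. regional is decided by the location in `parent`.
// Assumes the @google-cloud/container client; names are placeholders.
import * as container from '@google-cloud/container';

const client = new container.v1.ClusterManagerClient();

async function createClusters(projectId: string) {
  // Zonal cluster: a single data centre (one zone) -- the free-tier case.
  await client.createCluster({
    parent: `projects/${projectId}/locations/europe-west2-a`,
    cluster: {name: 'zonal-demo', initialNodeCount: 1},
  });

  // Regional cluster: replicated across the zones of a region
  // (e.g. London / europe-west2) -- this is the one that now carries a fee.
  await client.createCluster({
    parent: `projects/${projectId}/locations/europe-west2`,
    cluster: {name: 'regional-demo', initialNodeCount: 1},
  });
}
```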
I've been using DO Spaces for about a year now, and for the latter half of that time, my experience has been pretty terrible.
- Spaces throwing up errors that magically fix themselves a couple of days later.
- Asking about the credits we were promised when Spaces lost our files just gets ignored. We still haven't received the promised credits after 6+ months, and I can't even look back at the tickets now because the support system has deleted everything older than a month.
It's gotten to the point where we have started work on migrating off DO, which is unfortunate because DO's offerings looked very attractive.
Hi Sladey - Zach here from DO. I'd like to help you out and investigate what might be going on. Can you send me an email (first name at) and I'll investigate right away?
I happened to be kicking off a GKE node pool upgrade just as this occurred. The upgrade is now stuck, according to the console.
The interesting thing is that a couple of minutes before everything went wrong, kubectl returned an "error: You must be logged in to the server (Unauthorized)" error.
I'm having issues with the GCP console, but all our GCP services are running fine. Lots of Spanner-related errors popping up in the console.
I have seen some reports of Cloud Pub/Sub not working.
I'm a big supporter of Cloudflare and have been using it for almost 8 years. I personally don't think doing something like that is what Cloudflare needs.
What Cloudflare needs is further customization, especially with regard to caching. We actually had to migrate a certain part of our infrastructure to Fastly due to the lack of caching customization/rules.
I'd like to see:
- Custom caching rules similar to the new firewall rules
- Finer granularity of the cache expiry (I'm aware Enterprise has the ability to cache for 30 seconds, but we don't want to upgrade to Enterprise just for that one thing).
- Cache hit rate analytics grouped by path/domain/etc
I've submitted suggestions to some people, but I believe that with Workers it should be possible to build your own custom pull/push CDN, given the granular controls they have added. It used not to be possible to interact with the caching layer, but they added controls for that about a year or so ago.
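Something like this sketch is what I mean (purely illustrative; the 30-second TTL and the pass-through logic are placeholders, not a full CDN):

```typescript
// Workers sketch: per-request control over edge caching via the Cache API.
// The TTL and matching logic are placeholders.
addEventListener('fetch', (event) => {
  event.respondWith(handle(event));
});

async function handle(event) {
  const cache = caches.default;

  // Serve from the edge cache when we already have a copy.
  const cached = await cache.match(event.request);
  if (cached) return cached;

  // Otherwise pull from origin and store it with our own short TTL,
  // e.g. the 30-second expiry mentioned above.
  const origin = await fetch(event.request);
  const response = new Response(origin.body, origin);
  response.headers.set('Cache-Control', 'public, max-age=30');

  if (event.request.method === 'GET') {
    event.waitUntil(cache.put(event.request, response.clone()));
  }
  return response;
}
```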
EA operates its own feeder flight school [1]. There is a press release from EA somewhere showing his yearbook photo from the graduating class of 2010, but again, it was in the Ethiopian press and I didn't keep track of it.
We've used GCF for some production tasks and it's worked pretty well for us.
Would like to see some more runtimes/languages. I'm hoping AWS's recent Layers implementation has made this more of a focus at Google. I'm curious how the implementation of Go has affected the ease of integrating other languages.
> I'm curious how the implementation of Go has affected the ease of integrating other languages.
In some ways, it helps. You start to see similar issues arise and know what to look out for when you're launching a new runtime. In other ways, every language has its peculiarities and its own set of design considerations.
Launching/polishing a completely new language still takes a decent amount of work. Launching a new version of an existing language tends to be much quicker.
Would you consider running a container that could be run like Cloud Functions? This container could run the binary that you create. It's not something that we support today but I'm curious whether this would meet your needs.
> running a container that could be run like Cloud Functions
Does this mean we actually run the container ourselves on our GKE cluster or in a VM? Or do you mean a "container" runtime for Cloud Functions? Both would be interesting, but we'd prefer the latter since there would be less to manage. I'd be interested to see the performance of it.
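To be concrete about what we'd hope for from a container runtime: we ship an image whose binary serves HTTP on whatever port the platform hands it, and Cloud Functions handles the rest. A sketch of the binary side (the PORT convention here is our assumption, not a documented contract):

```typescript
// Sketch of the contract we'd expect: the platform starts the container
// and the binary serves HTTP on an injected port. The PORT variable and
// its default are assumptions, not a documented interface.
import * as http from 'http';

const port = Number(process.env.PORT) || 8080;

http.createServer((req, res) => {
  // Whatever would have been the "function" body goes here.
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('hello from a container-backed function\n');
}).listen(port, () => {
  console.log(`listening on ${port}`);
});
```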
This is something Cloud Native Buildpacks (buildpacks.io) are intended to make easy. We hang out on buildpacks.slack.com, if you'd like to come pick our brains.
You're right. If you look around in the logs for the deployment output, you'll see that both Cloud Functions and App Engine go through a container build step via Cloud Build.