Hacker News | sladey's comments

Is there any mature integration to achieve this with Kubernetes?



Shouldn't that be opt-in? The management control plane is not something we consider critical to operations. I'd happily accept it being unavailable for a minute and a half a day versus these additional costs.


That's great feedback. I'll relay that to the product team. IANAL, but I think it would be legally challenging.


Hard to understand how it would be legally challenging. ISPs do it all the time when differentiating their business plans from residential ones. Both services run over the same infrastructure and you typically get the same or similar speeds, but a key difference is the SLA that comes with the business plan.


IANAL either, but I don't see why it would be? Just have a separate cluster type, e.g. SLA Zonal and SLA Regional. The SLA already differentiates the current cluster types. Anthos Clusters are also not subject to any additional fees?

And making it opt-in would save face with those GKE users for whom an additional $73/mo is significant.


Opt-in for the SLA and the additional cluster cost would be fantastic. We run pretty small clusters and don't need any additional SLAs on top of what's already provided. Frankly, we couldn't care less about the control plane SLA.


btw, I would prefer something like a cost reduction if the cluster runs 24/7. Currently we also don't need that level of SLA, but we have a single cluster and are a really small customer that chose GKE precisely because of the lack of a management fee (i.e. the nodes are already really expensive compared to non-cloud providers). And we never used more than one REGIONAL cluster (we also never spun up a new one, we only change workers). Now it will cost us money. What a shame.

P.S.: The German sites have the pricing wrong.


If I'm not mistaken, it should be $73.00+/mo


Oops, I can't math. Fixed.


This is really disappointing. GKE was a staple of Kubernetes adoption, not only for the feature set but also because there were no overhead costs.

I hope GCP re-thinks this.


For folks just trying it out, 1 cluster is still free.


For now.


For folks just trying it out, 1 cluster is still free... in a single physical data centre. Sadly you'll be charged for running a cluster across two or three data centres in the same region (e.g. London).


This is a fair point. We don't have an HA (multi-master) zonal offering either, because most people don't want that.


I've been using DO spaces for about a year now, and for the later half of that time, my experience has been pretty terrible.

- Spaces throwing up errors that magically fix themselves a couple of days later.

- Asking about the credits we were promised when Spaces lost our files results in the question being ignored. We still haven't received the promised credits 6+ months later. I can't even look back at the tickets now, as the support system has deleted all tickets older than a month.

It's gotten to the point where we have started work on migrating off DO, which is unfortunate because DO's offerings looked very attractive.


Hi Sladey - Zach here from DO. I'd like to help you out and investigate what might be going on. Can you send me an email (first name at) and I'll investigate right away?

Thank you, Zach


I happened to be initializing a GKE pool upgrade just as this occurred. The upgrade is now stuck according to the console.

The interesting thing is that a couple of minutes before everything went wrong, kubectl returned an "error: You must be logged in to the server (Unauthorized)" error.


I'm having issues with the GCP console, but all our GCP services are working without issue. Lots of Spanner-related errors are popping up in the console.

I have seen some reports of Cloud pub/sub not working.


Interesting. This should make for a good post-mortem about what happened and why the errors weren't showing up on the status dashboard.


[flagged]


Could you please stop posting unsubstantive comments to Hacker News?


[flagged]


None of us here has any associates at Alphabet. Please stop now.


I'm a big supporter of Cloudflare and have been using it for almost 8 years. I personally don't think doing something like that is what Cloudflare needs.

What Cloudflare needs is further customization, especially with regard to caching. We actually had to migrate part of our infrastructure to Fastly due to the lack of caching customization/rules.

I'd like to see:

- Custom caching rules similar to the new firewall rules

- Finer granularity of the cache expiry (I'm aware Enterprise has the ability to cache for 30 seconds, but we don't want to upgrade to Enterprise just for that one thing).

- Cache hit rate analytics grouped by path/domain/etc


FWIW, our Workers product is pretty good at allowing you to define custom caching rules. We have a section in the Workers docs specifically focused on the Cache API: https://developers.cloudflare.com/workers/reference/cache-ap...

If that doesn’t help you build what you’re looking for, happy to chat via email and hear more about what you need: kristian@cloudflare
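As a sketch of the kind of per-path caching rules being asked for here (the function names, path prefixes, and TTL values are illustrative, not Cloudflare's API — inside a Worker the resulting TTL could be applied via the Cache API or the `cf` options on `fetch()`):

```typescript
// Hypothetical TTL policy for edge caching, keyed by URL path.
// Kept as a plain function so the policy itself is easy to test
// outside any Workers runtime.
export function cacheTtlForPath(pathname: string): number {
  if (pathname.startsWith("/api/")) return 30;       // near-real-time data
  if (pathname.startsWith("/static/")) return 86400; // long-lived assets
  return 300;                                        // default: 5 minutes
}

// Convenience wrapper: derive a TTL from a full request URL.
export function cacheTtlForUrl(url: string): number {
  return cacheTtlForPath(new URL(url).pathname);
}
```

Splitting the policy out like this also makes the "finer granularity" wish above concrete: a 30-second TTL is just another branch of an ordinary function.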


I've submitted suggestions to some people, but I believe that with Workers it should now be possible to create your own custom pull/push CDN because of the granularity of the controls they've added. It used to not be possible to interact with the caching layer, but they added controls for that about a year or so ago.

Also your link is missing a trailing slash (which is odd that their router doesn't add that). https://developers.cloudflare.com/workers/reference/cache-ap...


Your source doesn't say that he graduated from flight school in 2010, only that he joined Ethiopian Airlines in 2010.

Edit: Source that says he graduated at Ethiopian https://www.nytimes.com/2019/03/12/business/ethiopian-airlin...


EA operates their own feeder flight school [1]. There is a press release from EA somewhere showing his yearbook photo from their graduating class of 2010, but again, it was in the Ethiopian press and I forgot to note it down.

[1]: https://www.ethiopianairlines.com/EAA/


We've used GCF for some production tasks and it's worked pretty well for us.

Would like to see some more runtimes/languages. I'm hoping AWS's recent Layers implementation has made this more of a focus at Google. I'm curious how the implementation of Go has affected the ease of integrating other languages.


> I'm curious how the implementation of Go has affected the ease of integrating other languages.

In some ways, it helps. You start to see similar issues arise and know what to look out for when you're launching a new runtime. In other ways, every language has its peculiarities and its own set of design considerations.

Launching/polishing a completely new language still takes a decent amount of work. Launching a new version of an existing language tends to be much quicker.


I've never used Lambda Layers, though. Do you think a similar approach is something that could be implemented?

Maybe even just something for compiled languages like Go, Crystal, etc. that would just run the binary?


It's definitely something that we've discussed.

Would you consider running a container that could be run like Cloud Functions? This container could run the binary that you create. It's not something that we support today but I'm curious whether this would meet your needs.


> running a container that could be run like Cloud Functions

Does this mean we actually run the container ourselves on our GKE cluster or in a VM? Or do you mean a "container" runtime for Cloud Functions? Both would be interesting, but we'd prefer the latter since there would be less to manage. I'd be interested to see the performance of it.


If you're interested in something along the lines of the latter, you can sign up for an early access preview here: g.co/serverlesscontainers
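The "container run like Cloud Functions" idea being discussed usually boils down to one contract: the container starts an HTTP server on the port injected via the `PORT` environment variable. A minimal sketch under that assumption (`resolvePort`, the handler, and the `START_SERVER` guard are all illustrative names, not any product's official API):

```typescript
import { createServer, IncomingMessage, ServerResponse } from "http";

// Resolve the listening port from the environment; serverless container
// platforms conventionally inject it as $PORT, with a common default of 8080.
export function resolvePort(env: NodeJS.ProcessEnv, fallback = 8080): number {
  const parsed = Number(env.PORT);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : fallback;
}

function handler(req: IncomingMessage, res: ServerResponse): void {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`Hello from ${req.url}\n`);
}

// Only start the server when explicitly asked, so the module can also be
// imported for testing without binding a socket.
if (process.env.START_SERVER === "1") {
  createServer(handler).listen(resolvePort(process.env));
}
```

Anything that satisfies this port contract — in any language — could in principle be packaged as a container and run on such a platform, which is what makes it attractive for the "just run the binary" case above.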


> It's definitely something that we've discussed.

This is something Cloud Native Buildpacks (buildpacks.io) are intended to make easy. We hang out on buildpacks.slack.com, if you'd like to come pick our brains.


Given what we've seen with App Engine, they've been moving away from custom language toolchains toward a more containerized deployment approach.


You're right. If you look around in the logs for the deployment output, you'll see that both Cloud Functions and App Engine go through a container build step via Cloud Build.

