Hacker News

I don't think the upsides are worth all the work.

You can spend a lot of time getting databases and other stateful workloads to work -- mess around with StatefulSet and PVC on top of all the normal Kubernetes concepts, and what do you get in the end? Are you really better off than you would have been if you ran the database in EC2?
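For readers who haven't touched this machinery: the StatefulSet-plus-PVC setup being referred to looks roughly like this. A minimal sketch only; the name, image, and storage size are invented for illustration:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg            # hypothetical name
spec:
  serviceName: pg
  replicas: 1
  selector:
    matchLabels:
      app: pg
  template:
    metadata:
      labels:
        app: pg
    spec:
      containers:
      - name: postgres
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:   # each replica gets its own stable PVC
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

The volumeClaimTemplates piece is what gives each pod a stable identity and its own persistent volume, which is exactly the "static environment" behavior the comment is describing.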

Plus, "cattle, not pets" kind of breaks down once you start using StatefulSets and PVCs. Those things exist to make Kubernetes more like a static environment for workloads that can't handle being run like ephemeral cattle. So why not just keep using your static environment?

If Kubernetes is the only workload management control plane you have, then I guess this makes sense. But if you are already able to deploy your databases with existing tools, and those existing tools don't really suck, it's probably not worth migrating. It would take a lot of time and introduce significant new risks and operational complexity without a compensating payoff.



Yeah, but if your org has orchestration tooling built around k8s, in a way it becomes much easier to provision a DB with k8s (set up the service, routing, networking, roles, etc.) than it would be in Terraform. Especially if you have to repeat this process in like 20 envs (stage, prod) x multiple regions.


This sounds dangerously close to "yeah but if the only tool you know is a hammer..."


If (big if) your org's orchestration supports stateful sets.

Where I was, the tooling was very focused on disposable API servers.



If you want to use k8s as your dataplane, sure. Though I'd rather entrust that task to Crossplane.

There's always a point at which running things on Kubernetes becomes worth it. RDS is an expensive service, and if you want to enable every developer to boot up a development environment, or even several, it becomes prohibitively expensive to use one process for everything. This is also true of running any workloads on Kubernetes to begin with, though, and many companies definitely would be better off with a service like ECS/Fargate/Cloud Run/Fly.io. Especially if they don't need the flexibility to build their own add-on metrics/secrets/logging stacks.


It's the same RDS though. These controllers enable you to manage the lifecycle of the exact same AWS services you get when using the AWS console or CLI. Kubernetes is the porcelain and AWS offerings are still the plumbing.


+1. Sometimes, just because you can does not mean you should.


What if you have finance customers who don't like commingled data, and you want to sell them a service and tell them with a straight face that their tenanted database isn't one bad query from serving up their data to someone else?


You can still have separate ACLs, databases, tables and even row level access control even if you share database servers.
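As an illustration of that idea (a toy sketch with SQLite and made-up tenant names, standing in for real row-level security policies in a server like Postgres): a shared database where every query is forced through a helper that scopes it to one tenant.

```python
import sqlite3

# One shared database, rows tagged with a tenant column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (tenant TEXT, owner TEXT, balance INTEGER)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [("bank_a", "alice", 100), ("bank_a", "bob", 250), ("bank_b", "carol", 900)],
)

def accounts_for(tenant):
    # The tenant filter is applied here, once, rather than at every call site.
    # A real deployment would enforce this in the database itself
    # (per-tenant roles, row-level security policies) instead of in app code.
    return conn.execute(
        "SELECT owner, balance FROM accounts WHERE tenant = ?", (tenant,)
    ).fetchall()

print(accounts_for("bank_a"))  # bank_b's rows never appear
```

The point is only that isolation can live at the ACL/schema/row level rather than requiring a separate server per customer.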


You can also have dedicated database servers per customer on bare metal or VMs. It's not always more work than asking your database team to learn Kubernetes...


Speaking as someone who has sold SaaS into banks numerous times, "no shared tenancy" is very frequently an absolute and non-negotiable requirement.


Fair enough; as 'twblalock says, it doesn't have to be more arduous to set up database servers per seat outside of k8s.

I simply wanted to highlight that there are many ways to skin a cat, and "no shared tenancy" is not in itself a valid argument for hosting your DBs in k8s, even though there may be other good reasons.


It's also more resource efficient, especially for non-production or non-critical workloads. VMs only come in discrete configurations, and often even the smallest one is too big, wasting a lot of resources. When you run thousands of instances, thanks to the magic of microservices, the costs add up.
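The arithmetic behind that claim can be sketched with toy numbers (all figures invented for illustration): many small services on one-VM-each vs. bin-packed onto shared nodes.

```python
import math

# Toy scenario: 40 small services, each needing 0.25 vCPU.
services = 40
need_vcpu = 0.25

# Option A: one VM per service, where the smallest offered VM is 2 vCPU (assumed).
vm_size = 2.0
vm_total = services * vm_size                        # vCPU provisioned

# Option B: bin-pack the same services onto shared 16 vCPU nodes (assumed).
nodes = math.ceil(services * need_vcpu / vm_size / (16.0 / vm_size) / 1) \
    if False else math.ceil(services * need_vcpu / 16.0)
packed_total = nodes * 16.0                          # vCPU provisioned

print(vm_total, packed_total)  # 80.0 vs 16.0 vCPU for the same work
```

With these (made-up) numbers the per-VM approach provisions 5x the capacity of the packed one, which is the waste the comment is pointing at.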


Those are good arguments for ephemeral workloads but they don't make as much sense for databases.


Another turtle.



