Extra effort to push up a Lambda function and not have to worry about automating deployment, configuration, patching, monitoring, upgrading, failing over, and, yes, scaling? Maybe it's just me, but I'd rather not see the backend of a server ever again for anything other than development.
That's why I recommend containers, because automating deployment and config would be the same regardless of destination, right? Monitoring also seems to be the same if you're using built-in cloud stuff.
As for scale, I think that's massively overstated. Servers are really fast and most apps aren't anywhere near capacity. Even a $10 DigitalOcean server is plenty of power, and there are no cold starts. Even YC's advice is to focus on features and dev speed, and worry about scaling when it truly becomes an issue.
But a lambda is just a container that you don’t have to manage.
I don’t get this sort of anti-serverless sentiment. If you have even one good SRE, it’s an absolute breeze. Writing a Lambda function is writing business logic, and almost nothing else. I can’t see how you could possibly do any better in terms of development velocity. I don’t get this ‘testing functions is hard’ trope either. Writing unit tests that run locally is easy.
Not really, aside from the other AWS services you consume (KMS, Parameter Store...). A cloud function takes an event, executes your business logic, and returns a response. The structure of the event can change slightly between providers, but functions are remarkably portable, and I’ve moved them before. If you’re doing it right, most of your API Gateway config will be an OpenAPI spec, which is equally portable.
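To make that concrete, here's roughly the shape I mean (a minimal sketch in Python; `create_order` and the field names are made up, but the event/response format matches API Gateway's Lambda proxy integration):

```python
import json

def create_order(customer_id: str, items: list) -> dict:
    # Plain business logic -- nothing in here knows about Lambda.
    return {"customer_id": customer_id, "order_size": len(items)}

def handler(event, context):
    # The only provider-specific surface: unpack the event and wrap the
    # response (this shape is API Gateway's Lambda proxy format).
    body = json.loads(event["body"])
    result = create_order(body["customerId"], body["items"])
    return {"statusCode": 200, "body": json.dumps(result)}
```

Moving to another provider mostly means rewriting the handler wrapper, not the business logic.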
> it is more expensive if you need to scale
This is context specific.
> it has higher latency
Again context specific, and likely not something actually worth caring about.
> it is harder to test locally
This is one I simply cannot understand. You can run your functions locally; they’re just regular code. I’ve never had a problem testing my functions locally. If anything I’d say it’s easier.
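For what it's worth, 'testing locally' here is just a plain unit test that calls the handler with a hand-built event. Something like this, reusing the sketch handler above (`my_function` is a hypothetical module name, and pytest is just one choice of runner):

```python
import json
from my_function import handler  # hypothetical module holding the handler above

def test_handler_returns_order_size():
    # Hand-built API Gateway-style event; no emulator or cloud access needed.
    event = {"body": json.dumps({"customerId": "c-123", "items": ["a", "b"]})}
    response = handler(event, context=None)
    assert response["statusCode"] == 200
    assert json.loads(response["body"])["order_size"] == 2
```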
There are upsides and downsides to any architecture. Serverless models have their downsides, but these anti-serverless discussions tend to miss what the downsides actually are, and kinda strawman a bunch of things that aren’t really downsides.
I’d say the most common downside with serverless is that the persistence layer is immature. If you want to use a document database, it’s great, if you want to use a relational one, you might have to make a few design compromises. But that said, this is something that’s improving pretty quickly.
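One concrete example of those compromises: classic connection pooling doesn't map cleanly onto functions, so the usual workaround is to open the connection in module scope so warm invocations reuse it instead of reconnecting (and exhausting the database) on every request. A rough sketch, assuming psycopg2, Postgres, and a `DATABASE_URL` environment variable; real code would also handle stale connections, and managed poolers like RDS Proxy take this further:

```python
import os
import psycopg2

# Module scope: runs once per warm container, not once per invocation.
conn = psycopg2.connect(os.environ["DATABASE_URL"])

def handler(event, context):
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM orders")  # table name is illustrative
        (count,) = cur.fetchone()
    return {"statusCode": 200, "body": str(count)}
```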
Focus on features and dev speed by managing a container mesh, the underlying server, system libraries, security patching, handling a potential spike, and solving each problem with your architecture as if it were novel?
There are times to go serverless and times to avoid it, but with what you're saying you want to optimize for, serverless is the answer.
I guess you can make either one as complicated as you want, but surely just putting a container on a server is rather simple? There's no mesh for a single server, and is a potential spike a realistic concern?
I get your point, but I think that with products like Knative/Cloud Run everything will eventually converge on a lambda-for-containers model that combines the best of both worlds.
There's still a mesh. Containers need to know which containers to communicate with and across which ports.
If putting a container on a server at scale were simple, then services like Lambda would never have been popularized and orchestration frameworks like Kubernetes wouldn't exist.
I don't think popularity means it's the best option. That's the point of the blog post.
I'm also the first to recommend Kubernetes as soon as you need it, as it's a solid platform, but most apps stay small and don't need all that upfront complexity. That said, I stand by Knative being the best of both; have you had a chance to look at it?
However, once you are running enough containers, your server bill becomes something you can't ignore...
Making enough servers with enough memory capacity to keep all our containers running with failover support cost $400/month... and that was just 60 containers (an easy number to hit in a microservices architecture).
And you're right, we never got near server capacity by CPU usage... completely agree there. We ran out of memory to keep the containers resident and ready for use.
> not have to worry about automating deployment
How do you validate your code before deploying to production? If you test in environments besides production, how do you manage configuration settings for the different environments (e.g. a DB connection string)? How do you avoid patching? Almost any code I've written takes dependencies on 3rd-party libraries, and those will often have security vulnerabilities (usually some time after I wrote the code).
I mean automating infrastructure, servers, etc. In other words, the days of Puppet and Ansible are behind me. And patching a Lambda is as simple as pushing up a new version: no downtime, no restarting services, and no patching or building of OS images or Docker containers. Configuration is as simple as environment variables, or SSM for secrets.
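For example (a sketch; the parameter path and `STAGE` variable are my own invention, but `get_parameter` with `WithDecryption=True` is the standard boto3 call for reading a SecureString):

```python
import os
import boto3

# Plain settings come from environment variables set per environment...
STAGE = os.environ.get("STAGE", "dev")

def get_db_connection_string() -> str:
    # ...while secrets live in SSM Parameter Store, one parameter per environment.
    ssm = boto3.client("ssm")
    param = ssm.get_parameter(
        Name=f"/myapp/{STAGE}/db-connection-string",  # hypothetical path
        WithDecryption=True,
    )
    return param["Parameter"]["Value"]
```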