It may already be clear that serverless is my go-to choice when debating a stack for a greenfield project, but what exactly is the reasoning behind that default? There are plenty of reasons that make it great for new projects, but the two I consider most important are: it's almost a plug-and-play solution that can extend almost any environment you are currently working in, and the fact that it democratizes app development (especially the mysterious lands of backend & networking).
What do I mean by democratization?
There are plenty of ways to deploy a new web application, but good luck finding one with a lower entry cost than serverless. It's equally easy to launch a new FE project with it as it is to create a CRUD backend. You can run crons, background jobs, trigger events, write webhooks - it encompasses most of the ways you can do web development these days. Besides that, it's not language-dependent, fits in nicely with other trends (hey, microservices) and comes packed with amazing open source tooling abstracting the heavy stuff away. That means almost anyone who knows how to code can start doing serverless development - it saves you a lot of knowledge overhead (but comes with its own specific trivia domain - read on to learn more).
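To make that concrete, here is a minimal sketch of what that variety could look like in a single Serverless Framework config - all service, handler and queue names (and the ARN) are made up for illustration:

```yaml
service: demo-app                   # hypothetical service name

provider:
  name: aws
  runtime: nodejs18.x

functions:
  api:                              # a classic HTTP endpoint (CRUD, webhooks)
    handler: src/api.handler
    events:
      - httpApi:
          path: /users
          method: get
  nightlyReport:                    # a cron job
    handler: src/report.handler
    events:
      - schedule: cron(0 2 * * ? *) # every day at 02:00 UTC
  worker:                           # a background job consuming a queue
    handler: src/worker.handler
    events:
      - sqs:
          arn: arn:aws:sqs:eu-west-1:123456789012:jobs-queue  # placeholder ARN
```

One file, one `serverless deploy`, and you have an API, a cron and a queue consumer - that's the low entry cost in practice.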
My top 4 reasons for serverless being the first suggestion when starting a new project are:
- Less operations overhead
If you use a framework like Serverless (or anything else that lets you keep your environments in code), you will not spend eons configuring networks, VPCs and all the other resources just to expose a bunch of endpoints.
You will not have to think much about scaling & instance sizes, as most of the solutions you can use will autoscale with demand - even when the usage of your application spikes here and there, there is hardly any planning you have to do for it. Most of us rarely think about that, but ask anyone working in high-traffic e-commerce about their preparations for Black Friday or the Christmas season - they have proper plans for upscaling their VPSs to accommodate the higher-than-usual traffic.
Most of the ops work moves onto the development team - meaning less coordination between different teams. Admittedly, doing this properly has some upfront cost - creating a separate deployment account for each app (and possibly each stage of the application) with IAM permissions that cause a headache for a lot of the cloud security tribe.
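A sketch of what that per-stage separation could look like in serverless.yml - the per-stage AWS CLI profile naming convention here is my own assumption, not a standard:

```yaml
provider:
  name: aws
  stage: ${opt:stage, 'dev'}          # stage picked at deploy time
  # one AWS CLI profile per stage, each pointing at a separate account
  profile: myapp-${opt:stage, 'dev'}  # hypothetical naming convention
```

With that in place, `serverless deploy --stage prod` ships the same code into an entirely separate account from `--stage dev`.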
- Focus on feature delivery vs the meta-craft / reinventing the wheel:
The point of what we're doing is not to write state-of-the-art pipelines. It's not to write functional-only solutions too clever for anyone but the author to comprehend, follow and modify. It's not to try new frameworks and find new ways to solve the CRUD problems. The point is iterating quickly, bringing value and cutting down on the cost of doing all that in the process.
- Smaller teams can accomplish way more when working with serverless:
A team that doesn't need the help of dedicated DevOps specialists and DBAs to set up and "manage" their deployments all the way from development to production? A team that is not blocked by waiting for resource provisioning (I love to cry when my CF stacks take 20 minutes to deploy from scratch without me doing more than launching a single command)? A team that doesn't have to grow infinitely just to manage the amount of servers they have to maintain and update? You can have all that with serverless. It's also way easier to move between different paradigms when starting out with a serverless monorepo.
- Scaling is mostly handled for you:
I touched on this in the first point already. You have very little to think about when it comes to scaling - besides optimising memory size, configuring provisioned and reserved concurrency, and keeping an eye on account-level throttling limits, it will simply autoscale with the traffic it receives.
As a result of all the above (if done properly), serverless allows for faster iteration.
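The scaling knobs mentioned above boil down to a handful of lines. A sketch with purely illustrative numbers (tune them to your own workload):

```yaml
functions:
  api:
    handler: src/api.handler
    memorySize: 512            # the main performance/cost knob (CPU scales with it)
    provisionedConcurrency: 5  # keep 5 instances warm for predictable latency
    reservedConcurrency: 100   # hard cap so this function can't starve the account
```

That is roughly the entire capacity plan - compare it to sizing a fleet of VPSs for Black Friday.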
As with any tech decision, it obviously comes with its own set of caveats and drawbacks:
- Knowledge gaps:
Some of the stuff that used to be dark magic is making its way into the developer's arsenal of tools. Setting up IAM permissions, Infrastructure as Code, and a bunch of networking aspects (e.g. caching) - all of these now have to be a first-class concern! A number of teams will try to just move whatever they've implemented before into this new paradigm, which can be a source of great frustration.
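A least-privilege IAM policy, for example, is something any developer on the team may now end up writing. A hypothetical serverless.yml fragment (table name, region and account ID are placeholders):

```yaml
provider:
  name: aws
  iam:
    role:
      statements:
        # least privilege: this service may only read/write its own table
        - Effect: Allow
          Action:
            - dynamodb:GetItem
            - dynamodb:PutItem
          Resource: arn:aws:dynamodb:eu-west-1:123456789012:table/users
```

Getting comfortable with statements like this is exactly the kind of knowledge gap the paradigm forces you to close.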
- Different limitations:
Dreaded cold starts - serverless introduces a bunch of new concepts we hardly had to think about before, cold starts being a prime example. After all, with regular servers you don't care all that much how long it takes for your application to spin up - it shouldn't be doing it too often anyway. Here the story is different - functions have a predefined lifespan, after which they have to start again, which often means they will boot up several times every hour!
The approach is generally surprisingly bad for time-sensitive applications. This is not a result of serverless itself, but rather of the event-driven approach it builds on. In any case - you should think of such systems as being mostly eventually consistent.
Before Step Functions and similar solutions it was quite difficult to share state between Lambdas; this can be addressed by switching to event-based designs and using messaging systems like SQS or Kafka.
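Decoupling two functions through a queue instead of direct invocation could look like this in serverless.yml - queue and handler names are invented for the sketch:

```yaml
functions:
  producer:
    handler: src/producer.handler   # pushes work onto the queue via the AWS SDK
  consumer:
    handler: src/consumer.handler   # picks up messages asynchronously
    events:
      - sqs:
          arn:
            Fn::GetAtt: [JobsQueue, Arn]

resources:
  Resources:
    JobsQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: jobs-queue
```

The functions themselves stay stateless; whatever state needs to travel between them rides along in the messages.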
- Lack of standards:
Since the approach is relatively new, it's still evolving at a breakneck pace. What you read about it in May may no longer be true in June.
It's still debatable how to test it, which databases to use with it (and when), and how to structure your application properly (it's been pointed out that this is mostly not true, as there are already quite a few authorities in the space - where better to look for examples than the Serverless Framework blog?).
- Nature of serverless apps:
They are usually distributed systems from the get-go, and as a consequence are more difficult to reason about / have more moving parts than traditional applications.
Hugely influenced by https://ben11kehoe.medium.com/serverless-is-a-state-of-mind-717ef2088b42 and plenty of Paul Swail's articles.