
What is Serverless Architecture and what are its benefits?

The load on web applications is unpredictable: sometimes they serve huge workloads, and sometimes they sit idle without many requests. Yet whether the application is busy or not, you pay for the resources it occupies. Smoothing this out means solving a lot of operational problems with far-from-trivial methods. Managing all of that is hard in large enterprises, and in pet projects it makes zero sense. But with the rise of serverless architectures, it becomes much easier.

Serverless (FaaS-based)

This is a way of deploying a service without deploying the actual server. We take our function (or any other serverless workload) defined in one of the supported languages, send it to the cloud, and it runs there in a sandbox that the cloud provides. How that function is called, how sandboxes are reused, and so on depends on the cloud provider and is subject to change.

Even though this approach is called "server-less", it does not mean there is no server; we simply don't manage a dedicated one.

Benefits

Scalability

  • The first is, of course, maximum scalability: fast scaling from zero to thousands of concurrent functions without any effort from you. This lets you, the engineers, stop paying attention to infrastructure and focus on writing the code and business logic of the application, whether it is an enterprise service or a pet project.
  • You don't pay to rent a server; you pay for the execution time of your code, the milliseconds in which your request is processed. Instead of running a server 24/7, you pay only for the compute time you actually used. Because billing is this granular, you can predict and calculate costs more accurately and factor them into your architecture and business logic. This is great for prototyping, because you pay per request (see the rough cost sketch after this list).
  • You do not need to write boilerplate code, so you can develop and deliver applications as fast as possible. The runtime is provided by the cloud platform, and it can do a lot of the work for you: set up the required libraries, reuse environments to speed up responses, do load balancing, and so on. Infrastructure becomes secondary to the code, but don't think you can stop caring about it. Does the code run in thin air? Doesn't it need a network? What about security?
  • When the system is properly designed around serverless architecture, it is easier to build a loosely coupled architecture where an error in one part of the system does not affect the rest of the application.
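
To make the pay-per-use point concrete, here is a rough back-of-the-envelope cost sketch in Python. The per-request and per-GB-second prices are assumptions loosely based on publicly listed Lambda pricing; they vary by region and over time, so treat the numbers as illustrative only.

```python
# Rough, illustrative cost estimate for a single function.
# The prices below are assumptions (check the current AWS pricing page);
# the point is the billing model, not the exact numbers.

PRICE_PER_MILLION_REQUESTS = 0.20     # USD, assumed
PRICE_PER_GB_SECOND = 0.0000166667    # USD, assumed

def monthly_cost(requests: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimate the monthly bill for one function, ignoring the free tier."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# A prototype handling 100k requests/month, 200 ms each, with 256 MB of memory
print(f"~${monthly_cost(100_000, 200, 256):.2f} per month")  # roughly ten cents with these assumptions
```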

Disadvantages

There are disadvantages to the serverless architecture, of course — there is no silver bullet.

  • First of all, serverless architecture can only be implemented through a cloud provider. Here we become even more dependent on the vendors who provide our infrastructure, the so-called vendor lock-in. Functions designed for AWS will be very difficult to move, for example, to Google Cloud Platform. And not because of the code (Python is Python even on Google), but because you rarely use serverless functions in isolation. Alongside them, you will probably use a database, message queues, logging systems, and so on, which differ completely from one cloud provider to another. However, even this dependency can be reduced with multicloud solutions.
  • You can't store state in a serverless application: no global variables, nothing on the local disk. Technically it's possible, but it doesn't make sense and goes against the concept itself. You can only use external things: an external database, an external cache, external storage, and so on (see the sketch after this list). Thanks to this, there are no problems with horizontal scaling when a huge load of incoming requests arrives.
  • Another disadvantage is that the price of this excellent scalability is that your serverless function is not launched until it is called. And when it does need to start, it can take up to a few seconds (a cold start), which can be critical for your business or application.
  • Communication patterns in a classic monolithic application and in a distributed system, which a serverless architecture really is, are very different. You have to think about asynchronous interaction, possible delays, and monitoring of the individual parts of the application.
  • Sometimes, when a request from a client passes through a dozen functions and cloud services, it becomes very difficult to track down the cause of an error, if one occurs.
  • You can't deploy an application with a lot of dependencies because, again, you have no control over the operating system where all this stuff runs, so you may have problems with libraries that rely on the OS.
  • You can't run long tasks because the vendor limits how long a function may run (for example, AWS allows up to 15 minutes). So you can't just fire off a web scraper and wait an hour for it to parse sites.
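
As a sketch of what "keep the state outside" looks like in practice, here is a hypothetical Python handler that counts page visits in DynamoDB instead of a global variable. The table name and attribute names are made up for the example.

```python
import boto3

# State lives in an external store (DynamoDB here), not in the function's
# memory or on its disk. "page-visits" and its attributes are hypothetical.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("page-visits")

def handler(event, context):
    page = event.get("page", "index")
    # Atomically increment a counter in the external store; every concurrent
    # instance of this function sees the same shared state.
    response = table.update_item(
        Key={"page": page},
        UpdateExpression="ADD visits :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"page": page, "visits": int(response["Attributes"]["visits"])}
```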

AWS Serverless Application Model

One of the market leaders in FaaS services today is AWS. AWS has a specific definition for Serverless: the Serverless Application Model, or SAM, a set of CloudFormation-compatible resource templates for applications. Under this definition AWS groups a huge set of services for building systems of varying load and complexity, some of which carry the Serverless tag: functions (Lambda), queues and stream processing (SQS, Kinesis), notifications (SES, SNS), integration (EventBridge, Step Functions), storage (S3), databases (DynamoDB), data processing (Glue, Athena). In essence, Serverless creates a separate layer of abstraction for you, removing the operational burden of working with these services.

For example, with the AWS Glue service you don't work with the ETL engine itself. Instead, you declare the data transformations, specify the data source and destination, and define the job. Everything else is out of your hands and control.

It's about the same with AWS Lambda. The easiest way to start with Lambda is to create it in the AWS Management Console. Amazon has an interface for everything; they even have their own built-in code editor. You write a function there. The function receives an event that contains information about the request and a context that contains some information about the runtime. There are no abstractions here, and Amazon doesn't force you to use yet another meaningless DSL; you can use native Python or any other supported language.
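
Concretely, a minimal handler in Python looks something like the sketch below. The printed fields are just a few of the attributes the runtime exposes on the context object; the return value is whatever your caller expects.

```python
import json

def handler(event, context):
    # `event` carries the request payload; `context` describes the runtime.
    print("request id:", context.aws_request_id)
    print("time left (ms):", context.get_remaining_time_in_millis())
    print("payload:", json.dumps(event))

    # Whatever we return is serialized and handed back to the caller.
    return {"message": "hello from Lambda"}
```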

Once the function is created, you can create an endpoint for it in the same AWS interface. You go to Amazon API Gateway, where you can create an endpoint with a couple of mouse clicks and bind it to the desired Lambda function. All requests that arrive there will be passed to the function and served by it. However, as you add more resources, the headache of wiring them together obviously grows.
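
With the commonly used Lambda proxy integration, API Gateway hands the HTTP request to the function as the event and expects back an object with a status code, headers, and a body. A small sketch of such a handler:

```python
import json

def handler(event, context):
    # With a proxy integration, the HTTP request (method, path, query string,
    # body) arrives inside `event`, and this response shape goes back out.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"greeting": f"hello, {name}"}),
    }
```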

AWS Lambda automatically scales as needed, depending on the requests your application receives, letting you pay as you go: if your code is not called, no fee is charged.

For me, the disadvantage of this platform is pricing. Unlike RDS databases and ECS Fargate tasks, where you pay per second, with Serverless you pay for volume (gigabytes of data stored in S3) and utilization (requests and traffic). Although AWS gives you a detailed breakdown of what was charged and a price calculator, I am often surprised by the bill: various internal data transfers, S3 bucket usage, Lambda calls, data transfers between availability zones, even if everything sits in one zone. Amazon charges a fee for all of this, a small fee, but you should never forget about it.

Serverless applications are not just a set of Lambda functions behind API Gateway; a serverless application consists entirely of Serverless services. In that sense, Serverless is an abstraction over PaaS that is simply billed differently.

Conclusion

Let's start with what we have now: monoliths, microservice architectures, and now cloud providers openly pushing Serverless.

A monolith holds a large set of functionality for different business areas. Microservices are just pieces of a monolith, each covering its own domain either completely or partially (i.e., multiple microservices per domain) and communicating with each other via APIs, RPC, or message queues.

One of the main advantages of monolithic applications is performance; microservices offer relative ease of development and speed of delivery.

Going to the cloud

With a serverless infrastructure, we take the infrastructure that cloud service providers offer us and connect services that already have some logic built in. You write a function that takes what came in to the web server and puts it into a database or somewhere else, and we build a lot of these little bridges. Our application is now a grid of small pieces, smaller than microservices, because the microservice approach requires us to have completely separate resources such as persistent storage, queues, and so on. This way, the code is not the backbone of the application but the glue that binds the various infrastructure components together.
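
As an illustration of that "glue" role, here is a hypothetical Python function that bridges two managed services: it reacts to S3 upload events and forwards a small message to an SQS queue for some other piece of the system to process. The queue URL is a made-up placeholder.

```python
import json
import boto3

# A "glue" function: it owns no storage and no queue itself, it only bridges
# two managed services. The queue URL below is a hypothetical placeholder.
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/new-uploads"

def handler(event, context):
    # Triggered by S3: for every uploaded object, publish a small message
    # so another function (or service) can pick it up and process it.
    records = event.get("Records", [])
    for record in records:
        message = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message))
    return {"forwarded": len(records)}
```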

Not every application is suitable for a Serverless architecture. This thing won't become the standard; it's not the next step in technology, it's not a holy grail, but it still fills a very, very good niche. Failing cheap is what really "sells" Serverless: I don't need nearly as many resources to test an idea as I would to deploy a small cluster of VMs. Speed is not the most important factor here.

Learn more about serverless

Come on guys, share the content so that more good engineers see it.




More? Well, there you go:

How to start your blog for 20 cents

AWS Lambda abuse

EMR Serverless a 400-level guide