
Going serverless: considerations for your organization

One of the latest cloud computing trends is to use serverless computing to deploy software applications. But what is serverless? In short: organizations no longer maintain Virtual Machines or containers. Instead, their applications are built from “functions” that are triggered by “events” and run in the cloud. The cloud provider takes care of the infrastructure layer. When running functions in the cloud, you pay per use: for example, based on the execution time of a function or its memory and CPU usage.
Serverless is similar to Platform-as-a-Service, or PaaS for short. A PaaS is a more opinionated, encompassing solution taking over more than code execution. A serverless environment focuses more on code execution itself. The major differences between PaaS and serverless are scalability, pricing, startup time, tooling, and the ability to deploy at the network edge.
Serverless lets development teams focus on application development, but what exactly are the considerations for your organization?


All major cloud providers now offer serverless technologies as part of their product portfolio: Microsoft provides Azure Functions, Amazon offers AWS Lambda and Google offers Cloud Functions.
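To make the concept concrete, here is a minimal sketch of what such a function can look like, written as an AWS Lambda-style Python handler. The event shape and the greeting logic are purely illustrative:

```python
import json

def handler(event, context):
    """Minimal Lambda-style function: invoked once per event and
    billed per run. There is no server to manage; only this code
    is deployed to the cloud provider."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The provider invokes the handler once per incoming event; between invocations, no infrastructure is reserved for you.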

Pizza as a Service
Source: https://www.syntouch.nl/serverless-is-more/
Using pizza as an example, let’s look at how serverless, or Functions-as-a-Service (FaaS), differs from virtual machines and containers. Going from left to right, each additional layer in green means more and more is managed by the service provider instead of you. With a traditional on-prem datacenter (far left), you manage everything. With SaaS, you only manage the configuration and data. Serverless, or FaaS, is one step to the left of SaaS, meaning most of the plumbing is taken care of by the service provider, including containers, operating systems and virtual machines. This is why it is called ‘serverless’: you, the customer, no longer manage servers; the vendor takes care of VMs, containers, scheduling, scaling, patching, and more.
You do relinquish some control as part of that deal, though. Installing dependencies becomes harder, and custom configurations are not possible.

Practical examples

Developers can use several popular programming languages to write serverless functions: Java, Python, Node.js, Go and C#, to name a few. Since these are popular languages, the transition from traditional deployment models to serverless becomes a bit easier. A few practical examples help to clarify the concept:

  • Real-time data transformation. Imagine you want to put a watermark on each of your proprietary photos. Serverless enables you to do this quickly, for example each time a user uploads a photo to your website: the upload event triggers the function that adds the watermark. Batch processing of large volumes is handled very well, since the required compute power scales dynamically.
  • Transactions in an eCommerce web shop can be taken care of entirely by serverless functions. Saving items to a favorites list, adding them to the basket, checking out and paying for a basket can all be run as functions in a serverless environment.
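The watermark example can be sketched in a few lines. This is a simplified, dependency-free sketch: the event is shaped like an S3 upload notification, and both the image manipulation and the object storage are stubbed out (a real function would use an image library and the provider’s storage SDK):

```python
BUCKET = {}  # in-memory stand-in for real object storage

def add_watermark(image_bytes, mark=b" [watermarked]"):
    # Placeholder for real image processing (e.g. with Pillow);
    # here we simply append the mark to stay dependency-free.
    return image_bytes + mark

def on_upload(event, context=None):
    """Runs once per upload event (shaped like an S3 ObjectCreated
    notification); reads the photo, watermarks it, writes it back."""
    key = event["Records"][0]["s3"]["object"]["key"]
    BUCKET[key] = add_watermark(BUCKET[key])
    return {"processed": key}
```

Each upload fires one invocation; a thousand concurrent uploads simply mean a thousand concurrent invocations, scaled by the provider.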

Well-known benefits

Not having to learn a new language is a big plus for a development team. More benefits, from different perspectives:

  • Remove the operational and people overhead of setting up and maintaining infrastructure. No Virtual Machine to operate, patch, secure or monitor. This lets your teams focus on other, more important things in your organization.
  • Performance boost when needed: serverless infrastructure scales (almost) indefinitely. No need to add extra Virtual Machines or add more memory when your application needs it. This also means: no service windows and/or downtime for your applications.
  • Eco-friendly. The infrastructure (components) only run when the serverless functions are executed; no servers sit idle when they are not needed (e.g. over the weekend). This reduces electricity consumption and heat production, so less cooling power is needed in your datacenter.
  • Cost efficient: you only pay per use. No costs for servers that are only used a few days a month, e.g. for generating a monthly sales report.
  • Secure your infrastructure. No need to secure (e.g. harden & patch) your servers since you do not maintain them anymore. But don’t forget about security in a serverless world. More to come later in the article.

This list of benefits is just the beginning, but it acts as a starting point for your organization to decide whether or not to migrate traditional applications.

Cost Matters

One of the most important benefits of serverless is the cost aspect. Before your organization can benefit from serverless, it’s essential to understand how the actual costs of serverless functions are calculated. If you don’t understand this, serverless functions can eat up a large part of your cloud budget, ruining your investment.

Costs are calculated (e.g. in AWS) based on a “pay per use” model. The details differ slightly between the major cloud providers and depend on how the serverless functions are set up and used, so it’s difficult to give an exact answer here. Use the following example as a starting point for some rough calculations for your applications.
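As a rough sketch, a Lambda-style bill is driven by the number of requests plus the compute time consumed (memory × duration, expressed in GB-seconds). The rates below are illustrative defaults, not official prices; always check your provider’s current pricing page:

```python
def monthly_lambda_cost(requests, avg_duration_ms, memory_mb,
                        price_per_request=0.20 / 1_000_000,
                        price_per_gb_second=0.0000166667):
    """Back-of-the-envelope monthly cost for one function.
    The default rates are illustrative, not official pricing."""
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return requests * price_per_request + gb_seconds * price_per_gb_second

# 5 million requests/month, 120 ms average runtime, 512 MB memory:
print(round(monthly_lambda_cost(5_000_000, 120, 512), 2))  # prints 6.0
```

Even millions of requests can land in the single-digit-dollar range, which is exactly why the model is attractive for spiky or low-volume workloads.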

Vendor lock-in equals a high price

Vendor lock-in is one of the downsides of running serverless functions. It’s not the functions themselves that are the challenge: from a source-code point of view, it’s easy to copy and paste code from one cloud provider to another. It’s the integration with other cloud-native services that matters here.
When deploying functions in AWS, you would use the AWS Lambda service to run your code. AWS Lambda can (and sometimes should) be integrated with or triggered from other AWS services like Amazon API Gateway or Amazon Kinesis, to name a few. These services exist only at AWS; Microsoft Azure and Google Cloud do not have them. This locks your AWS Lambda solution to AWS.
It’s difficult to migrate your functions to another cloud provider, since you also need to find a substitute for the connected services. Migrating those isn’t as easy as copy/pasting code; it involves re-creating the entire application architecture on another cloud. Unfortunately, while Infrastructure-as-Code tooling can recreate many resources without much effort, these tools are not yet able to make this migration a completely hands-off endeavour.
And being practically locked in (as it takes time and effort to actually migrate away) means the cloud vendors can, and often will, charge a premium for services adjacent to the serverless service itself, like API gateways. These hidden costs are known to creep up on you, with prices many times the cost of the Lambda functions themselves.

Different Cost Model makes estimating hard

The pay-per-request model of serverless is also fundamentally different. Cost estimations require a different approach: knowing how, and how much, your application is used. And to know that, you need to know how the design of your application is impacted by serverless pricing.

For instance, estimating the cost of a simple ‘guestbook’ application requires you to decompose the application on paper, figuring out which functions and methods the application consists of. How many enterprise architects or business analysts do you know who can do that?

Next, figure out the average number of calls and requests to each of the functions. That requires you to know the transactional flow users take through the application: logging in, viewing the guestbook, adding entries, and so on.

Then determine the average runtime of each function. Functions are billed based on their execution or processing time, so how long each function runs, times the number of times it is run, gives roughly the cost of each function. Similarly, there are bounds and limitations on memory usage.

But the story doesn’t end here. There’s associated costs to make functions available for consumption through API gateways, retrieving and storing data, network transfer costs, and more.
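Those associated costs can be folded into the same back-of-the-envelope estimate. All prices below (API gateway cost per million calls, data transfer per GB) are made-up illustrative numbers, as are the guestbook function names:

```python
def total_monthly_cost(function_costs, api_requests,
                       api_price_per_million=1.00,
                       gb_transferred=0.0, transfer_price_per_gb=0.09):
    """Sum per-function compute estimates with illustrative
    adjacent costs: API gateway calls and data transfer."""
    compute = sum(function_costs.values())
    api = api_requests / 1_000_000 * api_price_per_million
    transfer = gb_transferred * transfer_price_per_gb
    return compute + api + transfer

# Guestbook example: per-function compute estimates in dollars
guestbook = {"login": 0.40, "view_guestbook": 2.10, "add_entry": 0.75}
total = total_monthly_cost(guestbook, api_requests=3_000_000,
                           gb_transferred=50)
print(round(total, 2))  # prints 10.75
```

Note how the adjacent costs (API gateway and transfer) exceed the compute cost of the functions themselves, which matches the “hidden costs” warning above.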

As you can see, a whole lot of information is needed to estimate the costs of running your application. You can’t just refactor your application and lift-and-shift it to the cloud.

Measure it

A famous Dutch phrase is “meten is weten” (roughly translated: to know it, you should measure it). This information is probably not available in your organization, since most of these items are not very relevant when you run your own infrastructure. However, without it, it is impossible to know whether using serverless for an application is beneficial or not. Find more use cases and examples on the website of Simform.

Organizational impact

Drilling a little deeper into current applications reveals a big organizational impact. We all know the big monolithic application, with its large, difficult-to-manage code base and slow release cycles. We also know microservices: applications split up into multiple smaller pieces, each with its own lifecycle. And now we are talking about individual functions.

Serverless functions should be designed and coded in the most optimal way to really benefit from the “pay per use” cost model. The functions need to collaborate to make up the final application. This requires a good overview of which functions are needed, and in which order, for each (business) feature of the application.
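The idea of functions collaborating per business feature can be sketched as a simple chain. Real deployments would typically use a workflow service (such as AWS Step Functions); the checkout steps here are hypothetical stand-ins:

```python
# Hypothetical mapping of one business feature to an ordered chain
# of small functions, each taking and returning the order payload.
CHECKOUT_CHAIN = [
    lambda order: {**order, "validated": True},          # validate_basket
    lambda order: {**order, "charged": order["total"]},  # charge_payment
    lambda order: {**order, "confirmed": True},          # send_confirmation
]

def run_feature(chain, payload):
    """Run each function in order, passing the result along,
    the way a workflow/orchestration service would."""
    for step in chain:
        payload = step(payload)
    return payload
```

The overview the text asks for is exactly this mapping: which functions make up a feature, and in which order they fire.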

This brings us to logging and tracing of applications, which become much harder since there are many more moving parts. Make sure logging and auditing is done in a similar manner for all functions, so you can quickly pinpoint any errors. Debugging is also considered fairly difficult, since not all well-known Integrated Development Environments (IDEs) can be used.
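One common way to keep logs uniform across many small functions is to emit structured log lines that all carry a shared correlation ID. A minimal sketch, assuming each function receives or generates that ID (the field names are illustrative):

```python
import json
import logging
import uuid

logger = logging.getLogger("functions")

def log_event(function_name, correlation_id, message, **fields):
    """Emit one structured (JSON) log line; the shared correlation_id
    lets you trace a single request across many small functions."""
    record = {"function": function_name,
              "correlation_id": correlation_id,
              "message": message, **fields}
    line = json.dumps(record)
    logger.info(line)
    return line

# Every function handling the same request logs the same ID:
cid = str(uuid.uuid4())
log_event("add_to_basket", cid, "item added", item_id=42)
```

Filtering a central log store on one correlation ID then reconstructs the full path of a request, which substitutes for the single stack trace you would get in a monolith.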



Besides this, what about the role and tasks of the “traditional system administrator”? We already saw the rise of SRE and the changing role of these professionals in the organization. With serverless, and virtually no servers and systems to maintain, what will their role be? Maybe this role will disappear in the future, or shift to the service provider entirely? We don’t know yet. Keep this in mind if you employ a group of system administrators: it’s good for them to know about the move to serverless and the impact it might have on their jobs.


Security

In the public cloud, security is still extremely important, and the security aspects change when moving to serverless. It’s true that you no longer need to apply security patches to Virtual Machines, but security is not a solved problem: you definitely need to think about security for your applications.

Security concerns shift from the infrastructure layer to the application layer. Developers need to be aware of this, since the code they create runs directly in the cloud; there is no “extra layer” to provide protection (e.g. a server firewall or cross-site scripting protection). The boundaries between the outside world and the organizational (inside) world are no longer so clear. Security remains a serious topic.


Some considerations help to understand the impact:

  • Bigger attack surface. This item is linked to the previous section: all accessible functions are directly exposed, making them more vulnerable to attacks. Traditional applications have a limited number of endpoints, and a lot of code is “hidden” from the outside.
  • Security scanning for serverless functions is difficult. As of now, there are not many tools available that can scan serverless code for vulnerabilities, so developers need to check a lot of things manually. This increases the risk of security threats remaining undetected. The underlying virtual servers are inaccessible, since the cloud provider maintains them.
  • Heavy dependency on third-party libraries. Serverless functions rely on a lot of other (open source) libraries. Those libraries can (and probably will) contain security vulnerabilities. If those are not patched (and you don’t control them, nor see them easily), your code becomes vulnerable. Some tips to help you in this matter:
    • Build a list of dependencies and their versions.
    • Remove any unneeded dependencies/libraries.
    • Update the packages regularly and scan them using specialized tools.
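The first tip, building a dependency inventory, can be automated. In Python, for example, the standard library can list every package installed in the function’s runtime environment:

```python
from importlib import metadata

def dependency_inventory():
    """Return a {package: version} map of everything installed in
    the current environment - a starting point for a dependency
    audit before feeding the list to a vulnerability scanner."""
    return {dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()
            if dist.metadata["Name"]}

# Print a requirements-style listing of the inventory:
for name, version in sorted(dependency_inventory().items()):
    print(f"{name}=={version}")
```

Running this inside the function’s deployment package (rather than on a developer laptop) shows what is actually shipped to the cloud.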

Security aspects of serverless become a new chapter for your teams. A big effort is required from developers, since there is a lot of manual work involved. It’s also about a change in mindset: shifting security left shifts responsibility from infrastructure teams to development teams. Therefore it is important to be prepared, so the people involved understand what to expect.

For more serverless computing risks, be sure to check out the OWASP serverless top 10.

Final words

Serverless is an interesting proposition. It promises even faster development cycles and no time wasted on managing servers, containers and orchestrators, while giving up only a little flexibility. The pros outweigh the cons for many use cases.

Slowly, the early adopters are saying ‘build serverless first; if needed, move to containers’. There are some major points to take into consideration, though. Cost and lock-in of adjacent services are two of the major ones, but the cloud-native architectural approach also comes with a sometimes steep learning curve. It’s just so different from monolithic apps in a traditional on-prem environment.

Luckily, serverless isn’t going to take over the world anytime soon. Just like with containers, adoption is ramping up, but will be nowhere near complete in the 2020s. There’s ample time to learn serverless.

