
Protecting your serverless workloads is not a “nice to have”

Serverless computing has gained a lot of traction over the last couple of years. Many organizations are already experimenting with it or using it to its full potential for their applications. As described in a previous article, its popularity is likely to keep increasing in the upcoming year. When running functions in the cloud (FaaS), security should be treated differently than on traditional infrastructure. Protecting your serverless workloads is not a “nice to have”: it’s a must to stay secure at all times. Read on for some best practices and learn how to avoid common pitfalls.

WAF is not enough

Let’s start this article by zooming in on a rather traditional way to protect servers. In the public cloud you can implement a WAF, a Web Application Firewall. Like a traditional firewall, it sits in front of your endpoints/services and determines which traffic is allowed to reach your services and which is blocked. Sometimes a WAF also offers more advanced features such as DDoS protection, shielding a system from a flood of malicious traffic.

A WAF is not enough to protect your functions: it cannot protect other cloud (native) services such as storage, nor can it prevent database modifications or guard services which receive streaming data from other services.

You need additional layers of protection, for example an API gateway. The API gateway filters the incoming traffic of your consumers before it reaches the functions at the heart of your business logic. The cloud provider handles this level of protection: the number of invocations of your functions is limited, which also saves money (remember: with functions you pay per use). As soon as users are authenticated, the API gateway handles throttling and quotas, further reducing the number of invalid and/or disruptive requests to your functions.
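As a minimal sketch of what that looks like on AWS (assuming boto3; the API id, stage name and limits below are placeholders), a usage plan can enforce both throttling and a quota on an API Gateway stage:

```python
import boto3

apigw = boto3.client("apigateway")

# Hypothetical REST API and stage; replace with your own identifiers.
apigw.create_usage_plan(
    name="default-consumer-plan",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    throttle={
        "rateLimit": 50.0,   # steady-state requests per second
        "burstLimit": 100,   # short burst allowance
    },
    quota={
        "limit": 100000,     # maximum number of requests...
        "period": "MONTH",   # ...per month
    },
)
```

Requests above these limits are rejected at the gateway, so they never invoke (or bill) your functions.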

The least privilege principle

You have probably seen this principle over and over again. In the cloud era it is more important than ever, and serverless is no exception. Why? Because there is no clear perimeter between the cloud environment and the “outside world”. That line is much more blurred than in a traditional datacenter with a fixed perimeter. Instead, each cloud service, and each function, needs its own perimeter.

Other services and users (with certain roles) should trigger functions with the bare minimum of privileges needed to do so. For example: a function should only be triggered by a web front-end through the API gateway, not by a user accessing the function directly.

This way, unauthorized access to the function is blocked, which limits the attack surface and the damage that can be caused after an exploit. The extra challenge is to keep the required permissions and access maintainable and in sync with the function itself.
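A minimal sketch of such a perimeter on AWS (assuming boto3; the function name and source ARN are hypothetical) is a resource policy that only allows the API Gateway service, and only one specific API route, to invoke the function:

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.add_permission(
    FunctionName="order-processing",                 # hypothetical function
    StatementId="AllowInvokeFromApiGatewayOnly",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    # Restrict the permission to one specific API, stage and route
    # instead of allowing any caller to invoke the function directly.
    SourceArn="arn:aws:execute-api:eu-west-1:123456789012:a1b2c3d4e5/prod/POST/orders",
)
```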

Always perform input validation

A “classic” web application uses a form filled in by a consumer and validates the input before it is processed. Serverless works differently, since it depends heavily on “events” and “triggers” coming from other services. To make things even more complicated, a function can typically handle synchronous, asynchronous and even streaming data at the same time. Data flows are far less predictable.

All of this data acts as unfiltered input to your function if you don’t perform any input validation. Remember that many events come from other services, without any user intervention, and keep an eye on services like S3 which could hold malicious data uploaded by human users. Also remember that functions are not always called in a “logical order”, or in the order you designed upfront.

If an event sends data to your function directly, your function is exposed to anything inside that function call, and malicious data can be used to exploit it. Proper input validation helps you prevent SQL injection, run-time code injection, NoSQL injection, Server-Side Request Forgery (SSRF), and more. Injection is (still) number 1 in the OWASP Top 10, so this is one of the most important security measures you can take.

Some examples to get you started: validate the expected data length and data type, and don’t pass data on blindly. Escape input data properly and use parameterized queries or an ORM before running (dynamic) queries against your database. Basically, perform the same input checks as for “classic” web applications. Luckily, functions can be chained, so one function can perform the input validation before passing the data on to another function.
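To make this concrete, here is a minimal validation sketch for a hypothetical Python function that receives a JSON body through an API Gateway proxy event (the field names and limits are assumptions):

```python
import json

MAX_NAME_LENGTH = 64

def handler(event, context):
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "Invalid JSON"}

    name = body.get("name")
    quantity = body.get("quantity")

    # Validate presence, type and length before the data touches any backend.
    if not isinstance(name, str) or not name.strip() or len(name) > MAX_NAME_LENGTH:
        return {"statusCode": 400, "body": "Invalid 'name'"}
    if not isinstance(quantity, int) or quantity < 1:
        return {"statusCode": 400, "body": "Invalid 'quantity'"}

    # Never build queries by concatenating strings; bind parameters
    # (or use an ORM) so the input cannot change the query structure, e.g.:
    # cursor.execute("INSERT INTO orders (name, quantity) VALUES (%s, %s)",
    #                (name, quantity))

    return {"statusCode": 200, "body": json.dumps({"accepted": True})}
```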

Assign proper resources

One of the benefits of serverless is the “pay per use” model: you pay for the time your functions run and for the amount of compute power and memory they require. This helps you keep costs to a minimum.

Besides this, from a security perspective it’s wise to prevent any function from running longer than strictly needed. During an active exploit, that leaves less time to actually run malicious code: if a function times out sooner rather than later, less damage can be done when the function is poisoned with malicious code.

It also limits the time in which the function can consume resources that drive up the bill at the end of the month, so avoid resource exhaustion. Your organization is responsible for the content and services that run on those resources, an extra reason to keep this in mind.
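As an illustration on AWS (assuming boto3; the function name and the exact limits are placeholders to be tuned per function), timeout, memory and concurrency can be capped explicitly:

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep the timeout and memory as tight as the function realistically needs.
lambda_client.update_function_configuration(
    FunctionName="order-processing",  # hypothetical function
    Timeout=10,                       # seconds
    MemorySize=256,                   # MB
)

# Optionally cap how many copies can run at once, limiting the blast radius
# (and the bill) of an invocation flood.
lambda_client.put_function_concurrency(
    FunctionName="order-processing",
    ReservedConcurrentExecutions=20,
)
```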

Block deployments which violate your security thresholds

Perhaps you’ve read the blog which outlines the shift-left principle. This principle also applies in the context of securing serverless workloads: security violations and vulnerabilities should be detected as early as possible. If your function uses third-party dependencies with known vulnerabilities or other security-related violations, you should not even continue deploying that function. The tricky part is that you can’t always see which third-party dependencies are part of your function, so dependency management becomes a bit harder than in the traditional way of deploying applications. It’s best to integrate the vulnerability scanning tool into the developers’ IDE (Integrated Development Environment), so they know about problematic dependencies even before the application travels through the CI/CD pipeline.

If you have already passed this phase, break the CI pipeline flow as soon as possible and give your developers feedback so they can fix the violation and/or vulnerability. Tools like Sysdig or Veracode can help you secure your applications in the container and serverless world, and they offer plenty of off-the-shelf solutions for AWS as well as Google Cloud. When running containers, Sysdig provides a plugin for kubectl (to control a Kubernetes cluster) which helps you capture the data inside a container and pinpoint suspicious behavior.
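A minimal sketch of such a “break the build” step, assuming a Python function whose dependencies are listed in requirements.txt and a CI image that has the pip-audit scanner installed (any comparable scanner works the same way):

```python
import subprocess
import sys

# Audit the declared dependencies; pip-audit exits non-zero when it finds
# known vulnerabilities.
result = subprocess.run(
    ["pip-audit", "--requirement", "requirements.txt"],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    print("Vulnerable dependencies found - blocking deployment", file=sys.stderr)
    sys.exit(1)  # a non-zero exit code fails the pipeline stage
```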

It’s also very important to have a good patch strategy in place. When vulnerabilities are patched automatically (some tools can do this), your functions should still work as expected and incompatible dependencies should be avoided. Proper unit tests help you detect such problems as soon as possible.
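For example, a small pytest sketch (assuming the validation handler sketched earlier lives in a hypothetical app.py) that runs in the pipeline after every automated dependency update:

```python
import json

from app import handler  # hypothetical module containing the function handler

def test_rejects_malformed_json():
    response = handler({"body": "{not json"}, context=None)
    assert response["statusCode"] == 400

def test_accepts_valid_order():
    event = {"body": json.dumps({"name": "widget", "quantity": 2})}
    response = handler(event, context=None)
    assert response["statusCode"] == 200
```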

Monitor your functions

This section highlights two best practices which are not so common, but no less important.

Don’t assume you have total control over your functions, even if your developers created them themselves. If a workstation is compromised, sloppy or malicious code can sneak into those functions. Imagine what can happen if that developer has (admin) access to your production environment in the cloud: a function with malicious code can cause a real rampage in your cloud environment. Yet another reason to always use pipelines and proper security checks before deploying a function to a cloud-based environment.

Functions are something to monitor, just like other resources in the cloud. Since the number of functions grows very fast in a large application, it’s even more important to “tag” them properly. Tagging gives you a way to add metadata to your functions so you can easily distinguish them or group them together, which makes monitoring a little easier.
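A minimal tagging sketch on AWS (assuming boto3; the ARN and tag values are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach metadata so the function can be grouped and filtered in
# monitoring and billing tools.
lambda_client.tag_resource(
    Resource="arn:aws:lambda:eu-west-1:123456789012:function:order-processing",
    Tags={
        "application": "webshop",
        "environment": "production",
        "team": "payments",
    },
)
```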

When you need to analyze the logs of a function, don’t do this manually: let a tool do it, since the volume of logs across all of your functions can be overwhelming. A single function can generate a lot of different log files and audit trails. Using a tool also keeps the logs “clean”: there is no way for you to accidentally invalidate the logs and remove the traces of an attack. Keep the evidence intact and leave log analysis to the (right) tools.

Common patterns to take away

To conclude this article, some last common patterns to take away:

  • Use secure coding conventions everywhere, just as you already do for your traditional application (components).
  • Secure and verify data in transit and at rest using proper security protocols and mechanisms.
  • Don’t reuse code you don’t fully understand; a lot of vulnerabilities creep in simply by copying and pasting code from third-party sources like GitHub.
  • Handle secrets through a centralized secrets management tool, as outlined in the article about secrets management.
  • Switch off verbose error messages and handle critical errors (exception handling) correctly. Do not expose more information than strictly needed, as error details give away hints about your internal logic and reveal weaknesses which attackers can exploit (see the sketch after this list).
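A minimal sketch of that last point for a hypothetical Python function (process_order stands in for the real business logic): log the full details for your own analysis, but return only a generic message to the caller.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def process_order(event):
    # Stand-in for the real (hypothetical) business logic.
    return {"accepted": True}

def handler(event, context):
    try:
        result = process_order(event)
        return {"statusCode": 200, "body": json.dumps(result)}
    except Exception:
        # The full stack trace goes to the (protected) logs, not to the consumer.
        logger.exception("Unhandled error while processing request")
        return {"statusCode": 500, "body": json.dumps({"error": "Internal error"})}
```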

The best practices presented here are just the beginning, but it’s good to keep them in mind when shifting from virtual machines, via containers, to serverless. The journey demands a lot from your development and security departments, but it’s worth the effort to fully reap the benefits of serverless.
