In this final installment of our Amazic World series sponsored by F5, we’re looking at F5’s continued investments in datacenter security for container deployments using F5 Container Ingress Services.
This is post 4 of 4 in the Amazic World series sponsored by F5.
1. F5: from Code to Customer
2. Containers and CI/CD
4. Security & Data Centers
While F5 has made significant investments in cloud-native technologies through NGINX and Shape Security, it continues to invest in application delivery and security for the datacenter. Even as customers invest in digital transformation, cloud migration and cloud-native applications, they still rely on their existing application landscape.
The code to customer journey
In the journey from datacenter-focused applications to modern architectures, the datacenter still matters. It’s where your applications come from, and where a number of them will continue to run. Sometimes, the application crown jewels stay in the on-prem datacenter entirely. In that case, securing and delivering datacenter-based applications is as important as it’s ever been.
And as parts of the application landscape move to the various public cloud offerings, integrating application delivery across all these environments becomes a challenge of its own, adding complexity in areas like routing, load balancing and securing application traffic.
Optimizing container-based microservice and cloud-native application traffic
BIG-IP is F5’s workhorse for physical and virtualized environments. These application delivery controllers handle everything from full-proxy load balancing and web application firewalling to DDoS protection and advanced security and acceleration features.
How to protect against the OWASP Top 10
The Open Web Application Security Project (OWASP) helps enterprises reduce security shortcomings in their web applications. OWASP periodically publishes a list of the ten most critical web application security risks, the OWASP Top 10.
By adding support for containers to BIG-IP, customers can leverage their current investments in F5’s BIG-IP controllers, so they don’t need to start over with application delivery and security just for containers. Instead, BIG-IP delivers a seamless application delivery experience between microservices and traditional applications, allowing applications to be migrated and adapted to modern architectures slowly, surely and safely.
F5 Container Ingress Services (CIS) integrates with Kubernetes to automatically create the policies and services on BIG-IP systems, based on Kubernetes Pod and Service configurations, to load balance network traffic. CIS listens for configuration changes on the Kubernetes side and modifies the BIG-IP configuration on the fly, for a frictionless integration of BIG-IP into the container world without the need to retrain existing NetOps teams or change their processes.
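As a sketch of what this looks like in practice, CIS can pick up annotations on a standard Kubernetes Ingress resource. The annotation names below follow F5's CIS documentation, but the addresses, hostnames, partition and service names are all placeholder values:

```yaml
# Hypothetical sketch: exposing a Kubernetes Service through BIG-IP via
# F5 Container Ingress Services. All names, addresses and the partition
# are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
  namespace: default
  annotations:
    # Virtual server address CIS should create on the BIG-IP
    virtual-server.f5.com/ip: "192.0.2.10"
    # BIG-IP partition that CIS manages
    virtual-server.f5.com/partition: "kubernetes"
spec:
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service
                port:
                  number: 8080
```

CIS watches resources like this one and translates them into the corresponding virtual server and pool configuration on the BIG-IP, updating it whenever the underlying Pods or Services change.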
Using F5’s full-proxy architecture for containers increases the security posture and limits the threat vectors for microservices: sessions are fully terminated, essentially air-gapping services, while traffic is explicitly allowed using security and traffic policies. Another advantage lies in the more advanced inspection capabilities of full-proxy traffic management, such as DDoS mitigation and SSL termination at a very granular level.
By using BIG-IP for not just the on-prem datacenter, but also the on-cloud workloads, NetOps teams can ensure consistency, reliability, manageability and deployment velocity for all application services. Coupled with BIG-IQ, F5’s central management tooling, teams gain visibility into the multi-cloud estate and stay in control of security compliance by securing and managing the traffic flowing between on-prem datacenters and public cloud environments.
Finally, BIG-IP helps NetOps teams automate application delivery and security workflows, reducing complexity and decreasing the lead time for changes, so they can be applied more quickly and more often. By making consumption of the BIG-IP services self-service, application development and DevOps teams can consume security features and stay compliant without waiting on the NetOps team. This also frees up the NetOps teams to deliver more and better automated workflows, reducing toil even further.
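To make the automation idea concrete: workflows typically talk to BIG-IP's iControl REST API. The sketch below builds a minimal virtual-server definition and shows the call such a workflow would make. The hostname, credentials and object names are placeholders, and the live request is commented out because it needs a reachable BIG-IP:

```shell
#!/bin/sh
# Hypothetical sketch of automating BIG-IP via the iControl REST API.
# BIGIP_HOST, credentials and all object names are placeholders.
BIGIP_HOST="bigip.example.com"

# Minimal virtual-server definition: listen on 192.0.2.10:443 and send
# traffic to an existing pool named app1_pool.
PAYLOAD='{"name":"app1_vs","destination":"192.0.2.10:443","pool":"app1_pool"}'

# In a real workflow, this request creates the virtual server:
# curl -sk -u "admin:$BIGIP_PASS" \
#      -H "Content-Type: application/json" \
#      -d "$PAYLOAD" \
#      "https://$BIGIP_HOST/mgmt/tm/ltm/virtual"

# Print the payload so the sketch can be inspected without a BIG-IP.
echo "$PAYLOAD"
```

Wrapping calls like this in version-controlled scripts or pipelines is what makes BIG-IP services consumable as self-service by DevOps teams.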
Web Application Firewall across on-prem, cloud and cloud-native
Just as F5 Container Ingress Services blurs the lines between the on-prem and cloud-native world, F5’s battle-tested Web Application Firewall (or WAF for short) doesn’t just run on BIG-IP, but also on the lightweight NGINX platform. This gives developers and DevOps teams the flexibility to use open source tooling (like ModSecurity), as well as enterprise-ready options.
And because NGINX App Protect is built on the same code, security and network teams can interchange security policies between BIG-IP and NGINX App Protect seamlessly, without rewriting a single line of code. That means teams can configure a security policy in the BIG-IP graphical interface (with a machine learning engine and integration with the leading vulnerability scanners) and export the resulting policy to NGINX App Protect. This re-use of existing technology immediately improves adoption and security posture for new cloud-native applications, regardless of whether they run on-prem, in a virtual machine, as a container or as a cloud-native service.
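On the NGINX side, attaching such an exported policy is a small configuration change. The snippet below is a sketch: the `app_protect_*` directives come from NGINX App Protect, while the module path, policy file path, hostnames and upstream name are placeholder assumptions:

```nginx
# Hypothetical sketch: enabling NGINX App Protect with a policy authored
# on BIG-IP and exported as JSON. Paths and names are placeholders.
load_module modules/ngx_http_app_protect_module.so;

http {
    server {
        listen 443 ssl;
        server_name app1.example.com;

        # Turn on the WAF and point it at the exported policy file
        app_protect_enable on;
        app_protect_policy_file /etc/app_protect/conf/app1_policy.json;

        location / {
            proxy_pass http://app1_backend;
        }
    }
}
```

Because the policy file itself is what travels between platforms, the same ruleset protects an application whether it sits behind a BIG-IP or an NGINX instance in a container.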
This is a huge advantage from both a security compliance as well as an operational perspective. Typical F5 customers have hundreds or even thousands of applications running across multiple environments, datacenters, cloud VPCs and regions.
F5’s BIG-IP continues to be the go-to solution for application delivery in the datacenter. By adding support for public cloud and containers, NetOps and SecOps teams can continue to use BIG-IP for security and application delivery, and expand its usage into the container and public cloud world with NGINX, securing traditional and modern applications from a single pane of glass.
With BIG-IP, F5 Container Ingress Services and NGINX App Protect, customers can start their container and cloud journey with confidence, full visibility and control over security and traffic management policies.
NetOps teams can continue to work with their trusted infrastructure load balancer, providing security, visibility and governance, while DevOps teams can use Kubernetes and publish services to the internet without breaking security protocols.