
Adopting zero-trust security with a Service Mesh

As organizations become more cloud native, adopting Kubernetes, container-based applications, and microservices, they are realizing that doing security right at scale is non-trivial. Kubernetes is great at deploying and running container-based applications, but it does not provide security automatically, and certainly not a zero trust security model. That is a problem when many application instances spin up and down as new versions are deployed and scaled across multiple cloud platforms to meet customer demand.

Doing zero trust networking right, at scale and across clouds, requires more than just Kubernetes. It requires a service mesh to manage security at a higher level of abstraction than standard Kubernetes constructs, such as kube-proxy and ingress load balancing, can provide.

A service mesh standardizes these additional constructs into reusable patterns and policy-based security and networking, so that application developers do not have to spend time on the plumbing of a zero trust networking approach.
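To make "policy-based security" concrete, here is a minimal sketch of what a zero trust authorization posture can look like with Istio, the mesh discussed later in this post: deny all traffic to a namespace by default, then explicitly allow only known callers. The namespace and service account names are hypothetical, chosen purely for illustration.

```yaml
# Deny all traffic to workloads in the (hypothetical) "payments" namespace
# by default; an AuthorizationPolicy with an empty spec matches all workloads
# in the namespace and allows nothing.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: payments
spec: {}
---
# Explicitly allow only workloads running as the (hypothetical) "orders"
# service account to call workloads in "payments".
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-orders
  namespace: payments
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/orders/sa/orders"]
```

The key point is that these rules live in the platform, not in application code: developers get a default-deny posture and identity-based access control without writing any of the underlying plumbing.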

Most service mesh solutions implement a sidecar architecture, which deploys the necessary networking infrastructure alongside the application, hence the name. These sidecar containers redirect traffic to and from the application containers to make the mesh work, and add capabilities such as circuit breaking, service discovery, load balancing, encryption using mTLS, tracing instrumentation, metrics collection, and authentication and authorization. All of this happens without altering the original application, which makes the approach compatible with all sorts of applications.
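As a brief sketch of how little the application has to change, the example below assumes Istio as the mesh and a hypothetical namespace called "payments": a single namespace label turns on automatic sidecar injection, and a small policy requires mTLS for all workload-to-workload traffic in that namespace.

```yaml
# Label the namespace so Istio automatically injects its sidecar proxy
# into every pod scheduled in it (hypothetical namespace "payments").
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    istio-injection: enabled
---
# Require mTLS between workloads in the namespace; plaintext connections
# are rejected by the sidecar proxies.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments
spec:
  mtls:
    mode: STRICT
```

The application containers themselves are untouched; the encryption, identity, and traffic handling are provided entirely by the injected sidecars.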

Diving into the specific features, what they do and how they work is a bit much for this blog post. However, if you are interested in the what and how, the free ebook Service Mesh for Mere Mortals is a great resource to get you started.

The most commonly used service mesh is Istio, an open source mesh that, like Kubernetes, originated at Google. Istio is platform independent, which means it runs in heterogeneous environments: the big three cloud platforms, on-prem, bare metal, at the edge, and even on platforms other than Kubernetes.

This allows companies to use Istio to abstract away networking and security differences in multi-cloud scenarios, so organizations can deploy applications consistently across clouds and on-prem, reducing complexity and knowledge requirements and making the developer's life easier.

By overlaying a service mesh across clouds and environments, organizations can gain back control over multi-cloud complexity, enabling teams to deploy applications wherever they need to, instead of being limited by technological shortcomings. This makes applications truly portable.

However, effectively managing applications that run across multiple clouds and environments requires centralized management of the infrastructure, including the service mesh. A consistent platform layer, such as Mirantis Flow, allows system and software engineers alike to operate different environments as one.

If you have questions related to this topic, feel free to book a meeting with one of our solutions experts by emailing sales@amazic.com.
