
What is a Service Mesh and how do you choose the right product

I have found the concept of a service mesh a difficult one to explain meaningfully to the Board, so I thought I would gather my thoughts here and, hopefully, aid you the reader at the same time.

If you are a CTO or a board member of an organization that relies on cloud-native applications, you might have heard of the term “service mesh” before. But what exactly is it and how can it benefit your business?

A service mesh is a dedicated infrastructure layer that manages the communication and coordination between different microservices that make up your application. It provides features such as service discovery, load balancing, encryption, authentication, authorization, observability, and fault tolerance, among others.

A service mesh can help you address some of the common challenges and pain points of microservices architecture, such as:

  • Complexity: As your application grows and scales, so does the number of microservices and their interactions. This can make it hard to monitor, debug, and secure your system. A service mesh can abstract away the complexity and provide a unified view and control of your services.
  • Consistency: Different microservices may use different languages, frameworks, and protocols, which can lead to inconsistency and compatibility issues. A service mesh can standardize the communication and configuration across your services, regardless of their implementation details.
  • Reliability: Microservices are distributed and dynamic, which means they can fail or change at any time. A service mesh can ensure that your services are resilient and responsive to failures, changes, and network conditions.
  • Security: Microservices communicate over the network, which exposes them to various threats and attacks. A service mesh can encrypt and authenticate the traffic between your services, as well as enforce policies and rules to protect your data and resources.

How did the concept of a service mesh evolve and why is it needed?

Time for a little bit of a history lesson. A service mesh is a step on the journey into application scale and distribution. We can trace its origins back to the original three-tier model, where we moved away from a monolithic, single-machine, single-function application and split it into logical layers: a user layer (a fat client application or a web front end), an application layer that handled the business logic, and finally a data layer.

At the client layer we had servers like Apache or NGINX to handle load balancing, retries, and proxying (both forward and reverse). This model was very successful until the rise of the mega global application (think Twitter, Netflix, Amazon, etc.).

The sheer number and complexity of interactions needed a new model of operations; this led to the rise of microservices, breaking the new monolith of the three-tier application into independently running pieces. That in turn created new issues: it was no longer just north-south traffic you needed to worry about, but also east-west traffic between the services in your application. If this went wrong, your application went down.

These hyperscale companies created tools (Stubby, Hystrix, Finagle) that laid the foundations for what became the first service meshes. These were still not a service mesh, but a set of libraries and fat clients. Although these libraries still exist, their use was superseded by the next evolution: the proxy server. Proxies are discrete processes, so deployment complexity is dramatically reduced. Their use also sidesteps a major operational issue with libraries, namely that they are language specific (a separate library is needed for every language that uses one), and that upgrading a library is a major operational undertaking. The introduction of a proxy meant a single API with potentially polyglot capabilities.

[Figure: Before service mesh] Life was more complicated when trying multi-cloud before the service mesh (copyright HashiCorp)

OK, that's the history lesson.

Most importantly for larger organisations, implementing service access in proxies rather than libraries shifts responsibility for runtime operations functionality from individual service owners to the platform engineering team, the end users of this functionality. This provider-consumer alignment gives these teams autonomy and decouples complex dev-ops dependencies.

These factors have helped proxies become a runtime sanity tool. The next iteration, the service mesh, standardises runtime operations across the organisation by deploying a distributed “mesh” of proxies that can be maintained as part of the underlying infrastructure, and by providing centralised APIs to analyse and operate on this traffic.

So what is a service mesh?

A service mesh, at its most basic, is a dedicated infrastructure layer that uses proxies to facilitate service-to-service communication between the microservices or services in an application. The service mesh provides observability, security, and reliability for the network traffic, both between the microservices themselves and between the services and their end users. A service mesh consists of network proxies that run alongside each service or instance, called the data plane, and a set of management processes, called the control plane.
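To make the data plane/control plane split concrete, here is a minimal, purely illustrative sketch of what a “meshed” workload looks like on Kubernetes once a sidecar has been injected. The names, images, and port are hypothetical and not taken from any particular product:

```yaml
# Illustrative only: a pod after sidecar injection.
# The application container is untouched; the mesh's control plane
# injects a proxy container that handles the pod's inbound and outbound traffic.
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
    - name: app                   # your service, unchanged
      image: example/orders:1.0   # hypothetical image
    - name: mesh-proxy            # the data-plane proxy (e.g. Envoy or linkerd-proxy)
      image: example/proxy:1.0    # hypothetical image
      ports:
        - containerPort: 15001    # illustrative proxy port
```

In practice you never write this by hand; the control plane's injector adds the proxy container automatically, which is exactly what makes the mesh transparent to service owners.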

A service mesh can help you manage the complexity and challenges of microservices architectures, such as service discovery, load balancing, fault tolerance, encryption, authentication, authorization, monitoring, and tracing. However, not all service meshes are created equal. There are different approaches and technologies that can affect how well a service mesh meets your needs and expectations.

A great analogy for the service mesh: imagine you are a child in your playroom with a lot of toys, and you want to play with them all. But you don’t want to go around the room picking up each toy one by one. Instead, you ask a friend to pick up the toys and bring them to you. Your friend is like the service mesh: they get you all the toys you want without you having to go around the room yourself.

[Figure: After service mesh] This looks a heck of a lot simpler than the previous option (copyright HashiCorp)

The rest of this article will compare five popular service mesh solutions: HashiCorp Consul, Kong, Istio, Linkerd, and Kuma. We will explain the differences in approach and the strengths and weaknesses of each. Finally, we will give some reasons why you, as a CTO, CIO, or solution architect, should be considering a service mesh strategy, with regard to both cloud-native and legacy applications.

Below is a table of several expected features that the majority of service mesh products should provide.

Feature                        Consul  Kong  Istio  Linkerd  Kuma
Service discovery              Yes     Yes   Yes    Yes      Yes
Load balancing                 Yes     Yes   Yes    Yes      Yes
Traffic routing                Yes     Yes   Yes    Yes      Yes
Traffic encryption             Yes     No    Yes    Yes      No
Traffic observability          Yes     Yes   Yes    Yes      Yes
Traffic control policies       Yes     Yes   Yes    No       No
Service identity and security  Yes     No    Yes    No       No
Multi-cluster support          No      No    Yes    No       No
Multi-mesh support             No      No    No     No       Yes

Table 1: Based on the information available on each vendor’s official website

Consul

Consul is an open source service mesh solution from HashiCorp that provides service discovery, configuration, and segmentation functionality; it can run on any platform and integrate with any runtime environment. Consul extends its functionality by using a sidecar proxy pattern to inject Envoy proxies alongside each service instance. The Consul control plane provides a central registry of services and their locations, as well as policies and configuration for the data plane.
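As an illustration of those data-plane policies, here is a minimal sketch of a service intention, assuming Consul is running on Kubernetes with its CRD controller; the service names web and api are placeholders:

```yaml
# Hypothetical example: allow traffic from "web" to "api"
# (relevant when a default-deny intention policy is in place).
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: api
spec:
  destination:
    name: api          # the service receiving traffic
  sources:
    - name: web        # the calling service
      action: allow
```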

Some of the benefits of Consul are:

  • It is platform-agnostic and can run on any cloud or on-premises environment.
  • It supports multiple data center deployments and can scale horizontally across regions.
  • It has a simple and intuitive user interface and a rich set of APIs for automation and integration.
  • It has a strong community and ecosystem of partners and integrations.

Some of the drawbacks of Consul are:

  • It lacks some advanced features that other service meshes offer, such as traffic shifting, routing rules, retries, timeouts, circuit breaking, etc.
  • It requires additional components and configuration to enable observability and security features.
  • It has a steep learning curve and requires expertise in Consul’s configuration language (HCL).

Kong

Kong Mesh is a service mesh solution from Kong Inc. that provides API gateway and service connectivity functionality; it can run on any platform and integrate with any runtime environment. Kong Mesh uses a hybrid mode to deploy its data plane and control plane components. The data plane consists of Kong Gateway instances that act as proxies for the services, while the control plane consists of Kong Manager instances that provide a graphical user interface and APIs for managing the data plane.
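Since Kong Mesh is built on Kuma (as noted later in this article), its connectivity policies follow the Kuma resource model. A minimal, hypothetical sketch of a traffic permission on Kubernetes, using placeholder service names, might look like this:

```yaml
# Hypothetical example: permit traffic from the "web" service to the "backend" service.
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: web-to-backend
spec:
  sources:
    - match:
        kuma.io/service: web
  destinations:
    - match:
        kuma.io/service: backend
```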

Some of the benefits of Kong are:

  • It is platform-agnostic and can run on any cloud or on-premises environment.
  • It supports multiple data center deployments and can scale horizontally across regions.
  • It has a powerful plugin system that allows users to extend its functionality with custom logic or third-party integrations.
  • It has a robust security model that supports mutual TLS, OAuth 2.0, JWT, etc.

Some of the drawbacks of Kong are:

  • It requires additional components and configuration to enable observability features such as metrics, logs, and traces.
  • It has a complex architecture that involves multiple components and dependencies.
  • It has a high resource consumption and performance overhead compared to other service meshes.

Istio

Istio is an open source service mesh solution from Google, IBM, and Lyft that provides traffic management, security, observability, and policy enforcement functionality. Istio runs on any Kubernetes-based platform and integrates with any runtime environment, and it also uses a sidecar proxy pattern to inject Envoy proxies alongside each service instance. The Istio control plane is more complicated, as it consists of three components: Pilot, Mixer, and Citadel (in more recent Istio releases these have been consolidated into a single component, istiod, and Mixer has been retired). Pilot provides service discovery and traffic routing rules for the data plane. Mixer provides telemetry collection and policy enforcement for the data plane. Citadel provides certificate management and encryption for the data plane.
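As an example of the traffic routing rules the control plane distributes to the data plane, here is a minimal sketch of weight-based traffic shifting, assuming a hypothetical reviews service whose v1 and v2 subsets are already defined in a DestinationRule:

```yaml
# Sketch: send 90% of traffic to v1 and 10% to v2 of the "reviews" service.
# Assumes a DestinationRule elsewhere defines the v1/v2 subsets.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```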

Some of the benefits of Istio are:

  • It is Kubernetes-native and can leverage the features and benefits of Kubernetes.
  • It supports multiple cluster deployments and can federate services across different clusters.
  • It has a rich set of features that cover various aspects of service mesh functionality such as traffic shifting, fault injection, rate limiting, mirroring, etc.
  • It has a strong community and ecosystem of partners and integrations.

Some of the drawbacks of Istio are:

  • It is Kubernetes-specific and cannot run on other platforms or environments.
  • It has a high resource consumption and performance overhead compared to other service meshes.
  • It has a steep learning curve and requires expertise in Istio’s configuration language (YAML).

Linkerd

Linkerd is an open source service mesh solution from Buoyant that provides reliability, security, and observability functionality. Like Istio, it can run on any Kubernetes-based platform and integrate with any runtime environment. It also uses a sidecar proxy pattern, injecting its own lightweight Linkerd proxy alongside each service instance. The control plane consists of four components: Controller, Destination, Identity, and Proxy Injector. The Controller provides the APIs and user interface for managing the data plane. Destination provides service discovery and routing information for the data plane. Identity provides certificate management and encryption for the data plane. The Proxy Injector injects the Linkerd proxy into each service instance.
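To show how the Proxy Injector is typically triggered, here is a minimal sketch that opts a hypothetical namespace into the mesh via an annotation; any pod subsequently created in that namespace gets the linkerd-proxy sidecar added automatically:

```yaml
# Sketch: mesh every workload created in the "demo" namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: demo                      # placeholder namespace
  annotations:
    linkerd.io/inject: enabled    # tells the Proxy Injector to add the sidecar
```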

Some of the benefits of Linkerd are:

  • It is Kubernetes-native and can leverage the features and benefits of Kubernetes.
  • It supports multiple cluster deployments and can federate services across different clusters.
  • It has a simple and lightweight design that focuses on core service mesh functionality such as retries, timeouts, load balancing, etc.
  • It has a low resource consumption and performance overhead compared to other service meshes.

Some of the drawbacks of Linkerd are:

  • It is Kubernetes-specific and cannot run on other platforms or environments.
  • It lacks some advanced features that other service meshes offer, such as traffic shifting, fault injection, rate limiting, mirroring, etc.
  • It has a limited set of integrations and plugins compared to other service meshes.

Kuma

Our final product, Kuma, is an open source service mesh solution from Kong Inc. that provides connectivity, security, observability, and policy enforcement functionality. Kuma can run on any platform and integrate with any runtime environment. Kuma uses a sidecar proxy pattern to inject Envoy proxies alongside each service instance. Kuma’s control plane consists of two components: Kuma CP and the Kuma GUI. Kuma CP provides the APIs and logic for managing the data plane, while the Kuma GUI provides a graphical user interface for it.

In terms of the differences between the two Kong products in this article, Kong Mesh builds upon Kuma, adding enterprise features and support.
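As an example of the policies Kuma CP manages, here is a minimal sketch that turns on mutual TLS for the default mesh using Kuma’s built-in certificate authority, assuming Kuma is running on Kubernetes:

```yaml
# Sketch: enable mTLS across the "default" mesh with Kuma's builtin CA.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin      # Kuma generates and rotates the certificates itself
```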

Some of the benefits of Kuma are:

  • It is platform-agnostic and can run on any cloud or on-premises environment.
  • It supports multiple cluster deployments and can federate services across different clusters.
  • It has a simple and intuitive user interface and a declarative configuration language (YAML).
  • It has a modular architecture that allows users to choose which features to enable or disable.

Some of the drawbacks of Kuma are:

  • It requires additional components and configuration to enable observability features such as metrics, logs, and traces.
  • It has a high resource consumption and performance overhead compared to other service meshes.
  • It has a limited set of integrations and plugins compared to other service meshes.

Five reasons to consider a service mesh strategy

A service mesh can provide many benefits for your application and services, especially if you are adopting a microservices architecture or moving to a cloud-native environment. When considering a service mesh strategy for your environment, you need to think about the following:

  • A service mesh can improve the reliability of your application by providing features such as load balancing, retries, timeouts, circuit breaking, health checks, etc. These features can help you handle failures gracefully and prevent cascading errors or outages.
  • A service mesh can improve the security of your application by providing features such as mutual TLS, encryption, authentication, authorization, etc. These features can help you protect your data and services from unauthorized access or tampering (a minimal example of enforcing mutual TLS follows this list).
  • A service mesh can improve the observability of your application by providing features such as metrics, logs, traces, etc. These features can help you monitor the performance and behavior of your services and identify issues or bottlenecks quickly.
  • A service mesh can improve the agility of your application by providing features such as traffic shifting, fault injection, rate limiting, mirroring, etc. These features can help you test new features or changes in production without affecting users or services.
  • A service mesh can improve the compatibility of your application by providing features such as protocol conversion, header manipulation, content transformation, etc. These features can help you integrate legacy applications or services with modern ones without requiring code changes or modifications.
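To make the security point above concrete, here is a minimal sketch of requiring mutual TLS for every workload in the mesh, assuming Istio as the mesh in question:

```yaml
# Sketch: require mTLS for all services in the mesh (Istio).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # applies mesh-wide when created in the root namespace
spec:
  mtls:
    mode: STRICT
```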

Summary

A service mesh is a powerful tool that can help you manage the complexity and challenges of microservices architectures or cloud-native environments. However, choosing the right service mesh for your application depends on various factors such as your platform, environment, requirements, expectations, etc.

In this article, we compared five popular service mesh solutions: Consul, Kong, Istio, Linkerd, and Kuma. We explained the differences in approach and the strengths and weaknesses of each. We also gave five reasons why you should consider a service mesh strategy for your application.

We hope this article helped you understand what a service mesh is and how to choose the right one for your application.
