In a multi-cloud, hybrid IT world where applications are deployed as microservices, the service mesh is becoming an important (although not mandatory) component of cloud-native architecture. Early deployments of the technology – which promises network routing, security and configuration control for microservices-based applications – are largely based on open source code, with Envoy emerging as a de facto standard data plane.
The 451 Take
Why all the excitement about service mesh? Because it has the potential to become a Swiss Army knife of modern-day software, solving some of the most vexing challenges of distributed microservices-based applications. The technology, which defines and controls networking at the application layer, is not suitable for every use case, but the ecosystem explosion around Envoy is creating proof points and surfacing problems that the open source community – the ultimate self-healing network – is striving to solve. It's still early, though, and the counteractive forces of innovation versus integration have yet to play out. The opportunity for innovation here is significant – see Figures 1 and 2 below.
What is a Service Mesh, Anyway?
A service mesh is an evolution of the traditional API gateway, which offers a single point of entry for traffic into an application. With traditional software, the job of the API gateway is to intercept the data coming into the system at the edge and apply checks to authenticate and configure it so that it can be processed. Once inside the application, communication is handled by function calls – lines of code that deliver the information to functions where the software does its work – for example, creating an invoice, writing to a database or triggering another function. In a microservices architecture, those in-process function calls become network calls between services, and the service mesh takes on the job of routing, securing and observing them.
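The pattern can be pictured as a sidecar proxy sitting between the caller and the service it calls. The sketch below is illustrative only – the names are hypothetical and do not correspond to any particular mesh's API – but it shows the core idea: application code keeps making what looks like a plain call, while the proxy injects cross-cutting concerns such as authentication and request counting.

```python
# Sketch: in a monolith, "create an invoice" is an in-process function call;
# in a microservices deployment it becomes a network call that a sidecar
# proxy (like Envoy) can intercept. All names here are illustrative.

def billing_service(request: dict) -> dict:
    """Stands in for a remote service that creates an invoice."""
    return {"invoice_id": 42, "amount": request["amount"]}

class SidecarProxy:
    """Sits between the caller and the upstream service, like a mesh sidecar."""

    def __init__(self, upstream):
        self.upstream = upstream
        self.calls = 0  # observability: count every request through the proxy

    def call(self, request: dict) -> dict:
        self.calls += 1
        request = dict(request, authenticated=True)  # inject authentication
        return self.upstream(request)                # route to the upstream

# Application code stays a plain call; the proxy handles the rest.
proxy = SidecarProxy(billing_service)
response = proxy.call({"amount": 100})
print(response["invoice_id"], proxy.calls)  # 42 1
```

In a real mesh the proxy is a separate process configured by the control plane, not an in-process wrapper, but the division of labor is the same.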
Service Mesh Day Highlights Progress and Gaps
On March 29, Tetrate hosted Service Mesh Day, billed as 'the first-ever technology conference related to service mesh.' Sponsors included Google Cloud, Juniper Networks, AWS, the Cloud Native Computing Foundation and the Open Networking Foundation, and sessions delved into the future of service mesh as a next-generation networking model, adoption patterns (including in brownfield applications), and production readiness of Envoy and Istio.
- Service mesh is becoming a platform in its own right. The proxies in a service mesh can be configured to automate a variety of tasks that are inherently difficult in a distributed system, including service discovery, health checking, routing, load balancing, authentication and observability. Because microservices-based applications are disaggregated and dynamic, one important function of the control plane is to safely release updates into production, and this is where circuit breakers and phased rollouts come into play. Envoy has taken off in part because of its extensibility as a universal data plane, which makes it possible to build differentiated services on top – effectively, it has evolved from a generic proxy into a platform.
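The circuit-breaker behavior mentioned above can be sketched in a few lines. The threshold and the names here are assumptions for illustration, not any specific mesh's API: after a set number of consecutive upstream failures, the proxy 'opens' and fails fast rather than sending more traffic to an unhealthy service.

```python
# Sketch of a circuit breaker as a mesh proxy implements it: after
# max_failures consecutive upstream errors, stop forwarding and fail fast.
# Illustrative only; real meshes configure this declaratively.

class CircuitBreaker:
    def __init__(self, upstream, max_failures=3):
        self.upstream = upstream
        self.max_failures = max_failures
        self.failures = 0

    def call(self, request):
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = self.upstream(request)
        except Exception:
            self.failures += 1   # track consecutive failures
            raise
        self.failures = 0        # a healthy response resets the counter
        return result

def flaky_service(request):
    raise ConnectionError("upstream unavailable")

breaker = CircuitBreaker(flaky_service, max_failures=2)
for _ in range(4):
    try:
        breaker.call({})
    except ConnectionError:
        pass          # the first failures pass through to the caller
    except RuntimeError as e:
        print(e)      # after the threshold, the breaker fails fast
```

This prints `circuit open: failing fast` twice: the first two calls reach the unhealthy upstream, and the breaker then short-circuits the remaining two.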
- It's early (and this market is still taking shape). Service meshes have become necessary because of problems created by microservices. For all their flexibility and innovation, microservices result in fragmented environments that are difficult to manage at scale, and they necessarily make networking an application-layer issue. Among the problems to be solved are how to accommodate multiple clouds, including edge and mobile clients; how to centralize authority to avoid balkanized control; and how to manage identity in applications that may have hundreds of developers working on them. Still to be determined are which elements can be offloaded to hardware (e.g., network interface controllers) and how to enable federation (i.e., interoperability among a variety of meshes).
- Service mesh is not for everyone. The enthusiasm for service mesh, combined with its lack of maturity, raises the danger of technical debt: early adopters that implement an early version of a component may have to refactor when the underlying control plane changes, or live with an outdated version. Service meshes are fundamentally complicated, and installation and scaling can be difficult. Some enterprises we spoke with are building their own control planes, in addition to testing 'opinionated' alternatives very carefully before committing. Although Service Mesh Day speakers cited use cases where Envoy and Istio were being deployed in advance of Kubernetes, and in ways that encompass VMs and containers in brownfield environments, service mesh implementations will likely remain highly varied per application, and enterprises considering the technology must be satisfied with the performance and usability of the alternative(s) they choose.
AWS App Mesh was previewed at re:Invent 2018, and is now generally available. The fully managed service mesh offering provides application-level networking, enabling customers to run and monitor microservices at scale. Services can be built and run using compute infrastructure such as Amazon EC2, AWS Fargate, Amazon Elastic Container Service and Amazon Elastic Container Service for Kubernetes. AWS App Mesh routes and monitors traffic, and provides insight and the ability to reroute traffic after failures or code changes. Previously, this required users to build monitoring and control logic directly into code and redeploy services every time there were changes. Service meshes resolve this problem. AWS App Mesh uses the open source Envoy proxy server software (data plane) developed by Lyft (and now part of CNCF), but it is not an implementation of the Istio control plane developed by Google, IBM and Lyft. AWS believes Istio is too 'opinionated' for the vast majority of customers, which are not Kubernetes-only shops and will therefore require communication across services running in different compute environments. AWS App Mesh works with AWS Cloud Map service discovery.
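The rerouting described above is typically expressed as weighted routes between service versions, changed by configuration rather than by redeploying code. The toy sketch below illustrates the idea only – the class, weights and version names are assumptions, not the App Mesh API, which is declarative.

```python
import random

# Toy weighted router in the spirit of a mesh route with two weighted
# targets (e.g., 90% of traffic to v1, 10% to a new v2). Real meshes
# express this as declarative route configuration; names are illustrative.

def service_v1(request):
    return "v1"

def service_v2(request):
    return "v2"

class WeightedRouter:
    def __init__(self, targets):
        self.targets = targets  # list of (handler, weight) pairs

    def route(self, request):
        handlers, weights = zip(*self.targets)
        # Pick an upstream in proportion to the configured weights.
        return random.choices(handlers, weights=weights)[0](request)

    def shift(self, targets):
        # Rerouting is a configuration change, not a code change + redeploy.
        self.targets = targets

router = WeightedRouter([(service_v1, 90), (service_v2, 10)])
router.shift([(service_v1, 0), (service_v2, 100)])  # e.g., after v1 fails
print(router.route({}))  # v2
```

Shifting all weight to one target models both failover after an outage and the final step of a phased rollout.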
Figure 1: Beyond Infrastructure: Cloud-native Feature Adoption Plans are Strong
Figure 2: Migration of Application Stack or Portfolio to Microservices - by Industry
In addition to producing the quarterly Cloud Price Index deliverables, Jean covers vendors and cloud providers that offer technology or services to manage or improve public and private cloud TCO, performance or consumption. She has developed a niche in new private-cloud pricing models, including pay-as-you-go and build-transfer-operate.