Benchmarking Service Mesh Performance

Service meshes add a lot of functionality to application deployments, including traffic policies, observability, and secure communication. But adding a service mesh to your environment comes at a cost, whether that’s time (added latency) or resources (CPU cycles). To make an informed decision on whether a service mesh is right for your use case, it’s important to evaluate how your application performs when deployed with a service mesh.

Earlier this year, we published a blog post on Istio’s performance improvements in version 1.1. Following the release of Istio 1.2, we want to provide guidance and tools to help you benchmark Istio’s data plane performance in a production-ready Kubernetes environment. Overall, we found that Istio’s sidecar proxy latency scales with the number of concurrent connections.

At 1000 requests per second (RPS) across 16 concurrent connections, Istio adds 3 milliseconds of latency per request at the 50th percentile and 10 milliseconds at the 99th percentile. In the Istio Tools repository, you’ll find scripts and instructions for measuring Istio’s data plane performance, along with additional instructions for running the same scripts against Linkerd, another service mesh implementation. Follow along as we detail best practices for each step of the performance test framework.
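To make the measurement concrete, here is a minimal sketch, in Python, of the kind of latency test described above: a fixed number of worker connections pace requests toward a target aggregate RPS, and the 50th and 99th percentile latencies are computed from the recorded samples. The host, port, path, and run parameters are placeholders, and this is not the load generator used by the Istio Tools scripts; it only illustrates the shape of the measurement.

# Minimal latency-benchmark sketch (illustrative only; the published numbers
# come from the dedicated benchmark scripts in the Istio Tools repository).
# Each worker thread holds one persistent HTTP connection, paces its requests
# so the aggregate rate approximates TARGET_RPS, and records per-request
# latency; p50 and p99 are reported at the end.
import http.client
import statistics
import threading
import time

TARGET_HOST = "localhost"   # placeholder: host of the service under test
TARGET_PORT = 8080          # placeholder: port of the service under test
TARGET_PATH = "/"           # placeholder: request path
CONNECTIONS = 16            # concurrent connections (one per worker thread)
TARGET_RPS = 1000           # aggregate target request rate
DURATION_S = 30             # measurement window in seconds

latencies_ms = []
lock = threading.Lock()

def worker() -> None:
    conn = http.client.HTTPConnection(TARGET_HOST, TARGET_PORT, timeout=5)
    interval = CONNECTIONS / TARGET_RPS          # per-connection send interval
    deadline = time.monotonic() + DURATION_S
    next_send = time.monotonic()
    while time.monotonic() < deadline:
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)                    # pace to the target rate
        next_send += interval
        start = time.monotonic()
        try:
            conn.request("GET", TARGET_PATH)
            conn.getresponse().read()            # drain so the connection is reused
        except (http.client.HTTPException, OSError):
            conn.close()                         # drop and re-establish on error
            conn = http.client.HTTPConnection(TARGET_HOST, TARGET_PORT, timeout=5)
            continue
        elapsed_ms = (time.monotonic() - start) * 1000.0
        with lock:
            latencies_ms.append(elapsed_ms)
    conn.close()

threads = [threading.Thread(target=worker) for _ in range(CONNECTIONS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

if latencies_ms:
    cuts = statistics.quantiles(latencies_ms, n=100)  # 99 percentile cut points
    print(f"requests completed: {len(latencies_ms)}")
    print(f"p50 latency: {cuts[49]:.1f} ms")
    print(f"p99 latency: {cuts[98]:.1f} ms")

Running a sketch like this once against the application deployed without a sidecar and once with the mesh injected gives a rough view of the added latency at each percentile; the scripts in the Istio Tools repository automate this comparison in a production-ready Kubernetes environment.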

Source: istio.io