Benchmark results of Kubernetes network plugins (CNI) over 10Gbit/s network


Kubernetes is a great orchestrator for containers, but it does not manage the network for Pod-to-Pod communication. That is the job of Container Network Interface (CNI) plugins, a standardized way to achieve network abstraction for container clustering tools (Kubernetes, Mesos, OpenShift, etc.).
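
To give an idea of what such a plugin configuration looks like, here is a minimal sketch using the reference "bridge" plugin. The file name, bridge name and subnet below are illustrative placeholders, not the configuration used in this benchmark; by default the kubelet picks up CNI configs from /etc/cni/net.d.

```
# Minimal CNI configuration for the reference "bridge" plugin.
# File name, bridge name and subnet are illustrative only.
cat <<'EOF' | sudo tee /etc/cni/net.d/10-mynet.conf
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
EOF
```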

But here is the question: what are the differences between these CNIs? Which one performs best? Which one is the leanest?

This article presents the results of a benchmark I conducted on a 10Gbit/s network. These results were also presented at the DevOps D-DAY 2018 conference in Marseille, France, on November 15, 2018. The benchmark runs on three Supermicro bare-metal servers connected through a Supermicro 10Gbit/s switch.

The servers are directly connected to the switch via passive DAC SFP+ cables and sit in the same VLAN with jumbo frames enabled (MTU 9000). Kubernetes 1.12.2 is installed on Ubuntu 18.04 LTS, running Docker 17.12 (the default Docker version on this release).
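
For reference, enabling and verifying jumbo frames on such a node could look like the sketch below. The interface name and peer address are placeholders, not the actual benchmark hosts.

```
# Raise the MTU on the 10Gbit/s interface (interface name is a placeholder).
sudo ip link set dev enp1s0f0 mtu 9000

# Verify end to end: 8972 bytes of ICMP payload + 28 bytes of ICMP/IP headers = 9000,
# with fragmentation forbidden (-M do), so the ping only succeeds if jumbo
# frames are honored along the whole path.
ping -M do -s 8972 -c 3 10.0.0.2
```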

Source: itnext.io