2020-09-20: examining the different Kubernetes network technologies
Kubernetes itself relies on network plugins to give pods
connectivity to one another and to let Service traffic be
proxied via kube-proxy.
Generally there are two types: CNI plugins, which adhere to the
appc/CNI specification and are designed for interoperability, and
the kubenet plugin, which merely implements a basic cbr0 bridge
using the bridge and host-local CNI plugins (a minimal sketch of
that combination follows).
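As a rough illustration, here is a minimal CNI conflist of the kind
kubenet effectively sets up: a bridge named cbr0 with host-local IPAM.
The network name and the 10.244.1.0/24 pod subnet are invented values
for this sketch, not anything kubenet mandates.

```json
{
  "cniVersion": "0.3.1",
  "name": "kubenet-style-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cbr0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.1.0/24",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```

The bridge plugin creates the cbr0 device and plumbs a veth pair into
each pod, while host-local hands out addresses from the node's slice
of the pod CIDR.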
These days Kubernetes cluster networking is quite diverse: the
providers detailed in the Kubernetes cluster networking documentation
offer a great many more features than were available even a few years
ago, and far more than Docker's own networking ever could.
The various network plugin types, and the more commonly used plugins, are detailed below:
- noop: the default when no kubelet network plugin is specified; it
  simply sets net/bridge/bridge-nf-call-iptables=1 so that Docker
  bridge-style networking works via iptables
- kubenet: an extremely basic, Linux-only network plugin; on its own
  it suits only single-node clusters, or clusters where a cloud
  provider sets up the inter-node routes
- cni: the container network interface; supports hostPort for exposing
  ports on the nodes, and also traffic shaping to cap bandwidth on pod
  ingress and egress (see the conflist sketch after this list)
- flannel: a CNI plugin providing a very simple layer 3 network fabric
  for Kubernetes, with a deliberately minimal feature set
- calico: a CNI plugin offering highly scalable networking and network
  policy; very feature rich, and conceptually pod-to-pod traffic is
  routed much like IP traffic on the internet; it is deployed without
  encapsulation or overlays to provide high-performance, high-scale
  datacentre networking (a NetworkPolicy example follows this list)
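To make the hostPort and traffic-shaping support concrete, here is a
sketch of a CNI conflist that chains the standard portmap and bandwidth
meta-plugins after a bridge; the network name, bridge name, and subnet
are made up for illustration.

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    },
    {
      "type": "bandwidth",
      "capabilities": { "bandwidth": true }
    }
  ]
}
```

With the bandwidth plugin in the chain, a pod can then be annotated
with kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth
(e.g. 1M) to have its traffic shaped.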
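And since network policy is calico's headline feature, here is a
minimal Kubernetes NetworkPolicy (expressed in JSON to match the
configs above) that denies all ingress to the pods in its namespace;
calico enforces objects like this, whereas flannel on its own ignores
them.

```json
{
  "apiVersion": "networking.k8s.io/v1",
  "kind": "NetworkPolicy",
  "metadata": { "name": "default-deny-ingress" },
  "spec": {
    "podSelector": {},
    "policyTypes": ["Ingress"]
  }
}
```

The empty podSelector matches every pod in the namespace, and listing
Ingress under policyTypes with no ingress rules means no inbound
traffic is allowed at all.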