Friday, September 21, 2018

Kubernetes Networking - A More In-Depth Look

I see a lot of people using Flannel and Weave-Net for their Kubernetes networking implementations.

I came across a reasonable attempt to explain the distinctions between them at this blog:
https://chrislovecnm.com/kubernetes/cni/choosing-a-cni-provider/

I think there were about ten or twelve listed there, but Flannel and Weave-Net are the two most prevalent ones.

Flannel currently has more Git activity, but Weave-Net apparently offers more robustness and features, while Flannel's strength is simplicity.

There is no shortage of good blogs out there on how these work, but one link I came across had some nice packet-flow diagrams. Those aren't easy to produce, so I will include them here for future reference (for me or anyone else who consults this blog).

Here is Part I:
https://medium.com/@ApsOps/an-illustrated-guide-to-kubernetes-networking-part-1-d1ede3322727

In Part I, the packet flow is independent of which particular Kubernetes network implementation you use; in other words, this flow is "Kubernetes-centric". It covers how pods communicate with each other on a single node, and how pods communicate with each other across nodes.

One of the main points is that every node in a Kubernetes cluster gets a routing table that is updated with the pod CIDRs assigned to the other nodes.
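To make that concrete, here is a minimal sketch of the idea, assuming a made-up three-node cluster (the node names, node IPs, and pod CIDRs below are illustrative, not from the article): each node installs a route for every other node's pod CIDR, pointing at that node's IP.

```python
# Hypothetical sketch: given each node's assigned pod CIDR, build the routes
# a node needs so traffic for another node's pods goes to that node's IP.
# All names and addresses here are made-up example values.
import ipaddress

node_pod_cidrs = {
    "node-1": ("192.168.1.10", "10.244.1.0/24"),
    "node-2": ("192.168.1.11", "10.244.2.0/24"),
    "node-3": ("192.168.1.12", "10.244.3.0/24"),
}

def routes_for(local_node: str) -> list:
    """Routes a node would install for every *other* node's pod CIDR."""
    routes = []
    for node, (node_ip, pod_cidr) in node_pod_cidrs.items():
        if node == local_node:
            continue
        ipaddress.ip_network(pod_cidr)  # sanity-check the CIDR string
        routes.append(f"{pod_cidr} via {node_ip}")
    return routes

for route in routes_for("node-1"):
    print(route)
# node-1 routes pod traffic for 10.244.2.0/24 to node-2 (192.168.1.11),
# and for 10.244.3.0/24 to node-3 (192.168.1.12).
```

The real routes would of course be installed with `ip route` (or by the network plugin itself); this just shows the mapping each node ends up holding.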

NOTE: This does not address traffic that leaves Kubernetes and comes back into Kubernetes, which is something I still need to look into.

and Part II:
https://medium.com/@ApsOps/an-illustrated-guide-to-kubernetes-networking-part-2-13fdc6c4e24c

In Part II, he shows how a Flannel overlay network "bolts on" to the networking implementation in Part I above. Flannel uses a "flannel0" interface that essentially encapsulates and tunnels packets to the respective pods. A daemon, flanneld, consults Kubernetes for the tunneling information it uses when it adds the source and destination IP addresses of the nodes that packets need to be delivered through.
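The encapsulation idea can be sketched roughly like this, assuming a made-up subnet-to-node mapping of the kind flanneld maintains (the addresses and the dict-based "packet" are illustrative only, not Flannel's actual data structures): the inner pod-to-pod packet gets wrapped in an outer packet whose source and destination are the node IPs that own those pod subnets.

```python
# Hypothetical sketch of overlay encapsulation: the pod-to-pod packet is
# wrapped in an outer node-to-node packet. The mapping and addresses below
# are made-up example values, not real flanneld state.
import ipaddress

subnet_to_node = {
    "10.244.1.0/24": "192.168.1.10",
    "10.244.2.0/24": "192.168.1.11",
}

def node_for(pod_ip: str) -> str:
    """Find which node's pod subnet contains this pod IP."""
    addr = ipaddress.ip_address(pod_ip)
    for subnet, node_ip in subnet_to_node.items():
        if addr in ipaddress.ip_network(subnet):
            return node_ip
    raise LookupError(f"no subnet lease covers {pod_ip}")

def encapsulate(inner_packet: dict) -> dict:
    """Wrap a pod-to-pod packet with outer node source/destination IPs."""
    return {
        "outer_src": node_for(inner_packet["src"]),
        "outer_dst": node_for(inner_packet["dst"]),
        "payload": inner_packet,
    }

print(encapsulate({"src": "10.244.1.5", "dst": "10.244.2.7"}))
# The pod-to-pod packet rides inside a node-to-node packet between
# 192.168.1.10 and 192.168.1.11; the receiving side unwraps it and
# delivers the inner packet to the destination pod.
```

In the real implementation the wrapping happens in the flannel0 tun device and the kernel, not in userspace Python, but the lookup-then-wrap flow is the same shape.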
