Thursday, April 18, 2019

Kubernetes Networking - Multus

It's been a while since I have posted on here. What have I been up to?

Interestingly enough, I have had to reverse-engineer a Kubernetes project. I was initially involved with it, but got pulled off, and in my absence the project had grown immensely in layers, complexity and sophistication. The chief developer on it left, so I had to work with a colleague to try and get the solution working, deployable and tested.

Right off the bat, the issue was related to Kubernetes Networking. That was easy to see.

The project uses Multus to create multi-homed pods (pods with multiple network interface adaptors).

By default, a Kubernetes pod only gets a single NIC (i.e. eth0). If you need two or more interfaces, there is a project called Multus (Intel sponsors it) that accommodates this requirement.

Multus is not a simple thing to understand. Think about it. You have Linux running on bare-metal hosts. You have KVM virtual machines running on those hosts (virtualized networking). You have Kubernetes, and its Container Networking Interface (CNI) plugins that supply a networking fabric amongst pods (Flannel, Weave, Calico, et al). And now, on top of that, you have Multus.

Multus is not a CNI itself. It does not "replace" Flannel or Weave; instead, it inserts itself between Kubernetes and Flannel or Weave, much like a proxy or broker would.

This article here has some good diagrams and exhibits that show this:

https://neuvector.com/network-security/advanced-kubernetes-networking/
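
To make the broker analogy concrete, below is a minimal sketch of what a node-level Multus CNI configuration might look like when Flannel is the delegated default plugin. The file location, kubeconfig path and the Flannel delegate block are illustrative assumptions, not our actual configuration.

/etc/cni/net.d/00-multus.conf (sketch):

{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ]
}

Because this file sorts first in /etc/cni/net.d, the kubelet hands every pod to Multus, and Multus in turn hands the default interface off to Flannel.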

[ I am skipping some important information about the Multus Daemonset here - and how that all works. But may come back to it. ]

One issue we ran into is that we had two macvlan (Layer 2) configurations.

One used a static host networking configuration (host-local IPAM):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "10.10.20.0/24",
        "rangeStart": "10.10.20.1",
        "rangeEnd": "10.10.20.254",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": ""
      }
    }'

while the other used DHCP.

{
   "cniVersion": "0.3.1",
   "name": "macvlan-conf-2",
   "type": "macvlan",
   "master": "eth1",
   "mode": "bridge",
   "ipam": {
             "type": "dhcp",
             "routes": [ { "dst": "192.168.0.0/16", "gw": "192.168.0.1" } ]
           },
   "dns": {
             "nameservers": [ "4.4.4.4", "8.8.8.8" ]
          }
}
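
For reference, a pod attaches to these extra networks through the k8s.v1.cni.cncf.io/networks annotation (assuming macvlan-conf-2 is also wrapped in a NetworkAttachmentDefinition like the first one). A minimal sketch, with a placeholder pod name and image:

apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf,macvlan-conf-2
spec:
  containers:
  - name: samplepod
    image: alpine
    command: ["/bin/sh", "-c", "sleep infinity"]

The pod still gets eth0 from the default CNI plugin; Multus adds net1 and net2 from the two macvlan definitions.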

The DHCP directive is interesting, because it will NOT work unless you have ANOTHER cni plugin called cni-dhcp deployed into Kubernetes, so that it runs on each Kubernetes node that receives containers using this configuration. This took me a WHILE to understand. I didn't even know the plugin existed.
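
Under the hood this is the standard CNI "dhcp" IPAM binary run in daemon mode on every node; a DaemonSet is one way to get it there. The sketch below is illustrative only: the image name is a placeholder, and the binary and socket paths follow the CNI plugin defaults (/opt/cni/bin/dhcp, /run/cni/dhcp.sock).

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cni-dhcp
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cni-dhcp
  template:
    metadata:
      labels:
        app: cni-dhcp
    spec:
      hostNetwork: true        # the daemon sends/receives DHCP on the node's networks
      containers:
      - name: dhcp-daemon
        image: registry.example.com/cni-plugins:latest   # placeholder image containing the CNI binaries
        command: ["/opt/cni/bin/dhcp", "daemon"]
        securityContext:
          privileged: true     # needs to enter pod network namespaces
        volumeMounts:
        - name: cni-run
          mountPath: /run/cni  # dhcp.sock lives here; the dhcp IPAM plugin talks to it
        - name: netns
          mountPath: /run/netns
          mountPropagation: HostToContainer
      volumes:
      - name: cni-run
        hostPath:
          path: /run/cni
      - name: netns
        hostPath:
          path: /run/netns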

We were running into an issue where the DHCP Multus pods (those that used this macvlan-conf-2) were stuck in an Initializing state. After considerable debugging, I figured out the issue was with DHCP.

Once I realized the plugin existed, I knew the issue had to be either with the plugin (which requests leases) or with the upstream DHCP server (which responds). In the end, it turned out that the upstream DHCP server was returning routes that the dhcp plugin could not handle. By removing these routes, and letting the upstream DHCP server worry only about IP assignment, the pods came up successfully.
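
For illustration only (our actual DHCP server and subnet may differ), with dnsmasq that fix amounts to handing out leases without advertising any route options:

# Hypothetical dnsmasq sketch with a placeholder subnet: addresses and lease time only.
interface=eth1
dhcp-range=192.168.10.100,192.168.10.200,12h
# An empty option 3 suppresses the default-router option, and option 121
# (classless static routes) is simply never configured, so the CNI dhcp
# plugin receives no routes at all.
dhcp-option=3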
