
Thursday, April 18, 2019

Kubernetes Networking - Multus

It's been a while since I have posted on here. What have I been up to?

Interestingly enough, I have had to reverse-engineer a Kubernetes project. I was initially involved with it, but got pulled off, and in my absence the project had grown immensely in its layers, complexity, and sophistication. The chief developer on it left, so I had to work with a colleague to try to get the solution working, deployable, and tested.

Right off the bat, the issue was related to Kubernetes Networking. That was easy to see.

The project uses Multus to create multi-homed pods (pods with multiple network interface adaptors).

By default, a Kubernetes pod only gets a single NIC (i.e. eth0). If you need two interfaces or more, there is a project called Multus (sponsored by Intel) that accommodates this requirement.

Multus is not a simple thing to understand. Think about it. You have Linux running on baremetal hosts. You have KVM virtual machines running on those hosts (virtualized networking). You have Kubernetes, and its Container Networking Interface (CNI) plugins that supply a networking fabric amongst pods (Flannel, Weave, Calico, et al). And now, on top of that, you have Multus.

Multus is not a CNI plugin in the usual sense. It does not "replace" Flannel or Weave; instead, it inserts itself between Kubernetes and Flannel or Weave, much like a proxy or a broker would.
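One way to picture this: on each node, Multus itself is registered as the CNI plugin, and its configuration lists the "real" plugin as a delegate. The snippet below is a rough, hand-written illustration of such a file (the delegate here is Flannel; the name, kubeconfig path and field values are assumptions, not taken from our deployment):

{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "type": "flannel",
      "delegate": { "isDefaultGateway": true }
    }
  ]
}

Traffic for the default pod network still flows through the delegate (Flannel in this sketch); Multus only steps in to wire up any additional interfaces a pod requests.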

This article here has some good diagrams and exhibits that show this:

https://neuvector.com/network-security/advanced-kubernetes-networking/

[ I am skipping some important information about the Multus Daemonset here - and how that all works. But may come back to it. ]

One issue we ran into is that we had two macvlan (Layer 2) configurations.

One used static, host-local IP address assignment:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "10.10.20.0/24",
        "rangeStart": "10.10.20.1",
        "rangeEnd": "10.10.20.254",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": ""
      }
    }'

while the other used DHCP.

{
   "cniVersion": "0.3.1",
   "name": "macvlan-conf-2",
   "type": "macvlan",
   "master": "eth1",
   "mode": "bridge",
   "ipam": {
             "type": "dhcp",
             "routes": [ { "dst": "192.168.0.0/16", "gw": "192.168.0.1" } ]
           },
   "dns": {
             "nameservers": [ "4.4.4.4", "8.8.8.8" ]
          }
}

The DHCP directive is interesting, because it will NOT work unless you have ANOTHER CNI plugin, called cni-dhcp, deployed into Kubernetes so that it runs on each Kubernetes node that is receiving containers that use this configuration. This took me a WHILE to understand. I didn't even know the plugin existed.
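Under the hood, the "type": "dhcp" IPAM setting does not send DHCP requests itself; it talks over a local unix socket to a DHCP daemon, and that daemon is what actually acquires and renews leases on behalf of the pods. Roughly, something like the following has to be running on every such node (this is a sketch assuming the standard containernetworking plugins are installed under /opt/cni/bin; in a cluster it would normally be wrapped in a systemd unit or a DaemonSet rather than run by hand):

# rm -f /run/cni/dhcp.sock
# /opt/cni/bin/dhcp daemon

The first command clears any stale socket left over from a previous run; the second starts the daemon, which listens on /run/cni/dhcp.sock and services the lease requests coming from the dhcp IPAM plugin.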

We were running into an issue where the DHCP Multus pods (those that used this macvlan-conf-2) were stuck in an Initializing state. After considerable debugging, I figured out the issue was with DHCP.

Once I realized the plugin existed, I knew the issue had to be either with the plugin (which requests leases) or with the upstream DHCP server (which responds). In the end, it turned out that the upstream DHCP server was returning routes that the dhcp plugin could not handle. By removing these routes, and letting the upstream DHCP server worry only about IP assignment, the pods came up successfully.
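For completeness, this is roughly how a pod asks for those extra interfaces. A minimal sketch, with a placeholder pod name and image, assuming the two NetworkAttachmentDefinitions above exist in the pod's namespace:

apiVersion: v1
kind: Pod
metadata:
  name: multus-example            # placeholder name
  annotations:
    # Multus attaches one extra interface per entry listed here,
    # in addition to the default cluster network interface (eth0)
    k8s.v1.cni.cncf.io/networks: macvlan-conf, macvlan-conf-2
spec:
  containers:
  - name: example
    image: centos                 # placeholder image
    command: ["sleep", "infinity"]

Inside such a pod you would see eth0 from the default CNI plugin, plus net1 and net2 created from the two macvlan configurations.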

Friday, November 9, 2018

There are other container platforms besides Docker? Like LXC?


I'm relatively new to container technology. I didn't even realize there were alternatives to Docker (although I hadn't really thought about it).

A colleague of mine knew this, though, and sent me this interesting link.

https://robin.io/blog/linux-containers-comparison-lxc-docker/

This link is a discussion about a more powerful container platform called LXC, which could be used as an alternative to Docker.

I'm still in the process of learning about it. Will update the blog later.

Friday, September 21, 2018

Docker: Understanding Docker Images and Containers


So this week has been focused on understanding how Docker works.

First, I learned about Docker images - how to create images, etc.

There is a pretty good blog that can be used to get going on this topic, which can be found here:
https://osric.com/chris/accidental-developer/2017/08/running-centos-in-a-docker-container/

Docker images are actually created in layers. So you generally start off by pulling a Docker image for CentOS, and then running it.

This is done as follows:
# docker pull centos
# docker run centos
# docker image ls

Note: If you run "docker container ls" right after the pull, you won't see anything, because no container has been started from the image yet. Once you run the image, a container is created, and THEN "docker container ls" will show it.
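A quick way to see the difference:

# docker image ls
# docker container ls
# docker container ls -a

"docker image ls" shows the centos image you pulled; "docker container ls" stays empty until a container has been started from that image; and "docker container ls -a" also lists containers that have already exited.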

# docker run -it centos

Once you run the image, you are now "in" the container, and you get a new prompt whose hostname is the container ID, as shown below:

[root@4f0b435cbdb6 /]#

Now you can make changes to this container as you see fit: yum install packages, or copy things into the running container by using the "docker cp" command.

Once you get a container the way you want it, you can exit that container, and then use its container ID (don't lose it!) to "commit" it as a new image.

Once committed, you need to push it to a registry.
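A rough sketch of that commit-and-tag step, reusing the container ID from the prompt above (the local image name "centos-revised" is just a placeholder; the registry host, port, image name and tag match the push example further down):

# docker commit 4f0b435cbdb6 centos-revised
# docker tag centos-revised kubernetes-master:5000/centos-revised-k8s:10

After the tag, the image is addressable by its registry-qualified name and is ready to be pushed.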

Registries are another topic. If you use Docker Hub (https://hub.docker.com), you need to create an account, and you can create a public repository or a private repository. If you use a private one, you need to authenticate to use it. If you use a public one, anyone can see, take or use whatever you upload.

JFrog Artifactory is another artifact repository that can be used.

Ultimately, what we wound up doing is creating a new local registry in a container by using the following command:
# docker run -d -p 5000:5000 --restart=always --name registry registry:2

Then, you can push your newly saved container images to this registry by using a command such as:
# docker push kubernetes-master:5000/centos-revised-k8s:10

So essentially we did the push to a specific host (kubernetes-master), on port 5000, and gave it the image name and a new tag.


Friday, September 14, 2018

Jumping into Kubernetes without understanding Docker and Containers


Clearly, it's best to understand Containers and Docker before you jump right into the deep water of Kubernetes.

I went in and started following cookbooks and how-to guides on Kubernetes without "stepping back" and trying to learn Docker first, and this has bitten me and caused me to now go back and learn a bit about Docker.

As it turns out, the containers I was launching with Kubernetes kept failing with a CrashLoopBackOff error.

It took me a while to learn how to debug this effectively in Kubernetes. I learned how to use kubectl to look at the logs, show the container information and events, and so forth (a few of those commands are sketched below). I eventually came to realize that the container we were pulling from jFrog was running a Python script that was failing because someone had hard-coded IP addresses into it that weren't in use.
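For reference, these are roughly the kubectl commands involved (the pod and namespace names are placeholders):

# kubectl get pods -n my-namespace
# kubectl describe pod my-pod -n my-namespace
# kubectl logs my-pod -n my-namespace --previous

"get pods" shows the status and restart counts, "describe pod" shows the events (image pulls, crashes, probe failures), and "logs --previous" shows the output from the run that just crashed, which is usually where the real error is.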

I decided to build a new container from scratch to fix this problem. It was while poking around in jFrog that I learned Docker containers are built in "layers".

Doing all of this with no Docker knowledge means that I am essentially going ground up with Docker 101.

Thankfully, we have this new guy we hired who is teaching me Docker (and some Kubernetes), which is cool. It's a good two-way exchange: I teach him about Networking, SD-WAN, and OpenStack/OpenBaton, and he teaches me about Docker and Kubernetes.
