Wednesday, October 31, 2018
Data Plane Development Kit (DPDK)
I kept noticing that a lot of the carrier OEMs are implementing their "own" Virtual Switches.
I wasn't really sure why, and decided to look into the matter. After all, there is the fast-performing OpenVSwitch, which, while fairly complex, is powerful, flexible, and, well, open source.
Come to learn, there is actually a faster way to do networking than with native OpenVSwitch.
OpenVSwitch minimizes the context switching between user space and kernel space that comes with taking packets from a physical port and forwarding them to virtualized network functions (VNFs) and back.
But DPDK provides a means to circumvent the kernel altogether, with practically everything running in user space and interacting directly with the hardware.
This is fast indeed, if you can do it. But it bypasses the kernel network stack entirely, so there has to be some sacrifice (which I need to look into and understand better). One of the ways it bypasses the kernel, based on some limited reading, is Direct Memory Access (DMA) straight into user-space buffers. Frankly, reading, digesting, and understanding this material usually takes several passes and a bit of concentration, because this stuff gets very complex very fast.
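To make the kernel bypass concrete, here is a minimal sketch using the standard DPDK tooling (not any vendor's vSwitch in particular); the PCI address and core counts are just placeholder examples:

# Reserve hugepages, which DPDK uses for its user-space packet buffers
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Show which NICs are bound to kernel drivers vs. DPDK-compatible drivers
dpdk-devbind.py --status

# Unbind a NIC from its kernel driver and hand it to vfio-pci for user-space access
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0

# Run testpmd, DPDK's sample forwarding application, which polls the NIC directly from user space
testpmd -l 0-3 -n 4 -- -i

One side effect worth noting: once a port is bound to vfio-pci, the kernel no longer has a network device for it at all, which I suspect is part of the answer to the question below.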
The other question I have: if DPDK is bypassing the kernel en route to a physical NIC, what about other kernel-based networking services that are using that same NIC? How does that work?
I've got questions. More questions.
But up to now, I was unaware of this DPDK and its role in the new generation of virtual switches coming out. Even OpenVSwitch itself has a DPDK version.
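For reference, wiring up the DPDK build of OpenVSwitch looks roughly like this; the exact syntax varies by OVS version (older builds named ports dpdk0 instead of using dpdk-devargs), and the bridge, port, and PCI address here are placeholders:

# Tell ovs-vswitchd to initialize its DPDK support
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# Create a userspace-datapath bridge and attach a DPDK-backed physical port to it
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:03:00.0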
Sunday, October 28, 2018
Service Chaining and Service Function Forwarding
I had read about the concepts of service chaining and service function forwarding early on, in an SD-WAN / NFV book that was ahead of its time. I hadn't actually SEEN this, or implemented it, until just recently on my latest project.
Now, we have two "Cloud" initiatives going on at the moment, plus one that's been in play for a while.
- Ansible - chosen over Puppet and Chef in a research initiative, this technology is essentially used to automate the deployment and configuration of VMs (libvirt KVMs, to be accurate).
- But there is no service chaining or service function forwarding in this.
- OpenStack / OpenBaton - this is a project to implement Service Orchestration - using ETSI MANO descriptors to "describe" Network Functions, Services, etc.
- But we only implemented a single VNF, and did not chain them together with chaining rules, or forwarding rules.
- Kubernetes - this is a current project to deploy technology into containers. And while there are dependencies between the containers, including scaling and autoscaling, I would not say that we have implemented Service Chaining or Service Function Forwarding the way they were conceptualized academically and in standards.
The latest project I was involved with DID make use of Service Chaining and Service Function Forwarding. We had to deploy a VNF onto a Ciena 3906mvi device, which had a built-in Network Virtualization module that ran on a Linux operating system. This ran "on top" of an underlying Linux operating system that dealt with the more physical aspects of the box (fiber ports, ethernet ports both 1G and 100G, et al).
It's my understanding that the terms Service Chaining and Service Function Forwarding have their roots in the YANG reference model. https://en.wikipedia.org/wiki/YANG
This link has a short primer on YANG.
YANG models the configuration and state data that is manipulated by a base set of network operations spelled out in a standard called NETCONF (feel free to research this - it and YANG are both topics in and of themselves).
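As a quick aside, NETCONF typically runs as an SSH subsystem on port 830, so on a device that supports it you can open a session with nothing more than an SSH client (the address and user below are placeholders); the device answers with a <hello> message listing its capabilities, including the YANG models it supports:

# Open a NETCONF session over the standard SSH subsystem (port 830)
ssh -p 830 -s admin@192.0.2.1 netconf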
In summary, it was rather straightforward to deploy the VNF. You had to know how to do it on this particular box, but it was rather straightforward. What was NOT straightforward, was figuring out how you wanted your traffic to flow, and configuring the Service Chaining and Service Function Forwarding rules.
What really hit home to me is that the Virtual Switch (fabric) is the epicenter of the technology. Without knowing how these switches are configured and inter-operate, you can't do squat - automated, manual, with descriptors, or not. And this includes troubleshooting them.
Now, Ciena's virtual switch on this box was proprietary. So you were configuring Flooding Domains, Ports, Logical Ports, Traffic Classifiers, VLANs, etc. This is the only way to make sure your traffic hop-scotches around the box the way you want it to, based on rules you specify.
Here is another link on Service Chaining and Service Function Forwarding that's worth a read.
Monday, October 15, 2018
Kubernetes Part VI - Helm Package Manager
This past week, a colleague introduced me to something called Helm, which is to Kubernetes roughly what "pip" is to Python. It manages Kubernetes packages (it is a Kubernetes Package Manager).
The reason this was introduced:
We found SEVERAL GitHub repos with Prometheus metrics in them, and they were not at all consistent.
- Kubernetes had one
- There was another one at a stefanprod project
- There was yet a third called "incubator"
This meant I had to understand what Helm is. Helm is divided into a client (Helm) and a server (Tiller), and you install packages (Charts). I guess it's a maritime-themed naming scheme, although I don't know why they can't just call a package a package (copyright reasons, maybe?).
So I installed Helm, and that went smoothly enough. I installed it on my Kubernetes master. I also downloaded a bunch of Charts from the stable repository on GitHub (Prometheus is one of these). They all sit in a stable directory after the git clone (./charts/stable).
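For anyone retracing this, the setup amounted to roughly the following (Helm 2 era, where the client bootstraps its Tiller component into the cluster; the clone path matches my own checkout):

# Initialize Helm and install its server-side component, Tiller, into the cluster (Helm 2 behavior)
helm init

# Clone the community chart repository; the Prometheus chart lives under stable/
git clone https://github.com/helm/charts.git
ls ./charts/stable | grep -i prometheus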
When I came back in and wanted to pick back up, I wasn't sure whether I had installed Prometheus or not. So I ran "helm list", and got the following error:
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
Yikes. For a newbie, this looked scary. Fortunately Google had a fix for this on a StackOverflow page.
I had to run these commands:
# Create a dedicated service account for Tiller in the kube-system namespace
kubectl create serviceaccount --namespace kube-system tiller
# Grant that service account cluster-admin rights via a ClusterRoleBinding
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# Patch the existing tiller-deploy Deployment so its pods run under the new service account
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
# Re-run helm init so the client and Tiller agree on the service account
helm init --service-account tiller --upgrade
These seemed to work. The "helm list" command showed no results, though, so I guess I still need to install the Prometheus package (sorry... chart) with Helm after all. But more importantly, I really need to take some time to understand what we just ran above - the service account, the cluster role binding, et al.
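For the record, that install would look something like this under Helm 2 (the release name and namespace are just my own choices, not anything required; helm init configures the stable repo by default):

# Install the Prometheus chart from the stable repo into its own namespace
helm install stable/prometheus --name prometheus --namespace monitoring

# Confirm the release now shows up
helm list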
Thursday, October 11, 2018
What is a Flooding Domain?
I have been working on configuring this Ciena 3906MVI premise router, with a Virtualized Network Function (VNF), and connecting that VNF back to some physical network ports.
This is a rather complex piece of hardware (under the hood).
I noticed that in some commands, they were creating these Flooding Domains. And I didn't know what those were (there were sub-types called VPWS and VPLS, and I need to look into those as well).
These Flooding Domains are then associated with "classifiers", like "Ingress Classifiers".
I didn't truly know what a Flooding Domain was. Not a lot on the web if you search those two words together. There's plenty of stuff on the concept of Flooding, however.
I found a link where someone asked what the difference between Flooding and Broadcasting is, and it is in this link where I found the best clues to a proper understanding. So I will recap it here:
https://networkengineering.stackexchange.com/questions/36662/what-is-the-difference-between-broadcasting-and-flooding
What is the Difference between Broadcasting and Flooding?
Broadcasting is a term that is used on a broadcast domain, which is bounded by layer-3 (routers). Broadcasts are sent to a special broadcast address, both for layer-2 and layer-3. A broadcast cannot cross a layer-3 device, and every host in a broadcast domain must be interrupted and inspect a broadcast.
Flooding is used by a switch at layer-2 to send unknown unicast frames to all other interfaces. If a frame is not destined for a host which receives it, the host will ignore it and not be interrupted. This, too, is limited to a broadcast domain.
Flooding in OSPF (layer-3) means that the routes get delivered to every OSPF router in an area. It really has nothing to do with a broadcast. OSPF doesn't use broadcasts to send routes, it uses unicast or multicast to connect with its neighbors. Each OSPF router needs to have a full understanding of all the routers and routes in its area, and it tells all its neighbors about all its local routes, and any routes it hears about from other neighbors. (OSPF routers are unrepentant gossips.)
So, a Flooding Domain is essentially a "domain of packet delivery" - the scope within which a frame gets delivered when the point where it comes in (ingress) is not the point where it exits (egress). That's my best definition.
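The Ciena configuration itself is proprietary, but the underlying behavior is the same unknown-unicast flooding that any layer-2 switch does. As an analogy only (stock iproute2 commands, with made-up interface names), you can watch it on a plain Linux bridge:

# Create a bridge and attach two ports to it (eth1/eth2 are placeholders)
ip link add br0 type bridge
ip link set eth1 master br0
ip link set eth2 master br0
ip link set br0 up

# The forwarding database (FDB) maps learned MAC addresses to ports;
# frames whose destination MAC is not in the table get flooded out all other ports
bridge fdb show br br0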