Friday, September 27, 2019

Vector Packet Processing - Part IV - Testing and Verification

As I work through the documentation on fd.io, it covers Installation, and then there is a section called "Running VPP".

The first error I encountered in this documentation had to do with running the VPP Shell. The documentation said to run the following command: "sudo vppctl -s run/vpp/cli-vpp.sock"

On a CentOS7 installation, the socket file is actually called "cli.sock", not "cli-vpp.sock". After correcting this, I do indeed get a CLI shell, which I will show further down.
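For the record, the corrected command (using the absolute socket path) is:

$ sudo vppctl -s /run/vpp/cli.sock
vpp#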

So there is a CLI to master here, and anyone aiming to be any kind of VPP guru will need to learn it. It does look like a more or less "standardized" CLI, with familiar verbs like "show" and "set". I ran the "help" command to get a dump of commands, which showed a hefty number of sub-commands to potentially learn.

I decided to run a fairly simple "show interface" command to see what it would produce. Here is the result.

"show interface" results from VPP shell CLI - all interfaces down
So the CLI sees 4 Gigabit Ethernet interfaces, all in a state of "down". 

This server has two dual-port NIC cards, so it makes sense to me that there would be two interfaces found on GigabitEthernet1. Why there is only a single interface found on GigabitEthernet3 I need to look into (it seems there should be two of those as well). The local0 interface, I presume, is a NIC that is on the motherboard (I could see people confusing local0 with a loopback).
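To illustrate the naming (a sketch, not a verbatim capture): VPP names these interfaces after the PCI bus/slot/function, so a dual-port card shows up as two interfaces sharing a bus prefix, something like:

vpp# show interface
              Name               Idx    State          Counter     Count
GigabitEthernet1/0/0              1     down
GigabitEthernet1/0/1              2     down
GigabitEthernet3/0/0              3     down
local0                            0     down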

If you proceed with the fd.io documentation, it actually instructs you to set up a veth pair - not the actual physical NICs on the box - create interfaces that way, enable them, and then do some tracing. It probably makes sense to do that before trying to bring these Gigabit Ethernet NICs up and test them. Why? Well, for one reason, you could knock out your connectivity to the server, which would be bad. So let's leave our physical NICs alone for the time being.

So as a next step, we will run the veth steps and the tracing steps from the fd.io website.
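From my read of those docs, the steps look roughly like this (the interface names and addresses are the documentation's examples; treat this as a sketch, not gospel):

# On the host: create a veth pair and bring both ends up
$ sudo ip link add name vpp1out type veth peer name vpp1host
$ sudo ip link set dev vpp1out up
$ sudo ip link set dev vpp1host up
$ sudo ip addr add 10.10.1.1/24 dev vpp1host

# In the VPP CLI: attach the vpp1out end and give it an address
vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 10.10.1.2/24

# Enable tracing, ping 10.10.1.2 from the host, then inspect
vpp# trace add af-packet-input 10
vpp# show trace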

Then, after that, I noticed there is a VPP test framework on GitHub.

https://github.com/FDio/vpp/tree/master/test

It is written in Python and driven by Makefile targets, so hopefully these tests are easy to run.
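If it behaves the way the repo advertises, running it is as simple as the following (the single test module name here is illustrative):

$ git clone https://github.com/FDio/vpp
$ cd vpp
$ make test                  # run the whole functional suite
$ make test TEST=test_l2bd   # run a single test module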

Vector Packet Processing - Part III - Ensuring a Supported PCI Device

Okay - today I took a quick look into why the "Unsupported PCI Device" errors were popping up when I started the VPP service.

It turns out that the Realtek network adaptors on that server are, in fact, not supported! Duh. This has nothing to do with VPP itself. It has to do with the underlying Data Plane Development Kit, which VPP sits on top of as a layer (in other words, VPP uses the DPDK libraries).

The DPDK site lists the adaptors that are supported on a page entitled "Supported Hardware":
http://core.dpdk.org/supported/

Sure enough, no long-in-the-tooth Realtek NICs are listed there.
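A quick way to see what you actually have, before cross-referencing that page, is to pull the PCI vendor/device IDs (output abridged and from memory; yours will differ):

$ lspci -nn | grep -i ethernet
02:00.0 Ethernet controller [0200]: Realtek ... RTL8111/8168/8411 ... [10ec:8168]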

So what would you do (on that server) to test and experiment with VPP?

  1. Well, you could swap out the adaptors. If you do that, you had better think about any static IP assignments based on MAC address, because all of your MACs will change (see the sketch just after this list).
  2. You could use a virtual adaptor that is supported.
  3. Or, you could simply find another server. Which I did. And this server is using Intel adaptors that ARE supported.
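On that MAC point: an ISC dhcpd reservation, for example, is keyed to the hardware address, so a hypothetical entry like this one silently stops matching after a NIC swap:

host lab-server-1 {
  hardware ethernet 00:11:22:33:44:55;  # the old NIC's MAC - stale after a swap
  fixed-address 192.168.1.50;
}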

VPP Startup with Supported Adaptors
Next, I ran the "vppctl list plugins" command, which dumped out a ton of .so (shared object) files. 
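A few of the entries, just to give the flavor (abridged and from memory, so the exact format and plugin set will vary by release):

$ sudo vppctl -s /run/vpp/cli.sock list plugins
 Plugin path is: /usr/lib/vpp_plugins
 ...
 acl_plugin.so
 dpdk_plugin.so
 nat_plugin.so
 ...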

These files are shared libraries, essentially. Rather than linking the code into every binary (making each one larger), a shared object lets multiple binaries use the same code: each process gets its own local data segment but shares the code segment, as I understand it.
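It is the same mechanism as any dynamically linked binary; compare with the familiar (addresses truncated):

$ ldd /bin/ls
        linux-vdso.so.1 =>  (0x00007ffc...)
        libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f...)
        libc.so.6 => /lib64/libc.so.6 (0x00007f...)
        ...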

So - it looks like we have a working VPP service on this server. Yay. What next? Well, here are a couple of possibilities:

1. OpenStack has a Neutron VPP driver. That could be interesting to look into, to see what it's all about and how well it works.

2. Maybe there are some ways of using or testing VPP in a standalone way. For example, some test clients. 

I think I will look into number 2 first. At this point, I am only interested in functional testing. I am not doing any kind of performance bake-offs; I'm not even sure I have the environment and tools for that right now. We're just learning here.
  

Tuesday, September 24, 2019

Vector Packet Processing - Part II - Installing VPP

As a wrap-up to my day, I decided to take one of my CentOS7 servers, and install vpp on it.

I followed the cookbook found at this link:
https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages#RPMs

This link doesn't tell you how to set up the vpp repository, which is necessary before you can install any of the vpp packages (a yum groupinstall would have been nice for this, actually).

But the link for the repository is here:
https://my-vpp-docs.readthedocs.io/en/latest/gettingstarted/users/installing/centos.html

For convenience, I have included the snippet below.
$ cat /etc/yum.repos.d/fdio-release.repo
[fdio-release]
name=fd.io release branch latest merge
baseurl=https://nexus.fd.io/content/repositories/fd.io.centos7/
enabled=1
gpgcheck=0
This didn't take long to do at all. No problem installing packages, no problem starting up the vpp service.
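For the record, once the repo file is in place, the install and startup boil down to a couple of commands (package names as I recall them from the fd.io docs; vpp-plugins may not be strictly required):

$ sudo yum install vpp vpp-plugins
$ sudo systemctl start vpp
$ sudo systemctl status vpp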
But, it looks to me like old hardware and old network cards don't support vpp. So more work to do.
Unsupported PCI Device Errors on vpp service startup

Vector Packet Processing - Part I

Yesterday, I was reading up on something called Vector Packet Processing (VPP). I had not heard of this, nor the organization called Fd.io (pronounced Fido), which can be found at the following link: http://fd.io

Chasing links to get more up to speed, I found this article, which provides a very good introduction to these newer networking technologies, which have emerged to support virtualization, due to the overhead (and redundancy) associated with forwarding packets from NICs, to virtualization hosts, and on into the virtual machines.

https://software.intel.com/en-us/articles/an-overview-of-advanced-server-based-networking-technologies

I like how the article progresses from the old-style interrupt processing, to OpenVSwitch (OVS), to SR-IOV, to DPDK, and then, finally, to VPP.

I am familiar with OpenVSwitch, which I came into contact with through OpenStack, which had OpenVSwitch drivers (and required you to install OpenVSwitch on the controller and compute nodes).

I was only familiar with SR-IOV because I had stumbled upon it and taken the time to read up on what it was. I think it was a virtual Palo Alto firewall that had SR-IOV NIC types, if I'm not mistaken. I spent some time trying to figure out whether the servers I am running support SR-IOV; they don't seem to have it enabled, that's for sure. Whether they support it at all would take more research.
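For what it's worth, a couple of quick checks (the sysfs path assumes the driver exposes SR-IOV, and "em1" is just an illustrative device name):

$ sudo lspci -vvv | grep -i "single root"        # look for the SR-IOV PCI capability
$ cat /sys/class/net/em1/device/sriov_totalvfs   # max virtual functions, if supported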

And DPDK I had read up on, because a lot of hardware vendors were shipping fast-path data switches that used DPDK for their own in-house virtual switches, or were using the DPDK-accelerated OpenVSwitch implementation.

But Vector Packet Processing (VPP) somehow missed me. So I have been doing some catch-up on VPP - too large a topic to go into detail on, or share additional resources for, in this post. But the link above to Fido is essentially touting VPP.

UPDATE:
I found this link, which is also spectacularly written:
https://www.metaswitch.com/blog/accelerating-the-nfv-data-plane

And, same blog with another link for those wanting the deep dive into VPP:
https://www.metaswitch.com/blog/fd.io-takes-over-vpp

Thursday, September 12, 2019

Graphical Network Simulator-3 (GNS3) - Part II Installation on a Linux Server

Okay, for Part II of GNS3, I came in today looking to install GNS3 on a Linux server.

I noticed that GNS3 is designed to run on Ubuntu Linux, and as I tend to run in a CentOS7 shop, I am now faced with the hump of either putting an Ubuntu server in here or trying to get this to run on CentOS7. It should run on CentOS7, right? After all, this is a Linux world, right? 😏

I decided to take one of my 32GB RAM servers, an HP box that runs CentOS7, and follow a cookbook for installing GNS3 on it.

I followed this link:
https://gns3.com/discussions/how-to-install-gns3-on-centos-7-

I chose this box because it runs X Windows. It didn't have Python 3.6 on it, or pip3.6, which is used for installing and managing Python 3.6 packages.

A lot of steps in this thing.
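The gist of it, as I understand the cookbook (package and PyPI names from memory, so treat this as approximate):

$ sudo yum install epel-release
$ sudo yum install python36 python36-devel python36-pip gcc
$ sudo pip3.6 install gns3-server gns3-gui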

Some questions I have about this cookbook that I need to look into:

1. Why does the cookbook use VirtualBox on Linux? I have KVM installed; surely I can use that instead of VirtualBox. I only use VirtualBox on my Win10 laptop. So I have, for now, skipped that section.

2. What is IOU support? I will need to google that.

UPDATE: IOU (also called IOL, which stands for IOS On Linux) is basically an IOS simulator that can run on an i386 chipset. You would need and want that if you run any Cisco elements on the GNS3 simulator.

Friday, September 6, 2019

Graphical Network Simulator-3 (GNS3) - Part I Initial Introduction on a Laptop

Someone told me about this network modeling and simulation tool called Graphical Network Simulator-3. There is a Wikipedia page on this tool, which can be found here:

https://en.wikipedia.org/wiki/Graphical_Network_Simulator-3

Fascinating tool. It allows you to drag and drop network elements onto a canvas - but unlike the old tools, this tool can actually RUN the elements! To do this, you need to import image files for the elements you drag and drop onto the canvas. Below is an example of how dragging a simulated internet cloud onto the canvas prompts for an image to run on a virtual machine.

Image Files Required for Network Elements in GNS3

Then, once you have the elements properly situated on the canvas, you can use a connector to interconnect them (it will prompt you for the NIC interface), and once your interconnection points are established, you can click a "run" button.

If all goes well, everything turns green and packets start to flow. There is a built-in packet trace on each link line, which will dump packets to a file if you choose to do a packet capture.
