
Tuesday, December 3, 2019

Virtualized Networking Acceleration Technologies - Part II


In Part I of this series of posts, I recapped my research on these virtualized networking technologies, with the aim of building an understanding of:

  • what they are
  • the history and evolution between them
What I did not cover was a couple of further questions:
  1. When to Use Them
  2. Can you Combine Them?
The link below does a fantastic job of discussing item number one. Now, I can't tell how "right" or "accurate" the author is, and I typically look down in the comments for rebuttals and refutations (I didn't see any, and most commenters seemed relatively uninformed on this topic).

He concludes that for East-West (intra-data-center) traffic, DPDK wins, and for North-South traffic, SR-IOV wins.
https://www.telcocloudbridge.com/blog/dpdk-vs-sr-iov-for-nfv-why-a-wrong-decision-can-impact-performance/

Friday, September 27, 2019

Vector Packet Processing - Part IV - Testing and Verification

As I work through the documentation on fd.io, it discusses Installation, and then there is a section called "Running VPP".

The first error I encountered in this documentation had to do with running the VPP Shell. The documentation said to run the following command: "sudo vppctl -s run/vpp/cli-vpp.sock"

On a CentOS7 installation, the socket file is actually called "cli.sock", not "cli-vpp.sock". After correcting this, I do indeed get a CLI shell, which I will show further down.
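So the working command on my box is the same one, just with the corrected socket name:

sudo vppctl -s run/vpp/cli.sock

(And if the vpp service is running normally, a plain "sudo vppctl" should find the default socket, /run/vpp/cli.sock, on its own.)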

So there is a CLI to master here, and to be any kind of guru, one will need to learn it. It does look like a more or less "standardized" CLI, with commands that include the familiar "show", "set", etc. I ran the "help" command to get a dump of commands, which showed a hefty number of sub-commands to potentially learn.

I decided to run a fairly simple "show interface" command, to see what it would produce. Here is the result:

"show interface" results from VPP shell CLI - all interfaces down
So the CLI sees 4 Gigabit Ethernet interfaces, all in a state of "down". 

This server has two dual-port NIC cards, so it makes sense to me that there would be two interfaces found on GigabitEthernet1. Why there is only a single interface found on GigabitEthernet3, I need to look into (it seems there should also be two of these). The local0 interface appears to be VPP's own built-in local interface rather than a physical NIC (I could see people confusing local0 with a loopback).

If you proceed with the fd.io documentation, it actually instructs you to set up a veth pair - not the actual physical NICs on the box - create interfaces that way, enable them, and then do some tracing. It probably makes sense to do that before trying to bring these Gigabit Ethernet NICs up and testing those. Why? Well, for one reason, you could knock out your connectivity to the server, which would be bad. So let's leave our physical NICs alone for the time being.

So as a next step, we will run the veth steps and the tracing steps from the fd.io website.
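For a preview, here is roughly what those steps boil down to, condensed from the fd.io tutorial - the interface names (vpp1out, vpp1host) and addresses are just the tutorial's illustrative choices:

# create a veth pair on the Linux side, and bring both ends up
sudo ip link add name vpp1out type veth peer name vpp1host
sudo ip link set dev vpp1out up
sudo ip link set dev vpp1host up
sudo ip addr add 10.10.1.1/24 dev vpp1host

# hand one end to VPP as a host-interface, and give it an address
sudo vppctl create host-interface name vpp1out
sudo vppctl set int state host-vpp1out up
sudo vppctl set int ip address host-vpp1out 10.10.1.2/24

# turn on tracing for the af-packet input node, generate traffic, inspect
sudo vppctl trace add af-packet-input 10
ping -c 1 10.10.1.2
sudo vppctl show trace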

After that, I noticed there is a VPP testing suite on GitHub.

https://github.com/FDio/vpp/tree/master/test

It is written in Python and driven by the repo's Makefile, so hopefully these tests are easy to run.
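I haven't tried it yet, but going by the README in that directory, running the suite looks something like this (assuming a cloned repo with build dependencies installed; the TEST= value is just an example):

git clone https://github.com/FDio/vpp
cd vpp
make install-dep     # install build dependencies
make test            # build and run the whole Python test framework
make test TEST=bfd   # or run just a single test module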

Vector Packet Processing - Part III - Ensuring a Supported PCI Device

Okay - today I took a quick look into why the "Unsupported PCI Device" errors were popping up when I started the VPP service.

It turns out that the Realtek network adaptors on that server are, in fact, not supported! Duh. This has nothing to do with VPP itself. It has to do with the underlying Data Plane Development Kit (DPDK), which VPP sits on top of as a layer (in other words, VPP uses DPDK libraries).

The DPDK site lists the adaptors that are supported, on this page of their website, entitled, "Supported Hardware".
http://core.dpdk.org/supported/

Sure enough, no long-in-the-tooth Realtek NICs are listed there.
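A quick sanity check before consulting that page is to list the NICs on the box along with their PCI vendor/device IDs, and compare those against the supported list:

lspci -nn | grep -i ethernet

On the old server, that is where the Realtek parts show up.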

So what would you do (on that server) to test and experiment with VPP?

  1. Well, you could swap out the adaptors. If you do that, you had better think about any static IP assignments based on MAC address, because all of your MACs will change. 
  2. You could use a virtual adaptor that is supported.
Or, you could simply find another server. Which I did. And this server is using Intel adaptors that ARE supported.

[Screenshot: VPP startup with supported adaptors]
Next, I ran the "vppctl list plugins" command, which dumped out a ton of .so (shared object) files. 

These files are essentially shared libraries. Rather than statically linking the code into every binary (making them larger), a shared object lets multiple binaries use the same code - each process gets its own local data segments but shares the code segment (as I understand it). 
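You can see the same picture from the OS side. On my CentOS7 install, the plugin .so files appear to live in a directory like the one below (exact paths can vary by version):

ls /usr/lib/vpp_plugins/    # the plugin shared objects vppctl was listing
ldd /usr/bin/vpp | head     # shared libraries the main vpp binary links against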

So - it looks like we have a working VPP service on this server. Yay. What next? Well, here are a couple of possibilities:

1. OpenStack has a Neutron VPP driver. That could be interesting to look into, to see what it's all about and how well it works.

2. Maybe there are some ways of using or testing VPP in a standalone way. For example, some test clients. 

I think I will look into number 2 first. At this point, I am only interested in functional testing here. I am not doing any kind of performance bake-offs. Not even sure I have the environment and tools for that right now. We're just learning here.
  

Tuesday, September 24, 2019

Vector Packet Processing - Part II - Installing VPP

As a wrap-up to my day, I decided to take one of my CentOS7 servers and install VPP on it.

I followed the cookbook found at this link:
https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages#RPMs

This link doesn't tell you how to set up the vpp repository, which is necessary to install any of the vpp packages (a yum groupinstall would have been nice for this, actually).

But the link for the repository is here:
https://my-vpp-docs.readthedocs.io/en/latest/gettingstarted/users/installing/centos.html

For convenience, I have included the snippet below.
$ cat /etc/yum.repos.d/fdio-release.repo
[fdio-release]
name=fd.io release branch latest merge
baseurl=https://nexus.fd.io/content/repositories/fd.io.centos7/
enabled=1
gpgcheck=0
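With that repo file in place, the install and startup amounted to the following (the packages beyond vpp itself are optional; names per the fd.io docs):

sudo yum install -y vpp vpp-plugins vpp-api-python
sudo systemctl start vpp
sudo systemctl status vpp    # confirm the service came up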
This didn't take long to do at all. No problem installing packages, no problem starting up the vpp service.
But it looks to me like old hardware and old network cards don't support VPP. So, more work to do.
[Screenshot: "Unsupported PCI Device" errors on vpp service startup]

Vector Packet Processing - Part I

Yesterday, I was reading up on something called Vector Packet Processing (VPP). I had not heard of this, nor the organization called Fd.io (pronounced Fido), which can be found at the following link: http://fd.io

Chasing links to get more up to speed, I found this article, which provides a very good introduction to these newer networking technologies, which have emerged to support virtualization, given the overhead (and redundancy) associated with forwarding packets from NICs, to virtualization hosts, and on into the virtual machines.

https://software.intel.com/en-us/articles/an-overview-of-advanced-server-based-networking-technologies

I like how the article progresses from the old-style interrupt processing, to OpenVSwitch (OVS), to SR-IOV, to DPDK, and then, finally, to VPP.

I am familiar with OpenVSwitch, which I came into contact with through OpenStack, which had OpenVSwitch drivers (and required you to install OpenVSwitch on the controller and compute nodes).

I was only familiar with SR-IOV because I stumbled upon it and took the time to read up on what it was. I think it was a virtual Palo Alto firewall that had SR-IOV NIC types, if I'm not mistaken. I spent some time trying to figure out whether the servers I am running support SR-IOV; they don't have it enabled, that's for sure. Whether they support it at all would take more research.

And DPDK I had read up on, because a lot of hardware vendors were including fast-path data switches that utilized DPDK for their own in-house virtual switches, or were using the DPDK-accelerated OpenVSwitch implementation.

But Vector Packet Processing (VPP) somehow passed me by. So I have been doing some catch-up on VPP, which I won't go into detail on in this post, nor share additional resources on, as it is such a large topic. But the link above to Fd.io is essentially touting VPP.

UPDATE:
I found this link, which is also spectacularly written:
https://www.metaswitch.com/blog/accelerating-the-nfv-data-plane

And, same blog with another link for those wanting the deep dive into VPP:
https://www.metaswitch.com/blog/fd.io-takes-over-vpp
