Monday, May 11, 2020

DPDK Hands-On - Part II - Poll Mode Drivers and IOMMU


In our last post, Hands On with DPDK - Part I, we chose a box to try to install DPDK on.

This box was a circa-2015 Dell T-1700. A bit long in the tooth (it is now 2020), and it is not, and never was, a data-center-grade server.

And, looking forward, this will bite us. But the persistence and troubleshooting it demands will teach us a LOT about DPDK.

So - to get started, I did something rather unconventional. Rather than read all of the documentation (there is a LOT of it), I took a cursory look at the dpdk.org site (Getting Started), and then went looking for a couple of blogs where someone else had tried to get DPDK working with OVS.

Poll Mode Drivers

Using DPDK requires a special type of network interface card driver known as a poll mode driver (PMD). This means the driver has to be available: custom-compiled and installed from an rpm, or perhaps pre-compiled and installed with a package manager like yum.

Poll mode drivers continuously poll for packets, as opposed to the classic interrupt-driven approach that standard vendor drivers use. Using interrupts to process packets is considered less efficient than polling for them. But polling for packets continuously is CPU-intensive, so there is a trade-off!

There are two families of kernel drivers for binding NICs to DPDK listed on the dpdk.org website:
https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html

  1. UIO (legacy)
    1. uio_pci_generic
    2. igb_uio
  2. VFIO (current recommended driver)
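As a concrete sketch (my own, not from the original post), this is roughly how a NIC gets handed over to the VFIO path using DPDK's devbind helper. The PCI address 0000:01:00.0 is a placeholder - `dpdk-devbind.py --status` lists the real ones - and the whole thing is guarded so it is a no-op on a machine without DPDK installed.

```shell
# Hedged sketch: hand a NIC over to the vfio-pci driver for DPDK use.
# 0000:01:00.0 is a placeholder PCI address -- use --status to find yours.
if command -v dpdk-devbind.py >/dev/null 2>&1; then
    modprobe vfio-pci                              # load the VFIO kernel module
    dpdk-devbind.py --status                       # show NICs and their current drivers
    dpdk-devbind.py --bind=vfio-pci 0000:01:00.0   # detach from kernel driver, attach to vfio-pci
else
    echo "dpdk-devbind.py not installed; nothing to do"
fi
```

Note that a NIC bound to vfio-pci disappears from the kernel's view (no more eth0/ens1f0), which is exactly the point: DPDK owns it from userspace.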

The DPDK website has this to say about the two driver families (UIO and VFIO):

"VFIO is the new or next-gen poll mode driver, that is a more robust and secure driver in comparison to the UIO driver, relying on IOMMU protection". 

So perhaps it makes sense to discuss IOMMU, as it will need to be disabled for UIO drivers, and enabled for VFIO drivers.

IOMMU

Covering IOMMU would be a blog series in its own right, so I will simply point to the Wikipedia article on the IOMMU: Wikipedia IOMMU Link

What does IOMMU have to do with DPDK? The DPDK documentation has this to say in its up-front prerequisites:

"An input-output memory management unit (IOMMU) is required for safely driving DMA-capable hardware from userspace and because of that it is a prerequisite for using VFIO. Not all systems have one though, so you’ll need to check that the hardware supports it and that it is enabled in the BIOS settings (VT-d or Virtualization Technology for Directed I/O on Intel systems)"
 

So there you have it. It took getting down to the poll mode drivers, but the IOMMU provides memory protection - for the newer-generation VFIO drivers. Without this protection, one rogue NIC could DMA into memory belonging to other devices, or jeopardize the memory of the system in general.

So - how do you enable IOMMU?

Well, first you need to make sure your system even supports IOMMU.

To do this, you can do one of two things (suggested: do both) - a Linux system is assumed here.
  1. Check that /sys/class/iommu exists (it is a directory, not a file)
  2. Type (as root): dmesg | grep IOMMU
On #2, you should see something like this:
[    0.000000] DMAR: IOMMU enabled
[    0.049734] DMAR-IR: IOAPIC id 8 under DRHD base  0xfbffc000 IOMMU 0
[    0.049735] DMAR-IR: IOAPIC id 9 under DRHD base  0xfbffc000 IOMMU 0
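The two checks above can be rolled into one small script (a convenience wrapper of my own, not something from the DPDK docs). A caveat: /sys/class/iommu can exist but be empty when the hardware supports an IOMMU that is currently disabled, dmesg may require root, and AMD systems log AMD-Vi lines rather than DMAR.

```shell
# Quick IOMMU support check (Linux assumed).
if [ -d /sys/class/iommu ] && [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
    echo "IOMMU active: $(ls /sys/class/iommu | tr '\n' ' ')"
else
    echo "no active IOMMU under /sys/class/iommu"
fi
# Kernel log evidence: DMAR/IOMMU on Intel, AMD-Vi on AMD (may need root).
dmesg 2>/dev/null | grep -iE 'DMAR|IOMMU|AMD-Vi' || true
```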
Now, in addition to this, you will need to edit your kernel command line so that two IOMMU directives are passed in: iommu=pt intel_iommu=on. The intel_iommu=on directive enables the IOMMU on Intel hardware, while iommu=pt puts it in pass-through mode, so host (kernel) drivers skip DMA translation and avoid its overhead - devices handed to VFIO still get full protection.
 
The typical way to add these directives is to append them to GRUB_CMDLINE_LINUX in /etc/default/grub.

NOTE: Many people forget that after adding the parameters, they need to run grub2-mkconfig (and reboot) to actually apply them!
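On a grub2 distro such as the CentOS 7 box used here, the sequence looks roughly like this. This is a sketch assuming BIOS boot and CentOS-style grub2 tooling; UEFI installs keep grub.cfg under /boot/efi/EFI/<distro>/ instead, and either way a reboot is still needed afterwards.

```shell
# Sketch: append the directives and regenerate the grub config (run as root).
# Assumes BIOS boot; adjust the -o path for UEFI systems.
if [ -w /etc/default/grub ] && command -v grub2-mkconfig >/dev/null 2>&1; then
    # append to the existing GRUB_CMDLINE_LINUX line
    sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 iommu=pt intel_iommu=on"/' /etc/default/grub
    grub2-mkconfig -o /boot/grub2/grub.cfg   # the mkconfig step people forget
else
    echo "not a writable grub2 system; skipping"
fi
```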

After adding these kernel parameters, you can check your kernel command line by running the following command:

# cat /proc/cmdline

And you should see your iommu parameters showing up:

BOOT_IMAGE=/vmlinuz-3.10.0-1127.el7.x86_64 root=UUID=4102ab69-f71a-4dd0-a14e-8695aa230a0d ro rhgb quiet iommu=pt intel_iommu=on
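As a belt-and-braces step, a tiny helper (purely illustrative, my own naming) can confirm both directives made it onto the running command line, instead of eyeballing the cat output:

```shell
# Illustrative helper: check a kernel command line string for both directives.
# Padding with spaces forces whole-token matches (so "intel_iommu=on" alone
# does not satisfy the "iommu=pt" check).
has_iommu_params() {
    case " $1 " in *" iommu=pt "*) ;; *) return 1 ;; esac
    case " $1 " in *" intel_iommu=on "*) return 0 ;; *) return 1 ;; esac
}

if has_iommu_params "$(cat /proc/cmdline)"; then
    echo "IOMMU directives present"
else
    echo "IOMMU directives missing - edit grub, run mkconfig, reboot"
fi
```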

Next Step: Part III - Huge Pages
