Wednesday, June 17, 2020

DPDK Hands-On - Part VI - Configuring OpenVSwitch DPDK Parameters

In earlier posts, we talked about the fact that we needed to check our compatibility with NUMA, HugePages, and IOMMU. And once we had confirmed that compatibility, we "enabled" these technologies by passing the corresponding parameters to the Linux kernel through grub.

But - now that everything is enabled, OpenVSwitch needs to be configured with a number of parameters that, to the layman's eye, look quite scary and intimidating. Fortunately, there is a website that explains these parameters much better than I could, so I will link it here:    OVS-DPDK Parameters

The first thing to do is initialize OpenVSwitch for DPDK. Without this, OpenVSwitch is deaf, dumb, and blind as to DPDK-related directives.

# ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
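
This setting only takes effect when ovs-vswitchd (re)starts, so restart the service after setting it. On newer OVS versions you can then ask the switch directly whether DPDK actually initialized. The service name below assumes a RHEL/CentOS-style install (on Debian/Ubuntu it is typically openvswitch-switch), and the version output format varies by build:

# systemctl restart openvswitch
# ovs-vsctl get Open_vSwitch . dpdk_initialized
# ovs-vswitchd --version

The get command should return true, and the version output of a DPDK-enabled build will mention the DPDK release it was compiled against.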

Next, we will reserve memory, in Hugepages, for DPDK sockets. The commands below (values are in MB) reserve 1 Hugepage, with an upper limit of 2 Hugepages (each Hugepage = 1G).

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024"
# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-limit="2048"
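
Our box has a single NUMA node, so a single value is enough. For reference, on a multi-socket machine these parameters take a comma-separated list with one entry per NUMA node. The values below are just hypothetical and would give each of two nodes 1G reserved with a 2G cap:

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-limit="2048,2048"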

NOTE: Hugepage usage can be checked a couple of ways. See the prior post on Hugepages.
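
For the impatient, one quick check looks like this (the 1048576kB directory assumes 1G pages; 2M pages live under hugepages-2048kB):

# grep Huge /proc/meminfo
# cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages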

Next, we instruct OVS to pin its DPDK threads to the specific cores we want. We will do that by specifying a CPU mask for each of two settings. First, let's discuss the OVS directives (settings):

  • dpdk-lcore-mask - which cores the DPDK lcore (housekeeping) threads run on
  • pmd-cpu-mask - which cores the poll mode driver (datapath) threads run on

The dpdk-lcore-mask is a core bitmask that is used during DPDK initialization and it is where the non-datapath OVS-DPDK threads such as handler and revalidator threads run.
 
The pmd-cpu-mask is a core bitmask that sets which cores are used by OVS-DPDK for datapath packet processing.

 
The masks in particular are quite esoteric, and need to be understood properly.

Calculating Mask Values

I found a pmd (poll mode driver) mask calculator out on GitHub, at the following link:

Another calculator I found is at this link:

So let's take our box as an example. We have one NUMA node with 4 cores, one thread per core. This makes things a bit simpler since we don't have to consider sibling pairs.
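
If you would rather not rely on a calculator, the arithmetic is simple: each CPU core is one bit in the mask, so you OR together 1 shifted left by each core number you want. A quick sketch using nothing more than bash arithmetic (the core numbers are just the ones from our example box):

# printf '0x%X\n' $(( 1 << 0 ))
0x1
# printf '0x%X\n' $(( (1 << 2) | (1 << 3) ))
0xC

The first result is the lcore mask for core 0, and the second is the pmd mask for cores 2 and 3.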

This mask will set the lcore threads to run on CPU core 0.
# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
This mask will set the pmd threads to run on CPU cores 2 and 3.
# ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask="0xC"
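
Once the switch is restarted with these masks and has DPDK ports attached, you can ask OVS where its PMD threads actually landed (the exact output format varies by OVS version):

# ovs-appctl dpif-netdev/pmd-rxq-show
# ovs-appctl dpif-netdev/pmd-stats-show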

It will be EASY to see if your pmd cores are set correctly: run top or htop, and those CPUs will sit at 100%, because the poll mode driver threads busy-poll for packets whether or not any traffic is flowing!

So per the screenshot below, we can see that cores 2-3 are fully consumed. This is a classic example of why affinity rules matter! The OS scheduler will avoid placing work on CPUs that are already fully consumed, but if you want your OS to run on core 0 (along with the lcore threads), that leaves only one core remaining on this system (core 1) to run VMs, unless you are willing to share cores 0-1 with other userspace processes.

NOTE: in htop, cores 2-3 are shown as 3-4 because htop numbers CPUs starting at 1 rather than 0.
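
If you prefer numbers over htop's bars, mpstat from the sysstat package (which may need to be installed) shows per-CPU utilization; cores 2 and 3 should sit at or near 100% even when the box is otherwise idle:

# mpstat -P ALL 1 3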


