First off, you can NOT add a DPDK port for a physical NIC to OpenVSwitch if the NIC is not bound to a DPDK-compatible driver. And that driver must actually be working!
Now, when I got started, I was using vfio as my DPDK driver. Why? Because, per the DPDK and DPDK+OVS websites, vfio is the newer driver and the one everyone is encouraged to use.
In order to use a poll mode driver on Linux, the appropriate kernel module needs to be loaded (a sketch of the modprobe commands follows the list below). Once that module is loaded, you can use a couple of methods (at least) to tell Linux to use the vfio driver for your PCI NIC. The two tools I use are:
- driverctl - this ships with the OS (CentOS at least)
- dpdk-devbind - to use this utility, the dpdk-tools package needs to be installed; alternatively, if you have downloaded and compiled DPDK from source, the script ships with the source tree.
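Before either tool can bind the NIC, the matching kernel module has to be loaded. A minimal sketch (note that vfio-pci also requires the IOMMU / VT-d to be enabled in the BIOS and on the kernel command line):
# modprobe vfio-pci
or, for the generic UIO driver:
# modprobe uio_pci_generic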
Using driverctl, you can set your driver by "overriding" the default driver, as follows:
# driverctl set-override 0000:01:00.0 vfio-pci
or, if you are using uio_pci_generic,
# driverctl set-override 0000:01:00.0 uio_pci_generic
NOTE: I had to use uio_pci_generic with my cards. The vfio driver was reporting an error that I finally found in the openvswitch log.
I find the dpdk-devbind tool best for checking status, but you can use driverctl for that as well. I will show both commands and their output; driverctl's output format is not as clean as dpdk-devbind's.
# driverctl -v list-devices | grep -i net
0000:00:19.0 e1000e (Ethernet Connection I217-LM)
0000:01:00.0 uio_pci_generic [*] (82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) (PRO/1000 PT Dual Port Server Adapter))
0000:01:00.1 uio_pci_generic [*] (82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) (PRO/1000 PT Dual Port Server Adapter))
0000:03:00.0 e1000e (82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) (PRO/1000 PT Dual Port Server Adapter))
0000:03:00.1 e1000e (82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) (PRO/1000 PT Dual Port Server Adapter))
# dpdk-devbind --status
Network devices using DPDK-compatible driver
============================================
0000:01:00.0 '82571EB Gigabit Ethernet Controller 105e' drv=uio_pci_generic unused=e1000e
0000:01:00.1 '82571EB Gigabit Ethernet Controller 105e' drv=uio_pci_generic unused=e1000e
Network devices using kernel driver
===================================
0000:00:19.0 'Ethernet Connection I217-LM 153a' if=em1 drv=e1000e unused=uio_pci_generic
0000:03:00.0 '82571EB Gigabit Ethernet Controller 105e' if=p1p1 drv=e1000e unused=uio_pci_generic
0000:03:00.1 '82571EB Gigabit Ethernet Controller 105e' if=p1p2 drv=e1000e unused=uio_pci_generic
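For completeness, dpdk-devbind can also perform the binding itself, if you prefer it over driverctl. A sketch, using the same PCI addresses shown above:
# dpdk-devbind --bind=uio_pci_generic 0000:01:00.0 0000:01:00.1
and to hand a port back to its kernel driver:
# dpdk-devbind --bind=e1000e 0000:01:00.0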
After you load your driver for the NIC of choice, you need to check a few things to ensure that the driver is working properly. Otherwise, you might pull your hair out trying to debug problems, without realizing that the underlying driver is the culprit.
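The quickest check is the Open_vSwitch table itself: the dpdk_initialized and dpdk_version fields, plus the dpdk-* keys under other_config, tell you whether OVS actually brought DPDK up. The listing below came from a command such as:
# ovs-vsctl list Open_vSwitch .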
bridges : [14049db7-07eb-4b00-89d6-b05201d2978e, 221b954d-c2dc-48b2-923b-86a87765ed7b, db7a8c3c-5a03-4063-a5e8-23686f319473]
cur_cfg : 936
datapath_types : [netdev, system]
db_version : "7.16.1"
dpdk_initialized : true
dpdk_version : "DPDK 18.11.8"
external_ids : {hostname=maschinen, rundir="/var/run/openvswitch", system-id="35b95ef5-fd71-491f-8623-5ccbbc1eca6b"}
iface_types : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient, erspan, geneve, gre, internal, "ip6erspan", "ip6gre", lisp, patch, stt, system, tap, vxlan]
manager_options : [a4329e63-5b63-4e8b-8dac-0e9bc8492c28]
next_cfg : 936
other_config : {dpdk-init="true", dpdk-lcore-mask="0x2", dpdk-socket-limit="2048", dpdk-socket-mem="1024", pmd-cpu-mask="0xC"}
ovs_version : "2.11.1"
ssl : []
statistics : {}
system_type : centos
system_version : "7"
Trying to add a PCI NIC that is using a poll mode driver (vfio or uio) to an OpenVSwitch bridge whose datapath is not set correctly (to netdev) will result in an error when you add the port. This error is reported both in the output of an "ovs-vsctl show" command and in the log file /var/log/openvswitch/ovs-vswitchd.log.
This was actually one of the most common and confusing errors to debug. Newer versions of OVS include the datapath in the "ovs-vsctl show" output. My switch, however, predates this change, so I have to examine my datapath another way:
# ovs-vsctl list bridge br-tun | grep datapath
datapath_id : "0000001b21c57204"
datapath_type : netdev
datapath_version : "<built-in>"
If the datapath on your bridge is NOT netdev, you can change it:
# ovs-vsctl set bridge br-tun datapath_type=netdev
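If you are creating the bridge from scratch, you can also set the datapath type at creation time (a sketch, using the same bridge name):
# ovs-vsctl add-br br-tun -- set bridge br-tun datapath_type=netdev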
So if your switch looks good, and your bridge is set with the right netdev datapath type, adding your NIC should be successful with the following command:
# ovs-vsctl add-port br-tun dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:01:00.0 ofport_request=1
Let me comment a bit on this command to shed some additional insight.
1. I always add my dpdk NICs - whether they are physical or virtual ports - with the name "dpdk" as a prefix as a matter of practice.
2. A physical PCI NIC is going to use type dpdk. Virtual interfaces do NOT use that type; they use the vhostuser or vhostuserclient types. The distinction between these two is a subsequent discussion.
3. The dpdk-devargs option is where you map the port to a specific PCI address. You need to be SURE you are using the correct one, and not inadvertently swapping addresses. A lot of mistakes are made by using the wrong PCI address, especially on multi-port NIC cards where one port may be 0000:01:00.0 and another 0000:01:00.1 (see the example after this list)!
4. The ofport_request is where you give your port a port number on the switch. If you set this to 1, and 1 is taken, you will get an error. But in general, I try to always make a physical NIC port 1. It is extremely rare to add more than one physical NIC to a bridge.
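As an example, the second port of the dual-port card shown earlier would be added as its own interface, with its own name, PCI address, and ofport (the name dpdk1 is just illustrative):
# ovs-vsctl add-port br-tun dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:01:00.1 ofport_request=2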
So after adding your NIC to the bridge, you can check the status of it with the ovs-vsctl show command:
# ovs-vsctl show
Bridge br-tun
fail_mode: standalone
Port "dpdk0" --> you will see an error here if the port did not add correctly!
Interface "dpdk0"
type: dpdk
options: {dpdk-devargs="0000:01:00.0"}
Port br-tun
Interface br-tun
type: internal
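If an error does show up under the port, two quick places to look for detail are the Interface record's error column and the vswitchd log (a sketch; the grep pattern is just an example):
# ovs-vsctl get Interface dpdk0 error
# grep -i dpdk0 /var/log/openvswitch/ovs-vswitchd.log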
Another useful command is to dump your specific bridge, by port number. For this we use the "ofctl" command rather than the "vsctl" command.
# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000001b21c57204
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(dpdk0): addr:00:1b:21:c5:72:04
config: 0
state: 0
current: 1GB-FD AUTO_NEG
speed: 1000 Mbps now, 0 Mbps max
LOCAL(br-tun): addr:00:1b:21:c5:72:04 --> LOCAL port maps to the bridge port in ovs-vsctl command.
config: 0
state: 0
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
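Once traffic is flowing, the per-port counters are a quick sanity check that packets are actually moving through the DPDK port (a sketch, using the port added above):
# ovs-ofctl dump-ports br-tun dpdk0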
And this concludes our post on adding a physical DPDK PCI NIC to an OpenVSwitch bridge.