Before we get into the procedure of adding virtual ports to the switch, it is important to understand the two types of DPDK virtual ports, and their differences.
In earlier versions of DPDK+OVS, virtual interfaces were defined with a type called vhostuser. These interfaces connect to OpenVSwitch, meaning, from a socket perspective, that OpenVSwitch manages the socket. More technically, OVS binds to a socket in /var/run/openvswitch/<portname> and acts as the server, while the VMs connect to this socket as clients.
There is a fundamental flaw in this design. A rather major one! Picture a situation where a dozen virtual machines are launched with ports connected to the OVS, and then the OVS is restarted! All of those sockets are destroyed on the OVS side, leaving all of the VMs "stranded".
To address this flaw, the socket model needed to be reversed. The virtual machine (i.e. qemu) needed to act as the server, and the switch needed to be the client!
Hence, a new port type was created: vhostuserclient.
A more graphical and detailed explanation of this can be found at this link:
Now, because there are two sides to any socket connection, it makes sense that BOTH sides need to be configured with the correct port type for this communication to work.
This post deals with simply adding the right DPDK virtual port type (vhostuser or vhostuserclient) to the switch. But configuring the VM properly is also necessary, and will be covered in a follow-up post after this one is published.
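Before adding anything, it can be worth confirming that your OVS build was actually compiled with DPDK support and knows about both port types. A quick check (the second command only exists on newer OVS releases):
# ovs-vsctl get Open_vSwitch . iface_types
# ovs-vsctl get Open_vSwitch . dpdk_initialized
Both dpdkvhostuser and dpdkvhostuserclient should appear in the list of interface types on a DPDK-enabled build.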
I think the easiest way to explain this is to show how both port types are added, with some discussion along the way.
VhostUser
To add a vhostuser port, the following command can be run:
# ovs-vsctl add-port br-tun dpdkvhost1 -- set Interface dpdkvhost1 type=dpdkvhostuser ofport_request=2
It is as simple as adding a port to a bridge, giving it a name, and using the appropriate type for a legacy virtual DPDK port (dpdkvhostuser). We also give it port number 2 (in our earlier post we added a physical DPDK PCI NIC as port 1, so we will assume port 1 is taken by that).
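If you want to verify that the requested OpenFlow port number was actually honored, a couple of ways to check (shown as a sketch) are:
# ovs-vsctl get Interface dpdkvhost1 ofport
# ovs-ofctl show br-tun
The first should return 2 if the request was honored; the second lists every port on the bridge along with its OpenFlow port number.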
Notice that there is no socket information included in the add-port command. OpenVSwitch will create a socket, by default, in /var/run/openvswitch/<portname> once the vhostuser port is added to the switch.
NOTE: The OVS socket location can be overridden, but for simplicity we will assume the default location. Another issue is socket permissions: when the VM launches under a different userid, such as qemu, the socket will need to be writable by that user!
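To confirm that OVS created the socket, and to hand it over to the user qemu runs as (qemu:qemu here is only an assumption; match whatever user your hypervisor actually uses), something along these lines works, keeping in mind the chown has to be repeated if the socket is ever recreated:
# ls -l /var/run/openvswitch/dpdkvhost1
# chown qemu:qemu /var/run/openvswitch/dpdkvhost1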
The virtual machine, with a vhostuser interface defined on it, needs to be told which socket to connect to. Because the burden of knowing the socket path falls on the VM, the OVS configuration is actually somewhat simpler in this model: OVS just creates the socket wherever it is configured to, the default being /var/run/openvswitch.
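As a quick preview of the VM side (which the follow-up post will cover properly), here is a rough sketch of the qemu arguments a vhostuser guest might use to connect to that socket. The id names (mem0, char1, net1), the memory sizes, and the omitted disk/display options are placeholders; the shared hugepage memory backend is required for vhost-user to work:
# qemu-system-x86_64 -m 1024 -smp 2 \
    -object memory-backend-file,id=mem0,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 -mem-prealloc \
    -chardev socket,id=char1,path=/var/run/openvswitch/dpdkvhost1 \
    -netdev type=vhost-user,id=net1,chardev=char1,vhostforce=on \
    -device virtio-net-pci,netdev=net1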
So after adding the port to the bridge, we can do a quick show on our bridge to make sure it was created properly.
# ovs-vsctl show
Bridge br-tun
    fail_mode: standalone
    Port "dpdkvhost1"
        Interface "dpdkvhost1"
            type: dpdkvhostuser
    Port "dpdk0"
        Interface "dpdk0"
            type: dpdk
            options: {dpdk-devargs="0000:01:00.0"}
    Port br-tun
        Interface br-tun
            type: internal
With this configuration, we can do a test between a physical interface and a virtual interface, or the virtual interface can attempt to reach something outside of the host (e.g. a ping test to the default gateway and/or an internet address). A virtual machine could also attempt a DHCP request to obtain an IP address for the segment it is on, if a DHCP server indeed exists.
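While running those ping/DHCP tests, one simple way to see whether traffic is actually moving through the new port is to watch its counters from the OVS side:
# ovs-ofctl dump-ports br-tun dpdkvhost1
# ovs-vsctl get Interface dpdkvhost1 statistics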
If we wanted to test between two virtual machines, another such interface would need to be added:
# ovs-vsctl add-port br-tun dpdkvhost2 -- set Interface dpdkvhost2 type=dpdkvhostuser ofport_request=3
And, this would result in the following configuration:
Bridge br-tun
    fail_mode: standalone
    Port "dpdkvhost2"
        Interface "dpdkvhost2"
            type: dpdkvhostuser
    Port "dpdkvhost1"
        Interface "dpdkvhost1"
            type: dpdkvhostuser
    Port "dpdk0"
        Interface "dpdk0"
            type: dpdk
            options: {dpdk-devargs="0000:01:00.0"}
    Port br-tun
        Interface br-tun
            type: internal
With this configuration, TWO virtual machines would connect to their respective OVS switch sockets:
VM1 connects to the OVS socket for dpdkvhost1 --> /var/run/openvswitch/dpdkvhost1
VM2 connects to the OVS socket for dpdkvhost2 --> /var/run/openvswitch/dpdkvhost2
Thanks to the PCI port we added earlier, these two VMs can "reach outside" to request an IP address, and they can ping each other on the same segment once they both have one.
 dpdkvhost1   dpdkvhost2
=====|============|=====
   OVS Bridge (br-tun)
===========|============
         dpdk0
           |
     Upstream Router
VhostUserClient
This configuration looks similar to the vhostuser configuration, but with a subtle difference. In this case the VM is the server in the client/server socket model, so the OVS port, acting as the client, needs to know where the socket is in order to connect to it!
# ovs-vsctl add-port br-tun dpdkvhostclt1 -- set Interface dpdkvhostclt1 type=dpdkvhostuserclient "options:vhost-server-path=/var/lib/libvirt/qemu/vhost_sockets/dpdkvhostclt1" ofport_request=4
In this directive, the only thing that changes is the addition of the vhost-server-path option telling OVS which socket to connect to, and of course the port type needs to be set to dpdkvhostuserclient (instead of dpdkvhostuser).
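On the VM side, the matching qemu chardev would then be opened in server mode on that same path (again just a sketch ahead of the follow-up post; newer qemu spells this server=on,wait=off while older releases use server,nowait, and the memory-backend options from the earlier vhostuser sketch are still required):
# qemu-system-x86_64 ... \
    -chardev socket,id=char1,path=/var/lib/libvirt/qemu/vhost_sockets/dpdkvhostclt1,server=on,wait=off \
    -netdev type=vhost-user,id=net1,chardev=char1 \
    -device virtio-net-pci,netdev=net1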
And if we run our ovs-vsctl show command, we will see that the port looks similar to the vhostuser ports, except for two differences:
- the type is now dpdkvhostuserclient, rather than dpdkvhostuser
- the options parameter, which tells OVS (the socket client) where to connect
Bridge br-tun
    fail_mode: standalone
    Port "dpdkvhostclt1"
        Interface "dpdkvhostclt1"
            type: dpdkvhostuserclient
            options: {vhost-server-path="/var/lib/libvirt/qemu/vhost_sockets/dpdkvhostclt1"}
    Port "dpdkvhost2"
        Interface "dpdkvhost2"
            type: dpdkvhostuser
    Port "dpdkvhost1"
        Interface "dpdkvhost1"
            type: dpdkvhostuser
    Port "dpdk0"
        Interface "dpdk0"
            type: dpdk
            options: {dpdk-devargs="0000:01:00.0"}
    Port br-tun
        Interface br-tun
            type: internal
Setting up Flows
Just because we have added these ports does not necessarily mean they'll pass traffic after creation. The next step is to set up flows (rules) for traffic forwarding between these ports.
Setting up switch flows is an in-depth topic in and of itself, and one we won't cover in this post. There are advanced OpenVSwitch tutorials on Flow Programming (OpenFlow).
The first thing you can generally do, if you don't have special flow requirements that you're aware of, is to set the traffic processing to "normal", as seen below for the br-tun bridge/switch.
# ovs-ofctl add-flow br-tun actions=normal
This should give normal L2/L3 packet processing. But, if you can't ping or your network forwarding behavior isn't as desired, you may need to program more detailed or sophisticated flows.
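If you have been experimenting and want to get back to a clean slate, one approach (use with care, as it wipes every flow on the bridge) is to clear the flow table and reinstall the normal action:
# ovs-ofctl del-flows br-tun
# ovs-ofctl add-flow br-tun actions=normal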
For simplicity, I can show you a couple of examples of how one could attempt to enable some traffic to flow between ports:
These flows allow you to ping from the bridges' local (internal) interfaces out of the host via their PCI interfaces...
# ovs-ofctl add-flow br-tun in_port=LOCAL,actions=output:dpdk0
# ovs-ofctl add-flow br-prv in_port=LOCAL,actions=output:dpdk1
These flows forward packets to the proper VM when they come into the host:
# ovs-ofctl add-flow br-tun ip,nw_dst=192.168.30.202,actions=output:dpdkvhost1
# ovs-ofctl add-flow br-prv ip,nw_dst=192.168.20.202,actions=output:dpdkvhost0
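Depending on how restrictive your flow table is, you may also need the reverse direction, steering traffic sourced from those VM addresses back out of the physical ports. A sketch, assuming the same addressing and the dpdk0/dpdk1 uplinks used above:
# ovs-ofctl add-flow br-tun ip,nw_src=192.168.30.202,actions=output:dpdk0
# ovs-ofctl add-flow br-prv ip,nw_src=192.168.20.202,actions=output:dpdk1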
To debug the packet flows, you can dump them with the "dump-flows" command. There is a similarity between iptables rules (iptables -nvL) and openvswitch flows, and debugging is somewhat similar in that you can dump flows, and look for packet counts.
# ovs-ofctl dump-flows br-prv
cookie=0xd2e1f3bff05fa3bf, duration=153844.320s, table=0, n_packets=0, n_bytes=0, priority=2,in_port="phy-br-prv" actions=drop
cookie=0xd2e1f3bff05fa3bf, duration=153844.322s, table=0, n_packets=10224168, n_bytes=9510063469, priority=0 actions=NORMAL
In the example above, we have two flows on the bridge br-prv, and no packets are hitting the drop rule while the NORMAL flow is passing plenty of traffic. So, presumably, anything connected to this bridge should be able to communicate, from a flow perspective.
After setting these kinds of flows, ping tests and traffic verification tests will need to be done.
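During those verification tests, the datapath flow cache is another useful thing to inspect; with the userspace (DPDK) datapath it can be dumped with ovs-appctl:
# ovs-appctl dpctl/dump-flows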
I refer to this as "port plumbing", and these rules can potentially get very advanced, sophisticated, and complex.
If you are launching a VM on Linux via KVM (usually with a script), or using virsh or Virtual Machine Manager (which drive off of an XML file that describes the VM), you will need to set up these "port plumbing" rules manually, and you would probably start with the basic normal processing unless you want to do something sophisticated.
If you are using OpenStack, however, OpenStack does a lot of this automatically, and what it does is influenced by your underlying OpenStack configuration (files). For example, if you are launching a DPDK VM on an OpenStack deployment that uses OpenVSwitch, each compute node will be running a neutron-openvswitch-agent service. This service is actually a Ryu OpenFlow controller, and when you start it, it plumbs ports on behalf of OpenStack Neutron based on your Neutron configuration. So you may look at your flows with just OpenVSwitch running and see a smaller set of flows than you would if the neutron-openvswitch-agent were running! I may get into some of this in a subsequent post, if time allows.