When you launch a virtual machine on Linux, the hypervisor can run as plain qemu (emulation in user space, a Type 2 hypervisor) or with KVM acceleration as qemu-kvm (generally considered a Type 1 hypervisor). This post won't go into the details of the differences between those.
The options you can pass to qemu or qemu-kvm can be daunting. But if you just want to launch a virtual machine for the purpose of testing a DPDK adaptor, it can be done in a lightweight manner.
Remember that a vhostuser port connects qemu to a socket created by OpenVSwitch. This is the original, legacy manner in which DPDK ports were wired between VMs and OpenVSwitch.
The drawback to this, of course, is that if the switch were to die, the VMs would be stranded with no connectivity (a bad thing).
This is why they came out with vhostuserclient ports, where OpenVSwitch connects to a socket that is managed by qemu when it starts the VM. This way, if the switch dies or is restarted, ports on the switch will reconnect to the VM-managed sockets. If a VM dies, it only removes its own sockets.
So we will cover launching a VM in both modes: vhostuser and vhostuserclient.
VhostUser
With a vhostuser port, OpenVSwitch creates the socket that qemu connects to. So this port needs to be created on OpenVSwitch ahead of time (see DPDK Hands-On Part VIII - creating DPDK ports).
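As a refresher, creating that port looks something like the sketch below. The bridge name br0 is an assumption here (use whatever DPDK-enabled bridge you created in that post); the port name matches the one used in the script that follows.
ovs-vsctl add-port br0 vhostport1 -- set Interface vhostport1 type=dpdkvhostuser
OpenVSwitch then creates the socket at /var/run/openvswitch/vhostport1, which is exactly the path the qemu command below connects to.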
Below is a snippet of a bash script that cranks a VM after some parameters are defined.
if [ $PORT_TYPE == "vhost-user" ]; then
# vhost-user - openvswitch binds to socket and acts as server
export VHOST_SOCK_DIR=/var/run/openvswitch
export VHOST_PORT="${VHOST_SOCK_DIR}/vhostport1"
# the vhostuser does not have the server parameter.
/usr/libexec/qemu-kvm -name $VM_NAME -cpu host -enable-kvm \
-m $GUEST_MEM -drive file=$QCOW2_IMAGE --nographic -snapshot \
-numa node,memdev=mem -mem-prealloc -smp sockets=1,cores=2 \
-object memory-backend-file,id=mem,size=$GUEST_MEM,mem-path=/dev/hugepages,share=on \
-chardev socket,id=char${MAC},path=${VHOST_PORT} \
-netdev type=vhost-user,id=default,chardev=char${MAC},vhostforce,queues=2 \
-device virtio-net-pci,mac=00:00:00:00:00:0${MAC},netdev=default,mrg_rxbuf=off,mq=on,vectors=6 \
#1>qemu.kvm.vhostuser.out 2>&1
fi
So let's discuss these parameters:
- Numa Node
The -numa node,memdev=mem directive, together with the memory-backend-file object, controls how the VM's cpus and memory are laid out across NUMA nodes. A good link on this can be found at https://futurewei-cloud.github.io/ARM-Datacenter/qemu/how-to-configure-qemu-numa-nodes/
Now on my host, I only have one NUMA socket, which equates to one NUMA node (NUMA node 0), with 4 cores on it. So with the -smp sockets=1,cores=2 directive, I can give 2 of those cores to my virtual machine.
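If you are not sure of your own host's layout, you can check it before choosing the -numa and -smp values (numactl may need to be installed first):
lscpu | grep -i numa        # NUMA node count and which cpus belong to each node
numactl --hardware          # per-node cpus and memory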
- Mem Path
This parameter (mem-path=/dev/hugepages on the memory-backend-file object) backs the VM's memory with hugepages, and share=on makes that memory shareable with the vhost-user backend. Any VM using DPDK ports should be backed by hugepages.
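A quick sanity check that hugepages are actually allocated and mounted on the host before launching (the exact counts depend on how you configured hugepages earlier):
grep -i huge /proc/meminfo      # HugePages_Total / HugePages_Free and the hugepage size
mount | grep -i huge            # confirm /dev/hugepages is mounted as hugetlbfs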
- Socket Parameters
The socket parameters are important. In vhostuser mode, OpenVSwitch creates the socket, and by default it places the socket in /var/run/openvswitch. You have to create the port on the switch manually ahead of time, then tell the VM to connect to it specifically by path and socket name.
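A simple check that the socket really exists at the path you are handing to qemu (path and name taken from the script above):
ls -l /var/run/openvswitch/vhostport1     # should show a socket ("s" as the first character of the mode)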
- Multi queuing
You will notice that mq=on and queues=2 are specified. Having more than one queue for sending and receiving data removes a potential bottleneck.
NOTE: I am not clear, actually, whether "queues=2" means 2 x Tx queues AND 2 x Rx queues. I need to look into that and perhaps update this post (or if someone can comment on this, great).
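One related point: even with mq=on and queues=2 on the host side, the guest typically has to enable the extra queue pairs itself. A minimal sketch of doing that inside the VM, assuming the virtio interface shows up as eth0:
ethtool -L eth0 combined 2      # enable 2 combined (rx/tx) queue pairs on the virtio-net interface
ethtool -l eth0                 # verify what the driver actually applied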
The proof is in the pudding on this. When you launch the VM, you will need to check TWO places to ensure your networking initialized properly:
First, check /var/log/openvswitch/ovs-vswitchd.log
Second, check the output of the VM itself. This means redirecting stdout and stderr to a file and analyzing it accordingly; the commented-out redirect at the end of the script above shows this. But redirecting the console output does affect how you start and stop the VM, so I comment it out when running in normal mode.
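For the first check, something like this pulls the relevant vhost messages out of the OpenVSwitch log (look for the port being added and the vhost-user connection coming up):
grep -i vhost /var/log/openvswitch/ovs-vswitchd.log | tail -20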
VhostUserClient
The vhostuserclient mode works very similarly to vhostuser, except that the socket is managed by qemu rather than OpenVSwitch!
# vhost-user-client - qemu binds to socket and acts as server
export VHOST_SOCK_DIR="/var/lib/libvirt/qemu/vhost_sockets"
export VHOST_PORT="${VHOST_SOCK_DIR}/dpdkvhostclt1"
echo "Starting VM..."
/usr/libexec/qemu-kvm -name $VM_NAME -cpu host -enable-kvm \
-m $GUEST_MEM -drive file=$QCOW2_IMAGE --nographic -snapshot \
-numa node,memdev=mem -mem-prealloc -smp sockets=1,cores=2 \
-object memory-backend-file,id=mem,size=$GUEST_MEM,mem-path=/dev/hugepages,share=on \
-chardev socket,id=char1,path=${VHOST_PORT},server \
-netdev type=vhost-user,id=default,chardev=char1,vhostforce,queues=2 \
-device virtio-net-pci,mac=00:00:00:00:00:0${MAC},netdev=default,mrg_rxbuf=off,mq=on,vectors=6
Note the difference between the "chardev" directive above and the vhostuser "chardev" directive: the vhostuserclient version includes the additional "server" parameter. This is the KEY to specifying that qemu owns the socket, as opposed to OpenVSwitch.
One might wonder how this works in practice. If you create a VM and it binds to a socket, how does OpenVSwitch know to connect to it? The answer is that OpenVSwitch won't - until you create the port on OpenVSwitch in vhostuserclient mode. At that point, OVS knows it is the client and makes the connection to the qemu process.
So a script that launches a VM with vhostuserclient interfaces should probably create the OVS port first, then launch the VM. Otherwise, the VM won't have the connectivity it needs in a timely manner when it boots up (e.g. for DHCP to determine its IP address and settings).
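For reference, creating the OVS side of the vhostuserclient port looks something like the sketch below (bridge name br0 is again an assumption). The important piece is vhost-server-path, which must match the socket path qemu is serving in the script above:
ovs-vsctl add-port br0 dpdkvhostclt1 -- set Interface dpdkvhostclt1 \
    type=dpdkvhostuserclient \
    options:vhost-server-path=/var/lib/libvirt/qemu/vhost_sockets/dpdkvhostclt1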