Friday, March 30, 2018

OpenVSwitch on OpenStack: Round II


Originally, I had attempted to get OpenVSwitch working on OpenStack and had to back that effort out. This was because I had some proofs of concept to deliver and didn't have the time to spend learning and debugging OpenVSwitch.

I circled back to OpenVSwitch and reconfigured my OpenStack network to use it. This entailed reconfiguration of Neutron on both the OpenStack Controller / Network node (my system combines these into a single VM) and the respective OpenStack Compute Nodes.

The main issue I was having (and still have) is that when the Controller node reboots, it pulls an address on the physical NIC eth0. With OpenVSwitch, when a physical NIC is attached as a port to a bridge, the IP address needs to be set on the bridge, and cannot be on the physical NIC.
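
For reference, this is roughly how eth0 ends up as a port on an OpenVSwitch bridge (a sketch; br-provider matches my setup, but your bridge and NIC names may differ):
# ovs-vsctl add-br br-provider
# ovs-vsctl add-port br-provider eth0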

So what I am having to do right now is flush the eth0 interface with:
ip a flush dev eth0

This removes all IP settings on eth0. Then I have to manually add the IP information onto the bridge (br-provider) using iproute2.
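
For example, something along these lines (a sketch; the 10.10.10.x address, prefix, and gateway are placeholders for your own values):
# ip addr add 10.10.10.5/24 dev br-provider
# ip route add default via 10.10.10.1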

Now this alone does not get things working. There are still a couple more things you have to do.

First, you have to explicitly set the link up on the openvswitch br-provider bridge in order to get packets to flow properly:
# ip l set br-provider up

Then, restarting the (properly configured) neutron-openvswitch-agent service should result in all of the bridge tap interfaces disappearing from the host's general network namespace and reappearing under OpenVSwitch, so that they show up in the output of "ovs-vsctl show" instead of in the output of "ip a" at the OS command line.
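
On a systemd-based distro, that sequence looks something like this (the service name may vary slightly by packaging):
# systemctl restart neutron-openvswitch-agent
# ovs-vsctl show
# ip a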

But it appears this was the main culprit, and everything seems to be working fine now with OpenVSwitch.

Thursday, March 8, 2018

Jumbo Frames on Service Orchestration Systems

A friend of mine suggested configuring Jumbo Frames in order to improve the inter-VM performance on the service orchestration system (OpenStack and Open Baton) I had put together as a research project.

I have a lot of bridges in this architecture.

For example, on each KVM host, the adaptors are attached to bridges, which is actually the only way to get VMs onto a "default" network (i.e. each VM on the same network as the host interface, without a separate private network or NAT).

On my box, I have a bridge called br0 with a physical NIC (eno1) connected to it, and when a VM starts, libvirtd attaches that VM's virtual interface - vnet0 in my case - to the bridge so the VM can use the host's network.
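
A quick way to see which interfaces are attached to the bridge:
# ip link show master br0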

Using the command "ip l show br0", I noticed that the MTU on the bridge(s) had a setting of 1500.

I wanted to make the bridge 9014 MTU. So naturally I tried to use the command:
"ip link set dev br0 mtu 9014"

The command seemed to work fine (exit code 0). But the MTU remained at 1500 when I verified it with "ip l show br0". So it didn't really work.

I realized that in order for the bridge to use the 9014 MTU, all of the underlying interfaces connected to that bridge must be set to 9014 MTU. So first we set the MTU on the bridge members individually:
- "ip link set dev eno1 mtu 9014"
- "ip link set dev vnet0 mtu 9014"

Now, if we examine the mtu of the bridge:
"ip link show br0" - the mtu is now 9014!

NOTE: If it isn't (maybe it still says 1500), you can type: "ip link set dev br0 mtu 9014" and it should adjust up to 9014 if all of its member interfaces on the bridge are up to that number.

And, in fact, if all underlying NICs attached to the bridge are set to 9014, you can set the MTU of the bridge to 9014 or anything below that, e.g. "ip link set dev br0 mtu 8088". Basically, it is a lowest-common-denominator approach to the MTU: the MTU on a bridge is limited by its weakest link, i.e. the member interface with the lowest MTU.

Now, if you bounce the interface - the physical NIC on the bridge - it reverts back to 1500. Likewise, the virtual machine network, if you stop and restart it in libvirtd, will also revert back to 1500. Persisting a higher MTU on NICs is OS-specific (scripts or interface config files, depending on the distro), and persisting it in libvirtd typically requires directives in the XML definition of the specific network you are trying to increase the MTU on.
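
As a sketch of how that persistence might look: on a Debian/Ubuntu host using ifupdown, the physical NIC stanza in /etc/network/interfaces can carry the MTU (file names and syntax differ by distro):

auto eno1
iface eno1 inet manual
    mtu 9014

And for a libvirt-managed network (assuming libvirt 3.1 or newer, which supports an MTU directive in the network XML; "default" here is a placeholder for your network's name), you can edit the definition and restart the network:
# virsh net-edit default
(add <mtu size="9014"/> inside the <network> element)
# virsh net-destroy default
# virsh net-start default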

Now - after all of this, here is a wiser suggestion: make sure your ROUTER supports Jumbo Frames! Packets going out to the internet are subject to fragmentation anyway, because the internet generally does not support jumbo frames from point A to point Z and all points between. But you may still be using a router for internal networking. The Ubiquiti Edge Router I use, it turns out, has a hard MTU limit of 2018, so I cannot use it for Jumbo Frames. The alternative would be to set up a Linux box that can do Jumbo Frames and, essentially, build my own router.

But - for KVM virtual machines that might need to communicate with OpenStack-managed virtual machines on the same hosts, the router may not come into play; it is probably only being used for DHCP. So you can probably just set Jumbo Frames up on the respective KVM hosts and in libvirtd, and get some performance benefit.

