Thursday, April 12, 2018

Libnet and Libpcap

In the Jon Erickson book, he discusses the differences between libnet and libpcap.

Libnet is used to send packets (it doesn't receive).

Libpcap is used to filter (receive) packets - it doesn't send.

So you need both libraries to have, well, "a full duplex solution".

I downloaded and compiled a bunch of libnet code examples so I can fiddle around and send packets under different scenarios. It's fairly easy to use, I think. All in C.

Libpcap is a library that lets you initialize a capture handle and drop into a listening loop: you pass in a BPF (Berkeley Packet Filter) expression and a callback function, and packets matching the filter criteria are fed into that callback.
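The book does all of this in C, but the send/capture pattern is easy to see in a few lines of Python using the scapy library. This is my own sketch (not code from the book), and the address is a placeholder:

from scapy.all import IP, ICMP, send, sniff

def handle_packet(pkt):
    # Callback invoked for every packet that matches the BPF filter below
    print(pkt.summary())

# The "libnet side": craft and send a packet
send(IP(dst="192.168.1.10")/ICMP())

# The "libpcap side": capture with a BPF filter and hand matches to the callback
sniff(filter="icmp and host 192.168.1.10", prn=handle_packet, count=5)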

I had issues running the libpcap code on VirtualBox virtual machines that had a bridged interface to the host. I need to re-run the code from the libpcap tutorial I was doing on a dedicated Linux box, or maybe change the adaptor type on the VirtualBox VMs.

Security and Hacking - Part I

I don't usually write much about Security and Hacking, but I will need to do a little bit of that because that is what I have been working on lately.

I went to the RSA show a couple of years ago, and that bootstrapped my involvement in security. The Dispersive DVN, after all, is all about Security. We have had a number of people come in and pen test the networks, and I have read those reports. Recently, as part of Research, once I finished Orchestration, I was asked to bolster my skills in this area and do some internal pen testing of our network. This is a big undertaking, to say the least.

I started with a book called Hacking: The Art of Exploitation (2nd Edition), by Jon Erickson. This book is not for script kiddies. It uses practical assembler and C examples on a (dated) version of Ubuntu that you compile and run as you work through the book. I have gone through the entire book, page by page, and learned some very interesting things from it. Where I kind of got lost was in the shellcode sections - which is essentially the key material that separates the port scanners and tire kickers from the people who know how to actually exploit and break into networks and systems. I will need to go back through the book, and those sections especially, probably iteratively, to actually master the skills it presents.

I've built a "Pen Testing" station - on an Ubuntu VM and this VM is essentially my "attack plane" for the OpenStack network. It sits outside the OpenStack networks but can route to all of the networks inside OpenStack via the OpenStack router.

So far, I have run a series of half-open (SYN) port scans and documented all of the ports I've been finding open on various network elements.
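For reference, a half-open (SYN) scan just sends a SYN and watches for a SYN/ACK without ever completing the handshake; nmap's -sS option does this for you, but the idea can be sketched in a few lines of Python with scapy (the target address and ports here are placeholders, not my actual hosts):

from scapy.all import IP, TCP, sr1

target = "192.168.178.50"   # placeholder address
for port in (22, 80, 443):
    # Send a bare SYN; a SYN/ACK reply (flags 0x12) means the port is open
    reply = sr1(IP(dst=target)/TCP(dport=port, flags="S"), timeout=1, verbose=0)
    if reply is not None and reply.haslayer(TCP) and reply[TCP].flags == 0x12:
        print("%d/tcp open" % port)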

It appears that someone in a Load Testing group is trying to lasso me out of research and "make" me join this load testing team, which will make this an extracurricular effort if they succeed in doing this.

QoS with OpenBaton and OpenStack - Research Findings

Earlier on, I tried to get the Network Slicing module of OpenBaton working.

Network Slicing is, essentially, QoS. There are some extremely deep papers written on Network Slicing from a conceptual perspective (I have read the ones from the Fraunhofer FOKUS research institute in Berlin). For brevity I won't go into that here (maybe I will follow up when I have more time).

Initially, I had to change a lot of OpenBaton's code to get it to even compile and run. But it didn't work. After reading some papers, I decided that maybe the reason it wasn't working was that I was using LinuxBridge drivers and not OpenVSwitch (my theory was that OVS flows might be necessary for calculating QoS metrics on the fly).

Having gotten OpenVSwitch to work with my OpenStack, I once again attempted to get OpenBaton's Network Slicing to work. Once again I had to change the code (it did not support SSL), but I was able to get it working (or should I say running) off the develop git branch. I tested the QoS with a "Bronze" bandwidth_limit policy on one of my network elements (a VNFM), and it did not work: I set up two iPerf VMs and blew right past the bandwidth limit.

This led me to go back to OpenStack and examine QoS there, an "inside out" troubleshooting approach.

My OpenStack (Newton release - updated) supports DSCP and bandwidth limit policies. It does not support minimum_bandwidth, which the Open Baton NSE (Network Slicing Engine) examples apply. So with that confirmed, I went ahead and used Neutron to apply a 3000 kbps (300 kbps burst) bandwidth limit policy rule not only on the tenant network of a running VM (192.168.178.0/24), but also on the specific port that the VM was running on (192.168.178.12). I then re-tested the QoS with iPerf, and lo and behold, I saw the traffic being throttled. I will mention that the throttling seemed to work more smoothly with TCP traffic on iPerf than with UDP traffic; iPerf drives UDP at a fixed offered rate rather than ramping up the way TCP's windowing does, so the two don't behave the same way under a limit.
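For anyone curious what that looks like programmatically, here is a minimal sketch using the openstacksdk library - method names as I understand them, with the cloud name, policy name, and port lookup all placeholders rather than my exact setup:

import openstack

conn = openstack.connect(cloud='mycloud')   # credentials come from clouds.yaml

# Create a QoS policy and attach a bandwidth limit rule (3000 kbps, 300 kbps burst)
policy = conn.network.create_qos_policy(name='bronze-bw-limit')
conn.network.create_qos_bandwidth_limit_rule(policy, max_kbps=3000, max_burst_kbps=300)

# Apply the policy to the specific port the VM is using
port = conn.network.find_port('vm-port-192-168-178-12')   # placeholder port name/ID
conn.network.update_port(port, qos_policy_id=policy.id)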

With this, I went back and re-tested the OpenBaton Network Slicing Engine, and I realized that the OpenBaton NSE did not seem to be communicating with the OpenStack API to apply the QoS rule to the port. I mentioned this on Gitter to the guys at OpenBaton.

I doubt right now that I will have time to go in and debug the source code to figure this out. They have not responded to my inquiry. There seems to be only one guy over in Europe who discusses QoS on that forum.

I would like to go back to the OpenStack and re-test the DSCP aspect of QoS. OpenBaton does not support DSCP so this would be an isolated exercise.

I will summarize by saying that there are NUMEROUS ways to "skin the cat" (why does this proverb exist?) with respect to QoS. A guy at work is using FirewallD (iptables) to apply rate limiting via iptables direct rules as a means of governing traffic. OpenVSwitch also has the ability to do QoS. So does OpenStack (which may use OpenVSwitch under the hood if you are using the OVS drivers). With all of these points in a network that MIGHT be applying QoS, it makes me wonder how anything can actually work at the end of the day.

Friday, March 30, 2018

OpenVSwitch on OpenStack: Round II


Originally, I had attempted to get OpenVSwitch working on OpenStack and had to back that effort out. This was because I had some proof of concepts to deliver and didn't have the time to spend learning and debugging OpenVSwitch.

I circled back on OpenVSwitch and reconfigured my OpenStack network to use OpenVSwitch. This entailed reconfiguration of Neutron on both the OpenStack Controller / Network node (my system has these combined into a single VM) and the respective OpenStack Compute nodes.

The main issue I was having (and still have) is that when the Controller node reboots, it pulls an address on the physical NIC eth0. With OpenVSwitch, when a physical NIC is attached as a port to a bridge, the IP address needs to be set on the bridge, and cannot be on the physical NIC.

So what I am having to do right now, is flush the eth0 interface with:
ip a flush dev eth0

This removes all IP settings on eth0. Then I have to manually add the IP information onto the bridge (br-provider) using iproute2, e.g. ip a add <address>/<prefix> dev br-provider (with whatever address eth0 had been holding).

Now this alone does not get things working. There are still a couple more things you have to do.

First, you have to explicitly set the link up on the openvswitch br-provider bridge in order to get packets to flow properly:
# ip l set br-provider up

Then, restarting the (properly configured) neutron-openvswitch-agent service should result in all of the bridge tap interfaces disappearing from the general network namespace of the host and reappearing under OpenVSwitch, so that they show up in the output of ovs-vsctl show instead of in "ip a" on the host.

But it appears those were the main culprits, and everything seems to be working fine now with OpenVSwitch.

Thursday, March 8, 2018

Jumbo Frames on Service Orchestration Systems

A friend of mine suggested configuring Jumbo Frames in order to improve the inter-VM performance on the service orchestration system (OpenStack and Open Baton) I had put together as a research project.

I have a lot of bridges in this architecture.

For example, on each KVM host, the adaptors are attached to bridges, which is actually the only way you can get VMs to use a "default" network (i.e. each VM on the same network the host interface is on, without a separate private network or NAT).

On my box, I have a bridge called br0 with a physical NIC (eno1) connected to it, and libvirtd attaches each VM's virtual interface - vnet0 and so on - to that bridge so the VMs can use it.

Using the command "ip l show br0", I noticed that the MTU on the bridge(s) had a setting of 1500.

I wanted to make the bridge 9014 MTU. So naturally I tried to use the command:
"ip link set dev br0 mtu 9014"

The command seemed to work fine (exit code 0). But the MTU remained at 1500 when I verified it with "ip l show br0". So it didn't really work.

I realized that in order for the bridge to use the 9014 MTU, all of the underlying interfaces connected to that bridge must be set to 9014 MTU. So first we set the MTU on the bridge members individually:
- "ip link set dev eno1 mtu 9014"
- "ip link set dev vnet0 mtu 9014"

Now, if we examine the mtu of the bridge:
"ip link show br0" - the mtu is now 9014!

NOTE: If it isn't (maybe it still says 1500), you can type: "ip link set dev br0 mtu 9014" and it should adjust up to 9014 if all of its member interfaces on the bridge are up to that number.

And, in fact, if all underlying NICs attached to the bridge are set to 9014, you can set the MTU of the bridge to 9014 or anything below that, e.g. "ip link set dev br0 mtu 8088". Basically, it is a lowest-common-denominator approach: the MTU on a bridge is limited by its weakest link, i.e. the lowest-MTU adaptor connected to the bridge.

Now, if you bounce the interface - the physical NIC on the bridge - it reverts back to 1500. Likewise, the virtual machine network, if you stop it and restart it in libvirtd, will also revert back to 1500. Making the higher MTU persist on NICs is OS-specific (i.e. scripts, interface config files, dependent on distro). And setting it persistently in libvirtd typically requires a directive in the network XML for the specific network you are trying to increase the MTU on (newer libvirt releases support an <mtu size="9014"/> element for this).

Now - after all of this, here is a wiser suggestion: make sure your ROUTER supports Jumbo Frames! Packets going out to the internet are subject to fragmentation because the internet generally does not support jumbo frames from point A to point Z and all points in between. But you may still be using a router for internal networking. The Ubiquiti EdgeRouter I use, it turns out, has a hard MTU limit of 2018 bytes, so I cannot use it for Jumbo Frames. The alternative would be to set up a Linux box that can do Jumbo Frames and, essentially, build my own router.

But - for KVM virtual machines that need to communicate with OpenStack-managed virtual machines on the same hosts, the router may not come into play; all the router is probably used for is DHCP. So you can probably just set Jumbo Frames up on the respective KVM hosts and in libvirtd, and get some performance benefit.


Thursday, February 8, 2018

Object Oriented Python - My Initial Opinion and Thoughts

The last couple of weeks, I have not been involved so much in virtualized networking as I have been writing a REST API Client in Python.

I have a pretty hardcore development background, ranging from various assembler languages to C and C++, and later on, Java (including Swing GUI, EJBs, and Application Servers like BEA WebLogic and IBM Websphere).

Then - just like that - I stopped developing. For a number of reasons.

Actually, in getting out of the game, I found myself getting into Systems Integration, scope work (writing contracts and proposals), and so forth - things that never really took me all that far from the original programming.

In getting back into research, the ability to use a vi editor, type 100wpm, and code quickly lends itself very well to doing research prototypes and stuff.

So with Orchestration and NFV, it became apparent I needed to write a REST client so that I could "do things" (i.e. provision, deprovision, MACD - an ops term for Move Add Change Delete).

I wound up choosing Python. And, thanks to quick fingers and Google, I was able to crank out some code very quickly.

Some things I did on this project include:

Object-oriented classes with TRUE practical inheritance and encapsulation - practical in the sense that it makes sense and is not done simply for the sake of exercising the language feature. For example, I created a class called Service, extended that to L2 and L3 services, and from there extended it further into use cases for an SD-WAN (virtual elements that are L2 or L3 devices like bridges, gateways and switches).
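A stripped-down sketch of the kind of hierarchy I mean (class and attribute names here are illustrative, not my actual code):

class Service:
    def __init__(self, name, tenant):
        self.name = name
        self.tenant = tenant

class L2Service(Service):
    def __init__(self, name, tenant, vlan_id):
        super().__init__(name, tenant)
        self.vlan_id = vlan_id

class L3Service(Service):
    def __init__(self, name, tenant, cidr):
        super().__init__(name, tenant)
        self.cidr = cidr

class Gateway(L3Service):
    # An SD-WAN element built on top of an L3 service
    def __init__(self, name, tenant, cidr, uplinks):
        super().__init__(name, tenant, cidr)
        self.uplinks = uplinks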

I also exercised the JSON capabilities. Python seems to be built from the ground up for REST APIs, in the sense that it is extremely easy to render objects into dictionaries that in turn become JSON that can be sent up to a REST server.
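For example (again illustrative, reusing the hypothetical Gateway class sketched above), an object's attributes can be dumped straight to JSON:

import json

gw = Gateway("edge-gw-1", "tenant-a", "10.0.0.0/24", uplinks=2)
payload = json.dumps(gw.__dict__)   # '{"name": "edge-gw-1", "tenant": "tenant-a", ...}'
# payload can then be POSTed to the REST server (e.g. with the requests library)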

The last feature I used was the argparse module. Initially, I was calling main with a bunch of raw arguments. That was not feasible, so I implemented getopt-style option handling (people in the Unix/Linux/POSIX world know this as optarg). I then learned about argparse and refactored the code to use this rather cool way of passing arguments into the main function of a Python module so it can be invoked standalone from the command line.
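A minimal sketch of the argparse pattern (the argument names are made up for illustration):

import argparse

def main():
    parser = argparse.ArgumentParser(description="REST client for provisioning services")
    parser.add_argument("--action", choices=["provision", "deprovision"], required=True)
    parser.add_argument("--service", help="name of the service to act on")
    parser.add_argument("--verbose", action="store_true")
    args = parser.parse_args()
    print(args.action, args.service, args.verbose)

if __name__ == "__main__":
    main()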

There are some things to get used to in Python, for sure. It is very, very loose. No statement terminators. No type declarations. Yet it is picky about how you indent your code (because you don't have braces to delimit blocks). I think it's okay for what it is designed for. For me the struggle was getting used to the lack of precision in it. Also, the fact that it barks at runtime rather than compile time for a lot of things is, in my opinion, quite bad for something you may shove out onto some kind of production server. I'll probably always lean on C / C++, but I do think Python has a place in this world.

Friday, January 19, 2018

OpenStack Networking - More Learnings

For a while now, OpenStack has been working just fine for the most part, except for some of the issues trying to switch back and forth between OpenVSwitch and LinuxBridge agents (earlier posts discuss some of these issues).

This week, I ran into some issues trying to use OpenStack with virtual machines that use MULTIPLE interfaces, as opposed to just a single eth0 interface.

This post will discuss some of those issues.

The first issue I ran into happened when I added a second and a third router to my existing network.

The diagram below depicts the architecture I had designed. The routers at the top and bottom were the new routers, and the orange, red and brown networks were newly added networks.
(Figure: Adding second and third routers to an OpenStack network)

Immediately after adding them, DHCP seemed to stop working on any instantiated networks. Stranger still, the IP assignment behavior became erratic. The VMs might (or might not) get an IP at all, or they might get an IP on their own or each other's network segments. But the IP was NEVER in the DHCP-defined range.

I always use a range of .11 to .199 for all /24 networks, just to make it easy to verify that the handout of IPs is correct. This paid off, because in situations where the LAN segment was correct, I still found that VMs were getting a .3 address.

So I knew DHCP was not working. I finally realized that the ".3" addresses the VMs were getting were addresses OpenStack assigns to an instance's port even when DHCP is not working - sort of like a default.

So why was DHCP not working?

The answer - after considerable debugging - turned out to be the "agent" function in OpenStack.

If you are running LinuxBridgeAgent, you should see something that looks like this when you run the "neutron agent list" command:

(Figure: Proper listing of Alive and State values in a neutron agent listing in OpenStack)

Keep in mind that there are two fields here to consider: one is the "Alive" field, and the other is the "State" field.

Somehow, because I had at one time been running OpenVSwitch on this controller, I had entries in this list for OpenVSwitch agents. And I found that while the Alive value for those was DOWN, the State field was set to UP!

Meanwhile, the LinuxBridge agents were Alive (UP), but their State field was set to DOWN!

These were reversed! No wonder nothing was working. I figured this out after tracing ping traffic through the tap interfaces, only to find that the pings were disappearing at the router.

I wound up doing two things to fix this:
a. I deleted the OpenVSwitch entries altogether using the "neutron agent delete" command.
b. I changed the state to "UP" for the LinuxBridge agents by using the "neutron agent set" command - which produces the correct listing shown above.

All of a sudden, things started working. VMs got IP addresses - the CORRECT DHCP addresses - because the DHCP requests were getting all the way through and the responses were coming back.

Now....the second issue.

The internal network was created as a LOCAL network. It turns out that DHCP does not seem to work on networks of this type in OpenStack. If you do define DHCP on the LOCAL network, OpenStack will allocate an IP for the VM, but the VM does not seem to actually get the IP, because the DHCP requests and offers don't seem to flow.

Keep in mind (see diagram above) that the local network is shared, but it is NOT directly connected to a router. This COULD be one reason why it is not getting a DHCP address. So you can just set one manually, right? WRONG!

I set up a couple of ports on this network with FIXED IPs. If the VMs are provisioned using these fixed IPs, you cannot just arbitrarily add some other IP on the same segment to that interface and expect traffic to work. It won't. In other words, the IP needs to be the one OpenStack expects it to be. Period.

The same goes for DHCP. Even though DHCP does not seem to work on the LOCAL isolated network, if you define a DHCP range on that network, OpenStack will still reserve an address for the VM - but the VM may never receive it. You can assign an address manually (using the iproute2 tools), but if you do not choose the same one that OpenStack has reserved, traffic will not work. So your VM somehow needs to KNOW what address has been reserved for it in this case - and that's not easy to do.

Therefore, the solution seems to be to use a port. That way the instantiator chooses the IP and can set it inside the VM if it does not get set through OpenStack (which I believe uses cloud-init to set that IP - though I should double-check this to be sure).
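As a sketch of what I mean - pre-create the port with a fixed IP, then boot the VM on that port so the guest and OpenStack agree on the address. This is illustrated with openstacksdk calls (method names as I understand them) and placeholder names/IDs; I was actually doing this through the CLI and my own REST client:

import openstack

conn = openstack.connect(cloud='mycloud')              # placeholder cloud name
net = conn.network.find_network('local-net')           # placeholder network name
subnet = conn.network.find_subnet('local-subnet')      # placeholder subnet name

# Create the port with the fixed IP we want the VM to end up with
port = conn.network.create_port(
    network_id=net.id,
    fixed_ips=[{'subnet_id': subnet.id, 'ip_address': '192.168.50.21'}])  # placeholder IP

# Boot the VM attached to that specific port
server = conn.compute.create_server(
    name='vm-on-local',
    image_id='IMAGE_ID', flavor_id='FLAVOR_ID',        # placeholders
    networks=[{'port': port.id}])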
