Monday, November 27, 2017
Elasticity and Autoscaling - More Testing
Here are some new things that I tested:
1. Two Scaling policies in a single descriptor.
It does no good to have "just" a Scale Out, if you don't have a corresponding "Scale In"!
You cannot have true Elasticity without the expansion and contraction - obviously - right?
So I did this, and this parsed just fine, as it should have.
I also learned you can put these scaling directives in different levels of descriptors - like the NSD. If you do this, I presume that what will happen is that it will factor in all instances across VNFMs. But I did not test this.
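For reference, the pair of policies looked roughly like the fragment below. This is reconstructed from memory, so field names and values are approximate and may not match the current OpenBaton VNFD schema exactly:

```json
"auto_scale_policy": [
  {
    "threshold": 35,
    "comparisonOperator": ">",
    "period": 30,
    "cooldown": 60,
    "mode": "REACTIVE",
    "type": "VOTED",
    "alarms": [
      { "metric": "cpu_util", "statistic": "avg", "comparisonOperator": ">", "threshold": 35, "weight": 1 }
    ],
    "actions": [
      { "type": "SCALE_OUT", "value": "3" }
    ]
  },
  {
    "threshold": 15,
    "comparisonOperator": "<",
    "period": 30,
    "cooldown": 60,
    "mode": "REACTIVE",
    "type": "VOTED",
    "alarms": [
      { "metric": "cpu_util", "statistic": "avg", "comparisonOperator": "<", "threshold": 15, "weight": 1 }
    ],
    "actions": [
      { "type": "SCALE_IN_TO", "value": "2" }
    ]
  }
]
```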
2. I tested to make sure that the scaling MAXED OUT where it should.
If the cumulative average CPU across instances was greater than 35%, then the SCALE_OUT 3 would take effect. This seemed to work. I started with 2 instances, and as I added load to the CPUs to push the cumulative average up, it would scale out 3 - and then scale out 3 more for a total of 8, no matter what load was on the CPUs. So it maxed out at 8 and stayed put. This test passed.
I was curious to see whether the engine would instantiate one VM at a time, instantiate in bunches of 3 (per the descriptor), or just instantiate up to the max (which would be errant behavior). Nova in OpenStack staggers the instantiations, so it appears to be doing one at a time up to three (i.e. 1-1-1), at which point re-processing may kick off another series of 1-1-1. So this is probably to-be-expected behavior. The devil is in the details when it comes to the Orchestrator, OpenStack, and the OpenStack Nova API in terms of whether, and to what extent, you can instantiate VMs simultaneously.
When a new VM comes up, it takes a while for it to participate in measurements. The scaling engine would actually skip the interval, due to a "measurements received less than measurements requested" exception, and would not start evaluating again until all of the expected VMs were reporting in measurements. I have to think about whether I like this or not.
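The scale-out behavior I observed can be sketched like this. The function name is mine, not OpenBaton's, and the threshold/step/max values just mirror my test descriptor:

```python
# Sketch of one evaluation interval of the scale-out decision, as I understand it.

def scale_out_decision(cpu_samples, current, step=3, maximum=8, threshold=35.0):
    """Return the new instance count after one evaluation interval.

    cpu_samples : per-instance CPU utilization measurements (percent)
    current     : number of running instances
    """
    # The engine skips the interval if not every instance reported a measurement
    # ("measurements received less than measurements requested").
    if len(cpu_samples) < current:
        return current
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > threshold:
        # Scale out by the configured step, but never past the maximum.
        return min(current + step, maximum)
    return current

print(scale_out_decision([50, 60], 2))              # 2 -> 5
print(scale_out_decision([50, 60, 70, 40, 55], 5))  # 5 -> 8 (capped at max)
print(scale_out_decision([90] * 8, 8))              # stays put at 8
```

This also captures the "skip the interval" behavior: with 2 instances but only 1 measurement, the count is left alone.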
3. Elasticity Contraction - by using SCALE_IN_TO parameter.
I set things up so that it would scale in to 2 instances - to ensure at least two instances would always be running. I would do this when cumulative average CPU was less than 15% CPU across instances.
This test actually failed. I saw the alarm get generated, and I saw the engine attempting to scale in, but some kind of decision-making policy was rejecting the scale-in "because conditions are not met".
We will need to go into the code and debug this, and see what is going on.
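For the record, the behavior I expected from SCALE_IN_TO looks like this sketch (again, the function name and structure are my own, not OpenBaton internals; the thresholds are from my test setup):

```python
# Sketch of the expected SCALE_IN_TO behavior: contract directly to a floor
# of instances when average CPU falls below the threshold.

def scale_in_decision(cpu_samples, current, floor=2, threshold=15.0):
    """Return the new instance count after one evaluation interval."""
    if len(cpu_samples) < current:
        return current          # incomplete measurement window: skip the interval
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg < threshold and current > floor:
        return floor            # SCALE_IN_TO: go straight to the floor, not step-wise
    return current

print(scale_in_decision([5, 8, 10, 7, 6], 5))   # 5 -> 2
print(scale_in_decision([5, 9], 2))             # already at floor: stays 2
```

This is what should have happened in my test; instead the engine's decision policy vetoed the contraction.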
Thursday, November 23, 2017
Ubiquiti Edge Router ER-X - Impressive
I just love this router.
- The icon on the top that shows colorized ethernet plugs (colorization related to status). Cool.
- It was sooooo easy to configure it.
- It has a shell that takes you into Ubuntu Linux (uses Ubuntu = impressive). It does not even seem to be using BusyBox or some slimmed-down quasi-Linux; it looks like a custom compile of Ubuntu.
- One cool feature is that it can support link aggregation. I am not using that feature, but it's cool.
- Has excellent support for IPv6.
It can also automatically switch the ports on the router so that you don't need a L3 switch to go with it.
So for example, you can set up:
- eth0 as the management port
- eth1 as the WAN port, and
- eth2, eth3 and eth4 are switched, so that anything plugged into these is on the same network (you define the network).
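In the EdgeOS CLI, that setup looks something like this sketch (from memory; the addresses are just my lab values, and I've left out the eth0 management config):

```shell
configure
set interfaces ethernet eth1 address dhcp               # WAN port
set interfaces switch switch0 address 192.168.2.1/24    # the switched LAN network
set interfaces switch switch0 switch-port interface eth2
set interfaces switch switch0 switch-port interface eth3
set interfaces switch switch0 switch-port interface eth4
commit ; save ; exit
```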
I have it set up with a hairpin NAT, and the firewall rules configured on it are rather trivial at the moment but designed to protect ingress through iptables rules.
This is truly a "power user" routing device, and it can fit into the palm of your hand; it is no bigger than a Raspberry Pi device.
This router also comes with some interesting Wizards that allow you to configure the router for certain use cases, like the WAN+LAN wizard.
So I have not done anything in-depth, but I spent an hour messing around with this device and I'm pretty impressed with it.
Security - Antivirus specifically
I have started to get smarter about Security. I went to RSA in 2016, and I bought a book on exploits. It is a very, very hardcore book that requires Assembler and C programming, and it teaches you how hackers actually exploit code. I am about halfway through it, and I think once I finish it the knowledge will be awesome. I got pulled off of it due to the longer hours at work playing with virtualization and orchestration.
So - I am not current on malware. I spent some time looking around this morning, reading anti-virus reviews.
It does not appear that there is much out there in the way of OpenSource AV. ClamAV looks like the only thing actively maintained. This is a bit of a surprise.
There are some free packages out there, but I am sure they probably nag you incessantly to buy or upgrade. The big question is this: Can you really trust FREE?
I also see some interesting Cloud-based packages out there that work from outside your network. This would have been an absolute no-no for me in earlier times, but considering the danger of today's malware, maybe this kind of approach is worth re-examining, if good results are coming from it. One such company is Crystal Security.
I see some products like VoodooShield. And some new ones I had not previously encountered like GlarySoft Malware Hunter.
Of course, Kaspersky, ESET - these guys always get good reviews.
It is probably good to stay up to speed on this stuff. To take an hour here and there and stay current.
OpenBaton Fault Management and AutoScaling
Over the last month or so I have been testing some of the more advanced features of OpenBaton.
- Fault Management
- Auto Scaling
- Network Slicing
These have taken time to test. I was informed by the development team that the "release" code was not suitable for the kind of rigorous testing I planned to do, and that I needed to use the development branch.
This led me down a road of having to familiarize myself with the "git" software management utility. I know git has been around for a while and has silently crept in as almost a de-facto standard for code repository and source management. In many shops it has replaced the classic stalwarts - ClearCase, CVS, SVN - and other software that has been in use for decades. Even in my own company's shop, they brought in a "git guy", and of course, since that is the recipe he cooks, we now use that. But up to this point, I had not really had a need to do more than "git clone". Now I am having to work with different branches, and as I said, this took some time. Git is fairly simple if you do simple things, but it is far more complex "under the hood" than it looks, especially if you are doing non-simple things with it. I could do a post just on git alone. I'm not an authority on it, but I have picked up a few things - including opinions - on it (some favorable, some not).
The first thing I tested was Fault Management (FM). Fault Management is essentially the ability to identify faults and trigger actions around those faults. The actions can be an attempt to heal - or it can be an attempt to Scale, or it can be an attempt to fail-over based on a configured redundancy mechanism. The ETSI standard descriptors allow you to specify all of this. The interesting thing about FM in a virtualized context is that it gets into the "philosophy" of whether it makes sense to spend effort healing something, as opposed to just killing it and re-instantiating it. This is called the "Cattle vs Pets" argument. I think there ARE cases where you need to fix and heal VMs (Pets), but in most cases I think VMs can be treated as Cattle. When VMs are treated as Pets, the nodes are generally going to be important (i.e. they manage something important, as in a control plane or signaling plane element), and cannot just be taken down and re-instantiated due to load or function.
I then tested AutoScaling - or, using a better term, Elasticity. This allows a virtualized network to expand and contract based on real-time utilization. This feature took me a while to get working due to having to compile different modules of code from different moving-target git branches over and over until I could finally get some code that wanted to work, with slight modifications and patches. When I finally got this working, it was super cool to see this work. I could do a separate post on this feature alone. After I got it working I wound up helping some other guys in a German network integration company get the feature working.
Network Slicing has been more difficult to get to work. That is probably a separate post altogether, related to and intertwined with topics such as QoS.
Thursday, September 21, 2017
OpenStack - Two Compute Nodes
You basically just install openstack-nova-compute, and your Neutron network plugin (linuxbridge-agent in my case).
The only question I had was whether two Compute Nodes can belong to the same OpenStack Region.
Thank goodness I found a ppt where a guy made it clear that one can run a slew of nodes in a single region (he had multiple nodes in both Region 1 and Region 2).
At one point, I decided I would install the OpenVSwitch on this second Compute Node. I'll probably write a second post on that. It did not appear to me that you could mix and match OpenVSwitch and LinuxBridge on different Compute Nodes (at least not easily?). This is because the Neutron L3 Agent config file has a driver field and only seems to accept one mode or the other. I could be wrong about this; more testing necessary. But I backed OpenVSwitch out and enabled LinuxBridge-Agent. Things seem to be working very well with the Linux Bridge Agent.
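The driver field I am referring to lives in the L3 agent's config file. A sketch of the relevant lines (stock file path; the exact value format varies by release, so check the install guide for yours):

```ini
# /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge   ; or: openvswitch - one or the other, not both
```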
The Linux Bridge Agent creates Layer 2 tap interfaces and puts these interfaces on a bridge. If you are using the VXLAN protocol, it will manage those interfaces as well.
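For the record, the relevant bits of my Linux Bridge agent config looked roughly like this (the interface name and IP are examples from my lab, not prescriptive):

```ini
; /etc/neutron/plugins/ml2/linuxbridge_agent.ini (abridged)
[linux_bridge]
physical_interface_mappings = provider:eth1

[vxlan]
enable_vxlan = true
local_ip = 10.0.0.31      ; this node's underlay IP for VXLAN endpoints
l2_population = true

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```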
OpenVSwitch
I thought I would use OpenVSwitch on it.
This took me down a deep rabbit hole, as OpenVSwitch is a complex little bugger.
I installed the OpenVSwitch package, then the driver agent (on Compute Node). I wanted it to run in a Layer 2 mode because I had LinuxBridge Agent running on the first Compute Node and the Controller.
After setting OpenVSwitch up on the 2nd Compute Node, I realized my external NIC was a bridge, so I tried to use veth pairs to make it work. Nope. As it turns out, the Controller (and L3 agent) uses a driver for OpenVSwitch OR LinuxBridge (not both). It appears that it is all or nothing; you cannot mix and match between LinuxBridgeAgent and OpenVSwitchAgent.
I backed it out and used / installed LinuxBridgeAgent.
OpenStack Functional Demo
I put the Controller in a VM and used the host as the Nova Compute Node.
I had all sorts of issues initially. Keystone and Glance were fairly straightforward. I did not have DNS, so I used IP addresses for most URLs, which is a double-edged sword. The complexity in OpenStack is with Nova (virtualization management) and Neutron (networking).
I did not create a "Network Node". I used only a Controller Node and a Compute Node. What one would normally put on a Network Node runs on the Controller Node (L3 Agent, DHCP Agent, Metadata Agent).
One issue was that libguestfs was not working. I finally removed it from the box, only to realize that there was a yum dependency with the openstack-nova-compute package. So I installed Nova Compute using an rpm with the --nodeps flag.
Getting the LinuxBridge agent to work took some fiddling. One issue is that it was not clear whether I needed to run LinuxBridgeAgent on the Controller. The instructions make it seem that it is only for the Compute Node. Well, not so. Neutron creates a tap for every DHCP agent and every port - ON THE CONTROLLER, if that is where you run those services. So you install it in both places.
The Neutron configuration file is about 10,000 lines long, leaving many opportunities for misconfiguration (by omission, incorrect assumption/interpretation, or just plain typos). It took a while to sleuth out how OpenStack uses Nova, Neutron, the L3 agent and the LinuxBridge agent to create bridges, vnets and taps (ports). But - confusing again - is whether you need to configure all parameters exactly the same on both boxes, or if some are ignored on one node or the other. I was not impressed with these old-style ini and config files. Nightmares of complexity.
Another major challenge I had was the external network. I failed to realize (until I did network debugging) that packets that leave the confines of OpenStack need to find their way back into OpenStack. This means that VMs and hosts sitting outside OpenStack need specific routes to the internal OpenStack networks, via the external gateway port on the OpenStack router.
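In practice that meant adding a static route on the outside hosts - something like this sketch (the subnet and gateway IP are placeholders for whatever your tenant network and router gateway port actually are):

```
# On the VM/host outside OpenStack:
#   <tenant-subnet>      e.g. the internal OpenStack network, say 10.10.10.0/24
#   <router-gateway-ip>  the OpenStack router's port on the external/provider network
ip route add <tenant-subnet> via <router-gateway-ip>
```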
Another confusing thing is that OpenStack runs namespaces (separate and distinct network stacks) to avoid IP overlaps (by default - the way Neutron is configured). Navigating namespaces was a new topic for me, and it makes connectivity issues harder to debug.
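The commands that made namespace debugging click for me are below. The qrouter/qdhcp ids are per-deployment UUIDs, so these are placeholders:

```
ip netns list                                # shows qrouter-<uuid>, qdhcp-<uuid>, ...
ip netns exec qrouter-<uuid> ip addr         # look at the router's own interfaces
ip netns exec qrouter-<uuid> ping <vm-ip>    # test connectivity from inside that stack
```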
Finally, when I worked all of this out, I realized that the deployment of VMs was taking up almost 100% CPU. This led me down a rabbit hole to discover that I needed to use the kvm virt_type, and a CPU mode of host-passthrough to calm the box down.
Once I got this done, I could deploy efficiently.
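For anyone hitting the same CPU burn, these were the two nova.conf settings involved (stock file path; everything else in the file omitted):

```ini
; /etc/nova/nova.conf
[libvirt]
virt_type = kvm
cpu_mode = host-passthrough
```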
Another thing (maybe this should be its own post) is the notion of creating ports that you can use on deployment (instead of saying "deploy to this network", you can say "use this port on this network" - the port has its own IP assignment). Because you can attach multiple subnets to a single network, I figured I could create ports for nodes that I wanted to reside on a particular subnet. And I COULD! But the ETSI MANO standards have not caught up with this kind of cardinality / association (per my testing, anyway), so it only works if you use the OpenStack GUI to deploy. Therefore, having a "one subnet to one network" rule is simpler and will work better for most situations, I think.
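On the CLI, the port-based deployment looks something like this sketch (the network, subnet, image, flavor and port names are all made up for illustration):

```
# Create a port with a fixed IP on a specific subnet of the network...
openstack port create --network shared-net \
    --fixed-ip subnet=subnet-b,ip-address=10.10.20.50 vm1-port

# ...then deploy the VM onto that port instead of onto the network
openstack server create --flavor m1.small --image cirros \
    --port vm1-port vm1
```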
In the end, I was able to do everything smoothly with OpenStack: save Images, create Flavors and Networks, and Deploy. But it all has to be configured "just so".