Thursday, September 21, 2017

OpenStack Functional Demo

Originally, I had at my disposal one CentOS 7 server (32 GB RAM) that was set up to run Ansible and libvirtd, so I installed OpenStack on that single box.

I put the Controller in a VM and used the host as the Nova Compute Node.

I had all sorts of issues initially. Keystone and Glance were fairly straightforward. I did not have DNS, so I used IP addresses for most URLs, which is a double-edged sword. The complexity in OpenStack is with Nova (virtualization management) and Neutron (networking).
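
To give a flavor of what "IP addresses for most URLs" looks like, here is roughly what ends up in each service's config when there is no DNS (the addresses here are made up, and every service config carries its own copy of these settings):

    [keystone_authtoken]
    auth_uri = http://192.168.1.10:5000
    auth_url = http://192.168.1.10:35357
    memcached_servers = 192.168.1.10:11211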

I did not create a "Network Node". I used only a Controller Node and a Compute Node. What one would normally put on a Network Node runs on the Controller Node instead (L3 agent, DHCP agent, Metadata agent).
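
A quick sanity check, assuming the openstack CLI is set up, is to list the Neutron agents and confirm where each one landed:

    $ openstack network agent list
    # expect the L3, DHCP and Metadata agents on the Controller,
    # and a Linux bridge agent on BOTH the Controller and the Compute Node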

One issue was that libguestfs was not working. I finally removed it from the box, only to realize that there was a yum dependency with the openstack-nova-compute package. So I reinstalled Nova Compute with rpm using the --nodeps flag.
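
Roughly the sequence I mean (the exact package version will differ):

    # removing libguestfs also rips out openstack-nova-compute via yum deps,
    # so pull the package down and reinstall it, skipping the dependency check
    yumdownloader openstack-nova-compute
    rpm -ivh --nodeps openstack-nova-compute-*.rpm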

Getting the Linux bridge agent to work took some fiddling. One issue is that it was not clear whether I needed to run the Linux bridge agent on the Controller. The instructions make it seem that it is only for the Compute Node. Well, not so. Neutron creates a tap for every DHCP agent and every port, ON THE CONTROLLER if that is where you run those services. So you install it in both places.
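
For reference, the guts of linuxbridge_agent.ini end up looking something like this on both nodes (the interface name and IP are examples; local_ip is the node's own overlay address, so it differs per node):

    [linux_bridge]
    physical_interface_mappings = provider:eth0

    [vxlan]
    enable_vxlan = true
    local_ip = 192.168.1.10
    l2_population = true

    [securitygroup]
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver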

The Neutron configuration file...is about 10,000 lines long, leaving many opportunities for misconfiguration (by omission, incorrect assumption/interpretation, or just plain typos). It took a while to sleuth out how OpenStack uses Nova, Neutron, the L3 agent and the Linux bridge agent to create bridges, vnets and taps (ports). But - confusing again - is whether you need to configure all parameters exactly the same on both boxes, or if some are ignored on one node or the other. I was not impressed with these old-style ini and config files. Nightmares of complexity.
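
Buried in all of that, the handful of ml2_conf.ini settings that actually drive how Neutron builds those bridges and taps looks something like this (the values are from my setup; yours may differ):

    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = linuxbridge,l2population

    [ml2_type_vxlan]
    vni_ranges = 1:1000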

Another major challenge I had was the external network. I failed to realize (until I did network debugging) that packets that leave the confines of OpenStack need to find their way back into OpenStack. This means that machines sitting outside OpenStack need specific routes to the internal OpenStack networks via the external gateway port on the OpenStack router.
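
Concretely, on an outside host that needs to reach the VMs, you add a route like this (addresses are made up: 172.16.0.0/24 is the internal OpenStack network, and 192.168.1.100 is the IP that Neutron assigned to the router's gateway port on the external network):

    ip route add 172.16.0.0/24 via 192.168.1.100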

Another confusing thing is that OpenStack uses network namespaces (separate and distinct network stacks) to avoid overlapping IPs (by default - the way Neutron is configured). Knowing how to navigate namespaces was a new topic for me, and it makes connectivity issues harder to debug.
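
The basic moves, for anyone else hitting this for the first time (the uuid is whatever "ip netns list" shows for your router):

    # Neutron creates qrouter-* namespaces for routers and qdhcp-* per network
    ip netns list
    # run ordinary tools inside a namespace to debug from the router's viewpoint
    ip netns exec qrouter-<uuid> ip addr
    ip netns exec qrouter-<uuid> ping 172.16.0.5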

Finally, when I had worked all of this out, I realized that deploying VMs was pushing the CPU to almost 100%. This led me down a rabbit hole to discover that I needed to use the kvm virt_type and a CPU mode of host-passthrough to calm the box down.
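
In nova.conf terms, on the Compute Node, that amounts to:

    [libvirt]
    virt_type = kvm
    cpu_mode = host-passthrough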

Once I got this done, I could deploy efficiently.

Another thing (maybe this should be its own post) is the notion of creating ports that you can use at deployment time: instead of saying "deploy to this network", you can say "use this port on this network", where the port carries its own IP assignment (sketched below). Because you can attach multiple subnets to a single network, I figured I could create ports for the nodes that I wanted to reside on a particular subnet. And I COULD! But the ETSI MANO standards have not caught up with this kind of cardinality / association (per my testing, anyway), so it only works if you use the OpenStack GUI to deploy. Therefore, a "one subnet to one network" rule is simpler and will work better for most situations, I think.
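
A sketch of that flow with the CLI (the names and addresses are made up):

    # create a port with a fixed IP on the subnet you care about...
    openstack port create --network my-net \
        --fixed-ip subnet=my-subnet-b,ip-address=172.16.1.20 my-port
    # ...then boot the server against the port instead of the network
    openstack server create --flavor m1.small --image centos7 \
        --nic port-id=my-port my-vm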

In the end, I was able to do everything smoothly with OpenStack: save images, create flavors and networks, and deploy. But it all has to be configured "just so".
