I've had more time to play with Ansible, in the background. A little bit at least. Baby steps.
I use it now to deploy SD-WAN networks, which have different types of KVM-based network elements that need to be configured differently on individual virtual machines.
I enhanced it a bit to deploy virtual-machine based routers (Quagga), as I was building a number of routing scenarios on the same KVM host.
I have made some changes to make Ansible work more to my liking:
1. Every VM gets a management adaptor that connects to a default network.
2. The default network is a NAT network that has its own subnet mask and IP range.
3. I assign each VM an IP on this management network in the hosts file on the KVM host.
The Ansible launch-vm script uses getent to figure out which IP address a VM has from its name, which is defined in the inventory file.
Because the adaptor type I like to use is Realtek, I had to change the guestfish step in the launch-vm script to use the adaptor name ens3. I also had to change it to use an external DNS server, because the lack of one caused serious issues with the playbooks not running correctly, especially when they needed to locate a host by name (e.g. to do a yum install).
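Just to illustrate what that getent step amounts to: the playbook resolves the VM's inventory name to the management IP the KVM host knows about, which is essentially a getaddrinfo() lookup. Here is a rough C sketch of that idea (the name "vm-router-1" is a made-up placeholder, not one of my actual hosts):

/* Minimal sketch: resolve a VM name to its management IP, the way the
 * playbook's getent step does.  "vm-router-1" is a placeholder name;
 * on the KVM host it would be one of the entries defined in /etc/hosts. */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(int argc, char *argv[])
{
    const char *name = (argc > 1) ? argv[1] : "vm-router-1";
    struct addrinfo hints, *res, *p;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;        /* the management network is IPv4 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(name, NULL, &hints, &res) != 0) {
        fprintf(stderr, "could not resolve %s\n", name);
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char ip[INET_ADDRSTRLEN];
        struct sockaddr_in *sin = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &sin->sin_addr, ip, sizeof(ip));
        printf("%s -> %s\n", name, ip);
    }
    freeaddrinfo(res);
    return 0;
}

If the name is in the KVM host's hosts file, this prints the management IP the playbook will use; if it isn't, you get the same kind of "can't find the host" failure the playbooks were hitting before I pointed them at a working DNS server.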
This Ansible setup has turned out to be very convenient. I can deploy VMs lickety-split now, freeing up the time I would normally spend tweaking and configuring individual VM instances.
I'm thinking of writing my own Ansible module for Quagga setup and configuration. That might be a project I get into.
Before I do that, I may enhance the playbooks a bit, adding some "when" clauses and things like that. So far everything I have done has been pretty vanilla.
Wednesday, July 19, 2017
Wednesday, July 5, 2017
Quagga Routing - OSPF and BGP
For the last month or two, I have been "learning by doing" routing, using Quagga.
Quagga uses an abstraction layer called Zebra that sits (architecturally not literally) on top of the various routing protocols that it supports (OSPF, BGP, RIP, et al).
I designed two geographically-separated clusters of OSPF routers - area 0.0.0.1 - and then joined them with a "backbone" of two OSPF routers in area 0.0.0.0. I hosted these on virtual machines running on a KVM host.
From there, I modified the architecture to peer over BGP with a colleague's KVM virtual machine host that was running its own BGP routers.
Some things we learned from this exercise:
1. We learned how to configure the routers using vtysh.
2. We had to make firewall accommodations for OSPF (which uses multicast) and BGP.
We also used an in-house code project that tunnels Quagga traffic. I have not examined the source for it, but it worked as well, and required specific firewall changes to allow the tunX interfaces.
Percona XtraDB Cluster High Availability Training
Tuesday 5/23 through 5/26 I attended Percona Cluster High Availability training.
The instructor was very knowledgeable and experienced.
One of the topics we covered was the "pluggable," or module-based, architecture that allows different storage engines to be used with MySQL. He mentioned InnoDB, and how Percona XtraDB is based on the InnoDB engine.
We also covered tools and utilities, not only from MySQL but also from third parties, including Percona (the Percona Toolkit).
We spent some time on Administration, such as Backup and Recovery.
We then moved on to Galera replication, and used VirtualBox images to create and manage three-node clusters.
I won't reproduce a full week of rather complex training topics and details in this blog, but it was good training. I will need to go back and revisit / review this information so that it doesn't go stale on me.
Tuesday, May 16, 2017
Learning Ansible: KVM Deployment Use Case
"Pioneers get shot in the back", is what Stan Sigmund (do I have that spelled right?), the CEO of at&t used to say. Well, I don't know this firsthand. This is what some at&t employees told me once.
But it's true. It's always a lot safer to go in after the initial wave of invaders has taken all of the risk, and I think that's what Stan would have been referring to with that statement. It's about risk, which is a blogworthy topic in and of itself.
How does this relate to Ansible?
We have an engineer here who likes to run out in front of the curve. He did all of this research on Puppet, Chef, and Ansible, and chose Ansible. There are any number of blogs that tout the benefits of Ansible over these others, but in order to fully grasp those benefits, you need to study them all.
For me, I need to learn by doing, and then I can start to understand the benefits of one vs another.
So, I have started by taking a number of playbooks and trying to get them working on my own system. I built a KVM host environment on a 32 GB server, and it made sense to see how far I could automate the generation and spin-up of these virtual machines.
There are a number of new things I have come across as I have been doing this:
1. Guestfish - Guestfish is a shell and command-line tool for examining and modifying virtual machine filesystems.
http://libguestfs.org/guestfish.1.html
2. getent - a small IP / host resolver that is written in Python.
https://pypi.python.org/pypi/getent
The scripts I am using are all set up to create a virtual machine using some defaults:
- default storage pool
- default network
Certainly this is easier than creating one-offs for every VM. But if you do this, you need to go into virt-manager and reprovision the networking and other things individually, which kind of defeats the purpose of using Ansible in the first place (you could just as well use a bash deploy script to generate a KVM guest).
So one of the things I did have to do was to hack the scripts to work with the storage pool I was using, which placed all of the images in MY directory, as opposed to where the default images were being placed.
Somehow, I need to enhance these scripts to put each VM on its own network subnet. This can all be done with virsh commands and variables, but I have not done that yet.
One problem is that you need a MAC address to assign to your adaptors if you're going to create them dynamically. I looked around and came across this link that can serve as a starting point:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Virtualization/sect-Virtualization-Tips_and_tricks-Generating_a_new_unique_MAC_address.html
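The gist of that technique is simple enough to sketch: generate a random MAC in the range KVM guests conventionally use (libvirt's 52:54:00 prefix) and hand it to the VM definition. Here is a rough C version of the idea - my own toy sketch, not the script from the Red Hat article - and it doesn't check for collisions with MACs already defined on the host:

/* Rough sketch of generating a MAC for a dynamically created KVM guest NIC.
 * 52:54:00 is the prefix QEMU/KVM guests conventionally use; the last
 * three octets are random. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    unsigned char mac[6] = { 0x52, 0x54, 0x00, 0, 0, 0 };
    int i;

    srand((unsigned)time(NULL));
    for (i = 3; i < 6; i++)
        mac[i] = (unsigned char)(rand() % 256);

    printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
           mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
    return 0;
}

Feed the output into the virsh or virt-install network options (or a playbook variable) and each dynamically created adaptor gets its own address.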
I have a handle on Ansible now: what a playbook is, the inventory file, tasks, roles, handlers, and the like. I understand all of this, but can I swiftly and efficiently code it? No, not yet. I'm still reverse-engineering and hacking from existing material. My background as an integrator has honed those skills pretty well.
Ansible is only as good as the underlying inputs fed into the process of generating outputs. It can be simple, and it can be complicated. My impression is that it makes sense to crank out something basic initially, then enhance and hone it over time. Trying to do everything up front in one shot would be a huge time sink.
I'll probably write more about Ansible later. This is all for now.
Thursday, April 20, 2017
OpenDNP3: What the *&^% are all these gcda files?
I downloaded some open source software and made some customizations to it. The framework was in C/C++, and architected at a level more complex and sophisticated than many code bases I have seen.
I noticed that when I would run the application as a power user (i.e. root), it would write out a bunch of ".gcda" files. And, one time when I ran it as a non-power user, it had trouble writing those files out, and the application produced errors (it may not have even run, I can't remember).
Well, tonight, I finally looked into the topic of what a gcda file actually is.
It comes from a code coverage, or code profiling, tool: gcov.
You compile with a flag called --coverage (using gcc on Linux here), and the gcov framework then generates these .gcda files. They are binary statistics files that are written out on proper exit of the application and updated over time as the program is run again.
http://bobah.net/d4d/tools/code-coverage-with-gcov
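To make it concrete, here is a toy example of my own (not from the OpenDNP3 build) showing where a .gcda file comes from. The classic sequence is: compile and link with --coverage, run the program, then point gcov at the source file:

/* cov_demo.c - toy program to show where .gcda files come from.
 *
 * Build:  gcc --coverage -o cov_demo cov_demo.c
 * Run:    ./cov_demo          (writes the .gcda statistics file on clean exit)
 * Report: gcov cov_demo.c     (produces a line-by-line .gcov report)
 */
#include <stdio.h>

static int classify(int n)
{
    if (n % 2 == 0)
        return 0;   /* even branch - covered by the call below */
    return 1;       /* odd branch - stays uncovered unless called with an odd n */
}

int main(void)
{
    printf("4 is %s\n", classify(4) ? "odd" : "even");
    return 0;
}

That also explains the permissions problem I saw: the instrumented process has to be able to write the .gcda file into the build directory when it exits, so running the application as a user who can't write there makes the coverage machinery (and, in my case, the application) unhappy.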
Tuesday, March 28, 2017
Deploying Etherape on a non-Development system
Go to Etherape web site. Read package dependencies.
1. Downloaded gtk+-2.24.31 sources.
1a. Ran "make configure"; it flagged missing dependencies:
- Installed pango-devel
- Installed atk-devel
- Installed gdk-pixbuf2-devel
1b. Re-ran "make configure" and passed the dependency checks, then ran "make" and "make install".
2. Downloaded libglade-2.6.4 sources.
2a. Ran "make configure":
- Installed libgnomeui-devel
3. Downloaded Etherape 9.1.4 sources.
3a. Ran "make configure":
- Installed libpcap-devel
- Installed gnome-doc-utils
NOTE: I got some kind of error on a documentation package, but decided it was not critical to Etherape actually working.
3b. Ran "make" and then "make install".
Thursday, March 16, 2017
Netfilter Kernel Module Programming
I have been doing some kernel module programming. This is not for kids.
Most of the examples out there are for kernels that pre-date the 3.10 kernels now in use (in other words, the examples I mainly see showing how this magic is done are written for 2.6 kernels).
But I've learned a bit from doing this. When I finally got into more advanced kernel modules, where you need to start accessing the C data structures defined in the kernel headers, things stopped compiling and I learned that those data structures have changed between kernel versions.
The ultimate goal here is to write your own firewall using Netfilter. That will take some work.
But learning the Netfilter architecture, and how a packet traverses the Netfilter hooks and tables, is very valuable, because iptables is built on Netfilter.
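To give a flavor of what one of these modules looks like, here is a stripped-down sketch of a Netfilter hook written against the 3.10 mainline prototypes. This is illustrative only, not my actual module, and the function signature is exactly the kind of thing that changes between kernel versions (the compile breakage I mentioned above):

/* Bare-bones sketch of a Netfilter hook module, against 3.10 mainline headers.
 * The hook function prototype differs on older (2.6) and newer kernels, so
 * treat this as illustrative rather than copy-paste ready. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/skbuff.h>

static struct nf_hook_ops nfho;

/* Called for every IPv4 packet arriving at PRE_ROUTING. */
static unsigned int my_hook(unsigned int hooknum,
                            struct sk_buff *skb,
                            const struct net_device *in,
                            const struct net_device *out,
                            int (*okfn)(struct sk_buff *))
{
    struct iphdr *iph = ip_hdr(skb);

    /* Example policy: silently drop ICMP, let everything else through. */
    if (iph && iph->protocol == IPPROTO_ICMP)
        return NF_DROP;

    return NF_ACCEPT;
}

static int __init fw_init(void)
{
    nfho.hook     = my_hook;
    nfho.hooknum  = NF_INET_PRE_ROUTING;
    nfho.pf       = PF_INET;
    nfho.priority = NF_IP_PRI_FIRST;

    return nf_register_hook(&nfho);   /* nf_register_net_hook() on newer kernels */
}

static void __exit fw_exit(void)
{
    nf_unregister_hook(&nfho);
}

module_init(fw_init);
module_exit(fw_exit);
MODULE_LICENSE("GPL");

On 2.6-era examples the registration and prototype look different, and on kernels newer than about 4.1 the hook receives a struct nf_hook_state instead of the in/out/okfn arguments, so any example like this has to be matched to the kernel headers you are actually building against.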
I could write a lot more on this - but I'd bore you. I've compiled a lot of information and subject matter on this.