"Pioneers get shot in the back", is what Stan Sigmund (do I have that spelled right?), the CEO of at&t used to say. Well, I don't know this firsthand. This is what some at&t employees told me once.
But it's true. It's always a lot safer to go in after the initial wave of invaders has taken all of the risk, and I think that's what Stan was referring to with that statement. It's about risk, which is a topic in and of itself - very blogworthy.
How does this relate to Ansible?
We have an engineer here who likes to run out in front of the curve. He did all of this research on Puppet, Chef, and Ansible, and chose Ansible. There are any number of blogs that tout the benefits of Ansible over these others, but in order to fully grasp those benefits, you need to study them all.
For me, I need to learn by doing, and then I can start to understand the benefits of one vs another.
So, I have started by taking a number of playbooks and trying to get them working on my own system. I built a KVM host environment on a 32 GB server, and it made sense to see what I could do in terms of automating the generation and spin-up of these virtual machines.
There are a number of new things I have come across as I have been doing this:
1. Guestfish - Guestfish is a shell and command-line tool for examining and modifying virtual machine filesystems.
http://libguestfs.org/guestfish.1.html
2. getent - the PyPI package of this name is a Python interface to the standard Unix getent databases (passwd, group, hosts, and so on), mirroring what the getent command-line tool does.
https://pypi.python.org/pypi/getent
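The Python package wraps the same name-service databases that the standard getent command queries, so the command-line tool is the quickest way to see the kind of data it returns:

```shell
# Look up an account in the passwd database
getent passwd root

# Resolve a host (consults /etc/hosts and DNS per nsswitch.conf)
getent hosts localhost
```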
The scripts I am using are all set up to create a virtual machine using some defaults:
- default storage pool
- default network
Certainly this is easier than creating one-offs for every VM. But if you do this, you need to go into virt-manager and reprovision the networking and other settings individually, which kind of defeats the purpose of using Ansible in the first place (a bash deploy script can generate a KVM guest just as well).
So one of the things I did have to do was hack the scripts to work with the storage pool I was using, which placed all of the images in my own directory, as opposed to where the default images were being placed.
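As a sketch of where this is headed, a playbook that parameterizes the pool and network might look something like this (the variable names, pool/network names, and ISO path here are all made up for illustration; the virt-install flags are the standard ones):

```yaml
- hosts: kvm_host
  vars:
    vm_name: testvm
    storage_pool: mypool      # instead of the "default" pool
    vm_network: mynet         # instead of the "default" network
  tasks:
    - name: Create the VM with virt-install
      command: >
        virt-install --name {{ vm_name }}
        --memory 2048 --vcpus 2
        --disk pool={{ storage_pool }},size=20
        --network network={{ vm_network }}
        --cdrom /var/lib/libvirt/images/CentOS-7-x86_64-Minimal.iso
        --noautoconsole
```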
Somehow, I need to enhance these scripts to put each VM on its own network subnet. This can all be done with virsh commands and variables, but I have not done that yet.
One problem is that you need a MAC address to assign to your adapters if you're going to create those dynamically. I looked around and came across this link, which can possibly serve as a starting point for doing this:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Virtualization/sect-Virtualization-Tips_and_tricks-Generating_a_new_unique_MAC_address.html
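The gist of that article is just to randomize the last three octets under a fixed prefix; QEMU/KVM guests conventionally use the locally-administered 52:54:00 prefix, so a bash one-liner does the job:

```shell
# Random MAC under the QEMU/KVM prefix 52:54:00 (bash, for $RANDOM)
printf '52:54:00:%02x:%02x:%02x\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
```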
I have a handle on Ansible now: what a Playbook is, the Inventory file, what Tasks, Roles, and Handlers are, and the like. I understand all this, but can I swiftly and efficiently code all of it? No - not yet. I'm still reverse-engineering and hacking from existing material. My background as an integrator has honed those skills pretty well.
Ansible is only as good as the underlying inputs that are fed into the process of generating outputs. It can be simple; it can be complicated. My impression is that it makes sense to crank something out initially, and then enhance and hone it over a period of time. Trying to do everything up front and in one shot would be a huge time sink.
I'll probably write more about Ansible later. This is all for now.
Tuesday, May 16, 2017
Thursday, April 20, 2017
OpenDNP3: What the *&^% are all these gcda files?
I downloaded some open source code and made some customizations to it. The framework was in C/C++, and architected at a level more complex and sophisticated than many code bases I have seen.
I noticed that when I would run the application as a power user (i.e. root), it would write out a bunch of ".gcda" files. And, one time when I ran it as a non-power user, it had trouble writing those files out, and the application produced errors (it may not have even run, I can't remember).
Well, tonight, I finally looked into the topic of what a gcda file actually is.
It is the data file of a code coverage (profiling) tool.
You compile with a flag called --coverage (using gcc on Linux here), and the gcov framework then causes these .gcda files to be generated. They are binary statistics files, updated each time the program is run, and written out on clean exit of the application. That presumably also explains the permission errors: the .gcda files get written at the paths recorded at compile time, so a non-root user without write access to those directories will fail.
http://bobah.net/d4d/tools/code-coverage-with-gcov
Tuesday, March 28, 2017
Deploying Etherape on a non-Development system
Go to the EtherApe web site. Read the package dependencies.
1. Downloaded gtk+-2.24.31 sources
1a. Ran "./configure" (failed on missing dependencies)
- Installed pango-devel
- Installed atk-devel
- Installed gdk-pixbuf2-devel
1b. Re-ran "./configure" and passed the dependency checks, then ran "make" and "make install"
2. Downloaded libglade-2.6.4 sources
2a. Ran "./configure"
- Installed libgnomeui-devel
3. Downloaded Etherape 9.1.4 sources
3a. Ran "./configure"
- Installed libpcap-devel
- Installed gnome-doc-utils
NOTE: I got some kind of error on a documentation package, but decided it was not critical to EtherApe actually working.
3b. Ran "make" and then "make install"
Thursday, March 16, 2017
Netfilter Kernel Module Programming
I have been doing some kernel module programming. This is not for kids.
Most examples on this pre-date the 3.10 kernels now in use (in other words, the examples I mainly see showing how this magic is done are for 2.6 kernels).
But I've learned a bit from doing this. When I finally got into the more advanced kernel modules, where you need to start accessing C data structures from the kernel headers, stuff started to not compile, and I learned that the data structures have changed, and so on.
The ultimate end of this is to write your own firewall using Netfilter. That will take some work.
But learning the Netfilter architecture, and how a packet traverses the Netfilter tables, is very valuable, because iptables is built on Netfilter.
I could write a lot more on this - but I'd bore you. I've compiled a lot of information and subject matter on this.
Dell PowerEdge R330 - Lifecycle and iDRAC
For the first time in years and years - maybe ever to this extent - I delved into the guts of a hardware platform: the Dell PowerEdge platform.
We order a lot of these where I work: the Dell R220 (originally), R230, R330, and R430.
Dell R430 - Carrier Grade (redundant and scalable)
Dell R330 - Enterprise Grade (has redundancy; drives, RAID card, power supplies)
Dell R230 - Commercial / Consumer Grade (weaker computing power, no redundancy)
The line actually goes up to an R7xx series (I know someone who bought one of those - an R710), but we don't go that high where I work.
I have played with these boxes quite a bit: adding memory, auxiliary network cards, and in one case setting a jumper to clear NVRAM. On a few boxes in the earlier days, I would configure RAID on them and partition the drives in the CentOS installer (a Kickstart process takes that fun away from us nowadays).
One thing I have done is install iDRAC cards into boxes that were not originally ordered with them. I learned that if you buy the wrong ones, they might be compatible with the box but lack the screw holes to mount them on the motherboard (I had to return those).
Lately, I have been playing with the iDRAC and Lifecycle Controller functions on the Dell R330. I've learned that there are numerous versions of iDRAC (newer boxes happen to be running iDRAC 8, while the ones from the last couple of years are on 6 and 7). Dell has documentation on these versions, which use a primitive command-line (CLI) syntax that has not changed much since I originally used RACADM in the '90s.
I also played with the OS-Passthrough feature. You can direct-cable the iDRAC port to a spare port on the box with a CAT5/6, put static IPs on both ports, and create a closed-loop out-of-band management LAN without cabling the box into any external network infrastructure. This lets you VPN or tunnel into the box and then hop onto the local management network to reach iDRAC. You do have to cable it, though - there's no way (that I saw) to create a virtual link. If you also set the Lifecycle Controller IP statically, you end up with three IPs: one for Lifecycle Controller, one for iDRAC, and the one the operating system assigns statically when it comes up.
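For the static addressing part, the iDRAC side can be set from the host OS with racadm. This is the syntax as I recall it from Dell's RACADM documentation (the IPs are made up; verify the flags against your iDRAC version):

```shell
# Show the current iDRAC NIC configuration
racadm getniccfg

# Set a static IP / netmask / gateway on the iDRAC port
racadm setniccfg -s 192.168.100.2 255.255.255.0 192.168.100.1
```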
iDRAC has a web front-end that can be configured and enabled. Licensing governs what can be done in the GUI, whereas in the CLI the licensing does not seem to inform the user very well about what restrictions are in play.
I never did get the Lifecycle Controller web interface to work, if one even exists (maybe there is a client or remote software that accesses it - I'm still looking into that). So as it stands, this software appears to work only if you're on the physical console of the box and access it via the F10 key at boot.
Trying to learn some more but at this point, this is what I have learned.
Ansible Part I
Now that I have a bunch of VMs running on a KVM host with proper network configuration, the next thing that would be good to do is learn how to deploy these VMs in an efficient way.
Right now, I have bash scripts that generate the VMs. I have one for Spice Graphics, and another without Spice Graphics for a non-graphical Minimal CentOS. Once these OS images are installed, though, I have to do considerable tweaking to get software installed and configured on them.
This is where Ansible comes in.
I have a book on Ansible - and a number of Ansible scripts and playbooks.
I have not had time to read the book, nor play with the playbooks, but I did have sense enough to delete all of the inventory files. The last thing you want to do is start running playbooks and farting up someone else's virtual machines using incorrect inventory.
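When I do rebuild them, an inventory file is just an INI-style list of groups and hosts - something like this (the group name, hostnames, and IPs here are made up):

```ini
[kvm_guests]
vm1.lab.local ansible_host=192.168.100.10
vm2.lab.local ansible_host=192.168.100.11

[kvm_guests:vars]
ansible_user=root
```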
So that's where we are....nowhere really, except an intent to get smart about Ansible.
Ansible is an alternative to Chef and Puppet. I know a guy who did research on all of these and chose Ansible. So that's the history of "why Ansible".
More Work on KVM - Network Configuration
Been a while since I have posted anything on here. I'll do a few updates.
One of the projects I have been working on is the transition from VirtualBox (which I run on a Windows 10 laptop) and ESXi (which we used to run on large servers) to KVM.
What I have been doing is installing an entire network on a KVM host - with different CentOS 7 virtual machines.
Initially when I did this, I put each one of these on its own subnet (the default network). Then, when one of the VMs needed a static IP, I learned how to use the virsh commands to edit the XML file for the default network, insert DHCP ranges, and - within those ranges - lock a specific IP to a specific host/MAC.
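For reference, the edit in question lives in the <dhcp> section of the network XML (reachable via "virsh net-edit default"). A range plus a <host> entry pinning a MAC to an IP looks like this (the 192.168.122.x addresses are libvirt's defaults; the MAC and hostname are made up):

```xml
<ip address='192.168.122.1' netmask='255.255.255.0'>
  <dhcp>
    <range start='192.168.122.100' end='192.168.122.200'/>
    <host mac='52:54:00:aa:bb:cc' name='vm1' ip='192.168.122.50'/>
  </dhcp>
</ip>
```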
What I really wanted to do was go back and reconfigure the network to resemble the Virtual Switch mechanism that ESXi provides through its user interface. But I could not easily figure out how to do that.
Later, a young greenhorn developer mentioned to me that the "Connection Details" tab in the Virt-Manager GUI would allow you to add/remove and start/stop various networks. And in exploring this, I learned that you can create Routed networks, NAT networks, and custom versions of these. You can also create internal networks.
It appears that you can "enable static routes" on both the NAT and Routed networks - a little confusing but made sense once you started trying to interact between VMs. I had some issues getting NAT networks to interface with Routed Networks until I wised up and, for the VM that needed internet access, created two network interfaces on that VM; one using a NAT network (external internet) and one using Routed (for internal network that could interface with other Routed VMs).
With that I was able to create 7-8 VMs that could interface with one another, and one of those VMs could get out to the internet as required.
There might be more sophisticated things you can do, but I think if you understand the types of networks and how to properly configure them, you should pretty much be where you want to be. I might need to read up on more advanced aspects of KVM but I think I'm good for now.