Wednesday, July 26, 2017

NetFlow with Ntop


I had heard that Ntop supports NetFlow on Linux.

I found a blog post where someone else has played with this package for the same or similar purposes. Let me share that here:
https://devops.profitbricks.com/tutorials/install-ntopng-network-traffic-monitoring-tool-on-centos-7/

I downloaded the Ntop package, and immediately it barked about the fact that I did not have kernel headers on the system.

This is bad, in my mind.

What box, running out in the field, would have kernel headers installed on it? That would be a bad security practice, because it would mean the box has a lot of software on it that it probably shouldn't have: specifically compilers and the rest of a build toolchain.

I also noticed that the package runs with a license code. There is a limited license it can run under, which is the default configuration. But I'm not sure I like having software, at least for this purpose, that depends on licensing. I did not study whether it is a time-limited key, or whether it calls out to a remote server to authenticate the license.

I kind of stopped there. I did not play with it any further. I may come back to it, and if I do I will update this accordingly.

Saturday, July 22, 2017

NetFlow with nfcapd and fprobe

I spent some time researching and using NetFlow this week (about a day).

Basically, you download the nfdump package, which provides the collector (nfcapd) and a command-line tool called nfdump; there is also a companion web GUI called NfSen that sits on top of these.

You run the collector, which listens on a standard or specified port; "something" (e.g. a router) that knows how to export flows sends it NetFlow records, and the collector writes them out as NetFlow-formatted files. Then you can use nfdump or NfSen to view these flows.
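
As a concrete sketch (the port number, directory, and file name below are just placeholders I would pick, not anything mandated by nfdump):

    # Start the collector, listening on UDP port 9995, writing files under /var/cache/nfdump
    nfcapd -p 9995 -l /var/cache/nfdump

    # Read back everything the collector has written so far
    nfdump -R /var/cache/nfdump

    # Or read a single capture file and show the top 10 flow records by bytes
    nfdump -r /var/cache/nfdump/nfcapd.201707221200 -s record/bytes -n 10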

There are multiple versions of NetFlow, from version 5 all the way up to version 9 (see the NetFlow article on Wikipedia). The newer versions carry additional data (or "extensions," as they refer to them).

The tricky part in testing this is to mimic or simulate a router. To do this:

fprobe is a tool you can install to generate flows. It does not appear to be available through the yum package manager, so you either need to download the source and compile it, or find an RPM that can be downloaded and installed.

fprobe-ulog is another tool, but it works through iptables (the ULOG target) and requires iptables rules to be in place. I was surprised to see that yum COULD find and install this program, but not fprobe.

There are a few other tools as well, but these were the two I tried out.

Both of these worked, although there is not a lot of documentation or forum discussion on the fprobe-ulog approach. I wound up using fprobe.
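
For reference, pointing fprobe at the collector is a one-liner along these lines (the interface name and port are examples; check the man page for the exact options on your build):

    # Export flows seen on eth0 to the nfcapd collector listening on this host, UDP port 9995
    fprobe -i eth0 localhost:9995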

There is the question of what defines and constitutes a network flow. Wikipedia defines this; a flow is typically keyed on source/destination IP address, source/destination port, and protocol (plus a few other fields such as ToS and input interface). I think that with a bunch of UDP traffic it is harder for NetFlow to stitch packets together into a flow for hindsight analysis, since there is no session setup and teardown to key on, but TCP of course is straightforward.

SystemTap


I spent some time reading the SystemTap Beginners Guide.

https://sourceware.org/systemtap/SystemTap_Beginners_Guide/

I learned the basics of writing, reading, and compiling / running SystemTap scripts.

I also enjoyed running the sample SystemTap scripts that are mentioned specifically in Chapter 5 of this guide.
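
One of the simpler ones, roughly as I remember it from the guide (so treat this as a sketch; on newer kernels the probe point may be syscall.openat instead), looks like this:

    #!/usr/bin/stap
    # Print the process name, pid, and arguments for every open() syscall,
    # then stop after 60 seconds.
    probe syscall.open
    {
      printf("%s(%d) open (%s)\n", execname(), pid(), argstr)
    }
    probe timer.s(60)
    {
      exit()
    }

You run it with stap (e.g. "stap open.stp") as root, or as a user in the stapdev / stapusr groups.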

I wound up downloading a bunch of these, especially the networking ones.

Writing these efficiently would take some practice. But there is a good Reference Guide that can make the process of writing the scripts easier. The question is, could there be a use case for writing one of these scripts that someone hasn't already thought up, and written?

Wednesday, July 19, 2017

Ansible Part II

I've had more time to play with Ansible, in the background. A little bit at least. Baby steps.

I use it now to deploy SD-WAN networks, which have different types of KVM-based network elements that need to be configured differently on individual virtual machines.

I enhanced it a bit to deploy virtual-machine based routers (Quagga), as I was building a number of routing scenarios on the same KVM host.

I have made some changes to make Ansible work more to my liking:

1. Every VM gets a management adaptor that connects to a default network.
2. The default network is a NAT network that has its own subnet mask and IP range.
3. I assign each VM an IP on this management network in the hosts file on the KVM host.

The ansible launch-vm script uses getent to figure out which IP address a VM has from its name, which is defined in the inventory file.
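
As a sketch of what that looks like (the VM names and addresses here are invented, and 192.168.122.0/24 is just libvirt's default NAT range):

    # /etc/hosts on the KVM host
    192.168.122.11   quagga-r1
    192.168.122.12   quagga-r2

    # What the launch-vm script effectively does to find a VM's management IP
    getent hosts quagga-r1
    # -> 192.168.122.11   quagga-r1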

Because the adaptor type I like to use is Realtek, I had to change guestfish in the launch-vm script to use the adaptor name ens3. I also had to change it to use an external DNS server, because the lack of one was causing serious issues with the playbooks not running correctly, especially when they needed to locate a host by name (e.g. to do a yum install).
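
The DNS part of that guestfish change boils down to something like this one-liner (the image path and DNS server are placeholders, and the real launch-vm script does more than this):

    # Drop a resolv.conf pointing at an external DNS server into the VM image before first boot
    guestfish -a /var/lib/libvirt/images/quagga-r1.qcow2 -i write /etc/resolv.conf "nameserver 8.8.8.8"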

Ansible has turned out to be very convenient. I can deploy VMs lickety-split now, freeing up the time I would normally spend tweaking and configuring individual VM instances.

I'm thinking of writing my own Ansible module for Quagga setup and configuration. That might be a project I get into.

Before I do that, I may enhance the playbooks a bit, adding some "when" clauses and things like that. So far everything I have done has been pretty vanilla.
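
For example, a "when" clause is just a conditional bolted onto a task, something like this (the group name and package are arbitrary examples):

    - name: Install quagga, but only on the router VMs
      yum:
        name: quagga
        state: present
      when: "'routers' in group_names"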

Wednesday, July 5, 2017

Quagga Routing - OSPF and BGP

For the last month or two, I have been "learning by doing" routing, using Quagga.

Quagga uses an abstraction layer called Zebra that sits (architecturally not literally) on top of the various routing protocols that it supports (OSPF, BGP, RIP, et al).

I designed two geographically-separated clusters of OSPF routers - area 0.0.0.1 - and then joined them with a "backbone" of two OSPF routers in area 0.0.0.0. I hosted these on virtual machines running on a KVM host.

From there, I modified the architecture to peer over BGP with a colleague's KVM virtual machine host that also ran BGP.
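
The relevant chunks of the Quagga configuration, entered through vtysh, ended up looking roughly like the following (the addresses and AS numbers here are placeholders, not the ones we actually used):

    ! On one of the area 0.0.0.1 routers
    router ospf
     ospf router-id 10.1.1.1
     network 10.1.1.0/24 area 0.0.0.1
     network 10.0.0.0/30 area 0.0.0.0

    ! On the router peering with the colleague's KVM host
    router bgp 65001
     neighbor 203.0.113.2 remote-as 65002
     network 10.1.1.0/24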

Some things we learned from this exercise:
1. We learned how to configure the routers using vtysh.
2. We had to make firewall accommodations for OSPF (which uses multicast) and BGP (sketched below).
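
On a stock CentOS box those accommodations boil down to something like this (iptables shown here; your firewall front end of choice may differ):

    # OSPF is its own IP protocol (89) and uses multicast (224.0.0.5 / 224.0.0.6)
    iptables -A INPUT -p ospf -j ACCEPT

    # BGP peers over TCP port 179
    iptables -A INPUT -p tcp --dport 179 -j ACCEPT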

We also used an in-house code project that tunnels Quagga traffic. I have not examined the source for this, but it worked as well, and required us to make specific firewall changes to allow the tunX interfaces.

Percona XtraDB Cluster High Availability Training

Tuesday 5/23 through 5/26 I attended Percona Cluster High Availability training.

The instructor was very knowledgeable and experienced.

One of the topics we covered was the "pluggable", or module-based, architecture that allows different types of storage engines to be used with the MySQL database. He mentioned InnoDB, and how Percona's XtraDB is based on the InnoDB engine.

We also covered tools and utilities, not only from MySQL but from third parties, including Percona (the Percona Toolkit).

We spent some time on Administration, such as Backup and Recovery.

We then moved on to Galera replication, and used VirtualBox images to create and manage three-node database clusters.
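
The heart of that setup is a handful of wsrep settings in my.cnf on each node. A trimmed-down sketch (node names, IPs, and the provider path will vary by release; these are illustrative only):

    [mysqld]
    # Galera / wsrep settings
    wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
    wsrep_cluster_name=pxc-training
    wsrep_cluster_address=gcomm://192.168.56.101,192.168.56.102,192.168.56.103
    wsrep_node_name=node1
    wsrep_node_address=192.168.56.101
    wsrep_sst_method=xtrabackup-v2

    # Settings Galera expects
    binlog_format=ROW
    default_storage_engine=InnoDB
    innodb_autoinc_lock_mode=2

The first node gets bootstrapped on its own, and the other two then join and sync (SST) from it.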

I won't reproduce a full week of rather complex training topics and details in this blog, but it was good training. I will need to go back and revisit / review this information so that it doesn't go stale on me.
