Wednesday, August 21, 2019

Linux rpm package management tools with rpmbuild and rpmreaper

In all of my years of using Linux, I have never created a package with a package manager build tool. I have of course used rpm, all the time. Querying packages, installing packages, removing packages. I just haven't generated, or built, a package myself. Until now.

We use CentOS here, which is a Red Hat Linux distribution, and Red Hat uses the Red Hat Package Manager (rpm) tools to administratively manage software packages on operating systems based on Red Hat Linux. Every package on an rpm-based system has a ".rpm" file suffix, and there is a binary called "rpm" that is used to install, uninstall, query, etc., any and all packages on a system that were created with Red Hat Package Management.

I had always heard that working with rpms (generating them) was tedious, painful, and a general pain in the a$$. One reason has to do with package dependencies. You can run into mutual or circular dependencies, nested dependencies, and many other issues. So I probably avoided making packages for these reasons.

One little-known but very cool tool is called rpmreaper. It is available in the EPEL repository (the one you enable with the epel-release package). If you run this tool, you can visually inspect details about installed packages, as shown below.

Sample screenshot of the rpmreaper rpm package management tool

So while I had no idea what I was doing, I spent a full day making a package and it didn't go too badly.  The rpm I put together parks a couple of kernel drivers and a configuration file on the system. That's it. Sounds simple, huh? Guess again.

First, it turns out that kernel drivers are compressed on Linux systems now. So I needed to use xz to compress the kernel drivers, which means an uninstall needs to remove the compressed .ko.xz files, because plain .ko files won't be there. And when plopping new kernel modules onto a system, you do need to run commands like depmod to re-generate the dependencies between modules.
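For example, here is a minimal sketch of the kind of %post scriptlet I am talking about (the module names and install path here are illustrative assumptions, not my exact spec):

    %post
    # compress the freshly-copied modules the way the distro ships its own modules
    xz -f /lib/modules/$(uname -r)/extra/GobiNet.ko
    xz -f /lib/modules/$(uname -r)/extra/GobiSerial.ko
    # re-generate module dependency information so modprobe can resolve the new modules
    depmod -a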

Now this rpm probably goes beyond what a typical rpm would do. I think, as a best practice, an rpm should move files to where they need to be, or remove files from where they are. That's it. And it may do system things in an effort to support that charter.

Dependencies
I built the kernel drivers outside of the rpm. I could have gotten heavy and sophisticated and had the rpm compile the kernel drivers, but that opens up a can of worms about chipsets, target architectures, etc. I decided to keep it simple, and that was easy to do, fortunately, because my box was an x86_64 architecture and so were the target boxes they wanted to install the rpms on.

So originally, I had dependencies on the packages in the group "Development Tools". I commented those out and instead listed JUST the dependencies that the scripting in the rpm needed (see the snippet after this list):
  • bash
  • xz (for compressing the kernel modules)
  • NetworkManager (nmcli)
  • ModemManager (mmcli)
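In the specfile, those end up as plain Requires: tags, something like this minimal sketch:

    Requires: bash
    Requires: xz
    Requires: NetworkManager
    Requires: ModemManager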
Package Success or Failure
There was so much scripting to check for, start/stop services, and load/unload kernel drivers that I learned that exit codes other than the normal 0 will cause the package install or package remove to fail outright.

My solution to this was to provide feedback in the form of echo commands, and to append an "|| true" (or true) so that a failing command didn't cause the rpm itself to bail out. The commands were really for administrator convenience - not so much related to the deployment/removal of the necessary files.
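Here is a sketch of that pattern (the service and module names are illustrative):

    %preun
    echo "Stopping ModemManager so the drivers can be removed cleanly..."
    systemctl stop ModemManager || true   # a failure here should not abort the uninstall
    rmmod GobiNet 2>/dev/null || true     # the module may not even be loaded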

Definitions
Originally, I was defining shell variables in the individual shell functions (scriptlets) of the rpm specfile. That became redundant and error-prone very quickly when I needed access to these same variables in the pre/post scripts of both the install and uninstall sections of the specfile.

Hence, I had to quickly learn and make use of definitions, which are sort of like global variables. But definitions are only expanded when the rpm itself is created; they are not referenced when you install or uninstall the package - the scriptlets ship with the values already substituted in.
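A minimal sketch of what I mean (the names here are hypothetical): the %define values below get substituted into the scriptlets when rpmbuild creates the package, so every %pre/%post/%preun/%postun section sees the same values.

    %define module_dir  /lib/modules
    %define driver_name GobiNet

    %post
    echo "Setting up %{driver_name} under %{module_dir}..."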

Versioning
RPM specfiles, as you would expect, have nice versioning support, and it is wise to make use of it and document in your specfile what you are doing in each version iteration!
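The %changelog section at the bottom of the specfile is the place for that, something like this sketch (names and dates are made up; note that a literal %post in a changelog entry has to be escaped as %%post so it isn't expanded as a macro):

    %changelog
    * Wed Aug 21 2019 Some Builder <builder@example.com> - 1.0-2
    - Compress the kernel modules with xz and run depmod in %%post
    * Tue Aug 20 2019 Some Builder <builder@example.com> - 1.0-1
    - Initial packaging of the drivers and configuration file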

Ok, in summary, it was interesting to have FINALLY created my own rpm package. I am sure there is a LOT more to learn, and the sophistication can go way beyond what my rpm is doing right now. I have about a 300-line specfile, mainly due to all of the scripting logic, and I am only deploying 5 files with this rpm.

Thursday, August 15, 2019

Sierra Wireless EM7455 LTE Card on CentOS7

I had someone approach me for some help. He had a Sierra Wireless LTE card that he wanted to use on CentOS7. He had NetworkManager running, and ModemManager, and he had two kernel modules loaded called qcserial and qmi_wwan, but ModemManager would not recognize the card. So that's where we started.

I am not a low-level driver expert these days (I don't do that day in, day out), but I have had some experience with drivers for wireless devices, such as USB 802.11x sticks. I had a TRENDnet one years ago that wouldn't work with Linux until I found some sketchy drivers on the web that I compiled and got to work. But that entailed NetworkManager and wpa_supplicant...not ModemManager. This was my first dive into LTE cards and drivers. Furthermore, I did not have the card in my hand, or on my own system.

So, apparently Ubuntu supports these cards natively, but CentOS7 doesn't.

I noticed that CentOS 7 does include a sierra.ko (sierra.ko.xz) module, which I thought should work with the Sierra Wireless EM7455 LTE-A card, which uses a Snapdragon X7 chip. We tested that out by loading the sierra kernel module manually and restarting ModemManager. No luck. Maybe it doesn't work with this EM7455 card? Not sure. I did see some support threads saying the sierra.ko module only works for certain Sierra cards because Sierra does some interesting power-management stuff in their driver (they mentioned another module, option.ko, that should work with most other LTE cards). But this card, the EM7455, is indeed a Sierra LTE card, and the sierra.ko module didn't seem to work.
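The manual test was essentially this (a sketch; mmcli -L just lists the modems ModemManager recognizes, which in our case was none):

    modprobe sierra                 # load the stock Sierra driver
    systemctl restart ModemManager
    mmcli -L                        # list recognized modems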

There are also a couple of other kernel modules that ARE on a CentOS7 box. These are called:

  • qcserial
  • qmi_wwan

The qcserial module creates a /dev/ttyUSB interface, and qmi_wwan creates a /dev/cdc-wdm interface. My understanding is that the serial interface is a control channel for commands and statistics, while the other is used for data transmission/reception (Tx/Rx). This is all part of a protocol called QMI, a Qualcomm protocol.
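If you have the libqmi tools installed, you can poke at the QMI control device directly, something like this (a sketch, assuming the device shows up as /dev/cdc-wdm0):

    ls /dev/cdc-wdm* /dev/ttyUSB*                    # confirm the interfaces exist
    qmicli -d /dev/cdc-wdm0 --dms-get-manufacturer   # query the modem over QMI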

If you want to learn more about these protocols, the link below is absolutely fascinating, as it discusses the distinctions between GSM and CDMA, and the history of CDMA, which has ties to Hollywood and a beautiful actress. Eventually it gets into QMI.

https://blogs.gnome.org/dcbw/2010/04/15/mobile-broadband-and-qualcomm-proprietary-protocols/

I think what is/was happening is that when you crank up the EM7455 card, these two drivers, qcserial and qmi_wwan, get loaded, but ModemManager still doesn't recognize the card. Neither does NetworkManager.

So - the engineer heard that if he got access to two new drivers, GobiNet and GobiSerial, which are generated from a Sierra Wireless SDK, the card would work. You need to blacklist the qcserial and qmi_wwan drivers, though. The problem: how to get the SDK. I guess there might be some reason why Sierra Wireless doesn't release this SDK, which is probably, maybe, tied to royalties or licensing with Qualcomm.
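The blacklisting itself is just a modprobe.d file, something like this sketch (the filename is arbitrary):

    # /etc/modprobe.d/blacklist-gobi.conf
    # keep the in-tree drivers from grabbing the card before GobiNet/GobiSerial can
    blacklist qcserial
    blacklist qmi_wwan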

So we eventually obtained the SDK. We compiled it, and it produces, for our x86_64 architecture, two kernel modules:

  • GobiNet
  • GobiSerial

We (I) created an rpm (see the separate blog post about rpm package creation) to do all of the voodoo to get these drivers installed, along with the blacklist file, and to configure an APN connection to a Verizon LTE access point.
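The APN setup was done with nmcli, roughly like the sketch below (the connection name, device name, and APN here are assumptions, not the exact values we used):

    nmcli connection add type gsm ifname cdc-wdm0 con-name verizon-lte apn vzwinternet
    nmcli connection up verizon-lte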

Voilà. The drivers work! I think he said something about it using a ppp interface, though, and we specifically compiled GobiNet to use raw IP with a rawip=1 setting in the Makefile. So we may need to look into that, but at least the LTE modem is now working.

By the way: you cannot rely just on syslog for information about LTE. Because these are kernel drivers, you need to use dmesg to see what these modules are barking out!
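Something as simple as this tells you what the drivers are doing (a sketch):

    dmesg | grep -iE 'gobi|wwan|usb'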

So the engineer will do some more testing. But we have something that seems to work. I will wait to hear more feedback.

Thursday, August 1, 2019

OpenStack - Discussion on Cells

I have a partner who is still using OpenStack Newton.

I was asked to look into this, because OpenStack Newton is no longer supported by the OpenStack community; it has been End of Life'd (EOL).

OpenStack Ocata is still supported. I at one time set this up, and I didn't see any notable differences between Ocata and Newton, and my Service Orchestrator (Open Baton) seemed to still work with Ocata.

Ocata introduces the concept of Cells. Cells is an architectural concept that apparently (if I understand correctly) replaces (enhances?) the previous concept of Regions. It changes the way OpenStack is run, in terms of control and delegation of nodes and resources (Compute Node resources, specifically). It is a more hierarchical approach.

Here is a link on cells that I read to get this understanding: Discussion about OpenStack Cells

I didn't stop there, either. I read some more.

It turns out CERN (Particle Physics!? They run those particle accelerators and do stuff more complex than anything I am writing about!?) - THEY are big on OpenStack (I assume they still are). Tons and tons of material on what CERN is doing: architectures, topologies, yada yada. I don't have time to read all of that.

But I did read THIS article, on moving from Cells v1 to Cells v2. It told me all I needed to know: if you are using Cells, you need to jump over the Ocata release and use Queens or later, because more than half the OpenStack modules were deaf, dumb, and blind as to the notion of what a Cell is. Obviously this causes problems.

So I guess the concept of a Cell is somewhat Beta, and partially supported in Ocata.

I guess you could move to Ocata in a small lab if you are not using Cells, and if the APIs remain constant with respect to whatever happens to be leveraging them.

If anyone reads this, by all means feel free to correct and comment as necessary.

Wednesday, July 31, 2019

What on Earth is Canonical Doing with Ubuntu? Sheez

I have been using CentOS almost exclusively since I have been working here, first with CentOS 6, and then (now) CentOS 7. I have watched the kernels move up from 3.x to 4.x, and I have fought with (and still fight with) NetworkManager, etc.

You get used to what you use.

I have also used Ubuntu in the past, 14.04, and 16.04, but it has been a while.

So, I needed to install Ubuntu in order to run OSM, because OSM (on their website at least) wants to use Ubuntu. Ubuntu is probably bigger in Europe, is my guess.

So - for two straight days now, I have been trying to install an Ubuntu cloud image and get it working on a KVM system. Something that SHOULD be simple, right? Wrong.

Here is a rundown of all of the headaches I have run into thus far, which has pissed me off about Ubuntu.

1. On 16.04, the root file system is only 2G for the cloud image you download off the web.

I ran out of disk space in no time flat trying to install X Windows and a display manager, which by default are not included in the cloud image.

Trying to increase the root file system? Damn near impossible. I tried using qemu-img resize, and that only got me a /dev/vdb device. The /dev/sda1 file system was STILL, and REMAINED, 2G. I could not install X Windows; I couldn't do squat. I am sure if I rolled up my sleeves and got to work with advanced partitioning tools and whatnot, I could have made this happen. Or maybe not. Point is, this was a hassle. An unnecessary hassle, in my opinion.

2. I realized that Ubuntu 18.04 uses a qcow2 format - which you CAN resize. Again, why Ubuntu is using 2G as a root file system size is beyond me, and this is ANNOYING. This is the year 2019.

So, I resized the image and put a password on it (cloud images are not set up for password logins at the prompt, only SSH keys, which of course is a good practice, albeit a hassle for what I needed).
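Roughly, the steps were as follows (a sketch: the filename, size, and password are placeholders; virt-customize comes from the libguestfs tools, and cloud-init grows the root partition into the new space on first boot):

    qemu-img resize ubuntu-18.04-server-cloudimg-amd64.img +20G
    virt-customize -a ubuntu-18.04-server-cloudimg-amd64.img \
        --root-password password:changeme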

3. I launched 18.04 and guess what? NO NETWORKING!!!! @^%$

I realized no networking was set up. At all! WHAT???

4. Let's go set up networking. Yikes! You CAN'T!!!!!! WHY? Because the iproute2 packages and the legacy networking packages that everyone in the WORLD uses are not on the machine!

They want you to use this newfangled tool called Netplan to set up your networking!?!?

Fine. You can google this and set it up, which I did.
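For reference, here is a minimal Netplan sketch for a DHCP setup (the interface name is an assumption; you apply it with "netplan apply"):

    # /etc/netplan/01-netcfg.yaml
    network:
      version: 2
      renderer: networkd
      ethernets:
        ens3:
          dhcp4: true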

BUT WHY ARE ALL OF THESE LINUX DISTRIBUTIONS BECOMING SO DIFFERENT?

THAT IS NOT, I SAY...NOT...WHAT LINUX IS ALL ABOUT?????

I remember when Gentoo came out, and how different a beast it was. Now, the distinction between CentOS and Ubuntu is becoming a very wide chasm.

Thursday, July 25, 2019

ONAP - Just Too Big and Heavy?

I have been working with Service Orchestrators for a while now. Here are three of them I have had experience with:

  • Heat - an OpenStack project. While OpenStack itself can be considered the VIM (Virtual Infrastructure Manager), Heat is an orchestrator that runs on top of OpenStack and allows you to deploy and manage services.
  • Open Baton - this was the ORIGINAL reference implementation of the ETSI MANO standards, out of a think tank in Germany (Fraunhofer FOKUS).
  • ADVA Ensemble - an ETSI-based orchestrator that is not in the public domain. It is the IPR of ADVA Optical Networking, based out of Germany.
There are a few new Open Source initiatives that have surpassed Open Baton for sure, and probably Heat also. Here are a few of the more popular open source ones:
  • ONAP - a Tier 1 carrier solution, backed by the likes of at&t. 
  • OSM - I have not examined this one fully. TODO: Update this entry when I do.
  • Cloudify - a private commercial implementation that bills itself as being more lightweight than the ONAP solution.
I looked at ONAP today. Some initial YouTube presentations were completely inadequate for allowing me to "get started". One was a presentation by an at&t Vice President. Another was done by some architect who didn't show a single slide on the video (the camera was trained on the speaker the whole time).

This led me to do some digging around. I found THIS site: Setting up ONAP

Well, if you scroll down to the bottom of this, here is your "footprint" - meaning, your System Requirements, to install this.

ONAP System Requirements
Okay. This is for a full installation, I guess. The 3TB of disk is not that bad; you can put a NAS out there and achieve that, no problem. But 148 VCPU???? THREE HUNDRED THIRTY-SIX gig of RAM? OMG - that is a deal killer in terms of being able to install this in a lab here.

I can go through and see if I can pare this down, but I have a feeling that I cannot install ONAP. This is a toy for big boys, who have huge servers and lots of dinero.

I might have to go over and look at OSM to see if that is more my size.

I will say that the underlying technologies include Ubuntu, OpenStack, Docker and MySQL - which are pretty mainstream.

RUST Programming Language - Part II

I spent a few hours with RUST again yesterday.

There's definitely some things to learn with this language.

One thing I noticed was that the compiler is really good. Very, very intelligent. It can make a lot of intelligent inferences about what you are doing (or trying to do). It will also bark out all kinds of warnings about things.

Cargo
In my previous post, I may have made light mention of something called Cargo. Cargo is a build and package-management facility for RUST programmers. It can build (cargo build), check (cargo check), or build AND execute (cargo run) your program.

It also manages packages and dependencies, so I think it is somewhat analogous to pip in Python. If you are familiar with yum or some equivalent package manager on a Linux distribution, you can get the gist of what Cargo does from the perspective of pulling in the packages you need for your project.
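A minimal workflow sketch (assuming RUST and Cargo were installed via rustup):

    cargo new coins     # create a project skeleton with Cargo.toml and src/main.rs
    cd coins
    cargo check         # type-check without producing a binary
    cargo build         # compile a debug binary under target/debug
    cargo run           # build and execute in one step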

This link is a book on Cargo:  The Cargo Book

So yesterday, I wrote some code from the book I have been using, but deviated from it a little bit and pulled in a package called strum, which allows you to iterate over an "Enum" object. My Enum has coins and coin values (penny, nickel, dime, quarter), and I used strum to iterate over it and print out the monetary value of each coin. Nothing super sophisticated, but with this RUST language, you definitely need to learn how to do the basics first.
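Here is a sketch of that exercise (the crate versions are an assumption; strum's EnumIter derive plus the IntoEnumIterator trait is what gives the Enum an iter() method):

    // Cargo.toml:
    //   [dependencies]
    //   strum = "0.15"
    //   strum_macros = "0.15"
    use strum::IntoEnumIterator;
    use strum_macros::EnumIter;

    #[derive(Debug, EnumIter)]
    enum Coin {
        Penny,
        Nickel,
        Dime,
        Quarter,
    }

    // map each coin to its monetary value with a match expression
    fn value_in_cents(coin: &Coin) -> u32 {
        match coin {
            Coin::Penny => 1,
            Coin::Nickel => 5,
            Coin::Dime => 10,
            Coin::Quarter => 25,
        }
    }

    fn main() {
        for coin in Coin::iter() {
            println!("A {:?} is worth {} cents", coin, value_in_cents(&coin));
        }
    }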

Match Expression
Another interesting thing is that you can use simple "if / then" logic, but you can also use the more sophisticated "match" expression, or construct. This is the "higher IQ" way to do matching, for the more advanced or off-the-beaten-path cases (e.g. range patterns, guards on match arms, destructuring, etc.).
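A small sketch of what match can do beyond a plain if/else:

    // classify an integer with range patterns and a match guard
    fn describe(n: i32) -> &'static str {
        match n {
            0 => "zero",
            x if x < 0 => "negative",   // a match guard
            1..=9 => "a single digit",  // an inclusive range pattern
            _ => "ten or more",         // the catch-all arm
        }
    }

    fn main() {
        for n in [-3, 0, 7, 42].iter() {
            println!("{} is {}", n, describe(*n));
        }
    }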

Here is a link on that; it is actually a relative link into a more comprehensive book on RUST that has a lot more good stuff in it than just the match expression.

https://doc.rust-lang.org/reference/expressions/match-expr.html

Tuesday, July 23, 2019

RUST Programming Language - Part I


In hearing about this "new" language, I have spent some time this week going through a book called "The Rust Programming Language", which can be found at this link:

The Rust Programming Language

I will have to come back and edit this post as I go through the book, but so far, I have been trying to learn "enough" about the language to understand WHY it has even emerged - in other words, what its selling point is, what deficiencies in other languages it addresses, etc.

What do I have to say right now?

Well, it's a compiled language. It has been a long time since I have seen a new compiled language emerge. We have had nothing but runtime-interpreted languages for many years now.

It has some interesting resemblances to C.

It has no traditional advanced object-oriented capabilities, like inheritance and polymorphism. That said, it does support encapsulation, and this concept of Traits (which, up to now, seem to me to resemble Templates in C++ a bit - but I may revise this statement after I get more familiar with them).

The language is designed to be more "safe" than traditional C or C++, the latter of which, due to direct memory access and manipulation, can cause segmentation violations, etc. One example of course is references, where one thread might de-reference a pointer that another thread is using, accessing, or manipulating.
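A tiny sketch of what the compiler enforces (ownership moves, which is a big part of that safety story):

    fn main() {
        let s = String::from("hello");
        let t = s; // ownership of the heap buffer moves from s to t
        // println!("{}", s); // would not compile: "borrow of moved value: `s`"
        println!("{}", t);    // fine: t now owns the string
    }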
