Friday, March 4, 2022

ESXi is NOT Linux

ESXi is not built upon the Linux kernel. It uses VMware's own proprietary kernel (the VMkernel) and software stack, and it lacks most of the applications and components that are commonly found in Linux distributions.

Because ESXi offers familiar "-ix" commands (Unix, Linux, POSIX), it "looks and smells" like Linux. In reality, these commands are more akin to the Cygwin package that one can install on a Windows system to get a Unix-like shell and command set. ESXi does not use Cygwin, however; it runs something called BusyBox.

BusyBox is used on a lot of small form-factor home networking gear. pfSense, for example, runs on FreeBSD (Berkeley Unix). But many small routers (Ubiquiti EdgeMax comes to mind) use different chipsets and different OS kernels, and then use BusyBox to abstract the kernel away from users by providing a common command-line interface - meaning users don't need to learn a whole slew of new OS commands.
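
A quick, hedged illustration of what this looks like in an ESXi shell (paths and output vary by ESXi release, so treat this as representative only):

    # Many of the familiar commands in the ESXi shell are just BusyBox applets.
    ls -l /bin/ls     # typically reveals a symlink pointing at a single busybox binary
    busybox           # run with no arguments, BusyBox prints the list of applets it provides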

ESXi has a LOT of things that Linux does NOT have (a few illustrative commands follow the list):

1. Its own file systems - VMFS, for example, of which VMFS6 is the newest revision.

2. Its own process scheduler and scheduling algorithms.

3. VMkernel hooks that tools like esxtop use (think system activity reporting in Unix and Linux).
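
To make that list concrete, here are a few commands that touch on those ESXi-specific pieces (a hedged sketch; availability and options vary by ESXi version):

    # List VMFS volumes and their extents - VMFS is an ESXi-specific clustered file system.
    esxcli storage vmfs extent list

    # List loaded VMkernel modules - these are VMkernel drivers, not Linux kernel modules.
    vmkload_mod -l

    # Interactive performance tool that reads VMkernel counters (think sar/top, but for the VMkernel).
    esxtop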

 

This article (the source for this post) discusses some nice facts when comparing ESXi to Linux:

ESXi-is-not-based-on-Linux

I learned some interesting things from this article, such as:

ESXi even uses the same binary format for executables (ELF) as Linux does, so it is really not a big surprise that you can run some Linux binaries in an ESXi shell - provided that they are statically linked or only use libraries that are also available in ESXi! (I exploited this "feature" when describing how to run HP's hpacucli tool in ESXi and when building the ProFTPD package for ESXi).
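
Before copying a Linux binary over, it is worth checking how it is linked. A minimal sketch, run on an ordinary Linux box (the binary name is a placeholder):

    # Check whether the binary is statically linked before trying it in the ESXi shell.
    file ./mybinary     # look for "statically linked" in the output
    ldd ./mybinary      # a static binary reports "not a dynamic executable"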

...You cannot use binary Linux driver modules in ESXi. Lots of Linux device drivers can be adapted to ESXi, though, by modifying their source code and compiling them specifically for ESXi. That means that the VMkernel of ESXi implements a subset of the Linux kernel's driver interfaces, but also extends and adapts them to its own hypervisor-specific needs.

In my opinion this was another very clever move by the VMware ESXi architects and developers, because it makes it relatively easy to port an existing Linux driver for a hardware device to ESXi. So the partners that produce such devices do not need to develop ESXi drivers from scratch. And it also enables non-commercial community developers to write device drivers for devices that are not supported by ESXi out of the box!

A PDF describing the ESXi architecture can be downloaded here:

 https://www.vmware.com/techpapers/2007/architecture-of-vmware-esxi-1009.html

Tuesday, March 1, 2022

VMware Clustered File Systems - VMFS5 vs VMFS6

 

The source below has a nice table that describes the differences between VMware's VMFS5 and the newer VMFS6.

Source: http://www.vmwarearena.com/difference-between-vmfs-5-vmfs-6/


For the difference between 512n and 512e: 512n devices use a native 512-byte sector size, while 512e devices have a 4 KB physical sector size but emulate (expose) 512-byte logical sectors to the host.


VMFSsparse:

VMFSsparse is a virtual disk format used when a VM snapshot is taken or when linked clones are created off the VM. VMFSsparse is implemented on top of VMFS, and I/Os issued to a snapshot VM are processed by the VMFSsparse layer. VMFSsparse is essentially a redo-log that grows from empty (immediately after a VM snapshot is taken) up to the size of its base VMDK (when the entire VMDK is re-written with new data after the snapshot is taken). This redo-log is just another file in the VMFS namespace, and upon snapshot creation the base VMDK attached to the VM is changed to the newly created sparse VMDK.

SEsparse (space efficient):

SEsparse is a newer virtual disk format that is similar to VMFSsparse (redo-logs), with some enhancements and new functionality. One of the differences of SEsparse with respect to VMFSsparse is that the block size is 4KB for SEsparse compared to 512 bytes for VMFSsparse. Most of the performance aspects of VMFSsparse discussed above - impact of I/O type, snapshot depth, physical location of data, base VMDK type, etc. - apply to the SEsparse format as well.
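
As a hedged illustration of where these redo-logs live, taking a snapshot from the ESXi shell and then listing the VM's directory should show the delta/sparse VMDK files (the VM ID, datastore path, and file names below are placeholders and depend on the disk format and ESXi version):

    # Find the VM's ID, take a snapshot, then look for the redo-log files it creates.
    vim-cmd vmsvc/getallvms
    vim-cmd vmsvc/snapshot.create 42 "before-patch"
    ls /vmfs/volumes/datastore1/myvm/    # expect *-000001-delta.vmdk or *-sesparse.vmdk files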

Wednesday, February 9, 2022

Jinja2 Templating in Ansible

Lately, I have been playing around with Jinja2 templating in Ansible. Let me explain the context.

The Morpheus CMP solution has an Automation/Workflow engine that can be used to run Tasks, or whole Workflows, in a variety of different technologies (scripting languages, Ansible, Chef, Puppet, et al.).

To access the variables about your Virtual Machine - say, after you launch it - you put tags into your script that reference those variables. The tag syntax has subtle differences depending on whether it is a bash script, a Python script, or an Ansible playbook.

This post relates to Ansible specifically.

If you need an explicit, specific value, the tag in an Ansible playbook would look as follows:
    - name: "set fact hostname"
      set_fact:
        dnsrecord: |
          {{ morpheus["instance"]["hostname"] | trim }}

Really strange, and confusing, syntax at first glance. Not to mention the pipe to a supposed function called trim.

What language is this? I thought it was Groovy, or some kind of Groovy scripting, at first. Then I thought it was a form of JavaScript. Finally, after some web research, I came to learn that this markup is Jinja2, the templating language that Ansible uses.

First, I had to understand how Morpheus worked. I realized that I could use a Jinja2 tag to dump the entire object (in JSON) that describes a launched virtual machine (tons of data, actually). Once I understood how Morpheus worked, and the JSON it generates, I was able to go to work snagging the values that I needed in my scripts.
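
For example, a simple debug task like this one (a sketch, assuming the morpheus variable is exposed to the playbook the same way it is in the examples further down) dumps the whole context object so you can see what is available:

    - name: "Dump the entire Morpheus context object to see what is available"
      ansible.builtin.debug:
         var: morpheus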

But - eventually, my needs (use cases) became more complex. I needed to loop through all of the interfaces of a virtual machine! How do you do THAT??

Well, I discovered that to do more sophisticated logic structures (i.e. like loops), the markup and tagging is different, and the distinctions are important. You can wind up pulling your hair out if you don't understand them.

Let's take an example where we loop through a VM's interfaces with Jinja2.

In this example, we loop through all interfaces of a virtual machine. But - we use an if statement to only grab the first interface's ip address. 

Note: To be optimal, we would break out of the loop after we grab that first IP address, but breaking out of loops is not straightforward in Jinja2, and there are only a handful of interfaces, so we will let the loop continue on, albeit wastefully.

    - name: set fact morpheusips
      set_fact:
         morpheusips: |
           {% for interface in morpheus['instance']['container']['server']['interfaces'] %}
             {% if loop.first %}
                {{ interface['ipAddress'] }}
             {% endif %}
           {% endfor %}

Note that a tag which simply outputs a specific value has NO percent signs - it uses double curly braces: {{ }}.

But the "logic" - for loops, if statements, et al. - DOES use percent signs in the tag: {% %}.

This is extremely important to understand!

Now, the variable we get - morpheusips - is a string, which contains leading and trailing spaces and newlines - including an annoying newline at the end of the string, which wreaked havoc when I needed to convert this string to an array (using the split function).

I found myself having to write MORE code to clean this up, and found more useful Jinja2 filters for doing this kind of string manipulation and conversion (to an array).

    - name: "Replace newlines and tabs with commas so we can split easier"
      set_fact:
         commasep: "{{ morpheusips | regex_replace('[\\r\\n\\t]+',',') | trim }}"


    - name: "Remove comma at the end of the string"
      set_fact:
         notrailcomma: "{{ commasep | regex_replace(',$','') | trim }}"

    - name: "convert the ip delimeter string to a list so we can iterate it"
      set_fact:
         morpheusiplst: "{{ notrailcomma.split(',') }}"

    - name: "Loop and Print variable out for morpheusiplst"
      ansible.builtin.debug:
         var: morpheusiplst
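
In hindsight, there is probably a tidier way to do all of this. A hedged alternative sketch (assuming the same morpheus structure as above) uses Jinja2's map filter to pull the ipAddress attribute out of every interface and build a proper list directly, skipping the string cleanup entirely:

    - name: "Alternative sketch: build the IP list directly with the map filter"
      set_fact:
         morpheusiplst: "{{ morpheus['instance']['container']['server']['interfaces']
                            | map(attribute='ipAddress') | list }}"

And if only the first interface's address were needed, replacing | list with | first would return just that one value.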

I am NOT a guru, or an SME, on Jinja2 templating. But this is a blog to share what I have been poking around with as I get used to it to solve some problems.


Thursday, December 2, 2021

HAProxy Problem Solved

Okay, an update on my latest post on HAProxy. It turns out that the issue had nothing to do with HAProxy at all, but that the clustering on the back-ends was not working properly.

So, I have reverted the HAProxy back to its original configuration, prior to getting into debug mode. 

Note: The Stats page in HAProxy, by the way, is invaluable for seeing whether your proxy is working correctly. You can see the load distribution, the back-end health checks, etc. Plus the GUI is simple, informative and intuitive.
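
For reference, enabling the Stats page only takes a few lines. A minimal sketch - the port (8404) and URI (/stats) are arbitrary choices here, and in production you would restrict access to it:

#---------------------------------------------------------------------
# stats page (sketch)
#---------------------------------------------------------------------
listen stats
   bind *:8404
   mode http
   stats enable
   stats uri /stats
   stats refresh 10s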

Below is a quick look at the front-end and back-end configurations. In this configuration, we are using HAProxy as a Reverse Proxy: we terminate incoming requests (SSL Termination), and rather than forwarding to our back-ends in the clear, we use back-end certificates and proxy to the back-ends over SSL. This is perhaps a bit unconventional for a typical reverse proxy scenario.

#---------------------------------------------------------------------
# ssl frontend
#---------------------------------------------------------------------
frontend sslfront
   mode tcp
   option tcplog
   bind *:443 ssl crt /etc/haproxy/cert/yourcert.pem
   default_backend sslback

#---------------------------------------------------------------------
# sticky load balancing to back-end nodes based on source ip.
#---------------------------------------------------------------------
backend sslback
   # redirects http requests to https which makes site https-only.
   #redirect scheme https if !{ ssl_fc }
   mode http
   balance roundrobin
   option ssl-hello-chk

   option httpchk GET /ping
   http-check expect string TESTPING

   stick-table type ip size 30k expire 30m
   stick on src
   #stick-table type binary len 32 size 30k expire 30m
   #stick on ssl_fc_session_id
   default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions

   server servername01 192.168.20.10:443 ssl check id 1 maxconn 1024
   server servername02 192.168.20.11:443 ssl check id 2 maxconn 1024
   server servername03 192.168.20.12:443 ssl check id 3 maxconn 1024

Monday, October 18, 2021

HAProxy - Aggravating Problem I Have Not Solved

I have never really blogged about proxies. I don't have a lot of proxy experience and don't consider myself a guru with proxies, load balancers, etc.

But more and more often, solutions have come in that require load distribution across an N+1 (Active/Active) cluster. And HAProxy is supposed to be a rather lightweight and simple approach, especially in situations where the mission is not totally critical, or the load is not terribly high.

I originally set HAProxy up to distribute load to a Cloudify cluster, and Cloudify provided the HAProxy configuration that they had tested in their lab and knew worked well. Later, I set HAProxy up to load balance our Morpheus cluster. Initially it was working fine.

Or so it seemed. Later, I noticed errors. The first thing you generally do when you see errors is to tell HAProxy to use one back-end node (and not 2 or 3), so that you can reduce troubleshooting complexity and examine the logs on just one back-end node. In doing this, I rather quickly figured out that if I told HAProxy to use one back-end node, things worked fine. When I told HAProxy to use two or more back-end nodes, things didn't work.
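
As an aside, besides commenting out server lines and reloading, back-ends can also be taken in and out of rotation at runtime through HAProxy's admin socket. A hedged sketch, assuming a stats socket has been enabled in the global section and using the backend/server names from the configuration shown in the newer post above:

    # global section needs something like:  stats socket /var/run/haproxy.sock mode 600 level admin
    # Then, at runtime, pull a back-end server out of rotation and put it back:
    echo "disable server sslback/servername02" | socat stdio /var/run/haproxy.sock
    echo "enable server sslback/servername02"  | socat stdio /var/run/haproxy.sock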

So that's where it all started.

The Problem
Below is a picture of what we are doing with HAProxy. Web access comes in on the northbound side of the picture, and web access is not the problem we are having. The problem is that VMs deployed onto various internal networks by Morpheus "phone home", and they phone home on a different network interface.

This works fine with a single back-end enabled. But if you enable more than one back-end in HAProxy, Morpheus fails to fully transition the state of the VM to "running".


(Figure: HAProxy Flow)

In testing this out a bit and dumping traffic, we initially noticed something interesting: the source IP coming into each Morpheus node was not the HAProxy VIP - it was the interface IP address. We wound up solving this by telling Keepalived to delete and re-create the routes so that the VIP is used as the source IP - but only on the node that currently holds the VIP. In the end, while this made traffic analysis (tcpdump on the Morpheus nodes) a bit clearer about the traffic flow, it did not solve the actual issue.
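
For anyone wanting to replicate that, the mechanism was roughly the following (a sketch with hypothetical interface names and addresses): a Keepalived notify script re-installs the route to the back-end network with the VIP as the preferred source address, and it only runs on the node that has just taken over the VIP.

    # keepalived.conf fragment (sketch): run a script when this node becomes MASTER
    #   notify_master "/usr/local/bin/vip-src-routes.sh"

    # /usr/local/bin/vip-src-routes.sh (sketch - network, device, and VIP are hypothetical)
    #!/bin/sh
    # Re-create the route to the back-end network so that connections to the
    # Morpheus nodes are sourced from the VIP rather than the interface IP.
    ip route replace 192.168.20.0/24 dev eth1 src 192.168.20.5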

I STILL don't know why it works with one back-end and not two or more. I have had proxy experts in our organization come in and look, and they seem to think HAProxy is doing its job properly and that the issue is in the back-end clustering. The vendor, however, is telling us the issue is with HAProxy.

Our next step may be to configure a different load balancer. That should definitively rule HAProxy in or out. I know Squid Proxy is quite popular, but these tools do have a learning curve, and I have zero (zilch) experience with Squid Proxy. I think we may use a NetScaler load balancer if we wind up going with another one.

I should mention that the HAProxy configuration is not the simplest. And as a result of configuring this, I have increased my general knowledge of load balancing.


Monday, October 4, 2021

The first Accelerated VNF on our NFV platform

I haven't posted anything since April, but that isn't because I haven't been busy.

We have our new NFV Platform up and running, and it is NOT on OpenStack. It is NOT on VMware VIO. It is also NOT on VMware Telco Cloud!

We are using ESXi, vCenter, and NSX-T for the software-defined networking, with Morpheus as the Cloud Management solution. Morpheus has a lot of different integrations, and a great user interface that gives tenants a place to log in and self-manage their resources.

The diagram below depicts what this looks like from a Reference Architecture perspective.

The OSS, which is not covered in the diagram, is a combination of Zabbix and vROps, working in tandem to ensure that the clustered hosts and management functions are behaving properly.

The platform is optimized with E-NVDS, commonly referred to as Enhanced Datapath, which requires special DPDK drivers to be loaded on the ESXi hosts, as well as some additional configuration in the hypervisors to ensure that the E-NVDS is set up properly (separate upcoming post).

Now that the platform is up and running, it is time to start discussing workload types. There are a number of Workload Categories that I tend to use:

  1. Enterprise Workloads - Enterprise Applications, 3-Tier Architectures, etc.
  2. Telecommunications Workloads
    • Control Plane Workloads
    • Data Plane Workloads

Control Plane workloads have more tolerance for latency, and are less demanding of system resources, than Data Plane workloads.

Why? Because Control Plane workloads are typically TCP-based, frequently use (RESTful) APIs, and tend to be more periodic in their behavior (periodic updates). Most of the time, when you see issues related to the Control Plane, it is related to back-hauling a lot of measurements and statistics (telemetry data). But generally speaking, this data in and of itself does not have stringent requirements.

From a VM perspective, there are a few key things you need to do to ensure your VNF behaves as a true VNF and not as a standard workload VM. These include:

  • Setting Latency Sensitivity to High, which turns off interrupts and ensures that poll mode drivers are used.
  • Enabling Huge Pages on the VM by going into VM Advanced Settings and adding the parameter: sched.mem.lpage.enable1GHugePage = TRUE

Note: Another setting worth checking, although we did not actually set this parameter ourselves, is: sched.mem.pin = TRUE

Note: Another setting, sched.mem.maxmemctl, ensures that ballooning is turned off. We do NOT have this setting, but it was mentioned to us, and we are researching it.
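
Pulling the parameters above together, the VM's advanced Configuration Parameters end up looking something like the listing below. The last two lines are the ones we have not set ourselves, so treat them (and especially the maxmemctl value) as assumptions rather than validated settings.

    sched.mem.lpage.enable1GHugePage = "TRUE"    # back the VM's memory with 1 GB huge pages
    # Mentioned to us, but not applied on our VMs (unverified):
    # sched.mem.pin = "TRUE"                     # pin the VM's memory
    # sched.mem.maxmemctl = "0"                  # assumed value; caps the balloon driver at 0 MB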

One issue we continually ran into was a vCenter alert called Virtual Machine Memory Usage, displayed in vCenter as a red banner with "Acknowledge" and "Reset to Green" links. The VM was in fact running, but vCenter seemed to have issues with it. The latest change we made, which seems to have fixed this error, was to check the "Reserve all guest memory (All locked)" checkbox.

This checkbox to Reserve all guest memory seemed intimidating at first, because the concern was that the VM could reserve all memory on the host. That is NOT what this setting does!!! What it does is allow the VM to reserve all of its own memory up-front - but just the VM memory that is specified (e.g. 24G). If the VM has HugePages enabled, it makes sense that one would want the entire allotment of VM memory to be reserved up front and be contiguous. When we enabled this, our vCenter alerts disappeared.

Lastly, we decided to change DRS to Manual in VM Overrides. To find this setting amongst the huge number of settings hidden in vCenter, you go to the Cluster (not the Host, not the VM, not the Datacenter), where the option for VM Overrides lives, and you have four options:

  • None
  • Manual
  • Partial
  • Full

The thinking here is that VMs with complex settings may not play well with vMotion. I will be doing more research on DRS for VNFs before considering setting this (back) to Partial or Full.

Monday, April 26, 2021

Tenancy is Critical on a Cloud Platform

With this new VMware platform, it was ultimately decided to go with ESXi hypervisors, managed by vCenter, along with NSX-T.

During the POC, it was pointed out that this combination of solutions had some improvements and enhancements over OpenStack (DRS, vMotion, et al.). But one thing seemed to be overlooked, and we pointed it out: Tenancy.

VMware attempts to address Tenancy with vertical-stack point solutions, like vCloud Director (positioned at Service Providers) or vRealize Automation; the latter is going through a complete transformation in its latest version. These solutions are also expensive. And if you don't have the budget, what are your options?

One option is to set up Resource Pools and Folders in vCenter. That is not the cleanest solution, because you cannot set policies, workflows, etc.

What else can you do? Well, you can use a Cloud Management solution.

We had Cloudify as an orchestrator, and we evaluated it as a Cloud Management solution. But what we found in the end was that Cloudify excelled at complex orchestration, but it was not designed and built, from the ground up, to be a Cloud Management Platform.

It seemed that this lack of Tenancy became apparent to everyone all at once - once the platform came up on VMware. And, with Cloudify, we lacked the Blueprint development needed for the scores to hundreds of tasks that we had to support. It needed integrations with NSX-T, vCenter, and a host of other solutions.

We looked at a couple of other solutions, and settled on a solution called Morpheus.

I will blog a bit more about Morpheus in upcoming posts. I have been very hands-on with it lately. 
