Monday, November 27, 2017

Elasticity and Autoscaling - More Testing

Today I went back and did some further testing on AutoScaling.

Here are some new things that I tested:

1. Two Scaling policies in a single descriptor. 

It does no good to have "just" a Scale Out if you don't have a corresponding "Scale In"!

You cannot have true Elasticity without the expansion and contraction - obviously - right?

So I did this, and it parsed just fine, as it should have.
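To make that concrete, here is a rough sketch of the pair of policies, expressed as a Python dict purely for illustration. The real directives live in the descriptor itself, and the field names here are my own shorthand, not the orchestrator's actual schema; the thresholds and actions match the tests described below.

```python
# Illustration only: field names are my own shorthand, not the descriptor schema.
scaling_policies = [
    {   # expansion
        "name": "cpu-scale-out",
        "metric": "cpu_util",        # cumulative average CPU across instances
        "comparison": ">",
        "threshold": 35,             # percent
        "action": {"type": "SCALE_OUT", "value": 3},
    },
    {   # contraction
        "name": "cpu-scale-in",
        "metric": "cpu_util",
        "comparison": "<",
        "threshold": 15,             # percent
        "action": {"type": "SCALE_IN_TO", "value": 2},  # keep at least 2 instances
    },
]
```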

I also learned that you can put these scaling directives at different levels of descriptors - like the NSD. If you do that, I presume it will factor in all instances across VNFMs, but I did not test this.

2. I tested to make sure that the scaling MAXED OUT where it should. 

If the cumulative average CPU across instances was greater than 35%, then SCALE_OUT 3 would take effect. This seemed to work. I started with 2 instances, and as I added load to the CPUs to push the cumulative average up, it scaled out by 3, and then by 3 more, for a total of 8, no matter what load was on the CPUs. So it maxed out at 8 and stayed put. This test passed.
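What I observed is consistent with the engine simply capping each scale-out step at the configured maximum. A minimal sketch of that behavior (my own illustration, not the engine's actual code):

```python
def apply_scale_out(current: int, step: int, max_instances: int) -> int:
    """Cap a SCALE_OUT step at the configured maximum number of instances.
    This is a sketch of the behavior observed, not the engine's code."""
    return min(current + step, max_instances)

# Mirrors the test: start at 2, scale out by 3 each time, max of 8.
count = 2
for _ in range(4):
    count = apply_scale_out(count, step=3, max_instances=8)
    print(count)  # 5, 8, 8, 8 -- pins at 8 regardless of further load
```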

I was curious to see whether the engine would instantiate one VM at a time, in bunches of 3 (per the descriptor), or all the way up to the max (which would be errant behavior). Nova in OpenStack staggers the instantiations, so it appears to be doing one at a time up to three (i.e. 1-1-1), at which point re-processing may kick off another series of 1-1-1. So this is probably to-be-expected behavior. The devil is in the details with the Orchestrator, OpenStack, and the Nova API as to whether - and to what extent - you can instantiate VMs simultaneously.
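For what it's worth, here is roughly what the two approaches look like through python-novaclient. The endpoint, credentials, image and flavor IDs are placeholders, and I have not verified which path the Orchestrator actually takes.

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client

# Placeholder credentials/endpoint -- substitute your own environment.
auth = v3.Password(auth_url="http://controller:5000/v3",
                   username="admin", password="secret",
                   project_name="admin",
                   user_domain_id="default", project_domain_id="default")
nova = client.Client("2", session=session.Session(auth=auth))

image_id = "IMAGE-UUID"    # placeholder
flavor_id = "FLAVOR-UUID"  # placeholder

# One API call asking Nova to boot three servers at once...
nova.servers.create(name="vnf-member", image=image_id, flavor=flavor_id,
                    min_count=3, max_count=3)

# ...versus three separate calls, which is where a 1-1-1 stagger would come from.
for i in range(3):
    nova.servers.create(name="vnf-member-%d" % i, image=image_id, flavor=flavor_id)
```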

When a new VM comes up, it takes a while for it to participate in measurements. The scaling engine would actually skip the interval due to a "measurements received less than measurements requested" exception, and would not start evaluating until all of the expected VMs were reporting measurements. I have to think about whether I like this or not.
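In pseudocode, the behavior seems to be something like the following (my own sketch, not the engine's actual implementation):

```python
def evaluate_interval(cpu_measurements, expected_instances):
    """Sketch of the 'skip the interval' behavior observed: if fewer
    measurements come back than there are instances expected, skip the
    evaluation rather than deciding on partial data."""
    if len(cpu_measurements) < expected_instances:
        # Roughly where the "measurements received less than measurements
        # requested" exception showed up in my testing.
        return None  # skip this interval; no scaling decision
    # Otherwise evaluate the cumulative average CPU across instances.
    return sum(cpu_measurements) / len(cpu_measurements)
```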


3. Elasticity Contraction - using the SCALE_IN_TO parameter.

I set things up so that it would scale in to 2 instances - ensuring at least two instances would always be running - whenever the cumulative average CPU across instances dropped below 15%.

This test actually failed. I saw the alarm get generated, and I saw the engine attempt to scale in, but some kind of decision-making policy rejected the scale in "because conditions are not met".

We will need to go into the code and debug this to see what is going on.
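I do not yet know what that condition check is, but I would guess it is some kind of guard along these lines (purely speculative; this is exactly what we need to confirm in the code):

```python
def allow_scale_in_to(current_instances, target_instances, in_cooldown):
    """Speculative sketch of the kind of guard a decision-making policy
    might apply before honoring a SCALE_IN_TO action. The real rejection
    logic ("conditions are not met") is what still needs to be debugged."""
    if in_cooldown:
        return False  # e.g. still inside a cooldown window after the last action
    return current_instances > target_instances  # only shrink if above the floor
```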

