Friday, August 7, 2020

LACP not working on ESXi HOST - unless you use a Distributed vSwitch

Today we were trying to configure two NICs on an ESXi host in an active-active arrangement, so that they would participate in a LAG using LACP, with one NIC connected to one TOR (Top of Rack) switch and the other connected to a separate TOR switch.

It didn't work.

There was no way to "bond" the two NICs (as you would typically do in Linux); the standard ESXi vSwitch only supports NIC Teaming. Perhaps only the most advanced networking folks realize that NIC Teaming is not the same as NIC Bonding (we won't get into the weeds on that), and that neither one is the same thing as Link Aggregation.
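For contrast, here's roughly what the Linux side of that looks like. This is a minimal sketch, assuming a bond named bond0 already exists in 802.3ad (LACP) mode; it just parses the kernel's bonding status file to confirm the LAG actually negotiated:

```python
# Minimal sketch: confirm a Linux LACP bond negotiated with the switch.
# Assumes a bond named "bond0" is already configured in 802.3ad (LACP) mode;
# the kernel exposes its state in /proc/net/bonding/bond0.
from pathlib import Path

def lacp_bond_status(bond: str = "bond0") -> dict:
    status_file = Path("/proc/net/bonding") / bond
    if not status_file.exists():
        raise FileNotFoundError(f"{bond} is not configured on this host")

    info = {"mode": None, "slaves": []}
    current_slave = None
    for line in status_file.read_text().splitlines():
        line = line.strip()
        if line.startswith("Bonding Mode:"):
            info["mode"] = line.split(":", 1)[1].strip()
        elif line.startswith("Slave Interface:"):
            current_slave = {"name": line.split(":", 1)[1].strip()}
            info["slaves"].append(current_slave)
        elif line.startswith("MII Status:") and current_slave is not None:
            current_slave["mii_status"] = line.split(":", 1)[1].strip()
    return info

if __name__ == "__main__":
    status = lacp_bond_status()
    print(status["mode"])  # expect "IEEE 802.3ad Dynamic link aggregation"
    for slave in status["slaves"]:
        print(slave["name"], slave["mii_status"])
```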

So after configuring NIC Teaming and enabling the second NIC on vSwitch0 - poof! We lost connectivity.

Why? Well, the standard ESXi vSwitch speaks Cisco Discovery Protocol (CDP) for neighbor discovery, but it does not speak LACP, which the switch requires to negotiate the LAG. Without LACP negotiation there is no effective LAG, and the switch gets confused.
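For anyone retracing this, here's roughly what we were staring at. A hedged sketch using pyVmomi (VMware's Python SDK); the host name and credentials are placeholders. It dumps the NIC teaming policy of each standard vSwitch on the host - and every value that policy can take is a static load-balancing or explicit failover mode, so there is simply no LACP option on a standard vSwitch:

```python
# Hedged sketch (pyVmomi): dump the NIC teaming policy of each standard
# vSwitch on an ESXi host. Host name and credentials are placeholders.
# The teaming policy is one of "loadbalance_srcid", "loadbalance_ip",
# "loadbalance_srcmac", or "failover_explicit" -- none of them negotiate LACP.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certs in production
si = SmartConnect(host="esxi-host.example.com", user="root", pwd="***",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net_info = host.configManager.networkSystem.networkInfo
        for vswitch in net_info.vswitch:
            teaming = vswitch.spec.policy.nicTeaming
            print(vswitch.name,
                  "policy:", teaming.policy,
                  "active NICs:", teaming.nicOrder.activeNic)
finally:
    Disconnect(si)
```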

Finally, we read that in order to use LACP, you needed to use a vDS - the VMware vSphere Distributed Switch.

Huh? Another product? To do something we could do on a Linux box with no problems whatsoever?

Turns out that to run a vDS, you need to run vCenter Server. So they put the Distributed Switch in vCenter Server?

Doesn't that come at a performance cost? Just so they can charge for licensing?

I was not impressed that I needed to use vCenter Server just to put 2 NICs on a box into a Link Aggregation Group.
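For completeness: once a LAG does exist on a vDS, it is only visible (and configurable) through vCenter. Here is a minimal read-only sketch using pyVmomi, assuming a reachable vCenter at a placeholder hostname; the lacpApiVersion and lacpGroupConfig property names should be double-checked against your SDK version:

```python
# Hedged sketch: list LACP LAGs defined on each Distributed vSwitch via the
# vCenter API (pyVmomi). Hostname and credentials are placeholders, and the
# lacpApiVersion / lacpGroupConfig property names should be verified against
# your vSphere SDK version.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    for dvs in view.view:
        print(dvs.name, "LACP API:", dvs.config.lacpApiVersion)
        for lag in dvs.config.lacpGroupConfig or []:
            print("  LAG:", lag.name, "mode:", lag.mode,
                  "uplinks:", lag.uplinkNum)
finally:
    Disconnect(si)
```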

