
Thursday, September 12, 2019

Graphical Network Simulator-3 (GNS3) - Part II Installation on a Linux Server

Okay for Part II of GNS3, I came in today looking to install GNS3 on a Linux Server.

I noticed that GNS3 is designed to run on Ubuntu Linux, and since I tend to run a CentOS7 shop, I am now faced with a choice: put an Ubuntu server in here, or try to get this running on CentOS7. It should run on CentOS7, right? After all, this is a Linux world, right? 😏

I decided to take one of my 32GB RAM servers, an HP box that runs CentOS7, and follow a cookbook for installing GNS3 on it.

I followed this link:
https://gns3.com/discussions/how-to-install-gns3-on-centos-7-

I chose this box because it runs X Windows. It didn't have Python 3.6 on it, or pip3.6, which is used for installing and managing Python 3.6 packages.

A lot of steps in this thing.
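
For reference, the core of the cookbook boils down to something like the following rough sketch. The package names here assume the EPEL and IUS repositories for Python 3.6, so yours may differ:

    # Python 3.6 and pip (package names assume the IUS repository)
    sudo yum install -y epel-release
    sudo yum install -y python36u python36u-pip python36u-devel
    # The GNS3 server and GUI are published on PyPI (the GUI also needs PyQt)
    sudo pip3.6 install gns3-server gns3-gui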

Some questions I have about this cookbook that I need to look into:

1. Why does the cookbook use VirtualBox on Linux? I have KVM installed. Surely I can use that instead of VirtualBox. I only use VirtualBox on my Win10 laptop. So I have, for now, skipped that section.

2. What is IOU support? I will need to google that.

UPDATE: IOU stands for IOS on Unix; the Linux build is called IOL (IOS on Linux). It is basically a Cisco IOS simulator that runs on an i386 chipset. You would need and want that if you run any Cisco elements on the GNS3 simulator.

Friday, September 6, 2019

Graphical Network Simulator-3 (GNS3) - Part I Initial Introduction on a Laptop

Someone told me about this network modeling and simulation tool called Graphical Network Simulator-3. There is a Wikipedia page on this tool, which can be found here:

https://en.wikipedia.org/wiki/Graphical_Network_Simulator-3

Fascinating tool. It allows you to drag and drop network elements onto a canvas - but unlike the old tools, this tool can actually RUN the elements! To do this, you need to import image files for the elements you drag and drop onto the canvas. Below is an example of how dragging a simulated internet cloud onto the canvas prompts you for an image to run on a virtual machine.

Image Files Required for Network Elements in GNS3

Then, once you have the elements properly situated on the canvas, you can use a connector to interconnect them (it will prompt you for the NIC interface). Once your interconnection points are established, you can click a "run" button.

If all goes well everything turns green and packets start to flow. There is a built-in packet trace on each link line, which will dump packets to a file if you choose to do a packet capture.

Thursday, June 6, 2019

The Network Problem From Hell - Fixed - Circuitous Routing



Life is easy when you use a single network interface adaptor.  But when you start using multiple adaptors, you start running into complexities because packets can start taking multiple paths. 

One particular thing most network engineers want to avoid is the situation where a packet leaves through door #1 (e.g. NIC 1) and arrives through door #2. Fixing this, though, requires some more advanced networking techniques and tricks: a separate routing table per NIC, and corresponding rules to direct packets to use those separate routing tables (sketched below).
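
On Linux, the usual recipe for this is policy routing with the iproute2 tools. A minimal sketch, assuming two NICs em1 (172.20.0.5/24) and em2 (172.22.0.5/24) with hypothetical gateway addresses:

    # Create a named routing table for each NIC (IDs/names are arbitrary)
    echo "100 table_em1" >> /etc/iproute2/rt_tables
    echo "200 table_em2" >> /etc/iproute2/rt_tables

    # Each table gets its own default route out its own NIC
    ip route add default via 172.20.0.1 dev em1 table table_em1
    ip route add default via 172.22.0.1 dev em2 table table_em2

    # Rules: replies sourced from a NIC's address use that NIC's table,
    # so packets leave through the same door they came in
    ip rule add from 172.20.0.5 table table_em1
    ip rule add from 172.22.0.5 table table_em2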

So, I had this problem where an OpenStack-managed virtual machine stopped working because it could not reach OpenStack itself, which was running on the SAME machine that the virtual machine was running on. It was driving me insane.

I thought the problem might be iptables on the host machine. I disabled those. Nope. 

I thought the problem might be OpenVSwitch. I moved the cable to a new NIC, and changed the bridge the virtual machine was using. Nope.

Compounding the problem was that the OpenStack host could ping the virtual machine. But the virtual machine could not ping the host. Why would it work one way, and not the other?

The Virtual Machine could ping the internet. It could ping the IP of the OpenStack router. It could ping the router that the host was connected to.

OpenStack uses Linux network namespaces, and in our case it was using the Neutron OpenVSwitch agent. An examination of these showed that the networking was configured just as it appeared in the Horizon Dashboard's "Network Topology" visual interface.

One thing worth mentioning: the bridge mappings for provider networks live in the ml2_conf.ini and openvswitch_agent.ini files, but EXTERNAL OpenStack networks use a bridge set by a parameter in the l3_agent.ini file! So if l3_agent.ini has a bridge setting of, say, "br-ex" for external networks, and that bridge is not correspondingly configured in the other files, OpenStack will tell you it cannot reach the external network when you create it. We did run into this when trying to create different external networks on different bridges to solve the problem. (See the snippet below.)
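
To make that concrete, here is a rough sketch of the settings involved - the bridge and physnet names ("br-ex", "provider") are placeholders for whatever your deployment actually uses:

    # /etc/neutron/plugins/ml2/ml2_conf.ini  (and openvswitch_agent.ini)
    [ovs]
    bridge_mappings = provider:br-ex

    # /etc/neutron/l3_agent.ini
    [DEFAULT]
    external_network_bridge = br-ex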

At wits' end, I finally called over one of the more advanced networking guys in the company, and we began troubleshooting with tcpdump. We finally realized that when the VM pinged the OpenStack host, the ICMP request packets were arriving on the expected NIC (em1 below), but no responses were going out on em1. When we changed tcpdump to listen on the "any" interface, we saw no responses at all. Clearly the host was dropping the packets. But iptables was flushed! WHO/HOW were the packets getting dropped? (To be honest, we still aren't sure about this - more research required.) But we did figure out that the problem was a "circuitous routing" problem.
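
For anyone reproducing this kind of troubleshooting, the tcpdump invocations were along these lines (interface names are from our setup, so substitute your own):

    # Watch ICMP on the NIC where the requests were arriving
    tcpdump -ni em1 icmp

    # Widen the net: watch ICMP on every interface at once
    tcpdump -ni any icmp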

We figured maybe reverse path filtering was causing the issue, so we disabled that in the kernel. That didn't fix it.
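
(For reference, disabling reverse path filtering is a pair of sysctl settings; em1 here is just our example interface:)

    sysctl -w net.ipv4.conf.all.rp_filter=0
    sysctl -w net.ipv4.conf.em1.rp_filter=0   # repeat per interface as needed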

Finally, we realized what was happening: the VM sends all of its packets through the external network bridge, which was attached to a 172.22.0.0/24 network, and the packet went to the router, which routed it out its 172.20.0.0/24 port and then to the host machine. But because the host machine had TWO NICs, one on EACH of those networks, it did not send replies back the same way they came in. It sent the replies out its em2 NIC, which was bridged to br-compute. And it was HERE that the packets were getting dropped. Since that NIC is managed by OpenVSwitch, we believe a loop-prevention flow rule in OpenVSwitch, or perhaps Spanning Tree Protocol, caused the packets to get dropped.

Illustration of the Circuitous Routing Issue & How it was solved

The final solution was to put in a host route, so that any packet destined for that particular VM would be sent outside of the host, upstream to the router, and back in through the appropriate 172.22.0.0/24 port on the host/bridge, to the OpenStack router, where it would be NATed back to the 192.168.100.19 IP of the virtual machine.
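
The host route itself is a one-liner on the host. A sketch, assuming the VM's floating IP is 172.22.0.50 and the upstream router's address on the host's em1 network is 172.20.0.1 (both hypothetical stand-ins for our actual addresses):

    # Send traffic for the VM's floating IP up to the router instead of out the local bridge
    ip route add 172.22.0.50/32 via 172.20.0.1 dev em1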

Firewalls and Routing. These two seem to be where most problems occur in networking.

Wednesday, May 8, 2019

Berkeley Packet Filter (BPF) - replacement for iptables - AND nftables?

I came across this blog, from Jonathan Corbet, dated Feb 19th, 2018.

BPF Comes to Firewalls, by Jonathan Corbet

I found this rather fascinating, since I was aware that nftables seemed pre-ordained to be the successor to iptables. I had even purchased and read Steven Suehring's Linux Firewalls book, which covers both iptables and nftables.

At the end of the day, I only see iptables and firewalls based on iptables (e.g. FirewallD) being used. I have not encountered any nftables firewalls yet.

And the other noted point is that nftables IS in the current version of the Linux kernel, while the BPF-based firewalling (bpfilter) discussed in the article is not yet usable there. (BPF itself has been in the kernel for years; it's the bpfilter replacement for iptables that hasn't arrived.)

But, can BPF come into Linux distributions alongside nftables soon, and wind up replacing nftables?

That is the question.

Another interesting blog post addressing the impetus behind BPF is this one:

why-is-the-kernel-community-replacing-iptables


Wednesday, October 31, 2018

Data Plane Development Kit (DPDK)


I kept noticing that a lot of the carrier OEMs are implementing their "own" Virtual Switches.

I wasn't really sure why, and decided to look into the matter. After all, there is already OpenVSwitch, which, while fairly complex, is fast, powerful, flexible, and, well, open source.

Come to learn, there is actually a faster way to do networking than with native OpenVSwitch.

OpenVSwitch tries to minimize the context switching between user space and kernel space involved in taking packets from a physical port and forwarding them to virtualized network functions (VNFs) and back.

But DPDK provides a means to circumvent the kernel entirely, with practically everything in user space interacting directly with the hardware.

This is fast indeed, if you can do it. But it bypasses everything a kernel network stack provides, so there has to be some sacrifice (which I need to look into and understand better). One of the ways it bypasses the kernel is through Direct Memory Access (DMA), based on some limited reading. Frankly, reading and digesting this material usually takes several passes and a bit of concentration, as this stuff gets very complex very fast.

The other question I have: if DPDK is bypassing the kernel en route to a physical NIC, what about other kernel-based networking services that are using that same NIC? How does that work?

I've got questions. More questions.

But up to now, I was unaware of this DPDK and its role in the new generation of virtual switches coming out. Even OpenVSwitch itself has a DPDK version.
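
As a pointer for later: enabling the DPDK datapath in OpenVSwitch looks roughly like this (a sketch assuming an OVS build compiled with DPDK support; the bridge name and PCI address are hypothetical):

    # Tell OVS to initialize DPDK
    ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

    # A DPDK-backed bridge uses the userspace (netdev) datapath
    ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev

    # Attach a physical NIC by its PCI address
    ovs-vsctl add-port br-dpdk dpdk-p0 -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:01:00.0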

Thursday, October 11, 2018

What is a Flooding Domain?


I have been working on configuring this Ciena 3906MVI premises router, with a Virtualized Network Function (VNF), and connecting that VNF back to some physical network ports.

This is a rather complex piece of hardware (under the hood).

I noticed that in some of the commands, they were creating these Flooding Domains, and I didn't know what those were (there were sub-types called VPWS and VPLS, which I need to look into as well).

These Flooding Domains are then associated with "classifiers", like "Ingress Classifiers".

I didn't truly know what a Flooding Domain was. Not a lot on the web if you search those two words together. There's plenty of stuff on the concept of Flooding, however.

I found a link where someone asked what the difference between Flooding and Broadcasting is, and it is in that link that I found the best clues to a proper understanding. So I will recap it here:

https://networkengineering.stackexchange.com/questions/36662/what-is-the-difference-between-broadcasting-and-flooding

What is the Difference between Broadcasting and Flooding?
Broadcasting is a term that is used on a broadcast domain, which is bounded by layer-3 (routers). Broadcasts are sent to a special broadcast address, both for layer-2 and layer-3. A broadcast cannot cross a layer-3 device, and every host in a broadcast domain must be interrupted and inspect a broadcast.
Flooding is used by a switch at layer-2 to send unknown unicast frames to all other interfaces. If a frame is not destined for a host which receives it, the host will ignore it and not be interrupted. This, too, is limited to a broadcast domain.
Flooding in OSPF (layer-3) means that the routes get delivered to every OSPF router in an area. It really has nothing to do with a broadcast. OSPF doesn't use broadcasts to send routes, it uses unicast or multicast to connect with its neighbors. Each OSPF router needs to have a full understanding of all the routers and routes in its area, and it tells all its neighbors about all its local routes, and any routes it hears about from other neighbors. (OSPF routers are unrepentant gossips.)
So, a Flooding Domain is essentially a "domain of packet delivery" - the set of interfaces a frame gets replicated to when the point where the packet enters (ingress) is not the point where it exits (egress). That's my best definition.
