I decided to try to enable DPDK on my computer.
This computer is a Dell Precision T1700, circa 2015, which is a very nice little development workstation.
The VERY FIRST thing anyone needs to do with DPDK is ensure that their server has supported NICs. It all starts with the NICs: you cannot do DPDK without DPDK-compatible NICs.
There is a page on the DPDK website listing the NICs that are (or should be, as it always comes down to the level of testing, right?) compatible with DPDK.
That page is: DPDK Supported NICs
This T1700 has an onboard NIC, and two ancillary NIC cards that ARE listed as DPDK-compatible NICs. These NICs are listed as:
82571EB/82571GB Gigabit Ethernet Controller, and are part of the Intel e1000e family of NICs.
I was excited that I could use this server without having to invest in and install new NIC cards!
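(For the record, a quick way to inventory your NICs and get the PCI vendor:device IDs to compare against that list is lspci. And once DPDK itself is installed, its bundled dpdk-devbind.py tool will show the same devices along with which drivers they are bound to. Exact output will of course differ machine to machine.)
# lspci -nn | grep -i ethernet
# dpdk-devbind.py --status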
Let's first start, with specs on the computer. First, our CPU specifications.
CPU:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 60
Model name:            Intel(R) Core(TM) i5-4690 CPU @ 3.50GHz
Stepping:              3
CPU MHz:               1183.471
CPU max MHz:           3900.0000
CPU min MHz:           800.0000
BogoMIPS:              6983.91
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              6144K
NUMA node0 CPU(s):     0-3
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb invpcid_single ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts md_clear spec_ctrl intel_stibp flush_l1d
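A couple of items buried in that Flags line are worth calling out for DPDK: sse4_2 and avx2 (the vector instruction baseline that DPDK's optimized code paths want), and pdpe1gb (the CPU supports 1GB hugepages). A quick sanity check for just those, nothing DPDK-specific:
# egrep -o 'pdpe1gb|sse4_2|avx2' /proc/cpuinfo | sort -u
avx2
pdpe1gb
sse4_2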
Let's take a look at the NUMA capabilities on this box. It says up above that we have one NUMA node. There is a utility on Linux called numactl, and we will run it with the "-H" option to get more information.
# numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 16019 MB
node 0 free: 7554 MB
node distances:
node   0
  0:  10
From this, we see we have one NUMA node. NUMA nodes equate to CPU sockets, and since we have one CPU socket, we have one NUMA node. All 4 cores of the CPU are on this node (node 0, per the above). Having just one NUMA node is not an optimal scenario for DPDK testing, but as long as we are NUMA-capable, we can proceed.
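One NUMA wrinkle that will matter later: DPDK performs best when a NIC and the memory serving it sit on the same NUMA node. The kernel exposes each PCI device's node in sysfs. The PCI address below is just a placeholder; substitute whatever lspci reports for your NIC. On a single-node box like this one it will simply report 0 (or -1, which means no affinity was recorded).
# cat /sys/bus/pci/devices/0000:01:00.0/numa_node
0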
Next, we will look at Memory.
Memory:
# lsmem --summary
Memory block size: 128M
Total online memory: 16G
Total offline memory: 0B
16G of memory. Should be more than enough for this exercise.
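The reason memory matters so much here: DPDK doesn't allocate its packet buffers from ordinary 4K pages; it wants hugepages reserved ahead of time. As a preview of what's coming, reserving 2MB hugepages at runtime looks roughly like this (the count of 1024, i.e. 2GB, is purely illustrative, and on most modern distros systemd already mounts /dev/hugepages for you):
# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# mkdir -p /dev/hugepages
# mount -t hugetlbfs nodev /dev/hugepages
# grep -i hugepages /proc/meminfo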
So how to get started?
Obviously the right way would be to sit and read reams of documentation from both DPDK and Open vSwitch. But what fun is that? Booooring. I am one of those people who likes to start running and run head-first into the wall.
So, I did some searching and found a couple of engineers who had published scripts that enabled DPDK. I decided to study these, pick them apart, and use them as a basis to get started. I saw a lot of things in these scripts that had me googling: IOMMU, HugePages, CPU core masking, PCI, Poll Mode Drivers, etc.
In order to fully comprehend what was needed to enable DPDK, I would have to familiarize myself with these concepts. Then, hopefully, I could tweak these scripts, or even write new ones, and get DPDK working on my box. That's the strategy.
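Spoiler for where several of those concepts end up: they boil down to kernel boot parameters. As a rough sketch of the kind of GRUB change these scripts make (the values are illustrative, you keep whatever options are already on the line, 1GB hugepages require the pdpe1gb flag we saw in lscpu, and the grub2-mkconfig path shown is the RHEL/CentOS flavor):
GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=4"
# grub2-mkconfig -o /boot/grub2/grub.cfg
(then reboot)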
I did realize, as time went on, that the scripts were essentially echoing the DPDK and Open vSwitch websites, albeit from different points in time, since the content on those sites changes release by release.