docs: Add VPP with iperf and trex
Change-Id: I9f238b6092bc072fd875facfee5262c6b155043e
Signed-off-by: jdenisco <jdenisco@cisco.com>
@@ -1,7 +1,7 @@
 .. _containers:
 
-FD.io VPP with Containers
-=========================
+VPP with Containers
+====================
 
 
 This section will cover connecting two Linux containers with VPP. A container is essentially a more efficient and faster VM, due to the fact that a container does not simulate a separate kernel and hardware. You can read more about `Linux containers here <https://linuxcontainers.org/>`_.
@@ -9,7 +9,8 @@ extensive list, but should give a sampling of the many features contained in FD.
 
 .. toctree::
 
-   contiv/index
    containers
+   simpleperf/index.rst
    vhost/index.rst
    homegateway
+   contiv/index.rst

docs/usecases/simpleperf/index.rst (new file, 17 lines)
@@ -0,0 +1,17 @@
.. _simpleperf:

************************
VPP with Iperf3 and TRex
************************

.. toctree::
   :maxdepth: 2

   iperf3
   iperf31
   trex
   trex1

docs/usecases/simpleperf/iperf3.rst (new file, 237 lines)
@@ -0,0 +1,237 @@
.. _iperf3:

Introduction
============

This tutorial shows how to use VPP with iperf3 and TRex to get some basic performance
numbers from a few basic configurations. Four examples are shown. In the first two
examples, the **iperf3** tool is used to generate traffic, and in the last two examples
Cisco's `TRex Realistic Traffic Generator <http://trex-tgn.cisco.com/>`_ is used. For
comparison purposes, the first example shows packet forwarding using ordinary kernel
IP forwarding, and the second example shows packet forwarding using VPP.

Three Intel Xeon processor platform systems are used: one hosts VPP, and the other
two pass traffic through it using **iperf3** and Cisco's `TRex <http://trex-tgn.cisco.com/>`_.

Intel 40 Gigabit Ethernet (GbE) network interface cards (NICs) are used to connect the hosts.

Using Kernel Packet Forwarding with Iperf3
==========================================

In this test, 40 GbE Intel Ethernet Network Adapters are used to connect the three
systems. Figure 1 illustrates this configuration.

.. figure:: /_images/iperf3fig1.png

   Figure 1: VPP runs on a host that connects to two other systems via 40 GbE NICs.

For comparison purposes, in the first example, we configure kernel forwarding in
*csp2s22c03* and use the **iperf3** tool to measure network bandwidth between
*csp2s22c03* and *net2s22c05*.

In the second example, we start the VPP engine in *csp2s22c03* instead of using
kernel forwarding. On *csp2s22c03*, we configure the system to have the addresses
10.10.1.1/24 and 10.10.2.1/24 on the two 40-GbE NICs. To find all network interfaces
available on the system, use the lshw Linux command to list all network interfaces
and the corresponding slots *[0000:xx:yy.z]*.

In this example, the 40-GbE interfaces are *ens802f0* and *ens802f1*.

.. code-block:: console

   csp2s22c03$ sudo lshw -class network -businfo
   Bus info          Device      Class          Description
   ========================================================
   pci@0000:03:00.0  enp3s0f0    network        Ethernet Controller 10-Gig
   pci@0000:03:00.1  enp3s0f1    network        Ethernet Controller 10-Gig
   pci@0000:82:00.0  ens802f0    network        Ethernet Controller XL710
   pci@0000:82:00.1  ens802f1    network        Ethernet Controller XL710
   pci@0000:82:00.0  ens802f0d1  network        Ethernet interface
   pci@0000:82:00.1  ens802f1d1  network        Ethernet interface

Configure the system *csp2s22c03* to have 10.10.1.1 and 10.10.2.1 on the two 40-GbE NICs
*ens802f0* and *ens802f1*, respectively:

.. code-block:: console

   csp2s22c03$ sudo ip addr add 10.10.1.1/24 dev ens802f0
   csp2s22c03$ sudo ip link set dev ens802f0 up
   csp2s22c03$ sudo ip addr add 10.10.2.1/24 dev ens802f1
   csp2s22c03$ sudo ip link set dev ens802f1 up

List the route table:

.. code-block:: console

   csp2s22c03$ route
   Kernel IP routing table
   Destination     Gateway          Genmask         Flags Metric Ref    Use Iface
   default         jf111-ldr1a-530  0.0.0.0         UG    0      0        0 enp3s0f1
   default         192.168.0.50     0.0.0.0         UG    100    0        0 enp3s0f0
   10.10.1.0       *                255.255.255.0   U     0      0        0 ens802f0
   10.10.2.0       *                255.255.255.0   U     0      0        0 ens802f1
   10.23.3.0       *                255.255.255.0   U     0      0        0 enp3s0f1
   link-local      *                255.255.0.0     U     1000   0        0 enp3s0f1
   192.168.0.0     *                255.255.255.0   U     100    0        0 enp3s0f0

.. code-block:: console

   csp2s22c03$ ip route
   default via 10.23.3.1 dev enp3s0f1
   default via 192.168.0.50 dev enp3s0f0 proto static metric 100
   10.10.1.0/24 dev ens802f0 proto kernel scope link src 10.10.1.1
   10.10.2.0/24 dev ens802f1 proto kernel scope link src 10.10.2.1
   10.23.3.0/24 dev enp3s0f1 proto kernel scope link src 10.23.3.67
   169.254.0.0/16 dev enp3s0f1 scope link metric 1000
   192.168.0.0/24 dev enp3s0f0 proto kernel scope link src 192.168.0.142 metric 100

On *csp2s22c04*, we configure the system to have the address 10.10.1.2 and use
the interface *ens802* to route IP packets destined for 10.10.2.0/24. Use the lshw Linux
command to list all network interfaces and the corresponding slots *[0000:xx:yy.z]*.

For example, the interface *ens802d1* (*ens802*) is connected to slot *[82:00.0]*:

.. code-block:: console

   csp2s22c04$ sudo lshw -class network -businfo
   Bus info          Device      Class      Description
   =====================================================
   pci@0000:03:00.0  enp3s0f0    network    Ethernet Controller 10-Gigabit X540-AT2
   pci@0000:03:00.1  enp3s0f1    network    Ethernet Controller 10-Gigabit X540-AT2
   pci@0000:82:00.0  ens802d1    network    Ethernet Controller XL710 for 40GbE QSFP+
   pci@0000:82:00.0  ens802      network    Ethernet interface

For kernel forwarding, assign 10.10.1.2 to the interface *ens802*, and add a static
route for the 10.10.2.0/24 network:

.. code-block:: console

   csp2s22c04$ sudo ip addr add 10.10.1.2/24 dev ens802
   csp2s22c04$ sudo ip link set dev ens802 up
   csp2s22c04$ sudo ip route add 10.10.2.0/24 via 10.10.1.1

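Before moving on, you can confirm that the static route took effect. The check below
is a sketch; the two routes shown follow directly from the address and route we just
configured:

.. code-block:: console

   csp2s22c04$ ip route
   10.10.1.0/24 dev ens802 proto kernel scope link src 10.10.1.2
   10.10.2.0/24 via 10.10.1.1 dev ens802
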
The resulting interface configuration on *csp2s22c04*:

.. code-block:: console

   csp2s22c04$ ifconfig
   enp3s0f0  Link encap:Ethernet  HWaddr a4:bf:01:00:92:73
             inet addr:10.23.3.62  Bcast:10.23.3.255  Mask:255.255.255.0
             inet6 addr: fe80::a6bf:1ff:fe00:9273/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
             RX packets:3411 errors:0 dropped:0 overruns:0 frame:0
             TX packets:1179 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:262230 (262.2 KB)  TX bytes:139975 (139.9 KB)

   ens802    Link encap:Ethernet  HWaddr 68:05:ca:2e:76:e0
             inet addr:10.10.1.2  Bcast:0.0.0.0  Mask:255.255.255.0
             inet6 addr: fe80::6a05:caff:fe2e:76e0/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
             RX packets:0 errors:0 dropped:0 overruns:0 frame:0
             TX packets:40 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:0 (0.0 B)  TX bytes:5480 (5.4 KB)

   lo        Link encap:Local Loopback
             inet addr:127.0.0.1  Mask:255.0.0.0
             inet6 addr: ::1/128 Scope:Host
             UP LOOPBACK RUNNING  MTU:65536  Metric:1
             RX packets:31320 errors:0 dropped:0 overruns:0 frame:0
             TX packets:31320 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1
             RX bytes:40301788 (40.3 MB)  TX bytes:40301788 (40.3 MB)

After setting the route, we can ping from *csp2s22c03* to *csp2s22c04*, and vice versa:

.. code-block:: console

   csp2s22c03$ ping 10.10.1.2 -c 3
   PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
   64 bytes from 10.10.1.2: icmp_seq=1 ttl=64 time=0.122 ms
   64 bytes from 10.10.1.2: icmp_seq=2 ttl=64 time=0.109 ms
   64 bytes from 10.10.1.2: icmp_seq=3 ttl=64 time=0.120 ms

.. code-block:: console

   csp2s22c04$ ping 10.10.1.1 -c 3
   PING 10.10.1.1 (10.10.1.1) 56(84) bytes of data.
   64 bytes from 10.10.1.1: icmp_seq=1 ttl=64 time=0.158 ms
   64 bytes from 10.10.1.1: icmp_seq=2 ttl=64 time=0.096 ms
   64 bytes from 10.10.1.1: icmp_seq=3 ttl=64 time=0.102 ms

Similarly, on *net2s22c05*, we configure the system to have the address *10.10.2.2*
and use the interface *ens803f0* to route IP packets destined for *10.10.1.0/24*. Use the lshw
Linux command to list all network interfaces and the corresponding slots
*[0000:xx:yy.z]*. For example, the interface *ens803f0* is connected to slot *[87:00.0]*:

.. code-block:: console

   NET2S22C05$ sudo lshw -class network -businfo
   Bus info          Device      Class          Description
   ========================================================
   pci@0000:03:00.0  enp3s0f0    network        Ethernet Controller 10-Gigabit X540-AT2
   pci@0000:03:00.1  enp3s0f1    network        Ethernet Controller 10-Gigabit X540-AT2
   pci@0000:81:00.0  ens787f0    network        82599 10 Gigabit TN Network Connection
   pci@0000:81:00.1  ens787f1    network        82599 10 Gigabit TN Network Connection
   pci@0000:87:00.0  ens803f0    network        Ethernet Controller XL710 for 40GbE QSFP+
   pci@0000:87:00.1  ens803f1    network        Ethernet Controller XL710 for 40GbE QSFP+

For kernel forwarding, assign 10.10.2.2 to the interface *ens803f0*, and add a static
route for the 10.10.1.0/24 network:

.. code-block:: console

   NET2S22C05$ sudo ip addr add 10.10.2.2/24 dev ens803f0
   NET2S22C05$ sudo ip link set dev ens803f0 up
   NET2S22C05$ sudo ip route add 10.10.1.0/24 via 10.10.2.1

After setting the route, you can ping from *csp2s22c03* to *net2s22c05*, and vice
versa. However, in order to ping between *net2s22c05* and *csp2s22c04*, kernel IP
forwarding in *csp2s22c03* has to be enabled:

.. code-block:: console

   csp2s22c03$ sysctl net.ipv4.ip_forward
   net.ipv4.ip_forward = 0
   csp2s22c03$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
   csp2s22c03$ sysctl net.ipv4.ip_forward
   net.ipv4.ip_forward = 1

Verify that you can now ping between *net2s22c05* and *csp2s22c04*:

.. code-block:: console

   NET2S22C05$ ping 10.10.1.2 -c 3
   PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
   64 bytes from 10.10.1.2: icmp_seq=1 ttl=63 time=0.239 ms
   64 bytes from 10.10.1.2: icmp_seq=2 ttl=63 time=0.224 ms
   64 bytes from 10.10.1.2: icmp_seq=3 ttl=63 time=0.230 ms

We use the **iperf3** utility to measure network bandwidth between hosts. In this
test, we install the **iperf3** utility on both *net2s22c05* and *csp2s22c04*.
On *csp2s22c04*, we start the **iperf3** server with ``iperf3 -s``, and then on *net2s22c05*,
we start the **iperf3** client to connect to the server.
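For reference, starting the server side looks like the sketch below; the listening
port shown is iperf3's default (5201):

.. code-block:: console

   csp2s22c04$ iperf3 -s
   -----------------------------------------------------------
   Server listening on 5201
   -----------------------------------------------------------

The client run and the measured bandwidth: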
.. code-block:: console

   NET2S22C05$ iperf3 -c 10.10.1.2
   Connecting to host 10.10.1.2, port 5201
   [  4] local 10.10.2.2 port 54074 connected to 10.10.1.2 port 5201
   [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
   [  4]   0.00-1.00   sec   936 MBytes  7.85 Gbits/sec  2120    447 KBytes
   [  4]   1.00-2.00   sec   952 MBytes  7.99 Gbits/sec  1491    611 KBytes
   [  4]   2.00-3.00   sec   949 MBytes  7.96 Gbits/sec  2309    604 KBytes
   [  4]   3.00-4.00   sec   965 MBytes  8.10 Gbits/sec  1786    571 KBytes
   [  4]   4.00-5.00   sec   945 MBytes  7.93 Gbits/sec  1984    424 KBytes
   [  4]   5.00-6.00   sec   946 MBytes  7.94 Gbits/sec  1764    611 KBytes
   [  4]   6.00-7.00   sec   979 MBytes  8.21 Gbits/sec  1499    655 KBytes
   [  4]   7.00-8.00   sec   980 MBytes  8.22 Gbits/sec  1182    867 KBytes
   [  4]   8.00-9.00   sec  1008 MBytes  8.45 Gbits/sec   945    625 KBytes
   [  4]   9.00-10.00  sec  1015 MBytes  8.51 Gbits/sec  1394    611 KBytes
   - - - - - - - - - - - - - - - - - - - - - - - - -
   [ ID] Interval           Transfer     Bandwidth       Retr
   [  4]   0.00-10.00  sec  9.45 GBytes  8.12 Gbits/sec  16474             sender
   [  4]   0.00-10.00  sec  9.44 GBytes  8.11 Gbits/sec                  receiver

   iperf Done.

docs/usecases/simpleperf/iperf31.rst (new file, 119 lines)
@@ -0,0 +1,119 @@
.. _iperf31:

Using VPP with Iperf3
=====================

First, disable kernel IP forwarding in *csp2s22c03* to ensure the host cannot use
kernel forwarding (all the settings in *net2s22c05* and *csp2s22c04* remain unchanged):

.. code-block:: console

   csp2s22c03$ echo 0 | sudo tee /proc/sys/net/ipv4/ip_forward
   0
   csp2s22c03$ sysctl net.ipv4.ip_forward
   net.ipv4.ip_forward = 0

You can use DPDK's device binding utility (./install-vpp-native/dpdk/sbin/dpdk-devbind)
to list network devices and bind/unbind them from specific drivers. The flag "-s/--status"
shows the status of devices; the flag "-b/--bind" selects the driver to bind. The
status of devices in our system indicates that the two 40-GbE XL710 devices are located
at 82:00.0 and 82:00.1. Use the devices' slots to bind them to the driver uio_pci_generic:

.. code-block:: console

   csp2s22c03$ ./install-vpp-native/dpdk/sbin/dpdk-devbind -s

   Network devices using DPDK-compatible driver
   ============================================
   <none>

   Network devices using kernel driver
   ===================================
   0000:03:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f0 drv=ixgbe unused=vfio-pci,uio_pci_generic *Active*
   0000:03:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f1 drv=ixgbe unused=vfio-pci,uio_pci_generic *Active*
   0000:82:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens802f0d1,ens802f0 drv=i40e unused=uio_pci_generic
   0000:82:00.1 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens802f1d1,ens802f1 drv=i40e unused=uio_pci_generic

   Other network devices
   =====================
   <none>

   csp2s22c03$ sudo modprobe uio_pci_generic
   csp2s22c03$ sudo ./install-vpp-native/dpdk/sbin/dpdk-devbind --bind uio_pci_generic 82:00.0
   csp2s22c03$ sudo ./install-vpp-native/dpdk/sbin/dpdk-devbind --bind uio_pci_generic 82:00.1

   csp2s22c03$ sudo ./install-vpp-native/dpdk/sbin/dpdk-devbind -s

   Network devices using DPDK-compatible driver
   ============================================
   0000:82:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=uio_pci_generic unused=i40e,vfio-pci
   0000:82:00.1 'Ethernet Controller XL710 for 40GbE QSFP+' drv=uio_pci_generic unused=i40e,vfio-pci

   Network devices using kernel driver
   ===================================
   0000:03:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f0 drv=ixgbe unused=vfio-pci,uio_pci_generic *Active*
   0000:03:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f1 drv=ixgbe unused=vfio-pci,uio_pci_generic *Active*

Start the VPP service, and verify that VPP is running:

.. code-block:: console

   csp2s22c03$ sudo service vpp start
   csp2s22c03$ ps -ef | grep vpp
   root     105655      1 98 17:34 ?        00:00:02 /usr/bin/vpp -c /etc/vpp/startup.conf
            105675 105512  0 17:34 pts/4    00:00:00 grep --color=auto vpp

To access the VPP CLI, issue the command ``sudo vppctl``. From the VPP CLI prompt, list
all interfaces that are bound to DPDK using the command ``show interface``.
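The session below is a representative sketch; the interface names match the
``sho int`` listing shown in the TRex example later in this tutorial, and the
counter columns are omitted:

.. code-block:: console

   csp2s22c03$ sudo vppctl
   vpp# show interface
                 Name               Idx       State          Counter          Count
   FortyGigabitEthernet82/0/0        1        down
   FortyGigabitEthernet82/0/1        2        down
   local0                            0        down
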
VPP shows that the two 40-Gbps ports located at 82:0:0 and 82:0:1 are bound. Next,
you need to assign IP addresses to those interfaces, bring them up, and verify:

.. code-block:: console

   vpp# set interface ip address FortyGigabitEthernet82/0/0 10.10.1.1/24
   vpp# set interface ip address FortyGigabitEthernet82/0/1 10.10.2.1/24
   vpp# set interface state FortyGigabitEthernet82/0/0 up
   vpp# set interface state FortyGigabitEthernet82/0/1 up
   vpp# show interface address
   FortyGigabitEthernet82/0/0 (up):
     10.10.1.1/24
   FortyGigabitEthernet82/0/1 (up):
     10.10.2.1/24
   local0 (dn):

At this point VPP is operational. You can ping these interfaces either from *net2s22c05*
or *csp2s22c04*. Moreover, VPP can forward packets whose IP addresses are in 10.10.1.0/24 and
10.10.2.0/24, so you can ping between *net2s22c05* and *csp2s22c04*. You can also
run iperf3 as illustrated in the previous example; the result from running iperf3
between *net2s22c05* and *csp2s22c04* increases to 20.3 Gbits per second.

.. code-block:: console

   NET2S22C05$ iperf3 -c 10.10.1.2
   Connecting to host 10.10.1.2, port 5201
   [  4] local 10.10.2.2 port 54078 connected to 10.10.1.2 port 5201
   [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
   [  4]   0.00-1.00   sec  2.02 GBytes  17.4 Gbits/sec   460  1.01 MBytes
   [  4]   1.00-2.00   sec  3.28 GBytes  28.2 Gbits/sec     0  1.53 MBytes
   [  4]   2.00-3.00   sec  2.38 GBytes  20.4 Gbits/sec   486   693 KBytes
   [  4]   3.00-4.00   sec  2.06 GBytes  17.7 Gbits/sec  1099   816 KBytes
   [  4]   4.00-5.00   sec  2.07 GBytes  17.8 Gbits/sec   614  1.04 MBytes
   [  4]   5.00-6.00   sec  2.25 GBytes  19.3 Gbits/sec  2869   716 KBytes
   [  4]   6.00-7.00   sec  2.26 GBytes  19.4 Gbits/sec  3321   683 KBytes
   [  4]   7.00-8.00   sec  2.33 GBytes  20.0 Gbits/sec  2322   594 KBytes
   [  4]   8.00-9.00   sec  2.28 GBytes  19.6 Gbits/sec  1690  1.23 MBytes
   [  4]   9.00-10.00  sec  2.73 GBytes  23.5 Gbits/sec   573   680 KBytes
   - - - - - - - - - - - - - - - - - - - - - - - - -
   [ ID] Interval           Transfer     Bandwidth       Retr
   [  4]   0.00-10.00  sec  23.7 GBytes  20.3 Gbits/sec  13434             sender
   [  4]   0.00-10.00  sec  23.7 GBytes  20.3 Gbits/sec                  receiver

   iperf Done.

The **show run** command displays the graph runtime statistics. Observe that the
average vector per node is 6.76, which means that, on average, a vector of 6.76 packets
is handled in a graph node.
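To take a clean measurement, clear the runtime counters, rerun the iperf3 test
above while the counters accumulate, and then display the statistics. A minimal
sketch of the CLI sequence:

.. code-block:: console

   vpp# clear run
   vpp# show run
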
.. figure:: /_images/build-a-fast-network-stack-terminal.png

docs/usecases/simpleperf/trex.rst (new file, 133 lines)
@@ -0,0 +1,133 @@
.. _trex:

Using VPP with TRex
===================

In this example we use only two systems, *csp2s22c03* and *net2s22c05*, to run
**TRex**. VPP is installed on *csp2s22c03* and runs as a packet forwarding
engine. On *net2s22c05*, TRex is used to generate both client- and server-side
traffic. **TRex** is a high-performance traffic generator. It leverages DPDK and
runs in user space. Figure 2 illustrates this configuration.

VPP is set up on *csp2s22c03* exactly as it was in the previous example. Only
the setup on *net2s22c05* is modified slightly to run preconfigured TRex traffic
files.

.. figure:: /_images/trex.png

   Figure 2: The TRex traffic generator sends packets to the host that has VPP running.

First, we install **TRex**:

.. code-block:: console

   NET2S22C05$ wget --no-cache http://trex-tgn.cisco.com/trex/release/latest
   NET2S22C05$ tar -xzvf latest
   NET2S22C05$ cd v2.37

Then list the available network devices:

.. code-block:: console

   NET2S22C05$ sudo ./dpdk_nic_bind.py -s

   Network devices using DPDK-compatible driver
   ============================================
   0000:87:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=vfio-pci unused=i40e
   0000:87:00.1 'Ethernet Controller XL710 for 40GbE QSFP+' drv=vfio-pci unused=i40e

   Network devices using kernel driver
   ===================================
   0000:03:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f0 drv=ixgbe unused=vfio-pci *Active*
   0000:03:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f1 drv=ixgbe unused=vfio-pci
   0000:81:00.0 '82599 10 Gigabit TN Network Connection' if=ens787f0 drv=ixgbe unused=vfio-pci
   0000:81:00.1 '82599 10 Gigabit TN Network Connection' if=ens787f1 drv=ixgbe unused=vfio-pci

   Other network devices
   =====================
   <none>

Create the */etc/trex_cfg.yaml* configuration file. In this configuration file,
the ports should match the interfaces available in the target system, which is
*net2s22c05* in our example. The IP addresses correspond to Figure 2. For more
information on the configuration file, please refer to the `TRex Manual <http://trex-tgn.cisco.com/trex/doc/index.html>`_.

.. code-block:: console

   NET2S22C05$ cat /etc/trex_cfg.yaml
   - port_limit: 2
     version: 2
     interfaces: ['87:00.0', '87:00.1']
     port_bandwidth_gb: 40
     port_info:
       - ip: 10.10.2.2
         default_gw: 10.10.2.1
       - ip: 10.10.1.2
         default_gw: 10.10.1.1

     platform:
       master_thread_id: 0
       latency_thread_id: 1
       dual_if:
         - socket: 1
           threads: [22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43]

Stop the previous VPP session and start it again in order to add a route for the new
IP addresses 16.0.0.0/8 and 48.0.0.0/8, according to Figure 2. Those IP addresses
are needed because TRex generates packets that use these addresses. Refer to the
`TRex Manual <http://trex-tgn.cisco.com/trex/doc/index.html>`_ for details on
these traffic templates.

.. code-block:: console

   csp2s22c03$ sudo service vpp stop
   csp2s22c03$ sudo service vpp start
   csp2s22c03$ sudo vppctl
       _______    _        _   _____  ___
    __/ __/ _ \  (_)__    | | / / _ \/ _ \
    _/ _// // / / / _ \   | |/ / ___/ ___/
    /_/ /____(_)_/\___/   |___/_/  /_/

   vpp# sho int
                 Name               Idx       State          Counter          Count
   FortyGigabitEthernet82/0/0        1        down
   FortyGigabitEthernet82/0/1        2        down
   local0                            0        down

   vpp#
   vpp# set interface ip address FortyGigabitEthernet82/0/0 10.10.1.1/24
   vpp# set interface ip address FortyGigabitEthernet82/0/1 10.10.2.1/24
   vpp# set interface state FortyGigabitEthernet82/0/0 up
   vpp# set interface state FortyGigabitEthernet82/0/1 up
   vpp# ip route add 16.0.0.0/8 via 10.10.1.2
   vpp# ip route add 48.0.0.0/8 via 10.10.2.2
   vpp# clear run

Now, you can generate a simple traffic flow from *net2s22c05* using the traffic
configuration file "cap2/dns.yaml". The **-d** option sets the duration of the test
in seconds, and **-l** sets the rate of the latency-measurement packets in packets
per second:

.. code-block:: console

   NET2S22C05$ sudo ./t-rex-64 -f cap2/dns.yaml -d 1 -l 1000
   summary stats
   --------------
   Total-pkt-drop       : 0 pkts
   Total-tx-bytes       : 166886 bytes
   Total-tx-sw-bytes    : 166716 bytes
   Total-rx-bytes       : 166886 byte

   Total-tx-pkt         : 2528 pkts
   Total-rx-pkt         : 2528 pkts
   Total-sw-tx-pkt      : 2526 pkts
   Total-sw-err         : 0 pkts
   Total ARP sent       : 4 pkts
   Total ARP received   : 2 pkts
   maximum-latency      : 35 usec
   average-latency      : 8 usec
   latency-any-error    : OK

On *csp2s22c03*, the *show run* command displays the graph runtime statistics.

.. figure:: /_images/build-a-fast-network-stack-terminal-2.png


docs/usecases/simpleperf/trex1.rst (new file, 44 lines)
@@ -0,0 +1,44 @@
.. _trex1:

Using VPP with TRex Mixed Traffic Templates
===========================================

In this example, a more complicated traffic profile with delays is generated on
*net2s22c05* using the traffic configuration file "avl/sfr_delay_10_1g.yaml". The
**-c** option sets the number of cores, and **-m** sets the rate multiplier:

.. code-block:: console

   NET2S22C05$ sudo ./t-rex-64 -f avl/sfr_delay_10_1g.yaml -c 2 -m 20 -d 100 -l 1000
   summary stats
   --------------
   Total-pkt-drop       : 43309 pkts
   Total-tx-bytes       : 251062132504 bytes
   Total-tx-sw-bytes    : 21426636 bytes
   Total-rx-bytes       : 251040139922 byte

   Total-tx-pkt         : 430598064 pkts
   Total-rx-pkt         : 430554755 pkts
   Total-sw-tx-pkt      : 324646 pkts
   Total-sw-err         : 0 pkts
   Total ARP sent       : 5 pkts
   Total ARP received   : 4 pkts
   maximum-latency      : 1278 usec
   average-latency      : 9 usec
   latency-any-error    : ERROR

On *csp2s22c03*, use the VPP CLI command *show run* to display the graph runtime statistics.
Observe that the average vectors per node are 10.69 and 14.47:

.. figure:: /_images/build-a-fast-network-stack-terminal-3.png

Summary
=======

This tutorial showed how to download, compile, and install the VPP binary on an
Intel® Architecture platform. Examples of the /etc/sysctl.d/80-vpp.conf and
/etc/vpp/startup.conf configuration files were provided to get the
user up and running with VPP. The tutorial also illustrated how to detect and bind
the network interfaces to a DPDK-compatible driver. You can use the VPP CLI to assign
IP addresses to these interfaces and bring them up. Finally, four examples using iperf3
and TRex were included to show how VPP processes packets in batches.