.. _Routing:
.. toctree::
Connecting the two Containers
_____________________________
Now we will connect the two Linux containers to VPP and ping between them.
Enter container *cone*, and check the current network configuration:
.. code-block:: console
root@cone:/# ip -o a
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
30: veth0 inet 10.0.3.157/24 brd 10.0.3.255 scope global veth0\ valid_lft forever preferred_lft forever
30: veth0 inet6 fe80::216:3eff:fee2:d0ba/64 scope link \ valid_lft forever preferred_lft forever
32: veth_link1 inet6 fe80::2c9d:83ff:fe33:37e/64 scope link \ valid_lft forever preferred_lft forever
You can see that there are three network interfaces: *lo*, *veth0*, and *veth_link1*.
Notice that *veth_link1* has no assigned IP.
Check if the interfaces are down or up:
.. code-block:: console
root@cone:/# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
30: veth0@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:16:3e:e2:d0:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
32: veth_link1@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 2e:9d:83:33:03:7e brd ff:ff:ff:ff:ff:ff link-netnsid 0
.. _networkNote:
.. note::
Take note of the network index for **veth_link1**. In our case, it is 32, and its parent index (on the host machine, not in the containers) is 33, shown by **veth_link1@if33**. Yours will most likely be different, but **please take note of these indices**.
Make sure your loopback interface is up, and assign an IP and gateway to veth_link1.
.. code-block:: console
root@cone:/# ip link set dev lo up
root@cone:/# ip addr add 172.16.1.2/24 dev veth_link1
root@cone:/# ip link set dev veth_link1 up
root@cone:/# dhclient -r
root@cone:/# ip route add default via 172.16.1.1 dev veth_link1
Here, the IP is 172.16.1.2/24 and the gateway is 172.16.1.1.
Run some commands to verify the changes:
.. code-block:: console
root@cone:/# ip -o a
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
30: veth0 inet6 fe80::216:3eff:fee2:d0ba/64 scope link \ valid_lft forever preferred_lft forever
32: veth_link1 inet 172.16.1.2/24 scope global veth_link1\ valid_lft forever preferred_lft forever
32: veth_link1 inet6 fe80::2c9d:83ff:fe33:37e/64 scope link \ valid_lft forever preferred_lft forever
root@cone:/# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.16.1.1 0.0.0.0 UG 0 0 0 veth_link1
172.16.1.0 * 255.255.255.0 U 0 0 0 veth_link1
We see that the IP has been assigned, as well as our default gateway.
Now exit this container and repeat this process with container *ctwo*, except with IP 172.16.2.2/24 and gateway 172.16.2.1.
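For reference, the equivalent configuration inside *ctwo* looks like this (a sketch mirroring the *cone* commands above; your interface names may differ):

.. code-block:: console

    root@ctwo:/# ip link set dev lo up
    root@ctwo:/# ip addr add 172.16.2.2/24 dev veth_link1
    root@ctwo:/# ip link set dev veth_link1 up
    root@ctwo:/# dhclient -r
    root@ctwo:/# ip route add default via 172.16.2.1 dev veth_link1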
After that's done for *both* containers, exit from the container if you're in one:
.. code-block:: console
root@ctwo:/# exit
exit
root@localhost:~#
In the machine running the containers, run **ip link** to see the host *veth* network interfaces and their links to their respective *container veths*.
.. code-block:: console
root@localhost:~# ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noqueue state DOWN mode DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:33:82:8a brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:d9:9f:ac brd ff:ff:ff:ff:ff:ff
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:78:84:9d brd ff:ff:ff:ff:ff:ff
5: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
19: veth0C2FL7@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxcbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:0d:da:90:c1:65 brd ff:ff:ff:ff:ff:ff link-netnsid 1
21: veth8NA72P@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether fe:1c:9e:01:9f:82 brd ff:ff:ff:ff:ff:ff link-netnsid 1
31: vethXQMY4C@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxcbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:9a:d9:29:40:bb brd ff:ff:ff:ff:ff:ff link-netnsid 0
33: vethQL7K0C@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether fe:ed:89:54:47:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Remember our network interface index 32 in *cone* from this :ref:`note <networkNote>`? We can see at the bottom that the interface with index 33 is named **vethQL7K0C@if32**. Take note of this network interface name for the veth connected to *cone* (e.g. vethQL7K0C), and of the corresponding name for *ctwo*.
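If you prefer not to scan the output by eye, you can find the host-side peer directly by searching for the container index noted above (assuming index 32 from the note):

.. code-block:: console

    root@localhost:~# ip link | grep @if32
    33: vethQL7K0C@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000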
With VPP running on the host machine, show the current VPP interfaces:
.. code-block:: console
root@localhost:~# vppctl show inter
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
local0 0 down 0/0/0/0
The output should show only local0.
Based on the names of the network interfaces discussed previously, which are specific to this system, we can create VPP host-interfaces:
.. code-block:: console
root@localhost:~# vppctl create host-interface name vethQL7K0C
root@localhost:~# vppctl create host-interface name veth8NA72P
Verify they have been set up properly:
.. code-block:: console
root@localhost:~# vppctl show inter
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
host-vethQL7K0C 1 down 9000/0/0/0
host-veth8NA72P 2 down 9000/0/0/0
local0 0 down 0/0/0/0
The output should now show *three network interfaces*: local0 and the two host network interfaces linked to the container veths.
Set their state to up:
.. code-block:: console
root@localhost:~# vppctl set interface state host-vethQL7K0C up
root@localhost:~# vppctl set interface state host-veth8NA72P up
Verify they are now up:
.. code-block:: console
root@localhost:~# vppctl show inter
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
host-vethQL7K0C 1 up 9000/0/0/0
host-veth8NA72P 2 up 9000/0/0/0
local0 0 down 0/0/0/0
Add IP addresses for the other end of each veth link:
.. code-block:: console
root@localhost:~# vppctl set interface ip address host-vethQL7K0C 172.16.1.1/24
root@localhost:~# vppctl set interface ip address host-veth8NA72P 172.16.2.1/24
Verify the addresses are set properly by looking at the L3 table:
.. code-block:: console
root@localhost:~# vppctl show inter addr
host-vethQL7K0C (up):
L3 172.16.1.1/24
host-veth8NA72P (up):
L3 172.16.2.1/24
local0 (dn):
Or by looking at the FIB:
.. code-block:: console
root@localhost:~# vppctl show ip fib
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] locks:[src:plugin-hi:2, src:default-route:1, ]
0.0.0.0/0
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
[0] [@0]: dpo-drop ip4
172.16.1.0/32
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:10 buckets:1 uRPF:9 to:[0:0]]
[0] [@0]: dpo-drop ip4
172.16.1.0/24
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:8 to:[0:0]]
[0] [@4]: ipv4-glean: host-vethQL7K0C: mtu:9000 ffffffffffff02fec953f98c0806
172.16.1.1/32
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:13 to:[0:0]]
[0] [@2]: dpo-receive: 172.16.1.1 on host-vethQL7K0C
172.16.1.255/32
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:11 buckets:1 uRPF:11 to:[0:0]]
[0] [@0]: dpo-drop ip4
172.16.2.0/32
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:14 buckets:1 uRPF:15 to:[0:0]]
[0] [@0]: dpo-drop ip4
172.16.2.0/24
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:14 to:[0:0]]
[0] [@4]: ipv4-glean: host-veth8NA72P: mtu:9000 ffffffffffff02fe305400e80806
172.16.2.1/32
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:19 to:[0:0]]
[0] [@2]: dpo-receive: 172.16.2.1 on host-veth8NA72P
172.16.2.255/32
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:15 buckets:1 uRPF:17 to:[0:0]]
[0] [@0]: dpo-drop ip4
224.0.0.0/4
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
[0] [@0]: dpo-drop ip4
At long last you probably want to see some pings:
.. code-block:: console
root@localhost:~# lxc-attach -n cone -- ping -c3 172.16.2.2
PING 172.16.2.2 (172.16.2.2) 56(84) bytes of data.
64 bytes from 172.16.2.2: icmp_seq=1 ttl=63 time=0.102 ms
64 bytes from 172.16.2.2: icmp_seq=2 ttl=63 time=0.189 ms
64 bytes from 172.16.2.2: icmp_seq=3 ttl=63 time=0.150 ms
--- 172.16.2.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.102/0.147/0.189/0.035 ms
root@localhost:~# lxc-attach -n ctwo -- ping -c3 172.16.1.2
PING 172.16.1.2 (172.16.1.2) 56(84) bytes of data.
64 bytes from 172.16.1.2: icmp_seq=1 ttl=63 time=0.111 ms
64 bytes from 172.16.1.2: icmp_seq=2 ttl=63 time=0.089 ms
64 bytes from 172.16.1.2: icmp_seq=3 ttl=63 time=0.096 ms
--- 172.16.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.089/0.098/0.111/0.014 ms
Each command should send and receive three packets.
This is the end of this guide. Great work!
+123
View File
@@ -0,0 +1,123 @@
.. _containerCreation:
.. toctree::
Creating Containers
___________________
First you should have root privileges:
.. code-block:: console
$ sudo bash
Then install packages for containers such as lxc:
.. code-block:: console
# apt-get install bridge-utils lxc
As quoted from the `lxc.conf manpage <https://linuxcontainers.org/it/lxc/manpages/man5/lxc.conf.5.html>`_, "container configuration is held in the config stored in the container's directory.
A basic configuration is generated at container creation time with the default's recommended for the chosen template as well as extra default keys coming from the default.conf file."
"That *default.conf* file is either located at /etc/lxc/default.conf or for unprivileged containers at ~/.config/lxc/default.conf."
Since we want to ping between two containers, we'll need to **add to this file**.
Look at the contents of *default.conf*, which should initially look like this:
.. code-block:: console
# cat /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
As you can see, by default there is one veth interface.
Now you will *append to this file* so that each container you create will have an interface for a Linux bridge and an unconsumed second interface.
You can do this by piping *echo* output into *tee*, where each line is separated with a newline character *\\n* as shown below. Alternatively, you can manually add to this file with a text editor such as **vi**, but make sure you have root privileges.
.. code-block:: console
# echo -e "lxc.network.name = veth0\nlxc.network.type = veth\nlxc.network.name = veth_link1" | sudo tee -a /etc/lxc/default.conf
Inspect the contents again to verify the file was indeed modified:
.. code-block:: console
# cat /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.network.name = veth0
lxc.network.type = veth
lxc.network.name = veth_link1
After this, we're ready to create the containers.
Create an Ubuntu Xenial container named *cone*:
.. code-block:: console
# lxc-create -t download -n cone -- --dist ubuntu --release xenial --arch amd64 --keyserver hkp://p80.pool.sks-keyservers.net:80
If successful, you'll get an output similar to this:
.. code-block:: console
You just created an Ubuntu xenial amd64 (20180625_07:42) container.
To enable SSH, run: apt install openssh-server
No default root or user password are set by LXC.
Make another container "ctwo".
.. code-block:: console
# lxc-create -t download -n ctwo -- --dist ubuntu --release xenial --arch amd64 --keyserver hkp://p80.pool.sks-keyservers.net:80
List your containers to verify they exist:
.. code-block:: console
# lxc-ls
cone ctwo
Start the first container:
.. code-block:: console
# lxc-start --name cone
And verify it's running:
.. code-block:: console
# lxc-ls --fancy
NAME STATE AUTOSTART GROUPS IPV4 IPV6
cone RUNNING 0 - - -
ctwo STOPPED 0 - - -
.. note::
Here are some `lxc container commands <https://help.ubuntu.com/lts/serverguide/lxc.html.en-GB#lxc-basic-usage>`_ you may find useful:
.. code-block:: console
sudo lxc-ls --fancy
sudo lxc-start --name u1 --daemon
sudo lxc-info --name u1
sudo lxc-stop --name u1
sudo lxc-destroy --name u1
.. _containerSetup:
.. toctree::
Container packages
__________________
Now we can go into container *cone*, install prerequisites such as VPP, and perform some additional commands.
To enter our container via the shell, type:
.. code-block:: console
# lxc-attach -n cone
root@cone:/#
Run the Linux DHCP setup and install VPP:
.. code-block:: console
root@cone:/# resolvconf -d eth0
root@cone:/# dhclient
root@cone:/# apt-get install -y wget
root@cone:/# echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io.ubuntu.xenial.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
root@cone:/# apt-get update
root@cone:/# apt-get install -y --force-yes vpp
root@cone:/# sh -c 'echo -e "\ndpdk {\n  no-pci\n}" >> /etc/vpp/startup.conf'
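The appended stanza in /etc/vpp/startup.conf should then read:

.. code-block:: console

    dpdk {
      no-pci
    }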
After this is done, start VPP in this container:
.. code-block:: console
root@cone:/# service vpp start
Exit this container with the **exit** command (you *may* need to run **exit** twice):
.. code-block:: console
root@cone:/# exit
exit
root@cone:/# exit
exit
root@localhost:~#
Repeat the container setup on this page for the second container **ctwo**. Go to the end of the previous page if you forgot how to start a container.
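As a sketch, the same sequence inside *ctwo* would be:

.. code-block:: console

    # lxc-start --name ctwo
    # lxc-attach -n ctwo
    root@ctwo:/# resolvconf -d eth0
    root@ctwo:/# dhclient
    root@ctwo:/# apt-get install -y wget
    root@ctwo:/# echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io.ubuntu.xenial.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
    root@ctwo:/# apt-get update
    root@ctwo:/# apt-get install -y --force-yes vpp
    root@ctwo:/# sh -c 'echo -e "\ndpdk {\n  no-pci\n}" >> /etc/vpp/startup.conf'
    root@ctwo:/# service vpp start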
.. _containers:
FD.io VPP with Containers
=========================
This section will cover connecting two Linux containers with VPP. A container is essentially a lighter-weight, faster alternative to a VM, because it does not emulate a separate kernel and hardware. You can read more about `Linux containers here <https://linuxcontainers.org/>`_.
.. toctree::
containerCreation
containerSetup
Routing
.. _homegateway:
.. toctree::
Using VPP as a Home Gateway
===========================
VPP running on a small system (with appropriate NICs) makes a fine
home gateway. The resulting system performs far in excess of
requirements: a TAG=vpp_debug image runs at a vector size of ~1.1
while terminating a 90-Mbit down / 10-Mbit up cable modem connection.
At a minimum, install sshd and the isc-dhcp-server. If you prefer, you
can use dnsmasq.
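On a Debian or Ubuntu system, that might look like this (package names assumed)::

    # apt-get install openssh-server isc-dhcp-server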
Configuration files
-------------------
/etc/vpp/startup.conf::
unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
startup-config /setup.gate
gid vpp
}
api-segment {
gid vpp
}
dpdk {
dev 0000:03:00.0
dev 0000:14:00.0
etc.
poll-sleep 10
}
isc-dhcp-server configuration::
subnet 192.168.1.0 netmask 255.255.255.0 {
range 192.168.1.10 192.168.1.99;
option routers 192.168.1.1;
option domain-name-servers 8.8.8.8;
}
If you decide to enable the VPP DNS name resolver, substitute
192.168.1.2 for 8.8.8.8 in the DHCP server configuration.
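For example, the substituted subnet stanza would read::

    subnet 192.168.1.0 netmask 255.255.255.0 {
     range 192.168.1.10 192.168.1.99;
     option routers 192.168.1.1;
     option domain-name-servers 192.168.1.2;
    }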
/etc/ssh/sshd_config::
# What ports, IPs and protocols we listen for
Port <REDACTED-high-number-port>
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no
For your own comfort and safety, do NOT allow password authentication
and do not answer ssh requests on port 22. Experience shows several
hack attempts per hour on port 22, but none (ever) on random
high-number ports.
vpp configuration::
comment { This is the WAN interface }
set int state GigabitEthernet3/0/0 up
comment { set int mac address GigabitEthernet3/0/0 mac-to-clone-if-needed }
set dhcp client intfc GigabitEthernet3/0/0 hostname vppgate
comment { Create a BVI loopback interface}
loop create
set int l2 bridge loop0 1 bvi
set int ip address loop0 192.168.1.1/24
set int state loop0 up
comment { Add more inside interfaces as needed ... }
set int l2 bridge GigabitEthernet0/14/0 1
set int state GigabitEthernet0/14/0 up
comment { dhcp server and host-stack access }
tap connect lstack address 192.168.1.2/24
set int l2 bridge tapcli-0 1
set int state tapcli-0 up
comment { Configure NAT}
nat44 add interface address GigabitEthernet3/0/0
set interface nat44 in loop0 out GigabitEthernet3/0/0
comment { allow inbound ssh to the <REDACTED-high-number-port> }
nat44 add static mapping local 192.168.1.2 <REDACTED> external GigabitEthernet3/0/0 <REDACTED> tcp
comment { if you want to use the vpp DNS server, add the following }
comment { Remember to adjust the isc-dhcp-server configuration appropriately }
comment { nat44 add identity mapping external GigabitEthernet3/0/0 udp 53053 }
comment { bin dns_name_server_add_del 8.8.8.8 }
comment { bin dns_name_server_add_del 68.87.74.166 }
comment { bin dns_enable_disable }
comment { see patch below, which adds these commands }
service restart isc-dhcp-server
add default linux route via 192.168.1.1
Patches
-------
You'll need this patch to add the "service restart" and "add default
linux route" commands::
diff --git a/src/vpp/vnet/main.c b/src/vpp/vnet/main.c
index 6e136e19..69189c93 100644
--- a/src/vpp/vnet/main.c
+++ b/src/vpp/vnet/main.c
@@ -18,6 +18,8 @@
#include <vlib/unix/unix.h>
#include <vnet/plugin/plugin.h>
#include <vnet/ethernet/ethernet.h>
+#include <vnet/ip/ip4_packet.h>
+#include <vnet/ip/format.h>
#include <vpp/app/version.h>
#include <vpp/api/vpe_msg_enum.h>
#include <limits.h>
@@ -400,6 +402,63 @@ VLIB_CLI_COMMAND (test_crash_command, static) = {
#endif
+static clib_error_t *
+restart_isc_dhcp_server_command_fn (vlib_main_t * vm,
+ unformat_input_t * input,
+ vlib_cli_command_t * cmd)
+{
+ int rv __attribute__((unused));
+ /* Wait three seconds... */
+ vlib_process_suspend (vm, 3.0);
+
+ rv = system ("/usr/sbin/service isc-dhcp-server restart");
+
+ vlib_cli_output (vm, "Restarted the isc-dhcp-server...");
+ return 0;
+}
+
+/* *INDENT-OFF* */
+VLIB_CLI_COMMAND (restart_isc_dhcp_server_command, static) = {
+ .path = "service restart isc-dhcp-server",
+ .short_help = "restarts the isc-dhcp-server",
+ .function = restart_isc_dhcp_server_command_fn,
+};
+/* *INDENT-ON* */
+
+static clib_error_t *
+add_default_linux_route_command_fn (vlib_main_t * vm,
+ unformat_input_t * input,
+ vlib_cli_command_t * c)
+{
+ int rv __attribute__((unused));
+ ip4_address_t ip4_addr;
+ u8 *cmd;
+
+ if (!unformat (input, "%U", unformat_ip4_address, &ip4_addr))
+ return clib_error_return (0, "default gateway address required...");
+
+ cmd = format (0, "/sbin/route add -net 0.0.0.0/0 gw %U",
+ format_ip4_address, &ip4_addr);
+ vec_add1 (cmd, 0);
+
+ rv = system (cmd);
+
+ vlib_cli_output (vm, "%s", cmd);
+
+ vec_free(cmd);
+
+ return 0;
+}
+
+/* *INDENT-OFF* */
+VLIB_CLI_COMMAND (add_default_linux_route_command, static) = {
+ .path = "add default linux route via",
+ .short_help = "Adds default linux route: 0.0.0.0/0 via <addr>",
+ .function = add_default_linux_route_command_fn,
+};
+/* *INDENT-ON* */
+
+
Using the temporal mac filter plugin
------------------------------------
If you need to restrict network access for certain devices to specific
daily time ranges, configure the "mactime" plugin. Enable the feature
on the NAT "inside" interfaces::
bin mactime_enable_disable GigabitEthernet0/14/0
bin mactime_enable_disable GigabitEthernet0/14/1
...
Create the required src-mac-address rule database. There are 4 rule
entry types:
* allow-static - pass traffic from this mac address
* drop-static - drop traffic from this mac address
* allow-range - pass traffic from this mac address at specific times
* drop-range - drop traffic from this mac address at specific times
Here are some examples::
bin mactime_add_del_range name alarm-system mac 00:de:ad:be:ef:00 allow-static
bin mactime_add_del_range name unwelcome mac 00:de:ad:be:ef:01 drop-static
bin mactime_add_del_range name not-during-business-hours mac <mac> drop-range Mon - Fri 7:59 - 18:01
bin mactime_add_del_range name monday-business-hours mac <mac> allow-range Mon 7:59 - 18:01
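To check the resulting rule database, the plugin provides a show command (assuming your VPP build includes it)::

    vpp# show mactime verbose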
.. _usecases:
Use Cases
==========
This chapter contains a sample of the many ways FD.io VPP can be used. It is by no means an
exhaustive list, but it should give a sense of the many features contained in FD.io VPP.
.. toctree::
containers
vhost/index.rst
homegateway
uc_vSwitchvRouter
.. _vswitch:
.. toctree::
.. _vswitchrtr:
vSwitch/vRouter
===============
FD.io VPP as a vSwitch/vRouter
------------------------------
.. note::
We need to provide commands and show how to use VPP as a vSwitch/vRouter.
One of the use cases for the FD.io VPP platform is to implement it as a
virtual switch or router. The following section describes examples of
possible implementations that can be created with the FD.io VPP platform. For
more in-depth descriptions of other possible use cases, see the list of
.. figure:: /_images/VPP_App_as_a_vSwitch_x201.jpg
:alt: Figure: Linux host as a vSwitch
:align: right
Figure: Linux host as a vSwitch
You can use the FD.io VPP platform to create out-of-the-box virtual switches
(vSwitch) and virtual routers (vRouter). The FD.io VPP platform allows you to
manage certain functions and configurations of these applications through
a command-line interface (CLI).
Some of the functionality that a switching application can create
includes:
* Bridge Domains
* Ports (including tunnel ports)
* Connect ports to bridge domains
* Program ARP termination
Some of the functionality that a routing application can create
includes:
* Virtual Routing and Forwarding (VRF) tables (in the thousands)
* Routes (in the millions)
.. _vhost:
FD.io VPP with Virtual Machines
===============================
This chapter will describe how to use FD.io VPP with virtual machines. We describe
how to create Vhost ports with VPP and how to connect virtual machines to them. We will also discuss
some limitations of Vhost.
.. toctree::
vhost
vhost02
vhost03
vhost04
vhost05
xmlexample
<domain type='kvm' id='54'>
<name>iperf-server</name>
<memory unit='KiB'>1048576</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB'/>
</hugepages>
</memoryBacking>
<vcpu placement='static'>1</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-xenial'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='host-model'>
<model fallback='allow'></model>
<numa>
<cell id='0' cpus='0' memory='262144' unit='KiB' memAccess='shared'/>
</numa>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/tmp/xenial-mod.img'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/scratch/jdenisco/sae/configs/cloud-config.iso'/>
<backingStore/>
<target dev='hda' bus='ide'/>
<readonly/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x7'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>
<interface type='vhostuser'>
<mac address='52:54:00:4c:47:f2'/>
<source type='unix' path='/tmp//vm00.sock' mode='server'/>
<model type='virtio'/>
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/2'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/2'>
<source path='/dev/pts/2'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
<listen type='address' address='127.0.0.1'/>
</graphics>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='apparmor' relabel='yes'>
<label>libvirt-2c4c9317-c7a5-4b37-b789-386ccda7348a</label>
<imagelabel>libvirt-2c4c9317-c7a5-4b37-b789-386ccda7348a</imagelabel>
</seclabel>
</domain>
.. toctree::
.. _vhost01:
Prerequisites
-------------
For this use case we will assume FD.io VPP is installed. We will also assume the user can create and start
basic virtual machines. This use case uses the Linux virsh commands. For more information on virsh,
refer to the `virsh man page <https://linux.die.net/man/1/virsh>`_.
The image that we use is based on an Ubuntu cloud image downloaded from:
`Ubuntu Cloud Images <https://cloud-images.ubuntu.com/xenial/current>`_.
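For example, a suitable image could be fetched like this (the exact file name is an assumption; check the index page for the current name):

.. code-block:: console

    $ wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img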
All FD.io VPP commands are run from a su shell.
.. _vhosttopo:
Topology
---------
In this case we will use two systems. One system will be running standard Linux; the other will
be running FD.io VPP.
.. figure:: /_images/vhost-topo.png
:alt:
Vhost Use Case Topology
Creating The Virtual Interface
------------------------------
We will start on the system running FD.io VPP and show that no virtual interfaces have been created.
We do this using the :ref:`showintcommand` command.
Notice we do not have any virtual interfaces. We do have an interface (TenGigabitEthernet86/0/0) that
is up. This interface is connected to a system running, in our example, standard Linux. We will use
this system to verify our connectivity to our VM with ping.
.. code-block:: console
$ sudo bash
# vppctl
_______ _ _ _____ ___
__/ __/ _ \ (_)__ | | / / _ \/ _ \
_/ _// // / / / _ \ | |/ / ___/ ___/
/_/ /____(_)_/\___/ |___/_/ /_/
vpp# clear interfaces
vpp# show int
Name Idx State Counter Count
TenGigabitEthernet86/0/0 1 up
TenGigabitEthernet86/0/1 2 down
local0 0 down
vpp#
For more information on the interface commands, refer to :ref:`intcommands`.
The next step will be to create the virtual port using the :ref:`createvhostuser` command.
This command will create the virtual port in VPP and create a Linux socket that the VM will
use to connect to VPP.
The port can be created using VPP as the socket server or client.
Creating the VPP port:
.. code-block:: console
vpp# create vhost socket /tmp/vm00.sock
VirtualEthernet0/0/0
vpp# show int
Name Idx State Counter Count
TenGigabitEthernet86/0/0 1 up
TenGigabitEthernet86/0/1 2 down
VirtualEthernet0/0/0 3 down
local0 0 down
vpp#
Notice the interface **VirtualEthernet0/0/0**. In this example we created the virtual interface as
a client.
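Had we wanted VPP to act as the socket server instead, the same command accepts a *server* keyword (a sketch; verify the syntax for your VPP version):

.. code-block:: console

    vpp# create vhost socket /tmp/vm00.sock server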
We can get more detail on the vhost connection with the :ref:`showvhost` command.
.. code-block:: console
vpp# show vhost
Virtio vhost-user interfaces
Global:
coalesce frames 32 time 1e-3
number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 3)
virtio_net_hdr_sz 12
features mask (0xffffffffffffffff):
features (0x58208000):
VIRTIO_NET_F_MRG_RXBUF (15)
VIRTIO_NET_F_GUEST_ANNOUNCE (21)
VIRTIO_F_ANY_LAYOUT (27)
VIRTIO_F_INDIRECT_DESC (28)
VHOST_USER_F_PROTOCOL_FEATURES (30)
protocol features (0x3)
VHOST_USER_PROTOCOL_F_MQ (0)
VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)
socket filename /tmp/vm00.sock type client errno "No such file or directory"
rx placement:
tx placement: spin-lock
thread 0 on vring 0
thread 1 on vring 0
Memory regions (total 0)
Notice **No such file or directory** and **Memory regions (total 0)**. This is because the
VM has not been created yet.
.. _vhost02:
Creating the Virtual Machine
----------------------------
We will now create the virtual machine. We use the "virsh create" command. For the complete file we
use, refer to :ref:`xmlexample`.
It is important to note that in the XML file we specify the socket path that is used to connect to
FD.io VPP.
This is done with a section that looks like this:
.. code-block:: console
<interface type='vhostuser'>
<mac address='52:54:00:4c:47:f2'/>
<source type='unix' path='/tmp//vm00.sock' mode='server'/>
<model type='virtio'/>
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
Notice the **interface type** and the **path** to the socket.
Now we create the VM. The virsh list command shows the VMs that have been created. We start with no VMs.
.. code-block:: console
$ virsh list
Id Name State
----------------------------------------------------
Create the VM with the virsh create command, specifying our xml file:
.. code-block:: console
$ virsh create ./iperf3-vm.xml
Domain iperf-server3 created from ./iperf3-vm.xml
$ virsh list
Id Name State
----------------------------------------------------
65 iperf-server3 running
The VM is now created.
.. note::
After a VM is created, an xml file can be created with "virsh dumpxml".
.. code-block:: console
$ virsh dumpxml iperf-server3
<domain type='kvm' id='65'>
<name>iperf-server3</name>
<uuid>e23d37c1-10c3-4a6e-ae99-f315a4165641</uuid>
<memory unit='KiB'>262144</memory>
.....
Once the virtual machine is created, notice that the socket filename shows **Success** and
there are **Memory regions**. At this point the VM and FD.io VPP are connected. Also
notice **qsz 256**. This system is running an older version of qemu; a queue size of 256
will limit vhost throughput. The qsz should be 1024. You can find ways to install a newer
version of qemu or change the queue size (see the sketch after the output below).
.. code-block:: console
vpp# show vhost
Virtio vhost-user interfaces
Global:
coalesce frames 32 time 1e-3
number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 3)
virtio_net_hdr_sz 12
features mask (0xffffffffffffffff):
features (0x58208000):
VIRTIO_NET_F_MRG_RXBUF (15)
VIRTIO_NET_F_GUEST_ANNOUNCE (21)
VIRTIO_F_ANY_LAYOUT (27)
VIRTIO_F_INDIRECT_DESC (28)
VHOST_USER_F_PROTOCOL_FEATURES (30)
protocol features (0x3)
VHOST_USER_PROTOCOL_F_MQ (0)
VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)
socket filename /tmp/vm00.sock type client errno "Success"
rx placement:
thread 1 on vring 1, polling
tx placement: spin-lock
thread 0 on vring 0
thread 1 on vring 0
Memory regions (total 2)
region fd guest_phys_addr memory_size userspace_addr mmap_offset mmap_addr
====== ===== ================== ================== ================== ================== ==================
0      31    0x0000000000000000 0x00000000000a0000 0x00007f1db9c00000 0x0000000000000000 0x00007f7db0400000
1      32    0x00000000000c0000 0x000000000ff40000 0x00007f1db9cc0000 0x00000000000c0000 0x00007f7d94ec0000
Virtqueue 0 (TX)
qsz 256 last_avail_idx 0 last_used_idx 0
avail.flags 0 avail.idx 256 used.flags 1 used.idx 0
kickfd 33 callfd 34 errfd -1
Virtqueue 1 (RX)
qsz 256 last_avail_idx 8 last_used_idx 8
avail.flags 0 avail.idx 8 used.flags 1 used.idx 8
kickfd 29 callfd 35 errfd -1
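If your QEMU and libvirt are new enough, the queue size can be raised in the VM's vhostuser interface definition, along these lines (the driver attributes are an assumption; support depends on your libvirt/QEMU versions):

.. code-block:: console

    <interface type='vhostuser'>
      ...
      <driver rx_queue_size='1024' tx_queue_size='1024'/>
    </interface>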
.. _vhost03:
Bridge the Interfaces
---------------------
To connect the two interfaces we put them on an L2 bridge.
Use the "set interface l2 bridge" command.
.. code-block:: console
vpp# set interface l2 bridge VirtualEthernet0/0/0 100
vpp# set interface l2 bridge TenGigabitEthernet86/0/0 100
vpp# show bridge
BD-ID Index BSN Age(min) Learning U-Forwrd UU-Flood Flooding ARP-Term BVI-Intf
100 1 0 off on on on on off N/A
vpp# show bridge 100 det
BD-ID Index BSN Age(min) Learning U-Forwrd UU-Flood Flooding ARP-Term BVI-Intf
100 1 0 off on on on on off N/A
Interface If-idx ISN SHG BVI TxFlood VLAN-Tag-Rewrite
VirtualEthernet0/0/0 3 1 0 - * none
TenGigabitEthernet86/0/0 1 1 0 - * none
vpp# show vhost
Bring the Interfaces Up
-----------------------
We can now bring all the pertinent interfaces up. We will then be able to communicate
with the VM from the remote system running Linux.
Bring the interfaces up with the :ref:`setintstate` command.
.. code-block:: console
vpp# set interface state VirtualEthernet0/0/0 up
vpp# set interface state TenGigabitEthernet86/0/0 up
vpp# sh int
Name Idx State Counter Count
TenGigabitEthernet86/0/0 1 up rx packets 2
rx bytes 180
TenGigabitEthernet86/0/1 2 down
VirtualEthernet0/0/0 3 up tx packets 2
tx bytes 180
local0 0 down
Ping from the VM
----------------
The remote Linux system has an IP address of "10.0.0.2"; we can now reach it from the VM.
Use the "virsh console" command to attach to the VM. Press "Ctrl-]" to exit (note the escape character in the banner below).
.. code-block:: console
$ virsh console iperf-server3
Connected to domain iperf-server3
Escape character is ^]
Ubuntu 16.04.3 LTS iperfvm ttyS0
.....
root@iperfvm:~# ping 10.0.0.2
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.154 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.159 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.208 ms
On VPP you can now see the packet counts increasing. The packets from the VM are seen as **rx packets**
on **VirtualEthernet0/0/0**; they are then bridged to **TenGigabitEthernet86/0/0** and are seen leaving the
system as **tx packets**. The reverse is true on the way in.
.. code-block:: console
vpp# sh int
Name Idx State Counter Count
TenGigabitEthernet86/0/0 1 up rx packets 16
rx bytes 1476
tx packets 14
tx bytes 1260
TenGigabitEthernet86/0/1 2 down
VirtualEthernet0/0/0 3 up rx packets 14
rx bytes 1260
tx packets 16
tx bytes 1476
local0 0 down
vpp#
.. _vhost04:
Cleanup
-------
Destroy the VMs with "virsh destroy":
.. code-block:: console
cto@tf-ucs-3:~$ virsh list
Id Name State
----------------------------------------------------
65 iperf-server3 running
cto@tf-ucs-3:~$ virsh destroy iperf-server3
Domain iperf-server3 destroyed
Delete the virtual port in FD.io VPP:
.. code-block:: console
vpp# delete vhost-user VirtualEthernet0/0/0
vpp# show int
Name Idx State Counter Count
TenGigabitEthernet86/0/0 1 up rx packets 21
rx bytes 1928
tx packets 19
tx bytes 1694
TenGigabitEthernet86/0/1 2 down
local0 0 down
Restart FD.io VPP:
.. code-block:: console
# service vpp restart
# vppctl show int
Name Idx State Counter Count
TenGigabitEthernet86/0/0 1 down
TenGigabitEthernet86/0/1 2 down
local0 0 down
.. _vhost05:
Limitations
-----------
There are some limitations when using the qemu vhost driver. Some are described in this section.
Performance
^^^^^^^^^^^
VPP performance with vHost is limited by the qemu vHost driver. FD.io VPP 18.04 CSIT vHost testing
shows that with 2 threads, 2 cores, and a queue size of 1024, the maximum NDR throughput was about 7.5 Mpps.
This is about the limit at this time.
For all the details on the CSIT VM vhost connection refer to the
`CSIT VM vHost performance tests <https://docs.fd.io/csit/rls1804/report/vpp_performance_tests/packet_throughput_graphs/vm_vhost.html>`_.
Features
^^^^^^^^
These features are not supported by FD.io VPP vHost:
* VPP implements vHost in device mode only. VPP is intended to work with Qemu, which implements vHost in driver mode; VPP does not implement vHost driver mode.
* VPP vHost implementation does not support checksum or transmit segmentation offload.
* VPP vHost implementation does not support packet receive filtering feature for controlling receive traffic.
.. _xmlexample:
The XML File
------------
Here is an example of a file that could be used with the virsh create command:
.. literalinclude:: iperf-vm.xml
:language: XML
:emphasize-lines: 42-49, 74-80