docs: better docs, mv doxygen to sphinx

This patch refactors the VPP sphinx docs
in order to make it easier to consume
for external readers as well as VPP developers.

It also makes sphinx the single source
of documentation, which simplifies maintenance
and operation.

Most important updates are:

- reformat the existing documentation as rst
- split RELEASE.md and move it into separate rst files
- remove section 'events'
- remove section 'archive'
- remove section 'related projects'
- remove section 'feature by release'
- remove section 'Various links'
- make (Configuration reference, CLI docs,
  developer docs) top level items in the list
- move 'Use Cases' as part of 'About VPP'
- move 'Troubleshooting' as part of 'Getting Started'
- move test framework docs into 'Developer Documentation'
- add a 'Contributing' section for gerrit,
  docs and other contributor-related info
- deprecate doxygen and test-docs targets
- redirect the "make doxygen" target to "make docs"

Type: refactor

Change-Id: I552a5645d5b7964d547f99b1336e2ac24e7c209f
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
This commit is contained in:
Nathan Skrzypczak
2021-08-19 11:38:06 +02:00
committed by Dave Wallace
parent f47122e07e
commit 9ad39c026c
388 changed files with 23358 additions and 26302 deletions

.. _ConnectingVPC:

.. toctree::

Interconnecting VPCs with Segment Routing & Performance Evaluation
____________________________________________________________________

Before reading this part, you should have a minimum understanding of AWS, especially of `VPC concepts <https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html>`_.

.. figure:: /_images/Connecting_VPC.svg

   Figure 1: Simplified view of our final configuration.

In this section we will set up VPP as the gateway of our VPC and, thanks to its support for Segment Routing over IPv6, we will interconnect several VPCs together. Figure 1 shows our final configuration. Interconnecting several VPCs is interesting because it allows us to perform service chaining inside AWS.
Now we focus on the basic elements you should deploy inside the VPC in order to make this configuration work. Here you can find some scripts `to automate the deployment of these resources <https://github.com/francescospinelli94/Automating-Deployment-VPP>`_.

In our VPC we will have two instances: one in which we will install VPP, and another which will be our Client/Server machine. We suggest creating three subnets inside your VPC:

* one with IPv4 addresses, for reaching your VMs through SSH;
* a second one, also with IPv4 addresses, which allows connectivity between the Client/Server machine and the VPP machine;
* a third one, with both IPv4 and IPv6 addresses, to connect VPP with the Amazon IGW; we will use its IPv6 addresses to implement Segment Routing.

Moreover, you have to attach one additional NIC to the Client/Server machine and two different NICs to the VPP machine: one will be used inside the IPv6 subnet, while the other one will allow communication with the other VM. You can find an example in Figure 2.
.. figure:: /_images/vpc_scheme.svg

   Figure 2: Example of the resources present inside our VPC.

Notice that the following example works with two VPCs, each containing a VM running VPP and a Client/Server VM. Hence, you will have to execute the same commands in the other VPC as well to make the connection between the two VPCs possible.

Now, create a new VM instance (you can use the same settings as before: Ubuntu Server 16.04 on an m5 instance type) and attach a NIC. Remember that the NICs of the two Client/Server machines should stay in two different IPv4 subnets. Afterwards, execute these commands in the VM's terminal:
.. code-block:: console

   $ sudo /sbin/ip -4 addr add 10.1.2.113/24 dev ens6
   $ sudo ifconfig ens6 up
   $ sudo /sbin/ip -4 route add 10.2.0.0/16 via 10.1.4.117

Basically, you are setting up the interface you will use to reach VPP and telling the VM that all the traffic belonging to the subnet 10.2.0.0/16, which in our case is the one of the other VPC, should go through VPP's interface. Remember also to add the same route in the route table menu of the Amazon Console Management.
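The same route-table entry can also be created from the AWS CLI instead of the console. A minimal sketch, assuming hypothetical route table and ENI IDs (replace them with your own); the command is printed rather than executed so it can be reviewed first:

```shell
# Hypothetical IDs -- replace with your own route table and VPP ENI.
RTB_ID="rtb-0123456789abcdef0"
ENI_ID="eni-0123456789abcdef0"
DEST_CIDR="10.2.0.0/16"   # CIDR block of the peer VPC

# Print the command instead of running it, so it can be reviewed first.
cmd="aws ec2 create-route --route-table-id $RTB_ID --destination-cidr-block $DEST_CIDR --network-interface-id $ENI_ID"
echo "$cmd"
```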
Now go to the terminal of the VPP machine, enter the VPP CLI and type these commands to set up the two virtual interfaces. To see how to bind the NICs to VPP, see :ref:`vppinaws`.
.. code-block:: console

   vpp# set int state VirtualFunctionEthernet0/6/0 up
   vpp# set int state VirtualFunctionEthernet0/7/0 up

Here, instead, you are assigning the IP addresses to the network interfaces:

.. code-block:: console

   vpp# set int ip address VirtualFunctionEthernet0/6/0 10.1.4.117/24
   vpp# set int ip address VirtualFunctionEthernet0/7/0 2600:1f14:e0e:7f00:f672:1039:4e41:e68/64

Afterwards, you should use the Segment Routing functionalities. Note that for the localsid address we are using a different IPv6 address (you can generate another one through the Amazon console):

.. code-block:: console

   vpp# set sr encaps source addr 2600:1f14:e0e:7f00:f672:1039:4e41:e68
   vpp# sr localsid address 2600:1f14:e0e:7f00:8da1:c8fa:5301:1d1f behavior end.dx4 VirtualFunctionEthernet0/6/0 10.1.4.117
   vpp# sr policy add bsid c:1::999:1 next 2600:1f14:135:cc00:43c1:e860:7ce9:e94a encap
   vpp# sr steer l3 10.2.5.0/24 via bsid c:1::999:1

Finally, you set up IPv6 neighbor discovery, telling VPP which is the next hop (the IGW). Notice that the MAC address is the MAC address of the IGW:

.. code-block:: console

   vpp# set ip6 neighbor VirtualFunctionEthernet0/7/0 fe80::84f:3fff:fe2a:aaf0 0a:4f:3f:2a:aa:f0
   vpp# ip route add ::/0 via fe80::84f:3fff:fe2a:aaf0 VirtualFunctionEthernet0/7/0

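At this point you can sanity-check the configuration from the VPP CLI. A quick sketch of standard VPP `show` commands you may find useful (outputs omitted; they are not specific to this guide):

```console
vpp# show int addr
vpp# show sr localsids
vpp# show sr policies
```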
Now go to the VM instance in the other VPC, which could be located in another Amazon Region, and run the equivalent commands. First, in the VM:

.. code-block:: console

   $ sudo /sbin/ip -4 addr add 10.2.5.190/24 dev ens6
   $ sudo ifconfig ens6 up
   $ sudo /sbin/ip -4 route add 10.1.0.0/16 via 10.2.5.21

Then, in VPP:

.. code-block:: console

   vpp# set int state VirtualFunctionEthernet0/6/0 up
   vpp# set int state VirtualFunctionEthernet0/7/0 up
   vpp# set int ip address VirtualFunctionEthernet0/6/0 10.2.5.21/24
   vpp# set int ip address VirtualFunctionEthernet0/7/0 2600:1f14:135:cc00:13b9:ff74:348d:7642/64
   vpp# set sr encaps source addr 2600:1f14:135:cc00:13b9:ff74:348d:7642
   vpp# sr policy add bsid c:3::999:1 next 2600:1f14:e0e:7f00:8da1:c8fa:5301:1d1f encap
   vpp# sr steer l3 10.1.4.0/24 via bsid c:3::999:1
   vpp# set ip6 neighbor VirtualFunctionEthernet0/7/0 fe80::86a:b7ff:fe5d:73c0 0a:4c:fd:b8:c1:3e
   vpp# ip route add ::/0 via fe80::86a:b7ff:fe5d:73c0 VirtualFunctionEthernet0/7/0

Now, if you ping your Server machine from your Client machine, you should be able to reach it.

If you are interested in a performance evaluation of this scenario, we will present a poster at INFOCOM'19 with our performance evaluation of Segment Routing inside AWS:

*Francesco Spinelli, Luigi Iannone, and Jerome Tollet. “Chaining your Virtual Private Clouds with Segment Routing”. In: 2019 IEEE INFOCOM Poster (INFOCOM2019 Poster). Paris, France, Apr. 2019*
**Troubleshooting**

* Remember to disable the source/destination check on the network interfaces of the VPP machine and of the VMs. You can do it through the Amazon Console.
* These commands work with VPP version 18.07. If you are using a different version, the syntax of some VPP commands may be slightly different.
* Be careful: if you stop the VM running VPP, you will need to attach the two NICs to VPP again.
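The source/destination check can also be disabled from the AWS CLI instead of the console. A minimal sketch, assuming a hypothetical instance ID; the command is printed rather than executed so it can be reviewed first:

```shell
# Hypothetical instance ID -- replace with your VPP machine's (and VMs') IDs.
INSTANCE_ID="i-0123456789abcdef0"

# Print the command instead of running it, so it can be reviewed first.
cmd="aws ec2 modify-instance-attribute --instance-id $INSTANCE_ID --no-source-dest-check"
echo "$cmd"
```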

.. _automatingthedeployment:

.. toctree::

Automating VPP deployment
__________________________

In order to make the VPP deployment easier inside AWS and Azure, we have created two different Terraform scripts, compatible with both public cloud providers. These scripts allow you to automate the deployment of the resources. `Here you can find the scripts and further information <https://github.com/francescospinelli94/Automating-Deployment-VPP>`_.

.. _vppcloud:

VPP in the Cloud
================

This section covers the VPP deployment inside two different public cloud providers: Amazon AWS and Microsoft Azure. Furthermore, we describe how to interconnect several public cloud regions with Segment Routing over IPv6, and we show some performance evaluation. Finally, we make our Terraform scripts available to the community, which can help in automating the VPP deployment inside the cloud.

.. toctree::

   vppinaws
   ConnectingVPC
   vppinazure
   automatingthedeployment

.. _vppinaws:

.. toctree::

VPP in AWS
==========

Warning: before starting this guide you should have a minimum knowledge of how `AWS works <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html>`_!

First of all, you should log into your virtual machine inside AWS (we suggest creating an instance with Ubuntu 16.04 on an m5 instance type) and download some useful packages to make the VPP installation as smooth as possible:
.. code-block:: console

   $ sudo apt-get update
   $ sudo apt-get upgrade
   $ sudo apt-get install build-essential
   $ sudo apt-get install python-pip
   $ sudo apt-get install libnuma-dev
   $ sudo apt-get install make
   $ sudo apt install libelf-dev

Afterwards, type the following command to set up the VPP repository:
.. code-block:: console

   $ curl -s https://packagecloud.io/install/repositories/fdio/1807/script.deb.sh | sudo bash

In this case we downloaded VPP version 18.07, but you can actually use any VPP version available. Then, you can install VPP with all of its plugins:

.. code-block:: console

   $ sudo apt-get update
   $ sudo apt-get install vpp
   $ sudo apt-get install vpp-plugins vpp-dbg vpp-dev vpp-api-java vpp-api-python vpp-api-lua

Now, you need to bind the NICs (Network Interface Cards) to VPP. First, you have to retrieve the PCI addresses of the NICs you want to bind:

.. code-block:: console

   $ sudo lshw -class network -businfo

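If you want to script this step, the PCI addresses can be extracted from the ``lshw`` output with a small filter. A sketch run here against sample output (the device names and addresses below are illustrative, not from a real instance):

```shell
# Sample `lshw -class network -businfo` output (illustrative values).
sample='pci@0000:00:05.0  ens5  network  Elastic Network Adapter (ENA)
pci@0000:00:06.0  ens6  network  Elastic Network Adapter (ENA)'

# Extract just the PCI addresses (domain:bus:device.function).
echo "$sample" | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]'
```

On the VM itself, pipe the real ``sudo lshw -class network -businfo`` output through the same ``grep``.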
The PCI addresses have a format similar to this: 0000:00:0X.0. Once you retrieve them, you should copy them inside the startup file of VPP:

.. code-block:: console

   $ sudo nano /etc/vpp/startup.conf

Here, inside the dpdk block, copy the PCI addresses of the NICs you want to bind to VPP:

.. code-block:: console

   dev 0000:00:0X.0

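For context, a sketch of what the whole dpdk stanza in /etc/vpp/startup.conf might look like with two devices whitelisted (the PCI addresses below are placeholders; use the ones you retrieved):

```console
dpdk {
    dev 0000:00:06.0
    dev 0000:00:07.0
}
```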
Now you should install the DPDK package. This will allow you to bind the NICs to VPP through a script available inside the DPDK package:

.. code-block:: console

   $ wget https://fast.dpdk.org/rel/dpdk-18.08.tar.xz
   $ tar -xvf dpdk-18.08.tar.xz
   $ cd ~/dpdk-18.08/usertools/

and open the script:

.. code-block:: console

   $ ./dpdk-setup.sh

When the script is running, it offers several options. For the moment, just build the x86_64-native-linuxapp-gcc target and then close the script. Now go inside:

.. code-block:: console

   $ cd ~/dpdk-18.08/x86_64-native-linuxapp-gcc/

and type:

.. code-block:: console

   $ sudo modprobe uio
   $ sudo insmod kmod/igb_uio.ko

In this way, the PCI addresses should appear inside the setup script of DPDK and therefore you can bind them:

.. code-block:: console

   $ ./dpdk-setup.sh

Inside the script, bind the NICs using option 24.
Finally, restart VPP and the NICs should appear inside the VPP CLI:

.. code-block:: console

   $ sudo service vpp stop
   $ sudo service vpp start
   $ sudo vppctl show int

Notice that if you stop the VM, you will need to bind the NICs again.
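As an alternative to re-running the interactive dpdk-setup.sh menu after each VM restart, the same usertools directory also ships the dpdk-devbind.py script, which can re-bind the NICs non-interactively. A sketch with a placeholder PCI address (substitute the addresses you retrieved earlier):

```console
$ cd ~/dpdk-18.08/usertools/
$ sudo ./dpdk-devbind.py --status
$ sudo ./dpdk-devbind.py --bind=igb_uio 0000:00:06.0
```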

.. _vppinazure:

.. toctree::

VPP in Azure
============

Before starting, a few notes:

* In our configuration we use only DPDK 18.02, since with the newer versions, such as DPDK 18.05, we encountered several problems during the VPP installation (mostly related to the MLX4 PMD drivers).
* Some of the commands are taken from `Azure's DPDK page <https://docs.microsoft.com/en-us/azure/virtual-network/setup-dpdk>`_.
To bring DPDK inside Azure, we perform the following procedure.

First, we install the DPDK dependencies:

.. code-block:: console

   $ sudo add-apt-repository ppa:canonical-server/dpdk-azure -y
   $ sudo apt-get update
   $ sudo apt-get install -y librdmacm-dev librdmacm1 build-essential libnuma-dev libmnl-dev

Then, we download DPDK 18.02:

.. code-block:: console

   $ sudo wget https://fast.dpdk.org/rel/dpdk-18.02.2.tar.xz
   $ tar -xvf dpdk-18.02.2.tar.xz

Finally, we build DPDK, first modifying its configuration files in order to make VPP compatible with the MLX4 drivers.

Inside config/common_base, modify:

.. code-block:: console

   CONFIG_RTE_BUILD_SHARED_LIB=n
   CONFIG_RTE_LIBRTE_MLX4_PMD=y
   CONFIG_RTE_LIBRTE_MLX4_DLOPEN_DEPS=y
   CONFIG_RTE_LIBRTE_TAP_PMD=y
   CONFIG_RTE_LIBRTE_FAILSAFE_PMD=y

and then:

.. code-block:: console

   $ make config T=x86_64-native-linuxapp-gcc
   $ sed -ri 's,(MLX._PMD=)n,\1y,' build/.config
   $ make

Finally, we install DPDK:

.. code-block:: console

   $ make install T=x86_64-native-linuxapp-gcc DESTDIR=/home/ciscotest/test EXTRA_CFLAGS='-fPIC -pie'

And we reboot the instance:

.. code-block:: console

   $ sudo reboot

After the reboot, we type these commands to reserve hugepages, load the required kernel module and make the MLX4 glue library available:

.. code-block:: console

   $ echo 1024 | sudo tee /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
   $ sudo mkdir /mnt/huge
   $ sudo mount -t hugetlbfs nodev /mnt/huge
   $ grep Huge /proc/meminfo
   $ sudo modprobe -a ib_uverbs
   $ cd x86_64-native-linuxapp-gcc/lib/
   $ sudo cp librte_pmd_mlx4_glue.so.18.02.0 /usr/lib

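To verify that the hugepages were actually reserved, you can parse /proc/meminfo. A sketch run here against sample output (illustrative values; on the VM, read the real file instead of the sample string):

```shell
# Sample /proc/meminfo lines (illustrative; on the VM use the real file).
meminfo='HugePages_Total:    1024
HugePages_Free:     1024
Hugepagesize:       2048 kB'

# Extract the number of reserved 2 MB hugepages.
total=$(echo "$meminfo" | awk '/HugePages_Total/ {print $2}')
echo "$total"
```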
**Now we focus on the VPP installation:**

In our configuration we use VPP 18.07. We perform this procedure in order to install VPP 18.07 with an external DPDK configuration inside Azure.

First, we download VPP:

.. code-block:: console

   $ git clone https://gerrit.fd.io/r/vpp
   $ cd vpp
   $ git checkout v18.07

Then, we build VPP using the external DPDK configuration we previously made. We modify the following variables in the build-data/platforms/vpp.mk file:

.. code-block:: console

   vpp_uses_external_dpdk = yes
   vpp_dpdk_inc_dir = <PATH_TO_DESTDIR_NAME_FROM_ABOVE>/include/dpdk/
   vpp_dpdk_lib_dir = <PATH_TO_DESTDIR_NAME_FROM_ABOVE>/lib

<PATH_TO_DESTDIR_NAME_FROM_ABOVE> is whatever path you used as DESTDIR when compiling DPDK above. These paths have to be absolute in order for it to work.

In the same file, we also set:

.. code-block:: console

   vpp_uses_dpdk_mlx4_pmd = yes

Then we build VPP and install the resulting packages:

.. code-block:: console

   $ make build
   $ cd build-root/
   $ make V=0 PLATFORM=vpp TAG=vpp install-deb
   $ sudo dpkg -i *.deb

Finally, we modify the startup.conf file:

.. code-block:: console

   $ cd /etc/vpp
   $ sudo nano startup.conf

Inside the dpdk block, add the following lines:

.. code-block:: console

   ## Whitelist specific interfaces by specifying their PCI addresses
   dev 000X:00:0X.0
   dev 000X:00:0X.0

   # Running failsafe
   vdev net_vdev_netvsc0,iface=eth1
   vdev net_vdev_netvsc1,iface=eth2

*Please refer to the Azure DPDK document to pick the right iface to use for the failsafe vdev.*
and finally:

.. code-block:: console

   $ sudo service vpp stop
   $ sudo service vpp start
   $ sudo service vpp status
   $ sudo vppctl

Now VPP will work inside Azure!