Initial commit of Sphinx docs

Change-Id: I9fca8fb98502dffc2555f9de7f507b6f006e0e77
Signed-off-by: John DeNisco <jdenisco@cisco.com>
John DeNisco
2018-07-26 12:45:10 -04:00
committed by Dave Barach
parent 1d65279ffe
commit 06dcd45ff8
239 changed files with 12736 additions and 56 deletions


@@ -0,0 +1,10 @@
.. _cp:
=============
Control Plane
=============
* DHCP client/proxy
* DHCPv6 Proxy


@@ -0,0 +1,33 @@
.. _dev:
=======
Devices
=======
Hardware
--------
* `DPDK <https://www.dpdk.org/>`_
* `Network Interfaces <https://doc.dpdk.org/guides/nics/>`_
* `Cryptographic Devices <https://doc.dpdk.org/guides/cryptodevs/>`_
* `Open Data Plane <https://github.com/FDio/odp4vpp>`_
* `Intel Ethernet Adaptive Virtual Function <https://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ethernet-adaptive-virtual-function-hardware-spec.pdf>`_
Operating System
----------------
* `Netmap <http://info.iet.unipi.it/~luigi/netmap/>`_
* `af_packet <http://man7.org/linux/man-pages/man7/packet.7.html>`_
* Tap V2 (FastTap)
Virtualization
---------------
* SSVM
* Vhost / VirtIO
Containers
----------
* Vhost-user
* MemIF


@@ -0,0 +1,32 @@
.. _features:
========
Features
========
.. rst-class:: center-align-table
+-------------------------+-----------+-----------+
| :ref:`sdn` | | |
+------------+------------+ :ref:`cp` | |
| | :ref:`l4` | | |
| +------------+-----------+ :ref:`pg` |
| :ref:`tun` | :ref:`l3` | | |
| +------------+ :ref:`tm` | |
| | :ref:`l2` | | |
+------------+------------+-----------+-----------+
| :ref:`dev` |
+-------------------------------------------------+
.. toctree::
:hidden:
devices.rst
integrations.rst
trafficmanagement.rst
l2.rst
l3.rst
l4.rst
tunnels.rst
controlplane.rst
plugins.rst


@@ -0,0 +1,5 @@
.. _sdn:
========================
SDN & Cloud Integrations
========================


@@ -0,0 +1,56 @@
.. _l2:
=======
Layer 2
=======
MAC Layer
---------
* Ethernet
Discovery
---------
* Cisco Discovery Protocol
* Link Layer Discovery Protocol (LLDP)
Link Layer Control Protocol
---------------------------
* Bit Indexed Explicit Replication (BIER) link-layer multicast forwarding
* Link Layer Control (LLC) - multiplex protocols over the MAC layer.
* Spatial Reuse Protocol (SRP)
* High-Level Data Link Control (HDLC)
* Logical link control (LLC)
* Link Aggregation Control Protocol (LACP) (Active/Active, Active/Passive) - 18.04
Virtual Private Networks
------------------------
* MPLS
* MPLS-over-Ethernet (deep label stacks supported)
* Virtual Private LAN Service (VPLS)
* VLAN
* Q-in-Q
* Tag-rewrite (VTR) - push/pop/translate (1:1, 1:2, 2:1, 2:2)
* Ethernet Flow Point Filtering
* Layer 2 Cross Connect
Bridging
---------
* Bridge Domains
* MAC Learning (50k addresses)
* Split-horizon group support
* Flooding
ARP
---
* Proxy
* Termination
* Bidirectional Forwarding Detection
Integrated Routing and Bridging (IRB)
-------------------------------------
* Flexibility to both route and switch between groups of ports.
* Bridged Virtual Interface (BVI) support, allowing switched traffic to be routed.


@@ -0,0 +1,55 @@
.. _l3:
=======
Layer 3
=======
IP Layer
--------
* ICMP
* IPv4
* IPv6
* IPSEC
* Link Local Addressing
Multicast
---------
* Multicast FIB
* IGMP
Virtual Routing and Forwarding (VRF)
------------------------------------
* VRF scaling, thousands of tables.
* Controlled cross-VRF lookups
Multi-path
----------
* Equal Cost Multi Path (ECMP)
* Unequal Cost Multi Path (UCMP)
IPv4
----
* ARP
* ARP Proxy
* ARP Snooping
IPv6
----
* Neighbour discovery (ND)
* ND Proxy
* Router Advertisement
* Segment Routing
* Distributed Virtual Routing Resolution
Forwarding Information Base (FIB)
---------------------------------
* Hierarchical FIB (see the sketch after this list)
* Memory efficient
* Multi-million entry scalable
* Lockless/concurrent updates
* Recursive lookups
* Next hop failure detection
* Shared FIB adjacencies
* Multicast support
* MPLS support
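The hierarchical FIB, recursive lookups and shared adjacencies listed above can
be pictured with a small sketch. The structures below are illustrative only and
are not VPP's actual FIB types; they simply show how many prefixes can share one
path object, and how a recursive path resolves through another FIB entry rather
than duplicating its adjacency.

.. code-block:: c

   /* Illustrative sketch only - not VPP's FIB data structures. */
   #include <stddef.h>
   #include <stdint.h>

   typedef struct adjacency
   {
     uint32_t sw_if_index;          /* outgoing interface */
     uint8_t rewrite[64];           /* prebuilt L2 rewrite string */
   } adjacency_t;

   struct fib_entry;

   typedef struct fib_path
   {
     struct fib_entry *via_entry;   /* recursive: resolve via another entry */
     adjacency_t *adj;              /* non-recursive: directly connected */
   } fib_path_t;

   typedef struct fib_entry
   {
     uint32_t prefix;               /* simplified: one IPv4 prefix */
     uint8_t prefix_len;
     fib_path_t *path;              /* shared by every prefix using this path */
   } fib_entry_t;

   /* Recursive resolution: follow via_entry links until a real adjacency
    * is reached. Because the path object is shared, updating it converges
    * every dependent prefix at once. Assumes a well-formed hierarchy. */
   static adjacency_t *
   fib_resolve (const fib_entry_t *e)
   {
     while (e->path->adj == NULL)
       e = e->path->via_entry;
     return e->path->adj;
   }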


@@ -0,0 +1,5 @@
.. _l4:
=======
Layer 4
=======


@@ -0,0 +1,7 @@
.. _pg:
=======
Plugins
=======
* iOAM


@@ -0,0 +1,55 @@
.. _tm:
==================
Traffic Management
==================
IP Layer Input Checks
---------------------
* Source Reverse Path Forwarding
* Time To Live expiration
* IP header checksum (see the sketch after this list)
* Layer 2 Length < IP Length
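As a hint at what the IP header checksum check above involves, here is a minimal
sketch of the standard RFC 1071 one's-complement verification. It is illustrative
only and is not VPP's checksum code.

.. code-block:: c

   /* Minimal sketch (not VPP's implementation): a received IPv4 header is
    * valid when the one's-complement sum of its 16-bit words, including
    * the checksum field itself, folds to 0xffff. */
   #include <stddef.h>
   #include <stdint.h>

   static int
   ip4_header_checksum_ok (const uint16_t *hdr, size_t header_bytes)
   {
     uint32_t sum = 0;
     for (size_t i = 0; i < header_bytes / 2; i++)
       sum += hdr[i];                        /* 16-bit one's-complement sum */
     while (sum >> 16)
       sum = (sum & 0xffff) + (sum >> 16);   /* fold the carries back in */
     return (uint16_t) sum == 0xffff;
   }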
Classifiers
-----------
* Millions of classifiers - arbitrary N-tuple matching
Policers
--------
* Colour Aware & Token Bucket
* Rounding Closest/Up/Down
* Limits in PPS/KBPS
* Types (see the token-bucket sketch after this list):
* Single Rate Two Colour
* Single Rate Three Colour
* Dual Rate Three Colour
* Action Triggers
* Conform
* Exceed
* Violate
* Action Types
* Drop
* Transmit
* Mark-and-transmit
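The sketch below shows roughly how the single-rate, two-colour policer type
listed above behaves. It is a simplified token-bucket illustration, not VPP's
policer implementation.

.. code-block:: c

   /* Simplified single-rate, two-colour token-bucket policer sketch
    * (illustrative only). Tokens accumulate at the committed rate up to
    * the burst size; a packet conforms when enough tokens are available,
    * otherwise it violates. */
   #include <stdint.h>

   typedef struct
   {
     uint64_t rate_bytes_per_sec;   /* committed information rate */
     uint64_t burst_bytes;          /* committed burst size (bucket depth) */
     uint64_t tokens;               /* current bucket level, in bytes */
     uint64_t last_update_ns;       /* time of the last refill */
   } policer_t;

   typedef enum { POLICE_CONFORM, POLICE_VIOLATE } police_result_t;

   static police_result_t
   police_packet (policer_t *p, uint32_t pkt_bytes, uint64_t now_ns)
   {
     /* Refill the bucket for the elapsed time, capped at the burst size. */
     uint64_t elapsed_ns = now_ns - p->last_update_ns;
     p->tokens += (p->rate_bytes_per_sec * elapsed_ns) / 1000000000ULL;
     if (p->tokens > p->burst_bytes)
       p->tokens = p->burst_bytes;
     p->last_update_ns = now_ns;

     if (p->tokens >= pkt_bytes)
       {
         p->tokens -= pkt_bytes;    /* conform: e.g. transmit */
         return POLICE_CONFORM;
       }
     return POLICE_VIOLATE;         /* violate: e.g. drop or mark */
   }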
Switched Port Analyzer (SPAN)
-----------------------------
* Mirror traffic to another switch port
ACLs
----
* Stateful
* Stateless
COP
---
MAC/IP Pairing
--------------
(security feature).


@@ -0,0 +1,32 @@
.. _tun:
=======
Tunnels
=======
Layer 2
-------
* L2TP
* PPP
* VLAN
Layer 3
-------
* Mapping of Address and Port with Encapsulation (MAP-E)
* Lightweight IPv4 over IPv6 (an extension to the Dual-Stack Lite architecture)
* GENEVE
* VXLAN
Segment Routing
---------------
* IPv6
* MPLS
Generic Routing Encapsulation (GRE)
-----------------------------------
* GRE over IPSEC
* GRE over IP
* MPLS
* NSH

docs/overview/index.rst

@@ -0,0 +1,13 @@
.. _overview:
=========================================
Overview
=========================================
.. toctree::
:maxdepth: 1
whatisvpp/index.rst
features/index.rst
performance/index.rst
supported.rst


@@ -0,0 +1,12 @@
.. _current_ipv4_throughput:
.. toctree::
IPv4 Routed-Forwarding Performance Tests
****************************************
VPP NDR 64B packet throughput in 1t1c setup (1 thread, 1 core) is presented in the graph below.
.. raw:: html
<iframe src="https://docs.fd.io/csit/rls1804/report/_static/vpp/64B-1t1c-ethip4-ip4-ndrdisc.html" width="1200" height="1000" frameborder="0">


@@ -0,0 +1,16 @@
.. _current_ipv6_throughput:
.. toctree::
IPv6 Routed-Forwarding Performance Tests
****************************************
VPP NDR 78B packet throughput in 1t1c setup (1 thread, 1 core) is presented in the graph below.
.. raw:: html
<iframe src="https://docs.fd.io/csit/rls1801/report/_static/vpp/78B-1t1c-ethip6-ip6-ndrdisc.html" width="1200" height="1000" frameborder="0">


@@ -0,0 +1,12 @@
.. _current_l2_throughput:
.. toctree::
L2 Ethernet Switching Throughput Tests
***************************************
VPP NDR 64B packet throughput in a 1 core, 1 thread setup is presented in the graph below.
.. raw:: html
<iframe src="https://docs.fd.io/csit/rls1801/report/_static/vpp/64B-1t1c-l2-sel2-ndrdisc.html" width="1200" height="1000" frameborder="0">


@@ -0,0 +1,13 @@
.. _current_ndr_throughput:
.. toctree::
NDR Performance Tests
*********************
This is a live graph of the VPP NDR (No Drop Rate) L2 performance tests, showing 64B packet throughput in a 1 core, 1 thread setup.
.. raw:: html
<iframe src="https://docs.fd.io/csit/rls1804/report/_static/vpp/64B-1t1c-l2-sel1-ndrdisc.html" width="800" height="1000" frameborder="0">


@@ -0,0 +1,63 @@
.. _performance:
Performance
===========
Overview
^^^^^^^^
One of the benefits of FD.io VPP is high performance on relatively low-power computing. This performance is based on the following features:
* A high-performance user-space network stack designed for commodity hardware.
- L2, L3 and L4 features and encapsulations.
* Optimized packet interfaces supporting a multitude of use cases.
- An integrated vhost-user backend for high speed VM-to-VM connectivity.
- An integrated memif container backend for high speed Container-to-Container connectivity.
- An integrated vhost based interface to punt packets to the Linux Kernel.
* The same optimized code paths run on the host, inside VMs, and inside Linux containers.
* Leverages best-of-breed open source driver technology: `DPDK <https://www.dpdk.org/>`_.
* Tested at scale; linear core scaling, tested with millions of flows and MAC addresses.
These features have been designed to take full advantage of common micro-processor optimization techniques, such as:
* Reducing cache and TLB misses by processing packets in vectors.
* Realizing `IPC <https://en.wikipedia.org/wiki/Instructions_per_cycle>`_ gains with vector instructions such as: SSE, AVX and NEON.
* Eliminating mode switching, context switches and blocking, to always be doing useful work.
* Cache-line aligned buffers for cache and memory efficiency (see the sketch after this list).
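As a small illustration of the last point, the structure below keeps per-packet
metadata aligned to, and sized as, one 64-byte cache line. This is a sketch only
and is not VPP's actual buffer metadata layout.

.. code-block:: c

   /* Illustrative only - not VPP's buffer structure. Sizing and aligning
    * per-packet metadata to one cache line means that touching one
    * packet's metadata never drags a neighbour's cache line in as well. */
   #include <stdint.h>

   #define CACHE_LINE_BYTES 64

   typedef struct
   {
     uint32_t next_index;                    /* next graph node to visit */
     uint32_t current_length;                /* bytes in this packet */
     uint8_t opaque[CACHE_LINE_BYTES - 8];   /* pad to a full cache line */
   } __attribute__ ((aligned (CACHE_LINE_BYTES))) packet_meta_t;

   _Static_assert (sizeof (packet_meta_t) == CACHE_LINE_BYTES,
                   "packet_meta_t must occupy exactly one cache line");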
Packet Throughput Graphs
^^^^^^^^^^^^^^^^^^^^^^^^
These are some of the packet throughput graphs for FD.io VPP 18.04 from the CSIT `18.04 benchmarking report <https://docs.fd.io/csit/rls1804/report/>`_.
.. toctree::
current_l2_throughput.rst
current_ndr_throughput.rst
current_ipv4_throughput.rst
current_ipv6_throughput.rst
Trending Throughput Graphs
^^^^^^^^^^^^^^^^^^^^^^^^^^
These are some of the trending packet throughput graphs from the CSIT `trending dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_. **Please note** that performance in the trending graphs will change nightly, in line with the software development cycle.
.. toctree::
trending_l2_throughput.rst
trending_ipv4_throughput.rst
trending_ipv6_throughput.rst
For More Information on CSIT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
These are links to the FD.io Continuous System Integration and Testing (CSIT) documentation.
* `CSIT Code Documentation <https://docs.fd.io/csit/master/doc/overview.html>`_
* `CSIT Test Overview <https://docs.fd.io/csit/rls1804/report/introduction/overview.html>`_
* `VPP Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_


@@ -0,0 +1,14 @@
.. _trending_ipv4_throughput:
.. toctree::
IPv4 Routed-Forwarding Performance Tests
****************************************
This is a live graph of the IPv4 Routed-Forwarding Performance Tests.
.. raw:: html
<iframe src="https://docs.fd.io/csit/master/trending/_static/vpp/cpta-ip4-1t1c-x520.html" width="1200" height="1000" frameborder="0">


@@ -0,0 +1,16 @@
.. _trending_ipv6_throughput:
.. toctree::
IPv6 Routed-Forwarding Performance Tests
****************************************
VPP NDR 78B packet throughput in 1t1c setup (1 thread, 1 core) is presented in the trending graph below.
.. raw:: html
<iframe src="https://docs.fd.io/csit/master/trending/_static/vpp/cpta-ip6-1t1c-x520-1.html" width="1200" height="1000" frameborder="0">


@@ -0,0 +1,14 @@
.. _trending_l2_throughput:
.. toctree::
L2 Ethernet Switching Performance Tests
***************************************
This is a live graph of the 1 core, 1 thread L2 Ethernet Switching Performance Tests on the x520 NIC.
.. raw:: html
<iframe src="https://docs.fd.io/csit/master/trending/_static/vpp/cpta-l2-1t1c-x520.html" width="1200" height="1000" frameborder="0">


@@ -0,0 +1,27 @@
.. _supported:
.. toctree::
Architectures and Operating Systems
***********************************
Architectures
-----------------------
The FD.io VPP platform supports the following architectures:
* x86/64
* ARM
Operating Systems and Packaging
-------------------------------
FD.io VPP supports package installation on the following
recent LTS operating system releases:
* Debian
* Ubuntu
* CentOS
* OpenSUSE


@@ -0,0 +1,34 @@
.. _packet-processing:
=================
Packet Processing
=================
* Layer 2 - 4 Network Stack
* Fast lookup tables for routes, bridge entries
* Arbitrary n-tuple classifiers
* Control Plane, Traffic Management and Overlays
* `Linux <https://en.wikipedia.org/wiki/Linux>`_ and `FreeBSD <https://en.wikipedia.org/wiki/FreeBSD>`_ support
* Wide support for standard Operating System Interfaces such as AF_Packet, Tun/Tap & Netmap.
* Wide network and cryptographic hardware support with `DPDK <https://www.dpdk.org/>`_.
* Container and Virtualization support
* Para-virtualized interfaces; Vhost and Virtio
* Network Adapters over PCI passthrough
* Native container interfaces; MemIF
* Universal Data Plane: one code base, for many use cases
* Discrete appliances; such as `Routers <https://en.wikipedia.org/wiki/Router_(computing)>`_ and `Switches <https://en.wikipedia.org/wiki/Network_switch>`_.
* `Cloud Infrastructure and Virtual Network Functions <https://en.wikipedia.org/wiki/Network_function_virtualization>`_
* `Cloud Native Infrastructure <https://www.cncf.io/>`_
* The same binary package for all use cases.
* Out of the box production quality, with thanks to `CSIT <https://wiki.fd.io/view/CSIT#Start_Here>`_.
For more information, please see :ref:`features` for the complete list.


@@ -0,0 +1,24 @@
.. _developer-friendly:
==================
Developer Friendly
==================
* Extensive runtime counters: throughput, `instructions per cycle <https://en.wikipedia.org/wiki/Instructions_per_cycle>`_, errors, events, etc.
* Integrated pipeline tracing facilities
* Multi-language API bindings
* Integrated command line for debugging
* Fault-tolerant and upgradable
* Runs as a standard user-space process for fault tolerance; software crashes seldom require more than a process restart.
* Improved fault-tolerance and upgradability compared to running similar packet processing in the kernel; software updates never require system reboots.
* The development experience is easier than with comparable kernel code
* Hardware isolation and protection (`iommu <https://en.wikipedia.org/wiki/Input%E2%80%93output_memory_management_unit>`_)
* Built for security
* Extensive white-box testing
* Image segment base address randomization
* Shared-memory segment base address randomization
* Stack bounds checking
* Static analysis with `Coverity <https://en.wikipedia.org/wiki/Coverity>`_


@@ -0,0 +1,39 @@
.. _extensible:
=============================
Extensible and Modular Design
=============================
* Pluggable, easy to understand & extend
* Mature graph node architecture
* Full control to reorganize the pipeline
* Fast, plugins are equal citizens
**Modular, Flexible, and Extensible**
The FD.io VPP packet processing pipeline is decomposed into a packet processing
graph. This modular approach means that anyone can plug in new graph
nodes. This makes VPP easily extensible, and it means that plugins can be
customized for specific purposes. VPP is also configurable through its
Low-Level API.
.. figure:: /_images/VPP_custom_application_packet_processing_graph.280.jpg
:alt: Extensible, modular graph node architecture
Extensible and modular graph node architecture.
At runtime, the FD.io VPP platform assembles a vector of packets from RX rings,
typically up to 256 packets in a single vector. The packet processing graph is
then applied, node by node (including plugins) to the entire packet vector. The
received packets typically traverse the packet processing graph nodes in the
vector, as the network processing represented by each graph node is applied to
each packet in turn. Graph nodes are small and modular, and loosely
coupled. This makes it easy to introduce new graph nodes and rewire existing
graph nodes.
Plugins are `shared libraries <https://en.wikipedia.org/wiki/Library_(computing)>`_
and are loaded at runtime by VPP. VPP finds plugins by searching the plugin path
for libraries, and then dynamically loads each one in turn on startup.
A plugin can introduce new graph nodes or rearrange the packet processing graph.
You can build a plugin completely independently of the FD.io VPP source tree,
which means you can treat it as an independent component.
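The toy program below models this architecture in plain C: a node is just a name
plus a dispatch function that receives the whole packet vector, and a plugin
extends the pipeline by contributing one more node to the table. It is a
conceptual sketch only, not the VPP plugin API (in VPP, nodes are declared with
the VLIB_REGISTER_NODE macro and plugins are shared libraries discovered on the
plugin path).

.. code-block:: c

   /* Conceptual model of a packet processing graph - not VPP code. */
   #include <stdio.h>

   #define VECTOR_SIZE 256

   typedef struct
   {
     unsigned char data[64];
     unsigned int length;
   } packet_t;

   /* A node's dispatch function is handed the entire vector of packets. */
   typedef void (*node_fn_t) (packet_t *pkts, int n_pkts);

   typedef struct
   {
     const char *name;
     node_fn_t dispatch;
   } graph_node_t;

   static void
   ethernet_input (packet_t *pkts, int n_pkts)
   {
     /* A real node would parse headers and choose each packet's next node. */
     for (int i = 0; i < n_pkts; i++)
       pkts[i].length = sizeof (pkts[i].data);
   }

   static void
   my_plugin_node (packet_t *pkts, int n_pkts)
   {
     /* A node contributed by a plugin: some custom per-packet feature. */
     unsigned long total_bytes = 0;
     for (int i = 0; i < n_pkts; i++)
       total_bytes += pkts[i].length;
     printf ("my-plugin-node saw %lu bytes\n", total_bytes);
   }

   int
   main (void)
   {
     /* The "graph": a plugin extends the pipeline by adding a node. */
     graph_node_t graph[] = {
       { "ethernet-input", ethernet_input },
       { "my-plugin-node", my_plugin_node },
     };
     static packet_t vector[VECTOR_SIZE];   /* one vector of packets */

     /* Apply the graph, node by node, to the whole vector at once. */
     for (unsigned i = 0; i < sizeof (graph) / sizeof (graph[0]); i++)
       graph[i].dispatch (vector, VECTOR_SIZE);
     return 0;
   }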


@@ -0,0 +1,16 @@
.. _fast:
================================
Fast, Scalable and Deterministic
================================
* `Continuous integration and system testing <https://wiki.fd.io/view/CSIT#Start_Here>`_
* Including continuous & extensive, latency and throughput testing
* Layer 2 Cross Connect (L2XC) typically achieves 15+ Mpps per core.
* Tested to achieve **zero** packet drops and ~15µs latency.
* Performance scales linearly with core/thread count
* Supporting millions of concurrent lookup table entries
Please see :ref:`performance` for more information.


@@ -0,0 +1,27 @@
.. _whatisvpp:
=========================================
What is VPP?
=========================================
FD.io's Vector Packet Processing (VPP) technology is a :ref:`fast`,
:ref:`packet-processing` stack that runs on commodity CPUs. It provides
out-of-the-box production quality switch/router functionality and much, much
more. FD.io VPP is, at the same time, an :ref:`extensible` and
:ref:`developer-friendly` framework, capable of boot-strapping the development
of packet-processing applications. The benefits of FD.io VPP are its high
performance, proven technology, modularity, flexibility, integrations, and
rich feature set.
FD.io VPP is vector packet processing software; to learn more about what that
means, see the :ref:`what-is-vector-packet-processing` section.
For more detailed information on FD.io features, see the following sections:
.. toctree::
:maxdepth: 1
dataplane.rst
fast.rst
developer.rst
extensible.rst


@@ -0,0 +1,73 @@
:orphan:
.. _what-is-vector-packet-processing:
=================================
What is vector packet processing?
=================================
FD.io VPP is developed using vector packet processing concepts, as opposed to
scalar packet processing; these concepts are explained in the following sections.
Vector packet processing is a common approach among high performance `Userspace
<https://en.wikipedia.org/wiki/User_space>`_ packet processing applications such
as developed with FD.io VPP and `DPDK
<https://en.wikipedia.org/wiki/Data_Plane_Development_Kit>`_. The scalar-based
approach tends to be favoured by Operating System `Kernel
<https://en.wikipedia.org/wiki/Kernel_(operating_system)>`_ Network Stacks and
Userspace stacks that don't have strict performance requirements.
**Scalar Packet Processing**
A scalar packet processing network stack typically processes one packet at a
time: an interrupt handling function takes a single packet from a Network
Interface, and processes it through a set of functions: fooA calls fooB calls
fooC and so on.
.. code-block:: none
+---> fooA(packet1) +---> fooB(packet1) +---> fooC(packet1)
+---> fooA(packet2) +---> fooB(packet2) +---> fooC(packet2)
...
+---> fooA(packet3) +---> fooB(packet3) +---> fooC(packet3)
Scalar packet processing is simple, but inefficient in these ways:
* When the code path length exceeds the size of the Microprocessor's instruction
cache (I-cache), `thrashing
<https://en.wikipedia.org/wiki/Thrashing_(computer_science)>`_ occurs as the
Microprocessor is continually loading new instructions. In this model, each
packet incurs an identical set of I-cache misses.
* The associated deep call stack will also add load-store-unit pressure as
stack-locals fall out of the Microprocessor's Layer 1 Data Cache (D-cache).
**Vector Packet Processing**
In contrast, a vector packet processing network stack processes multiple packets
at a time, called 'vectors of packets' or simply a 'vector'. An interrupt
handling function takes the vector of packets from a Network Interface, and
processes the vector through a set of functions: fooA calls fooB calls fooC and
so on.
.. code-block:: none
+---> fooA([packet1, +---> fooB([packet1, +---> fooC([packet1, +--->
packet2, packet2, packet2,
... ... ...
packet256]) packet256]) packet256])
This approach fixes:
* The I-cache thrashing problem described above, by amortizing the cost of
I-cache loads across multiple packets.
* The inefficiencies associated with the deep call stack, by receiving vectors
of up to 256 packets at a time from the Network Interface and processing them
using a directed graph of nodes. The graph scheduler invokes one node dispatch
function at a time, restricting stack depth to a few stack frames.
This approach enables further optimizations, such as pipelining and prefetching,
which minimize read latency on table data and parallelize the packet loads
needed to process packets.
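The self-contained sketch below restates the scalar/vector contrast in code (it
is not VPP source): ``scalar_path`` pushes each packet through the whole
fooA/fooB/fooC chain one at a time, while ``vector_path`` runs each stage across
the whole vector, amortizing instruction-cache loads over up to 256 packets and
prefetching the next packet's data while the current one is being processed.

.. code-block:: c

   /* Scalar versus vector processing, as an illustration only. */
   #include <stddef.h>

   #define VECTOR_SIZE 256

   typedef struct { unsigned char data[64]; } packet_t;

   static void fooA (packet_t *p) { (void) p; /* e.g. decapsulate */ }
   static void fooB (packet_t *p) { (void) p; /* e.g. table lookup */ }
   static void fooC (packet_t *p) { (void) p; /* e.g. rewrite and enqueue */ }

   /* Scalar: fooA -> fooB -> fooC per packet; every packet re-walks the
    * full instruction footprint, re-missing the I-cache each time. */
   static void
   scalar_path (packet_t *pkts, size_t n)
   {
     for (size_t i = 0; i < n; i++)
       {
         fooA (&pkts[i]);
         fooB (&pkts[i]);
         fooC (&pkts[i]);
       }
   }

   /* Vector: each stage processes the whole vector before the next stage
    * runs, so the cost of loading a stage's instructions is spread over
    * up to 256 packets, and the next packet can be prefetched while the
    * current one is processed. */
   static void
   vector_path (packet_t *pkts, size_t n)
   {
     for (size_t i = 0; i < n; i++)
       {
         if (i + 1 < n)
           __builtin_prefetch (&pkts[i + 1]);
         fooA (&pkts[i]);
       }
     for (size_t i = 0; i < n; i++)
       fooB (&pkts[i]);
     for (size_t i = 0; i < n; i++)
       fooC (&pkts[i]);
   }

   int
   main (void)
   {
     static packet_t pkts[VECTOR_SIZE];
     scalar_path (pkts, VECTOR_SIZE);
     vector_path (pkts, VECTOR_SIZE);
     return 0;
   }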