docs: update spelling word list and fix typos

- update wordlist and fix typos so that 'make docs-spell' passes
- sort spelling_wordlist.txt
- update docs maintainers list

Type: docs

Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
Change-Id: I38ac7850c604c323427d2bb6877ea98bd10bcc38
Dave Wallace
2022-05-24 21:25:55 -04:00
parent e0301eeb7b
commit dac97e2c62
4 changed files with 742 additions and 729 deletions


@@ -41,6 +41,7 @@ F: src/vnet/bonding/
 Sphinx Documents
 I: docs
 M: John DeNisco <jdenisco@cisco.com>
+M: Dave Wallace <dwallacelf@gmail.com>
 F: docs/
 Infrastructure Library

File diff suppressed because it is too large


@@ -10,7 +10,7 @@ back-and-forth (i.e. ICMP echo/ping requests and HTTP GET requests).
 The intent of this example is to provide a relatively simple example of
 connecting containers via VPP and allowing others to use it as a springboard of
 sorts for their own projects and examples. Besides Docker and a handful of
-common Linux command-line utlities, not much else is required to build this
+common Linux command-line utilities, not much else is required to build this
 example (due to most of the dependencies being lumped inside the containers
 themselves).
@@ -60,7 +60,7 @@ project.
 other scripts in this project. Intended to be sourced (i.e. not intended to
 be run directly). Some of the helper functions are used at run-time within
 the containers, while others are intended to be run in the default namespace
-on the host operating system to help with run-time configuration/bringup of
+on the host operating system to help with run-time configuration/bring up of
 the testbench.
 * ``Dockerfile.vpp_testbench``: used to build the various Docker images used in
 this project (i.e. so VPP, our test tools, etc.; are all encapsulated within
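
As a rough sketch of how those two pieces fit together (the helper script name
and image tag below are illustrative placeholders, not necessarily the names
the project's Makefile uses):

.. code-block:: shell

   # Placeholder names: substitute the actual helper script and image tag that
   # the project's Makefile uses.
   . ./vpp_testbench_helpers.sh      # pull in the shared helper functions

   # Build the common image both containers are based on, from the shared Dockerfile.
   docker build --file Dockerfile.vpp_testbench --tag vpp-testbench:local .
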
@@ -81,7 +81,7 @@ Getting Started
 ^^^^^^^^^^^^^^^
 First, we'll assume you are running on a Ubuntu 20.04 x86_64 setup (either on a
-bare metal host or in a virtual machine), and have acquirec a copy of the
+bare metal host or in a virtual machine), and have acquired a copy of the
 project files (either by cloning the VPP git repository, or duplicating them
 from :ref:`sec_file_listings_vpp_testbench`). Now, just run ``make``. The
 process should take a few minutes as it pulls the baseline Ubuntu Docker image,
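
Roughly, the bootstrap amounts to the following (the sub-directory holding the
testbench files is illustrative; use its actual location in your checkout):

.. code-block:: shell

   # The docs sub-directory below is a guess; adjust it to wherever the
   # vpp_testbench files live in your checkout.
   git clone https://gerrit.fd.io/r/vpp
   cd vpp/docs/usecases/vpp_testbench
   make    # builds the Docker images; expect this to take a few minutes
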
@@ -96,11 +96,11 @@ can be cleaned-up via ``make stop``, and the whole process of starting,
 testing, stopping, etc.; can be repeated as needed.
 In addition to starting up the containers, ``make start`` will establish
-variaous types of links/connections between the two containers, making use of
+various types of links/connections between the two containers, making use of
 both the Linux network stack, as well as VPP, to handle the "plumbing"
 involved. This is to allow various types of connections between the two
 containers, and to allow the reader to experiment with them (i.e. using
-``vppctl`` to congfigure or trace packets going over VPP-managed links, use
+``vppctl`` to configure or trace packets going over VPP-managed links, use
 traditional Linux command line utilities like ``tcpdump``, ``iproute2``,
 ``ping``, etc.; to accomplish similar tasks over links managed purely by the
 Linux network stack, etc.). Later labs will also encourage readers to compare
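
For example, once ``make start`` has brought the containers up, a quick
side-by-side comparison might look like the sketch below; the container,
interface, and address names are placeholders, and the trace input node depends
on how the VPP-managed link is configured:

.. code-block:: shell

   # "vpp-testbench-client", the address, and the interface name are placeholders.
   make start

   # VPP-managed link: trace packets inside the client container's VPP instance.
   docker exec vpp-testbench-client vppctl trace add af-packet-input 50
   docker exec vpp-testbench-client ping -c 3 <server-address-on-vpp-link>
   docker exec vpp-testbench-client vppctl show trace

   # Linux-managed link: ordinary kernel tooling works as usual.
   docker exec vpp-testbench-client tcpdump -c 3 -i <linux-managed-interface>

   make stop
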
@@ -177,4 +177,3 @@ entrypoint_server.sh
    :caption: entrypoint_server.sh
    :language: shell
    :linenos:


@@ -20,14 +20,14 @@ impractical to parse headers which are split over multiple vnet
 buffers, vnet_buffer_chain_linearize() is called after reassembly so
 that L2/L3/L4 headers can be found in first buffer. Full reassembly
 is costly and shouldn't be used unless necessary. Full reassembly is by
-default enabled for both ipv4 and ipv6 traffic for "forus" traffic
+default enabled for both ipv4 and ipv6 "for us" traffic
 - that is packets aimed at VPP addresses. This can be disabled via API
-if desired, in which case "forus" fragments are dropped.
+if desired, in which case "for us" fragments are dropped.
 2. Shallow (virtual) reassembly allows various classifying and/or
 translating features to work with fragments without having to
 understand fragmentation. It works by extracting L4 data and adding
-them to vnet_buffer for each packet/fragment passing throught SVR
+them to vnet_buffer for each packet/fragment passing through SVR
 nodes. This operation is performed for both fragments and regular
 packets, allowing consuming code to treat all packets in same way. SVR
 caches incoming packet fragments (buffers) until first fragment is
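
A simple way to watch "for us" reassembly in action on a running instance is to
aim fragmented pings at a VPP-owned address and inspect the packet trace; the
input graph node below depends on the interface type in use:

.. code-block:: shell

   # Payloads larger than the MTU force IPv4 fragmentation, so the echo requests
   # arrive as fragments and pass through the full-reassembly nodes.
   vppctl trace add af-packet-input 20    # substitute your interface's input node
   ping -c 2 -s 2400 <vpp-owned-address>
   vppctl show trace                      # look for the full-reassembly nodes in the trace
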
@@ -42,7 +42,7 @@ Multi-worker behaviour
 Both reassembly types deal with fragments arriving on different workers
 via handoff mechanism. All reassembly contexts are stored in pools.
 Bihash mapping 5-tuple key to a value containing pool index and thread
-index is used for lookups. When a lookup finds an existing reasembly on
+index is used for lookups. When a lookup finds an existing reassembly on
 a different thread, it hands off the fragment to that thread. If lookup
 fails, a new reassembly context is created and current worker becomes
 owner of that context. Further fragments received on other worker
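
On a multi-worker instance this can be glanced at from the CLI; ``show
threads`` is a standard command, while the reassembly-specific show command
named below is release-dependent and may differ on your build:

.. code-block:: shell

   # Workers that reassembly contexts can be owned by (and handed off between).
   vppctl show threads

   # Active IPv4 full-reassembly contexts; the command name may vary by release.
   vppctl show ip4-full-reassembly details
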
@@ -64,7 +64,7 @@ fragments per packet.
 Custom applications
 ^^^^^^^^^^^^^^^^^^^
-Both reassembly features allow to be used by custom applicatind which
+Both reassembly features allow to be used by custom application which
 are not part of VPP source tree. Be it patches or 3rd party plugins,
 they can build their own graph paths by using "-custom*" versions of
 nodes. Reassembly then reads next_index and error_next_index for each
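
The node names available in a given build, including the "-custom*" variants,
can be listed from the CLI, for example:

.. code-block:: shell

   # Dump the node graph and filter for reassembly nodes; exact names vary by release.
   vppctl show vlib graph | grep -i reass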