docs: cleanup typos on readthrough

Type: style

Change-Id: I3b15035ea6c13cd1ca3cdc9dfa9b10a6e1be9880
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
Author:    Paul Vinciguerra <pvinci@vinciconsulting.com>
Date:      2019-10-27 17:28:10 -04:00
Committer: Dave Barach
Parent:    3b5e222f8a
Commit:    7fa3dd2881

51 changed files with 101 additions and 102 deletions


@@ -10,7 +10,7 @@
 - 19.05 integration
 - Remove bonding code
 - Rework extended stats
-- Debugging & Servicability
+- Debugging & Serviceability
 - debug CLI leak-checker
 - vlib: add "memory-trace stats-segment"
 - vppapitrace JSON/API trace converter
@@ -130,7 +130,7 @@
 - support quic streams and "connectable listeners"
 - worker unregister api
 - fix epoll with large events batch
-- ldp: add option to eanble transparent TLS connections
+- ldp: add option to enable transparent TLS connections
 - udp:
 - support close with data
 - fixed session migration


@@ -74,7 +74,7 @@ search across 2**log2_size backing pages on a per-bucket basis.
 To maintain *space* efficiency, we should configure the bucket array
 so that backing pages are effectively utilized. Lookup performance
-tends to change *very litte* if the bucket array is too small or too
+tends to change *very little* if the bucket array is too small or too
 large.
 Bihash depends on selecting an effective hash function. If one were to
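For readers tuning the bucket array, a minimal bihash usage sketch follows; the table name, bucket count, and arena size are illustrative assumptions rather than values taken from this patch, and the 8_8 variant (8-byte key, 8-byte value) is just one of the available template instantiations.

    #include <vppinfra/bihash_8_8.h>

    static void
    bihash_sketch (void)
    {
      clib_bihash_8_8_t h;
      clib_bihash_kv_8_8_t kv, search, result;

      /* Illustrative sizing: pick nbuckets in rough proportion to the
         expected number of entries so the backing pages stay well used. */
      u32 nbuckets = 64 << 10;          /* 64K buckets (assumed workload) */
      uword memory_size = 1ULL << 28;   /* 256 MB arena (assumed) */

      clib_bihash_init_8_8 (&h, "example table", nbuckets, memory_size);

      kv.key = 0x1234567890abcdefULL;
      kv.value = 42;
      clib_bihash_add_del_8_8 (&h, &kv, 1 /* is_add */);

      search.key = kv.key;
      if (clib_bihash_search_8_8 (&h, &search, &result) == 0)
        {
          /* result.value now holds 42 */
        }

      clib_bihash_free_8_8 (&h);
    }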


@@ -93,7 +93,7 @@ implement a variety of features:
 * Barrier synchronization of worker threads across thread-unsafe message handlers.
 Correctly-coded message handlers know nothing about the transport used
-to deliver messages to/from VPP. It's reasonably straighforward to use
+to deliver messages to/from VPP. It's reasonably straightforward to use
 multiple API message transport types simultaneously.
 For historical reasons, binary api messages are (putatively) sent in


@@ -164,7 +164,7 @@ Once the packages are built they can be found in the build-root directory.
 vpp-plugins_18.07-rc0~456-gb361076_amd64.deb
 Finally, the created packages can be installed using the following commands. Install
-the package that correspnds to OS that VPP will be running on:
+the package that corresponds to OS that VPP will be running on:
 For Ubuntu:


@@ -156,7 +156,7 @@ Here are the contents of .../build-data/platforms/vpp.mk:
 vpp_uses_dpdk = yes
-# Uncoment to enable building unit tests
+# Uncomment to enable building unit tests
 # vpp_enable_tests = yes
 vpp_root_packages = vpp vom


@@ -68,7 +68,7 @@ stacking occurs, the necessary VLIB graph arcs are automatically constructed
 from the respected DPO type's registered graph nodes.
 The diagrams above show that for any given route the full data-plane graph is
-known before anypacket arrives. If that graph is composed of n objects, then the
+known before any packet arrives. If that graph is composed of n objects, then the
 packet will visit n nodes and thus incur a forwarding cost of approximately n
 times the graph node cost. This could be reduced if the graph were *collapsed*
 into a single DPO and associated node. However, collapsing a graph removes the
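To put rough, illustrative numbers on that cost model (these figures are assumptions for the sake of example, not measurements from this document): if a route's forwarding graph is composed of n = 4 DPO nodes and each node costs on the order of 100 clock cycles per packet, the packet pays roughly 4 x 100 = 400 clocks; collapsing the graph into a single DPO and node would cut that to about 100 clocks, at the cost the text goes on to describe.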


@@ -15,8 +15,8 @@ child to parent relationship is thus fully known to the child, and hence a forwa
 walk of the graph (from child to parent) is trivial. However, a parent does not choose
 its children, it does not even choose the type. All object types that form part of the
 FIB control plane graph all inherit from a single base class14; *fib_node_t*. A *fib_node_t*
-indentifies the object's index and its associated virtual function table provides the
-parent a mechanism to Զisitՠthat object during the walk. The reason for a back-walk
+identifies the object's index and its associated virtual function table provides the
+parent a mechanism to visit that object during the walk. The reason for a back-walk
 is to inform all children that the state of the parent has changed in some way, and
 that the child may itself need to update.
@@ -65,7 +65,7 @@ Choosing between a synchronous and an asynchronous walk is therefore a trade-off
 time it takes to propagate a change in the parent to all of its children, versus the
 time it takes to act on a single route update. For example, if a route update where to
 affect millions of child recursive routes, then the rate at which such updates could be
-processed would be dependent on the number of child recursive route Рwhich would not be
+processed would be dependent on the number of child recursive route which would not be
 good. At the time of writing FIB2.0 uses synchronous walk in all locations except when
 walking the children of a path-list, and it has more than 32 [#f15]_ children. This avoids the
 case mentioned above.


@@ -10,19 +10,19 @@ the route is resolved as the graph is complete from *fib_entry_t* to *ip_adjacen
 In some routing models a VRF will consist of a set of tables for IPv4 and IPv6, and
 unicast and multicast. In VPP there is no such grouping. Each table is distinct from
-each other. A table is indentified by its numerical ID. The ID range is separate for
+each other. A table is identified by its numerical ID. The ID range is separate for
 each address family.
 A table is comprised of two route data-bases; forwarding and non-forwarding. The
 forwarding data-base contains routes against which a packet will perform a longest
 prefix match (LPM) in the data-plane. The non-forwarding DB contains all the routes
-with which VPP has been programmed Рsome of these routes may be unresolved for reasons
+with which VPP has been programmed some of these routes may be unresolved for reasons
 that prevent their insertion into the forwarding DB
 (see section: Adjacency source FIB entries).
 The route data is decomposed into three parts; entry, path-list and paths;
-* The *fib_entry_t*, which contains the routeճ prefix, is representation of that prefix's entry in the FIB table.
+* The *fib_entry_t*, which contains the routes prefix, is representation of that prefix's entry in the FIB table.
 * The *fib_path_t* is a description of where to send the packets destined to the route's prefix. There are several types of path.
 * Attached next-hop: the path is described with an interface and a next-hop. The next-hop is in the same sub-net as the router's own address on that interface, hence the peer is considered to be *attached*
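As a concrete illustration of the attached next-hop case above, a route whose path-list holds two attached next-hop paths could be configured from the CLI roughly as follows; the prefix, next-hop addresses and interface names are invented for illustration, and the exact CLI syntax may differ between VPP versions:

    $ ip route add 1.1.1.0/24 via 10.0.0.2 GigabitEthernet0/8/0
    $ ip route add 1.1.1.0/24 via 10.0.1.2 GigabitEthernet0/9/0
    $ show ip fib 1.1.1.0/24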
@@ -37,10 +37,10 @@ The route data is decomposed into three parts; entry, path-list and paths;
 .. figure:: /_images/fib20fig2.png
-Figure 2: Route data model Рclass diagram
+Figure 2: Route data model class diagram
 Figure 2 shows an example of a route with two attached-next-hop paths. Each of these
-paths will *resolve* by finding the adjacency that matches the pathճ attributes, which
+paths will *resolve* by finding the adjacency that matches the paths attributes, which
 are the same as the key for the adjacency data-base [#f3]_. The *forwarding information (FI)*
 is the set of adjacencies that are available for load-balancing the traffic in the
 data-plane. A path *contributes* an adjacency to the route's forwarding information, the
@@ -68,10 +68,10 @@ forwarding information of multiple sources to be combined. Instead the FIB must
 to use the forwarding information from only one source. This choice is based on a static
 priority assignment [#f4]_. The FIB must maintain the information each source has added
 so it can be restored should that source become the best source. VPP has two
-*control-plane* sources; the API and the CLI Рthe API has the higher priority.
+*control-plane* sources; the API and the CLI the API has the higher priority.
 Each *source* data is represented by a *fib_entry_src_t* object of which a
 *fib_entry_t* maintains a sorted vector.n A prefix is *connected* when it is
-applied to a routerճ interface.
+applied to a routers interface.
 The following configuration:
@@ -84,7 +84,7 @@ attached, and 192.168.1.1/32 which is connected and local (a.k.a receive or for-
 Both prefixes are *interface* sourced. The interface source has a high priority, so
 the accidental or nefarious addition of identical prefixes does not prevent the
 router from correctly forwarding. Packets matching a connected prefix will
-generate an ARP request for the packetճ destination address, this process is known
+generate an ARP request for the packets destination address, this process is known
 as a *glean*.
 An *attached* prefix also results in a glean, but the router does not have its own
@@ -147,7 +147,7 @@ So while the following configuration is accepted:
 $ ip arp 192.168.1.2 GigabitEthernet0/8/0 dead.dead.dead
 $ set interface ip table GigabitEthernet0/8/0 2
-it does not result in the desired behaviour, where the adj-fib and connecteds are
+it does not result in the desired behaviour, where the adj-fib and connected adjacencies are
 moved to table 2.
 Recursive Routes


@@ -93,7 +93,7 @@ IP6 Heap
 The IPv6 heap is used to allocate the memory needed for the
 data-structure within which the IPv6 prefixes are stored. IPv6 also
 has the concept of forwarding and non-forwarding entries, however for
-IPv6 all the forwardind entries are stored in a single hash table
+IPv6 all the forwarding entries are stored in a single hash table
 (same goes for the non-forwarding). The key to the hash table includes
 the IPv6 table-id.


@@ -23,7 +23,7 @@ graph/chain rather than its usual terminal location.
 The mid-chain adjacency is contributed by the gre_tunnel_t , which also becomes
 part of the FIB control-plane graph. Consequently it will be visited by a
-back-walk when the forwarding information for the tunnelճ destination changes.
+back-walk when the forwarding information for the tunnel's destination changes.
 This will trigger it to restack the mid-chain adjacency on the new
 *load_balance_t* contributed by the parent *fib_entry_t*.


@@ -32,7 +32,7 @@ or abort signal, then you can run the VPP debug binary and then execute **backtr
 (gdb) bt
 #0 ip4_icmp_input (vm=0x7ffff7b89a40 <vlib_global_main>, node=0x7fffb6bb6900, frame=0x7fffb6725ac0) at /scratch/vpp-master/build-data/../src/vnet/ip/icmp4.c:187
 #1 0x00007ffff78da4be in dispatch_node (vm=0x7ffff7b89a40 <vlib_global_main>, node=0x7fffb6bb 6900, type=VLIB_NODE_TYPE_INTERNAL, dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffb6725ac0, last_time_stamp=10581236529 65565) at /scratch/vpp-master/build-data/../src/vlib/main.c:988
-#2 0x00007ffff78daa77 in dispatch_pending_node (vm=0x7ffff7b89a40 <vlib_global_main>, pending_f rame_index=6, last_time_stamp=1058123652965565) at /scratch/vpp-master/build-data/../src/vlib/main.c:1138
+#2 0x00007ffff78daa77 in dispatch_pending_node (vm=0x7ffff7b89a40 <vlib_global_main>, pending_frame_index=6, last_time_stamp=1058123652965565) at /scratch/vpp-master/build-data/../src/vlib/main.c:1138
 ....
 Get to the GDB prompt


@@ -161,7 +161,7 @@ format specification. For example:
 format\_junk() can invoke other user-format functions if desired. The
 programmer shoulders responsibility for argument type-checking. It is
-typical for user format functions to blow up spectaculary if the
+typical for user format functions to blow up spectacularly if the
 va\_arg(va, type) macros don't match the caller's idea of reality.
 Unformat
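For readers unfamiliar with the user-format convention discussed above, a minimal sketch of such a function follows; format_junk and its single u32 argument are hypothetical and only illustrate the va_arg pattern the text warns about:

    #include <vppinfra/format.h>

    /* Hypothetical user format function: consumes one u32 from the va_list.
       If the caller passes anything other than a u32, the va_arg below
       silently reads the wrong bytes (the "blow up spectacularly" case). */
    u8 *
    format_junk (u8 * s, va_list * args)
    {
      u32 value = va_arg (*args, u32);
      return format (s, "junk: %u", value);
    }

    /* Usage: the %U directive hands the va_list to the user function:
       u8 *str = format (0, "%U", format_junk, 42); */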


@@ -31,7 +31,7 @@ the pre-data (rewrite space) area.
 * VNET_BUFFER_F_L4_CHECKSUM_COMPUTED: tcp/udp checksum has been computed
 * VNET_BUFFER_F_L4_CHECKSUM_CORRECT: tcp/udp checksum is correct
 * VNET_BUFFER_F_VLAN_2_DEEP: two vlan tags present
-* VNET_BUFFER_F_VLAN_1_DEEP: one vlag tag present
+* VNET_BUFFER_F_VLAN_1_DEEP: one vlan tag present
 * VNET_BUFFER_F_SPAN_CLONE: packet has already been cloned (span feature)
 * VNET_BUFFER_F_LOOP_COUNTER_VALID: packet look-up loop count valid
 * VNET_BUFFER_F_LOCALLY_ORIGINATED: packet built by vpp
@@ -48,13 +48,13 @@ the pre-data (rewrite space) area.
 * VNET_BUFFER_F_IS_DVR: packet to be reinjected into the l2 output path
 * VNET_BUFFER_F_QOS_DATA_VALID: QoS data valid in vnet_buffer_opaque2
 * VNET_BUFFER_F_GSO: generic segmentation offload requested
-* VNET_BUFFER_F_AVAIL1: avaliable bit
-* VNET_BUFFER_F_AVAIL2: avaliable bit
-* VNET_BUFFER_F_AVAIL3: avaliable bit
-* VNET_BUFFER_F_AVAIL4: avaliable bit
-* VNET_BUFFER_F_AVAIL5: avaliable bit
-* VNET_BUFFER_F_AVAIL6: avaliable bit
-* VNET_BUFFER_F_AVAIL7: avaliable bit
+* VNET_BUFFER_F_AVAIL1: available bit
+* VNET_BUFFER_F_AVAIL2: available bit
+* VNET_BUFFER_F_AVAIL3: available bit
+* VNET_BUFFER_F_AVAIL4: available bit
+* VNET_BUFFER_F_AVAIL5: available bit
+* VNET_BUFFER_F_AVAIL6: available bit
+* VNET_BUFFER_F_AVAIL7: available bit
 * u32 flow_id: generic flow identifier
 * u8 ref_count: buffer reference / clone count (e.g. for span replication)
 * u8 buffer_pool_index: buffer pool index which owns this buffer
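As a quick illustration of how node code typically consults these flag bits, here is a minimal sketch; the helper and the particular combination of flags checked are illustrative assumptions, not code from this patch:

    #include <vlib/vlib.h>
    #include <vnet/buffer.h>

    /* Illustrative helper: decide whether a buffer still needs a TCP/UDP
       checksum computed in software, based on the flag bits listed above. */
    static inline int
    needs_sw_l4_checksum (vlib_buffer_t * b)
    {
      if (b->flags & VNET_BUFFER_F_GSO)
        return 0;                       /* offload requested */
      if ((b->flags & VNET_BUFFER_F_L4_CHECKSUM_COMPUTED)
          && (b->flags & VNET_BUFFER_F_L4_CHECKSUM_CORRECT))
        return 0;                       /* already computed and correct */
      return 1;
    }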


@@ -3,7 +3,7 @@
 Multi-architecture support
 ==========================
-This reference guide describes how to use the vpp muli-architecture support scheme
+This reference guide describes how to use the vpp multi-architecture support scheme
 .. toctree::
 :maxdepth: 1


@@ -325,7 +325,7 @@ these data to easily filter/track single packets as they traverse the
 forwarding graph.
 Multiple records per packet are normal, and to be expected. Packets
-will appear multipe times as they traverse the vpp forwarding
+will appear multiple times as they traverse the vpp forwarding
 graph. In this way, vpp graph dispatch traces are significantly
 different from regular network packet captures from an end-station.
 This property complicates stateful packet analysis.
@@ -494,7 +494,7 @@ These commands have the following optional parameters:
 capture is off.
 - <b>max-bytes-per-pkt _nnnn_</b> - maximum number of bytes to trace
-on a per-paket basis. Must be >32 and less than 9000. Default value:
+on a per-packet basis. Must be >32 and less than 9000. Default value:
 512.
 - <b>filter</b> - Use the pcap rx / tx / drop trace filter, which must
@@ -529,7 +529,7 @@ These commands have the following optional parameters:
 The "classify filter pcap | <interface-name> " debug CLI command
 constructs an arbitrary set of packet classifier tables for use with
 "pcap rx | tx | drop trace," and with the vpp packet tracer on a
-per-interrface basis.
+per-interface basis.
 Packets which match a rule in the classifier table chain will be
 traced. The tables are automatically ordered so that matches in the
@@ -538,7 +538,7 @@ most specific table are tried first.
 It's reasonably likely that folks will configure a single table with
 one or two matches. As a result, we configure 8 hash buckets and 128K
 of match rule space by default. One can override the defaults by
-specifiying "buckets <nnn>" and "memory-size <xxx>" as desired.
+specifying "buckets <nnn>" and "memory-size <xxx>" as desired.
 To build up complex filter chains, repeatedly issue the classify
 filter debug CLI command. Each command must specify the desired mask
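As a sketch of the workflow described above (the mask/match expression and packet count are assumptions for illustration; consult the CLI help for the exact syntax on a given VPP version), one might build a filter table and then run a filtered rx capture:

    $ vppctl classify filter pcap mask l3 ip4 src match l3 ip4 src 192.168.1.11
    $ vppctl pcap trace rx max 1000 filter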


@@ -56,7 +56,7 @@ _______
 **Low-level API**
-Refer to inline API documentation in doxygen format in vapi.h header for description of functions. It's recommened to use the safer, high-level API provided by specialized headers (e.g. vpe.api.vapi.h or vpe.api.vapi.hpp).
+Refer to inline API documentation in doxygen format in vapi.h header for description of functions. It's recommended to use the safer, high-level API provided by specialized headers (e.g. vpe.api.vapi.h or vpe.api.vapi.hpp).
 **C high-level API**
@@ -113,7 +113,7 @@ _________
 *Create a Connection and execute the appropriate Request to subscribe to events (e.g. Want_stats)*
-#. Create an Event_registration with a template argument being the type of event you are insterested in.
+#. Create an Event_registration with a template argument being the type of event you are interested in.
 #. Call dispatch() or wait_for_response() to wait for the event. A callback will be called when an event occurs (if passed to Event_registration() constructor). Alternatively, read the result set.
 .. note::


@@ -7,7 +7,7 @@ Progressive VPP Tutorial
 ########################
 Learn to run FD.io VPP on a single Ubuntu 16.04 VM using Vagrant with this walkthrough
-covering basic FD.io VPP senarios. Useful FD.io VPP commands will be used, and
+covering basic FD.io VPP scenarios. Useful FD.io VPP commands will be used, and
 will discuss basic operations, and the state of a running FD.io VPP on a system.
 .. note::


@@ -9,7 +9,7 @@ Skills to be Learned
 ----------------------
 #. Associate an interface with a bridge domain
-#. Create a loopback interaface
+#. Create a loopback interface
 #. Create a BVI (Bridge Virtual Interface) for a bridge domain
 #. Examine a bridge domain
@@ -46,7 +46,7 @@ To clear existing config from previous exercises run:
 $ ps -ef | grep vpp | awk '{print $2}'| xargs sudo kill
 $ sudo ip link del dev vpp1host
-$ # do the next command if you are cleaing up from this example
+$ # do the next command if you are cleaning up from this example
 $ sudo ip link del dev vpp1vpp2
 Run FD.io VPP instances


@@ -83,7 +83,7 @@ socket **/run/vpp/cli-vpp1.sock**
 vpp# create interface memif id 0 master
 This will create an interface on vpp1 memif0/0 using /run/vpp/memif as
-its socket file. The role of vpp1 for this memif inteface is 'master'.
+its socket file. The role of vpp1 for this memif interface is 'master'.
 With what you have learned:
@@ -104,7 +104,7 @@ run/vpp/memif-vpp1vpp2 socket file
 vpp# create interface memif id 0 slave
 This will create an interface on vpp2 memif0/0 using /run/vpp/memif as
-its socket file. The role of vpp1 for this memif inteface is 'slave'.
+its socket file. The role of vpp1 for this memif interface is 'slave'.
 Use your previously used skills to:


@@ -24,7 +24,7 @@ the hugepage settings, perform the following commands:
 # All groups allowed to access hugepages
 vm.hugetlb_shm_group=0
-# Shared Memory Max must be greator or equal to the total size of hugepages.
+# Shared Memory Max must be greater or equal to the total size of hugepages.
 # For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
 # If the existing kernel.shmmax setting (cat /sys/proc/kernel/shmmax)
 # is greater than the calculated TotalHugepageSize then set this parameter
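To make the calculation above concrete (the page count is an illustrative assumption): with vm.nr_hugepages = 1024 and 2 MB pages, TotalHugepageSize = 1024 * 2 * 1024 * 1024 = 2147483648 bytes, so if the existing kernel.shmmax is smaller it would be raised to at least:

    kernel.shmmax=2147483648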


@@ -244,7 +244,7 @@ attributes.
 **Example:** cli-prompt vpp-2
 * **cli-history-limit <n>**
-Limit commmand history to <n> lines. A value of 0 disables command history.
+Limit command history to <n> lines. A value of 0 disables command history.
 Default value: 50
 **Example:** cli-history-limit 100
@@ -336,7 +336,7 @@ Popular options include:
 for all NICs except VICs, using ENIC driver, which has VLAN stripping on
 by default.
 * **hqos**
-Enable the Hierarchical Quaity-of-Service (HQoS) scheduler, default is
+Enable the Hierarchical Quality-of-Service (HQoS) scheduler, default is
 disabled. This enables HQoS on specific output interface.
 * **hqos { .. }**
 HQoS can also have its own set of custom parameters. Setting a custom
@@ -413,7 +413,7 @@ Popular options include:
 **Example:** enable-tcp-udp-checksum
 * **no-multi-seg**
-Disable mutli-segment buffers, improves performance but disables Jumbo MTU
+Disable multi-segment buffers, improves performance but disables Jumbo MTU
 support.
 **Example:** no-multi-seg
@@ -435,7 +435,7 @@ Popular options include:
 **Example:** log-level error
 * **dev default { .. }**
-Change default settings for all intefaces. This sections supports the
+Change default settings for all interfaces. This sections supports the
 same set of custom parameters described in *'dev <pci-dev> { .. }*'.
 **Example:**
@@ -1216,8 +1216,8 @@ ____________________
 A plugin can be disabled by default. It may still be in an experimental phase
 or only be needed in special circumstances. If this is the case, the plugin can
-be explicitely enabled in *'startup.conf'*. Also, a plugin that is enabled by
-default can be explicitely disabled in *'startup.conf'*.
+be explicitly enabled in *'startup.conf'*. Also, a plugin that is enabled by
+default can be explicitly disabled in *'startup.conf'*.
 Another useful use of this section is to disable all the plugins, then enable
 only the plugins that are desired.
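As a sketch of the pattern described above, a plugins stanza in startup.conf might look like the following; the plugin names are illustrative and any plugin shipped with the build can be listed the same way:

    plugins {
        ## disable every plugin, then re-enable only what is needed
        plugin default { disable }
        plugin dpdk_plugin.so { enable }
        plugin acl_plugin.so { enable }
    }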


@@ -11,15 +11,15 @@ Threads
 -------
 It usually is not needed, but VPP can be configured to run on isolated CPUs. In the example shown
-VPP is configured with 2 workers. The main thread is also configured to run on a seperate CPU. The
-stats thread will always run on CPU 0. This utilty will put the worker threads on CPUs that are
+VPP is configured with 2 workers. The main thread is also configured to run on a separate CPU. The
+stats thread will always run on CPU 0. This utility will put the worker threads on CPUs that are
 associated with the ports that are configured.
 Grub Command Line
 -----------------
 In general the Grub command line does not need to be changed. If the system is running many processes
-it may be neccessary to isolate CPUs for VPP or other processes.
+it may be necessary to isolate CPUs for VPP or other processes.
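A sketch of the configuration being described, with core numbers chosen purely for illustration: the main thread and two workers can be pinned via the cpu stanza in startup.conf, and the same cores isolated from the kernel scheduler on the GRUB command line.

    # startup.conf
    cpu {
        main-core 1
        corelist-workers 2-3
    }

    # /etc/default/grub (illustrative isolation of cores 1-3)
    GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=1,2,3 nohz_full=1,2,3 rcu_nocbs=1,2,3"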
 Huge Pages
 ----------


@@ -10,7 +10,7 @@ the Sphinx Markup Constructs used in these documents. The Sphinx style guide can
 For a more detailed list of Sphinx Markup Constructs please refer to:
 `Sphinx Markup Constructs <http://www.sphinx-doc.org/en/stable/markup/index.html>`_
-This document is also an example of a directory structure for a document that spans mutliple pages.
+This document is also an example of a directory structure for a document that spans multiple pages.
 Notice we have the file **index.rst** and the then documents that are referenced in index.rst. The
 referenced documents are shown at the bottom of this page.


@@ -16,7 +16,7 @@ Bold text can be show with **Bold Text**, Italics with *Italic text*. Bullets li
 Notes
 *****
-A note can be used to describe something not in the normal flow of the paragragh. This
+A note can be used to describe something not in the normal flow of the paragraph. This
 is an example of a note.
 .. note::
@@ -28,7 +28,7 @@ is an example of a note.
 Code Blocks
 ***********
-This paragraph describes how to do **Console Commands**. When showing VPP commands it is reccomended
+This paragraph describes how to do **Console Commands**. When showing VPP commands it is recommended
 that the command be executed from the linux console as shown. The Highlighting in the final documents
 shows up nicely this way.
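For reference, minimal reStructuredText for the two constructs discussed in this file (a note and a console code block); the content lines are placeholders:

    .. note::

        This text is rendered in a highlighted note box.

    .. code-block:: console

        $ sudo vppctl show version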


@@ -4,7 +4,7 @@
 Including a file
 ****************
-A complete file should be included with the following construct. It is recomended it be included with
+A complete file should be included with the following construct. It is recommended it be included with
 it's own .rst file describing the file included. This is an example of an xml file is included.
 .. toctree::


@@ -11,7 +11,7 @@ Code Blocks
 ===========
 This paragraph describes how to do **Console Commands**. When showing
-VPP commands it is reccomended that the command be executed from the
+VPP commands it is recommended that the command be executed from the
 linux console as shown. The Highlighting in the final documents shows up
 nicely this way.


@@ -12,7 +12,7 @@ MAC Layer
 Discovery
 ---------
-* Cisco Discovery Protocol
+* Cisco Discovery Protocol v2 (CDP)
 * Link Layer Discovery Protocol (LLDP)
 Link Layer Control Protocol


@@ -27,7 +27,7 @@ These features have been designed to take full advantage of common micro-process
 * Reducing cache and TLS misses by processing packets in vectors
 * Realizing `IPC <https://en.wikipedia.org/wiki/Instructions_per_cycle>`_ gains with vector instructions such as: SSE, AVX and NEON
 * Eliminating mode switching, context switches and blocking, to always be doing useful work
-* Cache-lined aliged buffers for cache and memory efficiency
+* Cache-lined aligned buffers for cache and memory efficiency
 Packet Throughput Graphs


@@ -16,10 +16,10 @@ This section identifies different components of packet processing and describes
 * Wide support for standard Operating System Interfaces such as AF_Packet, Tun/Tap & Netmap.
-* Wide network and cryptograhic hardware support with `DPDK <https://www.dpdk.org/>`_.
+* Wide network and cryptographic hardware support with `DPDK <https://www.dpdk.org/>`_.
 * Container and Virtualization support
-* Para-virtualized intefaces; Vhost and Virtio
+* Para-virtualized interfaces; Vhost and Virtio
 * Network Adapters over PCI passthrough
 * Native container interfaces; MemIF


@@ -14,7 +14,7 @@ This section describes the different ways VPP is friendly to developers:
 * Runs as a standard user-space process for fault tolerance, software crashes seldom require more than a process restart.
 * Improved fault-tolerance and upgradability when compared to running similar packet processing in the kernel, software updates never require system reboots.
-* Development expierence is easier compared to similar kernel code
+* Development experience is easier compared to similar kernel code
 * Hardware isolation and protection (`iommu <https://en.wikipedia.org/wiki/Input%E2%80%93output_memory_management_unit>`_)
 * Built for security

Some files were not shown because too many files have changed in this diff.