flowprobe: fix tx flows generated for rewritten traffic
Currently, when IPFIX record generation is enabled on an interface in
the TX direction, rewritten traffic is sent from that interface, and
the Ethernet header's location has changed due to the rewrite, the
generated TX flows contain fields with wrong or zero values. For
example, this can be observed when traffic is rewritten from a
subinterface to a hardware interface (i.e. when tags are removed). A TX
flow generated in this case has wrong L2 fields because of an
incorrectly located Ethernet header, and zero L3/L4 fields because the
Ethernet type matches neither IP4 nor IP6.

The same code is executed to generate flows for both input and output
features, and the same mechanism is applied to identify the Ethernet
header in the buffer's data. However, code shared between directions
usually needs to treat the buffer's data conditionally based on the
direction: for most input features, the buffer's current_data will
likely point to the IP header, while for most output features it will
likely point to the Ethernet header.

With this fix:
 - Keep relying on ethernet_buffer_get_header() to locate the Ethernet
   header for input features, and start using vlib_buffer_get_current()
   to locate it for output features. The latter accounts for the
   Ethernet header's position change in the buffer's data when the
   packet has been rewritten.

 - After fixing Ethernet header determination in the buffer's data,
   L3/L4 fields will contain non-zero but still incorrect data, because
   IP header determination needs to be fixed too. It currently relies
   on the Ethernet header always being located at the beginning of the
   buffer's data, so that l2_hdr_sz can be used as the IP header
   offset. This may not hold after rewriting, so start calculating the
   actual offset of the IP header in the buffer's data from the located
   Ethernet header (see the sketch after this list).

 - Add a unit test to cover the case.
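
To make the intended logic concrete, here is a minimal standalone
sketch of the approach. It is an illustration, not the actual VPP code:
buffer_t, eth_header_t, DIR_RX/DIR_TX, and l2_hdr_offset are simplified
stand-ins for the vlib_buffer_t fields and helpers the real flowprobe
code uses, and the ethertype is kept in host byte order for brevity.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define ETH_TYPE_IP4  0x0800
    #define ETH_TYPE_VLAN 0x8100

    typedef enum { DIR_RX, DIR_TX } direction_t;

    typedef struct {
      uint8_t data[256];
      int16_t current_data;  /* moved back onto the new L2 header on rewrite */
      int16_t l2_hdr_offset; /* where the Ethernet header was seen on input */
    } buffer_t;

    typedef struct {
      uint8_t dst[6], src[6];
      uint16_t type; /* host byte order here; network order on the wire */
    } eth_header_t;

    /* Locate the Ethernet header conditionally on direction. */
    static eth_header_t *
    locate_eth_header (buffer_t *b, direction_t dir)
    {
      if (dir == DIR_TX)
        /* Output features: rewrite may have moved current_data back
           onto a new Ethernet header, so trust current_data. */
        return (eth_header_t *) (b->data + b->current_data);
      /* Input features: use the recorded L2 header offset. */
      return (eth_header_t *) (b->data + b->l2_hdr_offset);
    }

    /* Derive the IP header offset from the located Ethernet header,
       skipping any 802.1Q tags, instead of assuming the Ethernet
       header sits at the start of the buffer's data. */
    static int16_t
    ip_header_offset (buffer_t *b, eth_header_t *eth)
    {
      uint8_t *l3 = (uint8_t *) (eth + 1);
      uint16_t type = eth->type;

      while (type == ETH_TYPE_VLAN)
        {
          memcpy (&type, l3 + 2, sizeof (type)); /* inner ethertype */
          l3 += 4;                               /* each tag is 4 bytes */
        }
      return (int16_t) (l3 - b->data);
    }

    int
    main (void)
    {
      buffer_t b = { 0 };
      eth_header_t eth = { .type = ETH_TYPE_IP4 };

      /* Simulate a TX rewrite: a new untagged Ethernet header written
         at offset 28, with current_data moved back onto it. */
      b.current_data = 28;
      memcpy (b.data + b.current_data, &eth, sizeof (eth));

      eth_header_t *e = locate_eth_header (&b, DIR_TX);
      printf ("IP header offset: %d\n", ip_header_offset (&b, e)); /* 42 */
      return 0;
    }

In the real code, the TX branch corresponds to vlib_buffer_get_current()
and the RX branch to ethernet_buffer_get_header(); the point is that
the IP header offset is measured from wherever the Ethernet header
actually is, not from the start of the buffer's data.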

Type: fix
Change-Id: Icf3f9e6518912d06dff0d5aa48e103b3dc94edb7
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>

Vector Packet Processing

Introduction

The VPP platform is an extensible framework that provides out-of-the-box production quality switch/router functionality. It is the open source version of Cisco's Vector Packet Processing (VPP) technology: a high performance, packet-processing stack that can run on commodity CPUs.

The benefits of this implementation of VPP are its high performance, proven technology, modularity and flexibility, and rich feature set.

For more information on VPP and its features please visit the FD.io website and What is VPP? pages.

Changes

Details of the changes leading up to this version of VPP can be found under doc/releasenotes.

Directory layout

Directory name Description
build-data Build metadata
build-root Build output directory
docs Sphinx Documentation
dpdk DPDK patches and build infrastructure
extras/libmemif Client library for memif
src/examples VPP example code
src/plugins VPP bundled plugins directory
src/svm Shared virtual memory allocation library
src/tests Standalone tests (not part of test harness)
src/vat VPP API test program
src/vlib VPP application library
src/vlibapi VPP API library
src/vlibmemory VPP Memory management
src/vnet VPP networking
src/vpp VPP application
src/vpp-api VPP application API bindings
src/vppinfra VPP core library
src/vpp/api Not-yet-relocated API bindings
test Unit tests and Python test harness

Getting started

In general, anyone interested in building, developing, or running VPP should consult the VPP wiki for more complete documentation.

In particular, readers are recommended to take a look at [Pulling, Building, Running, Hacking, Pushing](https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code) which provides extensive step-by-step coverage of the topic.

For the impatient, some salient information is distilled below.

Quick-start: On an existing Linux host

To install system dependencies, build VPP, and then install it, simply run the build script. This should be performed by a non-privileged user with sudo access from the project base directory:

./extras/vagrant/build.sh

If you want a more fine-grained approach because you intend to do some development work, the Makefile in the root directory of the source tree provides several convenience shortcuts as make targets that may be of interest. To see the available targets run:

make
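
For example, a typical first-time development workflow looks like the following (target names as found in the root Makefile; verify them against the output of the bare make command above, since they can change between releases):

make install-dep
make build
make run

Here install-dep installs the system build dependencies, build produces a debug build, and run starts the resulting debug image.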

Quick-start: Vagrant

The directory extras/vagrant contains a Vagrantfile and supporting scripts to bootstrap a working VPP inside a Vagrant-managed virtual machine. This VM can then be used to test concepts with VPP or as a development platform to extend VPP. Some obvious caveats apply when using a VM for VPP since its performance will never match that of bare metal; if your work is timing- or performance-sensitive, consider using bare metal in addition to, or instead of, the VM.

For this to work you will need a working installation of Vagrant. Instructions for this can be found [on the Setting up Vagrant wiki page](https://wiki.fd.io/view/DEV/Setting_Up_Vagrant).
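
Once Vagrant is installed, bringing up and entering the VM is typically just the following (a sketch assuming the default provider and box; see the wiki page above for provider-specific options):

cd extras/vagrant
vagrant up
vagrant ssh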

More information

Several modules provide documentation; see @subpage user_doc for more end-user-oriented information and @subpage dev_doc for developer notes.

Visit the VPP wiki for details on more advanced building strategies and other development notes.
