The _v3 message was not handling endianness on flags (e.g. mode).
Mark _v3 as deprecated, but keep it, as there might be users who
learned to preprocess their flag values.
Also, format PCI product_name as a vector, not a string.
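For illustration, a minimal sketch of the kind of conversion the _v3
handler was missing; example_msg_t and its mode field are illustrative
placeholders, not the actual API message definition:

  /* Sketch: flag fields in API messages arrive in network byte order
   * and must be converted before being interpreted as host-order
   * values. Illustrative types and names only. */
  #include <vppinfra/byte_order.h>

  typedef struct
  {
    u32 mode; /* flags, network byte order on the wire */
  } example_msg_t;

  static u32
  example_get_mode (example_msg_t *mp)
  {
    /* _v3 used mp->mode directly; the fixed handler converts first */
    return clib_net_to_host_u32 (mp->mode);
  }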
Type: fix
Change-Id: I50c4b44f3570f02518dbd9a43239c1a37612d24a
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
Add a more generic version of clib_sysfs_link_to_name with support for
format strings.
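A hypothetical sketch of what a format-string variant could look like,
using only standard C; the helper name and signature below are
assumptions for illustration, not the function actually added:

  /* Hypothetical sketch of a printf-style sysfs link resolver: build
   * the link path from a format string, then return the basename of
   * its target. Name and signature are illustrative only. */
  #include <libgen.h>
  #include <limits.h>
  #include <stdarg.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  static int
  sysfs_link_basename (char *out, size_t out_len, const char *fmt, ...)
  {
    char link[PATH_MAX], target[PATH_MAX];
    va_list va;
    ssize_t n;

    va_start (va, fmt);
    vsnprintf (link, sizeof (link), fmt, va);
    va_end (va);

    n = readlink (link, target, sizeof (target) - 1);
    if (n < 0)
      return -1;
    target[n] = 0;
    snprintf (out, out_len, "%s", basename (target));
    return 0;
  }

  /* usage: sysfs_link_basename (buf, sizeof (buf),
   *          "/sys/bus/pci/devices/%s/driver", pci_addr); */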
Type: improvement
Change-Id: I0cb263748970378c661415196eb7e08450370677
Signed-off-by: Damjan Marion <damarion@cisco.com>
format_hexdump currently requires the length parameter to be uword
(64-bit), hence all callers must make sure to cast the length to uword.
Use u32 instead to benefit from C integer promotion: any length of a
type smaller than or equal to u32 can then be passed without a cast.
Only callers using a u64 length need to downcast.
It also makes format_hexdump similar to the other variants.
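For context, format helpers receive their arguments through varargs, so
the declared width must match what default argument promotion produces.
A plain-C sketch of that point (not the VPP code itself):

  /* Sketch: variadic arguments only undergo default promotion
   * (u8/u16 -> int). Reading a 64-bit value with va_arg when the
   * caller passed a 32-bit int is undefined, while reading a u32
   * matches the promoted value, so no cast is needed. */
  #include <stdarg.h>
  #include <stdint.h>
  #include <stdio.h>

  static void
  hexdump_u32 (const uint8_t *data, ...)
  {
    va_list va;
    va_start (va, data);
    uint32_t len = va_arg (va, uint32_t); /* matches promoted u8/u16/u32 */
    va_end (va);
    for (uint32_t i = 0; i < len; i++)
      printf ("%02x ", data[i]);
    printf ("\n");
  }

  int
  main (void)
  {
    uint8_t buf[4] = { 1, 2, 3, 4 };
    uint16_t len = 4;
    hexdump_u32 (buf, len); /* no cast needed: len is promoted to int */
    return 0;
  }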
Type: fix
Change-Id: I09b52fdde3970cec0be4150a29126ff63106c75b
Signed-off-by: Benoît Ganne <bganne@cisco.com>
This change is part of the VPP API cleanup initiative.
Type: refactor
Signed-off-by: Ondrej Fabry <ofabry@cisco.com>
Change-Id: I9f0f786b50aa77383b16e0f844c85f236f7aa8d0
Set the mask used to calculate the next CQE index based on the
corresponding CQ size instead of the rxq size.
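A minimal sketch of the indexing involved, assuming power-of-two ring
sizes (names are illustrative):

  /* Sketch: with power-of-two rings, the next CQE index is computed by
   * masking with (size - 1). The mask must be derived from the CQ
   * size, not the rxq size, since the two rings may differ in size. */
  static inline unsigned int
  next_cqe_index (unsigned int cur, unsigned int cq_size)
  {
    return (cur + 1) & (cq_size - 1);
  }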
Type: fix
Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I67494f029967af64051f51452eba1fd699984cd9
Previously we encountered the issue of failing to create completion
queues on some Arm platforms, because DPDK may set MLX5_CQE_SIZE to 128
when the DPDK MLX PMDs are built and the DPDK plugin is loaded, which
does not satisfy the RDMA plugin's requirement of 64-byte CQEs.
We fixed this issue in 844a0e8b0 ("always use 64 byte CQEs for MLX5"),
but some CSIT test cases failed due to that code change. It turns out
that we do not need to specify compressed CQE mode for the txq CQ,
because RDMA tx does not have the logic to handle compressed CQEs, which
might cause unexpected behavior if it is enabled.
Type: fix
Fixes: 844a0e8b0 ("always use 64 byte CQEs for MLX5")
Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I7909a6d44b15bcf39c15dfac9377b65520a0cbfb
When DPDK MLX PMDs are built, and the DPDK plugin is loaded, DPDK may
set the MLX5_CQE_SIZE environment variable to 128. This causes the RDMA
plugin to be unable to create completion queues. Since the RDMA plugin
expects the CQEs to be 64 bytes, set the cqe_size explicitly when
creating the CQ. This avoids any issues with different values for the
MLX5_CQE_SIZE environment variable.
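A hedged sketch of setting the CQE size explicitly through the mlx5
direct-verbs CQ attributes; error handling and most attributes are
omitted, and the plugin's real setup differs in details:

  /* Sketch: request 64-byte CQEs explicitly via mlx5dv attributes so
   * the MLX5_CQE_SIZE environment variable no longer influences the
   * CQ layout. Simplified, illustrative code. */
  #include <infiniband/mlx5dv.h>

  static struct ibv_cq_ex *
  create_cq_64b (struct ibv_context *ctx, int n_cqes)
  {
    struct ibv_cq_init_attr_ex cq_attr = {
      .cqe = n_cqes,
      .wc_flags = IBV_WC_STANDARD_FLAGS,
    };
    struct mlx5dv_cq_init_attr dv_attr = {
      .comp_mask = MLX5DV_CQ_INIT_ATTR_MASK_CQE_SIZE,
      .cqe_size = 64, /* the RDMA plugin only handles 64-byte CQEs */
    };
    return mlx5dv_create_cq (ctx, &cq_attr, &dv_attr);
  }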
Type: improvement
Signed-off-by: Nathan Brown <nathan.brown@arm.com>
Change-Id: Idfd078d3045a4dcb674325ef36f85a89df6fbebc
When n_rx_packets is less than 16 (VEC256) or 8 (VEC128), code execution
falls into the scalar path of packet processing. But with n_left wrongly
initialized to zero, i in the for loop starts out equal to n_rx_packets.
This bypasses the required ip4 checksum validation and byte-count
endianness conversion in the scalar path.
Also, refactor the code to use a while loop instead of a for loop, to
keep consistency with VPP code style.
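A generic sketch of the vector-plus-scalar-tail pattern the fix
restores (variable names are illustrative): n_left must start at the
remaining packet count, otherwise the scalar tail never runs.

  /* Sketch: process packets 8 at a time, then finish the remainder in
   * a scalar while loop. If n_left started at 0 instead of
   * n_rx_packets, the tail loop would be skipped and the per-packet
   * checks with it. */
  static void
  process_packets (unsigned int *byte_cnt, unsigned int n_rx_packets)
  {
    unsigned int n_left = n_rx_packets;

    while (n_left >= 8)
      {
        /* vector path: handle 8 completions at once (omitted) */
        byte_cnt += 8;
        n_left -= 8;
      }

    while (n_left)
      {
        /* scalar path: checksum validation and endianness conversion
         * happen here for the remaining packets */
        byte_cnt += 1;
        n_left -= 1;
      }
  }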
Type: fix
Fixes: bf93670c51 ("rdma: fix ipv4 checksum check in rdma-input node")
Signed-off-by: Lijian Zhang <lijian.zhang@arm.com>
Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: Ib4e8cb5202735f8b060c99caddf26035657551e1
CQE flags, located in bits 16-31 at offset 0x1c, should be defined as
actual bit values instead of bit indexes. Also, the L3 header type for
IPv4 is 0b10 (2 in decimal) and for IPv6 is 0b01 (1 in decimal),
according to the CQE entry fields description on page 120 of the
Mellanox Programmer Reference Manual
(https://network.nvidia.com/files/doc-2020/ethernet-adapters-programming-manual.pdf).
Fixing this issue leads to correct CQE flag printing for the rdma-input
node when buffer tracing is enabled.
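An illustrative sketch of the difference between bit indexes and actual
values; the macro names and shift amount below are assumptions, not the
plugin's exact definitions:

  /* Sketch: the L3 header type occupies 2 bits inside the 16-bit CQE
   * flags field; defining the IPv4/IPv6 codes as shifted values (not
   * bit indexes) lets them be compared directly against the flags
   * word. Names and shift amounts are illustrative. */
  #define EX_CQE_FLAG_L3_HDR_TYPE_SHIFT 2
  #define EX_CQE_FLAG_L3_HDR_TYPE_MASK  (3 << EX_CQE_FLAG_L3_HDR_TYPE_SHIFT)
  #define EX_CQE_FLAG_L3_HDR_TYPE_IP6   (1 << EX_CQE_FLAG_L3_HDR_TYPE_SHIFT)
  #define EX_CQE_FLAG_L3_HDR_TYPE_IP4   (2 << EX_CQE_FLAG_L3_HDR_TYPE_SHIFT)

  static inline int
  cqe_is_ip4 (unsigned int cqe_flags)
  {
    return (cqe_flags & EX_CQE_FLAG_L3_HDR_TYPE_MASK) ==
           EX_CQE_FLAG_L3_HDR_TYPE_IP4;
  }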
Type: fix
Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I9b578ca5cbd8cd93a577aa83131e31c79f60430e
- the cqe_flags pointer should be incremented accordingly, otherwise only
  the first element in cqe_flags is updated
- the l3_ok flag should be set in the match variable when verifying that
  packets are IPv4 packets with l3_ok set
- the mask/match variables should be converted to network byte order to
  match the endianness of cqe_flags (see the sketch after this list)
- vector processing of the CQE flag check wrongly sets the return value
  to 0xFFFF if the packet count is not a multiple of 16 (VEC256) or
  8 (VEC128)
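A minimal sketch of the byte-order point from the list above;
clib_host_to_net_u16 is the standard vppinfra conversion helper, the
surrounding names and values are illustrative:

  /* Sketch: cqe_flags is stored in network byte order, so host-order
   * mask/match constants must be converted before the comparison.
   * Illustrative helper; the real masks cover the l3_ok and L3 header
   * type bits. */
  #include <vppinfra/byte_order.h>

  static inline int
  cqe_flags_match (u16 cqe_flags, u16 host_mask, u16 host_match)
  {
    u16 mask = clib_host_to_net_u16 (host_mask);
    u16 match = clib_host_to_net_u16 (host_match);
    return (cqe_flags & mask) == match;
  }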
Type: fix
Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I9fec09e449fdffbb0ace8e5a6ccfeb6869b5cac1
flags is u64; make sure we do not overflow when shifting.
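A minimal plain-C illustration of the overflow being guarded against
(not the plugin's code): shifting the int constant 1 by 32 or more is
undefined, so the shifted operand must be 64-bit.

  /* Sketch: 'flags' is u64, so single-bit masks must be built from a
   * 64-bit constant; (1 << bit) is undefined once bit >= 32. */
  #include <stdint.h>

  static inline uint64_t
  flag_bit (unsigned int bit)
  {
    return 1ULL << bit; /* not: 1 << bit */
  }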
Type: fix
Change-Id: Ieea34187c0b568dc4d24c9415b9cff36907a5a87
Signed-off-by: Benoît Ganne <bganne@cisco.com>
- fix branch prediction when checking the rdma ERROR flag (see the
  sketch below)
- add the missing right angle bracket to the help message
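A minimal sketch of the branch-prediction point using VPP's
PREDICT_FALSE hint; the surrounding names are illustrative, not the
plugin's exact code:

  /* Sketch: error completions are exceptional, so the check is wrapped
   * in PREDICT_FALSE to keep the common path straight-line. */
  #include <vppinfra/clib.h>

  static inline int
  handle_cqe (unsigned int opcode, unsigned int error_opcode)
  {
    if (PREDICT_FALSE (opcode == error_opcode))
      return -1; /* take the slow error path */
    return 0;
  }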
Type: improvement
Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
Reviewed-by: Lijian Zhang <lijian.zhang@arm.com>
Reviewed-by: Tianyu Li <tianyu.li@arm.com>
Change-Id: I2ce667631b3e3f60939069e2a16ddba0ff12a695
- per hw-interface-class handlers
- ethernet set_mtu callback
- driver can now refuse an MTU change (see the sketch below)
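A hypothetical sketch of a driver-side MTU handler that can refuse a
change; the callback signature and error handling shown are assumptions
for illustration, not the interface infra's actual prototype:

  /* Hypothetical sketch: a driver MTU handler that rejects
   * out-of-range values by returning an error, which the infra treats
   * as a refusal. Types and names are illustrative. */
  typedef struct
  {
    unsigned int max_supported_mtu;
  } example_dev_t;

  static const char *
  example_set_mtu (example_dev_t *dev, unsigned int mtu)
  {
    if (mtu > dev->max_supported_mtu)
      return "unsupported MTU"; /* refuse the change */
    /* program the NIC with the new MTU here */
    return 0;
  }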
Type: improvement
Change-Id: I3d37c9129930ebec7bb70caf4263025413873048
Signed-off-by: Damjan Marion <damarion@cisco.com>
Make it shorter to type, easier to debug, and make adding callbacks in
the future simpler.
Type: improvement
Change-Id: I6cdd6375e36da23bd452a7c7273ff42789e94433
Signed-off-by: Damjan Marion <damarion@cisco.com>
REPLY_MSG_ID_BASE is the standard way to define the reply message id
base, so this refactor makes all the files use it. This is a preparation
patch for future safety add-ons which rely on REPLY_MACRO* parameters
being preprocessor tokens identifying the message.
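The usual pattern in plugin API code looks roughly like this (a sketch;
the main-struct name is illustrative): the macro must expand to the
plugin's message id base before the helper macros are included.

  /* Sketch of the standard pattern: define REPLY_MSG_ID_BASE as a
   * preprocessor token resolving to the plugin's message id base, then
   * include the helper macros that REPLY_MACRO* expand against.
   * 'example_main' is an illustrative plugin main struct. */
  #define REPLY_MSG_ID_BASE example_main.msg_id_base
  #include <vlibapi/api_helper_macros.h>

  /* handlers can then simply do:
   *   REPLY_MACRO (VL_API_EXAMPLE_ENABLE_DISABLE_REPLY);
   * and the reply id is offset by the plugin's base automatically. */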
Type: refactor
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: Ibe3e056a3d9326d08af45bbcb25588b11e870141
Remove aggressive inlining outside of the main loop to improve build
time (from 146s to 22s).
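For context, a sketch of the kind of change involved (not the actual
diff): helpers reached only outside the main loop drop the
always-inline attribute.

  /* Sketch: keep always-inline for main-loop helpers only; slow-path
   * helpers become ordinary static functions so the compiler no longer
   * expands their bodies at every call site. Illustrative code. */
  #include <vppinfra/clib.h>

  /* before: static_always_inline void slow_path_helper (void) */
  static void
  slow_path_helper (void)
  {
    /* rarely executed setup / error handling */
  }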
Type: refactor
Change-Id: I3824516a85b5e8d02894e66f19d891569c1a68fb
Signed-off-by: Benoît Ganne <bganne@cisco.com>
When switching to the direct verb chained buffer tx path, we must
account for all remaining packets, including the packets that would have
wrapped around.
Previously we were using the 'n' counter but ignoring the 'n_wrap'
counter: if some packets would have wrapped around in the default path,
they were ignored by the chained buffer tx path.
Compute the correct number of remaining packets based on the old and
current txq tail instead.
Also simplify the chained tx function parameters.
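A sketch of the accounting point (names and types are illustrative):
the number of packets still to send is derived from how far the tail
actually advanced, which covers both the contiguous and the wrapped
portion.

  /* Sketch: instead of passing 'n' (which ignores the wrapped part),
   * derive the remaining packet count from the tail progress made so
   * far. Illustrative code only. */
  #include <stdint.h>

  static inline uint32_t
  remaining_packets (uint32_t n_total, uint16_t old_tail,
                     uint16_t cur_tail)
  {
    uint32_t n_sent = (uint16_t) (cur_tail - old_tail); /* wrap-safe */
    return n_total - n_sent;
  }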
Type: fix
Change-Id: If12b41a8f143fda80290342e2904792f7501c559
Signed-off-by: Benoît Ganne <bganne@cisco.com>
The memory region is already registered right above; this looks like a
copy/paste error.
Type: fix
Change-Id: I97aed821e719e1a34ac38c86d0473a8fdd671d4e
Signed-off-by: Benoît Ganne <bganne@cisco.com>
The current rdma-input L3 validation behavior for the scalar path is:
if any packet's L3_OK flag matches, then unset skip_ip4_cksum.
The correct behavior is: if any packet's L3_OK flag does NOT match,
then unset skip_ip4_cksum. The logic also differs from
the vector path. This patch fixes the wrong behavior in the scalar path.
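A minimal sketch of the corrected scalar check (flag value and names
are illustrative): the checksum can only be skipped when every packet
has l3_ok set, so a single miss must clear the skip flag.

  /* Sketch: assume the checksum can be skipped, then clear the flag as
   * soon as one packet is missing l3_ok. Illustrative code only. */
  #define EX_CQE_FLAG_L3_OK 0x0200 /* illustrative bit value */

  static inline int
  can_skip_ip4_cksum (const unsigned short *cqe_flags, unsigned int n)
  {
    int skip_ip4_cksum = 1;
    for (unsigned int i = 0; i < n; i++)
      if ((cqe_flags[i] & EX_CQE_FLAG_L3_OK) == 0)
        skip_ip4_cksum = 0; /* any miss disables the skip */
    return skip_ip4_cksum;
  }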
Type: fix
Signed-off-by: Tianyu Li <tianyu.li@arm.com>
Change-Id: I5ca5ed3aa0c07d441f3c87b33f03ea8f7a3c9826
Type: improvement
This patch adds flags to represent modern NIC capabilities.
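A hypothetical sketch of what such capability flags look like as a
bitmask enum; the flag names below are illustrative, not the ones added
by the patch:

  /* Hypothetical sketch: capabilities expressed as single-bit flags so
   * a driver can advertise any combination. Illustrative names only. */
  typedef enum
  {
    EX_HW_IF_CAP_TX_CKSUM = (1 << 0),
    EX_HW_IF_CAP_TCP_GSO = (1 << 1),
    EX_HW_IF_CAP_RX_FLOW_OFFLOAD = (1 << 2),
  } example_hw_if_caps_t;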
Change-Id: I96d38d9ab7eac55974d72795cd100d8337168e1e
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
When using the classifier to filter traces, not all packets will be
traced. In that case, we should only count traced packets.
Type: fix
Change-Id: I87d1e217b580ebff8c6ade7860eb43950420ae78
Signed-off-by: Benoît Ganne <bganne@cisco.com>
/vpp/src/plugins/rdma/rdma.h:203:17: error: field 'buffer_template' with variable sized type 'vlib_buffer_t' not at the end of a struct or class is a GNU extension [-Werror,-Wgnu-variable-sized-type-not-at-end]
vlib_buffer_t buffer_template;
Type: fix
Change-Id: I4661839f262e01fe274a2ee7b3cb70f9bc6b7c62
Signed-off-by: Damjan Marion <damarion@cisco.com>
This change leverages the striding RQ feature of
ConnectX-5 adapters to support chained buffers on
the RX path. In striding RQ mode, WQEs are SG lists
of data segments, each mapped to a vlib_buffer.
When a packet is received, it can consume one or
multiple data segments belonging to the WQE,
without wasting the whole WQE.
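A hedged sketch of the mlx5 direct-verbs attributes involved in
enabling a striding RQ; values, sizing, and error handling are
illustrative, and the plugin's actual setup differs in details:

  /* Sketch: a striding RQ is requested through mlx5dv WQ init
   * attributes; each WQE then describes fixed-size strides that
   * received packets consume as needed. Illustrative values only. */
  #include <infiniband/mlx5dv.h>

  static struct ibv_wq *
  create_striding_rq (struct ibv_context *ctx, struct ibv_pd *pd,
                      struct ibv_cq *cq)
  {
    struct ibv_wq_init_attr wq_attr = {
      .wq_type = IBV_WQT_RQ,
      .max_wr = 4096,
      .max_sge = 1,
      .pd = pd,
      .cq = cq,
    };
    struct mlx5dv_wq_init_attr dv_attr = {
      .comp_mask = MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ,
      .striding_rq_attrs = {
        .single_stride_log_num_of_bytes = 11, /* 2048-byte strides */
        .single_wqe_log_num_of_strides = 4,   /* 16 strides per WQE */
        .two_byte_shift_en = 0,
      },
    };
    return mlx5dv_create_wq (ctx, &wq_attr, &dv_attr);
  }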
Change-Id: I74eba5b2c2c66538e75e046335058ba011cb27fd
Type: improvement
Signed-off-by: Mohammed Hawari <mohammed@hawari.fr>
Fix and optimize the DMAC check in the ethernet-input node to utilize
NICs or drivers which support L3 DMAC-filtering mode, so that the DMAC
check can be bypassed safely for interfaces/sub-interfaces in L3 mode.
Checking whether an interface is in L3-DMAC-filtering state to avoid the
DMAC check requires the following:
a) Fix the interface driver init sequence for devices which support L3
   DMAC-filtering to indicate the capability and initialize the
   interface to L3 DMAC-filtering state.
b) Fix the ethernet_set_flags() function and the associated
   flags_change() callback functions registered by various drivers in
   the interface infra to provide proper L3 DMAC-filtering status.
Maintain an interface/sub-interface L3 config count so DMAC checks can
be bypassed if L3 forwarding is not set up on any main/sub-interface.
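A hypothetical sketch of the resulting bypass decision; the struct and
field names are illustrative assumptions, not the ethernet-input code:

  /* Hypothetical sketch: skip the per-packet DMAC comparison when the
   * interface has no L3 configuration, or when the NIC/driver is known
   * to filter DMAC in hardware. Illustrative names only. */
  typedef struct
  {
    unsigned int l3_config_count; /* main + sub-interfaces in L3 mode */
    int hw_dmac_filtering;        /* driver advertises L3 DMAC filtering */
  } example_if_t;

  static inline int
  dmac_check_needed (const example_if_t *ifp)
  {
    if (ifp->l3_config_count == 0)
      return 0; /* no L3 forwarding set up on this interface */
    if (ifp->hw_dmac_filtering)
      return 0; /* NIC already filters on DMAC */
    return 1;
  }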
Type: fix
Ticket: VPP-1868
Signed-off-by: John Lo <loj@cisco.com>
Change-Id: I204d90459c13e9e486cfcba4e64e3d479bc9f2ae