Compare commits

...

39 Commits

Author SHA1 Message Date
Vratko Polak
095a953070 sr: use correct reply to sr_policy_add_v2
Type: fix
Fixes: c4c205b091934d96a173f4c0d75ef7e888298ac7

Change-Id: I110729601a9f19451297883b781ec56e2b31465b
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
(cherry picked from commit 3a05db6264a4b2edf1fc7e6c35ee3b688baa463a)
2024-04-18 15:29:28 +00:00
Dave Wallace
401b53d939 misc: in crcchecker.py, don't check for uncommitted changes in CI
Type: fix

Change-Id: I63260a953e54518b3084b62fccdb4af81315b229
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
(cherry picked from commit 3a0d7d2c95e8b8087c20b99fed5bcf62fac027d9)
2024-04-08 22:43:41 +00:00
Alexander Skorichenko
6b287b5301 fib: fix fib_path_create() with drop targets
Properly set type
	path->fp_type = FIB_PATH_TYPE_SPECIAL
for paths with (path->fp_cfg_flags & FIB_PATH_CFG_FLAG_DROP)

Type: fix

Change-Id: Id61dbcda781d872b878e6a6410c05b840795ed46
Signed-off-by: Alexander Skorichenko <askorichenko@netgate.com>
(cherry picked from commit 4b08632748727486e7ebfdcf4d992743595bc500)
2023-12-01 19:30:02 +00:00
Alexander Chernavin
f9c322be7d bfd: fix buffer leak when cannot send periodic packets
When a periodic BFD packet cannot be sent because the interface is
disabled, the allocated buffer needs to be freed. Currently, this
happens for IPv4 sessions. However, buffers leak for IPv6 sessions
because, in that case, bfd_transport_control_frame() and
bfd_transport_udp6() do not indicate failure.

With this fix, stop always returning success in bfd_transport_udp6() and
start returning the actual return value.

Type: fix
Change-Id: I5fa4d9206e32cccae3053ef24966d80e2022fc81
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
(cherry picked from commit 1f4023d55d7a9c777465d24065e91fc076602fb0)
2023-12-01 19:29:55 +00:00
Matthew Smith
b75bde18c4 ipsec: keep esp encrypt pointer and index synced
Type: fix

In esp_encrypt_inline(), an index and pointer to the last processed SA
are stored. If the next packet uses the same SA, we defer on updating
counters until a different SA is encountered.

The pointer was being retrieved, then the SA was checked to see if the
packet should be dropped due to no crypto/integ algs, then the index was
updated. If the check failed, we would skip further processing and now
the pointer refers to a different SA than the index. When you have a
batch of packets that are encrypted using an SA followed by a packet
which is dropped for no algs and then more packets to be encrypted using
the original SA, the packets that arrive after the one that was dropped
end up being processed using a pointer that refers to the wrong SA data.
This can result in a segv.

Update the current_sa_index at the same time that the sa0 pointer is
updated.

Signed-off-by: Matthew Smith <mgsmith@netgate.com>
Change-Id: I65f1511a37475b4f737f5e1b51749c0a30e88806
(cherry picked from commit dac9e566cd16fc375fff14280b37cb5135584fc6)
2023-12-01 19:29:47 +00:00
Alexander Chernavin
a56e75fd71 flowprobe: fix L3 header offset calculation for tx flows
The recent TX flows generation fix introduced "l3_hdr_offset" which
represents the offset of the IP header in the buffer's data. The problem
is that it is erroneously defined as a 16-bit unsigned integer. If the
calculated offset is negative, "l3_hdr_offset" will get a value close to
UINT16_MAX. And the code will search the IP header somewhere beyond the
buffer's data. For example, this will occur in the case when an ICMP
error is being sent in response to a received packet.

With this fix, make "l3_hdr_offset" a signed integer.

Type: fix
Change-Id: I6f1283c7ba02656d0f592519b5863e68348c5583
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
(cherry picked from commit bae6b6d1f2a2e6623257afab21e05da2d795323a)
2023-12-01 19:29:38 +00:00
Alexander Chernavin
da5ddd1714 flowprobe: fix tx flows generated for rewritten traffic
Currently, when IPFIX record generation is enabled for an interface in
the TX direction, some rewritten traffic is being sent from that
interface, and the Ethernet header's location has changed due to
rewriting, generated TX flows will contain fields with wrong and zero
values. For example, that can be observed when traffic is rewritten from
a subinterface to a hardware interface (i.e. when tags are removed). A
TX flow generated in this case will have wrong L2 fields because of an
incorrectly located Ethernet header, and zero L3/L4 fields because the
Ethernet type will match neither IP4 nor IP6.

The same code is executed to generate flows for both input and output
features. And the same mechanism is applied to identify the Ethernet
header in the buffer's data. However, such general code usually works
with the buffer's data conditionally based on the direction. For most
input features, the buffer's current_data will likely point to the IP
header. For most output features, the buffer's current_data will likely
point to the Ethernet header.

With this fix:
 - Keep relying on ethernet_buffer_get_header() to locate the Ethernet
   header for input features. And start using vlib_buffer_get_current()
   to locate the Ethernet header for output features. The function will
   account for the Ethernet header's position change in the buffer's
   data if there is rewriting.

 - After fixing Ethernet header determination in the buffer's data,
   L3/L4 fields will contain non-zero but still incorrect data. That is
   because IP header determination needs to be fixed too. It currently
   relies on the fact that the Ethernet header is always located at the
   beginning of the buffer's data and that l2_hdr_sz can be used as an
   IP header offset. However, this may not be the case after rewriting.
   So start calculating the actual offset of the IP header in the
   buffer's data.

 - Add a unit test to cover the case.

Type: fix
Change-Id: Icf3f9e6518912d06dff0d5aa48e103b3dc94edb7
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
(cherry picked from commit 64d6463d2eac0c0fe434f3a7aa56fe4d85c046d9)
2023-12-01 19:29:23 +00:00
Alexander Chernavin
bec4f4a7ab flowprobe: fix clearing interface state on feature disabling
As a result of recent fixes, all currently stored flows of an interface
are deleted when the feature is being disabled for the interface. This
includes stopping the timer and freeing the flow entries for further
reuse. The problem is that meta information is not cleared in the flow
entries being deleted. For example, packet delta count will keep its
value. The next flow that gets one of these pool entries will already
have a non-zero packet count, so the counting of packets will start
from a non-zero value and an incorrect packet delta count will be
exported for that flow.

With this fix, clear meta information too when clearing interface state.
Also, update the corresponding test to cover this case.

Type: fix
Change-Id: I9a73b3958adfd1676e66b0ed50f1478920671cca
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
(cherry picked from commit dab1dfeea9fec04a9a90a82dc5d770fbff344540)
2023-12-01 19:29:08 +00:00
Alexander Chernavin
b8b02937b1 flowprobe: fix accumulation of tcp flags in flow entries
Currently, TCP flags of a flow entry don't get reset once the flow is
exported (unlike other meta information about a flow - packet delta
count and octet delta count). So TCP flags are accumulated as long as
the flow is active. When the flow expires, it is exported the last time,
and its pool entry is freed for further reuse. The next flow that gets
this pool entry will already have non-zero TCP flags. If it's a TCP
flow, the flags will keep being accumulated. This might look fine when
exported. If it's a non-TCP flow, that will definitely look erroneous.

With this fix, reset TCP flags once the flow is exported. Also, cover
the reuse case with tests.

Type: fix
Change-Id: I5f8560afffcfe107909117d3d063e8a69793437e
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
(cherry picked from commit 21922cec7339f48989f230248de36a98816c4b1b)
2023-12-01 19:28:52 +00:00
Matthew Smith
0d7d22cf67 fib: only update glean for interface if necessary
Type: improvement

If an interface address is added, the glean adjacency for its covering
prefix is updated with that address. In the case of multiple addresses
within the same prefix being added, the most recently added one will end
up being used as the sender protocol address for ARP requests.

Similar behavior occurs when an interface address is deleted. The glean
adjacency is updated to some appropriate entry under its covering
prefix. If there were multiple interface addresses configured, we may
update the address on the adjacency even though the address currently in
use is not the one being deleted.

Add a new value PROVIDES_GLEAN to fib_entry_src_flag_t. The flag
identifies whether a source interface entry is being used as the address
for the glean adjacency for the covering prefix.

Update logic so that the glean is only updated on adding an interface
address if there is not already a sibling entry in use which has the
flag set. Also, only update the glean on deleting an interface address
if the address being deleted has the flag set.

Also update unit test which validates expected behavior in the case
where multiple addresses within a prefix are configured on an interface.

Signed-off-by: Matthew Smith <mgsmith@netgate.com>
Change-Id: I7d918b8dd703735b20ec76e0a60af6d7e571b766
(cherry picked from commit 9e5694b405e0200725a993f0c17d452fab508435)
2023-12-01 19:28:40 +00:00
Alexander Chernavin
6cc757eff7 flowprobe: fix sending L4 fields in L2 template and flows
Currently, when L2 and L4 recording is enabled on the L2 datapath, the
L2 template will contain L4 fields and L2 flows will be exported with
those fields always set to zero.

With this fix, when L4 recording is enabled, add L4 fields to templates
other than the L2 template (i.e. to the IP4, IP6, L2_IP4, and L2_IP6
templates). And export L2 flows without L4 fields. Also, cover that case
in the tests.

Type: fix
Change-Id: Id5ed8b99af5634fb9d5c6e695203344782fdac01
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
(cherry picked from commit 6b027cfdbcb750b8aa1b8ab9a3904c1b2dca6f15)
2023-12-01 19:28:14 +00:00
Alexander Chernavin
9dc9136ec4 flowprobe: fix corrupted packets sent after feature disabling
When IPFIX flow record generation is enabled on an interface and the
active timer is set, flows will be saved and then exported according to
the active and passive timers. If the feature is then disabled on the
interface, the currently saved flow entries will remain in the state
tables. They will gradually expire and be exported. The problem is that
the template for them has already been removed, so they will be sent
with a zero template ID, which will make them unreadable.

A similar problem will occur if feature settings are "changed" on the
interface - i.e. disable the feature and re-enable it with different
settings (e.g. set a different datapath). The remaining flows that
correspond to the previous feature settings will be eventually sent
either with zero template ID or with template ID that corresponds to the
current feature settings on the interface (and look like garbage data).

With this fix, flush the current buffers before template removal and
clear the remaining flows of the interface during feature disabling.

Type: fix
Change-Id: I1e57db06adfdd3a02fed1a6a89b5418f85a35e16
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
(cherry picked from commit f68afe85a6e4d5e00fdad1af19a76eb40fdfa388)
2023-12-01 19:26:58 +00:00
Alexander Chernavin
74a7a5ae08 ethernet: run callbacks for subifs too when mac changes
When MAC address changes for an interface, address change callbacks are
executed for it. In turn adjacencies register a callback for MAC address
changes to be able to update their rewrite strings accordingly.

Subinterfaces inherit MAC address from the parent interface. When MAC
address of the parent interface changes, it also implies MAC address
change for its subinterfaces. The problem is that this is currently not
considered when address change callbacks are executed. After MAC address
change on the parent interface, packets sent from subinterfaces might
have wrong source MAC address as the result of stale adjacencies. For
example, ARP messages might be sent with the wrong (previous) MAC
address and address resolution will fail.

With this fix, when address change callbacks are executed for an
interface, they will be also executed for its subinterfaces. And
adjacencies will be able to update accordingly.

Type: fix
Change-Id: I87349698c10b9c3a31a28c0287e6dc711d9413a2
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
(cherry picked from commit 8a92b68bc8eaaec48d144fba62490a32f28eb422)
2023-12-01 19:26:17 +00:00
Alexander Chernavin
70591c147d flowprobe: fix sending L2 flows using L2_IP6 template
Currently, L2 flows are exported using L2_IP6 template if L3 or L4
recording is enabled on L2 datapath. That occurs because during feature
enable, L2 template is added and its ID is not saved immediately. Then
L2_IP4 and L2_IP6 templates are added overwriting "template_id" each
time. And in the end, the current value of "template_id" is saved for L2
template. The problem is that "template_id" at that point contains the
ID of L2_IP6 template.

With this fix, save the template ID immediately after adding a template
for all variants (datapaths). Also, cover the case with a test.

Type: fix
Change-Id: Id27288043b3b8f0e89e77f45ae9a01fa7439e20e
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
(cherry picked from commit 120095d3d33bfac64c1f3c870f8a332eeaf638f0)
2023-12-01 19:19:44 +00:00
Steven Luong
6c2464d032 memif: contention between memif_disconnect and memif RX/TX threads
memif_disconnect may be called without barrier sync. It removes state in
mq without protection, which may cause trouble for memif RX/TX worker
threads.

The fix is to protect mq removal in memif_disconnect.

Type: fix

Change-Id: I368c466d1f13df98980dfa87e8442fbcd822a428
Signed-off-by: Steven Luong <sluong@cisco.com>
(cherry picked from commit 34c721fb47155135bf2173ca7b9a31aaacfde190)
2023-12-01 16:36:09 +00:00
Neale Ranns
6d83dddeb1 fib: Don't use an address from an attached prefix when sending ARP requests.
Change-Id: I4c3144794dd0bd7de6150929e53f6d305c496b17

Type: fix
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I7b0c2c2dec5e867970599b8f2f2da17f2ff0b17c
(cherry picked from commit 39528796098973fe9a5411e0f6f94268c3324e94)
2023-11-30 16:24:53 +01:00
Florin Coras
d20bacd0e5 tcp: allow fins in syns in syn-rcvd
Also make sure connection is properly cleaned up.

Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I02f83e9a1e17cbbbd2ee74044d02049b2fd2f21c
(cherry picked from commit da2ae9af61fbdb3b68eb72f8d35294fdb3720303)
2023-10-25 17:20:00 +00:00
Florin Coras
dcb10ce353 tcp: handle syn-ack in fin-wait-2 in rcv process
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: If74e04498423bed42593e79ec92482421cfda8d2
(cherry picked from commit 61d63e8323d11240edab44ff714def1c573fc987)
2023-10-25 17:19:51 +00:00
Florin Coras
a98ef25fc7 tcp: initialize connection index on rst w packet
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Ie01d7e8d6eddf3ba88f2cd6eb8369c8ec8179cb4
(cherry picked from commit 0094fe0190b623dbef0e57b7f4032ba3cf5f36b0)
2023-10-25 17:19:43 +00:00
Florin Coras
bfa5a1a7fa session: fix duplicate rx events
Be less aggressive with rx events on connect/accept notification.

Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Ie93a08c7eef69383bf0301a163fd2131dd51372a
(cherry picked from commit 054aa8c2f2681e3a4a4af02d9119fb6eaa5dbad6)
2023-10-25 17:19:36 +00:00
Steven Luong
946cb7b22b session: ignore connecting half_open session in session_tx_fifo_dequeue_internal
s->tx_fifo is 0 for the connecting half open session.

Type: fix

Change-Id: I2ba1ae99a2fa4fae1896587f40e0e4fb73c1edcb
Signed-off-by: Steven Luong <sluong@cisco.com>
(cherry picked from commit 947aa8fffcd85563ed0bad620f739e76c6002f50)
2023-10-25 17:19:29 +00:00
Brian Morris
170ab64736 tls: Fix SSL_CTX leak on every client session
Type: fix

Change-Id: I35b3920288269073cdd35f79c938396128d169c9
Signed-off-by: Brian Morris <bmorris2@cisco.com>
(cherry picked from commit 733e093e7099552a4609dc5efadf9261df7778d4)
2023-10-25 17:19:21 +00:00
Florin Coras
5a164283ad session: fix tx deq ntf assert for cl
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I97a04ed0417f1a3433665f6aa1a9424138fd54cb
(cherry picked from commit fa9f37c15ceb32c4b4d6fd0d352cfd5a38a6ab94)
2023-10-25 17:18:59 +00:00
Piotr Bronowski
74209bac28 dpdk-cryptodev: improve dequeue behavior, fix cache stats logging
This patch provides minor improvements to the logic governing dequeuing
from the ring. Previously, whenever a frame was dequeued, we kept trying
to dequeue another one from the ring until inflight == 0. Now a
threshold is set at 8 frames pending in the cache to be consumed by
vnet. This threshold was chosen based on observing cache ring stats in a
system under load. Some unnecessary logic for setting deq_tail has been
removed. Logging has also been corrected, and the cache ring logic
simplified.

Type: improvement
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I19f3daf5913006e9cb23e142a163f596e85f5bda
(cherry picked from commit 7cc17f6df9b3f4b45aaac16ba0aa098d6cd58794)
2023-10-25 17:18:40 +00:00
Andrew Yourtchenko
7c4027fa5e misc: VPP 23.10 Release Notes
Type: docs
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
Change-Id: Icd40064c06ccc53efba1cd9564613108b999b656
2023-10-20 11:24:41 +02:00
Florin Coras
fe95c23795 session: ignore app rx ntf if transport closed
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Id56a101a6350903b00f7c96705fb86039e70e12c
(cherry picked from commit a0b8c8fdf3fc555fc2ed7792d67bf3fb4fb99b9f)
2023-10-11 20:05:48 +00:00
Dave Wallace
015a6f7f17 vppinfra: fix coverity issue CID 323952
Type: fix
Fixes: 08600ccfa

Change-Id: I53ba0d96507b55ab7cd735073d6c4cf20a3cc948
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
(cherry picked from commit 05cc62dd504bbb0fb230fcf3786ed7f4d5be2364)
2023-10-11 03:13:11 +00:00
Florin Coras
471dc6b1e3 session: maintain old state on premature close
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I2ea821e0499a3874c4579f5480ea86f30ebe615f
(cherry picked from commit 84c9ee3d696ef5c1162530a30ba591b806a7e175)
2023-10-10 23:49:41 +00:00
Florin Coras
1ec3a70f66 session: propagate delayed rx evts after connect/accept
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I4a2e8f864df7269ec5a3c4fd4d8785a67b687d58
(cherry picked from commit 431b489c5a4f60a82781ace60d07471d003787af)
2023-10-09 23:39:49 +00:00
Florin Coras
9003233377 tls: propagate reads to app irrespective of state
Session input node handles rx notifications even if the session is not
fully accepted/connected.

Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I6560c45db8f8e0b7f0dc3bdd0939f13ca2f43f15
(cherry picked from commit aa7b88120ad83a29a05522bed4e5aa71524b8aba)
2023-10-09 21:46:16 +00:00
Florin Coras
3c06859f9f session: handle accept and connect errors
If builtin apps refuse connections, they should be cleaned up.

Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I95ef22902ac3fe873e15e250aa5f03031c2dc0c4
(cherry picked from commit 9ffec14a2202e1268c4a2f189c39a90986090a25)
2023-10-09 21:42:49 +00:00
Florin Coras
4ba523740f tls: no read after app close
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I34f8ee2e36d07e8e55e21561528fc6b73feb852f
(cherry picked from commit 3843d0dd03a3ebbdb5d13b54e1b871a8ea72498c)
2023-10-09 21:41:12 +00:00
Florin Coras
05919da49d tls: report error if connected cannot be initialized
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I987ac6b461b473836917bce6ce0d4ac109cc8ddb
(cherry picked from commit a3d55df1e91a7df4ad4c0e1b639ba12a1ed04c79)
2023-10-09 21:40:38 +00:00
Damjan Marion
b53daca83f vppinfra: fix string termination in clib_file_get_resolved_basename
Type: fix
Fixes: 40f4810
Change-Id: Idf51462c8154663de23154f17a894b7245c9fbf0
Signed-off-by: Damjan Marion <damarion@cisco.com>
(cherry picked from commit 08600ccfa12f529d6ca7b852106227fc5f7addbf)
2023-10-09 21:38:26 +00:00
Florin Coras
15d0c7a3fb tls: limit openssl engine max read burst
Type: improvement

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Ic7a8fd37d79fc9c09c8b1539d630f3b8983b8bb3
(cherry picked from commit c1b038001e1f18effb3c9ff5daa9e9cac1cd66e8)
2023-10-09 21:37:55 +00:00
Florin Coras
f9af6b32ef tls: init connection for prealloced app sessions
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Icd62dc110e3a73b24372f3a5162f8008b7edee9f
(cherry picked from commit a127d3c157cb6e7658451a877abbfe0dd16c982a)
2023-10-09 21:37:24 +00:00
Florin Coras
ee2e502736 tls: ignore tx events for not fully established sessions
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I401a116a1a46c0dc5d591115de5ff0eef2f6440b
2023-10-09 21:36:45 +00:00
Florin Coras
e7295fd974 tls: fix formatting of half open connections
Type: fix

Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: If96dc748a716a261edfcb1020210bd73058e382f
2023-10-02 19:33:49 +00:00
Andrew Yourtchenko
14df6fc1ea misc: Initial changes for stable/2310 branch
Type: docs
Change-Id: I82d323c6e4585772e5c9a9f5b5bbb77b65c1da85
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
2023-09-20 16:56:20 +02:00
33 changed files with 1492 additions and 188 deletions


@@ -2,3 +2,4 @@
host=gerrit.fd.io
port=29418
project=vpp
defaultbranch=stable/2310


@@ -6,6 +6,7 @@ Release notes
.. toctree::
:maxdepth: 2
v23.10
v23.06
v23.02
v22.10.1

File diff suppressed because it is too large


@@ -82,13 +82,15 @@ def filelist_from_git_ls():
def is_uncommitted_changes():
"""Returns true if there are uncommitted changes in the repo"""
git_status = "git status --porcelain -uno"
returncode = run(git_status.split(), stdout=PIPE, stderr=PIPE)
if returncode.returncode != 0:
sys.exit(returncode.returncode)
# Don't run this check in the Jenkins CI
if os.getenv("FDIOTOOLS_IMAGE") is None:
git_status = "git status --porcelain -uno"
returncode = run(git_status.split(), stdout=PIPE, stderr=PIPE)
if returncode.returncode != 0:
sys.exit(returncode.returncode)
if returncode.stdout:
return True
if returncode.stdout:
return True
return False


@@ -672,50 +672,73 @@ cryptodev_show_cache_rings_fn (vlib_main_t *vm, unformat_input_t *input,
{
cryptodev_main_t *cmt = &cryptodev_main;
u32 thread_index = 0;
u16 i;
vec_foreach_index (thread_index, cmt->per_thread_data)
{
cryptodev_engine_thread_t *cet = cmt->per_thread_data + thread_index;
cryptodev_cache_ring_t *ring = &cet->cache_ring;
u16 head = ring->head;
u16 tail = ring->tail;
u16 n_cached = ((head == tail) && (ring->frames[head].f == 0)) ?
0 :
((head == tail) && (ring->frames[head].f != 0)) ?
(CRYPTODEV_CACHE_QUEUE_MASK + 1) :
(head > tail) ?
(head - tail) :
(CRYPTODEV_CACHE_QUEUE_MASK - tail + head);
u16 n_cached = (CRYPTODEV_CACHE_QUEUE_SIZE - tail + head) &
CRYPTODEV_CACHE_QUEUE_MASK;
u16 enq_head = ring->enq_head;
u16 deq_tail = ring->deq_tail;
u16 n_frames_inflight =
((enq_head == deq_tail) && (ring->frames[enq_head].f == 0)) ?
(enq_head == deq_tail) ?
0 :
((enq_head == deq_tail) && (ring->frames[enq_head].f != 0)) ?
CRYPTODEV_CACHE_QUEUE_MASK + 1 :
(enq_head > deq_tail) ?
(enq_head - deq_tail) :
(CRYPTODEV_CACHE_QUEUE_MASK - deq_tail + enq_head);
((CRYPTODEV_CACHE_QUEUE_SIZE + enq_head - deq_tail) &
CRYPTODEV_CACHE_QUEUE_MASK);
/* even if some elements of dequeued frame are still pending for deq
* we consider the frame as processed */
u16 n_frames_processed =
((tail == deq_tail) && (ring->frames[deq_tail].f == 0)) ? 0 :
((tail == deq_tail) && (ring->frames[deq_tail].f != 0)) ? 1 :
(deq_tail > tail) ? (deq_tail - tail + 1) :
(CRYPTODEV_CACHE_QUEUE_MASK - tail + deq_tail - 1);
((tail == deq_tail) && (ring->frames[deq_tail].f == 0)) ?
0 :
((CRYPTODEV_CACHE_QUEUE_SIZE - tail + deq_tail) &
CRYPTODEV_CACHE_QUEUE_MASK) +
1;
/* even if some elements of enqueued frame are still pending for enq
* we consider the frame as enqueued */
u16 n_frames_pending =
(head == enq_head) ? 0 :
((CRYPTODEV_CACHE_QUEUE_SIZE - enq_head + head) &
CRYPTODEV_CACHE_QUEUE_MASK) -
1;
u16 elts_to_enq =
(ring->frames[enq_head].n_elts - ring->frames[enq_head].enq_elts_head);
u16 elts_to_deq =
(ring->frames[deq_tail].n_elts - ring->frames[deq_tail].deq_elts_tail);
u32 elts_total = 0;
for (i = 0; i < CRYPTODEV_CACHE_QUEUE_SIZE; i++)
elts_total += ring->frames[i].n_elts;
if (vlib_num_workers () > 0 && thread_index == 0)
continue;
vlib_cli_output (vm, "\n\n");
vlib_cli_output (vm, "Frames total: %u", n_cached);
vlib_cli_output (vm, "Frames pending in the ring: %u",
n_cached - n_frames_inflight - n_frames_processed);
vlib_cli_output (vm, "Frames cached in the ring: %u", n_cached);
vlib_cli_output (vm, "Frames cached but not processed: %u",
n_frames_pending);
vlib_cli_output (vm, "Frames inflight: %u", n_frames_inflight);
vlib_cli_output (vm, "Frames dequed but not returned: %u",
n_frames_processed);
vlib_cli_output (vm, "Frames processed: %u", n_frames_processed);
vlib_cli_output (vm, "Elements total: %u", elts_total);
vlib_cli_output (vm, "Elements inflight: %u", cet->inflight);
vlib_cli_output (vm, "Head: %u", head);
vlib_cli_output (vm, "Tail: %u", tail);
vlib_cli_output (vm, "Head index: %u", head);
vlib_cli_output (vm, "Tail index: %u", tail);
vlib_cli_output (vm, "Current frame index beeing enqueued: %u",
enq_head);
vlib_cli_output (vm, "Current frame index being dequeued: %u", deq_tail);
vlib_cli_output (vm,
"Elements in current frame to be enqueued: %u, waiting "
"to be enqueued: %u",
ring->frames[enq_head].n_elts, elts_to_enq);
vlib_cli_output (vm,
"Elements in current frame to be dequeued: %u, waiting "
"to be dequeued: %u",
ring->frames[deq_tail].n_elts, elts_to_deq);
vlib_cli_output (vm, "\n\n");
}
return 0;


@@ -32,6 +32,7 @@
#define CRYPTODEV_MAX_IV_SIZE 16
#define CRYPTODEV_MAX_AAD_SIZE 16
#define CRYPTODEV_MAX_N_SGL 8 /**< maximum number of segments */
#define CRYPTODEV_MAX_PROCESED_IN_CACHE_QUEUE 8
#define CRYPTODEV_IV_OFFSET (offsetof (cryptodev_op_t, iv))
#define CRYPTODEV_AAD_OFFSET (offsetof (cryptodev_op_t, aad))
@@ -303,19 +304,24 @@ cryptodev_cache_ring_push (cryptodev_cache_ring_t *r,
vnet_crypto_async_frame_t *f)
{
u16 head = r->head;
u16 tail = r->tail;
cryptodev_cache_ring_elt_t *ring_elt = &r->frames[head];
/**
* in debug mode we do the ring sanity test when a frame is enqueued to
* the ring.
**/
#if CLIB_DEBUG > 0
u16 tail = r->tail;
u16 n_cached = (head >= tail) ? (head - tail) :
(CRYPTODEV_CACHE_QUEUE_MASK - tail + head);
ERROR_ASSERT (n_cached < VNET_CRYPTO_FRAME_POOL_SIZE);
ERROR_ASSERT (n_cached < CRYPTODEV_CACHE_QUEUE_SIZE);
ERROR_ASSERT (r->raw == 0 && r->frames[head].raw == 0 &&
r->frames[head].f == 0);
#endif
/*the ring capacity is CRYPTODEV_CACHE_QUEUE_SIZE - 1*/
if (PREDICT_FALSE (head + 1) == tail)
return 0;
ring_elt->f = f;
ring_elt->n_elts = f->n_elts;
/* update head */


@@ -148,6 +148,9 @@ cryptodev_frame_linked_algs_enqueue (vlib_main_t *vm,
cryptodev_cache_ring_elt_t *ring_elt =
cryptodev_cache_ring_push (ring, frame);
if (PREDICT_FALSE (ring_elt == NULL))
return -1;
ring_elt->aad_len = 1;
ring_elt->op_type = (u8) op_type;
return 0;
@@ -295,6 +298,10 @@ cryptodev_frame_aead_enqueue (vlib_main_t *vm,
ERROR_ASSERT (frame->n_elts > 0);
cryptodev_cache_ring_elt_t *ring_elt =
cryptodev_cache_ring_push (ring, frame);
if (PREDICT_FALSE (ring_elt == NULL))
return -1;
ring_elt->aad_len = aad_len;
ring_elt->op_type = (u8) op_type;
return 0;
@@ -462,7 +469,7 @@ cryptodev_frame_dequeue_internal (vlib_main_t *vm, u32 *nb_elts_processed,
vnet_crypto_async_frame_t *frame = NULL;
cryptodev_cache_ring_t *ring = &cet->cache_ring;
u16 *const deq = &ring->deq_tail;
u16 n_deq, idx, left_to_deq, i;
u16 n_deq, left_to_deq;
u16 max_to_deq = 0;
u16 inflight = cet->inflight;
u8 dequeue_more = 0;
@@ -472,29 +479,12 @@ cryptodev_frame_dequeue_internal (vlib_main_t *vm, u32 *nb_elts_processed,
u32 n_elts, n;
u64 err0 = 0, err1 = 0, err2 = 0, err3 = 0; /* partial errors mask */
idx = ring->deq_tail;
for (i = 0; i < VNET_CRYPTO_FRAME_POOL_SIZE; i++)
{
u32 frame_inflight =
CRYPTODEV_CACHE_RING_GET_FRAME_ELTS_INFLIGHT (ring, idx);
if (PREDICT_TRUE (frame_inflight > 0))
break;
idx++;
idx &= (VNET_CRYPTO_FRAME_POOL_SIZE - 1);
}
ERROR_ASSERT (i != VNET_CRYPTO_FRAME_POOL_SIZE);
ring->deq_tail = idx;
left_to_deq =
ring->frames[*deq].f->n_elts - ring->frames[*deq].deq_elts_tail;
max_to_deq = clib_min (left_to_deq, CRYPTODE_DEQ_MAX);
/* deq field can be used to track frame that is currently dequeued
based on that you can specify the amount of elements to deq for the frame */
n_deq =
rte_cryptodev_dequeue_burst (cet->cryptodev_id, cet->cryptodev_q,
(struct rte_crypto_op **) cops, max_to_deq);
@@ -547,9 +537,13 @@ cryptodev_frame_dequeue_internal (vlib_main_t *vm, u32 *nb_elts_processed,
ring->frames[*deq].deq_elts_tail += n_deq;
if (cryptodev_cache_ring_update_deq_tail (ring, deq))
{
u32 fr_processed =
(CRYPTODEV_CACHE_QUEUE_SIZE - ring->tail + ring->deq_tail) &
CRYPTODEV_CACHE_QUEUE_MASK;
*nb_elts_processed = frame->n_elts;
*enqueue_thread_idx = frame->enqueue_thread_index;
dequeue_more = (max_to_deq < CRYPTODE_DEQ_MAX);
dequeue_more = (fr_processed < CRYPTODEV_MAX_PROCESED_IN_CACHE_QUEUE);
}
cet->inflight = inflight;


@@ -118,6 +118,9 @@ cryptodev_frame_linked_algs_enqueue (vlib_main_t *vm,
cryptodev_cache_ring_elt_t *ring_elt =
cryptodev_cache_ring_push (ring, frame);
if (PREDICT_FALSE (ring_elt == NULL))
return -1;
ring_elt->aad_len = 1;
ring_elt->op_type = (u8) op_type;
return 0;
@@ -272,6 +275,9 @@ cryptodev_raw_aead_enqueue (vlib_main_t *vm, vnet_crypto_async_frame_t *frame,
cryptodev_cache_ring_elt_t *ring_elt =
cryptodev_cache_ring_push (ring, frame);
if (PREDICT_FALSE (ring_elt == NULL))
return -1;
ring_elt->aad_len = aad_len;
ring_elt->op_type = (u8) op_type;
return 0;
@@ -466,32 +472,17 @@ cryptodev_raw_dequeue_internal (vlib_main_t *vm, u32 *nb_elts_processed,
cryptodev_cache_ring_t *ring = &cet->cache_ring;
u16 *const deq = &ring->deq_tail;
u32 n_success;
u16 n_deq, indice, i, left_to_deq;
u16 n_deq, i, left_to_deq;
u16 max_to_deq = 0;
u16 inflight = cet->inflight;
u8 dequeue_more = 0;
int dequeue_status;
indice = *deq;
for (i = 0; i < VNET_CRYPTO_FRAME_POOL_SIZE; i++)
{
if (PREDICT_TRUE (
CRYPTODEV_CACHE_RING_GET_FRAME_ELTS_INFLIGHT (ring, indice) > 0))
break;
indice += 1;
indice &= CRYPTODEV_CACHE_QUEUE_MASK;
}
ERROR_ASSERT (i != VNET_CRYPTO_FRAME_POOL_SIZE);
*deq = indice;
left_to_deq = ring->frames[*deq].n_elts - ring->frames[*deq].deq_elts_tail;
max_to_deq = clib_min (left_to_deq, CRYPTODE_DEQ_MAX);
/* you can use deq field to track frame that is currently dequeued */
/* based on that you can specify the amount of elements to deq for the frame
/* deq field can be used to track frame that is currently dequeued */
/* based on that the amount of elements to deq for the frame can be specified
*/
n_deq = rte_cryptodev_raw_dequeue_burst (
@@ -516,9 +507,13 @@ cryptodev_raw_dequeue_internal (vlib_main_t *vm, u32 *nb_elts_processed,
if (cryptodev_cache_ring_update_deq_tail (ring, deq))
{
u32 fr_processed =
(CRYPTODEV_CACHE_QUEUE_SIZE - ring->tail + ring->deq_tail) &
CRYPTODEV_CACHE_QUEUE_MASK;
*nb_elts_processed = frame->n_elts;
*enqueue_thread_idx = frame->enqueue_thread_index;
dequeue_more = max_to_deq < CRYPTODE_DEQ_MAX;
dequeue_more = (fr_processed < CRYPTODEV_MAX_PROCESED_IN_CACHE_QUEUE);
}
int res =
@@ -555,24 +550,18 @@ cryptodev_raw_dequeue (vlib_main_t *vm, u32 *nb_elts_processed,
u8 dequeue_more = 1;
while (cet->inflight > 0 && dequeue_more)
{
dequeue_more = cryptodev_raw_dequeue_internal (vm, nb_elts_processed,
enqueue_thread_idx);
}
if (PREDICT_TRUE (ring->frames[ring->enq_head].f != 0))
cryptodev_enqueue_frame_to_qat (vm, &ring->frames[ring->enq_head]);
if (PREDICT_TRUE (ring_elt->f != 0))
if (PREDICT_TRUE (ring_elt->f != 0) &&
(ring_elt->n_elts == ring_elt->deq_elts_tail))
{
if (ring_elt->enq_elts_head == ring_elt->deq_elts_tail)
{
vlib_node_set_interrupt_pending (
vlib_get_main_by_index (vm->thread_index), cm->crypto_node_index);
ret_frame = cryptodev_cache_ring_pop (ring);
return ret_frame;
}
vlib_node_set_interrupt_pending (
vlib_get_main_by_index (vm->thread_index), cm->crypto_node_index);
ret_frame = cryptodev_cache_ring_pop (ring);
}
return ret_frame;


@ -245,6 +245,7 @@ flowprobe_template_rewrite_inline (ipfix_exporter_t *exp, flow_report_t *fr,
flowprobe_main_t *fm = &flowprobe_main;
flowprobe_record_t flags = fr->opaque.as_uword;
bool collect_ip4 = false, collect_ip6 = false;
bool collect_l4 = false;
stream = &exp->streams[fr->stream_index];
@ -257,6 +258,10 @@ flowprobe_template_rewrite_inline (ipfix_exporter_t *exp, flow_report_t *fr,
if (which == FLOW_VARIANT_L2_IP6)
flags |= FLOW_RECORD_L2_IP6;
}
if (flags & FLOW_RECORD_L4)
{
collect_l4 = (which != FLOW_VARIANT_L2);
}
field_count += flowprobe_template_common_field_count ();
if (flags & FLOW_RECORD_L2)
@ -265,7 +270,7 @@ flowprobe_template_rewrite_inline (ipfix_exporter_t *exp, flow_report_t *fr,
field_count += flowprobe_template_ip4_field_count ();
if (collect_ip6)
field_count += flowprobe_template_ip6_field_count ();
if (flags & FLOW_RECORD_L4)
if (collect_l4)
field_count += flowprobe_template_l4_field_count ();
/* allocate rewrite space */
@ -304,7 +309,7 @@ flowprobe_template_rewrite_inline (ipfix_exporter_t *exp, flow_report_t *fr,
f = flowprobe_template_ip4_fields (f);
if (collect_ip6)
f = flowprobe_template_ip6_fields (f);
if (flags & FLOW_RECORD_L4)
if (collect_l4)
f = flowprobe_template_l4_fields (f);
/* Back to the template packet... */
@ -503,6 +508,43 @@ flowprobe_create_state_tables (u32 active_timer)
return error;
}
static clib_error_t *
flowprobe_clear_state_if_index (u32 sw_if_index)
{
flowprobe_main_t *fm = &flowprobe_main;
clib_error_t *error = 0;
u32 worker_i;
u32 entry_i;
if (fm->active_timer > 0)
{
vec_foreach_index (worker_i, fm->pool_per_worker)
{
pool_foreach_index (entry_i, fm->pool_per_worker[worker_i])
{
flowprobe_entry_t *e =
pool_elt_at_index (fm->pool_per_worker[worker_i], entry_i);
if (e->key.rx_sw_if_index == sw_if_index ||
e->key.tx_sw_if_index == sw_if_index)
{
e->packetcount = 0;
e->octetcount = 0;
e->prot.tcp.flags = 0;
if (fm->passive_timer > 0)
{
tw_timer_stop_2t_1w_2048sl (
fm->timers_per_worker[worker_i],
e->passive_timer_handle);
}
flowprobe_delete_by_index (worker_i, entry_i);
}
}
}
}
return error;
}
static int
validate_feature_on_interface (flowprobe_main_t * fm, u32 sw_if_index,
u8 which)
@ -548,12 +590,17 @@ flowprobe_interface_add_del_feature (flowprobe_main_t *fm, u32 sw_if_index,
{
if (which == FLOW_VARIANT_L2)
{
if (!is_add)
{
flowprobe_flush_callback_l2 ();
}
if (fm->record & FLOW_RECORD_L2)
{
rv = flowprobe_template_add_del (1, UDP_DST_PORT_ipfix, flags,
flowprobe_data_callback_l2,
flowprobe_template_rewrite_l2,
is_add, &template_id);
fm->template_reports[flags] = (is_add) ? template_id : 0;
}
if (fm->record & FLOW_RECORD_L3 || fm->record & FLOW_RECORD_L4)
{
@ -576,20 +623,30 @@ flowprobe_interface_add_del_feature (flowprobe_main_t *fm, u32 sw_if_index,
flags | FLOW_RECORD_L2_IP4;
fm->context[FLOW_VARIANT_L2_IP6].flags =
flags | FLOW_RECORD_L2_IP6;
fm->template_reports[flags] = template_id;
}
}
else if (which == FLOW_VARIANT_IP4)
rv = flowprobe_template_add_del (1, UDP_DST_PORT_ipfix, flags,
flowprobe_data_callback_ip4,
flowprobe_template_rewrite_ip4,
is_add, &template_id);
{
if (!is_add)
{
flowprobe_flush_callback_ip4 ();
}
rv = flowprobe_template_add_del (
1, UDP_DST_PORT_ipfix, flags, flowprobe_data_callback_ip4,
flowprobe_template_rewrite_ip4, is_add, &template_id);
fm->template_reports[flags] = (is_add) ? template_id : 0;
}
else if (which == FLOW_VARIANT_IP6)
rv = flowprobe_template_add_del (1, UDP_DST_PORT_ipfix, flags,
flowprobe_data_callback_ip6,
flowprobe_template_rewrite_ip6,
is_add, &template_id);
{
if (!is_add)
{
flowprobe_flush_callback_ip6 ();
}
rv = flowprobe_template_add_del (
1, UDP_DST_PORT_ipfix, flags, flowprobe_data_callback_ip6,
flowprobe_template_rewrite_ip6, is_add, &template_id);
fm->template_reports[flags] = (is_add) ? template_id : 0;
}
}
if (rv && rv != VNET_API_ERROR_VALUE_EXIST)
{
@ -600,7 +657,6 @@ flowprobe_interface_add_del_feature (flowprobe_main_t *fm, u32 sw_if_index,
if (which != (u8) ~ 0)
{
fm->context[which].flags = fm->record;
fm->template_reports[flags] = (is_add) ? template_id : 0;
}
if (direction == FLOW_DIRECTION_RX || direction == FLOW_DIRECTION_BOTH)
@ -645,6 +701,11 @@ flowprobe_interface_add_del_feature (flowprobe_main_t *fm, u32 sw_if_index,
vlib_process_signal_event (vm, flowprobe_timer_node.index, 1, 0);
}
if (!is_add && fm->initialized)
{
flowprobe_clear_state_if_index (sw_if_index);
}
return 0;
}


@ -168,6 +168,8 @@ typedef struct
extern flowprobe_main_t flowprobe_main;
extern vlib_node_registration_t flowprobe_walker_node;
void flowprobe_delete_by_index (u32 my_cpu_number, u32 poolindex);
void flowprobe_flush_callback_ip4 (void);
void flowprobe_flush_callback_ip6 (void);
void flowprobe_flush_callback_l2 (void);


@ -384,9 +384,11 @@ add_to_flow_record_state (vlib_main_t *vm, vlib_node_runtime_t *node,
flowprobe_record_t flags = fm->context[which].flags;
bool collect_ip4 = false, collect_ip6 = false;
ASSERT (b);
ethernet_header_t *eth = ethernet_buffer_get_header (b);
ethernet_header_t *eth = (direction == FLOW_DIRECTION_TX) ?
vlib_buffer_get_current (b) :
ethernet_buffer_get_header (b);
u16 ethertype = clib_net_to_host_u16 (eth->type);
u16 l2_hdr_sz = sizeof (ethernet_header_t);
i16 l3_hdr_offset = (u8 *) eth - b->data + sizeof (ethernet_header_t);
/* *INDENT-OFF* */
flowprobe_key_t k = {};
/* *INDENT-ON* */
@ -423,13 +425,13 @@ add_to_flow_record_state (vlib_main_t *vm, vlib_node_runtime_t *node,
while (clib_net_to_host_u16 (ethv->type) == ETHERNET_TYPE_VLAN)
{
ethv++;
l2_hdr_sz += sizeof (ethernet_vlan_header_tv_t);
l3_hdr_offset += sizeof (ethernet_vlan_header_tv_t);
}
k.ethertype = ethertype = clib_net_to_host_u16 ((ethv)->type);
}
if (collect_ip6 && ethertype == ETHERNET_TYPE_IP6)
{
ip6 = (ip6_header_t *) (b->data + l2_hdr_sz);
ip6 = (ip6_header_t *) (b->data + l3_hdr_offset);
if (flags & FLOW_RECORD_L3)
{
k.src_address.as_u64[0] = ip6->src_address.as_u64[0];
@ -448,7 +450,7 @@ add_to_flow_record_state (vlib_main_t *vm, vlib_node_runtime_t *node,
}
if (collect_ip4 && ethertype == ETHERNET_TYPE_IP4)
{
ip4 = (ip4_header_t *) (b->data + l2_hdr_sz);
ip4 = (ip4_header_t *) (b->data + l3_hdr_offset);
if (flags & FLOW_RECORD_L3)
{
k.src_address.ip4.as_u32 = ip4->src_address.as_u32;
@ -701,6 +703,7 @@ flowprobe_export_entry (vlib_main_t * vm, flowprobe_entry_t * e)
ipfix_exporter_t *exp = pool_elt_at_index (flow_report_main.exporters, 0);
vlib_buffer_t *b0;
bool collect_ip4 = false, collect_ip6 = false;
bool collect_l4 = false;
flowprobe_variant_t which = e->key.which;
flowprobe_record_t flags = fm->context[which].flags;
u16 offset =
@ -719,6 +722,10 @@ flowprobe_export_entry (vlib_main_t * vm, flowprobe_entry_t * e)
collect_ip4 = which == FLOW_VARIANT_L2_IP4 || which == FLOW_VARIANT_IP4;
collect_ip6 = which == FLOW_VARIANT_L2_IP6 || which == FLOW_VARIANT_IP6;
}
if (flags & FLOW_RECORD_L4)
{
collect_l4 = (which != FLOW_VARIANT_L2);
}
offset += flowprobe_common_add (b0, e, offset);
@ -728,13 +735,14 @@ flowprobe_export_entry (vlib_main_t * vm, flowprobe_entry_t * e)
offset += flowprobe_l3_ip6_add (b0, e, offset);
if (collect_ip4)
offset += flowprobe_l3_ip4_add (b0, e, offset);
if (flags & FLOW_RECORD_L4)
if (collect_l4)
offset += flowprobe_l4_add (b0, e, offset);
/* Reset per flow-export counters */
e->packetcount = 0;
e->octetcount = 0;
e->last_exported = vlib_time_now (vm);
e->prot.tcp.flags = 0;
b0->current_length = offset;
@ -955,8 +963,7 @@ flowprobe_flush_callback_l2 (void)
flush_record (FLOW_VARIANT_L2_IP6);
}
static void
void
flowprobe_delete_by_index (u32 my_cpu_number, u32 poolindex)
{
flowprobe_main_t *fm = &flowprobe_main;


@ -100,6 +100,8 @@ memif_disconnect (memif_if_t * mif, clib_error_t * err)
memif_region_t *mr;
memif_queue_t *mq;
int i;
vlib_main_t *vm = vlib_get_main ();
int with_barrier = 0;
if (mif == 0)
return;
@ -141,6 +143,12 @@ memif_disconnect (memif_if_t * mif, clib_error_t * err)
clib_mem_free (mif->sock);
}
if (vlib_worker_thread_barrier_held () == 0)
{
with_barrier = 1;
vlib_worker_thread_barrier_sync (vm);
}
/* *INDENT-OFF* */
vec_foreach_index (i, mif->rx_queues)
{
@ -198,6 +206,9 @@ memif_disconnect (memif_if_t * mif, clib_error_t * err)
vec_free (mif->remote_name);
vec_free (mif->remote_if_name);
clib_fifo_free (mif->msg_queue);
if (with_barrier)
vlib_worker_thread_barrier_release (vm);
}
static clib_error_t *


@ -72,7 +72,7 @@ openssl_ctx_free (tls_ctx_t * ctx)
SSL_free (oc->ssl);
vec_free (ctx->srv_hostname);
SSL_CTX_free (oc->client_ssl_ctx);
#ifdef HAVE_OPENSSL_ASYNC
openssl_evt_free (ctx->evt_index, ctx->c_thread_index);
#endif
@ -163,7 +163,7 @@ openssl_lctx_get (u32 lctx_index)
return -1;
static int
openssl_read_from_ssl_into_fifo (svm_fifo_t * f, SSL * ssl)
openssl_read_from_ssl_into_fifo (svm_fifo_t *f, SSL *ssl, u32 max_len)
{
int read, rv, n_fs, i;
const int n_segs = 2;
@ -174,6 +174,7 @@ openssl_read_from_ssl_into_fifo (svm_fifo_t * f, SSL * ssl)
if (!max_enq)
return 0;
max_enq = clib_min (max_len, max_enq);
n_fs = svm_fifo_provision_chunks (f, fs, n_segs, max_enq);
if (n_fs < 0)
return 0;
@ -533,9 +534,10 @@ static inline int
openssl_ctx_read_tls (tls_ctx_t *ctx, session_t *tls_session)
{
openssl_ctx_t *oc = (openssl_ctx_t *) ctx;
const u32 max_len = 128 << 10;
session_t *app_session;
int read;
svm_fifo_t *f;
int read;
if (PREDICT_FALSE (SSL_in_init (oc->ssl)))
{
@ -549,7 +551,7 @@ openssl_ctx_read_tls (tls_ctx_t *ctx, session_t *tls_session)
app_session = session_get_from_handle (ctx->app_session_handle);
f = app_session->rx_fifo;
read = openssl_read_from_ssl_into_fifo (f, oc->ssl);
read = openssl_read_from_ssl_into_fifo (f, oc->ssl, max_len);
/* Unrecoverable protocol error. Reset connection */
if (PREDICT_FALSE (read < 0))
@ -558,8 +560,7 @@ openssl_ctx_read_tls (tls_ctx_t *ctx, session_t *tls_session)
return 0;
}
/* If handshake just completed, session may still be in accepting state */
if (read && app_session->session_state >= SESSION_STATE_READY)
if (read)
tls_notify_app_enqueue (ctx, app_session);
if ((SSL_pending (oc->ssl) > 0) ||
@ -738,30 +739,31 @@ openssl_ctx_init_client (tls_ctx_t * ctx)
return -1;
}
oc->ssl_ctx = SSL_CTX_new (method);
if (oc->ssl_ctx == NULL)
oc->client_ssl_ctx = SSL_CTX_new (method);
if (oc->client_ssl_ctx == NULL)
{
TLS_DBG (1, "SSL_CTX_new returned null");
return -1;
}
SSL_CTX_set_ecdh_auto (oc->ssl_ctx, 1);
SSL_CTX_set_mode (oc->ssl_ctx, SSL_MODE_ENABLE_PARTIAL_WRITE);
SSL_CTX_set_ecdh_auto (oc->client_ssl_ctx, 1);
SSL_CTX_set_mode (oc->client_ssl_ctx, SSL_MODE_ENABLE_PARTIAL_WRITE);
#ifdef HAVE_OPENSSL_ASYNC
if (om->async)
SSL_CTX_set_mode (oc->ssl_ctx, SSL_MODE_ASYNC);
SSL_CTX_set_mode (oc->client_ssl_ctx, SSL_MODE_ASYNC);
#endif
rv = SSL_CTX_set_cipher_list (oc->ssl_ctx, (const char *) om->ciphers);
rv =
SSL_CTX_set_cipher_list (oc->client_ssl_ctx, (const char *) om->ciphers);
if (rv != 1)
{
TLS_DBG (1, "Couldn't set cipher");
return -1;
}
SSL_CTX_set_options (oc->ssl_ctx, flags);
SSL_CTX_set_cert_store (oc->ssl_ctx, om->cert_store);
SSL_CTX_set_options (oc->client_ssl_ctx, flags);
SSL_CTX_set1_cert_store (oc->client_ssl_ctx, om->cert_store);
oc->ssl = SSL_new (oc->ssl_ctx);
oc->ssl = SSL_new (oc->client_ssl_ctx);
if (oc->ssl == NULL)
{
TLS_DBG (1, "Couldn't initialize ssl struct");


@ -33,7 +33,7 @@ typedef struct tls_ctx_openssl_
{
tls_ctx_t ctx; /**< First */
u32 openssl_ctx_index;
SSL_CTX *ssl_ctx;
SSL_CTX *client_ssl_ctx;
SSL *ssl;
BIO *rbio;
BIO *wbio;


@ -445,7 +445,7 @@ picotls_ctx_read (tls_ctx_t *ctx, session_t *tcp_session)
app_session = session_get_from_handle (ctx->app_session_handle);
wrote = ptls_tcp_to_app_write (ptls_ctx, app_session->rx_fifo, tcp_rx_fifo);
if (wrote && app_session->session_state >= SESSION_STATE_READY)
if (wrote)
tls_notify_app_enqueue (ctx, app_session);
if (ptls_ctx->read_buffer_offset || svm_fifo_max_dequeue (tcp_rx_fifo))


@ -483,7 +483,7 @@ bfd_transport_udp6 (vlib_main_t *vm, vlib_node_runtime_t *rt, u32 bi,
is_echo ? &bm->tx_echo_counter :
&bm->tx_counter);
}
return 1;
return rv;
}
static bfd_session_t *


@ -303,8 +303,17 @@ ethernet_mac_change (vnet_hw_interface_t * hi,
{
ethernet_address_change_ctx_t *cb;
u32 id, sw_if_index;
vec_foreach (cb, em->address_change_callbacks)
cb->function (em, hi->sw_if_index, cb->function_opaque);
{
cb->function (em, hi->sw_if_index, cb->function_opaque);
/* clang-format off */
hash_foreach (id, sw_if_index, hi->sub_interface_sw_if_index_by_id,
({
cb->function (em, sw_if_index, cb->function_opaque);
}));
/* clang-format on */
}
}
return (NULL);


@ -153,10 +153,14 @@ typedef enum fib_entry_src_attribute_t_ {
* the source is inherited from its cover
*/
FIB_ENTRY_SRC_ATTRIBUTE_INHERITED,
/**
* the source is currently used as glean src address
*/
FIB_ENTRY_SRC_ATTRIBUTE_PROVIDES_GLEAN,
/**
* Marker. add new entries before this one.
*/
FIB_ENTRY_SRC_ATTRIBUTE_LAST = FIB_ENTRY_SRC_ATTRIBUTE_INHERITED,
FIB_ENTRY_SRC_ATTRIBUTE_LAST = FIB_ENTRY_SRC_ATTRIBUTE_PROVIDES_GLEAN,
} fib_entry_src_attribute_t;
@ -166,6 +170,7 @@ typedef enum fib_entry_src_attribute_t_ {
[FIB_ENTRY_SRC_ATTRIBUTE_ACTIVE] = "active", \
[FIB_ENTRY_SRC_ATTRIBUTE_STALE] = "stale", \
[FIB_ENTRY_SRC_ATTRIBUTE_INHERITED] = "inherited", \
[FIB_ENTRY_SRC_ATTRIBUTE_PROVIDES_GLEAN] = "provides-glean", \
}
#define FOR_EACH_FIB_SRC_ATTRIBUTE(_item) \
@ -180,6 +185,7 @@ typedef enum fib_entry_src_flag_t_ {
FIB_ENTRY_SRC_FLAG_ACTIVE = (1 << FIB_ENTRY_SRC_ATTRIBUTE_ACTIVE),
FIB_ENTRY_SRC_FLAG_STALE = (1 << FIB_ENTRY_SRC_ATTRIBUTE_STALE),
FIB_ENTRY_SRC_FLAG_INHERITED = (1 << FIB_ENTRY_SRC_ATTRIBUTE_INHERITED),
FIB_ENTRY_SRC_FLAG_PROVIDES_GLEAN = (1 << FIB_ENTRY_SRC_ATTRIBUTE_PROVIDES_GLEAN),
} __attribute__ ((packed)) fib_entry_src_flag_t;
extern u8 * format_fib_entry_src_flags(u8 *s, va_list *args);


@ -87,8 +87,16 @@ fib_entry_src_interface_update_glean (fib_entry_t *cover,
if (fib_prefix_is_cover(&adj->sub_type.glean.rx_pfx,
&local->fe_prefix))
{
adj->sub_type.glean.rx_pfx.fp_addr = local->fe_prefix.fp_addr;
return (1);
fib_entry_src_t *local_src;
local_src = fib_entry_src_find (local, FIB_SOURCE_INTERFACE);
if (local_src != NULL)
{
adj->sub_type.glean.rx_pfx.fp_addr =
local->fe_prefix.fp_addr;
local_src->fes_flags |= FIB_ENTRY_SRC_FLAG_PROVIDES_GLEAN;
return (1);
}
}
}
}
@ -116,6 +124,52 @@ fib_entry_src_interface_path_swap (fib_entry_src_t *src,
src->fes_pl = fib_path_list_create(pl_flags, paths);
}
typedef struct fesi_find_glean_ctx_t_ {
fib_node_index_t glean_node_index;
} fesi_find_glean_ctx_t;
static walk_rc_t
fib_entry_src_interface_find_glean_walk (fib_entry_t *cover,
fib_node_index_t covered,
void *ctx)
{
fesi_find_glean_ctx_t *find_glean_ctx = ctx;
fib_entry_t *covered_entry;
fib_entry_src_t *covered_src;
covered_entry = fib_entry_get (covered);
covered_src = fib_entry_src_find (covered_entry, FIB_SOURCE_INTERFACE);
if ((covered_src != NULL) &&
(covered_src->fes_flags & FIB_ENTRY_SRC_FLAG_PROVIDES_GLEAN))
{
find_glean_ctx->glean_node_index = covered;
return WALK_STOP;
}
return WALK_CONTINUE;
}
static fib_entry_t *
fib_entry_src_interface_find_glean (fib_entry_t *cover)
{
fib_entry_src_t *src;
src = fib_entry_src_find (cover, FIB_SOURCE_INTERFACE);
if (src == NULL)
/* the cover is not an interface source */
return NULL;
fesi_find_glean_ctx_t ctx = {
.glean_node_index = ~0,
};
fib_entry_cover_walk (cover, fib_entry_src_interface_find_glean_walk,
&ctx);
return (ctx.glean_node_index == ~0) ? NULL :
fib_entry_get (ctx.glean_node_index);
}
/*
* Source activate.
* Called when the source is the new best source on the entry
@ -128,6 +182,8 @@ fib_entry_src_interface_activate (fib_entry_src_t *src,
if (FIB_ENTRY_FLAG_LOCAL & src->fes_entry_flags)
{
u8 update_glean;
/*
* Track the covering attached/connected cover. This is so that
* during an attached export of the cover, this local prefix is
@ -141,10 +197,17 @@ fib_entry_src_interface_activate (fib_entry_src_t *src,
cover = fib_entry_get(src->u.interface.fesi_cover);
/*
* Before adding as a child of the cover, check whether an existing
* child has already been used to populate the glean adjacency. If so,
* we don't need to update the adjacency.
*/
update_glean = (fib_entry_src_interface_find_glean (cover) == NULL);
src->u.interface.fesi_sibling =
fib_entry_cover_track(cover, fib_entry_get_index(fib_entry));
fib_entry_src_interface_update_glean(cover, fib_entry);
if (update_glean)
fib_entry_src_interface_update_glean(cover, fib_entry);
}
return (!0);
@ -167,15 +230,19 @@ fib_entry_src_interface_deactivate (fib_entry_src_t *src,
if (FIB_NODE_INDEX_INVALID != src->u.interface.fesi_cover)
{
cover = fib_entry_get(src->u.interface.fesi_cover);
fib_entry_cover_untrack(cover, src->u.interface.fesi_sibling);
src->u.interface.fesi_cover = FIB_NODE_INDEX_INVALID;
src->u.interface.fesi_sibling = ~0;
fib_entry_cover_walk(cover,
fib_entry_src_interface_update_glean_walk,
NULL);
/* If this was the glean address, find a new one */
if (src->fes_flags & FIB_ENTRY_SRC_FLAG_PROVIDES_GLEAN)
{
fib_entry_cover_walk(cover,
fib_entry_src_interface_update_glean_walk,
NULL);
src->fes_flags &= ~FIB_ENTRY_SRC_FLAG_PROVIDES_GLEAN;
}
}
}


@ -1365,7 +1365,8 @@ fib_path_create (fib_node_index_t pl_index,
dpo_copy(&path->exclusive.fp_ex_dpo, &rpath->dpo);
}
else if ((path->fp_cfg_flags & FIB_PATH_CFG_FLAG_ICMP_PROHIBIT) ||
(path->fp_cfg_flags & FIB_PATH_CFG_FLAG_ICMP_UNREACH))
(path->fp_cfg_flags & FIB_PATH_CFG_FLAG_ICMP_UNREACH) ||
(path->fp_cfg_flags & FIB_PATH_CFG_FLAG_DROP))
{
path->fp_type = FIB_PATH_TYPE_SPECIAL;
}


@ -534,7 +534,11 @@ fib_table_route_path_fixup (const fib_prefix_t *prefix,
else if (fib_route_path_is_attached(path))
{
path->frp_flags |= FIB_ROUTE_PATH_GLEAN;
fib_prefix_normalize(prefix, &path->frp_connected);
/*
* attached prefixes are not suitable as the source of ARP requests
* so don't save the prefix in the glean adj
*/
clib_memset(&path->frp_connected, 0, sizeof(path->frp_connected));
}
if (*eflags & FIB_ENTRY_FLAG_DROP)
{


@ -187,12 +187,16 @@ ip4_arp_inline (vlib_main_t * vm,
/* resolve the packet's destination */
ip4_header_t *ip0 = vlib_buffer_get_current (p0);
resolve0 = ip0->dst_address;
src0 = adj0->sub_type.glean.rx_pfx.fp_addr.ip4;
}
else
/* resolve the incomplete adj */
resolve0 = adj0->sub_type.nbr.next_hop.ip4;
if (is_glean && adj0->sub_type.glean.rx_pfx.fp_len)
/* the glean is for a connected, local prefix */
src0 = adj0->sub_type.glean.rx_pfx.fp_addr.ip4;
else
{
/* resolve the incomplete adj */
resolve0 = adj0->sub_type.nbr.next_hop.ip4;
/* Src IP address in ARP header. */
if (!fib_sas4_get (sw_if_index0, &resolve0, &src0) &&
!ip4_sas_by_sw_if_index (sw_if_index0, &resolve0, &src0))


@ -690,6 +690,7 @@ esp_encrypt_inline (vlib_main_t *vm, vlib_node_runtime_t *node,
current_sa_packets = current_sa_bytes = 0;
sa0 = ipsec_sa_get (sa_index0);
current_sa_index = sa_index0;
if (PREDICT_FALSE ((sa0->crypto_alg == IPSEC_CRYPTO_ALG_NONE &&
sa0->integ_alg == IPSEC_INTEG_ALG_NONE) &&
@ -701,7 +702,6 @@ esp_encrypt_inline (vlib_main_t *vm, vlib_node_runtime_t *node,
sa_index0);
goto trace;
}
current_sa_index = sa_index0;
vlib_prefetch_combined_counter (&ipsec_sa_counters, thread_index,
current_sa_index);


@ -596,7 +596,7 @@ session_program_io_event (app_worker_t *app_wrk, session_t *s,
/* Special events for connectionless sessions */
et += SESSION_IO_EVT_BUILTIN_RX - SESSION_IO_EVT_RX;
ASSERT (s->thread_index == 0);
ASSERT (s->thread_index == 0 || et == SESSION_IO_EVT_TX_MAIN);
session_event_t evt = {
.event_type = et,
.session_handle = session_handle (s),


@ -77,10 +77,12 @@ app_worker_flush_events_inline (app_worker_t *app_wrk, u32 thread_index,
{
application_t *app = application_get (app_wrk->app_index);
svm_msg_q_t *mq = app_wrk->event_queue;
u8 ring_index, mq_is_cong;
session_state_t old_state;
session_event_t *evt;
u32 n_evts = 128, i;
u8 ring_index, mq_is_cong;
session_t *s;
int rv;
n_evts = clib_min (n_evts, clib_fifo_elts (app_wrk->wrk_evts[thread_index]));
@ -111,16 +113,18 @@ app_worker_flush_events_inline (app_worker_t *app_wrk, u32 thread_index,
{
case SESSION_IO_EVT_RX:
s = session_get (evt->session_index, thread_index);
s->flags &= ~SESSION_F_RX_EVT;
/* Application didn't confirm accept yet */
if (PREDICT_FALSE (s->session_state == SESSION_STATE_ACCEPTING))
if (PREDICT_FALSE (s->session_state == SESSION_STATE_ACCEPTING ||
s->session_state == SESSION_STATE_CONNECTING))
break;
s->flags &= ~SESSION_F_RX_EVT;
app->cb_fns.builtin_app_rx_callback (s);
break;
/* Handle sessions that might not be on current thread */
case SESSION_IO_EVT_BUILTIN_RX:
s = session_get_from_handle_if_valid (evt->session_handle);
if (!s || s->session_state == SESSION_STATE_ACCEPTING)
if (!s || s->session_state == SESSION_STATE_ACCEPTING ||
s->session_state == SESSION_STATE_CONNECTING)
break;
s->flags &= ~SESSION_F_RX_EVT;
app->cb_fns.builtin_app_rx_callback (s);
@ -145,16 +149,46 @@ app_worker_flush_events_inline (app_worker_t *app_wrk, u32 thread_index,
break;
case SESSION_CTRL_EVT_ACCEPTED:
s = session_get (evt->session_index, thread_index);
app->cb_fns.session_accept_callback (s);
old_state = s->session_state;
if (app->cb_fns.session_accept_callback (s))
{
session_close (s);
s->app_wrk_index = SESSION_INVALID_INDEX;
break;
}
if (is_builtin)
{
if (old_state >= SESSION_STATE_TRANSPORT_CLOSING)
{
session_set_state (s, old_state);
app_worker_close_notify (app_wrk, s);
}
}
break;
case SESSION_CTRL_EVT_CONNECTED:
if (!(evt->as_u64[1] & 0xffffffff))
s = session_get (evt->session_index, thread_index);
{
s = session_get (evt->session_index, thread_index);
old_state = s->session_state;
}
else
s = 0;
app->cb_fns.session_connected_callback (app_wrk->wrk_index,
evt->as_u64[1] >> 32, s,
evt->as_u64[1] & 0xffffffff);
rv = app->cb_fns.session_connected_callback (
app_wrk->wrk_index, evt->as_u64[1] >> 32, s,
evt->as_u64[1] & 0xffffffff);
if (!s)
break;
if (rv)
{
session_close (s);
s->app_wrk_index = SESSION_INVALID_INDEX;
break;
}
if (old_state >= SESSION_STATE_TRANSPORT_CLOSING)
{
session_set_state (s, old_state);
app_worker_close_notify (app_wrk, s);
}
break;
case SESSION_CTRL_EVT_DISCONNECTED:
s = session_get (evt->session_index, thread_index);


@ -456,6 +456,7 @@ session_mq_accepted_reply_handler (session_worker_t *wrk,
a->app_index = mp->context;
a->handle = mp->handle;
vnet_disconnect_session (a);
s->app_wrk_index = SESSION_INVALID_INDEX;
return;
}
@ -1611,7 +1612,9 @@ session_tx_fifo_dequeue_internal (session_worker_t * wrk,
clib_llist_index_t ei;
u32 n_packets;
if (PREDICT_FALSE (s->session_state >= SESSION_STATE_TRANSPORT_CLOSED))
if (PREDICT_FALSE ((s->session_state >= SESSION_STATE_TRANSPORT_CLOSED) ||
(s->session_state == SESSION_STATE_CONNECTING &&
(s->flags & SESSION_F_HALF_OPEN))))
return 0;
/* Clear custom-tx flag used to request reschedule for tx */
@ -1784,7 +1787,7 @@ session_event_dispatch_io (session_worker_t * wrk, vlib_node_runtime_t * node,
break;
case SESSION_IO_EVT_RX:
s = session_event_get_session (wrk, e);
if (!s)
if (!s || s->session_state >= SESSION_STATE_TRANSPORT_CLOSED)
break;
transport_app_rx_evt (session_get_transport_proto (s),
s->connection_index, s->thread_index);


@ -163,7 +163,7 @@ vl_api_sr_policy_add_v2_t_handler (vl_api_sr_policy_add_v2_t *mp)
mp->type, ntohl (mp->fib_table), mp->is_encap, 0, NULL);
vec_free (segments);
REPLY_MACRO (VL_API_SR_POLICY_ADD_REPLY);
REPLY_MACRO (VL_API_SR_POLICY_ADD_V2_REPLY);
}
static void


@ -2123,7 +2123,7 @@ tcp46_rcv_process_inline (vlib_main_t *vm, vlib_node_runtime_t *node,
case TCP_STATE_SYN_RCVD:
/* Make sure the segment is exactly right */
if (tc->rcv_nxt != vnet_buffer (b[0])->tcp.seq_number || is_fin)
if (tc->rcv_nxt != vnet_buffer (b[0])->tcp.seq_number)
{
tcp_send_reset_w_pkt (tc, b[0], thread_index, is_ip4);
error = TCP_ERROR_SEGMENT_INVALID;
@ -2143,6 +2143,10 @@ tcp46_rcv_process_inline (vlib_main_t *vm, vlib_node_runtime_t *node,
goto drop;
}
/* Avoid notifying app if connection is about to be closed */
if (PREDICT_FALSE (is_fin))
break;
/* Update rtt and rto */
tcp_estimate_initial_rtt (tc);
tcp_connection_tx_pacer_update (tc);
@ -2363,15 +2367,15 @@ tcp46_rcv_process_inline (vlib_main_t *vm, vlib_node_runtime_t *node,
tcp_cfg.closewait_time);
break;
case TCP_STATE_SYN_RCVD:
/* Send FIN-ACK, enter LAST-ACK and because the app was not
* notified yet, set a cleanup timer instead of relying on
* disconnect notify and the implicit close call. */
/* Send FIN-ACK and enter TIME-WAIT, as opposed to LAST-ACK,
* because the app was not notified yet and we want to avoid
* session state transitions to ensure cleanup does not
* propagate to app. */
tcp_connection_timers_reset (tc);
tc->rcv_nxt += 1;
tcp_send_fin (tc);
tcp_connection_set_state (tc, TCP_STATE_LAST_ACK);
tcp_timer_set (&wrk->timer_wheel, tc, TCP_TIMER_WAITCLOSE,
tcp_cfg.lastack_time);
tcp_connection_set_state (tc, TCP_STATE_TIME_WAIT);
tcp_program_cleanup (wrk, tc);
break;
case TCP_STATE_CLOSE_WAIT:
case TCP_STATE_CLOSING:
@ -3238,6 +3242,8 @@ do { \
_(FIN_WAIT_2, TCP_FLAG_RST | TCP_FLAG_ACK, TCP_INPUT_NEXT_RCV_PROCESS,
TCP_ERROR_NONE);
_(FIN_WAIT_2, TCP_FLAG_SYN, TCP_INPUT_NEXT_RCV_PROCESS, TCP_ERROR_NONE);
_ (FIN_WAIT_2, TCP_FLAG_SYN | TCP_FLAG_ACK, TCP_INPUT_NEXT_RCV_PROCESS,
TCP_ERROR_NONE);
_(CLOSE_WAIT, TCP_FLAG_ACK, TCP_INPUT_NEXT_RCV_PROCESS, TCP_ERROR_NONE);
_(CLOSE_WAIT, TCP_FLAG_FIN | TCP_FLAG_ACK, TCP_INPUT_NEXT_RCV_PROCESS,
TCP_ERROR_NONE);


@ -667,6 +667,7 @@ tcp_send_reset_w_pkt (tcp_connection_t * tc, vlib_buffer_t * pkt,
b = vlib_get_buffer (vm, bi);
tcp_init_buffer (vm, b);
vnet_buffer (b)->tcp.connection_index = tc->c_c_index;
/* Make and write options */
tcp_hdr_len = sizeof (tcp_header_t);


@ -227,7 +227,12 @@ tls_notify_app_connected (tls_ctx_t * ctx, session_error_t err)
app_session->opaque = ctx->parent_app_api_context;
if ((err = app_worker_init_connected (app_wrk, app_session)))
goto failed;
{
app_worker_connect_notify (app_wrk, 0, err, ctx->parent_app_api_context);
ctx->no_app_session = 1;
session_free (app_session);
return -1;
}
app_session->session_state = SESSION_STATE_READY;
parent_app_api_ctx = ctx->parent_app_api_context;
@ -244,9 +249,6 @@ tls_notify_app_connected (tls_ctx_t * ctx, session_error_t err)
return 0;
failed:
ctx->no_app_session = 1;
tls_disconnect (ctx->tls_ctx_handle, vlib_get_thread_index ());
send_reply:
return app_worker_connect_notify (app_wrk, 0, err,
ctx->parent_app_api_context);
@ -486,6 +488,9 @@ tls_session_accept_callback (session_t * tls_session)
* on tls_session rx and potentially invalidating the session pool */
app_session = session_alloc (ctx->c_thread_index);
app_session->session_state = SESSION_STATE_CREATED;
app_session->session_type =
session_type_from_proto_and_ip (TRANSPORT_PROTO_TLS, ctx->tcp_is_ip4);
app_session->connection_index = ctx->tls_ctx_handle;
ctx->c_s_index = app_session->session_index;
TLS_DBG (1, "Accept on listener %u new connection [%u]%x",
@ -511,7 +516,7 @@ tls_app_rx_callback (session_t * tls_session)
return 0;
ctx = tls_ctx_get (tls_session->opaque);
if (PREDICT_FALSE (ctx->no_app_session))
if (PREDICT_FALSE (ctx->no_app_session || ctx->app_closed))
{
TLS_DBG (1, "Local App closed");
return 0;
@ -938,15 +943,18 @@ tls_cleanup_ho (u32 ho_index)
int
tls_custom_tx_callback (void *session, transport_send_params_t * sp)
{
session_t *app_session = (session_t *) session;
session_t *as = (session_t *) session;
tls_ctx_t *ctx;
if (PREDICT_FALSE (app_session->session_state
>= SESSION_STATE_TRANSPORT_CLOSED))
return 0;
if (PREDICT_FALSE (as->session_state >= SESSION_STATE_TRANSPORT_CLOSED ||
as->session_state <= SESSION_STATE_ACCEPTING))
{
sp->flags |= TRANSPORT_SND_F_DESCHED;
return 0;
}
ctx = tls_ctx_get (app_session->connection_index);
return tls_ctx_write (ctx, app_session, sp);
ctx = tls_ctx_get (as->connection_index);
return tls_ctx_write (ctx, as, sp);
}
u8 *
@ -1057,6 +1065,7 @@ format_tls_half_open (u8 * s, va_list * args)
{
u32 ho_index = va_arg (*args, u32);
u32 __clib_unused thread_index = va_arg (*args, u32);
u32 __clib_unused verbose = va_arg (*args, u32);
session_t *tcp_ho;
tls_ctx_t *ho_ctx;
@ -1102,7 +1111,7 @@ tls_enable (vlib_main_t * vm, u8 is_en)
vnet_app_attach_args_t _a, *a = &_a;
u64 options[APP_OPTIONS_N_OPTIONS];
tls_main_t *tm = &tls_main;
u32 fifo_size = 128 << 12;
u32 fifo_size = 512 << 10;
if (!is_en)
{

Some files were not shown because too many files have changed in this diff Show More