Compare commits

...

20 Commits

Author SHA1 Message Date
b3aff922ff VPP-1459: IPv4 lookup fails when a covering prefix exists.
Change-Id: I4ba0aeb65219596475345e42b8cd34019f5594c6
Signed-off-by: mu.duojiao <mu.duojiao@zte.com.cn>
(cherry picked from commit 9744e6d0273c0d7d11ab4f271c8694f69d51ccf3)
2018-10-17 07:54:25 +00:00
7d76878ab3 tls: fix multi threaded medium scale test (VPP-1457)
- ensure session enqueue epoch does not wrap between two enqueues
- use 3 states for echo clients app, to distinguish between starting and
closing phases
- force tcp fin retransmit if out of buffers while sending a fin

Change-Id: I6f2cab46affd1148aba2a33fb6d58bcc54f32805
Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-10-17 07:33:24 +00:00
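For context, a minimal standalone C sketch (not part of this change set) of the first bullet: with the old narrow epoch counter, 256 enqueues are enough for a stale per-session epoch to compare equal to the per-worker counter again, so the "one event per frame" check wrongly suppresses a notification. The session.h hunk further down widens both counters to u64.

#include <stdint.h>
#include <stdio.h>

int main (void)
{
  uint8_t session_epoch = 0;   /* epoch stored in the session at last enqueue */
  uint8_t worker_epoch = 0;    /* per-worker counter, bumped every frame      */
  for (int frame = 0; frame < 256; frame++)
    worker_epoch++;            /* wraps back to 0                             */
  printf ("u8 : %s\n", session_epoch == worker_epoch
          ? "epochs collide, enqueue event lost" : "epochs differ");

  uint64_t s64 = 0, w64 = 0;
  for (int frame = 0; frame < 256; frame++)
    w64++;
  printf ("u64: %s\n", s64 == w64
          ? "epochs collide" : "epochs differ, event is sent");
  return 0;
}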
84112dd4f9 acl-plugin: tuplemerge: refresh the pointer to hash-readied ACL entries per each collision in split_partition() (VPP-1458)
A pointer to hash-ready ACL rules is only set once, which might cause a crash if there are colliding entries
from more than one applied ACL.

Solution: reload the pointer based on the element being processed.

Change-Id: I7a701c2c3b4236d67293159f2a33c4f967168953
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
2018-10-16 17:18:26 +02:00
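A runnable standalone analog of the fix (simplified names, not the plugin's actual types): hoisting the per-ACL lookup out of the collision loop is only valid if every colliding rule belongs to the same ACL, which is exactly the assumption the crash disproved.

#include <stdio.h>

typedef struct { const char *name; } acl_info_t;
typedef struct { int acl_index; } colliding_rule_t;

int main (void)
{
  acl_info_t infos[2] = { { "acl-0" }, { "acl-1" } };
  colliding_rule_t rules[3] = { { 0 }, { 1 }, { 0 } };

  /* buggy pattern: resolve the per-ACL info once, before the loop */
  acl_info_t *ha = &infos[rules[0].acl_index];
  for (int i = 0; i < 3; i++)
    printf ("stale : rule %d resolved against %s\n", i, ha->name);

  /* fixed pattern: re-resolve per colliding entry, as split_partition() now does */
  for (int i = 0; i < 3; i++)
    {
      ha = &infos[rules[i].acl_index];
      printf ("reload: rule %d resolved against %s\n", i, ha->name);
    }
  return 0;
}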
d6a0d0e206 vcl: fix bidirectional tests (VPP-1455)
- add epoll dequeued events beyond maxevents to unhandled
- filter multiple epoll rx events

Change-Id: I618f5f02b19581473de891b3b59bb6a0faad10b5
Signed-off-by: Florin Coras <fcoras@cisco.com>
(cherry picked from commit aa27eb95b7ee3bb69b62166d5e418e973cbbdcfa)
2018-10-16 10:05:57 +00:00
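A small standalone sketch (illustrative only) of the first bullet: events dequeued from the message queue after the caller's maxevents budget is exhausted are parked in an "unhandled" vector instead of being dropped, which is what the vppcom.c hunk below does with wrk->unhandled_evts_vector.

#include <stdio.h>

int main (void)
{
  int dequeued[] = { 10, 11, 12, 13 };           /* events pulled off the mq */
  int maxevents = 2, num_ev = 0;
  int unhandled[4], n_unhandled = 0;

  for (int i = 0; i < 4; i++)
    {
      if (num_ev < maxevents)
        num_ev++;                                /* handed to the caller      */
      else
        unhandled[n_unhandled++] = dequeued[i];  /* kept for the next wait    */
    }
  printf ("returned %d events, parked %d for later\n", num_ev, n_unhandled);
  return 0;
}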
347c523c23 vmxnet3: show vmxnet3 with filtering capability [VPP-1452]
show vmxnet3 desc may display 5000 lines of output since it has 5 tables, each of
which may have 1000 entries. That is not very useful for debugging a problem.

We need a filtering capability for this show command: the ability to display the
descriptor table per interface, per interface and table, and per interface, table,
and slot. The latter is the most useful.

Tested the following valid combinations:
show vmxnet3
show vmxnet3 desc
show vmxnet3 vmxnet3-0/13/0/0
show vmxnet3 vmxnet3-0/13/0/0 desc
show vmxnet3 vmxnet3-0/13/0/0 rx-comp
show vmxnet3 vmxnet3-0/13/0/0 rx-comp 1
show vmxnet3 vmxnet3-0/13/0/0 tx-comp
show vmxnet3 vmxnet3-0/13/0/0 tx-comp 1
show vmxnet3 vmxnet3-0/13/0/0 rx-desc-0
show vmxnet3 vmxnet3-0/13/0/0 rx-desc-0 1
show vmxnet3 vmxnet3-0/13/0/0 rx-desc-1
show vmxnet3 vmxnet3-0/13/0/0 rx-desc-1 1
show vmxnet3 vmxnet3-0/13/0/0 tx-desc
show vmxnet3 vmxnet3-0/13/0/0 tx-desc 1

Negative tests (the command is rejected):
show vmxnet3 abc
show vmxnet3 desc abc
show vmxnet3 vmxnet3-0/13/0/0 abc
show vmxnet3 vmxnet3-0/13/0/0 desc abc
show vmxnet3 vmxnet3-0/13/0/0 rx-comp abc
show vmxnet3 vmxnet3-0/13/0/0 rx-comp 1 abc

Change-Id: I0ff233413496e58236f8fb4a94e493494c20c5cb
Signed-off-by: Steven <sluong@cisco.com>
2018-10-15 21:56:14 +00:00
3d29e83112 vmxnet3: vmxnet3_test_plugin.so: undefined symbol: format_vlib_pci_addr [VPP-1456]
When using vpp_api_test, there is an undefined symbol error for
format_vlib_pci_addr when vmxnet3_test_plugin.so is loaded.

The cause is that vlib is not included in vpp_api_test. Remove the reference
to vlib.so in vmxnet3_test.

Change-Id: I37c00dfe2f843d99ad6c4fc7af6ed10bac4c2df8
Signed-off-by: Steven <sluong@cisco.com>
2018-10-15 10:26:41 -07:00
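Because vpp_api_test does not link vlib, the test plugin now carries its own tiny PCI-address formatter (format_pci_addr in the hunk below) instead of calling vlib's format_vlib_pci_addr. A standalone analog of the output layout, with made-up address values:

#include <stdint.h>
#include <stdio.h>

typedef struct { uint16_t domain; uint8_t bus, slot, function; } pci_addr_t;

int main (void)
{
  pci_addr_t addr = { 0x0000, 0x0b, 0x00, 0x0 };   /* illustrative values */
  /* same "dddd:bb:ss.f" layout the plugin-local formatter emits */
  printf ("%04x:%02x:%02x.%x\n", (unsigned) addr.domain, (unsigned) addr.bus,
          (unsigned) addr.slot, (unsigned) addr.function);
  return 0;
}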
051984c6a1 VPP-1448: Fix error when recursing down the trie.
Change-Id: Idfed8243643780d3f52dfe6e6ec621c440daa6ae
Signed-off-by: mu.duojiao <mu.duojiao@zte.com.cn>
(cherry picked from commit 59a829533c1345945dc1b6decc3afe29494e85cd)
2018-10-15 08:43:25 +00:00
6a86ca9627 vxlan: fix ip6 tunnel deletion
Change-Id: I70fb7394f85b26f7e632d74fc31ef83597efdd16
Signed-off-by: Eyal Bari <ebari@cisco.com>
(cherry picked from commit f8d5e214687c17fba000607336295e054672459d)
2018-10-14 23:01:19 +00:00
795539326b vcl: fix empty epoll returns (VPP-1453)
Change-Id: I0b191ddb749b1aa132c2d33b8359c146b36d27af
Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-10-14 10:49:03 +00:00
02a60e01a7 session: don't wait indefinitely for apps to consume evts (VPP-1454)
Change-Id: I544b24d2b2c4a09829773cf180d1747f4b087d4c
Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-10-12 17:01:10 -07:00
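The mechanism, per the session_api.c hunk below, is a bounded retry: control-plane notifications now try to allocate a message-queue slot a fixed number of times with SVM_Q_NOWAIT instead of blocking with SVM_Q_WAIT. A standalone sketch of the pattern (stand-in function, not the real API):

#include <stdio.h>

static int try_alloc_msg (void) { return -1; }   /* stand-in: ring always full */

int main (void)
{
  /* bounded retry instead of waiting forever on an app that never consumes */
  for (int attempt = 0; attempt < 100; attempt++)
    {
      if (try_alloc_msg () == 0)
        {
          printf ("message allocated\n");
          return 0;
        }
    }
  printf ("gave up; notification dropped instead of blocking the main thread\n");
  return 0;
}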
9a5f9c9a43 L2-flood: no clone for 1 replication
Change-Id: If178dd38e7920f35588f5d821ff097168b078026
Signed-off-by: Neale Ranns <nranns@cisco.com>
(cherry picked from commit b9fa29d513bfad0d9f18e8ed8c2da3feaa6d3bf0)
2018-10-12 07:35:46 +00:00
9864f87b1b vmxnet3: better error handling [VPP-1449]
Try harder on output: if there is no descriptor space available, try to free
up some and check again.
Make sure we free the buffer if an error is encountered on input.

Change-Id: I41a45213e29de71935afe707889e515037cd081f
Signed-off-by: Steven <sluong@cisco.com>
(cherry picked from commit 8b0995366110ff8c97d1d10aaa8291ad465b0b2f)
2018-10-11 19:59:04 -07:00
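A standalone analog (made-up helper, not driver code) of the output-path change in the output.c hunk below: before declaring "no free slots" and dropping, the driver first releases completed descriptors and re-checks the available space.

#include <stdio.h>

static int space_left = 0;                                /* free TX descriptors       */
static void reclaim_completed (void) { space_left = 3; }  /* stand-in for txq release  */

int main (void)
{
  int space_needed = 2;

  if (space_left < space_needed)
    {
      reclaim_completed ();                       /* try to free some up        */
      if (space_left < space_needed)              /* check again                */
        {
          printf ("still no room: drop this packet\n");
          return 0;
        }
    }
  printf ("descriptors available: send the packet\n");
  return 0;
}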
125760947a bfd: fix handling of session creation batches
When a script that creates multiple sessions is run (via exec), only the first
session actually starts.

Change-Id: I0fc36f65795c8921cf180e0b555c446e5a80be45
Signed-off-by: Eyal Bari <ebari@cisco.com>
(cherry picked from commit 0db9b04cf0f9c892a00988e7a61ae703aa83b721)
2018-10-11 23:38:24 +00:00
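A standalone sketch of the fix in the bfd_main.c hunk below: several BFD_EVENT_NEW_SESSION signals can be coalesced into one process wakeup (e.g. when an exec'd script creates many sessions), so the handler must walk the whole event_data vector rather than look only at its first element.

#include <stdio.h>

int main (void)
{
  unsigned event_data[] = { 7, 8, 9 };            /* coalesced session indices */
  unsigned n = sizeof (event_data) / sizeof (event_data[0]);

  printf ("old: start session %u only\n", event_data[0]);

  for (unsigned i = 0; i < n; i++)                /* new: one start per event  */
    printf ("new: start session %u\n", event_data[i]);
  return 0;
}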
0d222f88ed Stats: Include stat_segment.h in packages.
Change-Id: I976c0aba8397badf64763c4dbddce67009a4fb23
Signed-off-by: Ole Troan <ot@cisco.com>
2018-10-11 23:37:37 +00:00
713322bd32 Integer underflow and out-of-bounds read (VPP-1442)
Change-Id: Ife2a83b9d7f733f36e0e786ef79edcd394d7c0f9
Signed-off-by: Neale Ranns <nranns@cisco.com>
2018-10-11 20:51:14 +00:00
33f276e0af NAT44: identity NAT fix (VPP-1441)
Change-Id: Ic4affc54d15d08b9b730f6ec6146ee053b28b4b6
Signed-off-by: Matus Fabian <matfabia@cisco.com>
2018-10-11 20:40:02 +00:00
7212e61d92 acl-plugin: reduce the syslog level for debug messages (VPP-1443)
Change-Id: Ie8380cb39424548bf64cb19aee59ec20e29d1e39
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
2018-10-11 19:35:26 +00:00
376414f4c3 vnet: complete the fix for l3_hdr_offset calculation for the single loop fastpath case (VPP-1444)
Commit 20e6d36b moved the calculation of the l3_hdr_offset into the determine_next_node()
function, with the assumption that the current_data in the buffer is at
the L3 header. This is not the case for the single loop fastpath,
where, as a day-1 behavior, the vlib_buffer_advance() call is made after
the call to determine_next_node(). As a result, that path
incorrectly sets the l3_hdr_offset.

Solution: move the vlib_buffer_advance() call to before determine_next_node()

Change-Id: Id5eaa084c43fb6564f8239df4a0b3dc0412b15de
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
2018-10-11 17:48:27 +00:00
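A rough standalone illustration (simplified arithmetic, not the vnet code) of why the ordering matters: determine_next_node() records the L3 header offset from the buffer's current position, so the advance past the Ethernet header has to happen first, as the ethernet input hunk below now does.

#include <stdio.h>

int main (void)
{
  int current_data = 0;                 /* buffer initially points at the L2 header */
  int l2_len = 14;                      /* plain Ethernet header                    */

  current_data += l2_len;               /* vlib_buffer_advance(): move to L3        */
  int l3_hdr_offset = current_data;     /* what determine_next_node() records       */

  printf ("l3_hdr_offset = %d (correct: %d; the old order recorded 0)\n",
          l3_hdr_offset, l2_len);
  return 0;
}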
5551e41f78 Fix vpp-ext-deps package version in stable branch
Change-Id: Ifb33622b50113501f1d23ab94ba9da708678d6be
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-11 17:13:35 +00:00
90395743d3 Update .gitreview for stable 18.10 branch
Change-Id: I9f3d551acad6fd2fdd733f7f49e8c75ef43ceebc
Signed-off-by: Marco Varlese <marco.varlese@suse.com>
2018-10-10 09:38:07 +02:00
36 changed files with 766 additions and 252 deletions

View File

@ -2,3 +2,4 @@
host=gerrit.fd.io
port=29418
project=vpp
defaultbranch=stable/1810

View File

@ -20,7 +20,7 @@ MAKE_ARGS ?= -j
BUILD_DIR ?= $(CURDIR)/_build
INSTALL_DIR ?= $(CURDIR)/_install
PKG_VERSION ?= $(shell git describe --abbrev=0 | cut -d- -f1 | cut -dv -f2)
PKG_SUFFIX ?= $(shell git log --oneline $$(git describe --abbrev=0).. . | wc -l)
PKG_SUFFIX ?= $(shell git log --oneline v$(PKG_VERSION)-rc0.. . | wc -l)
JOBS := $(if $(shell [ -f /proc/cpuinfo ] && head /proc/cpuinfo),\
$(shell grep -c ^processor /proc/cpuinfo), 2)

View File

@ -1487,6 +1487,8 @@ split_partition(acl_main_t *am, u32 first_index,
int i=0;
u64 collisions = vec_len(pae->colliding_rules);
for(i=0; i<collisions; i++){
/* reload the hash acl info as it might be a different ACL# */
ha = vec_elt_at_index(am->hash_acl_infos, pae->acl_index);
DBG( "TM-collision: base_ace:%d (ace_mask:%d, first_collision_mask:%d)",
pae->ace_index, pae->mask_type_index, coll_mask_type_index);

View File

@ -689,7 +689,7 @@ acl_fa_session_cleaner_process (vlib_main_t * vm, vlib_node_runtime_t * rt,
}
}
}
acl_log_err
acl_log_info
("ACL_FA_CLEANER_DELETE_BY_SW_IF_INDEX bitmap: %U, clear_all: %u",
format_bitmap_hex, clear_sw_if_index_bitmap, clear_all);
vec_foreach (pw0, am->per_worker_data)
@ -727,7 +727,7 @@ acl_fa_session_cleaner_process (vlib_main_t * vm, vlib_node_runtime_t * rt,
pw0->pending_clear_sw_if_index_bitmap =
clib_bitmap_dup (clear_sw_if_index_bitmap);
}
acl_log_err
acl_log_info
("ACL_FA_CLEANER: thread %u, pending clear bitmap: %U",
(am->per_worker_data - pw0), format_bitmap_hex,
pw0->pending_clear_sw_if_index_bitmap);
@ -738,8 +738,9 @@ acl_fa_session_cleaner_process (vlib_main_t * vm, vlib_node_runtime_t * rt,
send_interrupts_to_workers (vm, am);
/* now wait till they all complete */
acl_log_err ("CLEANER mains len: %u per-worker len: %d",
vec_len (vlib_mains), vec_len (am->per_worker_data));
acl_log_info ("CLEANER mains len: %u per-worker len: %d",
vec_len (vlib_mains),
vec_len (am->per_worker_data));
vec_foreach (pw0, am->per_worker_data)
{
CLIB_MEMORY_BARRIER ();
@ -758,7 +759,7 @@ acl_fa_session_cleaner_process (vlib_main_t * vm, vlib_node_runtime_t * rt,
}
}
}
acl_log_err ("ACL_FA_NODE_CLEAN: cleaning done");
acl_log_info ("ACL_FA_NODE_CLEAN: cleaning done");
clib_bitmap_free (clear_sw_if_index_bitmap);
}
am->fa_cleaner_cnt_delete_by_sw_index_ok++;

View File

@ -631,7 +631,6 @@ snat_add_static_mapping (ip4_address_t l_addr, ip4_address_t e_addr,
clib_bihash_kv_8_8_t kv, value;
snat_address_t *a = 0;
u32 fib_index = ~0;
uword *p;
snat_interface_t *interface;
int i;
snat_main_per_thread_data_t *tsm;
@ -643,6 +642,8 @@ snat_add_static_mapping (ip4_address_t l_addr, ip4_address_t e_addr,
u64 user_index;
snat_session_t *s;
snat_static_map_resolve_t *rp, *rp_match = 0;
nat44_lb_addr_port_t *local;
u8 find = 0;
if (!sm->endpoint_dependent)
{
@ -732,19 +733,42 @@ snat_add_static_mapping (ip4_address_t l_addr, ip4_address_t e_addr,
if (is_add)
{
if (m)
return VNET_API_ERROR_VALUE_EXIST;
{
if (is_identity_static_mapping (m))
{
/* *INDENT-OFF* */
vec_foreach (local, m->locals)
{
if (local->vrf_id == vrf_id)
return VNET_API_ERROR_VALUE_EXIST;
}
/* *INDENT-ON* */
vec_add2 (m->locals, local, 1);
local->vrf_id = vrf_id;
local->fib_index =
fib_table_find_or_create_and_lock (FIB_PROTOCOL_IP4, vrf_id,
FIB_SOURCE_PLUGIN_LOW);
m_key.addr = m->local_addr;
m_key.port = m->local_port;
m_key.protocol = m->proto;
m_key.fib_index = local->fib_index;
kv.key = m_key.as_u64;
kv.value = m - sm->static_mappings;
clib_bihash_add_del_8_8 (&sm->static_mapping_by_local, &kv, 1);
return 0;
}
else
return VNET_API_ERROR_VALUE_EXIST;
}
if (twice_nat && addr_only)
return VNET_API_ERROR_UNSUPPORTED;
/* Convert VRF id to FIB index */
if (vrf_id != ~0)
{
p = hash_get (sm->ip4_main->fib_index_by_table_id, vrf_id);
if (!p)
return VNET_API_ERROR_NO_SUCH_FIB;
fib_index = p[0];
}
fib_index =
fib_table_find_or_create_and_lock (FIB_PROTOCOL_IP4, vrf_id,
FIB_SOURCE_PLUGIN_LOW);
/* If not specified use inside VRF id from SNAT plugin startup config */
else
{
@ -752,7 +776,7 @@ snat_add_static_mapping (ip4_address_t l_addr, ip4_address_t e_addr,
vrf_id = sm->inside_vrf_id;
}
if (!out2in_only)
if (!(out2in_only || identity_nat))
{
m_key.addr = l_addr;
m_key.port = addr_only ? 0 : l_port;
@ -825,15 +849,23 @@ snat_add_static_mapping (ip4_address_t l_addr, ip4_address_t e_addr,
m->tag = vec_dup (tag);
m->local_addr = l_addr;
m->external_addr = e_addr;
m->vrf_id = vrf_id;
m->fib_index = fib_index;
m->twice_nat = twice_nat;
if (out2in_only)
m->flags |= NAT_STATIC_MAPPING_FLAG_OUT2IN_ONLY;
if (addr_only)
m->flags |= NAT_STATIC_MAPPING_FLAG_ADDR_ONLY;
if (identity_nat)
m->flags |= NAT_STATIC_MAPPING_FLAG_IDENTITY_NAT;
{
m->flags |= NAT_STATIC_MAPPING_FLAG_IDENTITY_NAT;
vec_add2 (m->locals, local, 1);
local->vrf_id = vrf_id;
local->fib_index = fib_index;
}
else
{
m->vrf_id = vrf_id;
m->fib_index = fib_index;
}
if (!addr_only)
{
m->local_port = l_port;
@ -855,7 +887,7 @@ snat_add_static_mapping (ip4_address_t l_addr, ip4_address_t e_addr,
m_key.addr = m->local_addr;
m_key.port = m->local_port;
m_key.protocol = m->proto;
m_key.fib_index = m->fib_index;
m_key.fib_index = fib_index;
kv.key = m_key.as_u64;
kv.value = m - sm->static_mappings;
if (!out2in_only)
@ -920,6 +952,25 @@ snat_add_static_mapping (ip4_address_t l_addr, ip4_address_t e_addr,
return VNET_API_ERROR_NO_SUCH_ENTRY;
}
if (identity_nat)
{
for (i = 0; i < vec_len (m->locals); i++)
{
if (m->locals[i].vrf_id == vrf_id)
{
find = 1;
break;
}
}
if (!find)
return VNET_API_ERROR_NO_SUCH_ENTRY;
fib_index = m->locals[i].fib_index;
vec_del1 (m->locals, i);
}
else
fib_index = m->fib_index;
/* Free external address port */
if (!(addr_only || sm->static_mapping_only || out2in_only))
{
@ -958,23 +1009,17 @@ snat_add_static_mapping (ip4_address_t l_addr, ip4_address_t e_addr,
m_key.addr = m->local_addr;
m_key.port = m->local_port;
m_key.protocol = m->proto;
m_key.fib_index = m->fib_index;
m_key.fib_index = fib_index;
kv.key = m_key.as_u64;
if (!out2in_only)
clib_bihash_add_del_8_8 (&sm->static_mapping_by_local, &kv, 0);
m_key.addr = m->external_addr;
m_key.port = m->external_port;
m_key.fib_index = 0;
kv.key = m_key.as_u64;
clib_bihash_add_del_8_8 (&sm->static_mapping_by_external, &kv, 0);
/* Delete session(s) for static mapping if exist */
if (!(sm->static_mapping_only) ||
(sm->static_mapping_only && sm->static_mapping_connection_tracking))
{
u_key.addr = m->local_addr;
u_key.fib_index = m->fib_index;
u_key.fib_index = fib_index;
kv.key = u_key.as_u64;
if (!clib_bihash_search_8_8 (&tsm->user_hash, &kv, &value))
{
@ -1018,6 +1063,16 @@ snat_add_static_mapping (ip4_address_t l_addr, ip4_address_t e_addr,
}
}
fib_table_unlock (fib_index, FIB_PROTOCOL_IP4, FIB_SOURCE_PLUGIN_LOW);
if (vec_len (m->locals))
return 0;
m_key.addr = m->external_addr;
m_key.port = m->external_port;
m_key.fib_index = 0;
kv.key = m_key.as_u64;
clib_bihash_add_del_8_8 (&sm->static_mapping_by_external, &kv, 0);
vec_free (m->tag);
vec_free (m->workers);
/* Delete static mapping from pool */
@ -1137,6 +1192,7 @@ nat44_add_del_lb_static_mapping (ip4_address_t e_addr, u16 e_port,
m->external_port = e_port;
m->proto = proto;
m->twice_nat = twice_nat;
m->flags |= NAT_STATIC_MAPPING_FLAG_LB;
if (out2in_only)
m->flags |= NAT_STATIC_MAPPING_FLAG_OUT2IN_ONLY;
m->affinity = affinity;
@ -1205,6 +1261,9 @@ nat44_add_del_lb_static_mapping (ip4_address_t e_addr, u16 e_port,
if (!m)
return VNET_API_ERROR_NO_SUCH_ENTRY;
if (!is_lb_static_mapping (m))
return VNET_API_ERROR_INVALID_VALUE;
/* Free external address port */
if (!(sm->static_mapping_only || out2in_only))
{
@ -2041,7 +2100,7 @@ snat_static_mapping_match (snat_main_t * sm,
if (by_external)
{
if (vec_len (m->locals))
if (is_lb_static_mapping (m))
{
if (PREDICT_FALSE (lb != 0))
*lb = m->affinity ? AFFINITY_LB_NAT : LB_NAT;
@ -2612,7 +2671,7 @@ nat44_ed_get_worker_out2in_cb (ip4_header_t * ip, u32 rx_fib_index)
(&sm->static_mapping_by_external, &kv, &value))
{
m = pool_elt_at_index (sm->static_mappings, value.value);
if (!vec_len (m->locals))
if (!is_lb_static_mapping (m))
return m->workers[0];
hash = ip->src_address.as_u32 + (ip->src_address.as_u32 >> 8) +

View File

@ -183,6 +183,7 @@ typedef enum
#define NAT_STATIC_MAPPING_FLAG_ADDR_ONLY 1
#define NAT_STATIC_MAPPING_FLAG_OUT2IN_ONLY 2
#define NAT_STATIC_MAPPING_FLAG_IDENTITY_NAT 4
#define NAT_STATIC_MAPPING_FLAG_LB 8
/* *INDENT-OFF* */
typedef CLIB_PACKED(struct
@ -666,6 +667,12 @@ unformat_function_t unformat_snat_protocol;
*/
#define is_identity_static_mapping(sm) (sm->flags & NAT_STATIC_MAPPING_FLAG_IDENTITY_NAT)
/** \brief Check if NAT static mapping is load-balancing.
@param sm NAT static mapping
@return 1 if load-balancing
*/
#define is_lb_static_mapping(sm) (sm->flags & NAT_STATIC_MAPPING_FLAG_LB)
/* logging */
#define nat_log_err(...) \
vlib_log(VLIB_LOG_LEVEL_ERR, snat_main.log_class, __VA_ARGS__)

View File

@ -1100,7 +1100,7 @@ vl_api_nat44_static_mapping_dump_t_handler (vl_api_nat44_static_mapping_dump_t
/* *INDENT-OFF* */
pool_foreach (m, sm->static_mappings,
({
if (!is_identity_static_mapping(m) && !vec_len (m->locals))
if (!is_identity_static_mapping(m) && !is_lb_static_mapping (m))
send_nat44_static_mapping_details (m, reg, mp->context);
}));
/* *INDENT-ON* */
@ -1181,17 +1181,17 @@ static void *vl_api_nat44_add_del_identity_mapping_t_print
if (mp->addr_only == 0)
s =
format (s, "protocol %d port %d", mp->protocol,
format (s, " protocol %d port %d", mp->protocol,
clib_net_to_host_u16 (mp->port));
if (mp->vrf_id != ~0)
s = format (s, "vrf %d", clib_net_to_host_u32 (mp->vrf_id));
s = format (s, " vrf %d", clib_net_to_host_u32 (mp->vrf_id));
FINISH;
}
static void
send_nat44_identity_mapping_details (snat_static_mapping_t * m,
send_nat44_identity_mapping_details (snat_static_mapping_t * m, int index,
vl_api_registration_t * reg, u32 context)
{
vl_api_nat44_identity_mapping_details_t *rmp;
@ -1205,7 +1205,7 @@ send_nat44_identity_mapping_details (snat_static_mapping_t * m,
clib_memcpy (rmp->ip_address, &(m->local_addr), 4);
rmp->port = htons (m->local_port);
rmp->sw_if_index = ~0;
rmp->vrf_id = htonl (m->vrf_id);
rmp->vrf_id = htonl (m->locals[index].vrf_id);
rmp->protocol = snat_proto_to_ip_proto (m->proto);
rmp->context = context;
if (m->tag)
@ -1258,8 +1258,11 @@ static void
/* *INDENT-OFF* */
pool_foreach (m, sm->static_mappings,
({
if (is_identity_static_mapping(m) && !vec_len (m->locals))
send_nat44_identity_mapping_details (m, reg, mp->context);
if (is_identity_static_mapping(m) && !is_lb_static_mapping (m))
{
for (j = 0; j < vec_len (m->locals); j++)
send_nat44_identity_mapping_details (m, j, reg, mp->context);
}
}));
/* *INDENT-ON* */
@ -1689,7 +1692,7 @@ static void
/* *INDENT-OFF* */
pool_foreach (m, sm->static_mappings,
({
if (vec_len(m->locals))
if (is_lb_static_mapping(m))
send_nat44_lb_static_mapping_details (m, reg, mp->context);
}));
/* *INDENT-ON* */

View File

@ -220,6 +220,23 @@ format_snat_static_mapping (u8 * s, va_list * args)
snat_static_mapping_t *m = va_arg (*args, snat_static_mapping_t *);
nat44_lb_addr_port_t *local;
if (is_identity_static_mapping (m))
{
if (is_addr_only_static_mapping (m))
s = format (s, "identity mapping %U",
format_ip4_address, &m->local_addr);
else
s = format (s, "identity mapping %U:%d",
format_ip4_address, &m->local_addr, m->local_port);
/* *INDENT-OFF* */
vec_foreach (local, m->locals)
s = format (s, " vrf %d", local->vrf_id);
/* *INDENT-ON* */
return s;
}
if (is_addr_only_static_mapping (m))
s = format (s, "local %U external %U vrf %d %s %s",
format_ip4_address, &m->local_addr,
@ -230,7 +247,7 @@ format_snat_static_mapping (u8 * s, va_list * args)
is_out2in_only_static_mapping (m) ? "out2in-only" : "");
else
{
if (vec_len (m->locals))
if (is_lb_static_mapping (m))
{
s = format (s, "%U external %U:%d %s %s",
format_snat_protocol, m->proto,

View File

@ -16,7 +16,8 @@ vfio driver can still be used with recent kernels which support no-iommu mode.
##Known issues
* NUMA support
* TSO
* TSO/LRO
* RSS/multiple queues
* VLAN filter
## Usage

View File

@ -184,7 +184,8 @@ VLIB_CLI_COMMAND (vmxnet3_test_command, static) = {
/* *INDENT-ON* */
static void
show_vmxnet3 (vlib_main_t * vm, u32 * hw_if_indices, u8 show_descr)
show_vmxnet3 (vlib_main_t * vm, u32 * hw_if_indices, u8 show_descr,
u8 show_one_table, u32 which, u8 show_one_slot, u32 slot)
{
u32 i, desc_idx;
vmxnet3_device_t *vd;
@ -228,6 +229,8 @@ show_vmxnet3 (vlib_main_t * vm, u32 * hw_if_indices, u8 show_descr)
rxq->rx_comp_ring.next);
vlib_cli_output (vm, " RX completion generation flag 0x%x",
rxq->rx_comp_ring.gen);
/* RX descriptors tables */
for (rid = 0; rid < VMXNET3_RX_RING_SIZE; rid++)
{
vmxnet3_rx_ring *ring = &rxq->rx_ring[rid];
@ -248,16 +251,70 @@ show_vmxnet3 (vlib_main_t * vm, u32 * hw_if_indices, u8 show_descr)
vlib_cli_output (vm, " %5u 0x%016llx 0x%08x",
desc_idx, rxd->address, rxd->flags);
}
}
else if (show_one_table)
{
if (((which == VMXNET3_SHOW_RX_DESC0) && (rid == 0)) ||
((which == VMXNET3_SHOW_RX_DESC1) && (rid == 1)))
{
vlib_cli_output (vm, "RX descriptors table");
vlib_cli_output (vm, " %5s %18s %10s",
"slot", "address", "flags");
if (show_one_slot)
{
rxd = &rxq->rx_desc[rid][slot];
vlib_cli_output (vm, " %5u 0x%016llx 0x%08x",
slot, rxd->address, rxd->flags);
}
else
for (desc_idx = 0; desc_idx < rxq->size; desc_idx++)
{
rxd = &rxq->rx_desc[rid][desc_idx];
vlib_cli_output (vm, " %5u 0x%016llx 0x%08x",
desc_idx, rxd->address,
rxd->flags);
}
}
}
}
/* RX completion table */
if (show_descr)
{
vlib_cli_output (vm, "RX completion descriptors table");
vlib_cli_output (vm, " %5s %10s %10s %10s %10s",
"slot", "index", "rss", "len", "flags");
for (desc_idx = 0; desc_idx < rxq->size; desc_idx++)
{
rx_comp = &rxq->rx_comp[desc_idx];
vlib_cli_output (vm, " %5u 0x%08x %10u %10u 0x%08x",
desc_idx, rx_comp->index, rx_comp->rss,
rx_comp->len, rx_comp->flags);
}
}
else if (show_one_table)
{
if (which == VMXNET3_SHOW_RX_COMP)
{
vlib_cli_output (vm, "RX completion descriptors table");
vlib_cli_output (vm, " %5s %10s %10s %10s %10s",
"slot", "index", "rss", "len", "flags");
for (desc_idx = 0; desc_idx < rxq->size; desc_idx++)
if (show_one_slot)
{
rx_comp = &rxq->rx_comp[desc_idx];
rx_comp = &rxq->rx_comp[slot];
vlib_cli_output (vm, " %5u 0x%08x %10u %10u 0x%08x",
desc_idx, rx_comp->index, rx_comp->rss,
slot, rx_comp->index, rx_comp->rss,
rx_comp->len, rx_comp->flags);
}
else
for (desc_idx = 0; desc_idx < rxq->size; desc_idx++)
{
rx_comp = &rxq->rx_comp[desc_idx];
vlib_cli_output (vm,
" %5u 0x%08x %10u %10u 0x%08x",
desc_idx, rx_comp->index, rx_comp->rss,
rx_comp->len, rx_comp->flags);
}
}
}
}
@ -285,6 +342,7 @@ show_vmxnet3 (vlib_main_t * vm, u32 * hw_if_indices, u8 show_descr)
desc_idx, txd->address, txd->flags[0],
txd->flags[1]);
}
vlib_cli_output (vm, "TX completion descriptors table");
vlib_cli_output (vm, " %5s %10s %10s",
"slot", "index", "flags");
@ -295,6 +353,50 @@ show_vmxnet3 (vlib_main_t * vm, u32 * hw_if_indices, u8 show_descr)
desc_idx, tx_comp->index, tx_comp->flags);
}
}
else if (show_one_table)
{
if (which == VMXNET3_SHOW_TX_DESC)
{
vlib_cli_output (vm, "TX descriptors table");
vlib_cli_output (vm, " %5s %18s %10s %10s",
"slot", "address", "flags0", "flags1");
if (show_one_slot)
{
txd = &txq->tx_desc[slot];
vlib_cli_output (vm, " %5u 0x%016llx 0x%08x 0x%08x",
slot, txd->address, txd->flags[0],
txd->flags[1]);
}
else
for (desc_idx = 0; desc_idx < txq->size; desc_idx++)
{
txd = &txq->tx_desc[desc_idx];
vlib_cli_output (vm, " %5u 0x%016llx 0x%08x 0x%08x",
desc_idx, txd->address, txd->flags[0],
txd->flags[1]);
}
}
else if (which == VMXNET3_SHOW_TX_COMP)
{
vlib_cli_output (vm, "TX completion descriptors table");
vlib_cli_output (vm, " %5s %10s %10s",
"slot", "index", "flags");
if (show_one_slot)
{
tx_comp = &txq->tx_comp[slot];
vlib_cli_output (vm, " %5u 0x%08x 0x%08x",
slot, tx_comp->index, tx_comp->flags);
}
else
for (desc_idx = 0; desc_idx < txq->size; desc_idx++)
{
tx_comp = &txq->tx_comp[desc_idx];
vlib_cli_output (vm, " %5u 0x%08x 0x%08x",
desc_idx, tx_comp->index,
tx_comp->flags);
}
}
}
}
}
}
@ -308,8 +410,9 @@ show_vmxnet3_fn (vlib_main_t * vm, unformat_input_t * input,
vmxnet3_device_t *vd;
clib_error_t *error = 0;
u32 hw_if_index, *hw_if_indices = 0;
vnet_hw_interface_t *hi;
u8 show_descr = 0;
vnet_hw_interface_t *hi = 0;
u8 show_descr = 0, show_one_table = 0, show_one_slot = 0;
u32 which = ~0, slot;
while (unformat_check_input (input) != UNFORMAT_END_OF_INPUT)
{
@ -325,8 +428,110 @@ show_vmxnet3_fn (vlib_main_t * vm, unformat_input_t * input,
}
vec_add1 (hw_if_indices, hw_if_index);
}
else if (unformat (input, "descriptors") || unformat (input, "desc"))
else if (unformat (input, "desc"))
show_descr = 1;
else if (hi)
{
vmxnet3_device_t *vd =
vec_elt_at_index (vmxm->devices, hi->dev_instance);
if (unformat (input, "rx-comp"))
{
show_one_table = 1;
which = VMXNET3_SHOW_RX_COMP;
if (unformat (input, "%u", &slot))
{
vmxnet3_rxq_t *rxq = vec_elt_at_index (vd->rxqs, 0);
if (slot >= rxq->size)
{
error = clib_error_return (0,
"slot size must be < rx queue "
"size %u", rxq->size);
goto done;
}
show_one_slot = 1;
}
}
else if (unformat (input, "rx-desc-0"))
{
show_one_table = 1;
which = VMXNET3_SHOW_RX_DESC0;
if (unformat (input, "%u", &slot))
{
vmxnet3_rxq_t *rxq = vec_elt_at_index (vd->rxqs, 0);
if (slot >= rxq->size)
{
error = clib_error_return (0,
"slot size must be < rx queue "
"size %u", rxq->size);
goto done;
}
show_one_slot = 1;
}
}
else if (unformat (input, "rx-desc-1"))
{
show_one_table = 1;
which = VMXNET3_SHOW_RX_DESC1;
if (unformat (input, "%u", &slot))
{
vmxnet3_rxq_t *rxq = vec_elt_at_index (vd->rxqs, 0);
if (slot >= rxq->size)
{
error = clib_error_return (0,
"slot size must be < rx queue "
"size %u", rxq->size);
goto done;
}
show_one_slot = 1;
}
}
else if (unformat (input, "tx-comp"))
{
show_one_table = 1;
which = VMXNET3_SHOW_TX_COMP;
if (unformat (input, "%u", &slot))
{
vmxnet3_txq_t *txq = vec_elt_at_index (vd->txqs, 0);
if (slot >= txq->size)
{
error = clib_error_return (0,
"slot size must be < tx queue "
"size %u", txq->size);
goto done;
}
show_one_slot = 1;
}
}
else if (unformat (input, "tx-desc"))
{
show_one_table = 1;
which = VMXNET3_SHOW_TX_DESC;
if (unformat (input, "%u", &slot))
{
vmxnet3_txq_t *txq = vec_elt_at_index (vd->txqs, 0);
if (slot >= txq->size)
{
error = clib_error_return (0,
"slot size must be < tx queue "
"size %u", txq->size);
goto done;
}
show_one_slot = 1;
}
}
else
{
error = clib_error_return (0, "unknown input `%U'",
format_unformat_error, input);
goto done;
}
}
else
{
error = clib_error_return (0, "unknown input `%U'",
@ -342,7 +547,8 @@ show_vmxnet3_fn (vlib_main_t * vm, unformat_input_t * input,
);
}
show_vmxnet3 (vm, hw_if_indices, show_descr);
show_vmxnet3 (vm, hw_if_indices, show_descr, show_one_table, which,
show_one_slot, slot);
done:
vec_free (hw_if_indices);
@ -352,7 +558,8 @@ done:
/* *INDENT-OFF* */
VLIB_CLI_COMMAND (show_vmxnet3_command, static) = {
.path = "show vmxnet3",
.short_help = "show vmxnet3 [<interface>]",
.short_help = "show vmxnet3 [[<interface>] ([desc] | ([rx-comp] | "
"[rx-desc-0] | [rx-desc-1] | [tx-comp] | [tx-desc]) [<slot>])]",
.function = show_vmxnet3_fn,
};
/* *INDENT-ON* */

View File

@ -27,6 +27,7 @@
_(BUFFER_ALLOC, "buffer alloc error") \
_(RX_PACKET_NO_SOP, "Rx packet error - no SOP") \
_(RX_PACKET, "Rx packet error") \
_(RX_PACKET_EOP, "Rx packet error found on EOP") \
_(NO_BUFFER, "Rx no buffer error")
typedef enum
@ -79,7 +80,6 @@ vmxnet3_device_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
uword n_trace = vlib_get_trace_count (vm, node);
u32 n_rx_packets = 0, n_rx_bytes = 0;
vmxnet3_rx_comp *rx_comp;
u32 comp_idx;
u32 desc_idx;
vmxnet3_rxq_t *rxq;
u32 thread_index = vm->thread_index;
@ -98,16 +98,14 @@ vmxnet3_device_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
comp_ring = &rxq->rx_comp_ring;
bi = buffer_indices;
next = nexts;
rx_comp = &rxq->rx_comp[comp_ring->next];
while (PREDICT_TRUE (n_rx_packets < VLIB_FRAME_SIZE) &&
(comp_ring->gen ==
(rxq->rx_comp[comp_ring->next].flags & VMXNET3_RXCF_GEN)))
(comp_ring->gen == (rx_comp->flags & VMXNET3_RXCF_GEN)))
{
vlib_buffer_t *b0;
u32 bi0;
comp_idx = comp_ring->next;
rx_comp = &rxq->rx_comp[comp_idx];
rid = vmxnet3_find_rid (vd, rx_comp);
ring = &rxq->rx_ring[rid];
@ -117,10 +115,15 @@ vmxnet3_device_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
{
vlib_error_count (vm, node->node_index,
VMXNET3_INPUT_ERROR_NO_BUFFER, 1);
if (hb)
{
vlib_buffer_free_one (vm, vlib_get_buffer_index (vm, hb));
hb = 0;
}
prev_b0 = 0;
break;
}
vmxnet3_rx_comp_ring_advance_next (rxq);
desc_idx = rx_comp->index & VMXNET3_RXC_INDEX;
ring->consume = desc_idx;
rxd = &rxq->rx_desc[rid][desc_idx];
@ -146,14 +149,14 @@ vmxnet3_device_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
{
vlib_buffer_free_one (vm, bi0);
vlib_error_count (vm, node->node_index,
VMXNET3_INPUT_ERROR_RX_PACKET, 1);
VMXNET3_INPUT_ERROR_RX_PACKET_EOP, 1);
if (hb && vlib_get_buffer_index (vm, hb) != bi0)
{
vlib_buffer_free_one (vm, vlib_get_buffer_index (vm, hb));
hb = 0;
}
prev_b0 = 0;
continue;
goto next;
}
if (rx_comp->index & VMXNET3_RXCI_SOP)
@ -199,7 +202,7 @@ vmxnet3_device_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
vlib_buffer_free_one (vm, vlib_get_buffer_index (vm, hb));
hb = 0;
}
continue;
goto next;
}
}
else if (prev_b0) // !sop && !eop
@ -213,7 +216,15 @@ vmxnet3_device_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
}
else
{
ASSERT (0);
vlib_error_count (vm, node->node_index,
VMXNET3_INPUT_ERROR_RX_PACKET, 1);
vlib_buffer_free_one (vm, bi0);
if (hb && vlib_get_buffer_index (vm, hb) != bi0)
{
vlib_buffer_free_one (vm, vlib_get_buffer_index (vm, hb));
hb = 0;
}
goto next;
}
n_rx_bytes += b0->current_length;
@ -275,6 +286,10 @@ vmxnet3_device_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
hb = 0;
got_packet = 0;
}
next:
vmxnet3_rx_comp_ring_advance_next (rxq);
rx_comp = &rxq->rx_comp[comp_ring->next];
}
if (PREDICT_FALSE ((n_trace = vlib_get_trace_count (vm, node))))

View File

@ -143,15 +143,22 @@ VNET_DEVICE_CLASS_TX_FN (vmxnet3_device_class) (vlib_main_t * vm,
}
if (PREDICT_FALSE (space_left < space_needed))
{
vlib_buffer_free_one (vm, bi0);
vlib_error_count (vm, node->node_index,
VMXNET3_TX_ERROR_NO_FREE_SLOTS, 1);
buffers++;
n_left--;
/*
* Drop this packet. But we may have enough room for the next packet
*/
continue;
vmxnet3_txq_release (vm, vd, txq);
space_left = vmxnet3_tx_ring_space_left (txq);
if (PREDICT_FALSE (space_left < space_needed))
{
vlib_buffer_free_one (vm, bi0);
vlib_error_count (vm, node->node_index,
VMXNET3_TX_ERROR_NO_FREE_SLOTS, 1);
buffers++;
n_left--;
/*
* Drop this packet. But we may have enough room for the next
* packet
*/
continue;
}
}
/*

View File

@ -43,6 +43,20 @@ enum
#undef _
};
#define foreach_vmxnet3_show_entry \
_(RX_COMP, "rx comp") \
_(RX_DESC0, "rx desc 0") \
_(RX_DESC1, "rx desc 1") \
_(TX_COMP, "tx comp") \
_(TX_DESC, "tx desc")
enum
{
#define _(a, b) VMXNET3_SHOW_##a,
foreach_vmxnet3_show_entry
#undef _
};
/* BAR 0 */
#define VMXNET3_REG_IMR 0x0000 /* Interrupt Mask Register */
#define VMXNET3_REG_TXPROD 0x0600 /* Tx Producer Index */
@ -396,8 +410,8 @@ typedef struct
typedef struct
{
CLIB_CACHE_LINE_ALIGN_MARK (cacheline0);
u64 next;
u32 gen;
u16 next;
} vmxnet3_rx_comp_ring;
typedef struct
@ -423,8 +437,8 @@ typedef struct
typedef struct
{
CLIB_CACHE_LINE_ALIGN_MARK (cacheline0);
u64 next;
u32 gen;
u16 next;
} vmxnet3_tx_comp_ring;
typedef struct

View File

@ -227,6 +227,14 @@ api_vmxnet3_dump (vat_main_t * vam)
return ret;
}
static u8 *
format_pci_addr (u8 * s, va_list * va)
{
vlib_pci_addr_t *addr = va_arg (*va, vlib_pci_addr_t *);
return format (s, "%04x:%02x:%02x.%x", addr->domain, addr->bus,
addr->slot, addr->function);
}
static void
vl_api_vmxnet3_details_t_handler (vl_api_vmxnet3_details_t * mp)
{
@ -246,7 +254,7 @@ vl_api_vmxnet3_details_t_handler (vl_api_vmxnet3_details_t * mp)
" state %s\n",
mp->if_name, ntohl (mp->sw_if_index), format_ethernet_address,
mp->hw_addr, mp->version,
format_vlib_pci_addr, &pci_addr,
format_pci_addr, &pci_addr,
ntohs (mp->rx_next),
ntohs (mp->rx_qid),
ntohs (mp->rx_qsize), ntohs (mp->rx_fill[0]),

View File

@ -153,6 +153,7 @@ typedef struct
/* Socket configuration state */
u8 is_vep;
u8 is_vep_session;
u8 has_rx_evt;
u32 attr;
u32 wait_cont_idx;
vppcom_epoll_t vep;

View File

@ -438,7 +438,7 @@ vcl_test_write (int fd, uint8_t * buf, uint32_t nbytes,
{
if (stats)
stats->tx_eagain++;
continue;
break;
}
else
break;

View File

@ -1293,13 +1293,14 @@ vppcom_session_read_internal (uint32_t session_handle, void *buf, int n,
is_ct = vcl_session_is_ct (s);
mq = is_ct ? s->our_evt_q : wrk->app_event_queue;
rx_fifo = s->rx_fifo;
s->has_rx_evt = 0;
if (svm_fifo_is_empty (rx_fifo))
{
if (is_nonblocking)
{
svm_fifo_unset_event (rx_fifo);
return VPPCOM_OK;
return VPPCOM_EWOULDBLOCK;
}
while (svm_fifo_is_empty (rx_fifo))
{
@ -1385,13 +1386,14 @@ vppcom_session_read_segments (uint32_t session_handle,
is_ct = vcl_session_is_ct (s);
mq = is_ct ? s->our_evt_q : wrk->app_event_queue;
rx_fifo = s->rx_fifo;
s->has_rx_evt = 0;
if (svm_fifo_is_empty (rx_fifo))
{
if (is_nonblocking)
{
svm_fifo_unset_event (rx_fifo);
return VPPCOM_OK;
return VPPCOM_EWOULDBLOCK;
}
while (svm_fifo_is_empty (rx_fifo))
{
@ -1551,7 +1553,8 @@ vppcom_session_write (uint32_t session_handle, void *buf, size_t n)
{
svm_fifo_set_want_tx_evt (tx_fifo, 1);
svm_msg_q_lock (mq);
svm_msg_q_wait (mq);
if (svm_msg_q_is_empty (mq))
svm_msg_q_wait (mq);
svm_msg_q_sub_w_lock (mq, &msg);
e = svm_msg_q_msg_data (mq, &msg);
@ -2303,11 +2306,12 @@ vcl_epoll_wait_handle_mq_event (vcl_worker_t * wrk, session_event_t * e,
sid = e->fifo->client_session_index;
session = vcl_session_get (wrk, sid);
session_events = session->vep.ev.events;
if (!(EPOLLIN & session->vep.ev.events))
if (!(EPOLLIN & session->vep.ev.events) || session->has_rx_evt)
break;
add_event = 1;
events[*num_ev].events |= EPOLLIN;
session_evt_data = session->vep.ev.data.u64;
session->has_rx_evt = 1;
break;
case FIFO_EVENT_APP_TX:
sid = e->fifo->client_session_index;
@ -2324,11 +2328,12 @@ vcl_epoll_wait_handle_mq_event (vcl_worker_t * wrk, session_event_t * e,
session = vcl_ct_session_get_from_fifo (wrk, e->fifo, 0);
sid = session->session_index;
session_events = session->vep.ev.events;
if (!(EPOLLIN & session->vep.ev.events))
if (!(EPOLLIN & session->vep.ev.events) || session->has_rx_evt)
break;
add_event = 1;
events[*num_ev].events |= EPOLLIN;
session_evt_data = session->vep.ev.data.u64;
session->has_rx_evt = 1;
break;
case SESSION_IO_EVT_CT_RX:
session = vcl_ct_session_get_from_fifo (wrk, e->fifo, 1);
@ -2452,15 +2457,13 @@ handle_dequeued:
{
msg = vec_elt_at_index (wrk->mq_msg_vector, i);
e = svm_msg_q_msg_data (mq, msg);
vcl_epoll_wait_handle_mq_event (wrk, e, events, num_ev);
if (*num_ev < maxevents)
vcl_epoll_wait_handle_mq_event (wrk, e, events, num_ev);
else
vec_add1 (wrk->unhandled_evts_vector, *e);
svm_msg_q_free_msg (mq, msg);
if (*num_ev == maxevents)
{
i += 1;
break;
}
}
vec_delete (wrk->mq_msg_vector, i, 0);
vec_reset_length (wrk->mq_msg_vector);
return *num_ev;
}
@ -2508,6 +2511,7 @@ vppcom_epoll_wait_eventfd (vcl_worker_t * wrk, struct epoll_event *events,
u64 buf;
vec_validate (wrk->mq_events, pool_elts (wrk->mq_evt_conns));
again:
n_mq_evts = epoll_wait (wrk->mqs_epfd, wrk->mq_events,
vec_len (wrk->mq_events), wait_for_time);
for (i = 0; i < n_mq_evts; i++)
@ -2516,6 +2520,8 @@ vppcom_epoll_wait_eventfd (vcl_worker_t * wrk, struct epoll_event *events,
n_read = read (mqc->mq_fd, &buf, sizeof (buf));
vcl_epoll_wait_handle_mq (wrk, mqc->mq, events, maxevents, 0, &n_evts);
}
if (!n_evts && n_mq_evts > 0)
goto again;
return (int) n_evts;
}

View File

@ -366,10 +366,15 @@ vlib_buffer_enqueue_to_next (vlib_main_t * vm, vlib_node_runtime_t * node,
n_enqueued = count_trailing_zeros (~bitmap) / 2;
#else
u16 x = 0;
x |= next_index ^ nexts[1];
x |= next_index ^ nexts[2];
x |= next_index ^ nexts[3];
n_enqueued = (x == 0) ? 4 : 1;
if (count + 3 < max)
{
x |= next_index ^ nexts[1];
x |= next_index ^ nexts[2];
x |= next_index ^ nexts[3];
n_enqueued = (x == 0) ? 4 : 1;
}
else
n_enqueued = 1;
#endif
if (PREDICT_FALSE (n_enqueued > max))

View File

@ -1165,6 +1165,7 @@ bfd_process (vlib_main_t * vm, vlib_node_runtime_t * rt, vlib_frame_t * f)
}
}
now = clib_cpu_time_now ();
uword *session_index;
switch (event_type)
{
case ~0: /* no events => timeout */
@ -1180,35 +1181,41 @@ bfd_process (vlib_main_t * vm, vlib_node_runtime_t * rt, vlib_frame_t * f)
* each event or timeout */
break;
case BFD_EVENT_NEW_SESSION:
bfd_lock (bm);
if (!pool_is_free_index (bm->sessions, *event_data))
{
bfd_session_t *bs =
pool_elt_at_index (bm->sessions, *event_data);
bfd_send_periodic (vm, rt, bm, bs, now);
bfd_set_timer (bm, bs, now, 1);
}
else
{
BFD_DBG ("Ignoring event for non-existent session index %u",
(u32) * event_data);
}
bfd_unlock (bm);
vec_foreach (session_index, event_data)
{
bfd_lock (bm);
if (!pool_is_free_index (bm->sessions, *session_index))
{
bfd_session_t *bs =
pool_elt_at_index (bm->sessions, *session_index);
bfd_send_periodic (vm, rt, bm, bs, now);
bfd_set_timer (bm, bs, now, 1);
}
else
{
BFD_DBG ("Ignoring event for non-existent session index %u",
(u32) * session_index);
}
bfd_unlock (bm);
}
break;
case BFD_EVENT_CONFIG_CHANGED:
bfd_lock (bm);
if (!pool_is_free_index (bm->sessions, *event_data))
{
bfd_session_t *bs =
pool_elt_at_index (bm->sessions, *event_data);
bfd_on_config_change (vm, rt, bm, bs, now);
}
else
{
BFD_DBG ("Ignoring event for non-existent session index %u",
(u32) * event_data);
}
bfd_unlock (bm);
vec_foreach (session_index, event_data)
{
bfd_lock (bm);
if (!pool_is_free_index (bm->sessions, *session_index))
{
bfd_session_t *bs =
pool_elt_at_index (bm->sessions, *session_index);
bfd_on_config_change (vm, rt, bm, bs, now);
}
else
{
BFD_DBG ("Ignoring event for non-existent session index %u",
(u32) * session_index);
}
bfd_unlock (bm);
}
break;
default:
vlib_log_err (bm->log_class, "BUG: event type 0x%wx", event_type);

View File

@ -657,9 +657,9 @@ ethernet_input_inline (vlib_main_t * vm,
(hi->hw_address != 0) &&
!eth_mac_equal ((u8 *) e0, hi->hw_address))
error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;
vlib_buffer_advance (b0, sizeof (ethernet_header_t));
determine_next_node (em, variant, 0, type0, b0,
&error0, &next0);
vlib_buffer_advance (b0, sizeof (ethernet_header_t));
}
goto ship_it0;
}

src/vnet/ip/ip4_mtrie.c Normal file → Executable file
View File

@ -369,10 +369,10 @@ set_leaf (ip4_fib_mtrie_t * m,
old_ply->n_non_empty_leafs -=
ip4_fib_mtrie_leaf_is_non_empty (old_ply, dst_byte);
new_leaf = ply_create (m, old_leaf,
clib_max (old_ply->dst_address_bits_of_leaves
[dst_byte], ply_base_len),
ply_base_len);
new_leaf =
ply_create (m, old_leaf,
old_ply->dst_address_bits_of_leaves[dst_byte],
ply_base_len);
new_ply = get_next_ply_for_leaf (m, new_leaf);
/* Refetch since ply_create may move pool. */
@ -492,10 +492,10 @@ set_root_leaf (ip4_fib_mtrie_t * m,
if (ip4_fib_mtrie_leaf_is_terminal (old_leaf))
{
/* There is a leaf occupying the slot. Replace it with a new ply */
new_leaf = ply_create (m, old_leaf,
clib_max (old_ply->dst_address_bits_of_leaves
[dst_byte], ply_base_len),
ply_base_len);
new_leaf =
ply_create (m, old_leaf,
old_ply->dst_address_bits_of_leaves[dst_byte],
ply_base_len);
new_ply = get_next_ply_for_leaf (m, new_leaf);
__sync_val_compare_and_swap (&old_ply->leaves[dst_byte], old_leaf,
@ -551,9 +551,7 @@ unset_leaf (ip4_fib_mtrie_t * m,
old_ply->leaves[i] =
ip4_fib_mtrie_leaf_set_adj_index (a->cover_adj_index);
old_ply->dst_address_bits_of_leaves[i] =
clib_max (old_ply->dst_address_bits_base,
a->cover_address_length);
old_ply->dst_address_bits_of_leaves[i] = a->cover_address_length;
old_ply->n_non_empty_leafs +=
ip4_fib_mtrie_leaf_is_non_empty (old_ply, i);
@ -714,24 +712,23 @@ format_ip4_fib_mtrie_leaf (u8 * s, va_list * va)
return s;
}
#define FORMAT_PLY(s, _p, _i, _base_address, _ply_max_len, _indent) \
#define FORMAT_PLY(s, _p, _a, _i, _base_address, _ply_max_len, _indent) \
({ \
u32 a, ia_length; \
ip4_address_t ia; \
ip4_fib_mtrie_leaf_t _l = p->leaves[(_i)]; \
\
a = (_base_address) + ((_i) << (32 - (_ply_max_len))); \
a = (_base_address) + ((_a) << (32 - (_ply_max_len))); \
ia.as_u32 = clib_host_to_net_u32 (a); \
ia_length = (_p)->dst_address_bits_of_leaves[(_i)]; \
s = format (s, "\n%U%20U %U", \
format_white_space, (_indent) + 2, \
s = format (s, "\n%U%U %U", \
format_white_space, (_indent) + 4, \
format_ip4_address_and_length, &ia, ia_length, \
format_ip4_fib_mtrie_leaf, _l); \
\
if (ip4_fib_mtrie_leaf_is_next_ply (_l)) \
s = format (s, "\n%U%U", \
format_white_space, (_indent) + 2, \
format_ip4_fib_mtrie_ply, m, a, \
s = format (s, "\n%U", \
format_ip4_fib_mtrie_ply, m, a, (_indent) + 8, \
ip4_fib_mtrie_leaf_get_next_ply_index (_l)); \
s; \
})
@ -741,21 +738,20 @@ format_ip4_fib_mtrie_ply (u8 * s, va_list * va)
{
ip4_fib_mtrie_t *m = va_arg (*va, ip4_fib_mtrie_t *);
u32 base_address = va_arg (*va, u32);
u32 indent = va_arg (*va, u32);
u32 ply_index = va_arg (*va, u32);
ip4_fib_mtrie_8_ply_t *p;
u32 indent;
int i;
p = pool_elt_at_index (ip4_ply_pool, ply_index);
indent = format_get_indent (s);
s = format (s, "ply index %d, %d non-empty leaves", ply_index,
p->n_non_empty_leafs);
s = format (s, "%Uply index %d, %d non-empty leaves",
format_white_space, indent, ply_index, p->n_non_empty_leafs);
for (i = 0; i < ARRAY_LEN (p->leaves); i++)
{
if (ip4_fib_mtrie_leaf_is_non_empty (p, i))
{
s = FORMAT_PLY (s, p, i, base_address,
s = FORMAT_PLY (s, p, i, i, base_address,
p->dst_address_bits_base + 8, indent);
}
}
@ -791,7 +787,7 @@ format_ip4_fib_mtrie (u8 * s, va_list * va)
if (p->dst_address_bits_of_leaves[slot] > 0)
{
s = FORMAT_PLY (s, p, slot, base_address, 16, 2);
s = FORMAT_PLY (s, p, i, slot, base_address, 16, 0);
}
}
}

View File

@ -209,77 +209,87 @@ l2flood_node_fn (vlib_main_t * vm,
bi0, L2FLOOD_NEXT_DROP);
continue;
}
vec_validate (msm->clones[thread_index], n_clones);
vec_reset_length (msm->clones[thread_index]);
/*
* the header offset needs to be large enough to incorporate
* all the L3 headers that could be touched when doing BVI
* processing. So take the current l2 length plus 2 * IPv6
* headers (for tunnel encap)
*/
n_cloned = vlib_buffer_clone (vm, bi0,
msm->clones[thread_index],
n_clones,
(vnet_buffer (b0)->l2.l2_len +
sizeof (udp_header_t) +
2 * sizeof (ip6_header_t)));
if (PREDICT_FALSE (n_cloned != n_clones))
else if (n_clones > 1)
{
b0->error = node->errors[L2FLOOD_ERROR_REPL_FAIL];
}
vec_validate (msm->clones[thread_index], n_clones);
vec_reset_length (msm->clones[thread_index]);
/*
* for all but the last clone, these are not BVI bound
*/
for (clone0 = 0; clone0 < n_cloned - 1; clone0++)
{
/*
* the header offset needs to be large enough to incorporate
* all the L3 headers that could be touched when doing BVI
* processing. So take the current l2 length plus 2 * IPv6
* headers (for tunnel encap)
*/
n_cloned = vlib_buffer_clone (vm, bi0,
msm->clones[thread_index],
n_clones,
(vnet_buffer (b0)->l2.l2_len +
sizeof (udp_header_t) +
2 * sizeof (ip6_header_t)));
if (PREDICT_FALSE (n_cloned != n_clones))
{
b0->error = node->errors[L2FLOOD_ERROR_REPL_FAIL];
}
/*
* for all but the last clone, these are not BVI bound
*/
for (clone0 = 0; clone0 < n_cloned - 1; clone0++)
{
member = msm->members[thread_index][clone0];
ci0 = msm->clones[thread_index][clone0];
c0 = vlib_get_buffer (vm, ci0);
to_next[0] = ci0;
to_next += 1;
n_left_to_next -= 1;
if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE) &&
(b0->flags & VLIB_BUFFER_IS_TRACED)))
{
ethernet_header_t *h0;
l2flood_trace_t *t;
if (c0 != b0)
vlib_buffer_copy_trace_flag (vm, b0, ci0);
t = vlib_add_trace (vm, node, c0, sizeof (*t));
h0 = vlib_buffer_get_current (c0);
t->sw_if_index = sw_if_index0;
t->bd_index = vnet_buffer (c0)->l2.bd_index;
clib_memcpy (t->src, h0->src_address, 6);
clib_memcpy (t->dst, h0->dst_address, 6);
}
/* Do normal L2 forwarding */
vnet_buffer (c0)->sw_if_index[VLIB_TX] =
member->sw_if_index;
vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
to_next, n_left_to_next,
ci0, next0);
if (PREDICT_FALSE (0 == n_left_to_next))
{
vlib_put_next_frame (vm, node, next_index,
n_left_to_next);
vlib_get_next_frame (vm, node, next_index, to_next,
n_left_to_next);
}
}
member = msm->members[thread_index][clone0];
ci0 = msm->clones[thread_index][clone0];
c0 = vlib_get_buffer (vm, ci0);
to_next[0] = ci0;
to_next += 1;
n_left_to_next -= 1;
if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE) &&
(b0->flags & VLIB_BUFFER_IS_TRACED)))
{
ethernet_header_t *h0;
l2flood_trace_t *t;
if (c0 != b0)
vlib_buffer_copy_trace_flag (vm, b0, ci0);
t = vlib_add_trace (vm, node, c0, sizeof (*t));
h0 = vlib_buffer_get_current (c0);
t->sw_if_index = sw_if_index0;
t->bd_index = vnet_buffer (c0)->l2.bd_index;
clib_memcpy (t->src, h0->src_address, 6);
clib_memcpy (t->dst, h0->dst_address, 6);
}
/* Do normal L2 forwarding */
vnet_buffer (c0)->sw_if_index[VLIB_TX] = member->sw_if_index;
vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
to_next, n_left_to_next,
ci0, next0);
if (PREDICT_FALSE (0 == n_left_to_next))
{
vlib_put_next_frame (vm, node, next_index, n_left_to_next);
vlib_get_next_frame (vm, node, next_index,
to_next, n_left_to_next);
}
}
else
{
/* one clone */
ci0 = bi0;
member = msm->members[thread_index][0];
}
/*
* the last clone that might go to a BVI
*/
member = msm->members[thread_index][clone0];
ci0 = msm->clones[thread_index][clone0];
c0 = vlib_get_buffer (vm, ci0);
to_next[0] = ci0;

View File

@ -208,7 +208,7 @@ echo_client_node_fn (vlib_main_t * vm, vlib_node_runtime_t * node,
connections_this_batch =
ecm->connections_this_batch_by_thread[my_thread_index];
if ((ecm->run_test == 0) ||
if ((ecm->run_test != ECHO_CLIENTS_RUNNING) ||
((vec_len (connection_indices) == 0)
&& vec_len (connections_this_batch) == 0))
return 0;
@ -352,6 +352,16 @@ echo_clients_init (vlib_main_t * vm)
return 0;
}
static void
echo_clients_session_disconnect (stream_session_t * s)
{
echo_client_main_t *ecm = &echo_client_main;
vnet_disconnect_args_t _a, *a = &_a;
a->handle = session_handle (s);
a->app_index = ecm->app_index;
vnet_disconnect_session (a);
}
static int
echo_clients_session_connected_callback (u32 app_index, u32 api_context,
stream_session_t * s, u8 is_fail)
@ -361,6 +371,9 @@ echo_clients_session_connected_callback (u32 app_index, u32 api_context,
u32 session_index;
u8 thread_index;
if (PREDICT_FALSE (ecm->run_test != ECHO_CLIENTS_STARTING))
return -1;
if (is_fail)
{
clib_warning ("connection %d failed!", api_context);
@ -407,7 +420,7 @@ echo_clients_session_connected_callback (u32 app_index, u32 api_context,
__sync_fetch_and_add (&ecm->ready_connections, 1);
if (ecm->ready_connections == ecm->expected_connections)
{
ecm->run_test = 1;
ecm->run_test = ECHO_CLIENTS_RUNNING;
/* Signal the CLI process that the action is starting... */
signal_evt_to_cli (1);
}
@ -447,6 +460,12 @@ echo_clients_rx_callback (stream_session_t * s)
echo_client_main_t *ecm = &echo_client_main;
eclient_session_t *sp;
if (PREDICT_FALSE (ecm->run_test != ECHO_CLIENTS_RUNNING))
{
echo_clients_session_disconnect (s);
return -1;
}
sp = pool_elt_at_index (ecm->sessions,
s->server_rx_fifo->client_session_index);
receive_data_chunk (ecm, sp);
@ -624,6 +643,7 @@ echo_clients_command_fn (vlib_main_t * vm,
ecm->vlib_main = vm;
ecm->tls_engine = TLS_ENGINE_OPENSSL;
ecm->no_copy = 0;
ecm->run_test = ECHO_CLIENTS_STARTING;
if (thread_main->n_vlib_mains > 1)
clib_spinlock_init (&ecm->sessions_lock);
@ -825,7 +845,7 @@ echo_clients_command_fn (vlib_main_t * vm,
error = clib_error_return (0, "failed: test bytes");
cleanup:
ecm->run_test = 0;
ecm->run_test = ECHO_CLIENTS_EXITING;
vlib_process_wait_for_event_or_clock (vm, 10e-3);
for (i = 0; i < vec_len (ecm->connection_index_by_thread); i++)
{

View File

@ -105,6 +105,12 @@ typedef struct
vlib_main_t *vlib_main;
} echo_client_main_t;
enum
{
ECHO_CLIENTS_STARTING,
ECHO_CLIENTS_RUNNING,
ECHO_CLIENTS_EXITING
} echo_clients_test_state_e;
extern echo_client_main_t echo_client_main;
vlib_node_registration_t echo_clients_node;

View File

@ -153,7 +153,7 @@ session_free (stream_session_t * s)
memset (s, 0xFA, sizeof (*s));
}
static void
void
session_free_w_fifos (stream_session_t * s)
{
segment_manager_dealloc_fifos (s->svm_segment_index, s->server_rx_fifo,
@ -197,7 +197,7 @@ session_alloc_for_connection (transport_connection_t * tc)
s = session_alloc (thread_index);
s->session_type = session_type_from_proto_and_ip (tc->proto, tc->is_ip4);
s->session_state = SESSION_STATE_CONNECTING;
s->enqueue_epoch = ~0;
s->enqueue_epoch = (u64) ~ 0;
/* Attach transport to session and vice versa */
s->connection_index = tc->c_index;
@ -393,7 +393,7 @@ session_enqueue_stream_connection (transport_connection_t * tc,
* by calling stream_server_flush_enqueue_events () */
session_manager_main_t *smm = vnet_get_session_manager_main ();
u32 thread_index = s->thread_index;
u32 enqueue_epoch = smm->current_enqueue_epoch[tc->proto][thread_index];
u64 enqueue_epoch = smm->current_enqueue_epoch[tc->proto][thread_index];
if (s->enqueue_epoch != enqueue_epoch)
{
@ -434,7 +434,7 @@ session_enqueue_dgram_connection (stream_session_t * s,
* by calling stream_server_flush_enqueue_events () */
session_manager_main_t *smm = vnet_get_session_manager_main ();
u32 thread_index = s->thread_index;
u32 enqueue_epoch = smm->current_enqueue_epoch[proto][thread_index];
u64 enqueue_epoch = smm->current_enqueue_epoch[proto][thread_index];
if (s->enqueue_epoch != enqueue_epoch)
{

View File

@ -195,7 +195,7 @@ struct _session_manager_main
clib_rwlock_t *peekers_rw_locks;
/** Per-proto, per-worker enqueue epoch counters */
u32 *current_enqueue_epoch[TRANSPORT_N_PROTO];
u64 *current_enqueue_epoch[TRANSPORT_N_PROTO];
/** Per-proto, per-worker thread vector of sessions to enqueue */
u32 **session_to_enqueue[TRANSPORT_N_PROTO];
@ -308,6 +308,7 @@ stream_session_is_valid (u32 si, u8 thread_index)
stream_session_t *session_alloc (u32 thread_index);
int session_alloc_fifos (segment_manager_t * sm, stream_session_t * s);
void session_free (stream_session_t * s);
void session_free_w_fifos (stream_session_t * s);
always_inline stream_session_t *
session_get (u32 si, u32 thread_index)

View File

@ -278,30 +278,6 @@ send_session_accept_callback (stream_session_t * s)
return 0;
}
void
mq_send_local_session_disconnected_cb (u32 app_wrk_index,
local_session_t * ls)
{
app_worker_t *app_wrk = app_worker_get (app_wrk_index);
svm_msg_q_msg_t _msg, *msg = &_msg;
session_disconnected_msg_t *mp;
svm_msg_q_t *app_mq;
session_event_t *evt;
application_t *app;
app = application_get (app_wrk->app_index);
app_mq = app_wrk->event_queue;
svm_msg_q_lock_and_alloc_msg_w_ring (app_mq, SESSION_MQ_CTRL_EVT_RING,
SVM_Q_WAIT, msg);
evt = svm_msg_q_msg_data (app_mq, msg);
memset (evt, 0, sizeof (*evt));
evt->event_type = SESSION_CTRL_EVT_DISCONNECTED;
mp = (session_disconnected_msg_t *) evt->data;
mp->handle = application_local_session_handle (ls);
mp->context = app->api_client_index;
svm_msg_q_add_and_unlock (app_mq, msg);
}
static void
send_session_disconnect_callback (stream_session_t * s)
{
@ -421,6 +397,23 @@ static session_cb_vft_t session_cb_vft = {
.del_segment_callback = send_del_segment_callback,
};
static int
mq_try_lock_and_alloc_msg (svm_msg_q_t * app_mq, svm_msg_q_msg_t * msg)
{
int rv;
u8 try = 0;
while (try < 100)
{
rv = svm_msg_q_lock_and_alloc_msg_w_ring (app_mq,
SESSION_MQ_CTRL_EVT_RING,
SVM_Q_NOWAIT, msg);
if (!rv)
return 0;
try++;
}
return -1;
}
static int
mq_send_session_accepted_cb (stream_session_t * s)
{
@ -436,8 +429,8 @@ mq_send_session_accepted_cb (stream_session_t * s)
app = application_get (app_wrk->app_index);
app_mq = app_wrk->event_queue;
svm_msg_q_lock_and_alloc_msg_w_ring (app_mq, SESSION_MQ_CTRL_EVT_RING,
SVM_Q_WAIT, msg);
if (mq_try_lock_and_alloc_msg (app_mq, msg))
return -1;
evt = svm_msg_q_msg_data (app_mq, msg);
memset (evt, 0, sizeof (*evt));
@ -523,8 +516,8 @@ mq_send_session_disconnected_cb (stream_session_t * s)
app = application_get (app_wrk->app_index);
app_mq = app_wrk->event_queue;
svm_msg_q_lock_and_alloc_msg_w_ring (app_mq, SESSION_MQ_CTRL_EVT_RING,
SVM_Q_WAIT, msg);
if (mq_try_lock_and_alloc_msg (app_mq, msg))
return;
evt = svm_msg_q_msg_data (app_mq, msg);
memset (evt, 0, sizeof (*evt));
evt->event_type = SESSION_CTRL_EVT_DISCONNECTED;
@ -534,6 +527,30 @@ mq_send_session_disconnected_cb (stream_session_t * s)
svm_msg_q_add_and_unlock (app_mq, msg);
}
void
mq_send_local_session_disconnected_cb (u32 app_wrk_index,
local_session_t * ls)
{
app_worker_t *app_wrk = app_worker_get (app_wrk_index);
svm_msg_q_msg_t _msg, *msg = &_msg;
session_disconnected_msg_t *mp;
svm_msg_q_t *app_mq;
session_event_t *evt;
application_t *app;
app = application_get (app_wrk->app_index);
app_mq = app_wrk->event_queue;
if (mq_try_lock_and_alloc_msg (app_mq, msg))
return;
evt = svm_msg_q_msg_data (app_mq, msg);
memset (evt, 0, sizeof (*evt));
evt->event_type = SESSION_CTRL_EVT_DISCONNECTED;
mp = (session_disconnected_msg_t *) evt->data;
mp->handle = application_local_session_handle (ls);
mp->context = app->api_client_index;
svm_msg_q_add_and_unlock (app_mq, msg);
}
static void
mq_send_session_reset_cb (stream_session_t * s)
{
@ -544,8 +561,8 @@ mq_send_session_reset_cb (stream_session_t * s)
session_event_t *evt;
app_mq = app->event_queue;
svm_msg_q_lock_and_alloc_msg_w_ring (app_mq, SESSION_MQ_CTRL_EVT_RING,
SVM_Q_WAIT, msg);
if (mq_try_lock_and_alloc_msg (app_mq, msg))
return;
evt = svm_msg_q_msg_data (app_mq, msg);
memset (evt, 0, sizeof (*evt));
evt->event_type = SESSION_CTRL_EVT_RESET;
@ -576,8 +593,8 @@ mq_send_session_connected_cb (u32 app_wrk_index, u32 api_context,
return -1;
}
svm_msg_q_lock_and_alloc_msg_w_ring (app_mq, SESSION_MQ_CTRL_EVT_RING,
SVM_Q_WAIT, msg);
if (mq_try_lock_and_alloc_msg (app_mq, msg))
return -1;
evt = svm_msg_q_msg_data (app_mq, msg);
memset (evt, 0, sizeof (*evt));
evt->event_type = SESSION_CTRL_EVT_CONNECTED;
@ -656,8 +673,9 @@ mq_send_session_bound_cb (u32 app_wrk_index, u32 api_context,
return -1;
}
svm_msg_q_lock_and_alloc_msg_w_ring (app_mq, SESSION_MQ_CTRL_EVT_RING,
SVM_Q_WAIT, msg);
if (mq_try_lock_and_alloc_msg (app_mq, msg))
return -1;
evt = svm_msg_q_msg_data (app_mq, msg);
memset (evt, 0, sizeof (*evt));
evt->event_type = SESSION_CTRL_EVT_BOUND;

View File

@ -67,7 +67,7 @@ typedef struct _stream_session_t
u8 thread_index;
/** To avoid n**2 "one event per frame" check */
u8 enqueue_epoch;
u64 enqueue_epoch;
/** svm segment index where fifos were allocated */
u32 svm_segment_index;
@ -120,6 +120,9 @@ typedef struct local_session_
/** Port for connection. Overlaps thread_index/enqueue_epoch */
u16 port;
/** Partly overlaps enqueue_epoch */
u8 pad_epoch[7];
/** Segment index where fifos were allocated */
u32 svm_segment_index;

View File

@ -1078,7 +1078,15 @@ tcp_send_fin (tcp_connection_t * tc)
tcp_retransmit_timer_force_update (tc);
if (PREDICT_FALSE (tcp_get_free_buffer_index (tm, &bi)))
return;
{
/* Out of buffers so program fin retransmit ASAP */
tcp_timer_update (tc, TCP_TIMER_RETRANSMIT, 1);
tc->flags |= TCP_CONN_FINSNT;
tc->snd_una_max += 1;
tc->snd_nxt = tc->snd_una_max;
return;
}
b = vlib_get_buffer (vm, bi);
tcp_init_buffer (vm, b);
fin_snt = tc->flags & TCP_CONN_FINSNT;

View File

@ -119,6 +119,7 @@ tls_ctx_half_open_alloc (void)
{
clib_rwlock_writer_lock (&tm->half_open_rwlock);
pool_get (tm->half_open_ctx_pool, ctx);
ctx_index = ctx - tm->half_open_ctx_pool;
clib_rwlock_writer_unlock (&tm->half_open_rwlock);
}
else
@ -126,10 +127,10 @@ tls_ctx_half_open_alloc (void)
/* reader lock assumption: only main thread will call pool_get */
clib_rwlock_reader_lock (&tm->half_open_rwlock);
pool_get (tm->half_open_ctx_pool, ctx);
ctx_index = ctx - tm->half_open_ctx_pool;
clib_rwlock_reader_unlock (&tm->half_open_rwlock);
}
memset (ctx, 0, sizeof (*ctx));
ctx_index = ctx - tm->half_open_ctx_pool;
return ctx_index;
}
@ -254,6 +255,8 @@ tls_notify_app_connected (tls_ctx_t * ctx, u8 is_failed)
{
TLS_DBG (1, "failed to notify app");
tls_disconnect (ctx->tls_ctx_handle, vlib_get_thread_index ());
session_free_w_fifos (app_session);
return -1;
}
session_lookup_add_connection (&ctx->connection,

Some files were not shown because too many files have changed in this diff.