Compare commits

...

19 Commits

Author SHA1 Message Date
Dave Barach
511ce2572a Clean up multi-thread barrier-sync hold-down timer
Main thread: don't bother with the barrier sync hold-down timer if
none of the worker threads are busy.

Worker threads: avoid epoll_pwait (10ms timeout) when the
control-plane has been active in the last half-second.

Cherry-pick a recent dangling-reference fix: pool_elt_at_index must be
called again after e.g. the rx callback, in case the unix file pool expands.

Manual feature backport to 18.07

Change-Id: I745fbb8a12aeda34b0ec7b6dcda66c0e25c3eee1
Signed-off-by: Dave Barach <dave@barachs.net>
2019-05-24 13:56:01 +00:00
Steven Luong
070b810a88 mp_safe SW_INTERFACE_DUMP, SW_INTERFACE_DETAILS, SW_INTERFACE_TAG_ADD_DEL,
BRIDGE_DOMAIN_DUMP, CONTROL_PING, CONTROL_PING_REPLY, and show interface CLI

Change-Id: I2927573b66bb5dd134b37ffb72af0e6676750917
Signed-off-by: Steven Luong <sluong@cisco.com>
(cherry picked from commit 15c31921a628c5500cbed2ebc588d7ddbaa970a3)
2019-05-02 19:54:37 +00:00
Igor Mikhailov (imichail)
19baa37762 Fix 'show interface span' field length
Allow longer interface names to be displayed, e.g. VirtualEthernet0/0/0.102
The field length (32) is now the same as for 'show interface'.

Change-Id: I1cb1efd459acb800bfaeeec40b672c8b17cd8c3d
Signed-off-by: Igor Mikhailov (imichail) <imichail@cisco.com>
(cherry picked from commit 0ac827e15c5ee2134a15bf5e023e03967ddcbaa8)
2019-03-05 19:51:25 +00:00
Steven Luong
783adb1527 vhost: VPP stalls with vhost performing control plane actions [VPP-1572]
Symptom
-------
With NDR traffic blasting at VPP, bringing up a new VM with a vhost
connection to VPP causes packet drops. I am able to recreate this
problem easily using a simple setup like this.

TREX-------------- switch ---- VPP
    |---------------|  |-------|

Cause
-----
The packet drops are caused by vhost holding the worker
barrier lock for too long in vhost_user_socket_read(). There are quite a
few system calls inside the routine. At the end of the routine, it
unconditionally calls vhost_user_update_iface_state() for all message
types. vhost_user_update_iface_state() also unconditionally calls
vhost_user_rx_thread_placement() and vhost_user_tx_thread_placement().
vhost_user_rx_thread_placement() scraps all existing cpu/queue mappings
for the interface and creates brand-new cpu/queue mappings for the
interface. This process is very disruptive and very expensive. In my
opinion, this area of code needs a makeover.

Fixes
-----
* vhost_user_socket_read() is rewritten so that it does not hold
  the worker barrier lock across system calls, or at least minimizes the
  need to do so.
* Remove the unconditional call to vhost_user_update_iface_state() at
  the end of vhost_user_socket_read(). Only a couple of message
  types really need vhost_user_update_iface_state(); the call is now
  made only for those message types.
* Remove vhost_user_rx_thread_placement() and
  vhost_user_tx_thread_placement() from vhost_user_update_iface_state().
  There is no need to repeatedly change the cpu/queue mappings.
* vhost_user_rx_thread_placement() is actually quite expensive. It should
  be called only once per queue for the interface. There is no need to
  scrap the existing cpu/queue mappings and create new ones when
  additional queues become active/enabled.
* Create the cpu/queue mappings for the first RX queue when the
  interface is created. Don't remove the cpu/queue mappings when the
  interface is disconnected; remove them only when the
  interface is deleted.

The create vhost user interface CLI also makes some very expensive system
calls if the command is entered with the optional keyword "server".

As a bonus, this patch makes the create vhost user interface binary API and
CLI thread-safe, adding protection for the small amount of code which is
thread-unsafe.

Change-Id: I664c57d76dc92a116119221f3d91fa67914e440a
Signed-off-by: Steven Luong <sluong@cisco.com>
2019-02-21 16:10:57 -08:00
Steven
2b98236eaa bond: packet drops on VPP bond interface [VPP-1544]
We register callbacks for VNET_HW_INTERFACE_LINK_UP_DOWN_FUNCTION and
VNET_SW_INTERFACE_ADMIN_UP_DOWN_FUNCTION to add the slave
interface to and remove it from the bond interface accordingly. For static bonding without
lacp, one would think that it is good enough to put the slave interface into
the active slave set as soon as it is configured. Wrong: sometimes the slave
interface is configured to be part of the bond without ever bringing up the
hardware carrier or setting the admin state to up. In that case, we send
traffic to the "dead" slave interface.

The fix is to make sure both the carrier and admin state are up before we put
the slave into the active set for forwarding traffic.

Change-Id: I93b1c36d5481ca76cc8b87e8ca1b375ca3bd453b
Signed-off-by: Steven <sluong@cisco.com>
(cherry picked from commit e43278f75fe3188551580c7d7991958805756e2f)
2019-01-30 15:30:36 +00:00
Steven Luong
3f69a51658 install-dep: force osleap boost dep install
Triple-commit this patch to stable/1807. It was created manually due to a merge
conflict when cherry-picking the original patch 16631. This patch differs from
16631 in that it skips the second hunk of the original patch, listed below,
because that hunk has no significance and is also the source of the merge conflict.

@@ -309,7 +309,7 @@
 	@sudo -E zypper install -y $(RPM_SUSE_DEPENDS)
 else ifeq ($(filter opensuse-leap,$(OS_ID)),$(OS_ID))
 	@sudo -E zypper refresh
-	@sudo -E zypper install -y $(RPM_SUSE_DEPENDS)
+	@sudo -E zypper install  -y $(RPM_SUSE_DEPENDS)
 else ifeq ($(filter opensuse,$(OS_ID)),$(OS_ID))
 	@sudo -E zypper refresh
 	@sudo -E zypper install -y $(RPM_SUSE_DEPENDS)

This patch is needed for stable/1807 because verify job failed for
https://gerrit.fd.io/r/#/c/17031/

Change-Id: Iab863ab57738179ec59d6cd088cc83354acada08
Signed-off-by: Steven Luong <sluong@cisco.com>
2019-01-24 10:08:44 -08:00
Neale Ranns
909ba93249 MPLS: buffer over-run with incorrectly init'd vector. fix VAT dump
Change-Id: Ifdbb4c4cffd90c4ec8b39513d284ebf7be39eca5
Signed-off-by: Neale Ranns <nranns@cisco.com>
(cherry picked from commit 44cea225e2238a3c549f17f315cd1fbc6978c277)
2018-12-05 11:28:45 +00:00
Neale Ranns
3351801ce3 IPSEC-AH: fix packet drop
Change-Id: I45b97cfd0c3785bfbf6d142d362bd3d4d56bae00
Signed-off-by: Neale Ranns <nranns@cisco.com>
(cherry picked from commit ad5f2de9041070c007cedb87f94b72193125db17)
2018-12-05 06:29:23 +00:00
Juraj Sloboda
31aa6f267f vhost_user: Fix setting MTU using uninitialized variable
Change-Id: I0caa5fd584e3785f237d08f3d3be23e9bfee7605
Signed-off-by: Juraj Sloboda <jsloboda@cisco.com>
(cherry picked from commit 83c46a2c5c97320e029b4dd154a45212530f221d)
2018-11-03 19:24:38 +00:00
mu.duojiao
c548f5df3f VPP-1448: Fix error when recursing down the trie.
Change-Id: Idfed8243643780d3f52dfe6e6ec621c440daa6ae
Signed-off-by: mu.duojiao <mu.duojiao@zte.com.cn>
(cherry picked from commit 59a829533c1345945dc1b6decc3afe29494e85cd)
2018-10-17 15:13:06 +00:00
mu.duojiao
41b2ae7c1d VPP-1459: IPv4 lookup fails when a covering prefix exists.
Change-Id: I4ba0aeb65219596475345e42b8cd34019f5594c6
Signed-off-by: mu.duojiao <mu.duojiao@zte.com.cn>
(cherry picked from commit 9744e6d0273c0d7d11ab4f271c8694f69d51ccf3)
(cherry picked from commit b3aff922ffbddd61b44df50271e4aaee2820a432)
2018-10-17 11:08:58 +00:00
Andrew Yourtchenko
da7bcd4bd6 acl-plugin: tuplemerge: refresh the pointer to hash-readied ACL entries per each collision in split_partition() (VPP-1458)
A pointer to hash-ready ACL rules is only set once, which might cause a crash if there are colliding entries
from more than one applied ACL.

Solution: reload the pointer based on the element being processed.

Change-Id: I7a701c2c3b4236d67293159f2a33c4f967168953
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit 84112dd4f98e5a31a8c7340a741f89e77fd03363)
2018-10-17 07:59:19 +00:00
Marco Varlese
9c335ce8e7 Fix wrong dependencies
An issue was reported to me affecting the VPP build only when built with
1 thread (e.g. the -j1 option to make). That is quite important from a
reproducible-build perspective.
This patch addresses that issue.

Change-Id: Ia8e3b9a9716a260d8b6f1c2d92dd166eddf6716f
Signed-off-by: Marco Varlese <marco.varlese@suse.com>
2018-10-02 12:30:39 +00:00
Neale Ranns
21064cec96 IGMP: handle (*,G) report with no source addresses
Change-Id: I363370b9d4a27b992bad55c48fc930a2fbea2165
Signed-off-by: Neale Ranns <nranns@cisco.com>
2018-10-01 09:42:16 +00:00
Marco Varlese
bc0c8fe6ff SCTP: fix overflow issue with timestamp
Change-Id: I03bb47a2baa4375b7bf9347d95c4cc8de37fe510
Signed-off-by: Marco Varlese <marco.varlese@suse.com>
2018-10-01 07:52:25 +00:00
Ole Troan
639f573dca IP TTL check in ip4-input was missing for the single-packet path.
Change-Id: Idc17b2f8794d37cd3242a97395ab56bd633ca575
Signed-off-by: Ole Troan <ot@cisco.com>
2018-09-28 15:05:07 +00:00
Neale Ranns
6a5bc5173a MPLS tunnel dump fix
Change-Id: I9d3d5243841d5b888f079e3ea5dc1e2e8befd1dc
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
2018-09-25 19:47:37 +00:00
Neale Ranns
d159e6f311 BIER: bi-dir to/from underlay
set and check a special RX interface value as the packet enters and
exits a BIER domain

Change-Id: I5ff2f0e2d1b3ce0f3598b935f518fc11eb0896ee
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
(cherry picked from commit fe4e48f617f3e0f62880adebdcfb5989aa4e6db7)
2018-09-25 15:27:05 +00:00
Neale Ranns
1e5a2c6f19 GRE: fix 4o6 and 6o4 adj stacking
Change-Id: I13dc5eab8835c4f3b95906816d42dccfeee8b092
Signed-off-by: Neale Ranns <nranns@cisco.com>
(cherry picked from commit 2646c80db8d3d1a3cd7555328d5a0038798f861e)
2018-09-21 09:42:34 +00:00
54 changed files with 859 additions and 414 deletions


@ -127,14 +127,14 @@ RPM_SUSE_PLATFORM_DEPS = distribution-release shadow rpm-build
ifeq ($(OS_ID),opensuse)
ifeq ($(SUSE_NAME),Tumbleweed)
RPM_SUSE_DEVEL_DEPS = libboost_headers-devel libboost_thread-devel gcc
RPM_SUSE_DEVEL_DEPS = libboost_headers1_68_0-devel-1.68.0 libboost_thread1_68_0-devel-1.68.0 gcc
RPM_SUSE_PYTHON_DEPS += python2-ply python2-virtualenv
endif
ifeq ($(SUSE_ID),15.0)
RPM_SUSE_DEVEL_DEPS = libboost_headers-devel libboost_thread-devel gcc6
RPM_SUSE_DEVEL_DEPS = libboost_headers1_68_0-devel-1.68.0 libboost_thread1_68_0-devel-1.68.0 gcc6
RPM_SUSE_PYTHON_DEPS += python2-ply python2-virtualenv
else
RPM_SUSE_DEVEL_DEPS += boost_1_61-devel gcc6
RPM_SUSE_DEVEL_DEPS += libboost_headers1_68_0-devel-1.68.0 gcc6
RPM_SUSE_PYTHON_DEPS += python-virtualenv
endif
endif


@ -19,7 +19,7 @@ acl_plugin_la_LDFLAGS += -Wl,-lm,-ldl
acl_plugin_la_LIBADD =
acl_plugin_la_DEPENDENCIES =
acl_plugin_la_SOURCES = \
acl/acl.c \
@ -48,7 +48,7 @@ libacl_plugin_avx2_la_CFLAGS = \
$(AM_CFLAGS) @CPU_AVX2_FLAGS@ \
-DCLIB_MARCH_VARIANT=avx2
noinst_LTLIBRARIES += libacl_plugin_avx2.la
acl_plugin_la_LIBADD += libacl_plugin_avx2.la
acl_plugin_la_DEPENDENCIES += libacl_plugin_avx2.la
endif
if CC_SUPPORTS_AVX512
@ -60,7 +60,7 @@ libacl_plugin_avx512_la_CFLAGS = \
$(AM_CFLAGS) @CPU_AVX512_FLAGS@ \
-DCLIB_MARCH_VARIANT=avx512
noinst_LTLIBRARIES += libacl_plugin_avx512.la
acl_plugin_la_LIBADD += libacl_plugin_avx512.la
acl_plugin_la_DEPENDENCIES += libacl_plugin_avx512.la
endif
endif


@ -1478,6 +1478,8 @@ split_partition(acl_main_t *am, u32 first_index,
int i=0;
u64 collisions = vec_len(pae->colliding_rules);
for(i=0; i<collisions; i++){
/* reload the hash acl info as it might be a different ACL# */
ha = vec_elt_at_index(am->hash_acl_infos, pae->acl_index);
DBG( "TM-collision: base_ace:%d (ace_mask:%d, first_collision_mask:%d)",
pae->ace_index, pae->mask_type_index, coll_mask_type_index);


@ -14,7 +14,7 @@
vppplugins_LTLIBRARIES += avf_plugin.la
vppapitestplugins_LTLIBRARIES += avf_test_plugin.la
avf_plugin_la_LIBADD =
avf_plugin_la_DEPENDENCIES =
avf_plugin_la_SOURCES = \
avf/cli.c \
avf/device.c \
@ -51,7 +51,7 @@ libavf_plugin_avx2_la_CFLAGS = \
$(AM_CFLAGS) @CPU_AVX2_FLAGS@ \
-DCLIB_MARCH_VARIANT=avx2
noinst_LTLIBRARIES += libavf_plugin_avx2.la
avf_plugin_la_LIBADD += libavf_plugin_avx2.la
avf_plugin_la_DEPENDENCIES += libavf_plugin_avx2.la
endif
if CC_SUPPORTS_AVX512
@ -63,7 +63,7 @@ libavf_plugin_avx512_la_CFLAGS = \
$(AM_CFLAGS) @CPU_AVX512_FLAGS@ \
-DCLIB_MARCH_VARIANT=avx512
noinst_LTLIBRARIES += libavf_plugin_avx512.la
avf_plugin_la_LIBADD += libavf_plugin_avx512.la
avf_plugin_la_DEPENDENCIES += libavf_plugin_avx512.la
endif
endif


@ -31,7 +31,7 @@ dpdk_plugin_la_LDFLAGS += -Wl,-lnuma
endif
dpdk_plugin_la_LDFLAGS += -Wl,-lm,-ldl
dpdk_plugin_la_LIBADD =
dpdk_plugin_la_DEPENDENCIES =
dpdk_plugin_la_SOURCES = \
dpdk/main.c \
@ -71,7 +71,7 @@ libdpdk_plugin_avx2_la_CFLAGS = \
$(AM_CFLAGS) @CPU_AVX2_FLAGS@ \
-DCLIB_MARCH_VARIANT=avx2
noinst_LTLIBRARIES += libdpdk_plugin_avx2.la
dpdk_plugin_la_LIBADD += libdpdk_plugin_avx2.la
dpdk_plugin_la_DEPENDENCIES += libdpdk_plugin_avx2.la
endif
if CC_SUPPORTS_AVX512
@ -83,7 +83,7 @@ libdpdk_plugin_avx512_la_CFLAGS = \
$(AM_CFLAGS) @CPU_AVX512_FLAGS@ \
-DCLIB_MARCH_VARIANT=avx512
noinst_LTLIBRARIES += libdpdk_plugin_avx512.la
dpdk_plugin_la_LIBADD += libdpdk_plugin_avx512.la
dpdk_plugin_la_DEPENDENCIES += libdpdk_plugin_avx512.la
endif
endif


@ -32,8 +32,13 @@ igmp_group_mk_source_list (const igmp_membership_group_v3_t * r)
n = clib_net_to_host_u16 (r->n_src_addresses);
if (0 == n)
return (NULL);
{
/* a (*,G) join has no source address specified */
vec_validate (srcs, 0);
srcs[0].ip4.as_u32 = 0;
}
else
{
vec_validate (srcs, n - 1);
s = r->src_addresses;
@ -42,6 +47,7 @@ igmp_group_mk_source_list (const igmp_membership_group_v3_t * r)
srcs[ii].ip4 = *s;
s++;
}
}
return (srcs);
}


@ -14,7 +14,7 @@
vppplugins_LTLIBRARIES += memif_plugin.la
vppapitestplugins_LTLIBRARIES += memif_test_plugin.la
memif_plugin_la_LIBADD =
memif_plugin_la_DEPENDENCIES =
memif_plugin_la_SOURCES = memif/memif.c \
memif/memif_api.c \
memif/cli.c \
@ -49,7 +49,7 @@ memif_plugin_avx2_la_CFLAGS = \
$(AM_CFLAGS) @CPU_AVX2_FLAGS@ \
-DCLIB_MARCH_VARIANT=avx2
noinst_LTLIBRARIES += memif_plugin_avx2.la
memif_plugin_la_LIBADD += memif_plugin_avx2.la
memif_plugin_la_DEPENDENCIES += memif_plugin_avx2.la
endif
if CC_SUPPORTS_AVX512
@ -61,7 +61,7 @@ memif_plugin_avx512_la_CFLAGS = \
$(AM_CFLAGS) @CPU_AVX512_FLAGS@ \
-DCLIB_MARCH_VARIANT=avx512
noinst_LTLIBRARIES += memif_plugin_avx512.la
memif_plugin_la_LIBADD += memif_plugin_avx512.la
memif_plugin_la_DEPENDENCIES += memif_plugin_avx512.la
endif
endif


@ -19929,14 +19929,14 @@ vl_api_mpls_fib_path_print (vat_main_t * vam, vl_api_fib_path_t * fp)
print (vam->ofp,
" weight %d, sw_if_index %d, is_local %d, is_drop %d, "
"is_unreach %d, is_prohitbit %d, afi %d, next_hop %U",
ntohl (fp->weight), ntohl (fp->sw_if_index), fp->is_local,
fp->weight, ntohl (fp->sw_if_index), fp->is_local,
fp->is_drop, fp->is_unreach, fp->is_prohibit, fp->afi,
format_ip6_address, fp->next_hop);
else if (fp->afi == IP46_TYPE_IP4)
print (vam->ofp,
" weight %d, sw_if_index %d, is_local %d, is_drop %d, "
"is_unreach %d, is_prohitbit %d, afi %d, next_hop %U",
ntohl (fp->weight), ntohl (fp->sw_if_index), fp->is_local,
fp->weight, ntohl (fp->sw_if_index), fp->is_local,
fp->is_drop, fp->is_unreach, fp->is_prohibit, fp->afi,
format_ip4_address, fp->next_hop);
}


@ -61,6 +61,9 @@ typedef struct vlib_main_t
CLIB_CACHE_LINE_ALIGN_MARK (cacheline0);
/* Instruction level timing state. */
clib_time_t clib_time;
/* Offset from main thread time */
f64 time_offset;
f64 time_last_barrier_release;
/* Time stamp of last node dispatch. */
u64 cpu_time_last_node_dispatch;
@ -224,7 +227,7 @@ void vlib_worker_loop (vlib_main_t * vm);
always_inline f64
vlib_time_now (vlib_main_t * vm)
{
return clib_time_now (&vm->clib_time);
return clib_time_now (&vm->clib_time) + vm->time_offset;
}
always_inline f64


@ -1448,7 +1448,9 @@ vlib_worker_thread_barrier_sync_int (vlib_main_t * vm)
f64 t_entry;
f64 t_open;
f64 t_closed;
f64 max_vector_rate;
u32 count;
int i;
if (vec_len (vlib_mains) < 2)
return;
@ -1468,13 +1470,43 @@ vlib_worker_thread_barrier_sync_int (vlib_main_t * vm)
return;
}
/*
* Need data to decide if we're working hard enough to honor
* the barrier hold-down timer.
*/
max_vector_rate = 0.0;
for (i = 1; i < vec_len (vlib_mains); i++)
max_vector_rate =
clib_max (max_vector_rate,
vlib_last_vectors_per_main_loop_as_f64 (vlib_mains[i]));
vlib_worker_threads[0].barrier_sync_count++;
/* Enforce minimum barrier open time to minimize packet loss */
ASSERT (vm->barrier_no_close_before <= (now + BARRIER_MINIMUM_OPEN_LIMIT));
while ((now = vlib_time_now (vm)) < vm->barrier_no_close_before)
;
/*
* If any worker thread seems busy, which we define
* as a vector rate above 10, we enforce the barrier hold-down timer
*/
if (max_vector_rate > 10.0)
{
while (1)
{
now = vlib_time_now (vm);
/* Barrier hold-down timer expired? */
if (now >= vm->barrier_no_close_before)
break;
if ((vm->barrier_no_close_before - now)
> (2.0 * BARRIER_MINIMUM_OPEN_LIMIT))
{
clib_warning
("clock change: would have waited for %.4f seconds",
(vm->barrier_no_close_before - now));
break;
}
}
}
/* Record time of closure */
t_open = now - vm->barrier_epoch;
vm->barrier_epoch = now;
@ -1559,6 +1591,14 @@ vlib_worker_thread_barrier_release (vlib_main_t * vm)
deadline = now + BARRIER_SYNC_TIMEOUT;
/*
* Note when we let go of the barrier.
* Workers can use this to derive a reasonably accurate
* time offset. See vlib_time_now(...)
*/
vm->time_last_barrier_release = vlib_time_now (vm);
CLIB_MEMORY_STORE_BARRIER ();
*vlib_worker_threads->wait_at_barrier = 0;
while (*vlib_worker_threads->workers_at_barrier > 0)
@ -1844,6 +1884,45 @@ threads_init (vlib_main_t * vm)
VLIB_INIT_FUNCTION (threads_init);
static clib_error_t *
show_clock_command_fn (vlib_main_t * vm,
unformat_input_t * input, vlib_cli_command_t * cmd)
{
int i;
f64 now;
now = vlib_time_now (vm);
vlib_cli_output (vm, "Time now %.9f", now);
if (vec_len (vlib_mains) == 1)
return 0;
vlib_cli_output (vm, "Time last barrier release %.9f",
vm->time_last_barrier_release);
for (i = 1; i < vec_len (vlib_mains); i++)
{
if (vlib_mains[i] == 0)
continue;
vlib_cli_output (vm, "Thread %d offset %.9f error %.9f", i,
vlib_mains[i]->time_offset,
vm->time_last_barrier_release -
vlib_mains[i]->time_last_barrier_release);
}
return 0;
}
/* *INDENT-OFF* */
VLIB_CLI_COMMAND (f_command, static) =
{
.path = "show clock",
.short_help = "show clock",
.function = show_clock_command_fn,
};
/* *INDENT-ON* */
/*
* fd.io coding-style-patch-verification: ON
*


@ -400,7 +400,7 @@ vlib_worker_thread_barrier_check (void)
{
if (PREDICT_FALSE (*vlib_worker_threads->wait_at_barrier))
{
vlib_main_t *vm;
vlib_main_t *vm = vlib_get_main ();
clib_smp_atomic_add (vlib_worker_threads->workers_at_barrier, 1);
if (CLIB_DEBUG > 0)
{
@ -409,6 +409,21 @@ vlib_worker_thread_barrier_check (void)
}
while (*vlib_worker_threads->wait_at_barrier)
;
/*
* Recompute the offset from thread-0 time.
* Note that vlib_time_now adds vm->time_offset, so
* clear it first. Save the resulting idea of "now", to
* see how well we're doing. See show_clock_command_fn(...)
*/
{
f64 now;
vm->time_offset = 0.0;
now = vlib_time_now (vm);
vm->time_offset = vlib_global_main.time_last_barrier_release - now;
vm->time_last_barrier_release = vlib_time_now (vm);
}
if (CLIB_DEBUG > 0)
vm->parked_at_barrier = 0;
clib_smp_atomic_add (vlib_worker_threads->workers_at_barrier, -1);


@ -145,9 +145,13 @@ linux_epoll_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
vlib_node_main_t *nm = &vm->node_main;
u32 ticks_until_expiration;
f64 timeout;
f64 now;
int timeout_ms = 0, max_timeout_ms = 10;
f64 vector_rate = vlib_last_vectors_per_main_loop (vm);
if (is_main == 0)
now = vlib_time_now (vm);
/*
* If we've been asked for a fixed-sleep between main loop polls,
* do so right away.
@ -194,8 +198,9 @@ linux_epoll_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
}
node->input_main_loops_per_call = 0;
}
else if (is_main == 0 && vector_rate < 2 &&
nm->input_node_counts_by_state[VLIB_NODE_STATE_POLLING] == 0)
else if (is_main == 0 && vector_rate < 2
&& (vlib_global_main.time_last_barrier_release + 0.5 < now)
&& nm->input_node_counts_by_state[VLIB_NODE_STATE_POLLING] == 0)
{
timeout = 10e-3;
timeout_ms = max_timeout_ms;
@ -223,12 +228,32 @@ linux_epoll_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
em->epoll_events,
vec_len (em->epoll_events), timeout_ms);
}
}
else
{
/*
* Worker thread, no epoll fd's, sleep for 100us at a time
* and check for a barrier sync request
*/
if (timeout_ms)
usleep (timeout_ms * 1000);
return 0;
{
struct timespec ts, tsrem;
f64 limit = now + (f64) timeout_ms * 1e-3;
while (vlib_time_now (vm) < limit)
{
/* Sleep for 100us at a time */
ts.tv_sec = 0;
ts.tv_nsec = 1000 * 100;
while (nanosleep (&ts, &tsrem) < 0)
ts = tsrem;
if (*vlib_worker_threads->wait_at_barrier)
goto done;
}
}
goto done;
}
}
@ -238,7 +263,7 @@ linux_epoll_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
vlib_panic_with_error (vm, clib_error_return_unix (0, "epoll_wait"));
/* non fatal error (e.g. EINTR). */
return 0;
goto done;
}
em->epoll_waits += 1;
@ -314,6 +339,7 @@ linux_epoll_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
}
}
done:
return 0;
}


@ -1294,7 +1294,7 @@ libvnet_avx2_la_CFLAGS = \
$(AM_CFLAGS) @CPU_AVX2_FLAGS@ \
-DCLIB_MARCH_VARIANT=avx2
noinst_LTLIBRARIES += libvnet_avx2.la
libvnet_la_LIBADD += libvnet_avx2.la
libvnet_la_DEPENDENCIES += libvnet_avx2.la
endif
if CC_SUPPORTS_AVX512
@ -1306,7 +1306,7 @@ libvnet_avx512_la_CFLAGS = \
$(AM_CFLAGS) @CPU_AVX512_FLAGS@ \
-DCLIB_MARCH_VARIANT=avx512
noinst_LTLIBRARIES += libvnet_avx512.la
libvnet_la_LIBADD += libvnet_avx512.la
libvnet_la_DEPENDENCIES += libvnet_avx512.la
endif
endif


@ -72,6 +72,7 @@ bier_disp_dispatch_inline (vlib_main_t * vm,
bdei0 = vnet_buffer(b0)->ip.adj_index[VLIB_TX];
hdr0 = vlib_buffer_get_current(b0);
bde0 = bier_disp_entry_get(bdei0);
vnet_buffer(b0)->ip.adj_index[VLIB_RX] = BIER_RX_ITF;
/*
* header is in network order - flip it, we are about to


@ -87,11 +87,10 @@ format_bier_drop_trace (u8 * s, va_list * args)
return s;
}
/* *INDENT-OFF* */
VLIB_REGISTER_NODE (bier_drop_node, static) =
{
.function = bier_drop,.
name = "bier-drop",
.function = bier_drop,
.name = "bier-drop",
.vector_size = sizeof (u32),
.format_trace = format_bier_drop_trace,
.n_next_nodes = 1,


@ -26,6 +26,11 @@
#include <vnet/bier/bier_bit_string.h>
#include <vnet/ip/ip6_packet.h>
/**
* Special Value of the BIER RX interface
*/
#define BIER_RX_ITF (~0 - 1)
/**
* Mask and shift values for the fields incorporated
* into the header's first word


@ -68,7 +68,7 @@ bier_imp_add_or_lock (const bier_table_id_t *bti,
pool_get_aligned(bier_imp_pool, bi, CLIB_CACHE_LINE_BYTES);
bi->bi_tbl = *bti;
btii = bier_table_add_or_lock(bti, MPLS_LABEL_INVALID);
btii = bier_table_lock(bti);
/*
* init the BIER header we will paint on in the data plane


@ -120,6 +120,14 @@ bier_imp_dpo_inline (vlib_main_t * vm,
vlib_buffer_advance(b0, -(sizeof(bier_hdr_t) +
bier_hdr_len_id_to_num_bytes(bimp0->bi_tbl.bti_hdr_len)));
hdr0 = vlib_buffer_get_current(b0);
/* RPF check */
if (PREDICT_FALSE(BIER_RX_ITF == vnet_buffer(b0)->ip.adj_index[VLIB_RX]))
{
next0 = 0;
}
else
{
clib_memcpy(hdr0, &bimp0->bi_hdr,
(sizeof(bier_hdr_t) +
bier_hdr_len_id_to_num_bytes(bimp0->bi_tbl.bti_hdr_len)));
@ -144,6 +152,7 @@ bier_imp_dpo_inline (vlib_main_t * vm,
next0 = bimp0->bi_dpo[fproto].dpoi_next_node;
vnet_buffer(b0)->ip.adj_index[VLIB_TX] =
bimp0->bi_dpo[fproto].dpoi_index;
}
if (PREDICT_FALSE(b0->flags & VLIB_BUFFER_IS_TRACED))
{
@ -194,7 +203,7 @@ VLIB_REGISTER_NODE (bier_imp_ip4_node) = {
.format_trace = format_bier_imp_trace,
.n_next_nodes = 1,
.next_nodes = {
[0] = "error-drop",
[0] = "bier-drop",
}
};
VLIB_NODE_FUNCTION_MULTIARCH (bier_imp_ip4_node, bier_imp_ip4)


@ -329,6 +329,70 @@ bier_table_mk_ecmp (index_t bti)
return (bt);
}
static index_t
bier_table_create (const bier_table_id_t *btid,
mpls_label_t local_label)
{
/*
* add a new table
*/
bier_table_t *bt;
index_t bti;
u32 key;
key = bier_table_mk_key(btid);
pool_get_aligned(bier_table_pool, bt, CLIB_CACHE_LINE_BYTES);
bier_table_init(bt, btid, local_label);
hash_set(bier_tables_by_key, key, bier_table_get_index(bt));
bti = bier_table_get_index(bt);
if (bier_table_is_main(bt))
{
bt = bier_table_mk_ecmp(bti);
/*
* add whichever mpls-fib or bift we need
*/
if (local_label != MPLS_LABEL_INVALID)
{
bt->bt_ll = local_label;
bier_table_mk_lfib(bt);
}
else
{
bier_table_mk_bift(bt);
}
}
return (bti);
}
index_t
bier_table_lock (const bier_table_id_t *btid)
{
bier_table_t *bt;
index_t bti;
bt = bier_table_find(btid);
if (NULL == bt)
{
bti = bier_table_create(btid, MPLS_LABEL_INVALID);
bt = bier_table_get(bti);
}
else
{
bti = bier_table_get_index(bt);
}
bier_table_lock_i(bt);
return (bti);
}
index_t
bier_table_add_or_lock (const bier_table_id_t *btid,
mpls_label_t local_label)
@ -379,36 +443,8 @@ bier_table_add_or_lock (const bier_table_id_t *btid,
}
else
{
/*
* add a new table
*/
u32 key;
key = bier_table_mk_key(btid);
pool_get_aligned(bier_table_pool, bt, CLIB_CACHE_LINE_BYTES);
bier_table_init(bt, btid, local_label);
hash_set(bier_tables_by_key, key, bier_table_get_index(bt));
bti = bier_table_get_index(bt);
if (bier_table_is_main(bt))
{
bt = bier_table_mk_ecmp(bti);
/*
* add whichever mpls-fib or bift we need
*/
if (local_label != MPLS_LABEL_INVALID)
{
bt->bt_ll = local_label;
bier_table_mk_lfib(bt);
}
else
{
bier_table_mk_bift(bt);
}
}
bti = bier_table_create(btid, local_label);
bt = bier_table_get(bti);
}
bier_table_lock_i(bt);


@ -91,6 +91,7 @@ STATIC_ASSERT((sizeof(bier_table_t) <= 2*CLIB_CACHE_LINE_BYTES),
extern index_t bier_table_add_or_lock(const bier_table_id_t *id,
mpls_label_t ll);
extern index_t bier_table_lock(const bier_table_id_t *id);
extern void bier_table_unlock(const bier_table_id_t *id);
extern void bier_table_route_add(const bier_table_id_t *bti,


@ -516,11 +516,13 @@ bond_enslave (vlib_main_t * vm, bond_enslave_args_t * args)
ethernet_set_rx_redirect (vnm, sif_hw, 1);
}
if ((bif->mode == BOND_MODE_LACP) && bm->lacp_enable_disable)
if (bif->mode == BOND_MODE_LACP)
{
if (bm->lacp_enable_disable)
(*bm->lacp_enable_disable) (vm, bif, sif, 1);
}
else
else if (sif->port_enabled &&
(sif_hw->flags & VNET_HW_INTERFACE_FLAG_LINK_UP))
{
bond_enable_collecting_distributing (vm, sif);
}


@ -404,21 +404,23 @@ bond_sw_interface_up_down (vnet_main_t * vnm, u32 sw_if_index, u32 flags)
if (sif)
{
sif->port_enabled = flags & VNET_SW_INTERFACE_FLAG_ADMIN_UP;
if (sif->lacp_enabled)
return 0;
if (sif->port_enabled == 0)
{
if (sif->lacp_enabled == 0)
{
bond_disable_collecting_distributing (vm, sif);
}
}
else
{
if (sif->lacp_enabled == 0)
{
vnet_main_t *vnm = vnet_get_main ();
vnet_hw_interface_t *hw =
vnet_get_sup_hw_interface (vnm, sw_if_index);
if (hw->flags & VNET_HW_INTERFACE_FLAG_LINK_UP)
bond_enable_collecting_distributing (vm, sif);
}
}
}
return 0;
}
@ -437,21 +439,18 @@ bond_hw_interface_up_down (vnet_main_t * vnm, u32 hw_if_index, u32 flags)
sif = bond_get_slave_by_sw_if_index (sw->sw_if_index);
if (sif)
{
if (sif->lacp_enabled)
return 0;
if (!(flags & VNET_HW_INTERFACE_FLAG_LINK_UP))
{
if (sif->lacp_enabled == 0)
{
bond_disable_collecting_distributing (vm, sif);
}
}
else
{
if (sif->lacp_enabled == 0)
else if (sif->port_enabled)
{
bond_enable_collecting_distributing (vm, sif);
}
}
}
return 0;
}

File diff suppressed because it is too large


@ -245,6 +245,14 @@ typedef struct
/* The rx queue policy (interrupt/adaptive/polling) for this queue */
u32 mode;
/*
* It contains the device queue number, or -1 if it does not. The idea is
* to not invoke vnet_hw_interface_assign_rx_thread and
* vnet_hw_interface_unassign_rx_thread more than once for the duration of
* the interface even if it is disconnected and reconnected.
*/
i16 qid;
} vhost_user_vring_t;
#define VHOST_USER_EVENT_START_TIMER 1
@ -288,9 +296,6 @@ typedef struct
/* Whether to use spinlock or per_cpu_tx_qid assignment */
u8 use_tx_spinlock;
u16 *per_cpu_tx_qid;
/* Vector of active rx queues for this interface */
u16 *rx_queues;
} vhost_user_intf_t;
typedef struct


@ -234,6 +234,9 @@ vhost_user_api_hookup (vlib_main_t * vm)
foreach_vpe_api_msg;
#undef _
/* Mark CREATE_VHOST_USER_IF as mp safe */
am->is_mp_safe[VL_API_CREATE_VHOST_USER_IF] = 1;
/*
* Set up the (msg_name, crc, message-id) table
*/


@ -305,6 +305,22 @@ fib_forw_chain_type_from_dpo_proto (dpo_proto_t proto)
return (FIB_FORW_CHAIN_TYPE_UNICAST_IP4);
}
fib_forward_chain_type_t
fib_forw_chain_type_from_fib_proto (fib_protocol_t proto)
{
switch (proto)
{
case FIB_PROTOCOL_IP4:
return (FIB_FORW_CHAIN_TYPE_UNICAST_IP4);
case FIB_PROTOCOL_IP6:
return (FIB_FORW_CHAIN_TYPE_UNICAST_IP6);
case FIB_PROTOCOL_MPLS:
return (FIB_FORW_CHAIN_TYPE_MPLS_NON_EOS);
}
ASSERT(0);
return (FIB_FORW_CHAIN_TYPE_UNICAST_IP4);
}
vnet_link_t
fib_forw_chain_type_to_link_type (fib_forward_chain_type_t fct)
{


@ -177,6 +177,11 @@ extern fib_forward_chain_type_t fib_forw_chain_type_from_link_type(vnet_link_t l
*/
extern fib_forward_chain_type_t fib_forw_chain_type_from_dpo_proto(dpo_proto_t proto);
/**
* @brief Convert from a fib-protocol to a chain type.
*/
extern fib_forward_chain_type_t fib_forw_chain_type_from_fib_proto(fib_protocol_t proto);
/**
* @brief Convert from a chain type to the DPO proto it will install
*/


@ -127,7 +127,9 @@ gre_tunnel_from_fib_node (fib_node_t * node)
void
gre_tunnel_stack (adj_index_t ai)
{
fib_forward_chain_type_t fib_fwd;
gre_main_t *gm = &gre_main;
dpo_id_t tmp = DPO_INVALID;
ip_adjacency_t *adj;
gre_tunnel_t *gt;
u32 sw_if_index;
@ -149,9 +151,7 @@ gre_tunnel_stack (adj_index_t ai)
return;
}
dpo_id_t tmp = DPO_INVALID;
fib_forward_chain_type_t fib_fwd = (FIB_PROTOCOL_IP6 == adj->ia_nh_proto) ?
FIB_FORW_CHAIN_TYPE_UNICAST_IP6 : FIB_FORW_CHAIN_TYPE_UNICAST_IP4;
fib_fwd = fib_forw_chain_type_from_fib_proto (gt->tunnel_dst.fp_proto);
fib_entry_contribute_forwarding (gt->fib_entry_index, fib_fwd, &tmp);
if (DPO_LOAD_BALANCE == tmp.dpoi_type)


@ -1223,6 +1223,11 @@ interface_api_hookup (vlib_main_t * vm)
foreach_vpe_api_msg;
#undef _
/* Mark these APIs as mp safe */
am->is_mp_safe[VL_API_SW_INTERFACE_DUMP] = 1;
am->is_mp_safe[VL_API_SW_INTERFACE_DETAILS] = 1;
am->is_mp_safe[VL_API_SW_INTERFACE_TAG_ADD_DEL] = 1;
/*
* Set up the (msg_name, crc, message-id) table
*/


@ -469,6 +469,7 @@ VLIB_CLI_COMMAND (show_sw_interfaces_command, static) = {
.path = "show interface",
.short_help = "show interface [address|addr|features|feat] [<interface> [<interface> [..]]] [verbose]",
.function = show_sw_interfaces,
.is_mp_safe = 1,
};
/* *INDENT-ON* */

Some files were not shown because too many files have changed in this diff