Compare commits

...

14 Commits

Author SHA1 Message Date
Andrew Yourtchenko
fce396738f misc: VPP 20.01 Release Notes
Type: docs
Change-Id: Iee518fbb9c72716cc90a3ea8efbf3ecbaa969a84
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
2020-01-29 20:33:31 +00:00
Andrew Yourtchenko
fc98203b5d misc: Markdown cleanups for the 20.01 release
Type: docs
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
Change-Id: I821197364a2fee9b52b1f014288b1f5e9e3c494c
2020-01-29 19:32:53 +00:00
Benoît Ganne
a2c1951e56 devices: vhost: fix data offset on input
Regardless of whether the virtio_net_hdr is sent as a separate
descriptor or in the same descriptor as the data, we always want to
skip the header length - maybe moving to the next descriptor along the
way.

Type: fix

Change-Id: Iaa70aeb310e589639b20f8c7029aaa8d3ce5d307
Signed-off-by: Benoît Ganne <bganne@cisco.com>
(cherry picked from commit 5ecc1e4d433a34845c7bfd761dc990413e6c321b)
2020-01-29 18:35:11 +00:00
Florin Coras
fb15c0c0cf session tcp: fix packet tracing
Type: fix
Ticket: VPP-1830

Change-Id: Ib823d016c64998779fb1d00b8aad3acb5e8340be
Signed-off-by: Florin Coras <fcoras@cisco.com>
(cherry picked from commit 30928f87a3c9d98e288d1364d50c032e052e69ab)
2020-01-28 16:28:32 +00:00
Neale Ranns
7d3c2b738e fib: Reload the adj after possible realloc (VPP-1822)
Type: fix
Fixes: 418b225931634f6d113d2971cb9550837d69929d

Change-Id: Ia5f4ea24188c4f3de87e06a7fd07b40bcb47cfc1
Signed-off-by: Neale Ranns <nranns@cisco.com>
2020-01-27 20:55:34 +00:00
Dave Wallace
664c9613ac nsim: enable output scheduling on main thread
Type: fix
Ticket: VPP-1813

Change-Id: I5d47cb9bc7eb7f3c8485e3b42f0701e81d87ba2a
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
(cherry picked from commit c0c4eec3bc309bcc656eade82f17754875f9ed7c)
2020-01-27 20:55:12 +00:00
Satoru Matsushima
0c514f0d76 srv6-mobile: Update the document
Update the documentation for the srv6-mobile plugin code integrated into stable/2001. This patch is documentation-only because the latest commit to master was reverted by the release manager; that commit contained both new feature code and updated documentation for code already merged into stable/2001. The previous doc is work-in-progress with respect to its CLI and features. There has been some confusion that this patch documents a feature outside of stable/2001, which is not the case: it reflects exactly the srv6-mobile plugin as present in stable/2001.

Type: docs

Signed-off-by: Satoru Matsushima <satoru.matsushima@gmail.com>
Change-Id: I376386ef6fc9584ab945db7358e3c4a698471e9b
Signed-off-by: Satoru Matsushima <satoru.matsushima@gmail.com>
2020-01-27 16:20:25 +00:00
Florin Coras
abd9312516 session: fix node runtime in pre-input queue handler
Call session queue node with the right node runtime instead of the
pre-input node runtime.

Type: fix
Ticket: VPP-1826

Change-Id: I43d20bed4930fc877b187ce7ecdce62034b393c5
Signed-off-by: Florin Coras <fcoras@cisco.com>
(cherry picked from commit 2d8829cbb5f3d214fbc09bf4258573659e0c5e60)
2020-01-25 19:31:43 +00:00
Dave Barach
9af7a98cf8 api: mark api_trace_command_fn thread-safe
Binary API trace replay with multiple worker threads depends in many
cases on worker thread graph replica maintenance. If we (implicitly)
assert a worker thread barrier at the debug CLI level, all graph
replica changes are deferred until the replay operation completes. If
an interface is deleted, the wheels may fall off.

Type: fix
Ticket: VPP-1824

Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I9b07d43f8501caa5519e5ff9ae4c19dc2661cc84
2020-01-24 00:21:06 +00:00
Florin Coras
864af09508 hsa: proxy app fixes
Type: fix
Ticket: VPP-1825

Change-Id: Icb4b331c9346d3781f4ddd6f62891c78d4059c1f
Signed-off-by: Florin Coras <fcoras@cisco.com>
(cherry picked from commit f5c7305c4ab21fe1c3eeeee1484449586464813a)
2020-01-23 19:00:29 +00:00
Neale Ranns
20398a368c fib: FIB crash removing labelled route (VPP-1818)
Type: fix

The crash occurred trying to retrieve a NULL path list to walk the path
extensions. A walk should not be required, because there should be no
extensions, since all paths are removed. The problem is that when the
paths were added, they were not sorted, hence neither were the
extensions; when they were updated, duplicate extensions were added,
and hence a path removal did not remove them all.
The fix is to make sure paths are sorted.

Change-Id: I069d937de8e7bc8aae3d92f588db4daff727d863
Signed-off-by: Neale Ranns <nranns@cisco.com>
(cherry picked from commit 257749c40946a9269140d322e374d74c3b6eefb8)
2020-01-22 22:34:43 +00:00
Neale Ranns
29acfa2ad5 ipsec: re-enable DPDK IPSec for tunnel decap/encap (VPP-1823)
Type: fix

Change-Id: Iff9b1960b122f7d326efc37770b4ae3e81eb3122
Signed-off-by: Neale Ranns <nranns@cisco.com>
2020-01-22 19:23:38 +00:00
Neale Ranns
e3cabba9b8 fib: Adjacency realloc during rewrite update walk (VPP-1822)
Type: fix

Change-Id: I0e826284c50713d322ee7943d87fd3363cfbdfbc
Signed-off-by: Neale Ranns <nranns@cisco.com>
2020-01-21 20:26:43 +00:00
Andrew Yourtchenko
c7fe31cfff misc: Initial changes for stable/2001 branch
Type: docs
Change-Id: I0a8a43bd5436b5d3cdd9b8937cd0b2366e523f91
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
2020-01-15 22:09:55 +00:00
35 changed files with 1706 additions and 438 deletions


@ -2,3 +2,4 @@
host=gerrit.fd.io
port=29418
project=vpp
defaultbranch=stable/2001

RELEASE.md (1298 changed lines)

File diff suppressed because it is too large.


@ -16,4 +16,7 @@ Programming notes for developers.
- @subpage stats_doc
- @subpage if_stats_client_doc
- @subpage api_lang_doc
- @subpage handoff_queue_demo_plugin
- @subpage handoff_queue_demo_plugin
- @subpage lcov_code_coverage
- @subpage mdata_doc


@ -16,6 +16,7 @@ Several modules provide operational, dataplane-user focused documentation.
- @subpage lldp_doc
- @subpage map_doc
- @subpage marvel_plugin_doc
- @subpage srv6_mobile_plugin
- @subpage mtu_doc
- @subpage nat64_doc
- @subpage nat_ha_doc


@ -1,3 +1,5 @@
# Code coverage analysis with lcov {#lcov_code_coverage}
## Prerequisites
The Linux gcov and lcov tools are fussy about gcc / g++ compiler


@ -256,7 +256,10 @@ dpdk_esp_decrypt_inline (vlib_main_t * vm,
if (is_ip6)
priv->next = DPDK_CRYPTO_INPUT_NEXT_DECRYPT6_POST;
else
priv->next = DPDK_CRYPTO_INPUT_NEXT_DECRYPT4_POST;
{
priv->next = DPDK_CRYPTO_INPUT_NEXT_DECRYPT4_POST;
b0->flags |= VNET_BUFFER_F_IS_IP4;
}
/* FIXME multi-seg */
vlib_increment_combined_counter


@ -66,6 +66,8 @@ static char *esp_encrypt_error_strings[] = {
extern vlib_node_registration_t dpdk_esp4_encrypt_node;
extern vlib_node_registration_t dpdk_esp6_encrypt_node;
extern vlib_node_registration_t dpdk_esp4_encrypt_tun_node;
extern vlib_node_registration_t dpdk_esp6_encrypt_tun_node;
typedef struct
{
@ -411,8 +413,16 @@ dpdk_esp_encrypt_inline (vlib_main_t * vm,
}
else /* transport mode */
{
priv->next = DPDK_CRYPTO_INPUT_NEXT_INTERFACE_OUTPUT;
rewrite_len = vnet_buffer (b0)->ip.save_rewrite_length;
if (is_tun)
{
rewrite_len = 0;
priv->next = DPDK_CRYPTO_INPUT_NEXT_MIDCHAIN;
}
else
{
priv->next = DPDK_CRYPTO_INPUT_NEXT_INTERFACE_OUTPUT;
rewrite_len = vnet_buffer (b0)->ip.save_rewrite_length;
}
u16 adv = sizeof (esp_header_t) + iv_size + udp_encap_adv;
vlib_buffer_advance (b0, -adv - rewrite_len);
u8 *src = ((u8 *) ih0) - rewrite_len;
@ -576,7 +586,10 @@ dpdk_esp_encrypt_inline (vlib_main_t * vm,
}
if (is_ip6)
{
vlib_node_increment_counter (vm, dpdk_esp6_encrypt_node.index,
vlib_node_increment_counter (vm,
(is_tun ?
dpdk_esp6_encrypt_tun_node.index :
dpdk_esp6_encrypt_node.index),
ESP_ENCRYPT_ERROR_RX_PKTS,
from_frame->n_vectors);
@ -585,7 +598,10 @@ dpdk_esp_encrypt_inline (vlib_main_t * vm,
}
else
{
vlib_node_increment_counter (vm, dpdk_esp4_encrypt_node.index,
vlib_node_increment_counter (vm,
(is_tun ?
dpdk_esp4_encrypt_tun_node.index :
dpdk_esp4_encrypt_node.index),
ESP_ENCRYPT_ERROR_RX_PKTS,
from_frame->n_vectors);
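
The transport-mode branch above now distinguishes tunnel-protected packets: for `is_tun` the rewrite length is zeroed and the packet is handed to the midchain adjacency, otherwise the saved rewrite is restored and the packet goes to interface output. A minimal standalone sketch of that selection logic (hypothetical enum and struct names, not the VPP types):

```
#include <stdio.h>

/* Hypothetical stand-ins for the VPP next-node indices used above. */
enum crypto_next { NEXT_INTERFACE_OUTPUT, NEXT_MIDCHAIN };

struct dispatch {
  enum crypto_next next;  /* node the packet visits after encryption */
  unsigned rewrite_len;   /* bytes of L2/L3 rewrite to copy in front */
};

/* For tunnel-protected traffic the midchain adjacency applies the tunnel
 * rewrite later, so nothing is prepended here; for plain transport mode
 * the saved rewrite is restored and the packet goes to interface output. */
static struct dispatch
select_transport_dispatch (int is_tun, unsigned save_rewrite_length)
{
  struct dispatch d;
  if (is_tun)
    {
      d.next = NEXT_MIDCHAIN;
      d.rewrite_len = 0;
    }
  else
    {
      d.next = NEXT_INTERFACE_OUTPUT;
      d.rewrite_len = save_rewrite_length;
    }
  return d;
}

int
main (void)
{
  struct dispatch d = select_transport_dispatch (1, 14);
  printf ("next=%d rewrite_len=%u\n", d.next, d.rewrite_len);
  return 0;
}
```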


@ -1049,9 +1049,11 @@ dpdk_ipsec_process (vlib_main_t * vm, vlib_node_runtime_t * rt,
"dpdk-esp4-encrypt",
"dpdk-esp4-encrypt-tun",
"dpdk-esp4-decrypt",
"dpdk-esp4-decrypt",
"dpdk-esp6-encrypt",
"dpdk-esp6-encrypt-tun",
"dpdk-esp6-decrypt",
"dpdk-esp6-decrypt",
dpdk_ipsec_check_support,
add_del_sa_session);
int rv = ipsec_select_esp_backend (im, idx);


@ -38,6 +38,7 @@
_(IP4_LOOKUP, "ip4-lookup") \
_(IP6_LOOKUP, "ip6-lookup") \
_(INTERFACE_OUTPUT, "interface-output") \
_(MIDCHAIN, "adj-midchain-tx") \
_(DECRYPT4_POST, "dpdk-esp4-decrypt-post") \
_(DECRYPT6_POST, "dpdk-esp6-decrypt-post")


@ -69,13 +69,12 @@ delete_proxy_session (session_t * s, int is_active_open)
uword *p;
u64 handle;
clib_spinlock_lock_if_init (&pm->sessions_lock);
handle = session_handle (s);
clib_spinlock_lock_if_init (&pm->sessions_lock);
if (is_active_open)
{
active_open_session = s;
p = hash_get (pm->proxy_session_by_active_open_handle, handle);
if (p == 0)
{
@ -85,17 +84,14 @@ delete_proxy_session (session_t * s, int is_active_open)
}
else if (!pool_is_free_index (pm->sessions, p[0]))
{
active_open_session = s;
ps = pool_elt_at_index (pm->sessions, p[0]);
if (ps->vpp_server_handle != ~0)
server_session = session_get_from_handle (ps->vpp_server_handle);
else
server_session = 0;
}
}
else
{
server_session = s;
p = hash_get (pm->proxy_session_by_server_handle, handle);
if (p == 0)
{
@ -105,12 +101,11 @@ delete_proxy_session (session_t * s, int is_active_open)
}
else if (!pool_is_free_index (pm->sessions, p[0]))
{
server_session = s;
ps = pool_elt_at_index (pm->sessions, p[0]);
if (ps->vpp_active_open_handle != ~0)
active_open_session = session_get_from_handle
(ps->vpp_active_open_handle);
else
active_open_session = 0;
}
}
@ -121,8 +116,6 @@ delete_proxy_session (session_t * s, int is_active_open)
pool_put (pm->sessions, ps);
}
clib_spinlock_unlock_if_init (&pm->sessions_lock);
if (active_open_session)
{
a->handle = session_handle (active_open_session);
@ -140,6 +133,8 @@ delete_proxy_session (session_t * s, int is_active_open)
session_handle (server_session));
vnet_disconnect_session (a);
}
clib_spinlock_unlock_if_init (&pm->sessions_lock);
}
static int
@ -232,6 +227,7 @@ proxy_rx_callback (session_t * s)
if (PREDICT_FALSE (max_dequeue == 0))
return 0;
max_dequeue = clib_min (pm->rcv_buffer_size, max_dequeue);
actual_transfer = svm_fifo_peek (rx_fifo, 0 /* relative_offset */ ,
max_dequeue, pm->rx_buf[thread_index]);
@ -239,7 +235,6 @@ proxy_rx_callback (session_t * s)
clib_memset (a, 0, sizeof (*a));
clib_spinlock_lock_if_init (&pm->sessions_lock);
pool_get (pm->sessions, ps);
clib_memset (ps, 0, sizeof (*ps));
ps->server_rx_fifo = rx_fifo;
@ -376,22 +371,6 @@ static session_cb_vft_t active_open_clients = {
};
/* *INDENT-ON* */
static void
create_api_loopbacks (vlib_main_t * vm)
{
proxy_main_t *pm = &proxy_main;
api_main_t *am = vlibapi_get_main ();
vl_shmem_hdr_t *shmem_hdr;
shmem_hdr = am->shmem_hdr;
pm->vl_input_queue = shmem_hdr->vl_input_queue;
pm->server_client_index =
vl_api_memclnt_create_internal ("proxy_server", pm->vl_input_queue);
pm->active_open_client_index =
vl_api_memclnt_create_internal ("proxy_active_open", pm->vl_input_queue);
}
static int
proxy_server_attach ()
{
@ -405,6 +384,7 @@ proxy_server_attach ()
if (pm->private_segment_size)
segment_size = pm->private_segment_size;
a->name = format (0, "proxy-server");
a->api_client_index = pm->server_client_index;
a->session_cb_vft = &proxy_session_cb_vft;
a->options = options;
@ -424,6 +404,7 @@ proxy_server_attach ()
}
pm->server_app_index = a->app_index;
vec_free (a->name);
return 0;
}
@ -439,6 +420,7 @@ active_open_attach (void)
a->api_client_index = pm->active_open_client_index;
a->session_cb_vft = &active_open_clients;
a->name = format (0, "proxy-active-open");
options[APP_OPTIONS_ACCEPT_COOKIE] = 0x12345678;
options[APP_OPTIONS_SEGMENT_SIZE] = 512 << 20;
@ -458,6 +440,8 @@ active_open_attach (void)
pm->active_open_app_index = a->app_index;
vec_free (a->name);
return 0;
}
@ -480,9 +464,6 @@ proxy_server_create (vlib_main_t * vm)
u32 num_threads;
int i;
if (pm->server_client_index == (u32) ~ 0)
create_api_loopbacks (vm);
num_threads = 1 /* main thread */ + vtm->n_threads;
vec_validate (proxy_main.server_event_queue, num_threads - 1);
vec_validate (proxy_main.active_open_event_queue, num_threads - 1);
@ -535,6 +516,7 @@ proxy_server_create_command_fn (vlib_main_t * vm, unformat_input_t * input,
pm->private_segment_count = 0;
pm->private_segment_size = 0;
pm->server_uri = 0;
pm->client_uri = 0;
while (unformat_check_input (input) != UNFORMAT_END_OF_INPUT)
{
@ -556,9 +538,9 @@ proxy_server_create_command_fn (vlib_main_t * vm, unformat_input_t * input,
pm->private_segment_size = tmp;
}
else if (unformat (input, "server-uri %s", &pm->server_uri))
;
vec_add1 (pm->server_uri, 0);
else if (unformat (input, "client-uri %s", &pm->client_uri))
pm->client_uri = format (0, "%s%c", pm->client_uri, 0);
vec_add1 (pm->client_uri, 0);
else
return clib_error_return (0, "unknown input `%U'",
format_unformat_error, input);
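
The `delete_proxy_session` hunks above tighten the locking around the shared session pool: the duplicated lock call goes away and the unlock moves past the last use of the looked-up sessions. A minimal plain-C sketch of the same pattern, using a pthread mutex and toy types rather than VPP's clib spinlocks and session handles:

```
#include <pthread.h>
#include <stdio.h>

/* Toy session table guarded by one lock.  Illustrative only. */
#define MAX_SESSIONS 8

struct session { int in_use; int peer; };

static struct session sessions[MAX_SESSIONS];
static pthread_mutex_t sessions_lock = PTHREAD_MUTEX_INITIALIZER;

/* Delete a session and note its peer.  The lookup, the read of the peer
 * field and the slot release all happen under a single lock acquisition,
 * so a concurrent writer cannot recycle the slot in between. */
static int
delete_session (int index)
{
  int peer = -1;

  pthread_mutex_lock (&sessions_lock);
  if (index >= 0 && index < MAX_SESSIONS && sessions[index].in_use)
    {
      peer = sessions[index].peer;   /* still valid: lock is held */
      sessions[index].in_use = 0;    /* return the slot */
    }
  pthread_mutex_unlock (&sessions_lock);

  return peer;
}

int
main (void)
{
  sessions[3].in_use = 1;
  sessions[3].peer = 7;
  printf ("peer of deleted session: %d\n", delete_session (3));
  return 0;
}
```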


@ -182,7 +182,8 @@ nsim_configure (nsim_main_t * nsm, f64 bandwidth, f64 delay, f64 packet_size,
vec_validate (nsm->wheel_by_thread, num_workers);
/* Initialize the output scheduler wheels */
for (i = num_workers ? 1 : 0; i < num_workers + 1; i++)
i = (!nsm->poll_main_thread && num_workers) ? 1 : 0;
for (; i < num_workers + 1; i++)
{
nsim_wheel_t *wp;
@ -205,7 +206,8 @@ nsim_configure (nsim_main_t * nsm, f64 bandwidth, f64 delay, f64 packet_size,
vlib_worker_thread_barrier_sync (vm);
/* turn on the ring scrapers */
for (i = num_workers ? 1 : 0; i < num_workers + 1; i++)
i = (!nsm->poll_main_thread && num_workers) ? 1 : 0;
for (; i < num_workers + 1; i++)
{
vlib_main_t *this_vm = vlib_mains[i];
@ -287,6 +289,28 @@ nsim_cross_connect_enable_disable_command_fn (vlib_main_t * vm,
return 0;
}
static clib_error_t *
nsim_config (vlib_main_t * vm, unformat_input_t * input)
{
nsim_main_t *nsm = &nsim_main;
while (unformat_check_input (input) != UNFORMAT_END_OF_INPUT)
{
if (unformat (input, "poll-main-thread"))
{
nsm->poll_main_thread = 1;
}
else
{
return clib_error_return (0, "unknown input '%U'",
format_unformat_error, input);
}
}
return 0;
}
VLIB_CONFIG_FUNCTION (nsim_config, "nsim");
/*?
* Enable or disable network simulation cross-connect on two interfaces
* The network simulator must have already been configured, see
@ -584,6 +608,8 @@ set_nsim_command_fn (vlib_main_t * vm,
return clib_error_return
(0, "drop fraction must be between zero and 1");
}
else if (unformat (input, "poll-main-thread"))
nsm->poll_main_thread = 1;
else
break;
}
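
The loop-bound change above lets the output-scheduler wheels include the main thread when `poll-main-thread` is configured; otherwise, with workers present, thread 0 is skipped as before. A tiny illustrative sketch of that start-index choice (variable names mirror the hunk, the function itself is hypothetical):

```
#include <stdio.h>

/* With workers present the main thread (index 0) is normally skipped,
 * unless poll_main_thread is set, in which case it also gets a wheel. */
static void
init_wheels (int poll_main_thread, int num_workers)
{
  int i = (!poll_main_thread && num_workers) ? 1 : 0;
  for (; i < num_workers + 1; i++)
    printf ("initialize wheel for thread %d\n", i);
}

int
main (void)
{
  init_wheels (0, 2);  /* workers only: threads 1..2 */
  init_wheels (1, 2);  /* poll-main-thread: threads 0..2 */
  return 0;
}
```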


@ -68,6 +68,7 @@ typedef struct
f64 bandwidth;
f64 packet_size;
f64 drop_fraction;
u32 poll_main_thread;
u64 mmap_size;


@ -1,178 +0,0 @@
SRv6 Mobile User Plane Plugin for VPP
========================
## Introduction
This plugin module can provide the stateless mobile user plane protocols translation between GTP-U and SRv6.
The functions of the translation take advantage of SRv6 network programmability.
[SRv6 Mobile User Plane](https://tools.ietf.org/html/draft-ietf-dmm-srv6-mobile-uplane) defines the user plane protocol using SRv6
including following stateless translation functions:
- **T.M.GTP4.D:**
GTP-U over UDP/IPv4 -> SRv6
- **End.M.GTP4.E:**
SRv6 -> GTP-U over UDP/IPv4
- **End.M.GTP6.D:**
GTP-U over UDP/IPv6 -> SRv6
- **End.M.GTP6.E:**
SRv6 -> GTP-U over UDP/IPv6
These functions benefit user plane(overlay) to be able to utilize data plane(underlay) networks properly. And also it benefits
data plane to be able to handle user plane in routing paradigm.
## Getting started
To play with SRv6 Mobile User Plane on VPP, you need to install following packages:
docker
python3
pip3
Python packages (use pip):
docker
scapy
jinja2
### Quick-start
1. Build up the docker container image as following:
```
$ git clone https://github.com/filvarga/srv6-mobile.git
$ cd ./srv6-mobile/extras/ietf105
$ ./runner.py infra build
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ietf105-image latest 577e786b7ec6 2 days ago 5.57GB
ubuntu 18.04 4c108a37151f 4 weeks ago 64.2MB
```
The runner script [runner.py](test/runner.py) has features to automate configurations and procedures for the test.
2. Instantiate test Scenario
Let's try following command to instantiate a topology:
```
$ ./runner.py infra start
```
This command instantiates 4 VPP containers with following topology:
![Topology Diagram](test/topo-init.png)
You can check the instantiated docker instances with "docker ps".
```
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
44cb98994500 ietf105-image "/bin/sh -c 'vpp -c …" About a minute ago Up About a minute hck-vpp-4
6d65fff8aee9 ietf105-image "/bin/sh -c 'vpp -c …" About a minute ago Up About a minute hck-vpp-3
ad123b516b24 ietf105-image "/bin/sh -c 'vpp -c …" About a minute ago Up About a minute hck-vpp-2
5efed405b96a ietf105-image "/bin/sh -c 'vpp -c …" About a minute ago Up About a minute hck-vpp-1
```
You can login to and configure each instantiated container.
```
$ ./runner.py cmd vppctl 0
Verified image: None
connecting to: hck-vpp-1
_______ _ _ _____ ___
__/ __/ _ \ (_)__ | | / / _ \/ _ \
_/ _// // / / / _ \ | |/ / ___/ ___/
/_/ /____(_)_/\___/ |___/_/ /_/
vpp#
```
## Test Scenarios
### SRv6 Drop-in between GTP-U tunnel
This test scenario introduces SRv6 path between GTP-U tunnel transparently. A GTP-U packet sent out from one end to another is translated to SRv6 and then back to GTP-U. All GTP-U tunnel identifiers are preserved in IPv6 header and SRH.
#### GTP-U over UDP/IPv4 case
This case uses SRv6 end functions, T.M.GTP4.D and End.M.GTP4.E.
![Topology Diagram](test/topo-test_gtp4d.png)
VPP1 is configured with "T.M.GTP4.D", and VPP4 is configured with "End.M.GTP4.E". Others are configured with "End". The packet generator sends a GTP-U packet over UDP/IPv4 toward the packet capture. VPP1 translates it to SRv6 toward D4::TEID with SR policy <D2::, D3::> in SRH. VPP4 translates the SRv6 packet to the original GTP-U packet and send out to the packet capture.
To start this case with IPv4 payload over GTP-U, you can run:
```
$ ./runner.py test tmap
```
If you want to use IPv6 payload instead of IPv4, you can run:
```
$ ./runner.py test tmap_ipv6
```
#### GTP-U over UDP/IPv6 case
This case uses SRv6 end functions, End.M.GTP6.D.Di and End.M.GTP6.E.
![Topology Diagram](test/topo-test_gtp6d.png)
VPP1 is configured with "End.M.GTP6.D.Di", and VPP4 is configured with "End.M.GTP4.E". Others are configured with "End". The packet generator sends a GTP-U packet over UDP/IPv6 toward D:: of the packet capture. VPP1 translates it to SRv6 toward D:: with SR policy <D2::, D3::, D4::TEID> in SRH. VPP4 translates the SRv6 packet to the original GTP-U packet and send out to the packet capture.
To start this case with IPv4 payload over GTP-U, you can run:
```
$ ./runner.py test gtp6_drop_in
```
If you want to use IPv6 payload instead of IPv4, you can run:
```
$ ./runner.py test gtp6_drop_in_ipv6
```
### GTP-U to SRv6
This test scenario demonstrates GTP-U to SRv6 translation. A GTP-U packet sent out from one end to another is translated to SRv6.
#### GTP-U over UDP/IPv6 case
##### IPv4 payload
This case uses SRv6 end functions, End.M.GTP6.D and End.DT4.
![Topology Diagram](test/topo-test_gtp6.png)
VPP1 is configured with "End.M.GTP6.D", and VPP4 is configured with "End.DT4". Others are configured with "End". The packet generator sends a GTP-U packet over UDP/IPv6 toward D::2. VPP1 translates it to SRv6 toward the IPv6 destination consists of D4:: and TEID of GTP-U with SR policy <D2::, D3::> in SRH. VPP4 decapsulates the SRv6 packet and lookup the table for the inner IPv4 packet and send out to the packet capture.
To start this case, you can run:
```
$ ./runner.py test gtp6
```
##### IPv6 payload
This case uses SRv6 end functions, End.M.GTP6.D and End.DT6.
![Topology Diagram](test/topo-test_gtp6ip6.png)
The configurations are same with IPv4 payload case, except D4:: is configured as "End.DT6" in VPP4. VPP4 decapsulates the SRv6 packet and lookup the table for the inner IPv6 packet and send out to the packet capture.
If you want to use IPv6 payload instead of IPv4, you can run:
```
$ ./runner.py test gtp6_ipv6
```
## More information
TBD


@ -1,173 +0,0 @@
# What's `runner.py` doing?
## Common configurations
### VPP1
```
create host-interface name eth1
set int ip addr host-eth1 A1::1/120
set int state host-eth1 up
ip route add ::/0 via host-eth1 A1::2
```
### VPP2
```
create host-interface name eth1
set int ip addr host-eth1 A1::2/120
create host-interface name eth2
set int ip addr host-eth2 A2::1/120
set int state host-eth1 up
set int state host-eth2 up
ip route add ::/0 via host-eth2 A2::2
```
### VPP3
```
create host-interface name eth1
set int ip addr host-eth1 A2::2/120
create host-interface name eth2
set int ip addr host-eth2 A3::1/120
set int state host-eth1 up
set int state host-eth2 up
ip route add ::/0 via host-eth1 A2::1
```
### VPP4
```
create host-interface name eth1
set int ip addr host-eth1 A3::2/120
set int state host-eth1 up
ip route add ::/0 via host-eth1 A3::1
```
## Drop-in for GTP-U over IPv4
What's happened when you run `test tmap`:
$ ./runner.py test tmap
Setting up a virtual interface of packet generator:
#### VPP1
```
create packet-generator interface pg0
set int mac address pg0 aa:bb:cc:dd:ee:01
set int ip addr pg0 172.16.0.1/30
set ip arp pg0 172.16.0.2/30 aa:bb:cc:dd:ee:02
```
#### VPP4
```
create packet-generator interface pg0
set int mac address pg0 aa:bb:cc:dd:ee:11
set int ip addr pg0 1.0.0.2/30
set ip arp pg0 1.0.0.1 aa:bb:cc:dd:ee:22
```
SRv6 and IP routing settings:
#### VPP1
```
sr policy add bsid D1:: next D2:: next D3:: gtp4_removal sr_prefix D4::/32 v6src_prefix C1::/64
sr steer l3 172.20.0.1/32 via bsid D1::
```
#### VPP2
```
sr localsid address D2:: behavior end
ip route add D3::/128 via host-eth2 A2::2
```
#### VPP3
```
sr localsid address D3:: behavior end
ip route add D4::/32 via host-eth2 A3::2
```
#### VPP4
```
sr localsid prefix D4::/32 behavior end.m.gtp4.e v4src_position 64
ip route add 172.20.0.1/32 via pg0 1.0.0.1
```
## Packet generator and testing
Example how to build custom SRv6 packet in scapy and ipaddress pkgs
s = '\x11' * 4 + IPv4Address(u"192.168.192.10").packed + '\x11' * 8
ip6 = IPv6Address(s)
IPv6(dst=ip6, src=ip6)
## end.m.gtp4.e
First set behavior so our localsid node is called with the packet
matching C1::1 in fib table
sr localsid address C1::1 behavior end.m.gtp4.ess
show sr localsids behaviors
show sr localsid
We should send a well formated packet to C::1 destination address
that contains the correct spec as for end.m.gtp4.e with encapsulated
ipv4 src and dst address and teid with port for the conversion to
GTPU IPv4 packet
## additional commands
gdb - breakpoint
break sr_policy_rewrite.c:1620
break src/plugins/srv6-end/node.c:84
TMAP
Linux:
ip link add tmp1 type veth peer name tmp2
ip link set dev tmp1 up
ip link set dev tmp2 up
ip addr add 172.20.0.2/24 dev tmp2
create host-interface name tmp1
set int mac address host-tmp1 02:fe:98:c6:c8:7b
set interface ip address host-tmp1 172.20.0.1/24
set interface state host-tmp1 up
VPP
set sr encaps source addr C1::
sr policy add bsid D1::999:2 next D2:: next D3:: gtp4_removal sr-prefix fc34:5678::/64 local-prefix C1::/64
sr steer l3 172.21.0.0/24 via bsid d1::999:2
END
Linux
create host-interface name tmp1
set int mac address host-tmp1 02:fe:98:c6:c8:7b
set interface ip address host-tmp1 A1::1/64
set interface state host-tmp1 up
VPP
sr localsid address 1111:1111:c0a8:c00a:1122:1111:1111:1111 behavior end.m.gtp4.e
trace add af-packet-input 10
sr localsid address C3:: behavior end.m.gtp4.e
sr localsid address 2001:200:0:1ce1:3000:757f:0:2 behavior end.m.gtp4.e


@ -0,0 +1,105 @@
# What's `runner.py` doing? {#srv6_mobile_runner_doc}
## Common configurations
### VPP1
```
create host-interface name eth1
set int ip addr host-eth1 A1::1/120
set int state host-eth1 up
ip route add ::/0 via host-eth1 A1::2
```
### VPP2
```
create host-interface name eth1
set int ip addr host-eth1 A1::2/120
create host-interface name eth2
set int ip addr host-eth2 A2::1/120
set int state host-eth1 up
set int state host-eth2 up
ip route add ::/0 via host-eth2 A2::2
```
### VPP3
```
create host-interface name eth1
set int ip addr host-eth1 A2::2/120
create host-interface name eth2
set int ip addr host-eth2 A3::1/120
set int state host-eth1 up
set int state host-eth2 up
ip route add ::/0 via host-eth1 A2::1
```
### VPP4
```
create host-interface name eth1
set int ip addr host-eth1 A3::2/120
set int state host-eth1 up
ip route add ::/0 via host-eth1 A3::1
```
## Drop-in for GTP-U over IPv4
Drop-in mode is handy to test both GTP-U-to-SRv6 and SRv6-to-GTP-U functions at same time. Let's see what's happened when you run `test gtp4`:
$ ./runner.py test gtp4
Setting up a virtual interface of packet generator:
#### VPP1
```
create packet-generator interface pg0
set int mac address pg0 aa:bb:cc:dd:ee:01
set int ip addr pg0 172.16.0.1/30
set ip arp pg0 172.16.0.2/30 aa:bb:cc:dd:ee:02
```
#### VPP4
```
create packet-generator interface pg0
set int mac address pg0 aa:bb:cc:dd:ee:11
set int ip addr pg0 1.0.0.2/30
set ip arp pg0 1.0.0.1 aa:bb:cc:dd:ee:22
```
SRv6 and IP routing settings:
#### VPP1
```
sr policy add bsid D4:: next D2:: next D3::
sr policy add bsid D5:: behavior t.m.gtp4.d D4::/32 v6src_prefix C1::/64 nhtype ipv4
sr steer l3 172.20.0.1/32 via bsid D5::
```
#### VPP2
```
sr localsid address D2:: behavior end
ip route add D3::/128 via host-eth2 A2::2
```
#### VPP3
```
sr localsid address D3:: behavior end
ip route add D4::/32 via host-eth2 A3::2
```
#### VPP4
```
sr localsid prefix D4::/32 behavior end.m.gtp4.e v4src_position 64
ip route add 172.20.0.1/32 via pg0 1.0.0.1
```


@ -0,0 +1,142 @@
SRv6 Mobile User Plane Plugins {#srv6_mobile_plugin_doc}
========================
# Introduction
This plugin module can provide the stateless mobile user plane protocols translation between GTP-U and SRv6. The functions of the translation take advantage of SRv6 network programmability.
[SRv6 Mobile User Plane](https://tools.ietf.org/html/draft-ietf-dmm-srv6-mobile-uplane) defines the user plane protocol using SRv6
including following stateless translation functions:
- **T.M.GTP4.D:**
GTP-U over UDP/IPv4 -> SRv6
- **End.M.GTP4.E:**
SRv6 -> GTP-U over UDP/IPv4
- **End.M.GTP6.D:**
GTP-U over UDP/IPv6 -> SRv6
- **End.M.GTP6.E:**
SRv6 -> GTP-U over UDP/IPv6
These functions benefit user plane(overlay) to be able to utilize data plane(underlay) networks properly. And also it benefits data plane to be able to handle user plane in routing paradigm.
Noted that the prefix of function names follow naming convention of SRv6 network programming. "T" means transit function, "End" means end function, "M" means Mobility specific function. The suffix "D" and "E" mean that "decapsulation" and "encapsulation" respectively.
# Implementation
All SRv6 mobile functions are implemented as VPP plugin modules. The plugin modules leverage the sr_policy and sr_localsid mechanisms.
# Configurations
## GTP-U to SRv6
The GTP-U tunnel and flow identifiers of a receiving packet are mapped to a Segment Identifier(SID) of sending SRv6 packets.
### IPv4 infrastructure case
In case that **IPv4** networks are the infrastructure of GTP-U, T.M.GTP4.D function translates the receiving GTP-U packets to SRv6 packets.
A T.M.GTP4.D function is associated with the following mandatory parameters:
- SID: A SRv6 SID to represents the function
- DST-PREFIX: Prefix of remote SRv6 segment. The destination address or last SID of out packets consists of the prefix followed by dst IPv4 address, QFI and TEID of the receiving packets.
- SRC-PREFIX: Prefix for src address of sending packets. The src IPv6 address consists of the prefix followed by the src IPv4 address of the receiving packets.
The following command instantiates a new T.M.GTP4.D function.
```
sr policy add bsid SID behavior t.m.gtp4.d DST-PREFIX v6src_prefix SRC-PREFIX [nhtype {ipv4|ipv6|non-ip}]
```
For example, the below command configures the SID 2001:db8::1 with `t.m.gtp4.d` behavior for translating receiving GTP-U over IPv4 packets to SRv6 packets with next-header type is IPv4.
```
sr policy add bsid 2001:db8::1 behavior t.m.gtp4.d D1::/32 v6src_prefix A1::/64 nhtype ipv4
```
It should be interesting how a SRv6 BSID works to decapsulate the receiving GTP-U packets over IPv4 header. To utilize ```t.m.gtp4.d``` function, you need to configure some SR steering policy like:
```
sr steer l3 172.20.0.1/32 via bsid 2001:db8::1
```
The above steering policy with the BSID of `t.m.gtp4.d` would work properly for the GTP-U packets destined to 172.20.0.1.
If you have a SID(s) list of SR policy which the configured gtp4.d function to be applied, the SR Policy can be configured as following:
```
sr policy add bsid D1:: next A1:: next B1:: next C1::
```
### IPv6 infrastructure case
In case that GTP-U is deployed over **IPv6** infrastructure, you don't need to configure T.M.GTP4.D function and associated SR steering policy. Instead of that, you just need to configure a localsid of End.M.GTP6.D segment.
An End.M.GTP6.D segment is associated with the following mandatory parameters:
- SID-PREFIX: SRv6 SID prefix to represent the function. In this function, it should be the dst address of receiving GTP-U packets.
- DST-PREFIX: Prefix of remote SRv6 Segment. The destination address or last SID of output packets consists of the prefix followed by QFI and TEID of the receiving packets.
The following command instantiates a new End.M.GTP6.D function.
```
sr localsid prefix SID-PREFIX behavior end.m.gtp6.d DST-PREFIX [nhtype {ipv4|ipv6|non-ip}]
```
For example, the below command configures the SID prefix 2001:db8::/64 with `end.m.gtp6.d` behavior for translating receiving GTP-U over IPv6 packets which have IPv6 destination addresses within 2001:db8::/64 to SRv6 packets. The dst IPv6 address of the outgoing packets consists of D4::/64 followed by QFI and TEID.
```
sr localsid prefix 2001:db8::/64 behavior end.m.gtp6.d D4::/64
```
In another case, the translated packets from GTP-U over IPv6 to SRv6 will be re-translated back to GTP-U, which is so called 'Drop-In' mode.
In Drop-In mode, an additional IPv6 specific end segment is required, named End.M.GTP6.D.Di. It is because that unlike `end.m.gtp6.d`, it needs to preserve original IPv6 dst address as the last SID in the SRH.
Regardless of that difference exists, the required configuration parameters are same as `end.m.gtp6.d`.
The following command instantiates a new End.M.GTP6.D.Di function.
```
sr localsid prefix 2001:db8::/64 behavior end.m.gtp6.d.di D4::/64
```
## SRv6 to GTP-U
The SRv6 Mobile functions on SRv6 to GTP-U direction are End.M.GTP4.E and End.M.GTP6.D.
In this direction with GTP-U over IPv4 infrastructure, an End.M.GTP4.E segment is associated with the following mandatory parameters:
- SID-PREFIX: SRv6 SID prefix to represent the function.
- V4SRC-ADDR-POSITION: Integer number indicates bit position where IPv4 src address embedded.
The following command instantiates a new End.M.GTP4.E function.
```
sr localsid prefix SID-PREFIX behavior end.m.gtp4.e v4src_position V4SRC-ADDR-POSITION
```
For example, the below command configures the SID prefix 2001:db8::/32 with `end.m.gtp4.e` behavior for translating the receiving SRv6 packets to GTP-U packets encapsulated with UDP/IPv4 header. All the GTP-U tunnel and flow identifiers are extracted from the active SID in the receiving packets. The src IPv4 address of sending GTP-U packets is extracted from the configured bit position in the src IPv6 address.
```
sr localsid prefix 2001:db8::/32 behavior end.m.gtp4.e v4src_position 64
```
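As a hedged illustration of the `v4src_position 64` example above (not part of the patch), the snippet below pulls the embedded 32-bit IPv4 source address out of a 16-byte IPv6 address starting at bit 64, i.e. bytes 8-11; the plugin's actual extraction may differ in detail.

```
#include <stdio.h>
#include <stdint.h>

/* Extract the 32-bit IPv4 address embedded at a (byte-aligned, for brevity)
 * bit position of an IPv6 address.  With v4src_position 64 the address
 * occupies bytes 8..11 of the 128-bit source address. */
static uint32_t
extract_v4src (const uint8_t ip6[16], unsigned bit_position)
{
  unsigned byte = bit_position / 8;
  return ((uint32_t) ip6[byte] << 24) | ((uint32_t) ip6[byte + 1] << 16) |
         ((uint32_t) ip6[byte + 2] << 8) | (uint32_t) ip6[byte + 3];
}

int
main (void)
{
  /* C1:: prefix with 192.0.2.1 embedded at bit 64 (example values). */
  uint8_t src[16] = { 0xc1, 0, 0, 0, 0, 0, 0, 0,
                      0xc0, 0x00, 0x02, 0x01, 0, 0, 0, 0 };
  uint32_t v4 = extract_v4src (src, 64);
  printf ("%u.%u.%u.%u\n", v4 >> 24, (v4 >> 16) & 0xff,
          (v4 >> 8) & 0xff, v4 & 0xff);
  return 0;
}
```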
In IPv6 infrastructure case, an End.M.GTP6.E segment is associated with the following mandatory parameters:
- SID-PREFIX: SRv6 SID prefix to represent the function.
The following command instantiates a new End.M.GTP6.E function.
```
sr localsid prefix SID-PREFIX behavior end.m.gtp6.e
```
For example, the below command configures the SID prefix 2001:db8::/64 with `end.m.gtp6.e` behavior for translating the receiving SRv6 packets to GTP-U packets encapsulated with UDP/IPv6 header. While the last SID indicates GTP-U dst IPv6 address, 32-bits GTP-U TEID and 6-bits QFI are extracted from the active SID in the receiving packets.
```
sr localsid prefix 2001:db8::/64 behavior end.m.gtp6.e
```
To run some demo setup please refer to: @subpage srv6_mobile_runner_doc


@ -671,6 +671,16 @@ vl_msg_api_process_file (vlib_main_t * vm, u8 * filename,
am->replay_in_progress = 0;
}
/** api_trace_command_fn - control the binary API trace / replay feature
Note: this command MUST be marked thread-safe. Replay with
multiple worker threads depends in many cases on worker thread
graph replica maintenance. If we (implicitly) assert a worker
thread barrier at the debug CLI level, all graph replica changes
are deferred until the replay operation completes. If an interface
is deleted, the wheels fall off.
*/
static clib_error_t *
api_trace_command_fn (vlib_main_t * vm,
unformat_input_t * input, vlib_cli_command_t * cmd)
@ -691,12 +701,16 @@ api_trace_command_fn (vlib_main_t * vm,
{
if (unformat (input, "nitems %d", &nitems))
;
vlib_worker_thread_barrier_sync (vm);
vl_msg_api_trace_configure (am, which, nitems);
vl_msg_api_trace_onoff (am, which, 1 /* on */ );
vlib_worker_thread_barrier_release (vm);
}
else if (unformat (input, "off"))
{
vlib_worker_thread_barrier_sync (vm);
vl_msg_api_trace_onoff (am, which, 0);
vlib_worker_thread_barrier_release (vm);
}
else if (unformat (input, "save %s", &filename))
{
@ -718,7 +732,9 @@ api_trace_command_fn (vlib_main_t * vm,
vlib_cli_output (vm, "Couldn't create %s\n", chroot_filename);
goto out;
}
vlib_worker_thread_barrier_sync (vm);
rv = vl_msg_api_trace_save (am, which, fp);
vlib_worker_thread_barrier_release (vm);
fclose (fp);
if (rv == -1)
vlib_cli_output (vm, "API Trace data not present\n");
@ -775,8 +791,10 @@ api_trace_command_fn (vlib_main_t * vm,
}
else if (unformat (input, "free"))
{
vlib_worker_thread_barrier_sync (vm);
vl_msg_api_trace_onoff (am, which, 0);
vl_msg_api_trace_free (am, which);
vlib_worker_thread_barrier_release (vm);
}
else if (unformat (input, "post-mortem-on"))
vl_msg_api_post_mortem_dump_enable_disable (1 /* enable */ );
@ -801,8 +819,9 @@ VLIB_CLI_COMMAND (api_trace_command, static) =
{
.path = "api trace",
.short_help = "api trace [on|off][first <n>][last <n>][status][free]"
"[post-mortem-on][dump|custom-dump|save|replay <file>]",
"[post-mortem-on][dump|custom-dump|save|replay <file>]",
.function = api_trace_command_fn,
.is_mp_safe = 1,
};
/* *INDENT-ON* */
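
The hunks above make the handler mp-safe and bracket only the trace configure/save/free calls with `vlib_worker_thread_barrier_sync()` / `_release()`, instead of letting the CLI hold an implicit barrier for the whole replay. A loose plain-C analogy (a pthread rwlock standing in for VPP's worker barrier, so purely illustrative): workers hold the lock in read mode per burst, and the control path takes the write lock only around the state change.

```
#include <pthread.h>
#include <stdio.h>

/* Analogy only: VPP uses the worker thread barrier, not a pthread rwlock. */
static pthread_rwlock_t graph_lock = PTHREAD_RWLOCK_INITIALIZER;
static int trace_enabled;

/* Worker: holds the lock in read mode for each burst of work. */
static void *
worker (void *arg)
{
  (void) arg;
  for (int burst = 0; burst < 1000; burst++)
    {
      pthread_rwlock_rdlock (&graph_lock);
      /* ... process packets, observing trace_enabled ... */
      pthread_rwlock_unlock (&graph_lock);
    }
  return 0;
}

/* Control path: takes the write lock only around the state change, the way
 * the mp-safe CLI handler brackets just the trace calls with the barrier. */
static void
trace_onoff (int enable)
{
  pthread_rwlock_wrlock (&graph_lock);
  trace_enabled = enable;
  pthread_rwlock_unlock (&graph_lock);
}

int
main (void)
{
  pthread_t t;
  pthread_create (&t, 0, worker, 0);
  trace_onoff (1);
  trace_onoff (0);
  pthread_join (t, 0);
  printf ("trace_enabled=%d\n", trace_enabled);
  return 0;
}
```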


@ -347,7 +347,7 @@ adj_nbr_update_rewrite_internal (ip_adjacency_t *adj,
u8 *rewrite)
{
ip_adjacency_t *walk_adj;
adj_index_t walk_ai;
adj_index_t walk_ai, ai;
vlib_main_t * vm;
u32 old_next;
int do_walk;
@ -355,7 +355,7 @@ adj_nbr_update_rewrite_internal (ip_adjacency_t *adj,
vm = vlib_get_main();
old_next = adj->lookup_next_index;
walk_ai = adj_get_index(adj);
ai = walk_ai = adj_get_index(adj);
if (VNET_LINK_MPLS == adj->ia_link)
{
/*
@ -399,7 +399,7 @@ adj_nbr_update_rewrite_internal (ip_adjacency_t *adj,
* DPO, this adj will no longer be in use and its lock count will drop to 0.
* We don't want it to be deleted as part of this endeavour.
*/
adj_lock(adj_get_index(adj));
adj_lock(ai);
adj_lock(walk_ai);
/*
@ -511,10 +511,11 @@ adj_nbr_update_rewrite_internal (ip_adjacency_t *adj,
*/
if (do_walk)
{
walk_adj = adj_get(walk_ai);
walk_adj->ia_flags &= ~ADJ_FLAG_SYNC_WALK_ACTIVE;
}
adj_unlock(adj_get_index(adj));
adj_unlock(ai);
adj_unlock(walk_ai);
}
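
The hunk above captures the adjacency index once (`ai = walk_ai = adj_get_index(adj)`) and locks/unlocks through it, because the walk may reallocate the adjacency pool and invalidate the `adj` pointer. A generic sketch of the same idea with a `realloc()`-backed pool (toy types, not VPP's adj pool):

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy element pool backed by realloc(): growth can move the array, so any
 * raw element pointer may be invalidated while an index stays valid. */
typedef struct { int lock; } elt_t;

static elt_t *pool;
static size_t pool_len;

static size_t
pool_add (void)
{
  pool = realloc (pool, (pool_len + 1) * sizeof (elt_t));
  memset (&pool[pool_len], 0, sizeof (elt_t));
  return pool_len++;
}

int
main (void)
{
  size_t ai = pool_add ();         /* capture the stable index up front  */
  pool[ai].lock++;

  for (int i = 0; i < 1000; i++)   /* the "walk": may grow/move the pool */
    pool_add ();

  /* The index is still usable even though realloc() may have moved the
   * array; a pointer captured before the walk might now be dangling. */
  pool[ai].lock--;
  printf ("lock=%d len=%zu\n", pool[ai].lock, pool_len);
  free (pool);
  return 0;
}
```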


@ -559,17 +559,7 @@ vhost_user_if_input (vlib_main_t * vm,
}
}
if (PREDICT_TRUE (vui->is_any_layout) ||
(!(desc_table[desc_current].flags & VIRTQ_DESC_F_NEXT)))
{
/* ANYLAYOUT or single buffer */
desc_data_offset = vui->virtio_net_hdr_sz;
}
else
{
/* CSR case without ANYLAYOUT, skip 1st buffer */
desc_data_offset = desc_table[desc_current].len;
}
desc_data_offset = vui->virtio_net_hdr_sz;
if (enable_csum)
{


@ -580,6 +580,13 @@ fib_table_entry_path_add (u32 fib_index,
return (fib_entry_index);
}
static int
fib_route_path_cmp_for_sort (void * v1,
void * v2)
{
return (fib_route_path_cmp(v1, v2));
}
fib_node_index_t
fib_table_entry_path_add2 (u32 fib_index,
const fib_prefix_t *prefix,
@ -598,6 +605,11 @@ fib_table_entry_path_add2 (u32 fib_index,
{
fib_table_route_path_fixup(prefix, &flags, &rpaths[ii]);
}
/*
* sort the paths provided by the control plane. this means
* the paths and the extension on the entry will be sorted.
*/
vec_sort_with_function(rpaths, fib_route_path_cmp_for_sort);
if (FIB_NODE_INDEX_INVALID == fib_entry_index)
{
@ -740,13 +752,6 @@ fib_table_entry_path_remove (u32 fib_index,
vec_free(paths);
}
static int
fib_route_path_cmp_for_sort (void * v1,
void * v2)
{
return (fib_route_path_cmp(v1, v2));
}
fib_node_index_t
fib_table_entry_update (u32 fib_index,
const fib_prefix_t *prefix,
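
The hunk above sorts the control-plane supplied paths (`vec_sort_with_function` with `fib_route_path_cmp_for_sort`) before they are applied, so the path list and the entry's path extensions stay in one canonical order and a later removal matches what was added. A generic sketch of the idea with `qsort()` and a toy path type:

```
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for a FIB route path: sorting by a stable key keeps the
 * path list (and anything indexed parallel to it, like the entry's path
 * extensions) in one canonical order, so an add followed by a remove of
 * the same set touches the same elements.  Illustrative only. */
typedef struct { unsigned next_hop; unsigned weight; } route_path_t;

static int
route_path_cmp_for_sort (const void *a, const void *b)
{
  const route_path_t *p1 = a, *p2 = b;
  if (p1->next_hop != p2->next_hop)
    return p1->next_hop < p2->next_hop ? -1 : 1;
  return (int) p1->weight - (int) p2->weight;
}

int
main (void)
{
  route_path_t paths[] = { { 30, 1 }, { 10, 1 }, { 20, 1 } };
  size_t n = sizeof (paths) / sizeof (paths[0]);

  /* Canonicalize the control-plane supplied order before applying it. */
  qsort (paths, n, sizeof (paths[0]), route_path_cmp_for_sort);

  for (size_t i = 0; i < n; i++)
    printf ("path %zu: next-hop %u weight %u\n", i, paths[i].next_hop,
            paths[i].weight);
  return 0;
}
```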


@ -167,9 +167,11 @@ ipsec_register_esp_backend (vlib_main_t * vm, ipsec_main_t * im,
const char *esp4_encrypt_node_name,
const char *esp4_encrypt_node_tun_name,
const char *esp4_decrypt_node_name,
const char *esp4_decrypt_tun_node_name,
const char *esp6_encrypt_node_name,
const char *esp6_encrypt_node_tun_name,
const char *esp6_decrypt_node_name,
const char *esp6_decrypt_tun_node_name,
check_support_cb_t esp_check_support_cb,
add_del_sa_sess_cb_t esp_add_del_sa_sess_cb)
{
@ -186,6 +188,12 @@ ipsec_register_esp_backend (vlib_main_t * vm, ipsec_main_t * im,
&b->esp6_encrypt_node_index, &b->esp6_encrypt_next_index);
ipsec_add_node (vm, esp6_decrypt_node_name, "ipsec6-input-feature",
&b->esp6_decrypt_node_index, &b->esp6_decrypt_next_index);
ipsec_add_node (vm, esp4_decrypt_tun_node_name, "ipsec4-tun-input",
&b->esp4_decrypt_tun_node_index,
&b->esp4_decrypt_tun_next_index);
ipsec_add_node (vm, esp6_decrypt_tun_node_name, "ipsec6-tun-input",
&b->esp6_decrypt_tun_node_index,
&b->esp6_decrypt_tun_next_index);
ipsec_add_feature ("ip4-output", esp4_encrypt_node_tun_name,
&b->esp44_encrypt_tun_feature_index);
@ -255,6 +263,10 @@ ipsec_select_esp_backend (ipsec_main_t * im, u32 backend_idx)
im->esp6_decrypt_node_index = b->esp6_decrypt_node_index;
im->esp6_encrypt_next_index = b->esp6_encrypt_next_index;
im->esp6_decrypt_next_index = b->esp6_decrypt_next_index;
im->esp4_decrypt_tun_node_index = b->esp4_decrypt_tun_node_index;
im->esp4_decrypt_tun_next_index = b->esp4_decrypt_tun_next_index;
im->esp6_decrypt_tun_node_index = b->esp6_decrypt_tun_node_index;
im->esp6_decrypt_tun_next_index = b->esp6_decrypt_tun_next_index;
im->esp44_encrypt_tun_feature_index = b->esp44_encrypt_tun_feature_index;
im->esp64_encrypt_tun_feature_index = b->esp64_encrypt_tun_feature_index;
@ -303,9 +315,11 @@ ipsec_init (vlib_main_t * vm)
"esp4-encrypt",
"esp4-encrypt-tun",
"esp4-decrypt",
"esp4-decrypt-tun",
"esp6-encrypt",
"esp6-encrypt-tun",
"esp6-decrypt",
"esp6-decrypt-tun",
ipsec_check_esp_support, NULL);
im->esp_default_backend = idx;


@ -61,6 +61,10 @@ typedef struct
u32 esp6_decrypt_node_index;
u32 esp6_encrypt_next_index;
u32 esp6_decrypt_next_index;
u32 esp4_decrypt_tun_node_index;
u32 esp4_decrypt_tun_next_index;
u32 esp6_decrypt_tun_node_index;
u32 esp6_decrypt_tun_next_index;
u32 esp44_encrypt_tun_feature_index;
u32 esp46_encrypt_tun_feature_index;
u32 esp66_encrypt_tun_feature_index;
@ -120,19 +124,23 @@ typedef struct
u32 error_drop_node_index;
u32 esp4_encrypt_node_index;
u32 esp4_decrypt_node_index;
u32 esp4_decrypt_tun_node_index;
u32 ah4_encrypt_node_index;
u32 ah4_decrypt_node_index;
u32 esp6_encrypt_node_index;
u32 esp6_decrypt_node_index;
u32 esp6_decrypt_tun_node_index;
u32 ah6_encrypt_node_index;
u32 ah6_decrypt_node_index;
/* next node indices */
u32 esp4_encrypt_next_index;
u32 esp4_decrypt_next_index;
u32 esp4_decrypt_tun_next_index;
u32 ah4_encrypt_next_index;
u32 ah4_decrypt_next_index;
u32 esp6_encrypt_next_index;
u32 esp6_decrypt_next_index;
u32 esp6_decrypt_tun_next_index;
u32 ah6_encrypt_next_index;
u32 ah6_decrypt_next_index;
@ -248,9 +256,11 @@ u32 ipsec_register_esp_backend (vlib_main_t * vm, ipsec_main_t * im,
const char *esp4_encrypt_node_name,
const char *esp4_encrypt_tun_node_name,
const char *esp4_decrypt_node_name,
const char *esp4_decrypt_tun_node_name,
const char *esp6_encrypt_node_name,
const char *esp6_encrypt_tun_node_name,
const char *esp6_decrypt_node_name,
const char *esp6_decrypt_tun_node_name,
check_support_cb_t esp_check_support_cb,
add_del_sa_sess_cb_t esp_add_del_sa_sess_cb);


@ -55,8 +55,7 @@ typedef enum ipsec_tun_next_t_
#define _(v, s) IPSEC_TUN_PROTECT_NEXT_##v,
foreach_ipsec_input_next
#undef _
IPSEC_TUN_PROTECT_NEXT_DECRYPT,
IPSEC_TUN_PROTECT_N_NEXT,
IPSEC_TUN_PROTECT_N_NEXT,
} ipsec_tun_next_t;
typedef struct
@ -311,7 +310,7 @@ ipsec_tun_protect_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
n_bytes = len0;
}
next[0] = IPSEC_TUN_PROTECT_NEXT_DECRYPT;
next[0] = im->esp4_decrypt_tun_next_index; //IPSEC_TUN_PROTECT_NEXT_DECRYPT;
}
trace00:
if (PREDICT_FALSE (is_trace))
@ -358,8 +357,7 @@ VLIB_NODE_FN (ipsec4_tun_input_node) (vlib_main_t * vm,
vlib_node_runtime_t * node,
vlib_frame_t * from_frame)
{
return ipsec_tun_protect_input_inline (vm, node, from_frame,
0 /* is_ip6 */ );
return ipsec_tun_protect_input_inline (vm, node, from_frame, 0);
}
/* *INDENT-OFF* */
@ -374,7 +372,6 @@ VLIB_REGISTER_NODE (ipsec4_tun_input_node) = {
.next_nodes = {
[IPSEC_TUN_PROTECT_NEXT_DROP] = "ip4-drop",
[IPSEC_TUN_PROTECT_NEXT_PUNT] = "punt-dispatch",
[IPSEC_TUN_PROTECT_NEXT_DECRYPT] = "esp4-decrypt-tun",
}
};
/* *INDENT-ON* */
@ -383,8 +380,7 @@ VLIB_NODE_FN (ipsec6_tun_input_node) (vlib_main_t * vm,
vlib_node_runtime_t * node,
vlib_frame_t * from_frame)
{
return ipsec_tun_protect_input_inline (vm, node, from_frame,
1 /* is_ip6 */ );
return ipsec_tun_protect_input_inline (vm, node, from_frame, 1);
}
/* *INDENT-OFF* */
@ -399,7 +395,6 @@ VLIB_REGISTER_NODE (ipsec6_tun_input_node) = {
.next_nodes = {
[IPSEC_TUN_PROTECT_NEXT_DROP] = "ip6-drop",
[IPSEC_TUN_PROTECT_NEXT_PUNT] = "punt-dispatch",
[IPSEC_TUN_PROTECT_NEXT_DECRYPT] = "esp6-decrypt-tun",
}
};
/* *INDENT-ON* */
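
The hunk above stops hard-wiring the tunnel decrypt next node and instead uses the next index published by the active ESP backend (`im->esp4_decrypt_tun_next_index` and the IPv6 counterpart), which is what lets the re-enabled DPDK backend take over tunnel decapsulation. A toy sketch of reading the arc from a per-backend descriptor (hypothetical names and index values):

```
#include <stdio.h>

/* Illustrative only: a per-backend table supplies the decrypt "next index"
 * instead of a value baked into the input node, so selecting a different
 * ESP backend transparently changes where tunnelled packets are sent. */
struct esp_backend { const char *name; unsigned decrypt_tun_next_index; };

static const struct esp_backend backends[] = {
  { "native", 3 },   /* hypothetical next-index values */
  { "dpdk",   7 },
};

static unsigned
tun_input_next (const struct esp_backend *active)
{
  return active->decrypt_tun_next_index;
}

int
main (void)
{
  for (unsigned i = 0; i < 2; i++)
    printf ("%s backend -> next index %u\n", backends[i].name,
            tun_input_next (&backends[i]));
  return 0;
}
```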


@ -501,7 +501,7 @@ format_session_queue_trace (u8 * s, va_list * args)
CLIB_UNUSED (vlib_node_t * node) = va_arg (*args, vlib_node_t *);
session_queue_trace_t *t = va_arg (*args, session_queue_trace_t *);
s = format (s, "SESSION_QUEUE: session index %d, server thread index %d",
s = format (s, "session index %d thread index %d",
t->session_index, t->server_thread_index);
return s;
}
@ -543,7 +543,7 @@ session_tx_trace_frame (vlib_main_t * vm, vlib_node_runtime_t * node,
for (i = 0; i < clib_min (n_trace, n_segs); i++)
{
b = vlib_get_buffer (vm, to_next[i - n_segs]);
b = vlib_get_buffer (vm, to_next[i]);
vlib_trace_buffer (vm, node, next_index, b, 1 /* follow_chain */ );
t = vlib_add_trace (vm, node, b, sizeof (*t));
t->session_index = s->session_index;
@ -1610,6 +1610,7 @@ session_queue_pre_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
session_main_t *sm = &session_main;
if (!sm->wrk[0].vpp_event_queue)
return 0;
node = vlib_node_get_runtime (vm, session_queue_node.index);
return session_queue_node_fn (vm, node, frame);
}
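
The last hunk fetches the session queue node's own runtime with `vlib_node_get_runtime()` before invoking it from the pre-input handler, matching the commit message. A toy sketch of delegating to a node via its own runtime record rather than the caller's (illustrative types only):

```
#include <stdio.h>

/* Illustrative only: when one node's function is invoked from another
 * node's context, fetch the callee's own runtime record by index rather
 * than reusing the caller's. */
struct node_runtime { const char *name; unsigned calls; };

static struct node_runtime runtimes[] = {
  { "pre-input", 0 },
  { "session-queue", 0 },
};

static unsigned
session_queue_fn (struct node_runtime *rt)
{
  rt->calls++;               /* stats accrue to the right node */
  return 0;
}

/* Pre-input handler delegating to the session queue node. */
static unsigned
pre_input_fn (struct node_runtime *caller_rt)
{
  (void) caller_rt;                         /* do not pass this along   */
  struct node_runtime *rt = &runtimes[1];   /* look up the callee's own */
  return session_queue_fn (rt);
}

int
main (void)
{
  pre_input_fn (&runtimes[0]);
  printf ("pre-input calls=%u, session-queue calls=%u\n",
          runtimes[0].calls, runtimes[1].calls);
  return 0;
}
```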

Some files were not shown because too many files have changed in this diff.