From patchwork Thu Feb 27 16:18:23 2020
X-Patchwork-Submitter: "Lukas Bartosik [C]"
X-Patchwork-Id: 66098
X-Patchwork-Delegate: gakhil@marvell.com
Subject: [dpdk-dev] [PATCH v5 01/15] examples/ipsec-secgw: add default rte flow for inline Rx
From: Lukasz Bartosik
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Ankur Dwivedi, Jerin Jacob, Narayana Prasad, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev
Date: Thu, 27 Feb 2020 17:18:23 +0100
Message-ID: <1582820317-7333-2-git-send-email-lbartosik@marvell.com>

From: Ankur Dwivedi

The default flow created would enable security
processing on all ESP packets. If the default flow is created, SA based rte_flow creation would be skipped. Signed-off-by: Ankur Dwivedi Signed-off-by: Anoob Joseph --- examples/ipsec-secgw/ipsec-secgw.c | 61 +++++++++++++++++++++++++++++++++----- examples/ipsec-secgw/ipsec.c | 5 +++- examples/ipsec-secgw/ipsec.h | 6 ++++ 3 files changed, 63 insertions(+), 9 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 4799bc9..e1ee7c3 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -129,6 +129,8 @@ struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } }; +struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; + #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" @@ -2432,6 +2434,48 @@ reassemble_init(void) return rc; } +static void +create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) +{ + struct rte_flow_action action[2]; + struct rte_flow_item pattern[2]; + struct rte_flow_attr attr = {0}; + struct rte_flow_error err; + struct rte_flow *flow; + int ret; + + if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY)) + return; + + /* Add the default rte_flow to enable SECURITY for all ESP packets */ + + pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP; + pattern[0].spec = NULL; + pattern[0].mask = NULL; + pattern[0].last = NULL; + pattern[1].type = RTE_FLOW_ITEM_TYPE_END; + + action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY; + action[0].conf = NULL; + action[1].type = RTE_FLOW_ACTION_TYPE_END; + action[1].conf = NULL; + + attr.ingress = 1; + + ret = rte_flow_validate(port_id, &attr, pattern, action, &err); + if (ret) + return; + + flow = rte_flow_create(port_id, &attr, pattern, action, &err); + if (flow == NULL) + return; + + flow_info_tbl[port_id].rx_def_flow = flow; + RTE_LOG(INFO, IPSEC, + "Created default flow enabling SECURITY for all ESP traffic on port %d\n", + port_id); +} + int32_t main(int32_t argc, char **argv) { @@ -2440,7 +2484,8 @@ main(int32_t argc, char **argv) uint32_t i; uint8_t socket_id; uint16_t portid; - uint64_t req_rx_offloads, req_tx_offloads; + uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; + uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; size_t sess_sz; /* init EAL */ @@ -2502,8 +2547,10 @@ main(int32_t argc, char **argv) if ((enabled_port_mask & (1 << portid)) == 0) continue; - sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads); - port_init(portid, req_rx_offloads, req_tx_offloads); + sa_check_offloads(portid, &req_rx_offloads[portid], + &req_tx_offloads[portid]); + port_init(portid, req_rx_offloads[portid], + req_tx_offloads[portid]); } cryptodevs_init(); @@ -2513,11 +2560,9 @@ main(int32_t argc, char **argv) if ((enabled_port_mask & (1 << portid)) == 0) continue; - /* - * Start device - * note: device must be started before a flow rule - * can be installed. 
- */ + /* Create flow before starting the device */ + create_default_ipsec_flow(portid, req_rx_offloads[portid]); + ret = rte_eth_dev_start(portid); if (ret < 0) rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index 6e81207..d406571 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -275,6 +275,10 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, unsigned int i; unsigned int j; + /* Don't create flow if default flow is created */ + if (flow_info_tbl[sa->portid].rx_def_flow) + return 0; + ret = rte_eth_dev_info_get(sa->portid, &dev_info); if (ret != 0) { RTE_LOG(ERR, IPSEC, @@ -410,7 +414,6 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, ips->security.ol_flags = sec_cap->ol_flags; ips->security.ctx = sec_ctx; } - sa->cdev_id_qp = 0; return 0; } diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 4f2fd61..8f5d382 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -87,6 +87,12 @@ struct app_sa_prm { extern struct app_sa_prm app_sa_prm; +struct flow_info { + struct rte_flow *rx_def_flow; +}; + +extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; + enum { IPSEC_SESSION_PRIMARY = 0, IPSEC_SESSION_FALLBACK = 1,
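A minimal teardown sketch for the default flow created by this patch; the patch itself adds no such helper, so the function name destroy_default_ipsec_flow() and its call site are assumptions for illustration only:

static void
destroy_default_ipsec_flow(uint16_t port_id)
{
	struct rte_flow_error err;

	/* Nothing to do if no default flow was created on this port */
	if (flow_info_tbl[port_id].rx_def_flow == NULL)
		return;

	/* Destroy the default ESP flow before the port is stopped */
	if (rte_flow_destroy(port_id, flow_info_tbl[port_id].rx_def_flow, &err))
		RTE_LOG(ERR, IPSEC,
			"Failed to destroy default flow on port %d\n", port_id);

	flow_info_tbl[port_id].rx_def_flow = NULL;
}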
From patchwork Thu Feb 27 16:18:24 2020
X-Patchwork-Submitter: "Lukas Bartosik [C]"
X-Patchwork-Id: 66099
Subject: [dpdk-dev] [PATCH v5 02/15] examples/ipsec-secgw: add framework for eventmode helper
From: Lukasz Bartosik
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev
Date: Thu, 27 Feb 2020 17:18:24 +0100
Message-ID: <1582820317-7333-3-git-send-email-lbartosik@marvell.com>

From: Anoob Joseph

Add a framework for the eventmode helper. Event mode involves initialization of multiple devices, such as eventdev and ethdev. Add routines to initialize and uninitialize the event device. Generate a default config for the event device if one is not specified in the configuration. Currently the event helper supports only a single event device.

Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/event_helper.c | 320 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 107 ++++++++++++ examples/ipsec-secgw/meson.build | 4 +- 4 files changed, 430 insertions(+), 2 deletions(-) create mode 100644 examples/ipsec-secgw/event_helper.c create mode 100644 examples/ipsec-secgw/event_helper.h diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index ad83d79..66d05d4 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -16,6 +16,7 @@ SRCS-y += sad.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c new file mode 100644 index 0000000..0c38474 --- /dev/null +++ b/examples/ipsec-secgw/event_helper.c @@ -0,0 +1,320 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. + */ +#include +#include + +#include "event_helper.h" + +static int +eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) +{ + int lcore_count, nb_eventdev, nb_eth_dev, ret; + struct eventdev_params *eventdev_config; + struct rte_event_dev_info dev_info; + + /* Get the number of event devices */ + nb_eventdev = rte_event_dev_count(); + if (nb_eventdev == 0) { + EH_LOG_ERR("No event devices detected"); + return -EINVAL; + } + + if (nb_eventdev != 1) { + EH_LOG_ERR("Event mode does not support multiple event devices.
" + "Please provide only one event device."); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + if (nb_eth_dev == 0) { + EH_LOG_ERR("No eth devices detected"); + return -EINVAL; + } + + /* Get the number of lcores */ + lcore_count = rte_lcore_count(); + + /* Read event device info */ + ret = rte_event_dev_info_get(0, &dev_info); + if (ret < 0) { + EH_LOG_ERR("Failed to read event device info %d", ret); + return ret; + } + + /* Check if enough ports are available */ + if (dev_info.max_event_ports < 2) { + EH_LOG_ERR("Not enough event ports available"); + return -EINVAL; + } + + /* Get the first event dev conf */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Save number of queues & ports available */ + eventdev_config->eventdev_id = 0; + eventdev_config->nb_eventqueue = dev_info.max_event_queues; + eventdev_config->nb_eventport = dev_info.max_event_ports; + eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES; + + /* Check if there are more queues than required */ + if (eventdev_config->nb_eventqueue > nb_eth_dev + 1) { + /* One queue is reserved for Tx */ + eventdev_config->nb_eventqueue = nb_eth_dev + 1; + } + + /* Check if there are more ports than required */ + if (eventdev_config->nb_eventport > lcore_count) { + /* One port per lcore is enough */ + eventdev_config->nb_eventport = lcore_count; + } + + /* Update the number of event devices */ + em_conf->nb_eventdev++; + + return 0; +} + +static int +eh_validate_conf(struct eventmode_conf *em_conf) +{ + int ret; + + /* + * Check if event devs are specified. Else probe the event devices + * and initialize the config with all ports & queues available + */ + if (em_conf->nb_eventdev == 0) { + ret = eh_set_default_conf_eventdev(em_conf); + if (ret != 0) + return ret; + } + + return 0; +} + +static int +eh_initialize_eventdev(struct eventmode_conf *em_conf) +{ + struct rte_event_queue_conf eventq_conf = {0}; + struct rte_event_dev_info evdev_default_conf; + struct rte_event_dev_config eventdev_conf; + struct eventdev_params *eventdev_config; + int nb_eventdev = em_conf->nb_eventdev; + uint8_t eventdev_id; + int nb_eventqueue; + uint8_t i, j; + int ret; + + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + /* Get event dev ID */ + eventdev_id = eventdev_config->eventdev_id; + + /* Get the number of queues */ + nb_eventqueue = eventdev_config->nb_eventqueue; + + /* Reset the default conf */ + memset(&evdev_default_conf, 0, + sizeof(struct rte_event_dev_info)); + + /* Get default conf of eventdev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR( + "Error in getting event device info[devID:%d]", + eventdev_id); + return ret; + } + + memset(&eventdev_conf, 0, sizeof(struct rte_event_dev_config)); + eventdev_conf.nb_events_limit = + evdev_default_conf.max_num_events; + eventdev_conf.nb_event_queues = nb_eventqueue; + eventdev_conf.nb_event_ports = + eventdev_config->nb_eventport; + eventdev_conf.nb_event_queue_flows = + evdev_default_conf.max_event_queue_flows; + eventdev_conf.nb_event_port_dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + eventdev_conf.nb_event_port_enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Configure event device */ + ret = rte_event_dev_configure(eventdev_id, &eventdev_conf); + if (ret < 0) { + EH_LOG_ERR("Error in configuring event device"); + return ret; + } + + /* Configure event 
queues */ + for (j = 0; j < nb_eventqueue; j++) { + + memset(&eventq_conf, 0, + sizeof(struct rte_event_queue_conf)); + + /* Per event dev queues can be ATQ or SINGLE LINK */ + eventq_conf.event_queue_cfg = + eventdev_config->ev_queue_mode; + /* + * All queues need to be set with sched_type as + * schedule type for the application stage. One queue + * would be reserved for the final eth tx stage. This + * will be an atomic queue. + */ + if (j == nb_eventqueue-1) { + eventq_conf.schedule_type = + RTE_SCHED_TYPE_ATOMIC; + } else { + eventq_conf.schedule_type = + em_conf->ext_params.sched_type; + } + + /* Set max atomic flows to 1024 */ + eventq_conf.nb_atomic_flows = 1024; + eventq_conf.nb_atomic_order_sequences = 1024; + + /* Setup the queue */ + ret = rte_event_queue_setup(eventdev_id, j, + &eventq_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event queue %d", + ret); + return ret; + } + } + + /* Configure event ports */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + ret = rte_event_port_setup(eventdev_id, j, NULL); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event port %d", + ret); + return ret; + } + } + } + + /* Start event devices */ + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + ret = rte_event_dev_start(eventdev_config->eventdev_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start event device %d, %d", + i, ret); + return ret; + } + } + return 0; +} + +int32_t +eh_devs_init(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t port_id; + int ret; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Validate the requested config */ + ret = eh_validate_conf(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to validate the requested config %d", ret); + return ret; + } + + /* Stop eth devices before setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + rte_eth_dev_stop(port_id); + } + + /* Setup eventdev */ + ret = eh_initialize_eventdev(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize event dev %d", ret); + return ret; + } + + /* Start eth devices after setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + ret = rte_eth_dev_start(port_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start eth dev %d, %d", + port_id, ret); + return ret; + } + } + + return 0; +} + +int32_t +eh_devs_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t id; + int ret, i; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Stop and release event devices */ + for (i = 0; i < em_conf->nb_eventdev; i++) { + + id = em_conf->eventdev_config[i].eventdev_id; + rte_event_dev_stop(id); + + ret = rte_event_dev_close(id); + if (ret < 0) { + EH_LOG_ERR("Failed to close event dev %d, %d", id, ret); + return ret; + } + } + + 
return 0; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h new file mode 100644 index 0000000..040f977 --- /dev/null +++ b/examples/ipsec-secgw/event_helper.h @@ -0,0 +1,107 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. + */ +#ifndef _EVENT_HELPER_H_ +#define _EVENT_HELPER_H_ + +#include + +#define RTE_LOGTYPE_EH RTE_LOGTYPE_USER4 + +#define EH_LOG_ERR(...) \ + RTE_LOG(ERR, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + +/* Max event devices supported */ +#define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS + +/** + * Packet transfer mode of the application + */ +enum eh_pkt_transfer_mode { + EH_PKT_TRANSFER_MODE_POLL = 0, + EH_PKT_TRANSFER_MODE_EVENT, +}; + +/* Event dev params */ +struct eventdev_params { + uint8_t eventdev_id; + uint8_t nb_eventqueue; + uint8_t nb_eventport; + uint8_t ev_queue_mode; +}; + +/* Eventmode conf data */ +struct eventmode_conf { + int nb_eventdev; + /**< No of event devs */ + struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; + /**< Per event dev conf */ + union { + RTE_STD_C11 + struct { + uint64_t sched_type : 2; + /**< Schedule type */ + }; + uint64_t u64; + } ext_params; + /**< 64 bit field to specify extended params */ +}; + +/** + * Event helper configuration + */ +struct eh_conf { + enum eh_pkt_transfer_mode mode; + /**< Packet transfer mode of the application */ + uint32_t eth_portmask; + /**< + * Mask of the eth ports to be used. This portmask would be + * checked while initializing devices using helper routines. + */ + void *mode_params; + /**< Mode specific parameters */ +}; + +/** + * Initialize event mode devices + * + * Application can call this function to get the event devices, eth devices + * and eth rx & tx adapters initialized according to the default config or + * config populated using the command line args. + * + * Application is expected to initialize the eth devices and then the event + * mode helper subsystem will stop & start eth devices according to its + * requirement. Call to this function should be done after the eth devices + * are successfully initialized. + * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. + */ +int32_t +eh_devs_init(struct eh_conf *conf); + +/** + * Release event mode devices + * + * Application can call this function to release event devices, + * eth rx & tx adapters according to the config. + * + * Call to this function should be done before application stops + * and closes eth devices. This function will not close and stop + * eth devices. + * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. 
+ */ +int32_t +eh_devs_uninit(struct eh_conf *conf); + +#endif /* _EVENT_HELPER_H_ */ diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 6bd5b78..2415d47 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -6,9 +6,9 @@ # To build this example as a standalone application with an already-installed # DPDK instance, use 'make' -deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec'] +deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( - 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', + 'esp.c', 'event_helper.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', 'parser.c', 'rt.c', 'sa.c', 'sad.c', 'sp4.c', 'sp6.c' )
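A minimal usage sketch of the helper API introduced by this patch, as a caller such as ipsec-secgw's main() might invoke it; the portmask value and the zero-initialized eventmode_conf are assumptions for illustration:

#include "event_helper.h"

static struct eventmode_conf em_conf; /* zeroed; helper generates defaults */

static int
setup_event_mode(void)
{
	struct eh_conf conf = {
		.mode = EH_PKT_TRANSFER_MODE_EVENT,
		.eth_portmask = 0x3,      /* example: use eth ports 0 and 1 */
		.mode_params = &em_conf,  /* empty conf => defaults generated */
	};
	int ret;

	/* Eth devices must already be initialized at this point; the
	 * helper stops and restarts them around eventdev setup. */
	ret = eh_devs_init(&conf);
	if (ret < 0)
		return ret;

	/* ... launch workers, run until termination ... */

	return eh_devs_uninit(&conf);
}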
From patchwork Thu Feb 27 16:18:25 2020
X-Patchwork-Submitter: "Lukas Bartosik [C]"
X-Patchwork-Id: 66100
Subject: [dpdk-dev] [PATCH v5 03/15] examples/ipsec-secgw: add eventdev port-lcore link
From: Lukasz Bartosik
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev
Date: Thu, 27 Feb 2020 17:18:25 +0100
Message-ID: <1582820317-7333-4-git-send-email-lbartosik@marvell.com>

From: Anoob Joseph

Add the event device port-lcore link and specify which event queues should be connected to the event port. Generate a default config for the event port-lcore links if one is not specified in the configuration. The default config routine checks the number of available ports and creates links according to the number of available cores. This patch also adds a new entry in the eventmode conf to denote that all queues are to be linked with every port, which enables one core to receive packets from all ethernet ports.

Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
examples/ipsec-secgw/event_helper.c | 126 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 33 ++++ 2 files changed, 159 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 0c38474..c90249f 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1,11 +1,33 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (C) 2020 Marvell International Ltd. */ +#include #include #include +#include #include "event_helper.h" +static inline unsigned int +eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) +{ + unsigned int next_core; + + /* Get next active core skipping cores reserved as eth cores */ + do { + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 0); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + prev_core = next_core; + } while (rte_bitmap_get(em_conf->eth_core_mask, next_core)); + + return next_core; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -77,6 +99,71 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_link(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + struct eh_event_link_info *link; + unsigned int lcore_id = -1; + int i, link_index; + + /* + * Create a 1:1 mapping from event ports to cores. If the number + * of event ports is lesser than the cores, some cores won't + * execute worker. If there are more event ports, then some ports + * won't be used. + * + */ + + /* + * The event queue-port mapping is done according to the link. Since + * we are falling back to the default link config, enabling + * "all_ev_queue_to_ev_port" mode flag. This will map all queues + * to the port.
+ */ + em_conf->ext_params.all_ev_queue_to_ev_port = 1; + + /* Get first event dev conf */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Loop through the ports */ + for (i = 0; i < eventdev_config->nb_eventport; i++) { + + /* Get next active core id */ + lcore_id = eh_get_next_active_core(em_conf, + lcore_id); + + if (lcore_id == RTE_MAX_LCORE) { + /* Reached max cores */ + return 0; + } + + /* Save the current combination as one link */ + + /* Get the index */ + link_index = em_conf->nb_link; + + /* Get the corresponding link */ + link = &(em_conf->link[link_index]); + + /* Save link */ + link->eventdev_id = eventdev_config->eventdev_id; + link->event_port_id = i; + link->lcore_id = lcore_id; + + /* + * Don't set eventq_id as by default all queues + * need to be mapped to the port, which is controlled + * by the operating mode. + */ + + /* Update number of links */ + em_conf->nb_link++; + } + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -91,6 +178,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if links are specified. Else generate a default config for + * the event ports used. + */ + if (em_conf->nb_link == 0) { + ret = eh_set_default_conf_link(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -102,6 +199,8 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) struct rte_event_dev_config eventdev_conf; struct eventdev_params *eventdev_config; int nb_eventdev = em_conf->nb_eventdev; + struct eh_event_link_info *link; + uint8_t *queue = NULL; uint8_t eventdev_id; int nb_eventqueue; uint8_t i, j; @@ -199,6 +298,33 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) } } + /* Make event queue - event port link */ + for (j = 0; j < em_conf->nb_link; j++) { + + /* Get link info */ + link = &(em_conf->link[j]); + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* + * If "all_ev_queue_to_ev_port" params flag is selected, all + * queues need to be mapped to the port. 
+ */ + if (em_conf->ext_params.all_ev_queue_to_ev_port) + queue = NULL; + else + queue = &(link->eventq_id); + + /* Link queue to port */ + ret = rte_event_port_link(eventdev_id, link->event_port_id, + queue, NULL, 1); + if (ret < 0) { + EH_LOG_ERR("Failed to link event port %d", ret); + return ret; + } + } + /* Start event devices */ for (i = 0; i < nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 040f977..c8afc84 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -16,6 +16,13 @@ /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max event queues supported per event device */ +#define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV + +/* Max event-lcore links */ +#define EVENT_MODE_MAX_LCORE_LINKS \ + (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) + /** * Packet transfer mode of the application */ @@ -32,17 +39,43 @@ struct eventdev_params { uint8_t ev_queue_mode; }; +/** + * Event-lcore link configuration + */ +struct eh_event_link_info { + uint8_t eventdev_id; + /**< Event device ID */ + uint8_t event_port_id; + /**< Event port ID */ + uint8_t eventq_id; + /**< Event queue to be linked to the port */ + uint8_t lcore_id; + /**< Lcore to be polling on this port */ +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_link; + /**< No of links */ + struct eh_event_link_info + link[EVENT_MODE_MAX_LCORE_LINKS]; + /**< Per link conf */ + struct rte_bitmap *eth_core_mask; + /**< Core mask of cores to be used for software Rx and Tx */ union { RTE_STD_C11 struct { uint64_t sched_type : 2; /**< Schedule type */ + uint64_t all_ev_queue_to_ev_port : 1; + /**< + * When enabled, all event queues need to be mapped to + * each event port + */ }; uint64_t u64; } ext_params; From patchwork Thu Feb 27 16:18:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Lukas Bartosik [C]" X-Patchwork-Id: 66101 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9325AA055F; Thu, 27 Feb 2020 17:19:29 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0B2FB1BFF0; Thu, 27 Feb 2020 17:19:01 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id C7AD71BFED for ; Thu, 27 Feb 2020 17:18:56 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 01RFsqaN002551; Thu, 27 Feb 2020 08:18:56 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=uNW1JEV7eOXa1nGLkkV5ZE8QWZOeJKAXZDvw40vDyAw=; b=p3AAzYrXotOaEZo59iHmu0V6isTSjZcJbZJQ9JZUIc9fKY1ITA5DpJxQT/erkMRdcmyv ylQYK0WTLpUHKgR8RyPlMcWr9a++Cpqh7IDoyx99KLymH1xBor1JxxH71gDvHNT4ht8Q AHOQ8AcPAqeW6XNy1S7y2CPDzVhSg5ymORBmuspbaH/PCC+6Sor4c9A2KkagNx9njOW5 D26a4x6FzC9uZYT9L3eruPA956gwG7odCjkD4EQ6APWQwgrToRr93MaxUV8/o9i2xMgu 
From patchwork Thu Feb 27 16:18:26 2020
X-Patchwork-Submitter: "Lukas Bartosik [C]"
X-Patchwork-Id: 66101
Subject: [dpdk-dev] [PATCH v5 04/15] examples/ipsec-secgw: add Rx adapter support
From: Lukasz Bartosik
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev
Date: Thu, 27 Feb 2020 17:18:26 +0100
Message-ID: <1582820317-7333-5-git-send-email-lbartosik@marvell.com>

From: Anoob Joseph

Add Rx adapter support. The event helper init routine will initialize the Rx adapter according to the configuration. If the Rx adapter config is not present, it will generate a default config. If enough event queues are available, it will map eth ports and event queues 1:1 (one eth port connected to one event queue); otherwise it will map all eth ports to one event queue.

Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
examples/ipsec-secgw/event_helper.c | 273 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/event_helper.h | 29 ++++ 2 files changed, 301 insertions(+), 1 deletion(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index c90249f..2653e86 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -4,10 +4,58 @@ #include #include #include +#include #include +#include #include "event_helper.h" +static int +eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) +{ + int i, count = 0; + + RTE_LCORE_FOREACH(i) { + /* Check if this core is enabled in core mask*/ + if (rte_bitmap_get(eth_core_mask, i)) { + /* Found enabled core */ + count++; + } + } + return count; +} + +static inline unsigned int +eh_get_next_eth_core(struct eventmode_conf *em_conf) +{ + static unsigned int prev_core = -1; + unsigned int next_core; + + /* + * Make sure we have at least one eth core running, else the following + * logic would lead to an infinite loop.
+ */ + if (eh_get_enabled_cores(em_conf->eth_core_mask) == 0) { + EH_LOG_ERR("No enabled eth core found"); + return RTE_MAX_LCORE; + } + + /* Only some cores are marked as eth cores, skip others */ + do { + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 1); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + /* Update prev_core */ + prev_core = next_core; + } while (!(rte_bitmap_get(em_conf->eth_core_mask, next_core))); + + return next_core; +} + static inline unsigned int eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) { @@ -164,6 +212,82 @@ eh_set_default_conf_link(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct rx_adapter_conf *adapter; + bool single_ev_queue = false; + int eventdev_id; + int nb_eth_dev; + int adapter_id; + int conn_id; + int i; + + /* Create one adapter with eth queues mapped to event queue(s) */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + adapter = &(em_conf->rx_adapter[adapter_id]); + + /* Set adapter conf */ + adapter->eventdev_id = eventdev_id; + adapter->adapter_id = adapter_id; + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Map all queues of eth device (port) to an event queue. If there + * are more event queues than eth ports then create 1:1 mapping. + * Otherwise map all eth ports to a single event queue. + */ + if (nb_eth_dev > eventdev_config->nb_eventqueue) + single_ev_queue = true; + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = adapter->nb_connections; + + /* Get the connection */ + conn = &(adapter->conn[conn_id]); + + /* Set mapping between eth ports & event queues*/ + conn->ethdev_id = i; + conn->eventq_id = single_ev_queue ? 0 : i; + + /* Add all eth queues eth port to event queue */ + conn->ethdev_rx_qid = -1; + + /* Update no of connections */ + adapter->nb_connections++; + + } + + /* We have setup one adapter */ + em_conf->nb_rx_adapter = 1; + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -188,6 +312,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if rx adapters are specified. Else generate a default config + * with one rx adapter and all eth queues - event queue mapped. 
+ */ + if (em_conf->nb_rx_adapter == 0) { + ret = eh_set_default_conf_rx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -341,6 +475,104 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) return 0; } +static int +eh_rx_adapter_configure(struct eventmode_conf *em_conf, + struct rx_adapter_conf *adapter) +{ + struct rte_event_eth_rx_adapter_queue_conf queue_conf = {0}; + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct rx_adapter_connection_info *conn; + uint8_t eventdev_id; + uint32_t service_id; + int ret; + int j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = 1200; + port_conf.dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create Rx adapter */ + ret = rte_event_eth_rx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create rx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Setup queue conf */ + queue_conf.ev.queue_id = conn->eventq_id; + queue_conf.ev.sched_type = em_conf->ext_params.sched_type; + queue_conf.ev.event_type = RTE_EVENT_TYPE_ETHDEV; + + /* Add queue to the adapter */ + ret = rte_event_eth_rx_adapter_queue_add(adapter->adapter_id, + conn->ethdev_id, conn->ethdev_rx_qid, + &queue_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to rx adapter %d", + ret); + return ret; + } + } + + /* Get the service ID used by rx adapter */ + ret = rte_event_eth_rx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by rx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_rx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start rx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_conf *adapter; + int i, ret; + + /* Configure rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + ret = eh_rx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure rx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -364,6 +596,9 @@ eh_devs_init(struct eh_conf *conf) /* Get eventmode conf */ em_conf = conf->mode_params; + /* Eventmode conf would need eth portmask */ + em_conf->eth_portmask = conf->eth_portmask; + /* Validate the requested config */ ret = eh_validate_conf(em_conf); if (ret < 0) { @@ -388,6 +623,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Rx adapter */ + ret = eh_initialize_rx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize rx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -410,8 +652,8 @@ int32_t eh_devs_uninit(struct eh_conf *conf) { struct eventmode_conf 
*em_conf; + int ret, i, j; uint16_t id; - int ret, i; if (conf == NULL) { EH_LOG_ERR("Invalid event helper configuration"); return -EINVAL; @@ -429,6 +671,35 @@ eh_devs_uninit(struct eh_conf *conf) /* Get eventmode conf */ em_conf = conf->mode_params; + /* Stop and release rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + + id = em_conf->rx_adapter[i].adapter_id; + ret = rte_event_eth_rx_adapter_stop(id); + if (ret < 0) { + EH_LOG_ERR("Failed to stop rx adapter %d", ret); + return ret; + } + + for (j = 0; j < em_conf->rx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_rx_adapter_queue_del(id, + em_conf->rx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove rx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_rx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free rx adapter %d", ret); + return ret; + } + } + /* Stop and release event devices */ for (i = 0; i < em_conf->nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index c8afc84..00ce14e 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -16,6 +16,12 @@ /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max Rx adapters supported */ +#define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS + +/* Max Rx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -53,12 +59,33 @@ struct eh_event_link_info { /**< Lcore to be polling on this port */ }; +/* Rx adapter connection info */ +struct rx_adapter_connection_info { + uint8_t ethdev_id; + uint8_t eventq_id; + int32_t ethdev_rx_qid; +}; + +/* Rx adapter conf */ +struct rx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t rx_core_id; + uint8_t nb_connections; + struct rx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_rx_adapter; + /**< No of Rx adapters */ + struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; + /**< Rx adapter conf */ uint8_t nb_link; /**< No of links */ struct eh_event_link_info @@ -66,6 +93,8 @@ struct eventmode_conf { /**< Per link conf */ struct rte_bitmap *eth_core_mask; /**< Core mask of cores to be used for software Rx and Tx */ + uint32_t eth_portmask; + /**< Mask of the eth ports to be used */ union { RTE_STD_C11 struct {
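Similarly, an explicit Rx adapter config could be supplied instead of the generated one. A sketch with one adapter and a single connection, using example IDs that are assumptions rather than values from the patch:

static void
set_explicit_rx_adapter(struct eventmode_conf *em_conf)
{
	struct rx_adapter_conf *adapter = &(em_conf->rx_adapter[0]);
	struct rx_adapter_connection_info *conn = &(adapter->conn[0]);

	adapter->eventdev_id = 0;
	adapter->adapter_id = 0;
	adapter->rx_core_id = 1;	/* eth core to run the adapter service */

	/* One connection: all Rx queues of eth port 0 -> event queue 0 */
	conn->ethdev_id = 0;
	conn->eventq_id = 0;
	conn->ethdev_rx_qid = -1;	/* -1 selects all Rx queues of the port */
	adapter->nb_connections = 1;

	/* A non-zero adapter count makes eh_validate_conf() skip the
	 * default Rx adapter config generation. */
	em_conf->nb_rx_adapter = 1;
}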
From patchwork Thu Feb 27 16:18:27 2020
X-Patchwork-Submitter: "Lukas Bartosik [C]"
X-Patchwork-Id: 66102
Subject: [dpdk-dev] [PATCH v5 05/15] examples/ipsec-secgw: add Tx adapter support
From: Lukasz Bartosik
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev
Date: Thu, 27 Feb 2020 17:18:27 +0100
Message-ID: <1582820317-7333-6-git-send-email-lbartosik@marvell.com>

From: Anoob Joseph

Add Tx adapter support. The event helper init routine will initialize the Tx adapter according to the configuration. If the Tx adapter config is not present, it will generate a default config.
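To show where the reserved Tx event queue fits, a worker-side sketch of the Tx submission path; it assumes the eh_get_tx_queue() helper added by this patch, and the surrounding function name and variables are hypothetical:

static void
worker_send_pkt(struct eh_conf *conf, uint8_t eventdev_id,
		uint8_t event_port_id, struct rte_event *ev)
{
	/* The last event queue is reserved for Tx by the helper */
	ev->queue_id = eh_get_tx_queue(conf, eventdev_id);
	ev->op = RTE_EVENT_OP_FORWARD;

	/* Select Tx queue 0 of the mbuf's destination eth port */
	rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0);

	/* Retry until the event is accepted; the Tx adapter dequeues it
	 * from the reserved queue and transmits the mbuf. */
	while (rte_event_enqueue_burst(eventdev_id, event_port_id, ev, 1) != 1)
		;
}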
Signed-off-by: Anoob Joseph Signed-off-by: Lukasz Bartosik --- examples/ipsec-secgw/event_helper.c | 313 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 361 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 2653e86..fca1e08 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include @@ -76,6 +77,22 @@ eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) return next_core; } +static struct eventdev_params * +eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) +{ + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + if (em_conf->eventdev_config[i].eventdev_id == eventdev_id) + break; + } + + /* No match */ + if (i == em_conf->nb_eventdev) + return NULL; + + return &(em_conf->eventdev_config[i]); +} static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -288,6 +305,95 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct tx_adapter_conf *tx_adapter; + int eventdev_id; + int adapter_id; + int nb_eth_dev; + int conn_id; + int i; + + /* + * Create one Tx adapter with all eth queues mapped to event queues + * 1:1. + */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + tx_adapter = &(em_conf->tx_adapter[adapter_id]); + + /* Set adapter conf */ + tx_adapter->eventdev_id = eventdev_id; + tx_adapter->adapter_id = adapter_id; + + /* TODO: Tx core is required only when internal port is not present */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Application uses one event queue per adapter for submitting + * packets for Tx. Reserve the last queue available and decrement + * the total available event queues for this + */ + + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + + /* + * Map all Tx queues of the eth device (port) to the event device. + */ + + /* Set defaults for connections */ + + /* + * One eth device (port) is one connection. Map all Tx queues + * of the device to the Tx adapter. + */ + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = tx_adapter->nb_connections; + + /* Get the connection */ + conn = &(tx_adapter->conn[conn_id]); + + /* Add ethdev to connections */ + conn->ethdev_id = i; + + /* Add all eth tx queues to adapter */ + conn->ethdev_tx_qid = -1; + + /* Update no of connections */ + tx_adapter->nb_connections++; + } + + /* We have setup one adapter */ + em_conf->nb_tx_adapter = 1; + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -322,6 +428,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if tx adapters are specified. Else generate a default config + * with one tx adapter. 
+ */ + if (em_conf->nb_tx_adapter == 0) { + ret = eh_set_default_conf_tx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -573,6 +689,133 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int +eh_tx_adapter_configure(struct eventmode_conf *em_conf, + struct tx_adapter_conf *adapter) +{ + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + uint8_t tx_port_id = 0; + uint8_t eventdev_id; + uint32_t service_id; + int ret, j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + /* Create Tx adapter */ + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = + evdev_default_conf.max_num_events; + port_conf.dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create adapter */ + ret = rte_event_eth_tx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create tx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Add queue to the adapter */ + ret = rte_event_eth_tx_adapter_queue_add(adapter->adapter_id, + conn->ethdev_id, conn->ethdev_tx_qid); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to tx adapter %d", + ret); + return ret; + } + } + + /* Setup Tx queue & port */ + + /* Get event port used by the adapter */ + ret = rte_event_eth_tx_adapter_event_port_get( + adapter->adapter_id, &tx_port_id); + if (ret) { + EH_LOG_ERR("Failed to get tx adapter port id %d", ret); + return ret; + } + + /* + * Tx event queue is reserved for Tx adapter. 
Unlink this queue + * from all other ports + * + */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + rte_event_port_unlink(eventdev_id, j, + &(adapter->tx_ev_queue), 1); + } + + /* Link Tx event queue to Tx port */ + ret = rte_event_port_link(eventdev_id, tx_port_id, + &(adapter->tx_ev_queue), NULL, 1); + if (ret != 1) { + EH_LOG_ERR("Failed to link event queue to port"); + return ret; + } + + /* Get the service ID used by Tx adapter */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by tx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start tx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_conf *adapter; + int i, ret; + + /* Configure Tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + ret = eh_tx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure tx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -630,6 +873,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Tx adapter */ + ret = eh_initialize_tx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize tx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -713,5 +963,68 @@ eh_devs_uninit(struct eh_conf *conf) } } + /* Stop and release tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + + id = em_conf->tx_adapter[i].adapter_id; + ret = rte_event_eth_tx_adapter_stop(id); + if (ret < 0) { + EH_LOG_ERR("Failed to stop tx adapter %d", ret); + return ret; + } + + for (j = 0; j < em_conf->tx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_tx_adapter_queue_del(id, + em_conf->tx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove tx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_tx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free tx adapter %d", ret); + return ret; + } + } + return 0; } + +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) +{ + struct eventdev_params *eventdev_config; + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + if (eventdev_config == NULL) { + EH_LOG_ERR("Failed to read eventdev config"); + return -EINVAL; + } + + /* + * The last queue is reserved to be used as atomic queue for the + * last stage (eth packet tx stage) + */ + return eventdev_config->nb_eventqueue - 1; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 00ce14e..913b172 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -19,9 +19,15 @@ /* Max Rx adapters supported */ #define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS +/* Max Tx adapters supported */ +#define 
EVENT_MODE_MAX_TX_ADAPTERS RTE_EVENT_MAX_DEVS + /* Max Rx adapter connections */ #define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 +/* Max Tx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -29,6 +35,9 @@ #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Tx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS + /** * Packet transfer mode of the application */ @@ -76,6 +85,23 @@ struct rx_adapter_conf { conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; }; +/* Tx adapter connection info */ +struct tx_adapter_connection_info { + uint8_t ethdev_id; + int32_t ethdev_tx_qid; +}; + +/* Tx adapter conf */ +struct tx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t tx_core_id; + uint8_t nb_connections; + struct tx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER]; + uint8_t tx_ev_queue; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; @@ -86,6 +112,10 @@ struct eventmode_conf { /**< No of Rx adapters */ struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; /**< Rx adapter conf */ + uint8_t nb_tx_adapter; + /**< No of Tx adapters */ + struct tx_adapter_conf tx_adapter[EVENT_MODE_MAX_TX_ADAPTERS]; + /** Tx adapter conf */ uint8_t nb_link; /**< No of links */ struct eh_event_link_info @@ -166,4 +196,22 @@ eh_devs_init(struct eh_conf *conf); int32_t eh_devs_uninit(struct eh_conf *conf); +/** + * Get eventdev tx queue + * + * If the application uses event device which does not support internal port + * then it needs to submit the events to a Tx queue before final transmission. + * This Tx queue will be created internally by the eventmode helper subsystem, + * and application will need its queue ID when it runs the execution loop. 
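+ *
+ * An illustrative use from a worker loop (a sketch only, not part of the
+ * original patch; "links" and "ev" are assumed application-local variables):
+ * @code
+ *	uint8_t tx_queue = eh_get_tx_queue(conf, links[0].eventdev_id);
+ *
+ *	ev.op = RTE_EVENT_OP_FORWARD;
+ *	ev.queue_id = tx_queue;
+ *	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
+ *	rte_event_enqueue_burst(links[0].eventdev_id,
+ *			links[0].event_port_id, &ev, 1);
+ * @endcode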
+ * + * @param mode_conf + * Event helper configuration + * @param eventdev_id + * Event device ID + * @return + * Tx queue ID + */ +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); + #endif /* _EVENT_HELPER_H_ */ From patchwork Thu Feb 27 16:18:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Lukas Bartosik [C]" X-Patchwork-Id: 66103 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2A5CDA055F; Thu, 27 Feb 2020 17:19:58 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A06AE1BF76; Thu, 27 Feb 2020 17:19:06 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id CC0971BFCF for ; Thu, 27 Feb 2020 17:19:02 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 01RFsqaR002551; Thu, 27 Feb 2020 08:19:02 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=TZFZfaHoMrvL2b6dCVQUjByLbCv0OlyB2z7SmjUt3QE=; b=tEza+dS+yF7qnLwwZE3F7Q5Q10TNeAHuqGnQ1giAkGafPbw9huH9OemKLL4Zadm/r1N1 QNZMH0H4GlGhXTUQ3yLG6g/70XTaSxrVa8nguDTJ/FgjvngPeCsaqhTe18Wai5zSOtwv Jm/z8ryh1pay8lmSOSBAgmrHIEFTPdZOUeUimn9TmFj4czj2OU83dF+F+MviUPKdFEAm g1N08mw4HOQIq8ELahAxsbctANvt9f+G9NbZ32v+aiBDxJbxpYBAvcCU+UNSD24JmkGJ /zEEX+BTEPEpK/J4mEv+o6GNKfpwQXPxNHr/2KRMxsiYU6LNRkNz32Ly2COwBJ40y/tx bg== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2ydchth8fe-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 27 Feb 2020 08:19:02 -0800 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:00 -0800 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 27 Feb 2020 08:19:00 -0800 Received: from luke.marvell.com (unknown [10.95.130.81]) by maili.marvell.com (Postfix) with ESMTP id 734043F7040; Thu, 27 Feb 2020 08:18:57 -0800 (PST) From: Lukasz Bartosik To: Akhil Goyal , Radu Nicolau , Thomas Monjalon CC: Anoob Joseph , Jerin Jacob , Narayana Prasad , Ankur Dwivedi , Archana Muniganti , Tejasree Kondoj , Vamsi Attunuru , "Konstantin Ananyev" , Date: Thu, 27 Feb 2020 17:18:28 +0100 Message-ID: <1582820317-7333-7-git-send-email-lbartosik@marvell.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1582820317-7333-1-git-send-email-lbartosik@marvell.com> References: <1582185727-6749-1-git-send-email-lbartosik@marvell.com> <1582820317-7333-1-git-send-email-lbartosik@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.572 definitions=2020-02-27_05:2020-02-26, 2020-02-27 signatures=0 Subject: [dpdk-dev] [PATCH v5 06/15] examples/ipsec-secgw: add routines to display config X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Anoob 
Joseph Add routines to display the eventmode configuration and provide an overview of the devices used. Signed-off-by: Anoob Joseph Signed-off-by: Lukasz Bartosik --- examples/ipsec-secgw/event_helper.c | 207 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 14 +++ 2 files changed, 221 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index fca1e08..d09bf7d 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -816,6 +816,210 @@ eh_initialize_tx_adapter(struct eventmode_conf *em_conf) return 0; } +static void +eh_display_operating_mode(struct eventmode_conf *em_conf) +{ + char sched_types[][32] = { + "RTE_SCHED_TYPE_ORDERED", + "RTE_SCHED_TYPE_ATOMIC", + "RTE_SCHED_TYPE_PARALLEL", + }; + EH_LOG_INFO("Operating mode:"); + + EH_LOG_INFO("\tScheduling type: \t%s", + sched_types[em_conf->ext_params.sched_type]); + + EH_LOG_INFO(""); +} + +static void +eh_display_event_dev_conf(struct eventmode_conf *em_conf) +{ + char queue_mode[][32] = { + "", + "ATQ (ALL TYPE QUEUE)", + "SINGLE LINK", + }; + char print_buf[256] = { 0 }; + int i; + + EH_LOG_INFO("Event Device Configuration:"); + + for (i = 0; i < em_conf->nb_eventdev; i++) { + sprintf(print_buf, + "\tDev ID: %-2d \tQueues: %-2d \tPorts: %-2d", + em_conf->eventdev_config[i].eventdev_id, + em_conf->eventdev_config[i].nb_eventqueue, + em_conf->eventdev_config[i].nb_eventport); + sprintf(print_buf + strlen(print_buf), + "\tQueue mode: %s", + queue_mode[em_conf->eventdev_config[i].ev_queue_mode]); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +static void +eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_rx_adapter = em_conf->nb_rx_adapter; + struct rx_adapter_connection_info *conn; + struct rx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Rx adapters configured: %d", nb_rx_adapter); + + for (i = 0; i < nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + EH_LOG_INFO( + "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" + "\tRx core: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id, + adapter->rx_core_id); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_rx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2d", + conn->ethdev_rx_qid); + + sprintf(print_buf + strlen(print_buf), + "\tEvent queue: %-2d", conn->eventq_id); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_tx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_tx_adapter = em_conf->nb_tx_adapter; + struct tx_adapter_connection_info *conn; + struct tx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Tx adapters configured: %d", nb_tx_adapter); + + for (i = 0; i < nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + sprintf(print_buf, + "\tTx adapter ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id); + if (adapter->tx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->tx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[NONE]"); + else + sprintf(print_buf + 
strlen(print_buf), + "\tTx core: %-2d,\tInput event queue: %-2d", + adapter->tx_core_id, adapter->tx_ev_queue); + + EH_LOG_INFO("%s", print_buf); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_tx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2d", + conn->ethdev_tx_qid); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_link_conf(struct eventmode_conf *em_conf) +{ + struct eh_event_link_info *link; + char print_buf[256] = { 0 }; + int i; + + EH_LOG_INFO("Links configured: %d", em_conf->nb_link); + + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + + sprintf(print_buf, + "\tEvent dev ID: %-2d\tEvent port: %-2d", + link->eventdev_id, + link->event_port_id); + + if (em_conf->ext_params.all_ev_queue_to_ev_port) + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2s\t", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2d\t", link->eventq_id); + + sprintf(print_buf + strlen(print_buf), + "Lcore: %-2d", link->lcore_id); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +void +eh_display_conf(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Display user exposed operating modes */ + eh_display_operating_mode(em_conf); + + /* Display event device conf */ + eh_display_event_dev_conf(em_conf); + + /* Display Rx adapter conf */ + eh_display_rx_adapter_conf(em_conf); + + /* Display Tx adapter conf */ + eh_display_tx_adapter_conf(em_conf); + + /* Display event-lcore link */ + eh_display_link_conf(em_conf); +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -849,6 +1053,9 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Display the current configuration */ + eh_display_conf(conf); + /* Stop eth devices before setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 913b172..8eb5e25 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -13,6 +13,11 @@ RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) +#define EH_LOG_INFO(...) 
\ + RTE_LOG(INFO, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS @@ -214,4 +219,13 @@ eh_devs_uninit(struct eh_conf *conf); uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); +/** + * Display event mode configuration + * + * @param conf + * Event helper configuration + */ +void +eh_display_conf(struct eh_conf *conf); + #endif /* _EVENT_HELPER_H_ */ From patchwork Thu Feb 27 16:18:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Lukas Bartosik [C]" X-Patchwork-Id: 66104 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0FFF5A055F; Thu, 27 Feb 2020 17:20:11 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3F8621BFCF; Thu, 27 Feb 2020 17:19:13 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id DFEFE1C002 for ; Thu, 27 Feb 2020 17:19:06 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 01RFshE4002507; Thu, 27 Feb 2020 08:19:06 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=z6tdI1MTpn4hmGH3EyjMGpL0d+Vr3X8oTITcicAxais=; b=BrvXzis8la08u1TG97UZipFi01uFQKekJf53MqC9G0WGQie7NdMm55JtwGLSINwyXupP rp1QZR/cdrFlBtepuc2mqdEWvVSNVnqr1/hDbQTC1CEKpH3Ww50Woz/Fsjc3hOkLTCwp L9adeMzATz2Y6acAo3J9rYKbBPENQH+1F7p6oztYUisOyxoH4Z/LUXr+etsUgYaA4DIF xDIBTrYp0sVBzszVIMOekK0r5b02p2blfp7jRD85lcH+F5UdqdMIqyQHV8KsaWOWiZhn ow6kXXOXuZegFCuX2GRGOYICagVOzZw4BMtX8822NePMBPJLO7ppAsD/Rq2IfHCFcjG6 Ug== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2ydchth8fu-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 27 Feb 2020 08:19:06 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:04 -0800 Received: from SC-EXCH01.marvell.com (10.93.176.81) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:03 -0800 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 27 Feb 2020 08:19:03 -0800 Received: from luke.marvell.com (unknown [10.95.130.81]) by maili.marvell.com (Postfix) with ESMTP id 7E6D53F7041; Thu, 27 Feb 2020 08:19:00 -0800 (PST) From: Lukasz Bartosik To: Akhil Goyal , Radu Nicolau , Thomas Monjalon CC: Jerin Jacob , Narayana Prasad , Ankur Dwivedi , Anoob Joseph , Archana Muniganti , Tejasree Kondoj , Vamsi Attunuru , "Konstantin Ananyev" , Date: Thu, 27 Feb 2020 17:18:29 +0100 Message-ID: <1582820317-7333-8-git-send-email-lbartosik@marvell.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1582820317-7333-1-git-send-email-lbartosik@marvell.com> References: <1582185727-6749-1-git-send-email-lbartosik@marvell.com> <1582820317-7333-1-git-send-email-lbartosik@marvell.com> MIME-Version: 1.0 
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.572 definitions=2020-02-27_05:2020-02-26, 2020-02-27 signatures=0 Subject: [dpdk-dev] [PATCH v5 07/15] examples/ipsec-secgw: add routines to launch workers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In eventmode workers can be drafted differently according to the capabilities of the underlying event device. The added functions will receive an array of such workers and probe the eventmode properties to choose the worker. Signed-off-by: Anoob Joseph Signed-off-by: Lukasz Bartosik --- examples/ipsec-secgw/event_helper.c | 336 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 384 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index d09bf7d..e3dfaf5 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -11,6 +11,8 @@ #include "event_helper.h" +static volatile bool eth_core_running; + static int eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) { @@ -93,6 +95,16 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } +static inline bool +eh_dev_has_burst_mode(uint8_t dev_id) +{ + struct rte_event_dev_info dev_info; + + rte_event_dev_info_get(dev_id, &dev_info); + return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE) ? + true : false; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -689,6 +701,257 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int32_t +eh_start_worker_eth_core(struct eventmode_conf *conf, uint32_t lcore_id) +{ + uint32_t service_id[EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE]; + struct rx_adapter_conf *rx_adapter; + struct tx_adapter_conf *tx_adapter; + int service_count = 0; + int adapter_id; + int32_t ret; + int i; + + EH_LOG_INFO("Entering eth_core processing on lcore %u", lcore_id); + + /* + * Parse adapter config to check which of all Rx adapters need + * to be handled by this core. + */ + for (i = 0; i < conf->nb_rx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per rx core"); + break; + } + + rx_adapter = &(conf->rx_adapter[i]); + if (rx_adapter->rx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = rx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_rx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by rx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + /* + * Parse adapter config to see which of all Tx adapters need + * to be handled by this core. 
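+ * The Rx and Tx adapter service IDs collected on this core are then
+ * polled in the while loop below via
+ * rte_service_run_iter_on_app_lcore().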
+ */ + for (i = 0; i < conf->nb_tx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per tx core"); + break; + } + + tx_adapter = &conf->tx_adapter[i]; + if (tx_adapter->tx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = tx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by tx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + eth_core_running = true; + + while (eth_core_running) { + for (i = 0; i < service_count; i++) { + /* Initiate adapter service */ + rte_service_run_iter_on_app_lcore(service_id[i], 0); + } + } + + return 0; +} + +static int32_t +eh_stop_worker_eth_core(void) +{ + if (eth_core_running) { + EH_LOG_INFO("Stopping eth cores"); + eth_core_running = false; + } + return 0; +} + +static struct eh_app_worker_params * +eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, + struct eh_app_worker_params *app_wrkrs, uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params curr_conf = { {{0} }, NULL}; + struct eh_event_link_info *link = NULL; + struct eh_app_worker_params *tmp_wrkr; + struct eventmode_conf *em_conf; + uint8_t eventdev_id; + int i; + + /* Get eventmode config */ + em_conf = conf->mode_params; + + /* + * Use event device from the first lcore-event link. + * + * Assumption: All lcore-event links tied to a core are using the + * same event device. In other words, one core would be polling on + * queues of a single event device only. + */ + + /* Get a link for this lcore */ + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + if (link->lcore_id == lcore_id) + break; + } + + if (link == NULL) { + EH_LOG_ERR("No valid link found for lcore %d", lcore_id); + return NULL; + } + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* Populate the curr_conf with the capabilities */ + + /* Check for burst mode */ + if (eh_dev_has_burst_mode(eventdev_id)) + curr_conf.cap.burst = EH_RX_TYPE_BURST; + else + curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + + /* Parse the passed list and see if we have matching capabilities */ + + /* Initialize the pointer used to traverse the list */ + tmp_wrkr = app_wrkrs; + + for (i = 0; i < nb_wrkr_param; i++, tmp_wrkr++) { + + /* Skip this if capabilities are not matching */ + if (tmp_wrkr->cap.u64 != curr_conf.cap.u64) + continue; + + /* If the checks pass, we have a match */ + return tmp_wrkr; + } + + return NULL; +} + +static int +eh_verify_match_worker(struct eh_app_worker_params *match_wrkr) +{ + /* Verify registered worker */ + if (match_wrkr->worker_thread == NULL) { + EH_LOG_ERR("No worker registered"); + return 0; + } + + /* Success */ + return 1; +} + +static uint8_t +eh_get_event_lcore_links(uint32_t lcore_id, struct eh_conf *conf, + struct eh_event_link_info **links) +{ + struct eh_event_link_info *link_cache; + struct eventmode_conf *em_conf = NULL; + struct eh_event_link_info *link; + uint8_t lcore_nb_link = 0; + size_t single_link_size; + size_t cache_size; + int index = 0; + int i; + + if (conf == NULL || links == NULL) { + EH_LOG_ERR("Invalid args"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + if (em_conf == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); 
+ return -EINVAL; + } + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Update the number of links for this core */ + lcore_nb_link++; + + } + } + + /* Compute size of one entry to be copied */ + single_link_size = sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + cache_size = lcore_nb_link * sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + link_cache = calloc(1, cache_size); + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Cache the link */ + memcpy(&link_cache[index], link, single_link_size); + + /* Update index */ + index++; + } + } + + /* Update the links for application to use the cached links */ + *links = link_cache; + + /* Return the number of cached links */ + return lcore_nb_link; +} + static int eh_tx_adapter_configure(struct eventmode_conf *em_conf, struct tx_adapter_conf *adapter) @@ -1202,6 +1465,79 @@ eh_devs_uninit(struct eh_conf *conf) return 0; } +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params *match_wrkr; + struct eh_event_link_info *links = NULL; + struct eventmode_conf *em_conf; + uint32_t lcore_id; + uint8_t nb_links; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Check if this is eth core */ + if (rte_bitmap_get(em_conf->eth_core_mask, lcore_id)) { + eh_start_worker_eth_core(em_conf, lcore_id); + return; + } + + if (app_wrkr == NULL || nb_wrkr_param == 0) { + EH_LOG_ERR("Invalid args"); + return; + } + + /* + * This is a regular worker thread. The application registers + * multiple workers with various capabilities. Run worker + * based on the selected capabilities of the event + * device configured. 
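+ * Note: capabilities are compared as a whole (cap.u64), so a
+ * registered worker is selected only when it matches all of the
+ * probed capabilities exactly.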
+ */ + + /* Get the first matching worker for the event device */ + match_wrkr = eh_find_worker(lcore_id, conf, app_wrkr, nb_wrkr_param); + if (match_wrkr == NULL) { + EH_LOG_ERR("Failed to match worker registered for lcore %d", + lcore_id); + goto clean_and_exit; + } + + /* Verify sanity of the matched worker */ + if (eh_verify_match_worker(match_wrkr) != 1) { + EH_LOG_ERR("Failed to validate the matched worker"); + goto clean_and_exit; + } + + /* Get worker links */ + nb_links = eh_get_event_lcore_links(lcore_id, conf, &links); + + /* Launch the worker thread */ + match_wrkr->worker_thread(links, nb_links); + + /* Free links info memory */ + free(links); + +clean_and_exit: + + /* Flag eth_cores to stop, if started */ + eh_stop_worker_eth_core(); +} + uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 8eb5e25..9a4dfab 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -40,6 +40,9 @@ #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Rx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE EVENT_MODE_MAX_RX_ADAPTERS + /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS @@ -51,6 +54,14 @@ enum eh_pkt_transfer_mode { EH_PKT_TRANSFER_MODE_EVENT, }; +/** + * Event mode packet rx types + */ +enum eh_rx_types { + EH_RX_TYPE_NON_BURST = 0, + EH_RX_TYPE_BURST +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -161,6 +172,22 @@ struct eh_conf { /**< Mode specific parameters */ }; +/* Workers registered by the application */ +struct eh_app_worker_params { + union { + RTE_STD_C11 + struct { + uint64_t burst : 1; + /**< Specify status of rx type burst */ + }; + uint64_t u64; + } cap; + /**< Capabilities of this worker */ + void (*worker_thread)(struct eh_event_link_info *links, + uint8_t nb_links); + /**< Worker thread */ +}; + /** * Initialize event mode devices * @@ -228,4 +255,25 @@ eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); void eh_display_conf(struct eh_conf *conf); + +/** + * Launch eventmode worker + * + * The application can request the eventmode helper subsystem to launch the + * worker based on the capabilities of event device and the options selected + * while initializing the eventmode. 
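+ *
+ * A minimal registration sketch (illustrative only; the worker functions
+ * are hypothetical application code, not part of this patch):
+ * @code
+ *	static struct eh_app_worker_params wrkrs[] = {
+ *		{ .cap.burst = EH_RX_TYPE_NON_BURST,
+ *		  .worker_thread = app_worker_single },
+ *		{ .cap.burst = EH_RX_TYPE_BURST,
+ *		  .worker_thread = app_worker_burst },
+ *	};
+ *	eh_launch_worker(conf, wrkrs, RTE_DIM(wrkrs));
+ * @endcode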
+ * + * @param conf + * Event helper configuration + * @param app_wrkr + * List of all the workers registered by application, along with its + * capabilities + * @param nb_wrkr_param + * Number of workers passed by the application + * + */ +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param); + #endif /* _EVENT_HELPER_H_ */ From patchwork Thu Feb 27 16:18:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Lukas Bartosik [C]" X-Patchwork-Id: 66105 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5197BA055F; Thu, 27 Feb 2020 17:20:24 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BB6AF1C014; Thu, 27 Feb 2020 17:19:14 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 3BABF1C00E for ; Thu, 27 Feb 2020 17:19:09 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 01RFsqaS002551; Thu, 27 Feb 2020 08:19:08 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=kD20SEDqzUwA7MUfgGVgzno+2UGl8JLAoRz2TV7yQUA=; b=Y9r/VX+PpaN1GmYdGIcyQ1U7oxpoxLK3CjQuc3KGERWTYSBsESPsSfW6wu3iMWSc3bpx xzwa+g/W7Rm+R2qU1wNMOSi9U/8qarbwT9DOscb6gqeJ4wtILmSRd0qLl/UGWXsG63ZJ ajm1SdTW0PzXCJmaHDNcEp102FW7vEWbPp3queOxh86zNx+xpx62cqIZa8PPwjGg3U0f sc7PEefCH/Im6ObXf3l3+J5GjEyp6rp4+X9y1zOIUYNTF9ViVDfp6ZndoPXag1dyZbUY jj2JQzv2EWM8v5mCxODNzQ1M/Z2QNCpMeqrSEdcmnqfnlFOc2H7qTrJB7oJ9XzUx9qAC 9w== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2ydchth8g1-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 27 Feb 2020 08:19:08 -0800 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:06 -0800 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 27 Feb 2020 08:19:06 -0800 Received: from luke.marvell.com (unknown [10.95.130.81]) by maili.marvell.com (Postfix) with ESMTP id 961FE3F7043; Thu, 27 Feb 2020 08:19:03 -0800 (PST) From: Lukasz Bartosik To: Akhil Goyal , Radu Nicolau , Thomas Monjalon CC: Jerin Jacob , Narayana Prasad , Ankur Dwivedi , Anoob Joseph , Archana Muniganti , Tejasree Kondoj , Vamsi Attunuru , "Konstantin Ananyev" , Date: Thu, 27 Feb 2020 17:18:30 +0100 Message-ID: <1582820317-7333-9-git-send-email-lbartosik@marvell.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1582820317-7333-1-git-send-email-lbartosik@marvell.com> References: <1582185727-6749-1-git-send-email-lbartosik@marvell.com> <1582820317-7333-1-git-send-email-lbartosik@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.572 definitions=2020-02-27_05:2020-02-26, 2020-02-27 signatures=0 Subject: [dpdk-dev] [PATCH v5 08/15] examples/ipsec-secgw: add support for internal ports X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches 
and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for Rx and Tx internal ports. When internal ports are available, a packet can be received from an eth port and forwarded to an event queue by HW without any software intervention. The same applies on the Tx side, where a packet sent to an event queue can be forwarded by HW to an eth port without any software intervention. Signed-off-by: Anoob Joseph Signed-off-by: Lukasz Bartosik --- examples/ipsec-secgw/event_helper.c | 179 +++++++++++++++++++++++++++++++----- examples/ipsec-secgw/event_helper.h | 11 +++ 2 files changed, 167 insertions(+), 23 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index e3dfaf5..fe047ab 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -95,6 +95,39 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } + +static inline bool +eh_dev_has_rx_internal_port(uint8_t eventdev_id) +{ + bool flag = true; + int j; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_rx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + +static inline bool +eh_dev_has_tx_internal_port(uint8_t eventdev_id) +{ + bool flag = true; + int j; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_tx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + static inline bool eh_dev_has_burst_mode(uint8_t dev_id) { @@ -175,6 +208,42 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) return 0; } +static void +eh_do_capability_check(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + int all_internal_ports = 1; + uint32_t eventdev_id; + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + + /* Get the event dev conf */ + eventdev_config = &(em_conf->eventdev_config[i]); + eventdev_id = eventdev_config->eventdev_id; + + /* Check if event device has internal port for Rx & Tx */ + if (eh_dev_has_rx_internal_port(eventdev_id) && + eh_dev_has_tx_internal_port(eventdev_id)) { + eventdev_config->all_internal_ports = 1; + } else { + all_internal_ports = 0; + } + } + + /* + * If Rx & Tx internal ports are supported by all event devices then + * eth cores won't be required. Override the eth core mask requested + * and decrement number of event queues by one as it won't be needed + * for Tx.
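+ * (With Tx internal ports, workers enqueue events straight to the
+ * ethdev, so the atomic event queue otherwise reserved for the final
+ * Tx stage is not needed.)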
+ */ + if (all_internal_ports) { + rte_bitmap_reset(em_conf->eth_core_mask); + for (i = 0; i < em_conf->nb_eventdev; i++) + em_conf->eventdev_config[i].nb_eventqueue--; + } +} + static int eh_set_default_conf_link(struct eventmode_conf *em_conf) { @@ -246,7 +315,10 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) struct rx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct rx_adapter_conf *adapter; + bool rx_internal_port = true; bool single_ev_queue = false; + int nb_eventqueue; + uint32_t caps = 0; int eventdev_id; int nb_eth_dev; int adapter_id; @@ -276,14 +348,21 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Set adapter conf */ adapter->eventdev_id = eventdev_id; adapter->adapter_id = adapter_id; - adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * If event device does not have internal ports for passing + * packets then reserved one queue for Tx path + */ + nb_eventqueue = eventdev_config->all_internal_ports ? + eventdev_config->nb_eventqueue : + eventdev_config->nb_eventqueue - 1; /* * Map all queues of eth device (port) to an event queue. If there * are more event queues than eth ports then create 1:1 mapping. * Otherwise map all eth ports to a single event queue. */ - if (nb_eth_dev > eventdev_config->nb_eventqueue) + if (nb_eth_dev > nb_eventqueue) single_ev_queue = true; for (i = 0; i < nb_eth_dev; i++) { @@ -305,11 +384,24 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Add all eth queues eth port to event queue */ conn->ethdev_rx_qid = -1; + /* Get Rx adapter capabilities */ + rte_event_eth_rx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + rx_internal_port = false; + /* Update no of connections */ adapter->nb_connections++; } + if (rx_internal_port) { + /* Rx core is not required */ + adapter->rx_core_id = -1; + } else { + /* Rx core is required */ + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + } + /* We have setup one adapter */ em_conf->nb_rx_adapter = 1; @@ -322,6 +414,8 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) struct tx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct tx_adapter_conf *tx_adapter; + bool tx_internal_port = true; + uint32_t caps = 0; int eventdev_id; int adapter_id; int nb_eth_dev; @@ -355,18 +449,6 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) tx_adapter->eventdev_id = eventdev_id; tx_adapter->adapter_id = adapter_id; - /* TODO: Tx core is required only when internal port is not present */ - tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); - - /* - * Application uses one event queue per adapter for submitting - * packets for Tx. Reserve the last queue available and decrement - * the total available event queues for this - */ - - /* Queue numbers start at 0 */ - tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; - /* * Map all Tx queues of the eth device (port) to the event device. 
*/ @@ -396,10 +478,30 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) /* Add all eth tx queues to adapter */ conn->ethdev_tx_qid = -1; + /* Get Tx adapter capabilities */ + rte_event_eth_tx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + tx_internal_port = false; + /* Update no of connections */ tx_adapter->nb_connections++; } + if (tx_internal_port) { + /* Tx core is not required */ + tx_adapter->tx_core_id = -1; + } else { + /* Tx core is required */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Use one event queue per adapter for submitting packets + * for Tx. Reserving the last queue available + */ + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + } + /* We have setup one adapter */ em_conf->nb_tx_adapter = 1; return 0; @@ -420,6 +522,9 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* Perform capability check for the selected event devices */ + eh_do_capability_check(em_conf); + /* * Check if links are specified. Else generate a default config for * the event ports used. @@ -523,11 +628,13 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) eventdev_config->ev_queue_mode; /* * All queues need to be set with sched_type as - * schedule type for the application stage. One queue - * would be reserved for the final eth tx stage. This - * will be an atomic queue. + * schedule type for the application stage. One + * queue would be reserved for the final eth tx + * stage if event device does not have internal + * ports. This will be an atomic queue. */ - if (j == nb_eventqueue-1) { + if (!eventdev_config->all_internal_ports && + j == nb_eventqueue-1) { eventq_conf.schedule_type = RTE_SCHED_TYPE_ATOMIC; } else { @@ -841,6 +948,12 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, /* Populate the curr_conf with the capabilities */ + /* Check for Tx internal port */ + if (eh_dev_has_tx_internal_port(eventdev_id)) + curr_conf.cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + else + curr_conf.cap.tx_internal_port = EH_TX_TYPE_NO_INTERNAL_PORT; + /* Check for burst mode */ if (eh_dev_has_burst_mode(eventdev_id)) curr_conf.cap.burst = EH_RX_TYPE_BURST; @@ -1012,6 +1125,16 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, } } + /* + * Check if Tx core is assigned. 
If Tx core is not assigned then + * the adapter has internal port for submitting Tx packets and + * Tx event queue & port setup is not required + */ + if (adapter->tx_core_id == (uint32_t) (-1)) { + /* Internal port is present */ + goto skip_tx_queue_port_setup; + } + /* Setup Tx queue & port */ /* Get event port used by the adapter */ @@ -1051,6 +1174,7 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, rte_service_set_runstate_mapped_check(service_id, 0); +skip_tx_queue_port_setup: /* Start adapter */ ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); if (ret < 0) { @@ -1135,13 +1259,22 @@ eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) for (i = 0; i < nb_rx_adapter; i++) { adapter = &(em_conf->rx_adapter[i]); - EH_LOG_INFO( - "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" - "\tRx core: %-2d", + sprintf(print_buf, + "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", adapter->adapter_id, adapter->nb_connections, - adapter->eventdev_id, - adapter->rx_core_id); + adapter->eventdev_id); + if (adapter->rx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->rx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2d", adapter->rx_core_id); + + EH_LOG_INFO("%s", print_buf); for (j = 0; j < adapter->nb_connections; j++) { conn = &(adapter->conn[j]); diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 9a4dfab..25c8563 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -62,12 +62,21 @@ enum eh_rx_types { EH_RX_TYPE_BURST }; +/** + * Event mode packet tx types + */ +enum eh_tx_types { + EH_TX_TYPE_INTERNAL_PORT = 0, + EH_TX_TYPE_NO_INTERNAL_PORT +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; uint8_t nb_eventqueue; uint8_t nb_eventport; uint8_t ev_queue_mode; + uint8_t all_internal_ports; }; /** @@ -179,6 +188,8 @@ struct eh_app_worker_params { struct { uint64_t burst : 1; /**< Specify status of rx type burst */ + uint64_t tx_internal_port : 1; + /**< Specify whether tx internal port is available */ }; uint64_t u64; } cap; From patchwork Thu Feb 27 16:18:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Lukas Bartosik [C]" X-Patchwork-Id: 66106 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 18666A055F; Thu, 27 Feb 2020 17:20:36 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 1A0411C026; Thu, 27 Feb 2020 17:19:16 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 3CC5D2C4F for ; Thu, 27 Feb 2020 17:19:12 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 01RFsgap002477; Thu, 27 Feb 2020 08:19:11 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=56hv1fEUQpgjUcj/adH7rv695hZsV2WEQpocMqORJwM=; 
b=l4nmXV/VjvOshxJEreDvASRjVt9eQpzuauheGKzAE2YvQfCsrO9B+pbulrOJ/BL0LPgy TLsnVIGhUDOLtp21LY/6Tg/g97YYIRZJxX2Snmm9mtd4nJHmInXwyTERwzWQcsgqQCm0 lCqmMbUJZt4982ndfc/1341KACFGJ2pVNLRRIYWiFWfJkl9B503pojmJs2QYYyQy/yk+ 0NdsTr6H37OH+XpnMuHQS4lHJ+Z6rz/sUDqxpUkbnMS/Qa6ukfqkNIvdWZmurAU1v0sw lUIRWkOsBsZLOHMB0ZIJh18midefWRG9srzcapOpH9IEFgp9FVHw5N7X57WV/diK0q6K Xw== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2ydchth8g2-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 27 Feb 2020 08:19:11 -0800 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:09 -0800 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 27 Feb 2020 08:19:09 -0800 Received: from luke.marvell.com (unknown [10.95.130.81]) by maili.marvell.com (Postfix) with ESMTP id A389B3F704C; Thu, 27 Feb 2020 08:19:06 -0800 (PST) From: Lukasz Bartosik To: Akhil Goyal , Radu Nicolau , Thomas Monjalon CC: Jerin Jacob , Narayana Prasad , Ankur Dwivedi , Anoob Joseph , Archana Muniganti , Tejasree Kondoj , Vamsi Attunuru , "Konstantin Ananyev" , Date: Thu, 27 Feb 2020 17:18:31 +0100 Message-ID: <1582820317-7333-10-git-send-email-lbartosik@marvell.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1582820317-7333-1-git-send-email-lbartosik@marvell.com> References: <1582185727-6749-1-git-send-email-lbartosik@marvell.com> <1582820317-7333-1-git-send-email-lbartosik@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.572 definitions=2020-02-27_05:2020-02-26, 2020-02-27 signatures=0 Subject: [dpdk-dev] [PATCH v5 09/15] examples/ipsec-secgw: add event helper config init/uninit X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add eventmode helper eh_conf_init and eh_conf_uninit functions whose purpose is to initialize and uninitialize the eventmode helper configuration.
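For context, a typical call sequence around these helpers could look as below; this is an illustrative sketch only, not part of the patch, with error handling abbreviated:

	struct eh_conf *eh_conf;

	eh_conf = eh_conf_init();
	if (eh_conf == NULL)
		rte_exit(EXIT_FAILURE, "Failed to init event helper conf\n");

	/* ... parse args; optionally switch to event mode ... */
	eh_conf->mode = EH_PKT_TRANSFER_MODE_EVENT;

	if (eh_devs_init(eh_conf) < 0)
		rte_exit(EXIT_FAILURE, "Failed to init event mode devices\n");

	/* ... launch workers and run until shutdown ... */

	eh_devs_uninit(eh_conf);
	eh_conf_uninit(eh_conf);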
Signed-off-by: Anoob Joseph Signed-off-by: Lukasz Bartosik --- examples/ipsec-secgw/event_helper.c | 103 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 23 ++++++++ 2 files changed, 126 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index fe047ab..0854fc2 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1379,6 +1379,109 @@ eh_display_link_conf(struct eventmode_conf *em_conf) EH_LOG_INFO(""); } +struct eh_conf * +eh_conf_init(void) +{ + struct eventmode_conf *em_conf = NULL; + struct eh_conf *conf = NULL; + unsigned int eth_core_id; + void *bitmap = NULL; + uint32_t nb_bytes; + + /* Allocate memory for config */ + conf = calloc(1, sizeof(struct eh_conf)); + if (conf == NULL) { + EH_LOG_ERR("Failed to allocate memory for eventmode helper " + "config"); + return NULL; + } + + /* Set default conf */ + + /* Packet transfer mode: poll */ + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + + /* Keep all ethernet ports enabled by default */ + conf->eth_portmask = -1; + + /* Allocate memory for event mode params */ + conf->mode_params = calloc(1, sizeof(struct eventmode_conf)); + if (conf->mode_params == NULL) { + EH_LOG_ERR("Failed to allocate memory for event mode params"); + goto free_conf; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Allocate and initialize bitmap for eth cores */ + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); + if (!nb_bytes) { + EH_LOG_ERR("Failed to get bitmap footprint"); + goto free_em_conf; + } + + bitmap = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, + RTE_CACHE_LINE_SIZE); + if (!bitmap) { + EH_LOG_ERR("Failed to allocate memory for eth cores bitmap\n"); + goto free_em_conf; + } + + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, bitmap, + nb_bytes); + if (!em_conf->eth_core_mask) { + EH_LOG_ERR("Failed to initialize bitmap"); + goto free_bitmap; + } + + /* Set schedule type as not set */ + em_conf->ext_params.sched_type = SCHED_TYPE_NOT_SET; + + /* Set two cores as eth cores for Rx & Tx */ + + /* Use first core other than master core as Rx core */ + eth_core_id = rte_get_next_lcore(0, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + /* Use next core as Tx core */ + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + return conf; + +free_bitmap: + rte_free(bitmap); +free_em_conf: + free(em_conf); +free_conf: + free(conf); + return NULL; +} + +void +eh_conf_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf = NULL; + + if (!conf || !conf->mode_params) + return; + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Free evenmode configuration memory */ + rte_free(em_conf->eth_core_mask); + free(em_conf); + free(conf); +} + void eh_display_conf(struct eh_conf *conf) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 25c8563..e17cab1 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -46,6 +46,9 @@ /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS +/* Used to indicate that queue schedule type is not set */ +#define SCHED_TYPE_NOT_SET 3 + /** * Packet transfer mode of the application */ @@ -200,6 +203,26 @@ struct eh_app_worker_params { }; 
/** + * Allocate memory for event helper configuration and initialize + * it with default values. + * + * @return + * - pointer to event helper configuration structure on success. + * - NULL on failure. + */ +struct eh_conf * +eh_conf_init(void); + +/** + * Uninitialize event helper configuration and release its memory +. * + * @param conf + * Event helper configuration + */ +void +eh_conf_uninit(struct eh_conf *conf); + +/** * Initialize event mode devices * * Application can call this function to get the event devices, eth devices From patchwork Thu Feb 27 16:18:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Lukas Bartosik [C]" X-Patchwork-Id: 66107 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4FC19A055F; Thu, 27 Feb 2020 17:20:52 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B6CFA1C024; Thu, 27 Feb 2020 17:19:18 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 0BD731C01F for ; Thu, 27 Feb 2020 17:19:14 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 01RFttcm011386; Thu, 27 Feb 2020 08:19:14 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=juaDo11IUX09LA29St0jNjkXpA1qgVe8G5ofpTGXnH0=; b=X+GLlfNt+Uz2mTrOjM1HQMkee4oVd3LAxWwQQt9QxQ7UQVELkIo5TbHc0XocS8EPK5ro wKJT8lP6wWeBSHU57UtmvvjGFnOfZKev1pU4Z/HJdmHbUPx5GWzrBGPuaWPHvD3fXhpb 8UHqdAgTEa56eJYmpqs7Lk9UDLGTc5U9sCVMtLdQCw1fbl3jadHo+IBD+NuPeIVZvj5q Z5sPlK9rtmtZeGMAqZIjkggnYz+4FbAHVlePrwo6FQPcrVTeknAQHMPavL9ZMVrT1V6X T1XrrV5vFhkAK/iRRsDJ9vwdp8DR1kGJ1cf3isUNOUdB3OLBDrAcDi2TzEX/hkjRdw88 eA== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2ydcm19hb9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 27 Feb 2020 08:19:14 -0800 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:12 -0800 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 27 Feb 2020 08:19:12 -0800 Received: from luke.marvell.com (unknown [10.95.130.81]) by maili.marvell.com (Postfix) with ESMTP id B23A13F703F; Thu, 27 Feb 2020 08:19:09 -0800 (PST) From: Lukasz Bartosik To: Akhil Goyal , Radu Nicolau , Thomas Monjalon CC: Jerin Jacob , Narayana Prasad , Ankur Dwivedi , Anoob Joseph , Archana Muniganti , Tejasree Kondoj , Vamsi Attunuru , "Konstantin Ananyev" , Date: Thu, 27 Feb 2020 17:18:32 +0100 Message-ID: <1582820317-7333-11-git-send-email-lbartosik@marvell.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1582820317-7333-1-git-send-email-lbartosik@marvell.com> References: <1582185727-6749-1-git-send-email-lbartosik@marvell.com> <1582820317-7333-1-git-send-email-lbartosik@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.572 definitions=2020-02-27_05:2020-02-26, 2020-02-27 signatures=0 Subject: [dpdk-dev] 
[PATCH v5 10/15] examples/ipsec-secgw: add eventmode to ipsec-secgw X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add eventmode support to ipsec-secgw. With the aid of event helper configure and use the eventmode capabilities. Signed-off-by: Anoob Joseph Signed-off-by: Lukasz Bartosik --- examples/ipsec-secgw/event_helper.c | 3 + examples/ipsec-secgw/event_helper.h | 14 ++ examples/ipsec-secgw/ipsec-secgw.c | 301 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec.h | 24 +++ examples/ipsec-secgw/sa.c | 21 +-- examples/ipsec-secgw/sad.h | 5 - 6 files changed, 340 insertions(+), 28 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 0854fc2..076f1f2 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -960,6 +960,8 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, else curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + curr_conf.cap.ipsec_mode = conf->ipsec_mode; + /* Parse the passed list and see if we have matching capabilities */ /* Initialize the pointer used to traverse the list */ @@ -1400,6 +1402,7 @@ eh_conf_init(void) /* Packet transfer mode: poll */ conf->mode = EH_PKT_TRANSFER_MODE_POLL; + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; /* Keep all ethernet ports enabled by default */ conf->eth_portmask = -1; diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index e17cab1..b65b343 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -73,6 +73,14 @@ enum eh_tx_types { EH_TX_TYPE_NO_INTERNAL_PORT }; +/** + * Event mode ipsec mode types + */ +enum eh_ipsec_mode_types { + EH_IPSEC_MODE_TYPE_APP = 0, + EH_IPSEC_MODE_TYPE_DRIVER +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -182,6 +190,10 @@ struct eh_conf { */ void *mode_params; /**< Mode specific parameters */ + + /** Application specific params */ + enum eh_ipsec_mode_types ipsec_mode; + /**< Mode of ipsec run */ }; /* Workers registered by the application */ @@ -193,6 +205,8 @@ struct eh_app_worker_params { /**< Specify status of rx type burst */ uint64_t tx_internal_port : 1; /**< Specify whether tx internal port is available */ + uint64_t ipsec_mode : 1; + /**< Specify ipsec processing level */ }; uint64_t u64; } cap; diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index e1ee7c3..0f692d7 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -2,6 +2,7 @@ * Copyright(c) 2016 Intel Corporation */ +#include #include #include #include @@ -14,9 +15,11 @@ #include #include #include +#include #include #include +#include #include #include #include @@ -41,13 +44,17 @@ #include #include #include +#include #include #include +#include "event_helper.h" #include "ipsec.h" #include "parser.h" #include "sad.h" +volatile bool force_quit; + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define MAX_JUMBO_PKT_LEN 9600 @@ -134,12 +141,20 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" +#define CMD_LINE_OPT_SCHEDULE_TYPE "event-schedule-type" #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" #define 
CMD_LINE_OPT_TX_OFFLOAD "txoffload" #define CMD_LINE_OPT_REASSEMBLE "reassemble" #define CMD_LINE_OPT_MTU "mtu" #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" +#define CMD_LINE_ARG_EVENT "event" +#define CMD_LINE_ARG_POLL "poll" +#define CMD_LINE_ARG_ORDERED "ordered" +#define CMD_LINE_ARG_ATOMIC "atomic" +#define CMD_LINE_ARG_PARALLEL "parallel" + enum { /* long options mapped to a short option */ @@ -150,6 +165,8 @@ enum { CMD_LINE_OPT_CONFIG_NUM, CMD_LINE_OPT_SINGLE_SA_NUM, CMD_LINE_OPT_CRYPTODEV_MASK_NUM, + CMD_LINE_OPT_TRANSFER_MODE_NUM, + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, CMD_LINE_OPT_RX_OFFLOAD_NUM, CMD_LINE_OPT_TX_OFFLOAD_NUM, CMD_LINE_OPT_REASSEMBLE_NUM, @@ -161,6 +178,8 @@ static const struct option lgopts[] = { {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, {CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM}, {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM}, + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM}, {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM}, {CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM}, @@ -1199,13 +1218,19 @@ main_loop(__attribute__((unused)) void *dummy) } static int32_t -check_params(void) +check_poll_mode_params(struct eh_conf *eh_conf) { uint8_t lcore; uint16_t portid; uint16_t i; int32_t socket_id; + if (!eh_conf) + return -EINVAL; + + if (eh_conf->mode != EH_PKT_TRANSFER_MODE_POLL) + return 0; + if (lcore_params == NULL) { printf("Error: No port/queue/core mappings\n"); return -1; @@ -1292,6 +1317,8 @@ print_usage(const char *prgname) " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" " [--cryptodev_mask MASK]" + " [--transfer-mode MODE]" + " [--event-schedule-type TYPE]" " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]" @@ -1310,11 +1337,24 @@ print_usage(const char *prgname) " -c specifies inbound SAD cache size,\n" " zero value disables the cache (default value: 128)\n" " -f CONFIG_FILE: Configuration file\n" - " --config (port,queue,lcore): Rx queue configuration\n" + " --config (port,queue,lcore): Rx queue configuration. 
In poll\n" + " mode determines which queues from\n" + " which ports are mapped to which cores.\n" + " In event mode this option is not used\n" + " as packets are dynamically scheduled\n" + " to cores by HW.\n" " --single-sa SAIDX: Use single SA index for outbound traffic,\n" " bypassing the SP\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" + " --transfer-mode MODE\n" + " \"poll\" : Packet transfer via polling (default)\n" + " \"event\" : Packet transfer via event device\n" + " --event-schedule-type TYPE queue schedule type, used only when\n" + " transfer mode is set to event\n" + " \"ordered\" : Ordered (default)\n" + " \"atomic\" : Atomic\n" + " \"parallel\" : Parallel\n" " --" CMD_LINE_OPT_RX_OFFLOAD ": bitmask of the RX HW offload capabilities to enable/use\n" " (DEV_RX_OFFLOAD_*)\n" @@ -1449,8 +1489,45 @@ print_app_sa_prm(const struct app_sa_prm *prm) printf("Frag TTL: %" PRIu64 " ns\n", frag_ttl_ns); } +static int +parse_transfer_mode(struct eh_conf *conf, const char *optarg) +{ + if (!strcmp(CMD_LINE_ARG_POLL, optarg)) + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + else if (!strcmp(CMD_LINE_ARG_EVENT, optarg)) + conf->mode = EH_PKT_TRANSFER_MODE_EVENT; + else { + printf("Unsupported packet transfer mode\n"); + return -EINVAL; + } + + return 0; +} + +static int +parse_schedule_type(struct eh_conf *conf, const char *optarg) +{ + struct eventmode_conf *em_conf = NULL; + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + if (!strcmp(CMD_LINE_ARG_ORDERED, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; + else if (!strcmp(CMD_LINE_ARG_ATOMIC, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ATOMIC; + else if (!strcmp(CMD_LINE_ARG_PARALLEL, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_PARALLEL; + else { + printf("Unsupported queue schedule type\n"); + return -EINVAL; + } + + return 0; +} + static int32_t -parse_args(int32_t argc, char **argv) +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) { int opt; int64_t ret; @@ -1548,6 +1625,7 @@ parse_args(int32_t argc, char **argv) /* else */ single_sa = 1; single_sa_idx = ret; + eh_conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; printf("Configured with single SA index %u\n", single_sa_idx); break; @@ -1562,6 +1640,25 @@ parse_args(int32_t argc, char **argv) /* else */ enabled_cryptodev_mask = ret; break; + + case CMD_LINE_OPT_TRANSFER_MODE_NUM: + ret = parse_transfer_mode(eh_conf, optarg); + if (ret < 0) { + printf("Invalid packet transfer mode\n"); + print_usage(prgname); + return -1; + } + break; + + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: + ret = parse_schedule_type(eh_conf, optarg); + if (ret < 0) { + printf("Invalid queue schedule type\n"); + print_usage(prgname); + return -1; + } + break; + case CMD_LINE_OPT_RX_OFFLOAD_NUM: ret = parse_mask(optarg, &dev_rx_offload); if (ret != 0) { @@ -2476,16 +2573,141 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) port_id); } +static void +signal_handler(int signum) +{ + if (signum == SIGINT || signum == SIGTERM) { + printf("\n\nSignal %d received, preparing to exit...\n", + signum); + force_quit = true; + } +} + +static void +ev_mode_sess_verify(struct ipsec_sa *sa, int nb_sa) +{ + struct rte_ipsec_session *ips; + int32_t i; + + if (!sa || !nb_sa) + return; + + for (i = 0; i < nb_sa; i++) { + ips = ipsec_get_primary_session(&sa[i]); + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) + rte_exit(EXIT_FAILURE, "Event mode supports only " + "inline protocol sessions\n"); + } 
+ +} + +static int32_t +check_event_mode_params(struct eh_conf *eh_conf) +{ + struct eventmode_conf *em_conf = NULL; + struct lcore_params *params; + uint16_t portid; + + if (!eh_conf || !eh_conf->mode_params) + return -EINVAL; + + /* Get eventmode conf */ + em_conf = eh_conf->mode_params; + + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL && + em_conf->ext_params.sched_type != SCHED_TYPE_NOT_SET) { + printf("error: option --event-schedule-type applies only to " + "event mode\n"); + return -EINVAL; + } + + if (eh_conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + /* Set schedule type to ORDERED if it wasn't explicitly set by user */ + if (em_conf->ext_params.sched_type == SCHED_TYPE_NOT_SET) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; + + /* + * Event mode currently supports only inline protocol sessions. + * If there are other types of sessions configured then exit with + * error. + */ + ev_mode_sess_verify(sa_in, nb_sa_in); + ev_mode_sess_verify(sa_out, nb_sa_out); + + + /* Option --config does not apply to event mode */ + if (nb_lcore_params > 0) { + printf("error: option --config applies only to poll mode\n"); + return -EINVAL; + } + + /* + * In order to use the same port_init routine for both poll and event + * modes initialize lcore_params with one queue for each eth port + */ + lcore_params = lcore_params_array; + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + + params = &lcore_params[nb_lcore_params++]; + params->port_id = portid; + params->queue_id = 0; + params->lcore_id = 0; + } + + return 0; +} + +static void +inline_sessions_free(struct sa_ctx *sa_ctx) +{ + struct rte_ipsec_session *ips; + struct ipsec_sa *sa; + int32_t ret; + uint32_t i; + + if (!sa_ctx) + return; + + for (i = 0; i < sa_ctx->nb_sa; i++) { + + sa = &sa_ctx->sa[i]; + if (!sa->spi) + continue; + + ips = ipsec_get_primary_session(sa); + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL && + ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) + continue; + + if (!rte_eth_dev_is_valid_port(sa->portid)) + continue; + + ret = rte_security_session_destroy( + rte_eth_dev_get_sec_ctx(sa->portid), + ips->security.ses); + if (ret) + RTE_LOG(ERR, IPSEC, "Failed to destroy security " + "session type %d, spi %d\n", + ips->type, sa->spi); + } +} + int32_t main(int32_t argc, char **argv) { int32_t ret; uint32_t lcore_id; + uint32_t cdev_id; uint32_t i; uint8_t socket_id; uint16_t portid; uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; + struct eh_conf *eh_conf = NULL; size_t sess_sz; /* init EAL */ @@ -2495,8 +2717,17 @@ main(int32_t argc, char **argv) argc -= ret; argv += ret; + force_quit = false; + signal(SIGINT, signal_handler); + signal(SIGTERM, signal_handler); + + /* initialize event helper configuration */ + eh_conf = eh_conf_init(); + if (eh_conf == NULL) + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); + /* parse application arguments (after the EAL ones) */ - ret = parse_args(argc, argv); + ret = parse_args(argc, argv, eh_conf); if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid parameters\n"); @@ -2513,8 +2744,11 @@ main(int32_t argc, char **argv) rte_exit(EXIT_FAILURE, "Invalid unprotected portmask 0x%x\n", unprotected_port_mask); - if (check_params() < 0) - rte_exit(EXIT_FAILURE, "check_params failed\n"); + if (check_poll_mode_params(eh_conf) < 0) + rte_exit(EXIT_FAILURE, "check_poll_mode_params failed\n"); + + if (check_event_mode_params(eh_conf) < 0) + rte_exit(EXIT_FAILURE, 
"check_event_mode_params failed\n"); ret = init_lcore_rx_queues(); if (ret < 0) @@ -2555,6 +2789,18 @@ main(int32_t argc, char **argv) cryptodevs_init(); + /* + * Set the enabled port mask in helper config for use by helper + * sub-system. This will be used while initializing devices using + * helper sub-system. + */ + eh_conf->eth_portmask = enabled_port_mask; + + /* Initialize eventmode components */ + ret = eh_devs_init(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); + /* start ports */ RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << portid)) == 0) @@ -2614,5 +2860,48 @@ main(int32_t argc, char **argv) return -1; } + /* Uninitialize eventmode components */ + ret = eh_devs_uninit(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret); + + /* Free eventmode configuration memory */ + eh_conf_uninit(eh_conf); + + /* Destroy inline inbound and outbound sessions */ + for (i = 0; i < NB_SOCKETS && i < rte_socket_count(); i++) { + socket_id = rte_socket_id_by_idx(i); + inline_sessions_free(socket_ctx[socket_id].sa_in); + inline_sessions_free(socket_ctx[socket_id].sa_out); + } + + for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { + printf("Closing cryptodev %d...", cdev_id); + rte_cryptodev_stop(cdev_id); + rte_cryptodev_close(cdev_id); + printf(" Done\n"); + } + + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + + printf("Closing port %d...", portid); + if (flow_info_tbl[portid].rx_def_flow) { + struct rte_flow_error err; + + ret = rte_flow_destroy(portid, + flow_info_tbl[portid].rx_def_flow, &err); + if (ret) + RTE_LOG(ERR, IPSEC, "Failed to destroy flow " + " for port %u, err msg: %s\n", portid, + err.message); + } + rte_eth_dev_stop(portid); + rte_eth_dev_close(portid); + printf(" Done\n"); + } + printf("Bye...\n"); + return 0; } diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 8f5d382..ec3d60b 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -159,6 +159,24 @@ struct ipsec_sa { struct rte_security_session_conf sess_conf; } __rte_cache_aligned; +struct ipsec_xf { + struct rte_crypto_sym_xform a; + struct rte_crypto_sym_xform b; +}; + +struct ipsec_sad { + struct rte_ipsec_sad *sad_v4; + struct rte_ipsec_sad *sad_v6; +}; + +struct sa_ctx { + void *satbl; /* pointer to array of rte_ipsec_sa objects*/ + struct ipsec_sad sad; + struct ipsec_xf *xf; + uint32_t nb_sa; + struct ipsec_sa sa[]; +}; + struct ipsec_mbuf_metadata { struct ipsec_sa *sa; struct rte_crypto_op cop; @@ -253,6 +271,12 @@ struct ipsec_traffic { struct traffic_type ip6; }; +extern struct ipsec_sa *sa_out; +extern uint32_t nb_sa_out; + +extern struct ipsec_sa *sa_in; +extern uint32_t nb_sa_in; + uint16_t ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t len); diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index 4822d6b..0eb52d1 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -135,14 +135,14 @@ const struct supported_aead_algo aead_algos[] = { #define SA_INIT_NB 128 -static struct ipsec_sa *sa_out; +struct ipsec_sa *sa_out; +uint32_t nb_sa_out; static uint32_t sa_out_sz; -static uint32_t nb_sa_out; static struct ipsec_sa_cnt sa_out_cnt; -static struct ipsec_sa *sa_in; +struct ipsec_sa *sa_in; +uint32_t nb_sa_in; static uint32_t sa_in_sz; -static uint32_t nb_sa_in; static struct ipsec_sa_cnt sa_in_cnt; static const struct supported_cipher_algo * 
@@ -826,19 +826,6 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) printf("\n"); } -struct ipsec_xf { - struct rte_crypto_sym_xform a; - struct rte_crypto_sym_xform b; -}; - -struct sa_ctx { - void *satbl; /* pointer to array of rte_ipsec_sa objects*/ - struct ipsec_sad sad; - struct ipsec_xf *xf; - uint32_t nb_sa; - struct ipsec_sa sa[]; -}; - static struct sa_ctx * sa_create(const char *name, int32_t socket_id, uint32_t nb_sa) { diff --git a/examples/ipsec-secgw/sad.h b/examples/ipsec-secgw/sad.h index 55712ba..473aaa9 100644 --- a/examples/ipsec-secgw/sad.h +++ b/examples/ipsec-secgw/sad.h @@ -18,11 +18,6 @@ struct ipsec_sad_cache { RTE_DECLARE_PER_LCORE(struct ipsec_sad_cache, sad_cache); -struct ipsec_sad { - struct rte_ipsec_sad *sad_v4; - struct rte_ipsec_sad *sad_v6; -}; - int ipsec_sad_create(const char *name, struct ipsec_sad *sad, int socket_id, struct ipsec_sa_cnt *sa_cnt); From patchwork Thu Feb 27 16:18:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Lukas Bartosik [C]" X-Patchwork-Id: 66108 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 13075A055F; Thu, 27 Feb 2020 17:21:10 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8C3311C013; Thu, 27 Feb 2020 17:19:29 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id B81FF1C030 for ; Thu, 27 Feb 2020 17:19:18 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 01RFsgaq002477; Thu, 27 Feb 2020 08:19:18 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=SXibhsIOQwHZjIEc5DXawKWtqj3fSru9LBn8RhkHsqU=; b=wzirMg0Rqth/yjt/tnnk0Er6gyhYq3cFxzmb9b6GaeVUuiQGGHRER/apySTPB3KOYZ3j 2yCDFiqwDTsXuo5vjbs7Qc5qGcWOrPfxp9MGZ0bMuFzimlAkUWQWMOwaXcLwcP1e00WH sIrXsTYCen4KKbOPmbS04Z1Vg58moYr0UtoGED+u5+q0A9klCYpJw9ud5oq2TxghXOsk xAZ56cm/Ew1wS5aFIORu4ofbrtpiMGTrES+ZN59xISqFMz03XHV8SxGrpIoRrlho/vRh xUJY8kEqUUEa0cB0JqFdN+S/mJUE5yz2JH+snYog+LlRtYDaSxTlfRwZ+bdoHV/OnisC nw== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2ydchth8g6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 27 Feb 2020 08:19:18 -0800 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:15 -0800 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 27 Feb 2020 08:19:15 -0800 Received: from luke.marvell.com (unknown [10.95.130.81]) by maili.marvell.com (Postfix) with ESMTP id EEB363F7040; Thu, 27 Feb 2020 08:19:12 -0800 (PST) From: Lukasz Bartosik To: Akhil Goyal , Radu Nicolau , Thomas Monjalon CC: Jerin Jacob , Narayana Prasad , Ankur Dwivedi , Anoob Joseph , Archana Muniganti , Tejasree Kondoj , Vamsi Attunuru , "Konstantin Ananyev" , Date: Thu, 27 Feb 2020 17:18:33 +0100 Message-ID: <1582820317-7333-12-git-send-email-lbartosik@marvell.com> X-Mailer: git-send-email 2.7.4 
In-Reply-To: <1582820317-7333-1-git-send-email-lbartosik@marvell.com> References: <1582185727-6749-1-git-send-email-lbartosik@marvell.com> <1582820317-7333-1-git-send-email-lbartosik@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.572 definitions=2020-02-27_05:2020-02-26, 2020-02-27 signatures=0 Subject: [dpdk-dev] [PATCH v5 11/15] examples/ipsec-secgw: add driver mode worker X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add driver inbound and outbound worker thread for ipsec-secgw. In driver mode the application does as little as possible. It simply forwards packets back to the port from which the traffic was received, instructing the HW to apply inline security processing using the first outbound SA configured for the given port. If a port has no SA configured, outbound traffic on that port is silently dropped. The aim of this mode is to measure HW capabilities. Driver mode is selected with the --single-sa option. The --single-sa option accepts an SA index; in event mode, however, that index is ignored. Example command to run ipsec-secgw in driver mode: ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel --single-sa 0 Signed-off-by: Anoob Joseph Signed-off-by: Ankur Dwivedi Signed-off-by: Lukasz Bartosik --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/ipsec-secgw.c | 34 +++--- examples/ipsec-secgw/ipsec-secgw.h | 25 +++++ examples/ipsec-secgw/ipsec.h | 11 ++ examples/ipsec-secgw/ipsec_worker.c | 218 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/meson.build | 2 +- 6 files changed, 272 insertions(+), 19 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec-secgw.h create mode 100644 examples/ipsec-secgw/ipsec_worker.c diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index 66d05d4..c4a272a 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -16,6 +16,7 @@ SRCS-y += sad.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += ipsec_worker.c SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 0f692d7..9ad4be5 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -71,8 +71,6 @@ volatile bool force_quit; #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ -#define NB_SOCKETS 4 - /* Configure how many packets ahead to prefetch, when reading packets */ #define PREFETCH_OFFSET 3 @@ -80,8 +78,6 @@ volatile bool force_quit; #define MAX_LCORE_PARAMS 1024 -#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << portid)) - /* * Configurable number of RX/TX ring descriptors */ @@ -188,15 +184,15 @@ static const struct option lgopts[] = { {NULL, 0, 0, 0} }; +uint32_t unprotected_port_mask; +uint32_t single_sa_idx; /* mask of enabled ports */ static uint32_t enabled_port_mask; static uint64_t enabled_cryptodev_mask = UINT64_MAX; -static uint32_t unprotected_port_mask; static int32_t promiscuous_on = 1; static int32_t numa_on = 1; /**< NUMA is enabled by default. 
*/ static uint32_t nb_lcores; static uint32_t single_sa; -static uint32_t single_sa_idx; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. @@ -282,7 +278,7 @@ static struct rte_eth_conf port_conf = { }, }; -static struct socket_ctx socket_ctx[NB_SOCKETS]; +struct socket_ctx socket_ctx[NB_SOCKETS]; /* * Determine is multi-segment support required: @@ -1003,12 +999,12 @@ process_pkts(struct lcore_conf *qconf, struct rte_mbuf **pkts, prepare_traffic(pkts, &traffic, nb_pkts); if (unlikely(single_sa)) { - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) process_pkts_inbound_nosp(&qconf->inbound, &traffic); else process_pkts_outbound_nosp(&qconf->outbound, &traffic); } else { - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) process_pkts_inbound(&qconf->inbound, &traffic); else process_pkts_outbound(&qconf->outbound, &traffic); @@ -1119,8 +1115,8 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf, } /* main processing loop */ -static int32_t -main_loop(__attribute__((unused)) void *dummy) +void +ipsec_poll_mode_worker(void) { struct rte_mbuf *pkts[MAX_PKT_BURST]; uint32_t lcore_id; @@ -1164,13 +1160,13 @@ main_loop(__attribute__((unused)) void *dummy) RTE_LOG(ERR, IPSEC, "SAD cache init on lcore %u, failed with code: %d\n", lcore_id, rc); - return rc; + return; } if (qconf->nb_rx_queue == 0) { RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", lcore_id); - return 0; + return; } RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); @@ -1183,7 +1179,7 @@ main_loop(__attribute__((unused)) void *dummy) lcore_id, portid, queueid); } - while (1) { + while (!force_quit) { cur_tsc = rte_rdtsc(); /* TX queue buffer drain */ @@ -1207,7 +1203,7 @@ main_loop(__attribute__((unused)) void *dummy) process_pkts(qconf, pkts, nb_rx, portid); /* dequeue and process completed crypto-ops */ - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) drain_inbound_crypto_queues(qconf, &qconf->inbound); else @@ -1343,8 +1339,10 @@ print_usage(const char *prgname) " In event mode this option is not used\n" " as packets are dynamically scheduled\n" " to cores by HW.\n" - " --single-sa SAIDX: Use single SA index for outbound traffic,\n" - " bypassing the SP\n" + " --single-sa SAIDX: In poll mode use single SA index for\n" + " outbound traffic, bypassing the SP\n" + " In event mode selects driver submode,\n" + " SA index value is ignored\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" " --transfer-mode MODE\n" @@ -2854,7 +2852,7 @@ main(int32_t argc, char **argv) check_all_ports_link_status(enabled_port_mask); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); RTE_LCORE_FOREACH_SLAVE(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h new file mode 100644 index 0000000..a07a920 --- /dev/null +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#ifndef _IPSEC_SECGW_H_ +#define _IPSEC_SECGW_H_ + +#include + +#define NB_SOCKETS 4 + +/* Port mask to identify the unprotected ports */ +extern uint32_t unprotected_port_mask; + +/* Index of SA in single mode */ +extern uint32_t single_sa_idx; + +extern volatile bool force_quit; + +static inline uint8_t +is_unprotected_port(uint16_t port_id) +{ + return unprotected_port_mask & (1 << port_id); +} + +#endif /* _IPSEC_SECGW_H_ */ diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index ec3d60b..ad913bf 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -13,6 +13,8 @@ #include #include +#include "ipsec-secgw.h" + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 @@ -271,6 +273,15 @@ struct ipsec_traffic { struct traffic_type ip6; }; +/* Socket ctx */ +extern struct socket_ctx socket_ctx[NB_SOCKETS]; + +void +ipsec_poll_mode_worker(void); + +int +ipsec_launch_one_lcore(void *args); + extern struct ipsec_sa *sa_out; extern uint32_t nb_sa_out; diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c new file mode 100644 index 0000000..b7a1ef9 --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -0,0 +1,218 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2016 Intel Corporation + * Copyright (C) 2020 Marvell International Ltd. + */ +#include + +#include "event_helper.h" +#include "ipsec.h" +#include "ipsec-secgw.h" + +static inline void +ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) +{ + /* Save the destination port in the mbuf */ + m->port = port_id; + + /* Save eth queue for Tx */ + rte_event_eth_tx_adapter_txq_set(m, 0); +} + +static inline void +prepare_out_sessions_tbl(struct sa_ctx *sa_out, + struct rte_security_session **sess_tbl, uint16_t size) +{ + struct rte_ipsec_session *pri_sess; + struct ipsec_sa *sa; + uint32_t i; + + if (!sa_out) + return; + + for (i = 0; i < sa_out->nb_sa; i++) { + + sa = &sa_out->sa[i]; + if (!sa) + continue; + + pri_sess = ipsec_get_primary_session(sa); + if (!pri_sess) + continue; + + if (pri_sess->type != + RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + + RTE_LOG(ERR, IPSEC, "Invalid session type %d\n", + pri_sess->type); + continue; + } + + if (sa->portid >= size) { + RTE_LOG(ERR, IPSEC, + "Port id >= than table size %d, %d\n", + sa->portid, size); + continue; + } + + /* Use only first inline session found for a given port */ + if (sess_tbl[sa->portid]) + continue; + sess_tbl[sa->portid] = pri_sess->security.ses; + } +} + +/* + * Event mode exposes various operating modes depending on the + * capabilities of the event device and the operating mode + * selected. + */ + +/* Workers registered */ +#define IPSEC_EVENTMODE_WORKERS 1 + +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - driver mode + */ +static void +ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct rte_security_session *sess_tbl[RTE_MAX_ETHPORTS] = { NULL }; + unsigned int nb_rx = 0; + struct rte_mbuf *pkt; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int16_t port_id; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* + * Prepare security sessions table. 
In outbound driver mode + we always use the first session configured for a given port + */ + prepare_out_sessions_tbl(socket_ctx[socket_id].sa_out, sess_tbl, + RTE_MAX_ETHPORTS); + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "driver mode) on lcore %d\n", lcore_id); + + /* We have valid links */ + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + pkt = ev.mbuf; + port_id = pkt->port; + + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); + + /* Process packet */ + ipsec_event_pre_forward(pkt, port_id); + + if (!is_unprotected_port(port_id)) { + + if (unlikely(!sess_tbl[port_id])) { + rte_pktmbuf_free(pkt); + continue; + } + + /* Save security session */ + pkt->udata64 = (uint64_t) sess_tbl[port_id]; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + } + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + +static uint8_t +ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) +{ + struct eh_app_worker_params *wrkr; + uint8_t nb_wrkr_param = 0; + + /* Save workers */ + wrkr = wrkrs; + + /* Non-burst - Tx internal port - driver mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; + wrkr++; + + return nb_wrkr_param; +} + +static void +ipsec_eventmode_worker(struct eh_conf *conf) +{ + struct eh_app_worker_params ipsec_wrkr[IPSEC_EVENTMODE_WORKERS] = { + {{{0} }, NULL } }; + uint8_t nb_wrkr_param; + + /* Populate ipsec_wrkr params */ + nb_wrkr_param = ipsec_eventmode_populate_wrkr_params(ipsec_wrkr); + + /* + * Launch correct worker after checking + * the event device's capabilities. 
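+ * (eh_launch_worker() matches the registered worker params against + * the event device's capabilities and runs the matching worker.)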
+ */ + eh_launch_worker(conf, ipsec_wrkr, nb_wrkr_param); +} + +int ipsec_launch_one_lcore(void *args) +{ + struct eh_conf *conf; + + conf = (struct eh_conf *)args; + + if (conf->mode == EH_PKT_TRANSFER_MODE_POLL) { + /* Run in poll mode */ + ipsec_poll_mode_worker(); + } else if (conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { + /* Run in event mode */ + ipsec_eventmode_worker(conf); + } + return 0; +} diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 2415d47..f9ba2a2 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -10,5 +10,5 @@ deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( 'esp.c', 'event_helper.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', - 'parser.c', 'rt.c', 'sa.c', 'sad.c', 'sp4.c', 'sp6.c' + 'ipsec_worker.c', 'parser.c', 'rt.c', 'sa.c', 'sad.c', 'sp4.c', 'sp6.c' ) From patchwork Thu Feb 27 16:18:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Lukas Bartosik [C]" X-Patchwork-Id: 66109 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 61A00A055F; Thu, 27 Feb 2020 17:21:20 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AA2631C067; Thu, 27 Feb 2020 17:19:30 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 204C01C02A for ; Thu, 27 Feb 2020 17:19:21 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 01RFtsrU011302; Thu, 27 Feb 2020 08:19:21 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=KvYWiX28s9l3eR85i7h7h5fbw6obvImjHZ7+XvodG/c=; b=NNuKEez9avfzdstSf/yvzqzx+uVit0y/lEwq2Ej6hAn5ULJvjxmg7IGhkqiwEBLiLQYz LFhZm8NOkXq1Ighro1Wv7A5Bt/C2ymzy0h0uReXo6TEpCmBCLWMA00pX+yLr+6ZZxI6Q NWos/4syzrbbh4IbK38fGn6W9pDQRN96CzMayaGFrPxXSOUwIJDCASTFHJ+yQHrVjTGp Wm6227j7pA+blmPOmKEL4NVdAhjN3+rmHaUfKZiJLw7zDuf+7Tb85nO2i135evITCwAe BJs8sJVHs8trFKv1iXdh9sl1fGoy6BFOyGqVz39i/zBl8Zipb5QaUn2J7GjdsXBrkUU/ Hw== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2ydcm19hbm-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 27 Feb 2020 08:19:21 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:19 -0800 Received: from SC-EXCH01.marvell.com (10.93.176.81) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:19 -0800 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 27 Feb 2020 08:19:18 -0800 Received: from luke.marvell.com (unknown [10.95.130.81]) by maili.marvell.com (Postfix) with ESMTP id 0A0403F7043; Thu, 27 Feb 2020 08:19:15 -0800 (PST) From: Lukasz Bartosik To: Akhil Goyal , Radu Nicolau , Thomas Monjalon CC: Jerin Jacob , Narayana Prasad , Ankur Dwivedi , Anoob Joseph , Archana Muniganti , 
Tejasree Kondoj , Vamsi Attunuru , "Konstantin Ananyev" , Date: Thu, 27 Feb 2020 17:18:34 +0100 Message-ID: <1582820317-7333-13-git-send-email-lbartosik@marvell.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1582820317-7333-1-git-send-email-lbartosik@marvell.com> References: <1582185727-6749-1-git-send-email-lbartosik@marvell.com> <1582820317-7333-1-git-send-email-lbartosik@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.572 definitions=2020-02-27_05:2020-02-26, 2020-02-27 signatures=0 Subject: [dpdk-dev] [PATCH v5 12/15] examples/ipsec-secgw: add app mode worker X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add application inbound/outbound worker thread and IPsec application processing code for event mode. Example ipsec-secgw command in app mode: ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel Signed-off-by: Anoob Joseph Signed-off-by: Ankur Dwivedi Signed-off-by: Lukasz Bartosik --- examples/ipsec-secgw/ipsec-secgw.c | 31 +-- examples/ipsec-secgw/ipsec-secgw.h | 63 ++++++ examples/ipsec-secgw/ipsec.h | 16 -- examples/ipsec-secgw/ipsec_worker.c | 435 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec_worker.h | 41 ++++ 5 files changed, 538 insertions(+), 48 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec_worker.h diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 9ad4be5..a03958d 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -50,13 +50,12 @@ #include "event_helper.h" #include "ipsec.h" +#include "ipsec_worker.h" #include "parser.h" #include "sad.h" volatile bool force_quit; -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 - #define MAX_JUMBO_PKT_LEN 9600 #define MEMPOOL_CACHE_SIZE 256 @@ -86,29 +85,6 @@ volatile bool force_quit; static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT; static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((a) & 0xff) << 56) | \ - ((uint64_t)((b) & 0xff) << 48) | \ - ((uint64_t)((c) & 0xff) << 40) | \ - ((uint64_t)((d) & 0xff) << 32) | \ - ((uint64_t)((e) & 0xff) << 24) | \ - ((uint64_t)((f) & 0xff) << 16) | \ - ((uint64_t)((g) & 0xff) << 8) | \ - ((uint64_t)(h) & 0xff)) -#else -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((h) & 0xff) << 56) | \ - ((uint64_t)((g) & 0xff) << 48) | \ - ((uint64_t)((f) & 0xff) << 40) | \ - ((uint64_t)((e) & 0xff) << 32) | \ - ((uint64_t)((d) & 0xff) << 24) | \ - ((uint64_t)((c) & 0xff) << 16) | \ - ((uint64_t)((b) & 0xff) << 8) | \ - ((uint64_t)(a) & 0xff)) -#endif -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) - #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ @@ -120,11 +96,6 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) -/* port/source ethernet addr and destination ethernet addr */ -struct ethaddr_info { - uint64_t src, dst; -}; - struct ethaddr_info 
ethaddr_tbl[RTE_MAX_ETHPORTS] = { { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h index a07a920..4b53cb5 100644 --- a/examples/ipsec-secgw/ipsec-secgw.h +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -8,6 +8,69 @@ #define NB_SOCKETS 4 +#define MAX_PKT_BURST 32 + +#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 + +#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((a) & 0xff) << 56) | \ + ((uint64_t)((b) & 0xff) << 48) | \ + ((uint64_t)((c) & 0xff) << 40) | \ + ((uint64_t)((d) & 0xff) << 32) | \ + ((uint64_t)((e) & 0xff) << 24) | \ + ((uint64_t)((f) & 0xff) << 16) | \ + ((uint64_t)((g) & 0xff) << 8) | \ + ((uint64_t)(h) & 0xff)) +#else +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((h) & 0xff) << 56) | \ + ((uint64_t)((g) & 0xff) << 48) | \ + ((uint64_t)((f) & 0xff) << 40) | \ + ((uint64_t)((e) & 0xff) << 32) | \ + ((uint64_t)((d) & 0xff) << 24) | \ + ((uint64_t)((c) & 0xff) << 16) | \ + ((uint64_t)((b) & 0xff) << 8) | \ + ((uint64_t)(a) & 0xff)) +#endif + +#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) + +struct traffic_type { + const uint8_t *data[MAX_PKT_BURST * 2]; + struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; + void *saptr[MAX_PKT_BURST * 2]; + uint32_t res[MAX_PKT_BURST * 2]; + uint32_t num; +}; + +struct ipsec_traffic { + struct traffic_type ipsec; + struct traffic_type ip4; + struct traffic_type ip6; +}; + +/* Fields optimized for devices without burst */ +struct traffic_type_nb { + const uint8_t *data; + struct rte_mbuf *pkt; + uint32_t res; + uint32_t num; +}; + +struct ipsec_traffic_nb { + struct traffic_type_nb ipsec; + struct traffic_type_nb ip4; + struct traffic_type_nb ip6; +}; + +/* port/source ethernet addr and destination ethernet addr */ +struct ethaddr_info { + uint64_t src, dst; +}; + +extern struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS]; + /* Port mask to identify the unprotected ports */ extern uint32_t unprotected_port_mask; diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index ad913bf..f8f29f9 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -15,11 +15,9 @@ #include "ipsec-secgw.h" -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 -#define MAX_PKT_BURST 32 #define MAX_INFLIGHT 128 #define MAX_QP_PER_LCORE 256 @@ -259,20 +257,6 @@ struct cnt_blk { uint32_t cnt; } __attribute__((packed)); -struct traffic_type { - const uint8_t *data[MAX_PKT_BURST * 2]; - struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; - void *saptr[MAX_PKT_BURST * 2]; - uint32_t res[MAX_PKT_BURST * 2]; - uint32_t num; -}; - -struct ipsec_traffic { - struct traffic_type ipsec; - struct traffic_type ip4; - struct traffic_type ip6; -}; - /* Socket ctx */ extern struct socket_ctx socket_ctx[NB_SOCKETS]; diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index b7a1ef9..5fde667 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -2,11 +2,51 @@ * Copyright(c) 2010-2016 Intel Corporation * Copyright (C) 2020 Marvell International Ltd. 
*/ +#include #include +#include +#include #include "event_helper.h" #include "ipsec.h" #include "ipsec-secgw.h" +#include "ipsec_worker.h" + +static inline enum pkt_type +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) +{ + struct rte_ether_hdr *eth; + + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip, ip_p)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV4; + else + return PKT_TYPE_PLAIN_IPV4; + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip6_hdr, ip6_nxt)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV6; + else + return PKT_TYPE_PLAIN_IPV6; + } + + /* Unknown/Unsupported type */ + return PKT_TYPE_INVALID; +} + +static inline void +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) +{ + struct rte_ether_hdr *ethhdr; + + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + memcpy(ðhdr->s_addr, ðaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN); + memcpy(ðhdr->d_addr, ðaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN); +} static inline void ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) @@ -61,6 +101,290 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, } } +static inline int +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) +{ + uint32_t res; + + if (unlikely(sp == NULL)) + return 0; + + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, + DEFAULT_MAX_CATEGORIES); + + if (unlikely(res == 0)) { + /* No match */ + return 0; + } + + if (res == DISCARD) + return 0; + else if (res == BYPASS) { + *sa_idx = -1; + return 1; + } + + *sa_idx = res - 1; + return 1; +} + +static inline uint16_t +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint32_t dst_ip; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); + dst_ip = rte_be_to_cpu_32(dst_ip); + + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +/* TODO: To be tested */ +static inline uint16_t +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint8_t dst_ip[16]; + uint8_t *ip6_dst; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); + memcpy(&dst_ip[0], ip6_dst, 16); + + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +static inline uint16_t +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) +{ + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) + return route4_pkt(pkt, rt->rt4_ctx); + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) + return route6_pkt(pkt, rt->rt6_ctx); + + return RTE_MAX_ETHPORTS; +} + +static inline int +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct ipsec_sa *sa = NULL; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch 
(type) { + case PKT_TYPE_PLAIN_IPV4: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = pkt->userdata; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + case PKT_TYPE_PLAIN_IPV6: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = pkt->userdata; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + default: + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == BYPASS) + goto route_and_send_pkt; + + /* Validate sa_idx */ + if (sa_idx >= ctx->sa_ctx->nb_sa) + goto drop_pkt_and_exit; + + /* Else the packet has to be protected with SA */ + + /* If the packet was IPsec processed, then SA pointer should be set */ + if (sa == NULL) + goto drop_pkt_and_exit; + + /* SPI on the packet should match with the one in SA */ + if (unlikely(sa->spi != ctx->sa_ctx->sa[sa_idx].spi)) + goto drop_pkt_and_exit; + +route_and_send_pkt: + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return PKT_FORWARDED; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return PKT_DROPPED; +} + +static inline int +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct rte_ipsec_session *sess; + struct sa_ctx *sa_ctx; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + struct ipsec_sa *sa; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + case PKT_TYPE_PLAIN_IPV6: + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + default: + /* + * Only plain IPv4 & IPv6 packets are allowed + * on protected port. Drop the rest. 
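+ * (ESP traffic is expected to arrive on unprotected ports and is + * handled by the inbound event path.)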
+ */ + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == BYPASS) { + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + goto send_pkt; + } + + /* Validate sa_idx */ + if (sa_idx >= ctx->sa_ctx->nb_sa) + goto drop_pkt_and_exit; + + /* Else the packet has to be protected */ + + /* Get SA ctx*/ + sa_ctx = ctx->sa_ctx; + + /* Get SA */ + sa = &(sa_ctx->sa[sa_idx]); + + /* Get IPsec session */ + sess = ipsec_get_primary_session(sa); + + /* Allow only inline protocol for now */ + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); + goto drop_pkt_and_exit; + } + + if (sess->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) + pkt->userdata = sess->security.ses; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + + /* Get the port to which this pkt need to be submitted */ + port_id = sa->portid; + +send_pkt: + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return PKT_FORWARDED; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return PKT_DROPPED; +} + /* * Event mode exposes various operating modes depending on the * capabilities of the event device and the operating mode @@ -68,7 +392,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, */ /* Workers registered */ -#define IPSEC_EVENTMODE_WORKERS 1 +#define IPSEC_EVENTMODE_WORKERS 2 /* * Event mode worker @@ -146,7 +470,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } /* Save security session */ - pkt->udata64 = (uint64_t) sess_tbl[port_id]; + pkt->userdata = sess_tbl[port_id]; /* Mark the packet for Tx security offload */ pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; @@ -165,6 +489,105 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } } +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - app mode + */ +static void +ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct lcore_conf_ev_tx_int_port_wrkr lconf; + unsigned int nb_rx = 0; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int ret; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* We have valid links */ + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* Save routing table */ + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; + lconf.inbound.session_priv_pool = + socket_ctx[socket_id].session_priv_pool; + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; + lconf.outbound.session_priv_pool = + 
socket_ctx[socket_id].session_priv_pool; + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "app mode) on lcore %d\n", lcore_id); + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + if (unlikely(ev.event_type != RTE_EVENT_TYPE_ETHDEV)) { + RTE_LOG(ERR, IPSEC, "Invalid event type %u", + ev.event_type); + + continue; + } + + if (is_unprotected_port(ev.mbuf->port)) + ret = process_ipsec_ev_inbound(&lconf.inbound, + &lconf.rt, &ev); + else + ret = process_ipsec_ev_outbound(&lconf.outbound, + &lconf.rt, &ev); + if (ret != 1) + /* The pkt has been dropped */ + continue; + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + static uint8_t ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) { @@ -180,6 +603,14 @@ ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; wrkr++; + nb_wrkr_param++; + + /* Non-burst - Tx internal port - app mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode; + nb_wrkr_param++; return nb_wrkr_param; } diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h new file mode 100644 index 0000000..5d85cf1 --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.h @@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#ifndef _IPSEC_WORKER_H_ +#define _IPSEC_WORKER_H_ + +#include "ipsec.h" + +enum pkt_type { + PKT_TYPE_PLAIN_IPV4 = 1, + PKT_TYPE_IPSEC_IPV4, + PKT_TYPE_PLAIN_IPV6, + PKT_TYPE_IPSEC_IPV6, + PKT_TYPE_INVALID +}; + +enum { + PKT_DROPPED = 0, + PKT_FORWARDED, + PKT_POSTED /* for lookaside case */ +}; + +struct route_table { + struct rt_ctx *rt4_ctx; + struct rt_ctx *rt6_ctx; +}; + +/* + * Conf required by event mode worker with tx internal port + */ +struct lcore_conf_ev_tx_int_port_wrkr { + struct ipsec_ctx inbound; + struct ipsec_ctx outbound; + struct route_table rt; +} __rte_cache_aligned; + +void ipsec_poll_mode_worker(void); + +int ipsec_launch_one_lcore(void *args); + +#endif /* _IPSEC_WORKER_H_ */ From patchwork Thu Feb 27 16:18:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Lukas Bartosik [C]" X-Patchwork-Id: 66110 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8E059A055F; Thu, 27 Feb 2020 17:21:38 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0B99A1C06B; Thu, 27 Feb 2020 17:19:32 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 1E36B37AF for ; Thu, 27 Feb 2020 17:19:25 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 01RFtriD011125; Thu, 27 Feb 2020 08:19:24 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=sXPoY2Mf2y3U6JWaO9Sb80s21Kwd2PpzzoZQBMG6BuQ=; b=fSuF6EnIMWXdR5XSaOodMVg5i7a5yFaTvKlttfDFmi/5XYynLR9sD3YZtlcA2WQ3LSeH a+dWSpAjz1VIWJ+w3c1VKGEMLp2GMrcC9GmNkOr8Pdb+KDsu2BDMNlVcIqDNMw1zQ58b 401DYwuLpDJg7/wcATAmQm/FmnCw5/8bAZRmzUe9lzJuoZMo+ZYQOTMRHtPLbWg2FTFX UqL8KP4ZVAuEfFfEQ7rd9RPk4sUlNoOivhIoL1EYergzloekFBc8xvtXojqxcIbO24g8 42e1k7Pc1WKK8yybiRO9TZK/LixagmK17BKvBtHHzU3NIPDJx62ybIzbwBMqLhS6e4sL +Q== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2ydcm19hbr-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 27 Feb 2020 08:19:24 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:22 -0800 Received: from SC-EXCH01.marvell.com (10.93.176.81) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 27 Feb 2020 08:19:22 -0800 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 27 Feb 2020 08:19:21 -0800 Received: from luke.marvell.com (unknown [10.95.130.81]) by maili.marvell.com (Postfix) with ESMTP id 46B1D3F703F; Thu, 27 Feb 2020 08:19:19 -0800 (PST) From: Lukasz Bartosik To: Akhil Goyal , Radu Nicolau , Thomas Monjalon CC: Jerin Jacob , Narayana Prasad , Ankur Dwivedi , Anoob Joseph , Archana Muniganti , Tejasree Kondoj , Vamsi Attunuru , "Konstantin Ananyev" , Date: Thu, 27 Feb 2020 17:18:35 +0100 Message-ID: <1582820317-7333-14-git-send-email-lbartosik@marvell.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: 
<1582820317-7333-1-git-send-email-lbartosik@marvell.com> References: <1582185727-6749-1-git-send-email-lbartosik@marvell.com> <1582820317-7333-1-git-send-email-lbartosik@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.572 definitions=2020-02-27_05:2020-02-26, 2020-02-27 signatures=0 Subject: [dpdk-dev] [PATCH v5 13/15] examples/ipsec-secgw: make number of buffers dynamic X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Make number of buffers in a pool nb_mbuf_in_pool dependent on number of ports, cores and crypto queues. Add command line option -s which when used overrides dynamic calculation of number of buffers in a pool. Signed-off-by: Anoob Joseph Signed-off-by: Lukasz Bartosik --- examples/ipsec-secgw/ipsec-secgw.c | 71 ++++++++++++++++++++++++++++++++------ 1 file changed, 60 insertions(+), 11 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index a03958d..5335c4c 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -60,8 +60,6 @@ volatile bool force_quit; #define MEMPOOL_CACHE_SIZE 256 -#define NB_MBUF (32000) - #define CDEV_QUEUE_DESC 2048 #define CDEV_MAP_ENTRIES 16384 #define CDEV_MP_NB_OBJS 1024 @@ -164,6 +162,7 @@ static int32_t promiscuous_on = 1; static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; +static uint32_t nb_bufs_in_pool; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. @@ -1280,6 +1279,7 @@ print_usage(const char *prgname) " [-e]" " [-a]" " [-c]" + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" " -f CONFIG_FILE" " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" @@ -1303,6 +1303,9 @@ print_usage(const char *prgname) " -a enables SA SQN atomic behaviour\n" " -c specifies inbound SAD cache size,\n" " zero value disables the cache (default value: 128)\n" + " -s number of mbufs in packet pool, if not specified number\n" + " of mbufs will be calculated based on number of cores,\n" + " ports and crypto queues\n" " -f CONFIG_FILE: Configuration file\n" " --config (port,queue,lcore): Rx queue configuration. 
In poll\n" " mode determines which queues from\n" @@ -1507,7 +1510,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) argvopt = argv; - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:c:", + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:c:s:", lgopts, &option_index)) != EOF) { switch (opt) { @@ -1541,6 +1544,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) cfgfile = optarg; f_present = 1; break; + + case 's': + ret = parse_decimal(optarg); + if (ret < 0) { + printf("Invalid number of buffers in a pool: " + "%s\n", optarg); + print_usage(prgname); + return -1; + } + + nb_bufs_in_pool = ret; + break; + case 'j': ret = parse_decimal(optarg); if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || @@ -1913,12 +1929,12 @@ check_cryptodev_mask(uint8_t cdev_id) return -1; } -static int32_t +static uint16_t cryptodevs_init(void) { struct rte_cryptodev_config dev_conf; struct rte_cryptodev_qp_conf qp_conf; - uint16_t idx, max_nb_qps, qp, i; + uint16_t idx, max_nb_qps, qp, total_nb_qps, i; int16_t cdev_id; struct rte_hash_parameters params = { 0 }; @@ -1946,6 +1962,7 @@ cryptodevs_init(void) printf("lcore/cryptodev/qp mappings:\n"); idx = 0; + total_nb_qps = 0; for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { struct rte_cryptodev_info cdev_info; @@ -1979,6 +1996,7 @@ cryptodevs_init(void) if (qp == 0) continue; + total_nb_qps += qp; dev_conf.socket_id = rte_cryptodev_socket_id(cdev_id); dev_conf.nb_queue_pairs = qp; dev_conf.ff_disable = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO; @@ -2011,7 +2029,7 @@ cryptodevs_init(void) printf("\n"); - return 0; + return total_nb_qps; } static void @@ -2665,20 +2683,36 @@ inline_sessions_free(struct sa_ctx *sa_ctx) } } +static uint32_t +calculate_nb_mbufs(uint16_t nb_ports, uint16_t nb_crypto_qp, uint32_t nb_rxq, + uint32_t nb_txq) +{ + return RTE_MAX((nb_rxq * nb_rxd + + nb_ports * nb_lcores * MAX_PKT_BURST + + nb_ports * nb_txq * nb_txd + + nb_lcores * MEMPOOL_CACHE_SIZE + + nb_crypto_qp * CDEV_QUEUE_DESC + + nb_lcores * frag_tbl_sz * + FRAG_TBL_BUCKET_ENTRIES), + 8192U); +} + int32_t main(int32_t argc, char **argv) { int32_t ret; - uint32_t lcore_id; + uint32_t lcore_id, nb_txq, nb_rxq = 0; uint32_t cdev_id; uint32_t i; uint8_t socket_id; - uint16_t portid; + uint16_t portid, nb_crypto_qp, nb_ports = 0; uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; struct eh_conf *eh_conf = NULL; size_t sess_sz; + nb_bufs_in_pool = 0; + /* init EAL */ ret = rte_eal_init(argc, argv); if (ret < 0) @@ -2727,6 +2761,22 @@ main(int32_t argc, char **argv) sess_sz = max_session_size(); + nb_crypto_qp = cryptodevs_init(); + + if (nb_bufs_in_pool == 0) { + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + nb_ports++; + nb_rxq += get_port_nb_rx_queues(portid); + } + + nb_txq = nb_lcores; + + nb_bufs_in_pool = calculate_nb_mbufs(nb_ports, nb_crypto_qp, + nb_rxq, nb_txq); + } + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { if (rte_lcore_is_enabled(lcore_id) == 0) continue; @@ -2740,11 +2790,12 @@ main(int32_t argc, char **argv) if (socket_ctx[socket_id].mbuf_pool) continue; - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); session_priv_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); } + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << 
From patchwork Thu Feb 27 16:18:36 2020
X-Patchwork-Submitter: "Lukas Bartosik [C]"
X-Patchwork-Id: 66111
X-Patchwork-Delegate: gakhil@marvell.com
From: Lukasz Bartosik
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev
Date: Thu, 27 Feb 2020 17:18:36 +0100
Message-ID: <1582820317-7333-15-git-send-email-lbartosik@marvell.com>
Subject: [dpdk-dev] [PATCH v5 14/15] doc: add event mode support to ipsec-secgw
Document the addition of event mode support to the ipsec-secgw application.

Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 doc/guides/sample_app_ug/ipsec_secgw.rst | 135 ++++++++++++++++++++++++++-----
 1 file changed, 113 insertions(+), 22 deletions(-)

diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 5ec9b1e..038f593 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -1,5 +1,6 @@
 .. SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2016-2017 Intel Corporation.
+   Copyright (C) 2020 Marvell International Ltd.
 
 IPsec Security Gateway Sample Application
 =========================================
@@ -61,6 +62,44 @@ The Path for the IPsec Outbound traffic is:
 *  Routing.
 *  Write packet to port.
 
+The application supports two modes of operation: poll mode and event mode.
+
+* In the poll mode a core receives packets from a statically configured list
+  of eth ports and eth ports' queues.
+
+* In the event mode a core receives packets as events. After packet processing
+  is done the core submits them back as events to an event device. This enables
+  multicore scaling and HW-assisted scheduling by making use of the event device
+  capabilities. The event mode configuration is predefined. All packets reaching
+  a given eth port will arrive at the same event queue. All event queues are
+  mapped to all event ports, which allows all cores to receive traffic from all
+  ports. Since the underlying event device might have varying capabilities, the
+  worker threads can be written differently to maximize performance. For
+  example, if an event device - eth device pair has a Tx internal port, the
+  application can call rte_event_eth_tx_adapter_enqueue() instead of the
+  regular rte_event_enqueue_burst(), so a thread which assumes that the device
+  pair has an internal port will not be the right solution for another pair
+  (a sketch of such a worker loop follows this patch). The infrastructure added
+  for the event mode aims to let the application have multiple worker threads,
+  maximizing performance from every type of event device without affecting
+  existing paths/use cases. The worker to be used is determined by the
+  operating conditions and the underlying device capabilities. **Currently the
+  application provides non-burst, internal port worker threads and supports
+  inline protocol only.** It also provides infrastructure for non-internal
+  port, however it does not define any worker threads.
+
+Additionally the event mode introduces two submodes of processing packets:
+
+* Driver submode: This submode has bare minimum changes in the application to
+  support IPsec. There are no lookups and no routing done in the application.
+  For the inline protocol use case the worker thread resembles the l2fwd
+  worker thread, as the IPsec processing is done entirely in HW. This mode can
+  be used to benchmark the raw performance of the HW. The driver submode is
+  selected with the --single-sa option (used also by poll mode). When the
+  --single-sa option is used in conjunction with event mode, the index passed
+  to --single-sa is ignored.
+
+* App submode: This submode has all the features currently implemented with the
+  application (non librte_ipsec path). All lookups and routing follow the
+  existing methods, and the reported numbers can be compared against regular
+  poll mode benchmark numbers.
 
 Constraints
 -----------
 
@@ -94,13 +133,18 @@ The application has a number of command line options::
 
    -p PORTMASK -P -u PORTMASK -j FRAMESIZE
    -l -w REPLAY_WINDOW_SIZE -e -a
    -c SAD_CACHE_SIZE
-   --config (port,queue,lcore)[,(port,queue,lcore]
+   -s NUMBER_OF_MBUFS_IN_PACKET_POOL
+   -f CONFIG_FILE_PATH
+   --config (port,queue,lcore)[,(port,queue,lcore)]
    --single-sa SAIDX
+   --cryptodev_mask MASK
+   --transfer-mode MODE
+   --event-schedule-type TYPE
    --rxoffload MASK
    --txoffload MASK
-   --mtu MTU
    --reassemble NUM
-   -f CONFIG_FILE_PATH
+   --mtu MTU
+   --frag-ttl FRAG_TTL_NS
 
 Where:
 
@@ -138,12 +182,37 @@ Where:
    Zero value disables cache.
    Default value: 128.
 
-* ``--config (port,queue,lcore)[,(port,queue,lcore)]``: determines which queues
-  from which ports are mapped to which cores.
+* ``-s``: sets the number of mbufs in the packet pool. If not provided, the
+  number of mbufs is calculated from the number of cores, eth ports and crypto
+  queues.
+
+* ``-f CONFIG_FILE_PATH``: the full path of a text-based file containing all
+  configuration items for running the application (see the Configuration file
+  syntax section below). ``-f CONFIG_FILE_PATH`` **must** be specified.
+  **ONLY** the UNIX format configuration file is accepted.
+
+* ``--config (port,queue,lcore)[,(port,queue,lcore)]``: in poll mode determines
+  which queues from which ports are mapped to which cores. In event mode this
+  option is not used, as packets are dynamically scheduled to cores by HW.
 
-* ``--single-sa SAIDX``: use a single SA for outbound traffic, bypassing the SP
-  on both Inbound and Outbound. This option is meant for debugging/performance
-  purposes.
+* ``--single-sa SAIDX``: in poll mode use a single SA for outbound traffic,
+  bypassing the SP on both Inbound and Outbound. This option is meant for
+  debugging/performance purposes. In event mode it selects the driver submode;
+  the SA index value is ignored.
+
+* ``--cryptodev_mask MASK``: hexadecimal bitmask of the crypto devices
+  to configure.
+
+* ``--transfer-mode MODE``: sets the operating mode of the application.
+    "poll"  : packet transfer via polling (default)
+    "event" : packet transfer via event device
+
+* ``--event-schedule-type TYPE``: queue schedule type; applies only when
+  ``--transfer-mode`` is set to event.
+    "ordered"  : Ordered (default)
+    "atomic"   : Atomic
+    "parallel" : Parallel
+  With the ordered and atomic schedule types the event device preserves event
+  ordering; with the parallel schedule type ordering is not guaranteed.
 
 * ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
   (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
@@ -155,6 +224,10 @@ Where:
   allows user to disable some of the TX HW offload capabilities.
   By default all HW TX offloads are enabled.
 
+* ``--reassemble NUM``: max number of entries in the reassemble fragment table.
+  Zero value disables reassembly functionality.
+  Default value: 0.
+
 * ``--mtu MTU``: MTU value (in bytes) on all attached ethernet ports.
   Outgoing packets with length bigger than MTU will be fragmented.
   Incoming packets with length bigger than MTU will be discarded.
@@ -167,26 +240,17 @@ Where:
   Should be lower for low number of reassembly buckets.
   Valid values: from 1 ns to 10 s. Default value: 10000000 (10 s).
 
-* ``--reassemble NUM``: max number of entries in reassemble fragment table.
-  Zero value disables reassembly functionality.
-  Default value: 0.
-
-* ``-f CONFIG_FILE_PATH``: the full path of text-based file containing all
-  configuration items for running the application (See Configuration file
-  syntax section below). ``-f CONFIG_FILE_PATH`` **must** be specified.
-  **ONLY** the UNIX format configuration file is accepted.
-
 The mapping of lcores to port/queues is similar to other l3fwd applications.
 
-For example, given the following command line::
+For example, given the following command line to run the application in poll
+mode::
 
     ./build/ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
-           --vdev "crypto_null" -- -p 0xf -P -u 0x3 \
+           --vdev "crypto_null" -- -p 0xf -P -u 0x3 \
            --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
-           -f /path/to/config_file \
+           -f /path/to/config_file --transfer-mode poll \
 
-where each options means:
+where each option means:
 
 * The ``-l`` option enables cores 20 and 21.
 
@@ -200,7 +264,7 @@ where each options means:
 
 * The ``-P`` option enables promiscuous mode.
 
-* The ``-u`` option sets ports 1 and 2 as unprotected, leaving 2 and 3 as protected.
+* The ``-u`` option sets ports 0 and 1 as unprotected, leaving 2 and 3 as protected.
 
 * The ``--config`` option enables one queue per port with the following mapping:
 
@@ -228,6 +292,33 @@ where each options means:
   **note** the parser only accepts UNIX format text file. Other formats such as
   DOS/MAC format will cause a parse error.
 
+* The ``--transfer-mode`` option selects poll mode for processing packets.
+
+Similarly, given the following command line to run the application in event
+mode (app submode)::
+
+    ./build/ipsec-secgw -c 0x3 -- -P -p 0x3 -u 0x1 \
+           -f /path/to/config_file --transfer-mode event \
+           --event-schedule-type parallel \
+
+where each option means:
+
+* The ``-c`` option selects cores 0 and 1 to run on.
+
+* The ``-P`` option enables promiscuous mode.
+
+* The ``-p`` option enables detected ports 0 and 1.
+
+* The ``-u`` option sets port 0 as unprotected, leaving 1 as protected.
+
+* The ``-f /path/to/config_file`` option has the same behavior as in the poll
+  mode example.
+
+* The ``--transfer-mode`` option selects event mode for processing packets.
+
+* The ``--event-schedule-type`` option selects the parallel schedule type for
+  event queues.
+
 
 Refer to the *DPDK Getting Started Guide* for general information on running
 applications and the Environment Abstraction Layer (EAL) options.
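The event-mode description in the patch above distinguishes workers for device pairs with and without a Tx internal port. The following standalone sketch (an editor's illustration under assumptions, not code from the series) shows how a non-burst worker loop could branch on that capability. process_ipsec() is a hypothetical stand-in for the application's SA/SP handling, and the rte_event_eth_tx_adapter_enqueue() signature shown is the five-argument form with a flags parameter present in recent DPDK releases; check the release you build against.

#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

/* Hypothetical application-specific handler, omitted here. */
static void process_ipsec(struct rte_event *ev);

static void
worker_loop(uint8_t ev_dev_id, uint8_t ev_port_id, int has_tx_internal_port,
	    uint8_t tx_ev_queue_id)
{
	struct rte_event ev;

	while (1) {
		/* Poll for a single event; returns 0 if none is available. */
		if (rte_event_dequeue_burst(ev_dev_id, ev_port_id,
					    &ev, 1, 0) == 0)
			continue;

		process_ipsec(&ev);

		if (has_tx_internal_port) {
			/* Device pair has a Tx internal port: hand the event
			 * straight to the Tx adapter.
			 */
			rte_event_eth_tx_adapter_enqueue(ev_dev_id, ev_port_id,
							 &ev, 1, 0);
		} else {
			/* No internal port: forward the event to the Tx
			 * adapter's event queue instead.
			 */
			ev.queue_id = tx_ev_queue_id;
			ev.op = RTE_EVENT_OP_FORWARD;
			rte_event_enqueue_burst(ev_dev_id, ev_port_id, &ev, 1);
		}
	}
}

This is why the series keeps the worker selection dynamic: the same application binary picks a loop variant at runtime based on the capabilities the event device and eth device pair report.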
From patchwork Thu Feb 27 16:18:37 2020
X-Patchwork-Submitter: "Lukas Bartosik [C]"
X-Patchwork-Id: 66112
X-Patchwork-Delegate: gakhil@marvell.com
From: Lukasz Bartosik
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev
Date: Thu, 27 Feb 2020 17:18:37 +0100
Message-ID: <1582820317-7333-16-git-send-email-lbartosik@marvell.com>
Subject: [dpdk-dev] [PATCH v5 15/15] examples/ipsec-secgw: reserve crypto queues in event mode

Reserve a minimum number of crypto queues equal to the number of ports. This
is to fulfill inline protocol offload requirements.
Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 examples/ipsec-secgw/ipsec-secgw.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 5335c4c..ce36e6d 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -1930,7 +1930,7 @@ check_cryptodev_mask(uint8_t cdev_id)
 }
 
 static uint16_t
-cryptodevs_init(void)
+cryptodevs_init(uint16_t req_queue_num)
 {
 	struct rte_cryptodev_config dev_conf;
 	struct rte_cryptodev_qp_conf qp_conf;
@@ -1993,6 +1993,7 @@ cryptodevs_init(void)
 			i++;
 		}
 
+		qp = RTE_MIN(max_nb_qps, RTE_MAX(req_queue_num, qp));
 		if (qp == 0)
 			continue;
 
@@ -2761,7 +2762,16 @@ main(int32_t argc, char **argv)
 
 	sess_sz = max_session_size();
 
-	nb_crypto_qp = cryptodevs_init();
+	/*
+	 * In event mode request a minimum number of crypto queues
+	 * to be reserved, equal to the number of ports.
+	 */
+	if (eh_conf->mode == EH_PKT_TRANSFER_MODE_EVENT)
+		nb_crypto_qp = rte_eth_dev_count_avail();
+	else
+		nb_crypto_qp = 0;
+
+	nb_crypto_qp = cryptodevs_init(nb_crypto_qp);
 
 	if (nb_bufs_in_pool == 0) {
 		RTE_ETH_FOREACH_DEV(portid) {
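To see what the added RTE_MIN/RTE_MAX clamp does, the small sketch below (an editor's illustration, not part of the patch; the MIN/MAX macros here mirror the semantics of DPDK's RTE_MIN/RTE_MAX) replays it for three cases: event mode with device headroom, event mode with a constrained device, and poll mode where no queues are requested:

#include <stdio.h>
#include <stdint.h>

/* Same semantics as DPDK's RTE_MIN/RTE_MAX for this illustration. */
#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	/* qp:  queue pairs implied by the lcore-to-cryptodev mapping;
	 * req: minimum requested in event mode (number of eth ports);
	 * cap: max_nb_qps reported by the crypto device.
	 */
	uint16_t qp = 2, req = 4, cap = 8;

	/* Event mode, device has headroom: min(8, max(4, 2)) = 4. */
	printf("%d\n", MIN(cap, MAX(req, qp)));

	/* Device cannot satisfy the request: min(3, max(4, 2)) = 3. */
	cap = 3;
	printf("%d\n", MIN(cap, MAX(req, qp)));

	/* Poll mode (req = 0) keeps the mapping-derived count:
	 * min(8, max(0, 2)) = 2.
	 */
	req = 0;
	cap = 8;
	printf("%d\n", MIN(cap, MAX(req, qp)));
	return 0;
}

In other words, the reservation can raise the queue-pair count up to the device's capability but never beyond it, and in poll mode (req_queue_num = 0) the behavior is unchanged from before the patch.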