From patchwork Thu Feb 20 08:02:00 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Lukas Bartosik [C]"
X-Patchwork-Id: 65948
X-Patchwork-Delegate: gakhil@marvell.com
From: Lukasz Bartosik
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev
Date: Thu, 20 Feb 2020 09:02:00 +0100
Message-ID: <1582185727-6749-9-git-send-email-lbartosik@marvell.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1582185727-6749-1-git-send-email-lbartosik@marvell.com>
References: <1580824721-21527-1-git-send-email-lbartosik@marvell.com> <1582185727-6749-1-git-send-email-lbartosik@marvell.com>
Subject: [dpdk-dev] [PATCH v4 08/15] examples/ipsec-secgw: add support for internal ports
List-Id: DPDK patches and discussions

Add support for Rx and Tx internal ports. When internal ports are
available, a packet can be received from an eth port and forwarded to an
event queue by HW without any software intervention. The same applies on
the Tx side, where a packet sent to an event queue can be forwarded by HW
to an eth port without any software intervention.
Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 examples/ipsec-secgw/event_helper.c | 179 +++++++++++++++++++++++++++++++-----
 examples/ipsec-secgw/event_helper.h |  11 +++
 2 files changed, 167 insertions(+), 23 deletions(-)

diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index e3dfaf5..fe047ab 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -95,6 +95,39 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id)
 	return &(em_conf->eventdev_config[i]);
 }
 
+static inline bool
+eh_dev_has_rx_internal_port(uint8_t eventdev_id)
+{
+	bool flag = true;
+	int j;
+
+	RTE_ETH_FOREACH_DEV(j) {
+		uint32_t caps = 0;
+
+		rte_event_eth_rx_adapter_caps_get(eventdev_id, j, &caps);
+		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
+			flag = false;
+	}
+	return flag;
+}
+
+static inline bool
+eh_dev_has_tx_internal_port(uint8_t eventdev_id)
+{
+	bool flag = true;
+	int j;
+
+	RTE_ETH_FOREACH_DEV(j) {
+		uint32_t caps = 0;
+
+		rte_event_eth_tx_adapter_caps_get(eventdev_id, j, &caps);
+		if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
+			flag = false;
+	}
+	return flag;
+}
+
 static inline bool
 eh_dev_has_burst_mode(uint8_t dev_id)
 {
@@ -175,6 +208,42 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf)
 	return 0;
 }
 
+static void
+eh_do_capability_check(struct eventmode_conf *em_conf)
+{
+	struct eventdev_params *eventdev_config;
+	int all_internal_ports = 1;
+	uint32_t eventdev_id;
+	int i;
+
+	for (i = 0; i < em_conf->nb_eventdev; i++) {
+
+		/* Get the event dev conf */
+		eventdev_config = &(em_conf->eventdev_config[i]);
+		eventdev_id = eventdev_config->eventdev_id;
+
+		/* Check if event device has internal port for Rx & Tx */
+		if (eh_dev_has_rx_internal_port(eventdev_id) &&
+		    eh_dev_has_tx_internal_port(eventdev_id)) {
+			eventdev_config->all_internal_ports = 1;
+		} else {
+			all_internal_ports = 0;
+		}
+	}
+
+	/*
+	 * If Rx & Tx internal ports are supported by all event devices then
+	 * eth cores won't be required. Override the eth core mask requested
+	 * and decrement number of event queues by one as it won't be needed
+	 * for Tx.
+	 */
+	if (all_internal_ports) {
+		rte_bitmap_reset(em_conf->eth_core_mask);
+		for (i = 0; i < em_conf->nb_eventdev; i++)
+			em_conf->eventdev_config[i].nb_eventqueue--;
+	}
+}
+
 static int
 eh_set_default_conf_link(struct eventmode_conf *em_conf)
 {
@@ -246,7 +315,10 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf)
 	struct rx_adapter_connection_info *conn;
 	struct eventdev_params *eventdev_config;
 	struct rx_adapter_conf *adapter;
+	bool rx_internal_port = true;
 	bool single_ev_queue = false;
+	int nb_eventqueue;
+	uint32_t caps = 0;
 	int eventdev_id;
 	int nb_eth_dev;
 	int adapter_id;
@@ -276,14 +348,21 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf)
 	/* Set adapter conf */
 	adapter->eventdev_id = eventdev_id;
 	adapter->adapter_id = adapter_id;
-	adapter->rx_core_id = eh_get_next_eth_core(em_conf);
+
+	/*
+	 * If the event device does not have internal ports for passing
+	 * packets then reserve one queue for the Tx path
+	 */
+	nb_eventqueue = eventdev_config->all_internal_ports ?
+			eventdev_config->nb_eventqueue :
+			eventdev_config->nb_eventqueue - 1;
 
 	/*
 	 * Map all queues of eth device (port) to an event queue. If there
 	 * are more event queues than eth ports then create 1:1 mapping.
 	 * Otherwise map all eth ports to a single event queue.
 	 */
-	if (nb_eth_dev > eventdev_config->nb_eventqueue)
+	if (nb_eth_dev > nb_eventqueue)
 		single_ev_queue = true;
 
 	for (i = 0; i < nb_eth_dev; i++) {
@@ -305,11 +384,24 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf)
 		/* Add all eth queues eth port to event queue */
 		conn->ethdev_rx_qid = -1;
 
+		/* Get Rx adapter capabilities */
+		rte_event_eth_rx_adapter_caps_get(eventdev_id, i, &caps);
+		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
+			rx_internal_port = false;
+
 		/* Update no of connections */
 		adapter->nb_connections++;
 	}
 
+	if (rx_internal_port) {
+		/* Rx core is not required */
+		adapter->rx_core_id = -1;
+	} else {
+		/* Rx core is required */
+		adapter->rx_core_id = eh_get_next_eth_core(em_conf);
+	}
+
 	/* We have setup one adapter */
 	em_conf->nb_rx_adapter = 1;
 
@@ -322,6 +414,8 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf)
 	struct tx_adapter_connection_info *conn;
 	struct eventdev_params *eventdev_config;
 	struct tx_adapter_conf *tx_adapter;
+	bool tx_internal_port = true;
+	uint32_t caps = 0;
 	int eventdev_id;
 	int adapter_id;
 	int nb_eth_dev;
@@ -355,18 +449,6 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf)
 	tx_adapter->eventdev_id = eventdev_id;
 	tx_adapter->adapter_id = adapter_id;
 
-	/* TODO: Tx core is required only when internal port is not present */
-	tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf);
-
-	/*
-	 * Application uses one event queue per adapter for submitting
-	 * packets for Tx. Reserve the last queue available and decrement
-	 * the total available event queues for this
-	 */
-
-	/* Queue numbers start at 0 */
-	tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1;
-
 	/*
 	 * Map all Tx queues of the eth device (port) to the event device.
 	 */
@@ -396,10 +478,30 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf)
 		/* Add all eth tx queues to adapter */
 		conn->ethdev_tx_qid = -1;
 
+		/* Get Tx adapter capabilities */
+		rte_event_eth_tx_adapter_caps_get(eventdev_id, i, &caps);
+		if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
+			tx_internal_port = false;
+
 		/* Update no of connections */
 		tx_adapter->nb_connections++;
 	}
 
+	if (tx_internal_port) {
+		/* Tx core is not required */
+		tx_adapter->tx_core_id = -1;
+	} else {
+		/* Tx core is required */
+		tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf);
+
+		/*
+		 * Use one event queue per adapter for submitting packets
+		 * for Tx. Reserving the last queue available
+		 */
+		/* Queue numbers start at 0 */
+		tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1;
+	}
+
 	/* We have setup one adapter */
 	em_conf->nb_tx_adapter = 1;
 
 	return 0;
@@ -420,6 +522,9 @@ eh_validate_conf(struct eventmode_conf *em_conf)
 			return ret;
 	}
 
+	/* Perform capability check for the selected event devices */
+	eh_do_capability_check(em_conf);
+
 	/*
 	 * Check if links are specified. Else generate a default config for
 	 * the event ports used.
@@ -523,11 +628,13 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf)
 					eventdev_config->ev_queue_mode;
 			/*
 			 * All queues need to be set with sched_type as
-			 * schedule type for the application stage. One queue
-			 * would be reserved for the final eth tx stage. This
-			 * will be an atomic queue.
+			 * schedule type for the application stage. One
+			 * queue would be reserved for the final eth tx
+			 * stage if event device does not have internal
+			 * ports. This will be an atomic queue.
 			 */
-			if (j == nb_eventqueue-1) {
+			if (!eventdev_config->all_internal_ports &&
+			    j == nb_eventqueue-1) {
 				eventq_conf.schedule_type =
 					RTE_SCHED_TYPE_ATOMIC;
 			} else {
@@ -841,6 +948,12 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf,
 
 	/* Populate the curr_conf with the capabilities */
 
+	/* Check for Tx internal port */
+	if (eh_dev_has_tx_internal_port(eventdev_id))
+		curr_conf.cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT;
+	else
+		curr_conf.cap.tx_internal_port = EH_TX_TYPE_NO_INTERNAL_PORT;
+
 	/* Check for burst mode */
 	if (eh_dev_has_burst_mode(eventdev_id))
 		curr_conf.cap.burst = EH_RX_TYPE_BURST;
@@ -1012,6 +1125,16 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf,
 		}
 	}
 
+	/*
+	 * Check if Tx core is assigned. If Tx core is not assigned then
+	 * the adapter has an internal port for submitting Tx packets and
+	 * Tx event queue & port setup is not required
+	 */
+	if (adapter->tx_core_id == (uint32_t) (-1)) {
+		/* Internal port is present */
+		goto skip_tx_queue_port_setup;
+	}
+
 	/* Setup Tx queue & port */
 
 	/* Get event port used by the adapter */
@@ -1051,6 +1174,7 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf,
 	rte_service_set_runstate_mapped_check(service_id, 0);
 
+skip_tx_queue_port_setup:
 	/* Start adapter */
 	ret = rte_event_eth_tx_adapter_start(adapter->adapter_id);
 	if (ret < 0) {
@@ -1135,13 +1259,22 @@ eh_display_rx_adapter_conf(struct eventmode_conf *em_conf)
 	for (i = 0; i < nb_rx_adapter; i++) {
 		adapter = &(em_conf->rx_adapter[i]);
-		EH_LOG_INFO(
-			"\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d"
-			"\tRx core: %-2d",
+		sprintf(print_buf,
+			"\tRx adapter ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d",
 			adapter->adapter_id,
 			adapter->nb_connections,
-			adapter->eventdev_id,
-			adapter->rx_core_id);
+			adapter->eventdev_id);
+		if (adapter->rx_core_id == (uint32_t)-1)
+			sprintf(print_buf + strlen(print_buf),
+				"\tRx core: %-2s", "[INTERNAL PORT]");
+		else if (adapter->rx_core_id == RTE_MAX_LCORE)
+			sprintf(print_buf + strlen(print_buf),
+				"\tRx core: %-2s", "[NONE]");
+		else
+			sprintf(print_buf + strlen(print_buf),
+				"\tRx core: %-2d", adapter->rx_core_id);
+
+		EH_LOG_INFO("%s", print_buf);
 
 		for (j = 0; j < adapter->nb_connections; j++) {
 			conn = &(adapter->conn[j]);
diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index 9a4dfab..25c8563 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -62,12 +62,21 @@ enum eh_rx_types {
 	EH_RX_TYPE_BURST
 };
 
+/**
+ * Event mode packet tx types
+ */
+enum eh_tx_types {
+	EH_TX_TYPE_INTERNAL_PORT = 0,
+	EH_TX_TYPE_NO_INTERNAL_PORT
+};
+
 /* Event dev params */
 struct eventdev_params {
 	uint8_t eventdev_id;
 	uint8_t nb_eventqueue;
 	uint8_t nb_eventport;
 	uint8_t ev_queue_mode;
+	uint8_t all_internal_ports;
 };
 
 /**
@@ -179,6 +188,8 @@ struct eh_app_worker_params {
 		struct {
 			uint64_t burst : 1;
 			/**< Specify status of rx type burst */
+			uint64_t tx_internal_port : 1;
+			/**< Specify whether tx internal port is available */
 		};
 		uint64_t u64;
 	} cap;