From patchwork Mon Jan 20 13:45:14 2020
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 64927
X-Patchwork-Delegate: gakhil@marvell.com
From: Anoob Joseph
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi,
 Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru,
 Konstantin Ananyev,
Date: Mon, 20 Jan 2020 19:15:14 +0530
Message-ID: <1579527918-360-9-git-send-email-anoobj@marvell.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1579527918-360-1-git-send-email-anoobj@marvell.com>
References: <1575808249-31135-1-git-send-email-anoobj@marvell.com>
 <1579527918-360-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [PATCH v2 08/12] examples/ipsec-secgw: add support for internal ports

From: Lukasz Bartosik

Add support for Rx and Tx internal ports. When internal ports are
available, a packet can be received from an eth port and forwarded
to an event queue by HW without any software intervention.
The same applies on the Tx side, where a packet sent to an event
queue can be forwarded by HW to an eth port without any software
intervention.

Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 examples/ipsec-secgw/event_helper.c | 179 +++++++++++++++++++++++++++++++-----
 examples/ipsec-secgw/event_helper.h |  11 +++
 2 files changed, 167 insertions(+), 23 deletions(-)

diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index 95dc4e6..9719ab4 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -95,6 +95,39 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id)
 	return &(em_conf->eventdev_config[i]);
 }
 
+static inline bool
+eh_dev_has_rx_internal_port(uint8_t eventdev_id)
+{
+	int j;
+	bool flag = true;
+
+	RTE_ETH_FOREACH_DEV(j) {
+		uint32_t caps = 0;
+
+		rte_event_eth_rx_adapter_caps_get(eventdev_id, j, &caps);
+		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
+			flag = false;
+	}
+	return flag;
+}
+
+static inline bool
+eh_dev_has_tx_internal_port(uint8_t eventdev_id)
+{
+	int j;
+	bool flag = true;
+
+	RTE_ETH_FOREACH_DEV(j) {
+		uint32_t caps = 0;
+
+		rte_event_eth_tx_adapter_caps_get(eventdev_id, j, &caps);
+		if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
+			flag = false;
+	}
+	return flag;
+}
+
 static inline bool
 eh_dev_has_burst_mode(uint8_t dev_id)
 {
@@ -179,6 +212,42 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf)
 	return 0;
 }
 
+static void
+eh_do_capability_check(struct eventmode_conf *em_conf)
+{
+	struct eventdev_params *eventdev_config;
+	int all_internal_ports = 1;
+	uint32_t eventdev_id;
+	int i;
+
+	for (i = 0; i < em_conf->nb_eventdev; i++) {
+
+		/* Get the event dev conf */
+		eventdev_config = &(em_conf->eventdev_config[i]);
+		eventdev_id = eventdev_config->eventdev_id;
+
+		/* Check if event device has internal port for Rx & Tx */
+		if (eh_dev_has_rx_internal_port(eventdev_id) &&
+		    eh_dev_has_tx_internal_port(eventdev_id)) {
+			eventdev_config->all_internal_ports = 1;
+		} else {
+			all_internal_ports = 0;
+		}
+	}
+
+	/*
+	 * If Rx & Tx internal ports are supported by all event devices then
+	 * eth cores won't be required. Override the eth core mask requested
+	 * and decrement number of event queues by one as it won't be needed
+	 * for Tx.
+	 */
+	if (all_internal_ports) {
+		rte_bitmap_reset(em_conf->eth_core_mask);
+		for (i = 0; i < em_conf->nb_eventdev; i++)
+			em_conf->eventdev_config[i].nb_eventqueue--;
+	}
+}
+
 static int
 eh_set_default_conf_link(struct eventmode_conf *em_conf)
 {
@@ -250,7 +319,10 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf)
 	struct rx_adapter_connection_info *conn;
 	struct eventdev_params *eventdev_config;
 	struct rx_adapter_conf *adapter;
+	bool rx_internal_port = true;
 	bool single_ev_queue = false;
+	int nb_eventqueue;
+	uint32_t caps = 0;
 	int eventdev_id;
 	int nb_eth_dev;
 	int adapter_id;
@@ -280,14 +352,21 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf)
 	/* Set adapter conf */
 	adapter->eventdev_id = eventdev_id;
 	adapter->adapter_id = adapter_id;
-	adapter->rx_core_id = eh_get_next_eth_core(em_conf);
+
+	/*
+	 * If event device does not have internal ports for passing
+	 * packets then reserve one queue for Tx path
+	 */
+	nb_eventqueue = eventdev_config->all_internal_ports ?
+			eventdev_config->nb_eventqueue :
+			eventdev_config->nb_eventqueue - 1;
 
 	/*
 	 * Map all queues of eth device (port) to an event queue. If there
 	 * are more event queues than eth ports then create 1:1 mapping.
 	 * Otherwise map all eth ports to a single event queue.
 	 */
-	if (nb_eth_dev > eventdev_config->nb_eventqueue)
+	if (nb_eth_dev > nb_eventqueue)
 		single_ev_queue = true;
 
 	for (i = 0; i < nb_eth_dev; i++) {
@@ -309,11 +388,24 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf)
 		/* Add all eth queues eth port to event queue */
 		conn->ethdev_rx_qid = -1;
 
+		/* Get Rx adapter capabilities */
+		rte_event_eth_rx_adapter_caps_get(eventdev_id, i, &caps);
+		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
+			rx_internal_port = false;
+
 		/* Update no of connections */
 		adapter->nb_connections++;
 
 	}
 
+	if (rx_internal_port) {
+		/* Rx core is not required */
+		adapter->rx_core_id = -1;
+	} else {
+		/* Rx core is required */
+		adapter->rx_core_id = eh_get_next_eth_core(em_conf);
+	}
+
 	/* We have setup one adapter */
 	em_conf->nb_rx_adapter = 1;
 
@@ -326,6 +418,8 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf)
 	struct tx_adapter_connection_info *conn;
 	struct eventdev_params *eventdev_config;
 	struct tx_adapter_conf *tx_adapter;
+	bool tx_internal_port = true;
+	uint32_t caps = 0;
 	int eventdev_id;
 	int adapter_id;
 	int nb_eth_dev;
@@ -359,18 +453,6 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf)
 	tx_adapter->eventdev_id = eventdev_id;
 	tx_adapter->adapter_id = adapter_id;
 
-	/* TODO: Tx core is required only when internal port is not present */
-	tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf);
-
-	/*
-	 * Application uses one event queue per adapter for submitting
-	 * packets for Tx. Reserve the last queue available and decrement
-	 * the total available event queues for this
-	 */
-
-	/* Queue numbers start at 0 */
-	tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1;
-
 	/*
 	 * Map all Tx queues of the eth device (port) to the event device.
 	 */
@@ -400,10 +482,30 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf)
 		/* Add all eth tx queues to adapter */
 		conn->ethdev_tx_qid = -1;
 
+		/* Get Tx adapter capabilities */
+		rte_event_eth_tx_adapter_caps_get(eventdev_id, i, &caps);
+		if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
+			tx_internal_port = false;
+
 		/* Update no of connections */
 		tx_adapter->nb_connections++;
 
 	}
 
+	if (tx_internal_port) {
+		/* Tx core is not required */
+		tx_adapter->tx_core_id = -1;
+	} else {
+		/* Tx core is required */
+		tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf);
+
+		/*
+		 * Use one event queue per adapter for submitting packets
+		 * for Tx. Reserving the last queue available
+		 */
+		/* Queue numbers start at 0 */
+		tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1;
+	}
+
 	/* We have setup one adapter */
 	em_conf->nb_tx_adapter = 1;
 
 	return 0;
@@ -424,6 +526,9 @@ eh_validate_conf(struct eventmode_conf *em_conf)
 		return ret;
 	}
 
+	/* Perform capability check for the selected event devices */
+	eh_do_capability_check(em_conf);
+
 	/*
 	 * Check if links are specified. Else generate a default config for
 	 * the event ports used.
@@ -529,11 +634,13 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf)
 				eventdev_config->ev_queue_mode;
 			/*
 			 * All queues need to be set with sched_type as
-			 * schedule type for the application stage. One queue
-			 * would be reserved for the final eth tx stage. This
-			 * will be an atomic queue.
+			 * schedule type for the application stage. One
+			 * queue would be reserved for the final eth tx
+			 * stage if event device does not have internal
+			 * ports. This will be an atomic queue.
 			 */
-			if (j == nb_eventqueue-1) {
+			if (!eventdev_config->all_internal_ports &&
+			    j == nb_eventqueue-1) {
 				eventq_conf.schedule_type =
 					RTE_SCHED_TYPE_ATOMIC;
 			} else {
@@ -847,6 +954,12 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf,
 
 	/* Populate the curr_conf with the capabilities */
 
+	/* Check for Tx internal port */
+	if (eh_dev_has_tx_internal_port(eventdev_id))
+		curr_conf.cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT;
+	else
+		curr_conf.cap.tx_internal_port = EH_TX_TYPE_NO_INTERNAL_PORT;
+
 	/* Check for burst mode */
 	if (eh_dev_has_burst_mode(eventdev_id))
 		curr_conf.cap.burst = EH_RX_TYPE_BURST;
@@ -1018,6 +1131,16 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf,
 		}
 	}
 
+	/*
+	 * Check if Tx core is assigned. If Tx core is not assigned then
+	 * the adapter has internal port for submitting Tx packets and
+	 * Tx event queue & port setup is not required
+	 */
+	if (adapter->tx_core_id == (uint32_t) (-1)) {
+		/* Internal port is present */
+		goto skip_tx_queue_port_setup;
+	}
+
 	/* Setup Tx queue & port */
 
 	/* Get event port used by the adapter */
@@ -1057,6 +1180,7 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf,
 
 	rte_service_set_runstate_mapped_check(service_id, 0);
 
+skip_tx_queue_port_setup:
 	/* Start adapter */
 	ret = rte_event_eth_tx_adapter_start(adapter->adapter_id);
 	if (ret < 0) {
@@ -1141,13 +1265,22 @@ eh_display_rx_adapter_conf(struct eventmode_conf *em_conf)
 	for (i = 0; i < nb_rx_adapter; i++) {
 		adapter = &(em_conf->rx_adapter[i]);
-		EH_LOG_INFO(
-			"\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d"
-			"\tRx core: %-2d",
-			adapter->adapter_id,
-			adapter->nb_connections,
-			adapter->eventdev_id,
-			adapter->rx_core_id);
+		sprintf(print_buf,
+			"\tRx adapter ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d",
+			adapter->adapter_id,
+			adapter->nb_connections,
+			adapter->eventdev_id);
+		if (adapter->rx_core_id == (uint32_t)-1)
+			sprintf(print_buf + strlen(print_buf),
+				"\tRx core: %-2s", "[INTERNAL PORT]");
+		else if (adapter->rx_core_id == RTE_MAX_LCORE)
+			sprintf(print_buf + strlen(print_buf),
+				"\tRx core: %-2s", "[NONE]");
+		else
+			sprintf(print_buf + strlen(print_buf),
+				"\tRx core: %-2d", adapter->rx_core_id);
+
+		EH_LOG_INFO("%s", print_buf);
 
 		for (j = 0; j < adapter->nb_connections; j++) {
 			conn = &(adapter->conn[j]);
diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index 31a158e..15a7bd6 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -66,12 +66,21 @@ enum eh_rx_types {
 	EH_RX_TYPE_BURST
 };
 
+/**
+ * Event mode packet tx types
+ */
+enum eh_tx_types {
+	EH_TX_TYPE_INTERNAL_PORT = 0,
+	EH_TX_TYPE_NO_INTERNAL_PORT
+};
+
 /* Event dev params */
 struct eventdev_params {
 	uint8_t eventdev_id;
 	uint8_t nb_eventqueue;
 	uint8_t nb_eventport;
 	uint8_t ev_queue_mode;
+	uint8_t all_internal_ports;
 };
 
 /**
@@ -183,6 +192,8 @@ struct eh_app_worker_params {
 		struct {
 			uint64_t burst : 1;
 			/**< Specify status of rx type burst */
+			uint64_t tx_internal_port : 1;
+			/**< Specify whether tx internal port is available */
 		};
 		uint64_t u64;
 	} cap;
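
For readers following the series, the whole patch hinges on the two adapter
capability queries used above. Below is a minimal standalone sketch (not part
of the patch): dev_has_all_internal_ports() is a hypothetical helper name,
and the sketch assumes rte_eal_init() and ethdev/eventdev setup have already
run. Unlike eh_dev_has_rx_internal_port()/eh_dev_has_tx_internal_port(),
which scan every port before answering, it returns early on the first port
lacking the capability.

#include <stdbool.h>

#include <rte_ethdev.h>
#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>

/*
 * Hypothetical helper (illustration only): check whether every eth port
 * supports internal ports on both the Rx and Tx adapters of the given
 * event device.
 */
static bool
dev_has_all_internal_ports(uint8_t eventdev_id)
{
	uint16_t port_id;
	uint32_t caps;

	RTE_ETH_FOREACH_DEV(port_id) {
		caps = 0;
		rte_event_eth_rx_adapter_caps_get(eventdev_id, port_id, &caps);
		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
			return false;

		caps = 0;
		rte_event_eth_tx_adapter_caps_get(eventdev_id, port_id, &caps);
		if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
			return false;
	}

	return true;
}

When such a check passes, the application can skip reserving eth cores for
the Rx/Tx adapters and drop the extra atomic Tx event queue, which is exactly
what eh_do_capability_check() does in the patch.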