From patchwork Wed Jun 5 09:31:39 2024
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 140748
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Matan Azrad
Subject: [PATCH v2 1/3] net/mlx5: add match with Tx queue item
Date: Wed, 5 Jun 2024 17:31:39 +0800
Message-ID: <20240605093141.1826221-1-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240531035034.1731943-1-suanmingm@nvidia.com>
References: <20240531035034.1731943-1-suanmingm@nvidia.com>
With the RTE_FLOW_ITEM_TYPE_TX_QUEUE item, users can set a Tx queue
index and create a flow rule matching on that queue index.

This commit adds match with the RTE_FLOW_ITEM_TYPE_TX_QUEUE item.

Signed-off-by: Suanming Mou
Acked-by: Dariusz Sosnowski
---
 doc/guides/nics/features/mlx5.ini      |  1 +
 doc/guides/rel_notes/release_24_07.rst |  4 ++
 drivers/net/mlx5/hws/mlx5dr_definer.c  | 50 +++++++++++++++++++
 drivers/net/mlx5/mlx5_flow.h           | 58 ++++++++++++++++++++++
 drivers/net/mlx5/mlx5_flow_dv.c        | 69 ++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_flow_hw.c        |  1 +
 6 files changed, 183 insertions(+)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 81a7067cc3..056e04275b 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -92,6 +92,7 @@ quota = Y
 random = Y
 tag = Y
 tcp = Y
+tx_queue = Y
 udp = Y
 vlan = Y
 vxlan = Y
diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index ffbe9ce051..46efc04eac 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -81,6 +81,10 @@ New Features
 
   * Added SSE/NEON vector datapath.
 
+* **Updated NVIDIA mlx5 driver.**
+
+  * Added match with Tx queue.
+
 
 Removed Items
 -------------
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index dabfac8abc..bc128c7b99 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -6,6 +6,7 @@
 #define GTP_PDU_SC 0x85
 #define BAD_PORT 0xBAD
+#define BAD_SQN 0xBAD
 #define ETH_TYPE_IPV4_VXLAN 0x0800
 #define ETH_TYPE_IPV6_VXLAN 0x86DD
 #define UDP_VXLAN_PORT 4789
@@ -878,6 +879,22 @@ mlx5dr_definer_vxlan_gpe_rsvd0_set(struct mlx5dr_definer_fc *fc,
 	DR_SET(tag, rsvd0, fc->byte_off, fc->bit_off, fc->bit_mask);
 }
 
+static void
+mlx5dr_definer_tx_queue_set(struct mlx5dr_definer_fc *fc,
+			    const void *item_spec,
+			    uint8_t *tag)
+{
+	const struct rte_flow_item_tx_queue *v = item_spec;
+	uint32_t sqn = 0;
+	int ret;
+
+	ret = flow_hw_conv_sqn(fc->extra_data, v->tx_queue, &sqn);
+	if (unlikely(ret))
+		sqn = BAD_SQN;
+
+	DR_SET(tag, sqn, fc->byte_off, fc->bit_off, fc->bit_mask);
+}
+
 static int
 mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 			     struct rte_flow_item *item,
@@ -1850,6 +1867,35 @@ mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd,
 	return 0;
 }
 
+static int
+mlx5dr_definer_conv_item_tx_queue(struct mlx5dr_definer_conv_data *cd,
+				  struct rte_flow_item *item,
+				  int item_idx)
+{
+	const struct rte_flow_item_tx_queue *m = item->mask;
+	struct mlx5dr_definer_fc *fc;
+
+	if (!m)
+		return 0;
+
+	if (m->tx_queue) {
+		fc = &cd->fc[MLX5DR_DEFINER_FNAME_SOURCE_QP];
+		fc->item_idx = item_idx;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		fc->tag_set = &mlx5dr_definer_tx_queue_set;
+		/* Use extra_data to save DPDK port_id. */
+		fc->extra_data = flow_hw_get_port_id(cd->ctx);
+		if (fc->extra_data == UINT16_MAX) {
+			DR_LOG(ERR, "Invalid port for item tx_queue");
+			rte_errno = EINVAL;
+			return rte_errno;
+		}
+		DR_CALC_SET_HDR(fc, source_qp_gvmi, source_qp);
+	}
+
+	return 0;
+}
+
 static int
 mlx5dr_definer_conv_item_sq(struct mlx5dr_definer_conv_data *cd,
 			    struct rte_flow_item *item,
@@ -3150,6 +3196,10 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 			ret = mlx5dr_definer_conv_item_vxlan(&cd, items, i);
 			item_flags |= MLX5_FLOW_LAYER_VXLAN;
 			break;
+		case RTE_FLOW_ITEM_TYPE_TX_QUEUE:
+			ret = mlx5dr_definer_conv_item_tx_queue(&cd, items, i);
+			item_flags |= MLX5_FLOW_ITEM_SQ;
+			break;
 		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
 			ret = mlx5dr_definer_conv_item_sq(&cd, items, i);
 			item_flags |= MLX5_FLOW_ITEM_SQ;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index dd5b30a8a4..357267e0c3 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -19,6 +19,7 @@
 #include "mlx5.h"
 #include "rte_pmd_mlx5.h"
 #include "hws/mlx5dr.h"
+#include "mlx5_tx.h"
 
 /* E-Switch Manager port, used for rte_flow_item_port_id. */
 #define MLX5_PORT_ESW_MGR UINT32_MAX
@@ -1945,6 +1946,63 @@ struct flow_hw_port_info {
 
 extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS];
 
+/*
+ * Get sqn for given tx_queue.
+ * Used in HWS rule creation.
+ */
+static __rte_always_inline int
+flow_hw_get_sqn(struct rte_eth_dev *dev, uint16_t tx_queue, uint32_t *sqn)
+{
+	struct mlx5_txq_ctrl *txq;
+
+	/* Means Tx queue is PF0. */
+	if (tx_queue == UINT16_MAX) {
+		*sqn = 0;
+		return 0;
+	}
+	txq = mlx5_txq_get(dev, tx_queue);
+	if (unlikely(!txq))
+		return -ENOENT;
+	*sqn = mlx5_txq_get_sqn(txq);
+	mlx5_txq_release(dev, tx_queue);
+	return 0;
+}
+
+/*
+ * Convert sqn for given rte_eth_dev port.
+ * Used in HWS rule creation.
+ */
+static __rte_always_inline int
+flow_hw_conv_sqn(uint16_t port_id, uint16_t tx_queue, uint32_t *sqn)
+{
+	if (port_id >= RTE_MAX_ETHPORTS)
+		return -EINVAL;
+	return flow_hw_get_sqn(&rte_eth_devices[port_id], tx_queue, sqn);
+}
+
+/*
+ * Get given rte_eth_dev port_id.
+ * Used in HWS rule creation.
+ */
+static __rte_always_inline uint16_t
+flow_hw_get_port_id(void *dr_ctx)
+{
+#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
+	uint16_t port_id;
+
+	MLX5_ETH_FOREACH_DEV(port_id, NULL) {
+		struct mlx5_priv *priv;
+
+		priv = rte_eth_devices[port_id].data->dev_private;
+		if (priv->dr_ctx == dr_ctx)
+			return port_id;
+	}
+#else
+	RTE_SET_USED(dr_ctx);
+#endif
+	return UINT16_MAX;
+}
+
 /*
  * Get metadata match tag and mask for given rte_eth_dev port.
  * Used in HWS rule creation.
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 06f5427abf..14cdd4468d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8025,6 +8025,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = MLX5_FLOW_ITEM_TAG;
 			break;
+		case RTE_FLOW_ITEM_TYPE_TX_QUEUE:
 		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
 			last_item = MLX5_FLOW_ITEM_SQ;
 			break;
@@ -12199,6 +12200,52 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev,
 	return counter;
 }
 
+/**
+ * Add Tx queue matcher
+ *
+ * @param[in] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] key_type
+ *   Set flow matcher mask or value.
+ *
+ * @return
+ *   0 on success, otherwise -errno and errno is set.
+ */
+static int
+flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, void *key,
+				const struct rte_flow_item *item, uint32_t key_type)
+{
+	const struct rte_flow_item_tx_queue *queue_m;
+	const struct rte_flow_item_tx_queue *queue_v;
+	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+	uint32_t tx_queue;
+	uint32_t sqn = 0;
+	int ret;
+
+	MLX5_ITEM_UPDATE(item, key_type, queue_v, queue_m, &rte_flow_item_tx_queue_mask);
+	if (!queue_m || !queue_v)
+		return -EINVAL;
+	if (key_type & MLX5_SET_MATCHER_V) {
+		tx_queue = queue_v->tx_queue;
+		if (key_type == MLX5_SET_MATCHER_SW_V)
+			tx_queue &= queue_m->tx_queue;
+		ret = flow_hw_get_sqn(dev, tx_queue, &sqn);
+		if (unlikely(ret))
+			return -ret;
+	} else {
+		/* Due to the tx_queue to sqn conversion, only fully masked values are supported. */
+		if (queue_m->tx_queue != rte_flow_item_tx_queue_mask.tx_queue)
+			return -EINVAL;
+		sqn = UINT32_MAX;
+	}
+	MLX5_SET(fte_match_set_misc, misc_v, source_sqn, sqn);
+	return 0;
+}
+
 /**
  * Add SQ matcher
  *
@@ -14169,6 +14216,14 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 			flow_dv_translate_mlx5_item_tag(dev, key, items, key_type);
 			last_item = MLX5_FLOW_ITEM_TAG;
 			break;
+		case RTE_FLOW_ITEM_TYPE_TX_QUEUE:
+			ret = flow_dv_translate_item_tx_queue(dev, key, items, key_type);
+			if (ret)
+				return rte_flow_error_set(error, -ret,
+					RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					"invalid tx_queue item");
+			last_item = MLX5_FLOW_ITEM_SQ;
+			break;
 		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
 			flow_dv_translate_item_sq(key, items, key_type);
 			last_item = MLX5_FLOW_ITEM_SQ;
@@ -14399,6 +14454,20 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 			wks.last_item = tunnel ?
				MLX5_FLOW_ITEM_INNER_FLEX :
				MLX5_FLOW_ITEM_OUTER_FLEX;
 			break;
+		case RTE_FLOW_ITEM_TYPE_TX_QUEUE:
+			ret = flow_dv_translate_item_tx_queue(dev, match_value, items,
+							      MLX5_SET_MATCHER_SW_V);
+			if (ret)
+				return rte_flow_error_set(error, -ret,
+					RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					"invalid tx_queue item spec");
+			ret = flow_dv_translate_item_tx_queue(dev, match_mask, items,
+							      MLX5_SET_MATCHER_SW_M);
+			if (ret)
+				return rte_flow_error_set(error, -ret,
+					RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					"invalid tx_queue item mask");
+			break;
 		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
 			flow_dv_translate_item_sq(match_value, items,
 						  MLX5_SET_MATCHER_SW_V);
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 63194935a3..cf698b3ec8 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -8018,6 +8018,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_MPLS:
 		case RTE_FLOW_ITEM_TYPE_GENEVE:
 		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
+		case RTE_FLOW_ITEM_TYPE_TX_QUEUE:
 		case RTE_FLOW_ITEM_TYPE_GRE:
 		case RTE_FLOW_ITEM_TYPE_GRE_KEY:
 		case RTE_FLOW_ITEM_TYPE_GRE_OPTION:
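
A minimal usage sketch in C for the capability added by this patch
(illustration only, not part of the patch): create a rule that matches
packets sent from a given Tx queue and counts them. The helper name
txq_match_flow() and the transfer attribute are assumptions that depend
on the deployment; the item, its default mask and the flow API come
from rte_flow.h. Note that the DV translation above accepts only a
fully masked tx_queue value because of the tx_queue to sqn conversion.

    #include <rte_flow.h>

    /* Sketch: match egress traffic of Tx queue `txq` on `port_id`
     * and count it. Error handling is trimmed for brevity. */
    static struct rte_flow *
    txq_match_flow(uint16_t port_id, uint16_t txq)
    {
            struct rte_flow_attr attr = { .transfer = 1 };
            struct rte_flow_item_tx_queue spec = { .tx_queue = txq };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_TX_QUEUE, .spec = &spec,
                      .mask = &rte_flow_item_tx_queue_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_COUNT },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            struct rte_flow_error error;

            return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }
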
From patchwork Wed Jun 5 09:31:40 2024
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 140749
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Matan Azrad
Subject: [PATCH v2 2/3] net/mlx5: rename external Rx queue to external queue
Date: Wed, 5 Jun 2024 17:31:40 +0800
Message-ID: <20240605093141.1826221-2-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240605093141.1826221-1-suanmingm@nvidia.com>
References: <20240531035034.1731943-1-suanmingm@nvidia.com>
 <20240605093141.1826221-1-suanmingm@nvidia.com>
Since external Tx queues are about to be supported, rename the current
external Rx queue struct to a generic external queue struct, so that it
can be reused for both Rx and Tx.
Signed-off-by: Suanming Mou
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/linux/mlx5_os.c |  2 +-
 drivers/net/mlx5/mlx5.h          |  2 +-
 drivers/net/mlx5/mlx5_devx.c     |  2 +-
 drivers/net/mlx5/mlx5_rx.h       |  8 ++++----
 drivers/net/mlx5/mlx5_rxq.c      | 16 ++++++++--------
 5 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index de3df17108..99de52936a 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1216,7 +1216,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 */
 	if (mlx5_imported_pd_and_ctx(sh->cdev) && mlx5_devx_obj_ops_en(sh)) {
 		priv->ext_rxqs = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
-					     sizeof(struct mlx5_external_rxq) *
+					     sizeof(struct mlx5_external_q) *
 					     MLX5_MAX_EXT_RX_QUEUES, 0,
 					     SOCKET_ID_ANY);
 		if (priv->ext_rxqs == NULL) {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e2c22ffe97..e85308f6e0 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1882,7 +1882,7 @@ struct mlx5_priv {
 	/* RX/TX queues. */
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
-	struct mlx5_external_rxq *ext_rxqs; /* External RX queues array. */
+	struct mlx5_external_q *ext_rxqs; /* External RX queues array. */
 	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 9fa400fc48..cae9d578ab 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -673,7 +673,7 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 	}
 	for (i = 0; i != queues_n; ++i) {
 		if (mlx5_is_external_rxq(dev, queues[i])) {
-			struct mlx5_external_rxq *ext_rxq =
+			struct mlx5_external_q *ext_rxq =
 				mlx5_ext_rxq_get(dev, queues[i]);
 
 			rqt_attr->rq_list[i] = ext_rxq->hw_id;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index d008e4dd3a..decb14e708 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -186,7 +186,7 @@ struct mlx5_rxq_priv {
 };
 
 /* External RX queue descriptor. */
-struct mlx5_external_rxq {
+struct mlx5_external_q {
 	uint32_t hw_id; /* Queue index in the Hardware. */
 	RTE_ATOMIC(uint32_t) refcnt; /* Reference counter. */
 };
 
@@ -227,10 +227,10 @@ uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx);
-struct mlx5_external_rxq *mlx5_ext_rxq_ref(struct rte_eth_dev *dev,
+struct mlx5_external_q *mlx5_ext_rxq_ref(struct rte_eth_dev *dev,
 					   uint16_t idx);
 uint32_t mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
-struct mlx5_external_rxq *mlx5_ext_rxq_get(struct rte_eth_dev *dev,
+struct mlx5_external_q *mlx5_ext_rxq_get(struct rte_eth_dev *dev,
 					   uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
@@ -661,7 +661,7 @@ static __rte_always_inline bool
 mlx5_is_external_rxq(struct rte_eth_dev *dev, uint16_t queue_idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_external_rxq *rxq;
+	struct mlx5_external_q *rxq;
 
 	if (!priv->ext_rxqs || queue_idx < RTE_PMD_MLX5_EXTERNAL_RX_QUEUE_ID_MIN)
 		return false;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index f67aaa6178..d6c84b84e4 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2133,10 +2133,10 @@ mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
  * @return
  *   A pointer to the queue if it exists, NULL otherwise.
  */
-struct mlx5_external_rxq *
+struct mlx5_external_q *
 mlx5_ext_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx);
+	struct mlx5_external_q *rxq = mlx5_ext_rxq_get(dev, idx);
 
 	rte_atomic_fetch_add_explicit(&rxq->refcnt, 1, rte_memory_order_relaxed);
 	return rxq;
@@ -2156,7 +2156,7 @@ mlx5_ext_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
 uint32_t
 mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx);
+	struct mlx5_external_q *rxq = mlx5_ext_rxq_get(dev, idx);
 
 	return rte_atomic_fetch_sub_explicit(&rxq->refcnt, 1, rte_memory_order_relaxed) - 1;
 }
@@ -2172,7 +2172,7 @@ mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
  * @return
  *   A pointer to the queue if it exists, NULL otherwise.
  */
-struct mlx5_external_rxq *
+struct mlx5_external_q *
 mlx5_ext_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -2336,7 +2336,7 @@ int
 mlx5_ext_rxq_verify(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_external_rxq *rxq;
+	struct mlx5_external_q *rxq;
 	uint32_t i;
 	int ret = 0;
 
@@ -3206,7 +3206,7 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev)
 *   Pointer to concurrent external RxQ on success,
 *   NULL otherwise and rte_errno is set.
 */
-static struct mlx5_external_rxq *
+static struct mlx5_external_q *
 mlx5_external_rx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
 {
 	struct rte_eth_dev *dev;
@@ -3252,7 +3252,7 @@ int
 rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
 				      uint32_t hw_idx)
 {
-	struct mlx5_external_rxq *ext_rxq;
+	struct mlx5_external_q *ext_rxq;
 	uint32_t unmapped = 0;
 
 	ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx);
@@ -3284,7 +3284,7 @@ rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
 int
 rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx)
 {
-	struct mlx5_external_rxq *ext_rxq;
+	struct mlx5_external_q *ext_rxq;
 	uint32_t mapped = 1;
 
 	ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx);

From patchwork Wed Jun 5 09:31:41 2024
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 140747
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Matan Azrad
Subject: [PATCH v2 3/3] net/mlx5: add external Tx queue map and unmap
Date: Wed, 5 Jun 2024 17:31:41 +0800
Message-ID: <20240605093141.1826221-3-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240605093141.1826221-1-suanmingm@nvidia.com>
References: <20240531035034.1731943-1-suanmingm@nvidia.com>
 <20240605093141.1826221-1-suanmingm@nvidia.com>
To allow externally created Tx queues to be used with
RTE_FLOW_ITEM_TYPE_TX_QUEUE, this commit provides the map and unmap
functions that convert the DevX ID of an externally created SQ to a
DPDK flow item Tx queue ID.

Signed-off-by: Suanming Mou
Acked-by: Dariusz Sosnowski
---
v2: add feature and release notes.
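
A usage sketch (illustration only, not part of the patch): an
application that created an SQ through DevX maps the SQ's hardware
object ID to a flow-level queue index in the external range; the index
can then be used with RTE_FLOW_ITEM_TYPE_TX_QUEUE like any internal
queue index. The chosen index and the helper name are assumptions.

    #include <rte_pmd_mlx5.h>

    /* Sketch: `sq_hw_id` is the DevX object id of an SQ created outside
     * the PMD (assumed to exist). Map it to the first index of the
     * external Tx queue range. */
    static int
    map_external_sq(uint16_t port_id, uint32_t sq_hw_id, uint16_t *flow_idx)
    {
            uint16_t idx = MLX5_EXTERNAL_TX_QUEUE_ID_MIN;
            int ret;

            ret = rte_pmd_mlx5_external_tx_queue_id_map(port_id, idx, sq_hw_id);
            if (ret < 0)
                    return ret; /* rte_errno: EEXIST/EINVAL/ENODEV/ENOTSUP. */
            *flow_idx = idx;
            return 0;
    }
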
---
 doc/guides/nics/mlx5.rst               |   1 +
 doc/guides/rel_notes/release_24_07.rst |   1 +
 drivers/net/mlx5/linux/mlx5_os.c       |  12 +-
 drivers/net/mlx5/mlx5.c                |   5 +
 drivers/net/mlx5/mlx5.h                |   7 ++
 drivers/net/mlx5/mlx5_defs.h           |   3 +
 drivers/net/mlx5/mlx5_devx.c           |  40 +++++++
 drivers/net/mlx5/mlx5_devx.h           |   1 +
 drivers/net/mlx5/mlx5_ethdev.c         |   8 ++
 drivers/net/mlx5/mlx5_flow.h           |   6 +
 drivers/net/mlx5/mlx5_rx.h             |   6 -
 drivers/net/mlx5/mlx5_rxq.c            |  22 +---
 drivers/net/mlx5/mlx5_tx.h             |  25 ++++
 drivers/net/mlx5/mlx5_txq.c            | 152 +++++++++++++++++++++++++
 drivers/net/mlx5/rte_pmd_mlx5.h        |  48 ++++++++
 drivers/net/mlx5/version.map           |   3 +
 16 files changed, 314 insertions(+), 26 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index b5928d40b2..5cd41d3c7f 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -169,6 +169,7 @@ Features
 - Sub-Function.
 - Matching on represented port.
 - Matching on aggregated affinity.
+- Matching on external Tx queue.
 
 
 Limitations
diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index 46efc04eac..3a3257fcd5 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -84,6 +84,7 @@ New Features
 * **Updated NVIDIA mlx5 driver.**
 
   * Added match with Tx queue.
+  * Added match with external Tx queue.
 
 
 Removed Items
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 99de52936a..f887501a9b 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1224,7 +1224,16 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			err = ENOMEM;
 			goto error;
 		}
-		DRV_LOG(DEBUG, "External RxQ is supported.");
+		priv->ext_txqs = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
+					     sizeof(struct mlx5_external_q) *
+					     MLX5_MAX_EXT_TX_QUEUES, 0,
+					     SOCKET_ID_ANY);
+		if (priv->ext_txqs == NULL) {
+			DRV_LOG(ERR, "Fail to allocate external TxQ array.");
+			err = ENOMEM;
+			goto error;
+		}
+		DRV_LOG(DEBUG, "External queue is supported.");
 	}
 	priv->sh = sh;
 	priv->dev_port = spawn->phys_port;
@@ -1762,6 +1771,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (eth_dev && priv->flex_item_map)
 		mlx5_flex_item_port_cleanup(eth_dev);
 	mlx5_free(priv->ext_rxqs);
+	mlx5_free(priv->ext_txqs);
 	mlx5_free(priv);
 	if (eth_dev != NULL)
 		eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index d15302d00d..e41b1e82d7 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2436,6 +2436,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	if (ret)
 		DRV_LOG(WARNING, "port %u some Verbs Tx queue still remain",
 			dev->data->port_id);
+	ret = mlx5_ext_txq_verify(dev);
+	if (ret)
+		DRV_LOG(WARNING, "Port %u some external TxQ still remain.",
+			dev->data->port_id);
 	ret = mlx5_txq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some Tx queues still remain",
@@ -2447,6 +2451,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	if (priv->hrxqs)
 		mlx5_list_destroy(priv->hrxqs);
 	mlx5_free(priv->ext_rxqs);
+	mlx5_free(priv->ext_txqs);
 	sh->port[priv->dev_port - 1].nl_ih_port_id = RTE_MAX_ETHPORTS;
 	/*
 	 * The interrupt handler port id must be reset before priv is reset
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e85308f6e0..91ceceb34a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -381,6 +381,12 @@ struct mlx5_lb_ctx {
 	RTE_ATOMIC(uint16_t) refcnt; /* Reference count for representors. */
 };
 
+/* External queue descriptor. */
+struct mlx5_external_q {
+	uint32_t hw_id; /* Queue index in the Hardware. */
+	RTE_ATOMIC(uint32_t) refcnt; /* Reference counter. */
+};
+
 /* HW steering queue job descriptor type. */
 enum mlx5_hw_job_type {
 	MLX5_HW_Q_JOB_TYPE_CREATE, /* Flow create job type. */
@@ -1883,6 +1889,7 @@ struct mlx5_priv {
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
 	struct mlx5_external_q *ext_rxqs; /* External RX queues array. */
+	struct mlx5_external_q *ext_txqs; /* External TX queues array. */
 	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index dc5216cb24..9c454983be 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -183,6 +183,9 @@
 /* Maximum number of external Rx queues supported by rte_flow */
 #define MLX5_MAX_EXT_RX_QUEUES (UINT16_MAX - RTE_PMD_MLX5_EXTERNAL_RX_QUEUE_ID_MIN + 1)
 
+/* Maximum number of external Tx queues supported by rte_flow */
+#define MLX5_MAX_EXT_TX_QUEUES (UINT16_MAX - MLX5_EXTERNAL_TX_QUEUE_ID_MIN + 1)
+
 /*
  * Linux definition of static_assert is found in /usr/include/assert.h.
  * Windows does not require a redefinition.
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index cae9d578ab..f23eb1def6 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -27,6 +27,46 @@
 #include "mlx5_flow.h"
 #include "mlx5_flow_os.h"
 
+/**
+ * Validate whether the given external queue's port is valid.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ *
+ * @return
+ *   0 on success, non-0 otherwise.
+ */
+int
+mlx5_devx_extq_port_validate(uint16_t port_id)
+{
+	struct rte_eth_dev *dev;
+	struct mlx5_priv *priv;
+
+	if (rte_eth_dev_is_valid_port(port_id) < 0) {
+		DRV_LOG(ERR, "There is no Ethernet device for port %u.",
+			port_id);
+		rte_errno = ENODEV;
+		return -rte_errno;
+	}
+	dev = &rte_eth_devices[port_id];
+	priv = dev->data->dev_private;
+	if (!mlx5_imported_pd_and_ctx(priv->sh->cdev)) {
+		DRV_LOG(ERR, "Port %u "
+			"external queue isn't supported on local PD and CTX.",
+			port_id);
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+	if (!mlx5_devx_obj_ops_en(priv->sh)) {
+		DRV_LOG(ERR,
+			"Port %u external queue isn't supported by Verbs API.",
+			port_id);
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+	return 0;
+}
+
 /**
  * Modify RQ vlan stripping offload
  *
diff --git a/drivers/net/mlx5/mlx5_devx.h b/drivers/net/mlx5/mlx5_devx.h
index ebd1da455a..4ab8cfbd22 100644
--- a/drivers/net/mlx5/mlx5_devx.h
+++ b/drivers/net/mlx5/mlx5_devx.h
@@ -12,6 +12,7 @@ int mlx5_txq_devx_modify(struct mlx5_txq_obj *obj,
 			 enum mlx5_txq_modify_type type, uint8_t dev_port);
 void mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj);
 int mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type);
+int mlx5_devx_extq_port_validate(uint16_t port_id);
 
 extern struct mlx5_obj_ops devx_obj_ops;
 
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index aea799341c..1b721cda5e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -123,6 +123,14 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 			dev->data->port_id, priv->txqs_n, txqs_n);
 		priv->txqs_n = txqs_n;
 	}
+	if (priv->ext_txqs && txqs_n >= MLX5_EXTERNAL_TX_QUEUE_ID_MIN) {
+		DRV_LOG(ERR, "port %u cannot handle this many Tx queues (%u), "
+			"the maximal number of internal Tx queues is %u",
+			dev->data->port_id, txqs_n,
+			MLX5_EXTERNAL_TX_QUEUE_ID_MIN - 1);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
 	if (rxqs_n > priv->sh->dev_cap.ind_table_max_size) {
 		DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u)",
 			dev->data->port_id, rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 357267e0c3..ba75b99139 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1954,12 +1954,18 @@ static __rte_always_inline int
 flow_hw_get_sqn(struct rte_eth_dev *dev, uint16_t tx_queue, uint32_t *sqn)
 {
 	struct mlx5_txq_ctrl *txq;
+	struct mlx5_external_q *ext_txq;
 
 	/* Means Tx queue is PF0. */
 	if (tx_queue == UINT16_MAX) {
 		*sqn = 0;
 		return 0;
 	}
+	if (mlx5_is_external_txq(dev, tx_queue)) {
+		ext_txq = mlx5_ext_txq_get(dev, tx_queue);
+		*sqn = ext_txq->hw_id;
+		return 0;
+	}
 	txq = mlx5_txq_get(dev, tx_queue);
 	if (unlikely(!txq))
 		return -ENOENT;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index decb14e708..1485556d89 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -185,12 +185,6 @@ struct mlx5_rxq_priv {
 	uint32_t lwm_devx_subscribed:1;
 };
 
-/* External RX queue descriptor. */
-struct mlx5_external_q {
-	uint32_t hw_id; /* Queue index in the Hardware. */
-	RTE_ATOMIC(uint32_t) refcnt; /* Reference counter. */
-};
-
 /* mlx5_rxq.c */
 
 extern uint8_t rss_hash_default_key[];
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d6c84b84e4..f13fc3b353 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -3211,6 +3211,7 @@ mlx5_external_rx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
 {
 	struct rte_eth_dev *dev;
 	struct mlx5_priv *priv;
+	int ret;
 
 	if (dpdk_idx < RTE_PMD_MLX5_EXTERNAL_RX_QUEUE_ID_MIN) {
 		DRV_LOG(ERR, "Queue index %u should be in range: [%u, %u].",
@@ -3218,28 +3219,11 @@ mlx5_external_rx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
 		rte_errno = EINVAL;
 		return NULL;
 	}
-	if (rte_eth_dev_is_valid_port(port_id) < 0) {
-		DRV_LOG(ERR, "There is no Ethernet device for port %u.",
-			port_id);
-		rte_errno = ENODEV;
+	ret = mlx5_devx_extq_port_validate(port_id);
+	if (unlikely(ret))
 		return NULL;
-	}
 	dev = &rte_eth_devices[port_id];
 	priv = dev->data->dev_private;
-	if (!mlx5_imported_pd_and_ctx(priv->sh->cdev)) {
-		DRV_LOG(ERR, "Port %u "
-			"external RxQ isn't supported on local PD and CTX.",
-			port_id);
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
-	if (!mlx5_devx_obj_ops_en(priv->sh)) {
-		DRV_LOG(ERR,
-			"Port %u external RxQ isn't supported by Verbs API.",
-			port_id);
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
 	/*
 	 * When user configures remote PD and CTX and device creates RxQ by
 	 * DevX, external RxQs array is allocated.
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 0d77ff89de..983913faa2 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -227,6 +227,8 @@ void mlx5_txq_dynf_timestamp_set(struct rte_eth_dev *dev);
 int mlx5_count_aggr_ports(struct rte_eth_dev *dev);
 int mlx5_map_aggr_tx_affinity(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 			      uint8_t affinity);
+int mlx5_ext_txq_verify(struct rte_eth_dev *dev);
+struct mlx5_external_q *mlx5_ext_txq_get(struct rte_eth_dev *dev, uint16_t idx);
 
 /* mlx5_tx.c */
 
@@ -3788,4 +3790,27 @@ mlx5_tx_burst_tmpl(struct mlx5_txq_data *__rte_restrict txq,
 	return loc.pkts_sent;
 }
 
+/**
+ * Check whether given TxQ is external.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param queue_idx
+ *   Tx queue index.
+ *
+ * @return
+ *   True if it is an external TxQ, otherwise false.
+ */
+static __rte_always_inline bool
+mlx5_is_external_txq(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_external_q *txq;
+
+	if (!priv->ext_txqs || queue_idx < MLX5_EXTERNAL_TX_QUEUE_ID_MIN)
+		return false;
+	txq = &priv->ext_txqs[queue_idx - MLX5_EXTERNAL_TX_QUEUE_ID_MIN];
+	return !!rte_atomic_load_explicit(&txq->refcnt, rte_memory_order_relaxed);
+}
+
 #endif /* RTE_PMD_MLX5_TX_H_ */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index da4236f99a..8eb1ae1f03 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -27,6 +27,7 @@
 #include "mlx5_tx.h"
 #include "mlx5_rxtx.h"
 #include "mlx5_autoconf.h"
+#include "mlx5_devx.h"
 #include "rte_pmd_mlx5.h"
 #include "mlx5_flow.h"
 
@@ -1183,6 +1184,57 @@ mlx5_txq_get(struct rte_eth_dev *dev, uint16_t idx)
 	return ctrl;
 }
 
+/**
+ * Get an external Tx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External Tx queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_external_q *
+mlx5_ext_txq_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	MLX5_ASSERT(mlx5_is_external_txq(dev, idx));
+	return &priv->ext_txqs[idx - MLX5_EXTERNAL_TX_QUEUE_ID_MIN];
+}
+
+/**
+ * Verify the external Tx Queue list is empty.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The number of objects not released.
+ */
+int
+mlx5_ext_txq_verify(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_external_q *txq;
+	uint32_t i;
+	int ret = 0;
+
+	if (priv->ext_txqs == NULL)
+		return 0;
+
+	for (i = MLX5_EXTERNAL_TX_QUEUE_ID_MIN; i <= UINT16_MAX; ++i) {
+		txq = mlx5_ext_txq_get(dev, i);
+		if (txq->refcnt < 2)
+			continue;
+		DRV_LOG(DEBUG, "Port %u external TxQ %u still referenced.",
+			dev->data->port_id, i);
+		++ret;
+	}
+	return ret;
+}
+
 /**
  * Release a Tx queue.
  *
@@ -1416,3 +1468,103 @@ int mlx5_map_aggr_tx_affinity(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 	txq_ctrl->txq.tx_aggr_affinity = affinity;
 	return 0;
 }
+
+/**
+ * Validate given external TxQ rte_flow index, and get pointer to concurrent
+ * external TxQ object to map/unmap.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Tx Queue index in rte_flow.
+ *
+ * @return
+ *   Pointer to concurrent external TxQ on success,
+ *   NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_external_q *
+mlx5_external_tx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
+{
+	struct rte_eth_dev *dev;
+	struct mlx5_priv *priv;
+	int ret;
+
+	if (dpdk_idx < MLX5_EXTERNAL_TX_QUEUE_ID_MIN) {
+		DRV_LOG(ERR, "Queue index %u should be in range: [%u, %u].",
+			dpdk_idx, MLX5_EXTERNAL_TX_QUEUE_ID_MIN, UINT16_MAX);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	ret = mlx5_devx_extq_port_validate(port_id);
+	if (unlikely(ret))
+		return NULL;
+	dev = &rte_eth_devices[port_id];
+	priv = dev->data->dev_private;
+	/*
+	 * When user configures remote PD and CTX and device creates TxQ by
+	 * DevX, external TxQs array is allocated.
+	 */
+	MLX5_ASSERT(priv->ext_txqs != NULL);
+	return &priv->ext_txqs[dpdk_idx - MLX5_EXTERNAL_TX_QUEUE_ID_MIN];
+}
+
+int
+rte_pmd_mlx5_external_tx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
+				      uint32_t hw_idx)
+{
+	struct mlx5_external_q *ext_txq;
+	uint32_t unmapped = 0;
+
+	ext_txq = mlx5_external_tx_queue_get_validate(port_id, dpdk_idx);
+	if (ext_txq == NULL)
+		return -rte_errno;
+	if (!rte_atomic_compare_exchange_strong_explicit(&ext_txq->refcnt, &unmapped, 1,
+	    rte_memory_order_relaxed, rte_memory_order_relaxed)) {
+		if (ext_txq->hw_id != hw_idx) {
+			DRV_LOG(ERR, "Port %u external TxQ index %u "
+				"is already mapped to HW index (requested is "
+				"%u, existing is %u).",
+				port_id, dpdk_idx, hw_idx, ext_txq->hw_id);
+			rte_errno = EEXIST;
+			return -rte_errno;
+		}
+		DRV_LOG(WARNING, "Port %u external TxQ index %u "
+			"is already mapped to the requested HW index (%u)",
+			port_id, dpdk_idx, hw_idx);
+	} else {
+		ext_txq->hw_id = hw_idx;
+		DRV_LOG(DEBUG, "Port %u external TxQ index %u "
+			"is successfully mapped to the requested HW index (%u)",
+			port_id, dpdk_idx, hw_idx);
+	}
+	return 0;
+}
+
+int
+rte_pmd_mlx5_external_tx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx)
+{
+	struct mlx5_external_q *ext_txq;
+	uint32_t mapped = 1;
+
+	ext_txq = mlx5_external_tx_queue_get_validate(port_id, dpdk_idx);
+	if (ext_txq == NULL)
+		return -rte_errno;
+	if (ext_txq->refcnt > 1) {
+		DRV_LOG(ERR, "Port %u external TxQ index %u still referenced.",
+			port_id, dpdk_idx);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (!rte_atomic_compare_exchange_strong_explicit(&ext_txq->refcnt, &mapped, 0,
+	    rte_memory_order_relaxed, rte_memory_order_relaxed)) {
+		DRV_LOG(ERR, "Port %u external TxQ index %u doesn't exist.",
+			port_id, dpdk_idx);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG,
+		"Port %u external TxQ index %u is successfully unmapped.",
+		port_id, dpdk_idx);
+	return 0;
+}
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h
index 004be0eea1..359e4192c8 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5.h
+++ b/drivers/net/mlx5/rte_pmd_mlx5.h
@@ -68,6 +68,11 @@ int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains);
 */
 #define RTE_PMD_MLX5_EXTERNAL_RX_QUEUE_ID_MIN (UINT16_MAX - 1000 + 1)
 
+/**
+ * External Tx queue rte_flow index minimal value.
+ */
+#define MLX5_EXTERNAL_TX_QUEUE_ID_MIN (UINT16_MAX - 1000 + 1)
+
 /**
  * Tag level to set the linear hash index.
  */
@@ -116,6 +121,49 @@ __rte_experimental
 int rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id,
 					    uint16_t dpdk_idx);
 
+/**
+ * Update mapping between rte_flow Tx queue index (16 bits) and HW queue index
+ * (32 bits) for TxQs which are created outside the PMD.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ * @param[in] hw_idx
+ *   Queue index in hardware.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EEXIST - a mapping with the same rte_flow index already exists.
+ *   - EINVAL - invalid rte_flow index, out of range.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ *   - ENOTSUP - the port doesn't support external TxQ.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_tx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
+					  uint32_t hw_idx);
+
+/**
+ * Remove mapping between rte_flow Tx queue index (16 bits) and HW queue index
+ * (32 bits) for TxQs which are created outside the PMD.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EINVAL - invalid index, out of range, still referenced or doesn't exist.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ *   - ENOTSUP - the port doesn't support external TxQ.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_tx_queue_id_unmap(uint16_t port_id,
+					    uint16_t dpdk_idx);
+
 /**
  * The rate of the host port shaper will be updated directly at the next
  * available descriptor threshold event to the rate that comes with this flag set;
diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
index 8fb0e07303..8a78d14786 100644
--- a/drivers/net/mlx5/version.map
+++ b/drivers/net/mlx5/version.map
@@ -20,4 +20,7 @@ EXPERIMENTAL {
 	# added in 24.03
 	rte_pmd_mlx5_create_geneve_tlv_parser;
 	rte_pmd_mlx5_destroy_geneve_tlv_parser;
+	# added in 24.07
+	rte_pmd_mlx5_external_tx_queue_id_map;
+	rte_pmd_mlx5_external_tx_queue_id_unmap;
 };
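
Closing the series, an end-to-end sketch (illustration only, not part
of the patches): map an external SQ, match on it with the Tx queue item
from patch 1/3, then tear down in reverse order. Here txq_match_flow()
refers to the sketch shown after patch 1/3 and sq_hw_id is assumed to
be a DevX SQ object id. Unmap must come after all referencing flows are
destroyed, since it fails with EINVAL while the queue is still
referenced.

    #include <rte_flow.h>
    #include <rte_pmd_mlx5.h>

    static int
    external_txq_match_lifecycle(uint16_t port_id, uint32_t sq_hw_id)
    {
            uint16_t idx = MLX5_EXTERNAL_TX_QUEUE_ID_MIN;
            struct rte_flow *flow;
            int ret;

            ret = rte_pmd_mlx5_external_tx_queue_id_map(port_id, idx, sq_hw_id);
            if (ret < 0)
                    return ret;
            flow = txq_match_flow(port_id, idx); /* Sketch after patch 1/3. */
            if (flow == NULL) {
                    rte_pmd_mlx5_external_tx_queue_id_unmap(port_id, idx);
                    return -1;
            }
            /* Traffic sent from the external SQ now hits the rule. */
            rte_flow_destroy(port_id, flow, NULL);
            /* Unmap only once no flow references the queue anymore. */
            return rte_pmd_mlx5_external_tx_queue_id_unmap(port_id, idx);
    }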