From patchwork Thu Feb 24 23:25:06 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 108328
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To: dev@dpdk.org
CC: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Subject: [PATCH v3 1/6] common/mlx5: consider local functions as internal
Date: Fri, 25 Feb 2022 01:25:06 +0200
Message-ID: <20220224232511.3238707-2-michaelba@nvidia.com>
In-Reply-To: <20220224232511.3238707-1-michaelba@nvidia.com>
References: <20220223184835.3061161-1-michaelba@nvidia.com>
 <20220224232511.3238707-1-michaelba@nvidia.com>

The functions that are not explicitly marked as internal were exported,
because the local catch-all rule was missing from the version script.
After adding the missing rule, all local functions are hidden. The
function mlx5_get_device_guid is used in another library, so it must
remain exported (as internal).

Because the local functions were exported as non-internal in DPDK 21.11,
any change to them would break the ABI. An ABI exception is added for
this library, since all its functions are either local or internal.

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 devtools/libabigail.abignore               | 4 ++++
 drivers/common/mlx5/linux/mlx5_common_os.h | 1 +
 drivers/common/mlx5/version.map            | 3 +++
 3 files changed, 8 insertions(+)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index ef0602975a..78d57497e6 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -20,3 +20,7 @@
 ; Ignore changes to rte_crypto_asym_op, asymmetric crypto API is experimental
 [suppress_type]
         name = rte_crypto_asym_op
+
+; Ignore changes in common mlx5 driver, should be all internal
+[suppress_file]
+        soname_regexp = ^librte_common_mlx5\.
\ No newline at end of file
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h
index 83066e752d..edf356a30a 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.h
+++ b/drivers/common/mlx5/linux/mlx5_common_os.h
@@ -300,6 +300,7 @@ mlx5_set_context_attr(struct rte_device *dev, struct ibv_context *ctx);
  *   0 if OFED doesn't support.
  *   >0 if success.
  */
+__rte_internal
 int
 mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len);
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 1c6153c576..cb20a7d893 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -80,6 +80,7 @@ INTERNAL {
 	mlx5_free;
+	mlx5_get_device_guid; # WINDOWS_NO_EXPORT
 	mlx5_get_ifname_sysfs; # WINDOWS_NO_EXPORT
 	mlx5_get_pci_addr; # WINDOWS_NO_EXPORT
@@ -149,4 +150,6 @@ INTERNAL {
 	mlx5_mp_req_mempool_reg;
 	mlx5_mr_mempool2mr_bh;
 	mlx5_mr_mempool_populate_cache;
+
+	local: *;
 };
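
For readers less familiar with linker version scripts, the `local: *;`
catch-all is what actually hides unlisted symbols. A minimal sketch of
the resulting map, assuming the usual DPDK version-script layout (the
real file lists many more symbols):

  INTERNAL {
	global:

	mlx5_get_device_guid; # exported, internal ABI only
	...

	local: *; # every symbol not listed above becomes hidden
  };

Without the `local: *;` line, symbols that match no pattern stay global,
which is how the local helpers leaked into the DPDK 21.11 ABI in the
first place.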

From patchwork Thu Feb 24 23:25:07 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 108329
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To: dev@dpdk.org
CC: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Subject: [PATCH v3 2/6] common/mlx5: glue device and PD importation
Date: Fri, 25 Feb 2022 01:25:07 +0200
Message-ID: <20220224232511.3238707-3-michaelba@nvidia.com>
In-Reply-To: <20220224232511.3238707-1-michaelba@nvidia.com>
References: <20220223184835.3061161-1-michaelba@nvidia.com>
 <20220224232511.3238707-1-michaelba@nvidia.com>

Add support for the rdma-core API to import a device. The API takes an
ibv_context file descriptor and returns an ibv_context pointer that is
associated with the given file descriptor.

Also add support for the rdma-core API to import a PD. The API takes an
ibv_context and a PD handle and returns a protection domain (PD) that is
associated with the given handle in the given context.

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 drivers/common/mlx5/linux/meson.build |  2 ++
 drivers/common/mlx5/linux/mlx5_glue.c | 41 +++++++++++++++++++++++++++
 drivers/common/mlx5/linux/mlx5_glue.h |  4 +++
 3 files changed, 47 insertions(+)

diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 4c7b53b9bd..ed48245c67 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -202,6 +202,8 @@ has_sym_args = [
             'mlx5dv_dr_domain_allow_duplicate_rules' ],
         [ 'HAVE_MLX5_IBV_REG_MR_IOVA', 'infiniband/verbs.h',
             'ibv_reg_mr_iova' ],
+        [ 'HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR', 'infiniband/verbs.h',
+            'ibv_import_device' ],
 ]
 config = configuration_data()
 foreach arg:has_sym_args
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index bc6622053f..450dd6a06a 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -34,6 +34,32 @@ mlx5_glue_dealloc_pd(struct ibv_pd *pd)
 	return ibv_dealloc_pd(pd);
 }
 
+static struct ibv_pd *
+mlx5_glue_import_pd(struct ibv_context *context, uint32_t pd_handle)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	return ibv_import_pd(context, pd_handle);
+#else
+	(void)context;
+	(void)pd_handle;
+	errno = ENOTSUP;
+	return NULL;
+#endif
+}
+
+static int
+mlx5_glue_unimport_pd(struct ibv_pd *pd)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	ibv_unimport_pd(pd);
+	return 0;
+#else
+	(void)pd;
+	errno = ENOTSUP;
+	return -errno;
+#endif
+}
+
 static struct ibv_device **
 mlx5_glue_get_device_list(int *num_devices)
 {
@@ -52,6 +78,18 @@ mlx5_glue_open_device(struct ibv_device *device)
 	return ibv_open_device(device);
 }
 
+static struct ibv_context *
+mlx5_glue_import_device(int cmd_fd)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	return ibv_import_device(cmd_fd);
+#else
+	(void)cmd_fd;
+	errno = ENOTSUP;
+	return NULL;
+#endif
+}
+
 static int
 mlx5_glue_close_device(struct ibv_context *context)
 {
@@ -1402,9 +1440,12 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) {
 	.fork_init = mlx5_glue_fork_init,
 	.alloc_pd = mlx5_glue_alloc_pd,
 	.dealloc_pd = mlx5_glue_dealloc_pd,
+	.import_pd = mlx5_glue_import_pd,
+	.unimport_pd = mlx5_glue_unimport_pd,
 	.get_device_list = mlx5_glue_get_device_list,
 	.free_device_list = mlx5_glue_free_device_list,
 	.open_device = mlx5_glue_open_device,
+	.import_device = mlx5_glue_import_device,
 	.close_device = mlx5_glue_close_device,
 	.query_device = mlx5_glue_query_device,
 	.query_device_ex = mlx5_glue_query_device_ex,
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index 4e6d31f263..c4903a6dce 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -151,9 +151,13 @@ struct mlx5_glue {
 	int (*fork_init)(void);
 	struct ibv_pd *(*alloc_pd)(struct ibv_context *context);
 	int (*dealloc_pd)(struct ibv_pd *pd);
+	struct ibv_pd *(*import_pd)(struct ibv_context *context,
+				    uint32_t pd_handle);
+	int (*unimport_pd)(struct ibv_pd *pd);
 	struct ibv_device **(*get_device_list)(int *num_devices);
 	void (*free_device_list)(struct ibv_device **list);
 	struct ibv_context *(*open_device)(struct ibv_device *device);
+	struct ibv_context *(*import_device)(int cmd_fd);
 	int (*close_device)(struct ibv_context *context);
 	int (*query_device)(struct ibv_context *context,
 			    struct ibv_device_attr *device_attr);
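
To illustrate how these glue entry points chain together, below is a
hedged sketch built directly on the rdma-core calls the glue wraps
(ibv_import_device() and ibv_import_pd()); the helper name is
illustrative and is not part of the patch. It assumes rdma-core with
import support, i.e. HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR defined:

  #include <stdint.h>
  #include <infiniband/verbs.h>

  /*
   * Import a context and a PD owned by another process.
   * cmd_fd: a dup'd ibv_context->cmd_fd received from the owner.
   * pd_handle: the owner's ibv_pd->handle value.
   */
  static struct ibv_pd *
  import_remote_pd(int cmd_fd, uint32_t pd_handle)
  {
  	struct ibv_context *ctx = ibv_import_device(cmd_fd);

  	if (ctx == NULL)
  		return NULL; /* errno is set by rdma-core. */
  	return ibv_import_pd(ctx, pd_handle);
  }

The fallback branches of the glue wrappers make the same flow fail
cleanly with ENOTSUP on older rdma-core versions.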

From patchwork Thu Feb 24 23:25:08 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 108333
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To: dev@dpdk.org
CC: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Subject: [PATCH v3 3/6] common/mlx5: add remote PD and CTX support
Date: Fri, 25 Feb 2022 01:25:08 +0200
Message-ID: <20220224232511.3238707-4-michaelba@nvidia.com>
In-Reply-To: <20220224232511.3238707-1-michaelba@nvidia.com>
References: <20220223184835.3061161-1-michaelba@nvidia.com>
 <20220224232511.3238707-1-michaelba@nvidia.com>

Add an option to probe the common device using the import CTX/PD
functions instead of the create functions. This option requires
receiving the context FD and the PD handle as devargs.

Such sharing can be useful for applications that use the PMD for only
some operations. For example, an application that creates queues itself
and uses the PMD just to configure flow rules.

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
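As a hedged sketch of the exporting side (helper name and error
handling are illustrative, not part of the patch), the two values that
the importing process must pass as devargs can be produced like this,
using only public verbs structures:

  #include <stdint.h>
  #include <unistd.h>
  #include <infiniband/verbs.h>

  /*
   * Exporting process: create the resources and extract the values the
   * importing process will pass as devargs (cmd_fd, pd_handle).
   */
  static int
  export_ctx_and_pd(struct ibv_device *dev, int *cmd_fd, uint32_t *pd_handle)
  {
  	struct ibv_context *ctx = ibv_open_device(dev);
  	struct ibv_pd *pd;

  	if (ctx == NULL)
  		return -1;
  	pd = ibv_alloc_pd(ctx);
  	if (pd == NULL)
  		return -1;
  	*cmd_fd = dup(ctx->cmd_fd); /* Must be dup'd before being passed. */
  	*pd_handle = pd->handle;    /* Kernel handle of the PD object. */
  	return 0;
  }

The FD must then be transferred to the importing process, e.g. over a
Unix socket with SCM_RIGHTS, as the documentation below notes.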
 doc/guides/platform/mlx5.rst                 |  37 +++-
 drivers/common/mlx5/linux/mlx5_common_os.c   | 196 ++++++++++++++++---
 drivers/common/mlx5/linux/mlx5_common_os.h   |   6 -
 drivers/common/mlx5/mlx5_common.c            |  84 +++++---
 drivers/common/mlx5/mlx5_common.h            |  23 ++-
 drivers/common/mlx5/windows/mlx5_common_os.c |  37 +++-
 drivers/common/mlx5/windows/mlx5_common_os.h |   1 -
 7 files changed, 324 insertions(+), 60 deletions(-)

diff --git a/doc/guides/platform/mlx5.rst b/doc/guides/platform/mlx5.rst
index d073c213ca..76b3f80315 100644
--- a/doc/guides/platform/mlx5.rst
+++ b/doc/guides/platform/mlx5.rst
@@ -81,6 +81,12 @@ Limitations
 - On Windows, only ``eth`` and ``crypto`` are supported.
 
+Features
+--------
+
+- Remote PD and CTX - Linux only.
+
+
 .. _mlx5_common_compilation:
 
 Compilation Prerequisites
@@ -638,4 +644,33 @@ and below are the arguments supported by the common mlx5 layer.
 
   If ``sq_db_nc`` is omitted, the preset (if any) environment variable
   "MLX5_SHUT_UP_BF" value is used. If there is no "MLX5_SHUT_UP_BF", the
-  default ``sq_db_nc`` value is zero for ARM64 hosts and one for others.
\ No newline at end of file
+  default ``sq_db_nc`` value is zero for ARM64 hosts and one for others.
+
+- ``cmd_fd`` parameter [int]
+
+  File descriptor of an ``ibv_context`` created outside the PMD.
+  The PMD will use this FD to import the remote CTX. The ``cmd_fd`` is
+  obtained from the ``ibv_context->cmd_fd`` member, which must be dup'd
+  before being passed. This parameter is valid only if the ``pd_handle``
+  parameter is specified.
+
+  By default, the PMD will create a new ``ibv_context``.
+
+  .. note::
+
+     When the FD comes from another process, it is the user's responsibility
+     to share the FD between the processes (e.g. by SCM_RIGHTS).
+
+- ``pd_handle`` parameter [int]
+
+  Protection domain handle of an ``ibv_pd`` created outside the PMD.
+  The PMD will use this handle to import the remote PD. The ``pd_handle``
+  can be obtained from the original PD by reading its ``ibv_pd->handle``
+  member value. This parameter is valid only if the ``cmd_fd`` parameter is
+  specified, and its value must be a valid kernel handle for a PD object in
+  the context represented by the given ``cmd_fd``.
+
+  By default, the PMD will allocate a new PD.
+
+  .. note::
+
+     The ``ibv_pd->handle`` member is different from the ``mlx5dv_pd->pdn``
+     member.
\ No newline at end of file
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index a752d79e8e..a3c25638da 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -408,27 +408,128 @@ mlx5_glue_constructor(void)
 }
 
 /**
- * Allocate Protection Domain object and extract its pdn using DV API.
+ * Validate user arguments for remote PD and CTX.
+ *
+ * @param config
+ *   Pointer to device configuration structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config)
+{
+	int device_fd = config->device_fd;
+	int pd_handle = config->pd_handle;
+
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	if (device_fd == MLX5_ARG_UNSET && pd_handle != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote PD without CTX is not supported.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (device_fd != MLX5_ARG_UNSET && pd_handle == MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote CTX without PD is not supported.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG, "Remote PD and CTX is supported: (cmd_fd=%d, "
+		"pd_handle=%d).", device_fd, pd_handle);
+#else
+	if (pd_handle != MLX5_ARG_UNSET || device_fd != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR,
+			"Remote PD and CTX is not supported - maybe old rdma-core version?");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+#endif
+	return 0;
+}
+
+/**
+ * Release Protection Domain object.
  *
  * @param[out] cdev
  *   Pointer to the mlx5 device.
  *
  * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise.
  */
 int
+mlx5_os_pd_release(struct mlx5_common_device *cdev)
+{
+	if (cdev->config.pd_handle == MLX5_ARG_UNSET)
+		return mlx5_glue->dealloc_pd(cdev->pd);
+	else
+		return mlx5_glue->unimport_pd(cdev->pd);
+}
+
+/**
+ * Allocate Protection Domain object.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
+mlx5_os_pd_create(struct mlx5_common_device *cdev)
+{
+	cdev->pd = mlx5_glue->alloc_pd(cdev->ctx);
+	if (cdev->pd == NULL) {
+		DRV_LOG(ERR, "Failed to allocate PD: %s", rte_strerror(errno));
+		return errno ? -errno : -ENOMEM;
+	}
+	return 0;
+}
+
+/**
+ * Import Protection Domain object according to given PD handle.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
+mlx5_os_pd_import(struct mlx5_common_device *cdev)
+{
+	cdev->pd = mlx5_glue->import_pd(cdev->ctx, cdev->config.pd_handle);
+	if (cdev->pd == NULL) {
+		DRV_LOG(ERR, "Failed to import PD using handle=%d: %s",
+			cdev->config.pd_handle, rte_strerror(errno));
+		return errno ? -errno : -ENOMEM;
+	}
+	return 0;
+}
+
+/**
+ * Prepare Protection Domain object and extract its pdn using DV API.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_pd_prepare(struct mlx5_common_device *cdev)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	struct mlx5dv_obj obj;
 	struct mlx5dv_pd pd_info;
-	int ret;
 #endif
+	int ret;
 
-	cdev->pd = mlx5_glue->alloc_pd(cdev->ctx);
-	if (cdev->pd == NULL) {
-		DRV_LOG(ERR, "Failed to allocate PD.");
-		return errno ? -errno : -ENOMEM;
+	if (cdev->config.pd_handle == MLX5_ARG_UNSET)
+		ret = mlx5_os_pd_create(cdev);
+	else
+		ret = mlx5_os_pd_import(cdev);
+	if (ret) {
+		rte_errno = -ret;
+		return ret;
 	}
 	if (cdev->config.devx == 0)
 		return 0;
@@ -438,15 +539,17 @@ mlx5_os_pd_create(struct mlx5_common_device *cdev)
 	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Fail to get PD object info.");
-		mlx5_glue->dealloc_pd(cdev->pd);
+		rte_errno = errno;
+		claim_zero(mlx5_os_pd_release(cdev));
 		cdev->pd = NULL;
-		return -errno;
+		return -rte_errno;
 	}
 	cdev->pdn = pd_info.pdn;
 	return 0;
 #else
 	DRV_LOG(ERR, "Cannot get pdn - no DV support.");
-	return -ENOTSUP;
+	rte_errno = ENOTSUP;
+	return -rte_errno;
 #endif /* HAVE_IBV_FLOW_DV_SUPPORT */
 }
 
@@ -648,28 +751,28 @@ mlx5_restore_doorbell_mapping_env(int value)
 /**
  * Function API to open IB device.
  *
- *
  * @param cdev
  *   Pointer to the mlx5 device.
  * @param classes
  *   Chosen classes come from device arguments.
  *
  * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Pointer to ibv_context on success, NULL otherwise and rte_errno is set.
  */
-int
-mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
+static struct ibv_context *
+mlx5_open_device(struct mlx5_common_device *cdev, uint32_t classes)
 {
 	struct ibv_device *ibv;
 	struct ibv_context *ctx = NULL;
 	int dbmap_env;
 
+	MLX5_ASSERT(cdev->config.device_fd == MLX5_ARG_UNSET);
 	if (classes & MLX5_CLASS_VDPA)
 		ibv = mlx5_vdpa_get_ibv_dev(cdev->dev);
 	else
 		ibv = mlx5_os_get_ibv_dev(cdev->dev);
 	if (!ibv)
-		return -rte_errno;
+		return NULL;
 	DRV_LOG(INFO, "Dev information matches for device \"%s\".", ibv->name);
 	/*
 	 * Configure environment variable "MLX5_BF_SHUT_UP" before the device
@@ -682,29 +785,78 @@ mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
 	ctx = mlx5_glue->dv_open_device(ibv);
 	if (ctx) {
 		cdev->config.devx = 1;
-		DRV_LOG(DEBUG, "DevX is supported.");
 	} else if (classes == MLX5_CLASS_ETH) {
 		/* The environment variable is still configured. */
 		ctx = mlx5_glue->open_device(ibv);
 		if (ctx == NULL)
 			goto error;
-		DRV_LOG(DEBUG, "DevX is NOT supported.");
 	} else {
 		goto error;
 	}
 	/* The device is created, no need for environment. */
 	mlx5_restore_doorbell_mapping_env(dbmap_env);
-	/* Hint libmlx5 to use PMD allocator for data plane resources */
-	mlx5_set_context_attr(cdev->dev, ctx);
-	cdev->ctx = ctx;
-	return 0;
+	return ctx;
 error:
 	rte_errno = errno ? errno : ENODEV;
 	/* The device creation is failed, no need for environment. */
 	mlx5_restore_doorbell_mapping_env(dbmap_env);
 	DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name);
-	return -rte_errno;
+	return NULL;
+}
+
+/**
+ * Function API to import IB device.
+ *
+ * @param cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   Pointer to ibv_context on success, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_context *
+mlx5_import_device(struct mlx5_common_device *cdev)
+{
+	struct ibv_context *ctx = NULL;
+
+	MLX5_ASSERT(cdev->config.device_fd != MLX5_ARG_UNSET);
+	ctx = mlx5_glue->import_device(cdev->config.device_fd);
+	if (!ctx) {
+		DRV_LOG(ERR, "Failed to import device for fd=%d: %s",
+			cdev->config.device_fd, rte_strerror(errno));
+		rte_errno = errno;
+	}
+	return ctx;
+}
+
+/**
+ * Function API to prepare IB device.
+ *
+ * @param cdev
+ *   Pointer to the mlx5 device.
+ * @param classes
+ *   Chosen classes come from device arguments.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
+{
+	struct ibv_context *ctx = NULL;
+
+	if (cdev->config.device_fd == MLX5_ARG_UNSET)
+		ctx = mlx5_open_device(cdev, classes);
+	else
+		ctx = mlx5_import_device(cdev);
+	if (ctx == NULL)
+		return -rte_errno;
+	/* Hint libmlx5 to use PMD allocator for data plane resources */
+	mlx5_set_context_attr(cdev->dev, ctx);
+	cdev->ctx = ctx;
+	return 0;
 }
 
 int
 mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
 {
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h
index edf356a30a..a85f3b5f3c 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.h
+++ b/drivers/common/mlx5/linux/mlx5_common_os.h
@@ -203,12 +203,6 @@ mlx5_os_get_devx_uar_page_id(void *uar)
 #endif
 }
 
-static inline int
-mlx5_os_dealloc_pd(void *pd)
-{
-	return mlx5_glue->dealloc_pd(pd);
-}
-
 __rte_internal
 static inline void *
 mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access)
diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 8cf391df13..94c303ce81 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -24,6 +24,12 @@ uint8_t haswell_broadwell_cpu;
 /* Driver type key for new device global syntax. */
 #define MLX5_DRIVER_KEY "driver"
 
+/* Device parameter to get file descriptor for import device. */
+#define MLX5_DEVICE_FD "cmd_fd"
+
+/* Device parameter to get PD number for import Protection Domain. */
+#define MLX5_PD_HANDLE "pd_handle"
+
 /* Enable extending memsegs when creating a MR. */
 #define MLX5_MR_EXT_MEMSEG_EN "mr_ext_memseg_en"
@@ -283,6 +289,10 @@ mlx5_common_args_check_handler(const char *key, const char *val, void *opaque)
 		config->mr_mempool_reg_en = !!tmp;
 	} else if (strcmp(key, MLX5_SYS_MEM_EN) == 0) {
 		config->sys_mem_en = !!tmp;
+	} else if (strcmp(key, MLX5_DEVICE_FD) == 0) {
+		config->device_fd = tmp;
+	} else if (strcmp(key, MLX5_PD_HANDLE) == 0) {
+		config->pd_handle = tmp;
 	}
 	return 0;
 }
@@ -310,6 +320,8 @@ mlx5_common_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 		MLX5_MR_EXT_MEMSEG_EN,
 		MLX5_SYS_MEM_EN,
 		MLX5_MR_MEMPOOL_REG_EN,
+		MLX5_DEVICE_FD,
+		MLX5_PD_HANDLE,
 		NULL,
 	};
 	int ret = 0;
@@ -321,13 +333,19 @@ mlx5_common_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 	config->mr_mempool_reg_en = 1;
 	config->sys_mem_en = 0;
 	config->dbnc = MLX5_ARG_UNSET;
+	config->device_fd = MLX5_ARG_UNSET;
+	config->pd_handle = MLX5_ARG_UNSET;
 	/* Process common parameters. */
 	ret = mlx5_kvargs_process(mkvlist, params,
 				  mlx5_common_args_check_handler, config);
 	if (ret) {
 		rte_errno = EINVAL;
-		ret = -rte_errno;
+		return -rte_errno;
 	}
+	/* Validate user arguments for remote PD and CTX if it is given. */
+	ret = mlx5_os_remote_pd_and_ctx_validate(config);
+	if (ret)
+		return ret;
 	DRV_LOG(DEBUG, "mr_ext_memseg_en is %u.", config->mr_ext_memseg_en);
 	DRV_LOG(DEBUG, "mr_mempool_reg_en is %u.", config->mr_mempool_reg_en);
 	DRV_LOG(DEBUG, "sys_mem_en is %u.", config->sys_mem_en);
@@ -645,7 +663,7 @@ static void
 mlx5_dev_hw_global_release(struct mlx5_common_device *cdev)
 {
 	if (cdev->pd != NULL) {
-		claim_zero(mlx5_os_dealloc_pd(cdev->pd));
+		claim_zero(mlx5_os_pd_release(cdev));
 		cdev->pd = NULL;
 	}
 	if (cdev->ctx != NULL) {
@@ -674,20 +692,27 @@ mlx5_dev_hw_global_prepare(struct mlx5_common_device *cdev, uint32_t classes)
 	ret = mlx5_os_open_device(cdev, classes);
 	if (ret < 0)
 		return ret;
-	/* Allocate Protection Domain object and extract its pdn. */
-	ret = mlx5_os_pd_create(cdev);
+	/*
+	 * When CTX is created by Verbs, query HCA attribute is unsupported.
+	 * When CTX is imported, we cannot know if it is created by DevX or
+	 * Verbs. So, we use query HCA attribute function to check it.
+	 */
+	if (cdev->config.devx || cdev->config.device_fd != MLX5_ARG_UNSET) {
+		/* Query HCA attributes. */
+		ret = mlx5_devx_cmd_query_hca_attr(cdev->ctx,
+						   &cdev->config.hca_attr);
+		if (ret) {
+			DRV_LOG(ERR, "Unable to read HCA caps in DevX mode.");
+			rte_errno = ENOTSUP;
+			goto error;
+		}
+		cdev->config.devx = 1;
+	}
+	DRV_LOG(DEBUG, "DevX is %ssupported.", cdev->config.devx ? "" : "NOT ");
+	/* Prepare Protection Domain object and extract its pdn. */
+	ret = mlx5_os_pd_prepare(cdev);
 	if (ret)
 		goto error;
-	/* All actions taken below are relevant only when DevX is supported */
-	if (cdev->config.devx == 0)
-		return 0;
-	/* Query HCA attributes. */
-	ret = mlx5_devx_cmd_query_hca_attr(cdev->ctx, &cdev->config.hca_attr);
-	if (ret) {
-		DRV_LOG(ERR, "Unable to read HCA capabilities.");
-		rte_errno = ENOTSUP;
-		goto error;
-	}
 	return 0;
 error:
 	mlx5_dev_hw_global_release(cdev);
@@ -814,26 +839,39 @@ mlx5_common_probe_again_args_validate(struct mlx5_common_device *cdev,
 	 * Checks the match between the temporary structure and the existing
 	 * common device structure.
 	 */
-	if (cdev->config.mr_ext_memseg_en ^ config->mr_ext_memseg_en) {
-		DRV_LOG(ERR, "\"mr_ext_memseg_en\" "
+	if (cdev->config.mr_ext_memseg_en != config->mr_ext_memseg_en) {
+		DRV_LOG(ERR, "\"" MLX5_MR_EXT_MEMSEG_EN "\" "
 			"configuration mismatch for device %s.",
 			cdev->dev->name);
 		goto error;
 	}
-	if (cdev->config.mr_mempool_reg_en ^ config->mr_mempool_reg_en) {
-		DRV_LOG(ERR, "\"mr_mempool_reg_en\" "
+	if (cdev->config.mr_mempool_reg_en != config->mr_mempool_reg_en) {
+		DRV_LOG(ERR, "\"" MLX5_MR_MEMPOOL_REG_EN "\" "
 			"configuration mismatch for device %s.",
 			cdev->dev->name);
 		goto error;
 	}
-	if (cdev->config.sys_mem_en ^ config->sys_mem_en) {
-		DRV_LOG(ERR,
-			"\"sys_mem_en\" configuration mismatch for device %s.",
+	if (cdev->config.device_fd != config->device_fd) {
+		DRV_LOG(ERR, "\"" MLX5_DEVICE_FD "\" "
+			"configuration mismatch for device %s.",
+			cdev->dev->name);
+		goto error;
+	}
+	if (cdev->config.pd_handle != config->pd_handle) {
+		DRV_LOG(ERR, "\"" MLX5_PD_HANDLE "\" "
+			"configuration mismatch for device %s.",
+			cdev->dev->name);
+		goto error;
+	}
+	if (cdev->config.sys_mem_en != config->sys_mem_en) {
+		DRV_LOG(ERR, "\"" MLX5_SYS_MEM_EN "\" "
+			"configuration mismatch for device %s.",
 			cdev->dev->name);
 		goto error;
 	}
-	if (cdev->config.dbnc ^ config->dbnc) {
-		DRV_LOG(ERR, "\"dbnc\" configuration mismatch for device %s.",
+	if (cdev->config.dbnc != config->dbnc) {
+		DRV_LOG(ERR, "\"" MLX5_SQ_DB_NC "\" "
+			"configuration mismatch for device %s.",
 			cdev->dev->name);
 		goto error;
 	}
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 49bcea1d91..63f31437da 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -446,6 +446,8 @@ void mlx5_common_init(void);
 struct mlx5_common_dev_config {
 	struct mlx5_hca_attr hca_attr; /* HCA attributes. */
 	int dbnc; /* Skip doorbell register write barrier. */
+	int device_fd; /* Device file descriptor for importation. */
+	int pd_handle; /* Protection Domain handle for importation. */
 	unsigned int devx:1; /* Whether devx interface is available or not. */
 	unsigned int sys_mem_en:1; /* The default memory allocator. */
 	unsigned int mr_mempool_reg_en:1;
@@ -465,6 +467,23 @@ struct mlx5_common_device {
 	struct mlx5_common_dev_config config; /* Device configuration. */
 };
 
+/**
+ * Indicates whether PD and CTX are imported from another process,
+ * or created by this process.
+ *
+ * @param cdev
+ *   Pointer to common device.
+ *
+ * @return
+ *   True if PD and CTX are imported from another process, False otherwise.
+ */
+static inline bool
+mlx5_imported_pd_and_ctx(struct mlx5_common_device *cdev)
+{
+	return cdev->config.device_fd != MLX5_ARG_UNSET &&
+	       cdev->config.pd_handle != MLX5_ARG_UNSET;
+}
+
 /**
  * Initialization function for the driver called during device probing.
  */
@@ -554,7 +573,9 @@ mlx5_devx_uar_release(struct mlx5_uar *uar);
 /* mlx5_common_os.c */
 
 int mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes);
-int mlx5_os_pd_create(struct mlx5_common_device *cdev);
+int mlx5_os_pd_prepare(struct mlx5_common_device *cdev);
+int mlx5_os_pd_release(struct mlx5_common_device *cdev);
+int mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config);
 
 /* mlx5 PMD wrapped MR struct. */
 struct mlx5_pmd_wrapped_mr {
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c
index c3cfc315f2..f2fc7cd494 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.c
+++ b/drivers/common/mlx5/windows/mlx5_common_os.c
@@ -25,21 +25,46 @@ mlx5_glue_constructor(void)
 {
 }
 
+/**
+ * Validate user arguments for remote PD and CTX.
+ *
+ * @param config
+ *   Pointer to device configuration structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config)
+{
+	int device_fd = config->device_fd;
+	int pd_handle = config->pd_handle;
+
+	if (pd_handle != MLX5_ARG_UNSET || device_fd != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote PD and CTX is not supported on Windows.");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+	return 0;
+}
+
 /**
  * Release PD. Releases a given mlx5_pd object
  *
- * @param[in] pd
- *   Pointer to mlx5_pd.
+ * @param[in] cdev
+ *   Pointer to the mlx5 device.
  *
  * @return
  *   Zero if pd is released successfully, negative number otherwise.
  */
 int
-mlx5_os_dealloc_pd(void *pd)
+mlx5_os_pd_release(struct mlx5_common_device *cdev)
 {
+	struct mlx5_pd *pd = cdev->pd;
+
 	if (!pd)
 		return -EINVAL;
-	mlx5_devx_cmd_destroy(((struct mlx5_pd *)pd)->obj);
+	mlx5_devx_cmd_destroy(pd->obj);
 	mlx5_free(pd);
 	return 0;
 }
@@ -47,14 +72,14 @@ mlx5_os_dealloc_pd(void *pd)
 /**
  * Allocate Protection Domain object and extract its pdn using DV API.
  *
- * @param[out] dev
+ * @param[out] cdev
  *   Pointer to the mlx5 device.
  *
  * @return
  *   0 on success, a negative value otherwise.
  */
 int
-mlx5_os_pd_create(struct mlx5_common_device *cdev)
+mlx5_os_pd_prepare(struct mlx5_common_device *cdev)
 {
 	struct mlx5_pd *pd;
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.h b/drivers/common/mlx5/windows/mlx5_common_os.h
index 61fc8dd761..ee7973f1ec 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.h
+++ b/drivers/common/mlx5/windows/mlx5_common_os.h
@@ -248,7 +248,6 @@ mlx5_os_devx_subscribe_devx_event(void *eventc,
 	return -ENOTSUP;
 }
 
-int mlx5_os_dealloc_pd(void *pd);
 __rte_internal
 void *mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access);
 __rte_internal
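
Putting the two devargs together, here is a hedged sketch of how the
importing process might attach the device. The PCI address and the
literal values are placeholders, the helper name is illustrative, and
cmd_fd must reference an FD that is valid in this process (e.g.
received via SCM_RIGHTS):

  #include <stdio.h>
  #include <stdint.h>
  #include <rte_dev.h>

  /*
   * Importing process: probe the mlx5 common device on top of a remote
   * CTX/PD. fd and handle come from the exporting process.
   */
  static int
  probe_with_remote_ctx_pd(int cmd_fd, uint32_t pd_handle)
  {
  	char devargs[128];

  	snprintf(devargs, sizeof(devargs),
  		 "0000:08:00.0,cmd_fd=%d,pd_handle=%u", cmd_fd, pd_handle);
  	return rte_dev_probe(devargs);
  }

The validation added in mlx5_os_remote_pd_and_ctx_validate() rejects a
probe that supplies only one of the two parameters.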

From patchwork Thu Feb 24 23:25:09 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 108331
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To: dev@dpdk.org
CC: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Subject: [PATCH v3 4/6] net/mlx5: optimize RxQ/TxQ control structure
Date: Fri, 25 Feb 2022 01:25:09 +0200
Message-ID: <20220224232511.3238707-5-michaelba@nvidia.com>
In-Reply-To: <20220224232511.3238707-1-michaelba@nvidia.com>
References: <20220223184835.3061161-1-michaelba@nvidia.com>
 <20220224232511.3238707-1-michaelba@nvidia.com>

The RxQ/TxQ control structure has a field named type. This type is an
enum with values for standard and hairpin, and the only use of the field
is to check whether a queue is hairpin or standard. This patch replaces
the enum with a boolean variable that records whether the queue is a
hairpin queue.

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
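In essence, the change has the following before/after shape (a minimal
compilable sketch with the surrounding fields omitted; names are
shortened from the driver's mlx5_rxq_ctrl/mlx5_rxq_type):

  #include <stdbool.h>

  /* Before: a three-value enum that call sites only ever compared
   * against the hairpin value.
   */
  enum rxq_type_before {
  	RXQ_TYPE_STANDARD,
  	RXQ_TYPE_HAIRPIN,
  	RXQ_TYPE_UNDEFINED,
  };

  struct rxq_ctrl_before {
  	enum rxq_type_before type;
  };

  /* After: a single boolean flag. */
  struct rxq_ctrl_after {
  	bool is_hairpin;
  };

  static bool
  is_hairpin_before(const struct rxq_ctrl_before *c)
  {
  	return c->type == RXQ_TYPE_HAIRPIN;
  }

  static bool
  is_hairpin_after(const struct rxq_ctrl_after *c)
  {
  	return c->is_hairpin;
  }

Since no call site distinguished standard from undefined, the boolean
loses no information while simplifying every comparison.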
 drivers/net/mlx5/mlx5_devx.c    | 26 ++++++++++--------------
 drivers/net/mlx5/mlx5_ethdev.c  |  2 +-
 drivers/net/mlx5/mlx5_flow.c    | 14 ++++++-------
 drivers/net/mlx5/mlx5_flow_dv.c | 14 +++++--------
 drivers/net/mlx5/mlx5_rx.h      | 13 +++----------
 drivers/net/mlx5/mlx5_rxq.c     | 33 +++++++++++------------------
 drivers/net/mlx5/mlx5_trigger.c | 36 ++++++++++++++++-----------------
 drivers/net/mlx5/mlx5_tx.h      |  7 +------
 drivers/net/mlx5/mlx5_txq.c     | 14 ++++++-------
 9 files changed, 64 insertions(+), 95 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 8d151fa4ab..bcd2358165 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -88,7 +88,7 @@ mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type)
 	default:
 		break;
 	}
-	if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (rxq->ctrl->is_hairpin)
 		return mlx5_devx_cmd_modify_rq(rxq->ctrl->obj->rq, &rq_attr);
 	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
@@ -162,7 +162,7 @@ mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 
 	if (rxq_obj == NULL)
 		return;
-	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
+	if (rxq_obj->rxq_ctrl->is_hairpin) {
 		if (rxq_obj->rq == NULL)
 			return;
 		mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST);
@@ -476,7 +476,7 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(tmpl);
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (rxq_ctrl->is_hairpin)
 		return mlx5_rxq_obj_hairpin_new(rxq);
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq && !rxq_ctrl->started) {
@@ -583,7 +583,7 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
 
 		MLX5_ASSERT(rxq != NULL);
-		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+		if (rxq->ctrl->is_hairpin)
 			rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
 		else
 			rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
@@ -706,17 +706,13 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 		       int tunnel, struct mlx5_devx_tir_attr *tir_attr)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5_rxq_type rxq_obj_type;
+	bool is_hairpin;
 	bool lro = true;
 	uint32_t i;
 
 	/* NULL queues designate drop queue. */
 	if (ind_tbl->queues != NULL) {
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-				mlx5_rxq_ctrl_get(dev, ind_tbl->queues[0]);
-		rxq_obj_type = rxq_ctrl != NULL ? rxq_ctrl->type :
-						  MLX5_RXQ_TYPE_STANDARD;
-
+		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
*/ for (i = 0; i < ind_tbl->queues_n; ++i) { struct mlx5_rxq_data *rxq_i = @@ -728,7 +724,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key, } } } else { - rxq_obj_type = priv->drop_queue.rxq->ctrl->type; + is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin; } memset(tir_attr, 0, sizeof(*tir_attr)); tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT; @@ -759,7 +755,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key, (!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) << MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT; } - if (rxq_obj_type == MLX5_RXQ_TYPE_HAIRPIN) + if (is_hairpin) tir_attr->transport_domain = priv->sh->td->id; else tir_attr->transport_domain = priv->sh->tdn; @@ -940,7 +936,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) goto error; } rxq_obj->rxq_ctrl = rxq_ctrl; - rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD; + rxq_ctrl->is_hairpin = false; rxq_ctrl->sh = priv->sh; rxq_ctrl->obj = rxq_obj; rxq->ctrl = rxq_ctrl; @@ -1242,7 +1238,7 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx) struct mlx5_txq_ctrl *txq_ctrl = container_of(txq_data, struct mlx5_txq_ctrl, txq); - if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) + if (txq_ctrl->is_hairpin) return mlx5_txq_obj_hairpin_new(dev, idx); #if !defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) && defined(HAVE_INFINIBAND_VERBS_H) DRV_LOG(ERR, "Port %u Tx queue %u cannot create with DevX, no UAR.", @@ -1381,7 +1377,7 @@ void mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj) { MLX5_ASSERT(txq_obj); - if (txq_obj->txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) { + if (txq_obj->txq_ctrl->is_hairpin) { if (txq_obj->tis) claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis)); #if defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) || !defined(HAVE_INFINIBAND_VERBS_H) diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index 72bf8ac914..406761ccf8 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -173,7 +173,7 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev) for (i = 0, j = 0; i < rxqs_n; i++) { struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); - if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) + if (rxq_ctrl && !rxq_ctrl->is_hairpin) rss_queue_arr[j++] = i; } rss_queue_n = j; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 5a4e000c12..09701a73c1 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1788,7 +1788,7 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev, const char **error, uint32_t *queue_idx) { const struct mlx5_priv *priv = dev->data->dev_private; - enum mlx5_rxq_type rxq_type = MLX5_RXQ_TYPE_UNDEFINED; + bool is_hairpin = false; uint32_t i; for (i = 0; i != queues_n; ++i) { @@ -1805,9 +1805,9 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev, *queue_idx = i; return -EINVAL; } - if (i == 0) - rxq_type = rxq_ctrl->type; - if (rxq_type != rxq_ctrl->type) { + if (i == 0 && rxq_ctrl->is_hairpin) + is_hairpin = true; + if (is_hairpin != rxq_ctrl->is_hairpin) { *error = "combining hairpin and regular RSS queues is not supported"; *queue_idx = i; return -ENOTSUP; @@ -5885,15 +5885,13 @@ flow_create_split_metadata(struct rte_eth_dev *dev, const struct rte_flow_action_queue *queue; queue = qrss->conf; - if (mlx5_rxq_get_type(dev, queue->index) == - MLX5_RXQ_TYPE_HAIRPIN) + if (mlx5_rxq_is_hairpin(dev, queue->index)) qrss = NULL; } else if (qrss->type == RTE_FLOW_ACTION_TYPE_RSS) { const struct rte_flow_action_rss *rss; rss = qrss->conf; - if 
(mlx5_rxq_get_type(dev, rss->queue[0]) == - MLX5_RXQ_TYPE_HAIRPIN) + if (mlx5_rxq_is_hairpin(dev, rss->queue[0])) qrss = NULL; } } diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 7a012f7bb9..313dc64604 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -5771,8 +5771,7 @@ flow_dv_validate_action_sample(uint64_t *action_flags, } /* Continue validation for Xcap actions.*/ if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) && - (queue_index == 0xFFFF || - mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN)) { + (queue_index == 0xFFFF || !mlx5_rxq_is_hairpin(dev, queue_index))) { if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) == MLX5_FLOW_XCAP_ACTIONS) return rte_flow_error_set(error, ENOTSUP, @@ -7957,8 +7956,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, */ if ((action_flags & (MLX5_FLOW_XCAP_ACTIONS | MLX5_FLOW_VLAN_ACTIONS)) && - (queue_index == 0xFFFF || - mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN || + (queue_index == 0xFFFF || !mlx5_rxq_is_hairpin(dev, queue_index) || ((conf = mlx5_rxq_get_hairpin_conf(dev, queue_index)) != NULL && conf->tx_explicit != 0))) { if ((action_flags & MLX5_FLOW_XCAP_ACTIONS) == @@ -10948,10 +10946,8 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, { const struct mlx5_rte_flow_item_tx_queue *queue_m; const struct mlx5_rte_flow_item_tx_queue *queue_v; - void *misc_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); - void *misc_v = - MLX5_ADDR_OF(fte_match_param, key, misc_parameters); + void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); struct mlx5_txq_ctrl *txq; uint32_t queue, mask; @@ -10962,7 +10958,7 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, txq = mlx5_txq_get(dev, queue_v->queue); if (!txq) return; - if (txq->type == MLX5_TXQ_TYPE_HAIRPIN) + if (txq->is_hairpin) queue = txq->obj->sq->id; else queue = txq->obj->sq_obj.sq->id; diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 295dba063b..fbc86dcef2 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -141,12 +141,6 @@ struct mlx5_rxq_data { /* Buffer split segment descriptions - sizes, offsets, pools. */ } __rte_cache_aligned; -enum mlx5_rxq_type { - MLX5_RXQ_TYPE_STANDARD, /* Standard Rx queue. */ - MLX5_RXQ_TYPE_HAIRPIN, /* Hairpin Rx queue. */ - MLX5_RXQ_TYPE_UNDEFINED, -}; - /* RX queue control descriptor. */ struct mlx5_rxq_ctrl { struct mlx5_rxq_data rxq; /* Data path structure. */ @@ -154,7 +148,7 @@ struct mlx5_rxq_ctrl { LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */ struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */ struct mlx5_dev_ctx_shared *sh; /* Shared context. */ - enum mlx5_rxq_type type; /* Rxq type. */ + bool is_hairpin; /* Whether RxQ type is Hairpin. */ unsigned int socket; /* CPU socket ID for allocations. */ LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */ uint32_t share_group; /* Group ID of shared RXQ. 
*/ @@ -258,7 +252,7 @@ struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev, int mlx5_hrxq_obj_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq); int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx); uint32_t mlx5_hrxq_verify(struct rte_eth_dev *dev); -enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx); +bool mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx); const struct rte_eth_hairpin_conf *mlx5_rxq_get_hairpin_conf (struct rte_eth_dev *dev, uint16_t idx); struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev); @@ -632,8 +626,7 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev) for (i = 0; i < priv->rxqs_n; ++i) { struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); - if (rxq_ctrl == NULL || - rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) + if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin) continue; n_ibv++; if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index e7284f9da9..e96584d55d 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -1391,8 +1391,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev) struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); struct mlx5_rxq_data *rxq; - if (rxq_ctrl == NULL || - rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) + if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin) continue; rxq = &rxq_ctrl->rxq; n_ibv++; @@ -1480,8 +1479,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev) for (i = 0; i != priv->rxqs_n; ++i) { struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); - if (rxq_ctrl == NULL || - rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) + if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin) continue; rxq_ctrl->rxq.mprq_mp = mp; } @@ -1798,7 +1796,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, rte_errno = ENOSPC; goto error; } - tmpl->type = MLX5_RXQ_TYPE_STANDARD; + tmpl->is_hairpin = false; if (mlx5_mr_ctrl_init(&tmpl->rxq.mr_ctrl, &priv->sh->cdev->mr_scache.dev_gen, socket)) { /* rte_errno is already set. */ @@ -1969,7 +1967,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, LIST_INIT(&tmpl->owners); rxq->ctrl = tmpl; LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry); - tmpl->type = MLX5_RXQ_TYPE_HAIRPIN; + tmpl->is_hairpin = true; tmpl->socket = SOCKET_ID_ANY; tmpl->rxq.rss_hash = 0; tmpl->rxq.port_id = dev->data->port_id; @@ -2120,7 +2118,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) mlx5_free(rxq_ctrl->obj); rxq_ctrl->obj = NULL; } - if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { + if (!rxq_ctrl->is_hairpin) { if (!rxq_ctrl->started) rxq_free_elts(rxq_ctrl); dev->data->rx_queue_state[idx] = @@ -2129,7 +2127,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) } else { /* Refcnt zero, closing device. */ LIST_REMOVE(rxq, owner_entry); if (LIST_EMPTY(&rxq_ctrl->owners)) { - if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) + if (!rxq_ctrl->is_hairpin) mlx5_mr_btree_free (&rxq_ctrl->rxq.mr_ctrl.cache_bh); if (rxq_ctrl->rxq.shared) @@ -2169,7 +2167,7 @@ mlx5_rxq_verify(struct rte_eth_dev *dev) } /** - * Get a Rx queue type. + * Check whether RxQ type is Hairpin. * * @param dev * Pointer to Ethernet device. @@ -2177,17 +2175,15 @@ mlx5_rxq_verify(struct rte_eth_dev *dev) * Rx queue index. * * @return - * The Rx queue type. + * True if Rx queue type is Hairpin, otherwise False. 
*/ -enum mlx5_rxq_type -mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx) +bool +mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx); - if (idx < priv->rxqs_n && rxq_ctrl != NULL) - return rxq_ctrl->type; - return MLX5_RXQ_TYPE_UNDEFINED; + return (idx < priv->rxqs_n && rxq_ctrl != NULL && rxq_ctrl->is_hairpin); } /* @@ -2204,14 +2200,9 @@ mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx) const struct rte_eth_hairpin_conf * mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx) { - struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); - if (idx < priv->rxqs_n && rxq != NULL) { - if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) - return &rxq->hairpin_conf; - } - return NULL; + return mlx5_rxq_is_hairpin(dev, idx) ? &rxq->hairpin_conf : NULL; } /** diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 74c3bc8a13..fe8b42c414 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -59,7 +59,7 @@ mlx5_txq_start(struct rte_eth_dev *dev) if (!txq_ctrl) continue; - if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) + if (!txq_ctrl->is_hairpin) txq_alloc_elts(txq_ctrl); MLX5_ASSERT(!txq_ctrl->obj); txq_ctrl->obj = mlx5_malloc(flags, sizeof(struct mlx5_txq_obj), @@ -77,7 +77,7 @@ mlx5_txq_start(struct rte_eth_dev *dev) txq_ctrl->obj = NULL; goto error; } - if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) { + if (!txq_ctrl->is_hairpin) { size_t size = txq_data->cqe_s * sizeof(*txq_data->fcqs); txq_data->fcqs = mlx5_malloc(flags, size, @@ -167,7 +167,7 @@ mlx5_rxq_ctrl_prepare(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl, { int ret = 0; - if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { + if (!rxq_ctrl->is_hairpin) { /* * Pre-register the mempools. Regardless of whether * the implicit registration is enabled or not, @@ -280,7 +280,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) txq_ctrl = mlx5_txq_get(dev, i); if (!txq_ctrl) continue; - if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN || + if (!txq_ctrl->is_hairpin || txq_ctrl->hairpin_conf.peers[0].port != self_port) { mlx5_txq_release(dev, i); continue; @@ -299,7 +299,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) if (!txq_ctrl) continue; /* Skip hairpin queues with other peer ports. 
*/ - if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN || + if (!txq_ctrl->is_hairpin || txq_ctrl->hairpin_conf.peers[0].port != self_port) { mlx5_txq_release(dev, i); continue; @@ -322,7 +322,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) return -rte_errno; } rxq_ctrl = rxq->ctrl; - if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN || + if (!rxq_ctrl->is_hairpin || rxq->hairpin_conf.peers[0].queue != i) { rte_errno = ENOMEM; DRV_LOG(ERR, "port %u Tx queue %d can't be binded to " @@ -412,7 +412,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, dev->data->port_id, peer_queue); return -rte_errno; } - if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + if (!txq_ctrl->is_hairpin) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d is not a hairpin Txq", dev->data->port_id, peer_queue); @@ -444,7 +444,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, return -rte_errno; } rxq_ctrl = rxq->ctrl; - if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { + if (!rxq_ctrl->is_hairpin) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq", dev->data->port_id, peer_queue); @@ -510,7 +510,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, dev->data->port_id, cur_queue); return -rte_errno; } - if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + if (!txq_ctrl->is_hairpin) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d not a hairpin Txq", dev->data->port_id, cur_queue); @@ -570,7 +570,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, return -rte_errno; } rxq_ctrl = rxq->ctrl; - if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { + if (!rxq_ctrl->is_hairpin) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq", dev->data->port_id, cur_queue); @@ -644,7 +644,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, dev->data->port_id, cur_queue); return -rte_errno; } - if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + if (!txq_ctrl->is_hairpin) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d not a hairpin Txq", dev->data->port_id, cur_queue); @@ -683,7 +683,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, return -rte_errno; } rxq_ctrl = rxq->ctrl; - if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { + if (!rxq_ctrl->is_hairpin) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq", dev->data->port_id, cur_queue); @@ -751,7 +751,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port) txq_ctrl = mlx5_txq_get(dev, i); if (txq_ctrl == NULL) continue; - if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + if (!txq_ctrl->is_hairpin) { mlx5_txq_release(dev, i); continue; } @@ -791,7 +791,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port) txq_ctrl = mlx5_txq_get(dev, i); if (txq_ctrl == NULL) continue; - if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + if (!txq_ctrl->is_hairpin) { mlx5_txq_release(dev, i); continue; } @@ -886,7 +886,7 @@ mlx5_hairpin_unbind_single_port(struct rte_eth_dev *dev, uint16_t rx_port) txq_ctrl = mlx5_txq_get(dev, i); if (txq_ctrl == NULL) continue; - if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + if (!txq_ctrl->is_hairpin) { mlx5_txq_release(dev, i); continue; } @@ -1016,7 +1016,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, txq_ctrl = mlx5_txq_get(dev, i); if (!txq_ctrl) continue; - if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + if (!txq_ctrl->is_hairpin) { mlx5_txq_release(dev, i); continue; } @@ 
-1040,7 +1040,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, if (rxq == NULL) continue; rxq_ctrl = rxq->ctrl; - if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) + if (!rxq_ctrl->is_hairpin) continue; pp = rxq->hairpin_conf.peers[0].port; if (pp >= RTE_MAX_ETHPORTS) { @@ -1318,7 +1318,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) if (!txq_ctrl) continue; /* Only Tx implicit mode requires the default Tx flow. */ - if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN && + if (txq_ctrl->is_hairpin && txq_ctrl->hairpin_conf.tx_explicit == 0 && txq_ctrl->hairpin_conf.peers[0].port == priv->dev_data->port_id) { diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h index 0adc3f4839..89dac0c65a 100644 --- a/drivers/net/mlx5/mlx5_tx.h +++ b/drivers/net/mlx5/mlx5_tx.h @@ -169,17 +169,12 @@ struct mlx5_txq_data { /* Storage for queued packets, must be the last field. */ } __rte_cache_aligned; -enum mlx5_txq_type { - MLX5_TXQ_TYPE_STANDARD, /* Standard Tx queue. */ - MLX5_TXQ_TYPE_HAIRPIN, /* Hairpin Tx queue. */ -}; - /* TX queue control descriptor. */ struct mlx5_txq_ctrl { LIST_ENTRY(mlx5_txq_ctrl) next; /* Pointer to the next element. */ uint32_t refcnt; /* Reference counter. */ unsigned int socket; /* CPU socket ID for allocations. */ - enum mlx5_txq_type type; /* The txq ctrl type. */ + bool is_hairpin; /* Whether TxQ type is Hairpin. */ unsigned int max_inline_data; /* Max inline data. */ unsigned int max_tso_header; /* Max TSO header size. */ struct mlx5_txq_obj *obj; /* Verbs/DevX queue object. */ diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c index f128c3d1a5..0140f8b3b2 100644 --- a/drivers/net/mlx5/mlx5_txq.c +++ b/drivers/net/mlx5/mlx5_txq.c @@ -527,7 +527,7 @@ txq_uar_init_secondary(struct mlx5_txq_ctrl *txq_ctrl, int fd) return -rte_errno; } - if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD) + if (txq_ctrl->is_hairpin) return 0; MLX5_ASSERT(ppriv); /* @@ -570,7 +570,7 @@ txq_uar_uninit_secondary(struct mlx5_txq_ctrl *txq_ctrl) rte_errno = ENOMEM; } - if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD) + if (txq_ctrl->is_hairpin) return; addr = ppriv->uar_table[txq_ctrl->txq.idx].db; rte_mem_unmap(RTE_PTR_ALIGN_FLOOR(addr, page_size), page_size); @@ -631,7 +631,7 @@ mlx5_tx_uar_init_secondary(struct rte_eth_dev *dev, int fd) continue; txq = (*priv->txqs)[i]; txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq); - if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD) + if (txq_ctrl->is_hairpin) continue; MLX5_ASSERT(txq->idx == (uint16_t)i); ret = txq_uar_init_secondary(txq_ctrl, fd); @@ -1107,7 +1107,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, goto error; } __atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED); - tmpl->type = MLX5_TXQ_TYPE_STANDARD; + tmpl->is_hairpin = false; LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next); return tmpl; error: @@ -1150,7 +1150,7 @@ mlx5_txq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, tmpl->txq.port_id = dev->data->port_id; tmpl->txq.idx = idx; tmpl->hairpin_conf = *hairpin_conf; - tmpl->type = MLX5_TXQ_TYPE_HAIRPIN; + tmpl->is_hairpin = true; __atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED); LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next); return tmpl; @@ -1209,7 +1209,7 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx) mlx5_free(txq_ctrl->obj); txq_ctrl->obj = NULL; } - if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) { + if (!txq_ctrl->is_hairpin) { if (txq_ctrl->txq.fcqs) { mlx5_free(txq_ctrl->txq.fcqs); txq_ctrl->txq.fcqs = NULL; @@ -1218,7 
+1218,7 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx) dev->data->tx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED; } if (!__atomic_load_n(&txq_ctrl->refcnt, __ATOMIC_RELAXED)) { - if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) + if (!txq_ctrl->is_hairpin) mlx5_mr_btree_free(&txq_ctrl->txq.mr_ctrl.cache_bh); LIST_REMOVE(txq_ctrl, next); mlx5_free(txq_ctrl);

From patchwork Thu Feb 24 23:25:10 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 108330
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To:
CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko
Subject: [PATCH v3 5/6] net/mlx5: add external RxQ mapping API
Date: Fri, 25 Feb 2022 01:25:10 +0200
Message-ID: <20220224232511.3238707-6-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220224232511.3238707-1-michaelba@nvidia.com>
References: <20220223184835.3061161-1-michaelba@nvidia.com> <20220224232511.3238707-1-michaelba@nvidia.com>
MIME-Version: 1.0

An external queue is a queue that has been created and is managed outside the
PMD. The owner of such queues may still use the PMD to generate flow rules
that reference them. A queue created in hardware is identified by a 32-bit
ID, whereas queue indexes in the PMD and in the rte_flow API are 16 bits
wide. To let the PMD generate flow rules for external queues, the queue owner
must therefore provide a mapping between the 32-bit HW queue ID and a 16-bit
rte_flow queue index. This patch adds an API for inserting and removing such
mappings.
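
Concretely, from the definitions added below, the reserved external range
holds 1000 indexes at the top of the 16-bit space:

	MLX5_EXTERNAL_RX_QUEUE_ID_MIN = UINT16_MAX - 1000 + 1 = 64536
	MLX5_MAX_EXT_RX_QUEUES        = UINT16_MAX - 64536 + 1 = 1000

so rte_flow indexes 64536 through 65535 are reserved for external RxQs, and
anything below 64536 remains available for internal queues.
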
Signed-off-by: Michael Baum
Acked-by: Matan Azrad
--- drivers/net/mlx5/linux/mlx5_os.c | 17 +++++ drivers/net/mlx5/mlx5.c | 1 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_defs.h | 3 + drivers/net/mlx5/mlx5_ethdev.c | 16 ++++- drivers/net/mlx5/mlx5_rx.h | 6 ++ drivers/net/mlx5/mlx5_rxq.c | 117 +++++++++++++++++++++++++++++++ drivers/net/mlx5/rte_pmd_mlx5.h | 50 ++++++++++++- drivers/net/mlx5/version.map | 3 + 9 files changed, 210 insertions(+), 4 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 2e1606a733..a847ed13cc 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1158,6 +1158,22 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = ENOMEM; goto error; } + /* + * When user configures remote PD and CTX and device creates RxQ by + * DevX, external RxQ is both supported and requested.
+ */ + if (mlx5_imported_pd_and_ctx(sh->cdev) && mlx5_devx_obj_ops_en(sh)) { + priv->ext_rxqs = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE, + sizeof(struct mlx5_external_rxq) * + MLX5_MAX_EXT_RX_QUEUES, 0, + SOCKET_ID_ANY); + if (priv->ext_rxqs == NULL) { + DRV_LOG(ERR, "Fail to allocate external RxQ array."); + err = ENOMEM; + goto error; + } + DRV_LOG(DEBUG, "External RxQ is supported."); + } priv->sh = sh; priv->dev_port = spawn->phys_port; priv->pci_dev = spawn->pci_dev; @@ -1617,6 +1633,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, mlx5_list_destroy(priv->hrxqs); if (eth_dev && priv->flex_item_map) mlx5_flex_item_port_cleanup(eth_dev); + mlx5_free(priv->ext_rxqs); mlx5_free(priv); if (eth_dev != NULL) eth_dev->data->dev_private = NULL; diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 7611fdd62b..5ecca2dd1b 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1930,6 +1930,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) dev->data->port_id); if (priv->hrxqs) mlx5_list_destroy(priv->hrxqs); + mlx5_free(priv->ext_rxqs); /* * Free the shared context in last turn, because the cleanup * routines above may use some shared fields, like diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index bd69aa2334..0f825396a2 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1461,6 +1461,7 @@ struct mlx5_priv { /* RX/TX queues. */ unsigned int rxqs_n; /* RX queues array size. */ unsigned int txqs_n; /* TX queues array size. */ + struct mlx5_external_rxq *ext_rxqs; /* External RX queues array. */ struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */ struct mlx5_txq_data *(*txqs)[]; /* TX queues. */ struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */ diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index 2d48fde010..15728fb41f 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -175,6 +175,9 @@ /* Maximum number of indirect actions supported by rte_flow */ #define MLX5_MAX_INDIRECT_ACTIONS 3 +/* Maximum number of external Rx queues supported by rte_flow */ +#define MLX5_MAX_EXT_RX_QUEUES (UINT16_MAX - MLX5_EXTERNAL_RX_QUEUE_ID_MIN + 1) + /* * Linux definition of static_assert is found in /usr/include/assert.h. * Windows does not require a redefinition. diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index 406761ccf8..de0ba2b1ff 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -27,6 +27,7 @@ #include "mlx5_tx.h" #include "mlx5_autoconf.h" #include "mlx5_devx.h" +#include "rte_pmd_mlx5.h" /** * Get the interface index from device name. 
@@ -81,9 +82,10 @@ mlx5_dev_configure(struct rte_eth_dev *dev) rte_errno = EINVAL; return -rte_errno; } - priv->rss_conf.rss_key = - mlx5_realloc(priv->rss_conf.rss_key, MLX5_MEM_RTE, - MLX5_RSS_HASH_KEY_LEN, 0, SOCKET_ID_ANY); + priv->rss_conf.rss_key = mlx5_realloc(priv->rss_conf.rss_key, + MLX5_MEM_RTE, + MLX5_RSS_HASH_KEY_LEN, 0, + SOCKET_ID_ANY); if (!priv->rss_conf.rss_key) { DRV_LOG(ERR, "port %u cannot allocate RSS hash key memory (%u)", dev->data->port_id, rxqs_n); @@ -127,6 +129,14 @@ mlx5_dev_configure(struct rte_eth_dev *dev) rte_errno = EINVAL; return -rte_errno; } + if (priv->ext_rxqs && rxqs_n >= MLX5_EXTERNAL_RX_QUEUE_ID_MIN) { + DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u), " + "the maximal number of internal Rx queues is %u", + dev->data->port_id, rxqs_n, + MLX5_EXTERNAL_RX_QUEUE_ID_MIN - 1); + rte_errno = EINVAL; + return -rte_errno; + } if (rxqs_n != priv->rxqs_n) { DRV_LOG(INFO, "port %u Rx queues number update: %u -> %u", dev->data->port_id, priv->rxqs_n, rxqs_n); diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index fbc86dcef2..aba05dffa7 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -175,6 +175,12 @@ struct mlx5_rxq_priv { uint32_t hairpin_status; /* Hairpin binding status. */ }; +/* External RX queue descriptor. */ +struct mlx5_external_rxq { + uint32_t hw_id; /* Queue index in the Hardware. */ + uint32_t refcnt; /* Reference counter. */ +}; + /* mlx5_rxq.c */ extern uint8_t rss_hash_default_key[]; diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index e96584d55d..889428f48a 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -30,6 +30,7 @@ #include "mlx5_utils.h" #include "mlx5_autoconf.h" #include "mlx5_devx.h" +#include "rte_pmd_mlx5.h" /* Default RSS hash key also used for ConnectX-3. */ @@ -3008,3 +3009,119 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev) data->rt_timestamp = sh->dev_cap.rt_timestamp; } } + +/** + * Validate the given external RxQ rte_flow index, and get a pointer to the + * concurrent external RxQ object to map/unmap. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] dpdk_idx + * Queue index in rte_flow. + * + * @return + * Pointer to concurrent external RxQ on success, + * NULL otherwise and rte_errno is set. + */ +static struct mlx5_external_rxq * +mlx5_external_rx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx) +{ + struct rte_eth_dev *dev; + struct mlx5_priv *priv; + + if (dpdk_idx < MLX5_EXTERNAL_RX_QUEUE_ID_MIN) { + DRV_LOG(ERR, "Queue index %u should be in range: [%u, %u].", + dpdk_idx, MLX5_EXTERNAL_RX_QUEUE_ID_MIN, UINT16_MAX); + rte_errno = EINVAL; + return NULL; + } + if (rte_eth_dev_is_valid_port(port_id) < 0) { + DRV_LOG(ERR, "There is no Ethernet device for port %u.", + port_id); + rte_errno = ENODEV; + return NULL; + } + dev = &rte_eth_devices[port_id]; + priv = dev->data->dev_private; + if (!mlx5_imported_pd_and_ctx(priv->sh->cdev)) { + DRV_LOG(ERR, "Port %u " + "external RxQ isn't supported on local PD and CTX.", + port_id); + rte_errno = ENOTSUP; + return NULL; + } + if (!mlx5_devx_obj_ops_en(priv->sh)) { + DRV_LOG(ERR, + "Port %u external RxQ isn't supported by Verbs API.", + port_id); + rte_errno = ENOTSUP; + return NULL; + } + /* + * When user configures remote PD and CTX and device creates RxQ by + * DevX, external RxQs array is allocated.
+ */ + MLX5_ASSERT(priv->ext_rxqs != NULL); + return &priv->ext_rxqs[dpdk_idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN]; +} + +int +rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx, + uint32_t hw_idx) +{ + struct mlx5_external_rxq *ext_rxq; + uint32_t unmapped = 0; + + ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx); + if (ext_rxq == NULL) + return -rte_errno; + if (!__atomic_compare_exchange_n(&ext_rxq->refcnt, &unmapped, 1, false, + __ATOMIC_RELAXED, __ATOMIC_RELAXED)) { + if (ext_rxq->hw_id != hw_idx) { + DRV_LOG(ERR, "Port %u external RxQ index %u " + "is already mapped to HW index (requested is " + "%u, existing is %u).", + port_id, dpdk_idx, hw_idx, ext_rxq->hw_id); + rte_errno = EEXIST; + return -rte_errno; + } + DRV_LOG(WARNING, "Port %u external RxQ index %u " + "is already mapped to the requested HW index (%u)", + port_id, dpdk_idx, hw_idx); + + } else { + ext_rxq->hw_id = hw_idx; + DRV_LOG(DEBUG, "Port %u external RxQ index %u " + "is successfully mapped to the requested HW index (%u)", + port_id, dpdk_idx, hw_idx); + } + return 0; +} + +int +rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx) +{ + struct mlx5_external_rxq *ext_rxq; + uint32_t mapped = 1; + + ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx); + if (ext_rxq == NULL) + return -rte_errno; + if (ext_rxq->refcnt > 1) { + DRV_LOG(ERR, "Port %u external RxQ index %u still referenced.", + port_id, dpdk_idx); + rte_errno = EINVAL; + return -rte_errno; + } + if (!__atomic_compare_exchange_n(&ext_rxq->refcnt, &mapped, 0, false, + __ATOMIC_RELAXED, __ATOMIC_RELAXED)) { + DRV_LOG(ERR, "Port %u external RxQ index %u doesn't exist.", + port_id, dpdk_idx); + rte_errno = EINVAL; + return -rte_errno; + } + DRV_LOG(DEBUG, + "Port %u external RxQ index %u is successfully unmapped.", + port_id, dpdk_idx); + return 0; +} diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h index fc37a386db..6e7907ee59 100644 --- a/drivers/net/mlx5/rte_pmd_mlx5.h +++ b/drivers/net/mlx5/rte_pmd_mlx5.h @@ -61,8 +61,56 @@ int rte_pmd_mlx5_get_dyn_flag_names(char *names[], unsigned int n); __rte_experimental int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains); +/** + * External Rx queue rte_flow index minimal value. + */ +#define MLX5_EXTERNAL_RX_QUEUE_ID_MIN (UINT16_MAX - 1000 + 1) + +/** + * Update mapping between rte_flow queue index (16 bits) and HW queue index (32 + * bits) for RxQs which are created outside the PMD. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] dpdk_idx + * Queue index in rte_flow. + * @param[in] hw_idx + * Queue index in hardware. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + * Possible values for rte_errno: + * - EEXIST - a mapping with the same rte_flow index already exists. + * - EINVAL - invalid rte_flow index, out of range. + * - ENODEV - there is no Ethernet device for this port id. + * - ENOTSUP - the port doesn't support external RxQ. + */ +__rte_experimental +int rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx, + uint32_t hw_idx); + +/** + * Remove mapping between rte_flow queue index (16 bits) and HW queue index (32 + * bits) for RxQs which are created outside the PMD. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] dpdk_idx + * Queue index in rte_flow. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set.
+ * Possible values for rte_errno: + * - EINVAL - invalid index, out of range, still referenced or doesn't exist. + * - ENODEV - there is no Ethernet device for this port id. + * - ENOTSUP - the port doesn't support external RxQ. + */ +__rte_experimental +int rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id, + uint16_t dpdk_idx); + #ifdef __cplusplus } #endif -#endif +#endif /* RTE_PMD_PRIVATE_MLX5_H_ */ diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map index 0af7a12488..79cb79acc6 100644 --- a/drivers/net/mlx5/version.map +++ b/drivers/net/mlx5/version.map @@ -9,4 +9,7 @@ EXPERIMENTAL { rte_pmd_mlx5_get_dyn_flag_names; # added in 20.11 rte_pmd_mlx5_sync_flow; + # added in 22.03 + rte_pmd_mlx5_external_rx_queue_id_map; + rte_pmd_mlx5_external_rx_queue_id_unmap; };
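
For reference, a hedged usage sketch of the two new calls. The variables
port_id, the HW queue ID 0x1234, and the rte_flow index 65000 are arbitrary
example values assumed by this fragment (65000 lies inside the reserved
external range, which starts at 64536):

	#include <rte_errno.h>
	#include <rte_pmd_mlx5.h>

	/* Map an externally created HW queue to an rte_flow index. */
	if (rte_pmd_mlx5_external_rx_queue_id_map(port_id, 65000, 0x1234) < 0)
		return -rte_errno; /* EEXIST/EINVAL/ENODEV/ENOTSUP. */

	/* ... create flow rules whose queue/RSS action uses index 65000 ... */

	/* Remove the mapping once no flow rule references it anymore. */
	if (rte_pmd_mlx5_external_rx_queue_id_unmap(port_id, 65000) < 0)
		return -rte_errno;
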
From patchwork Thu Feb 24 23:25:11 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 108332
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To:
CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko
Subject: [PATCH v3 6/6] net/mlx5: support queue/RSS action for external RxQ
Date: Fri, 25 Feb 2022 01:25:11 +0200
Message-ID: <20220224232511.3238707-7-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220224232511.3238707-1-michaelba@nvidia.com>
References: <20220223184835.3061161-1-michaelba@nvidia.com> <20220224232511.3238707-1-michaelba@nvidia.com>
MIME-Version: 1.0

Add support for the queue and RSS flow actions on external RxQs. When the
indirection table is created, the queue index is taken from the mapping
array. This feature supports neither LRO nor hairpin.

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
--- doc/guides/nics/mlx5.rst | 1 + doc/guides/rel_notes/release_22_03.rst | 1 + drivers/net/mlx5/mlx5.c | 4 + drivers/net/mlx5/mlx5_devx.c | 30 +++++-- drivers/net/mlx5/mlx5_flow.c | 29 +++++-- drivers/net/mlx5/mlx5_rx.h | 30 +++++++ drivers/net/mlx5/mlx5_rxq.c | 116 +++++++++++++++++++++++-- 7 files changed, 187 insertions(+), 24 deletions(-) diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index 7b04e9bac5..a5b3298f0c 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -38,6 +38,7 @@ Features - Multiple TX and RX queues. - Shared Rx queue. - Rx queue delay drop. +- Support steering for external Rx queue created outside the PMD. - Support for scattered TX frames. - Advanced support for scattered Rx frames with tunable buffer attributes. - IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
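
Combined with the mapping API from the previous patch, a flow rule can now
steer traffic to an external queue. A hedged sketch of such a rule (port_id
is assumed, and index 65000 is an arbitrary example previously mapped with
rte_pmd_mlx5_external_rx_queue_id_map(); error handling elided):

	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* The queue action takes the 16-bit rte_flow index of the
	 * external RxQ, not its 32-bit HW ID.
	 */
	struct rte_flow_action_queue queue_conf = { .index = 65000 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
						actions, &error);
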
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst index e66548558c..a29e96c37c 100644 --- a/doc/guides/rel_notes/release_22_03.rst +++ b/doc/guides/rel_notes/release_22_03.rst @@ -164,6 +164,7 @@ New Features * Support ConnectX-7 capability to schedule traffic sending on timestamp * Added WQE based hardware steering support with ``rte_flow_async`` API. + * Support steering for external Rx queue created outside the PMD. * **Updated Wangxun ngbe driver.** diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 5ecca2dd1b..74841caaf9 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1912,6 +1912,10 @@ mlx5_dev_close(struct rte_eth_dev *dev) if (ret) DRV_LOG(WARNING, "port %u some Rx queue objects still remain", dev->data->port_id); + ret = mlx5_ext_rxq_verify(dev); + if (ret) + DRV_LOG(WARNING, "Port %u some external RxQ still remain.", + dev->data->port_id); ret = mlx5_rxq_verify(dev); if (ret) DRV_LOG(WARNING, "port %u some Rx queues still remain", diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index bcd2358165..af106bda50 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -580,13 +580,21 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev, return rqt_attr; } for (i = 0; i != queues_n; ++i) { - struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]); + if (mlx5_is_external_rxq(dev, queues[i])) { + struct mlx5_external_rxq *ext_rxq = + mlx5_ext_rxq_get(dev, queues[i]); - MLX5_ASSERT(rxq != NULL); - if (rxq->ctrl->is_hairpin) - rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id; - else - rqt_attr->rq_list[i] = rxq->devx_rq.rq->id; + rqt_attr->rq_list[i] = ext_rxq->hw_id; + } else { + struct mlx5_rxq_priv *rxq = + mlx5_rxq_get(dev, queues[i]); + + MLX5_ASSERT(rxq != NULL); + if (rxq->ctrl->is_hairpin) + rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id; + else + rqt_attr->rq_list[i] = rxq->devx_rq.rq->id; + } } MLX5_ASSERT(i > 0); for (j = 0; i != rqt_n; ++j, ++i) @@ -711,7 +719,13 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key, uint32_t i; /* NULL queues designate drop queue. */ - if (ind_tbl->queues != NULL) { + if (ind_tbl->queues == NULL) { + is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin; + } else if (mlx5_is_external_rxq(dev, ind_tbl->queues[0])) { + /* External RxQ supports neither Hairpin nor LRO. */ + is_hairpin = false; + lro = false; + } else { is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]); /* Enable TIR LRO only if all the queues were configured for. 
*/ for (i = 0; i < ind_tbl->queues_n; ++i) { @@ -723,8 +737,6 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key, break; } } - } else { - is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin; } memset(tir_attr, 0, sizeof(*tir_attr)); tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 09701a73c1..3875160708 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1743,6 +1743,12 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action, RTE_FLOW_ERROR_TYPE_ACTION, NULL, "can't have 2 fate actions in" " same flow"); + if (attr->egress) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL, + "queue action not supported for egress."); + if (mlx5_is_external_rxq(dev, queue->index)) + return 0; if (!priv->rxqs_n) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF, @@ -1757,11 +1763,6 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action, RTE_FLOW_ERROR_TYPE_ACTION_CONF, &queue->index, "queue is not configured"); - if (attr->egress) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL, - "queue action not supported for " - "egress"); return 0; } @@ -1776,7 +1777,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action, * Size of the @p queues array. * @param[out] error * On error, filled with a textual error description. - * @param[out] queue + * @param[out] queue_idx * On error, filled with an offending queue index in @p queues array. * * @return @@ -1789,17 +1790,27 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev, { const struct mlx5_priv *priv = dev->data->dev_private; bool is_hairpin = false; + bool is_ext_rss = false; uint32_t i; for (i = 0; i != queues_n; ++i) { - struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, - queues[i]); + struct mlx5_rxq_ctrl *rxq_ctrl; + if (mlx5_is_external_rxq(dev, queues[i])) { + is_ext_rss = true; + continue; + } + if (is_ext_rss) { + *error = "Combining external and regular RSS queues is not supported"; + *queue_idx = i; + return -ENOTSUP; + } if (queues[i] >= priv->rxqs_n) { *error = "queue index out of range"; *queue_idx = i; return -EINVAL; } + rxq_ctrl = mlx5_rxq_ctrl_get(dev, queues[i]); if (rxq_ctrl == NULL) { *error = "queue is not configured"; *queue_idx = i; @@ -1894,7 +1905,7 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev, RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL, "L4 partial RSS requested but L4 RSS" " type not specified"); - if (!priv->rxqs_n) + if (!priv->rxqs_n && priv->ext_rxqs == NULL) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL, "No Rx queues configured"); diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index aba05dffa7..acebe3348c 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -18,6 +18,7 @@ #include "mlx5.h" #include "mlx5_autoconf.h" +#include "rte_pmd_mlx5.h" /* Support tunnel matching.
*/ #define MLX5_FLOW_TUNNEL 10 @@ -217,8 +218,14 @@ uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx); struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx); struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx); struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_external_rxq *mlx5_ext_rxq_ref(struct rte_eth_dev *dev, + uint16_t idx); +uint32_t mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_external_rxq *mlx5_ext_rxq_get(struct rte_eth_dev *dev, + uint16_t idx); int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx); int mlx5_rxq_verify(struct rte_eth_dev *dev); +int mlx5_ext_rxq_verify(struct rte_eth_dev *dev); int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl); int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev); struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev, @@ -643,4 +650,27 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev) return n == n_ibv; } +/** + * Check whether given RxQ is external. + * + * @param dev + * Pointer to Ethernet device. + * @param queue_idx + * Rx queue index. + * + * @return + * True if is external RxQ, otherwise false. + */ +static __rte_always_inline bool +mlx5_is_external_rxq(struct rte_eth_dev *dev, uint16_t queue_idx) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_external_rxq *rxq; + + if (!priv->ext_rxqs || queue_idx < MLX5_EXTERNAL_RX_QUEUE_ID_MIN) + return false; + rxq = &priv->ext_rxqs[queue_idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN]; + return !!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED); +} + #endif /* RTE_PMD_MLX5_RX_H_ */ diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 889428f48a..ff293d9d56 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -2084,6 +2084,65 @@ mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx) return rxq == NULL ? NULL : &rxq->ctrl->rxq; } +/** + * Increase an external Rx queue reference count. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * External RX queue index. + * + * @return + * A pointer to the queue if it exists, NULL otherwise. + */ +struct mlx5_external_rxq * +mlx5_ext_rxq_ref(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx); + + __atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED); + return rxq; +} + +/** + * Decrease an external Rx queue reference count. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * External RX queue index. + * + * @return + * Updated reference count. + */ +uint32_t +mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx); + + return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED); +} + +/** + * Get an external Rx queue. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * External Rx queue index. + * + * @return + * A pointer to the queue if it exists, NULL otherwise. + */ +struct mlx5_external_rxq * +mlx5_ext_rxq_get(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + MLX5_ASSERT(mlx5_is_external_rxq(dev, idx)); + return &priv->ext_rxqs[idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN]; +} + /** * Release a Rx queue. * @@ -2167,6 +2226,37 @@ mlx5_rxq_verify(struct rte_eth_dev *dev) return ret; } +/** + * Verify the external Rx Queue list is empty. + * + * @param dev + * Pointer to Ethernet device. + * + * @return + * The number of object not released. 
+ */ +int +mlx5_ext_rxq_verify(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_external_rxq *rxq; + uint32_t i; + int ret = 0; + + if (priv->ext_rxqs == NULL) + return 0; + + for (i = MLX5_EXTERNAL_RX_QUEUE_ID_MIN; i <= UINT16_MAX ; ++i) { + rxq = mlx5_ext_rxq_get(dev, i); + if (rxq->refcnt < 2) + continue; + DRV_LOG(DEBUG, "Port %u external RxQ %u still referenced.", + dev->data->port_id, i); + ++ret; + } + return ret; +} + /** * Check whether RxQ type is Hairpin. * @@ -2182,8 +2272,11 @@ bool mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx); + struct mlx5_rxq_ctrl *rxq_ctrl; + if (mlx5_is_external_rxq(dev, idx)) + return false; + rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx); return (idx < priv->rxqs_n && rxq_ctrl != NULL && rxq_ctrl->is_hairpin); } @@ -2358,9 +2451,16 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev, if (ref_qs) for (i = 0; i != queues_n; ++i) { - if (mlx5_rxq_ref(dev, queues[i]) == NULL) { - ret = -rte_errno; - goto error; + if (mlx5_is_external_rxq(dev, queues[i])) { + if (mlx5_ext_rxq_ref(dev, queues[i]) == NULL) { + ret = -rte_errno; + goto error; + } + } else { + if (mlx5_rxq_ref(dev, queues[i]) == NULL) { + ret = -rte_errno; + goto error; + } } } ret = priv->obj_ops.ind_table_new(dev, n, ind_tbl); @@ -2371,8 +2471,12 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev, error: if (ref_qs) { err = rte_errno; - for (j = 0; j < i; j++) - mlx5_rxq_deref(dev, queues[j]); + for (j = 0; j < i; j++) { + if (mlx5_is_external_rxq(dev, queues[j])) + mlx5_ext_rxq_deref(dev, queues[j]); + else + mlx5_rxq_deref(dev, queues[j]); + } rte_errno = err; } DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",