From patchwork Mon Jun 6 11:46:34 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112375
X-Patchwork-Delegate: maxime.coquelin@redhat.com
Subject: [PATCH v1 01/17] vdpa/mlx5: fix usage of capability for max number of virtqs
From: Li Zhang
To: Maxime Coquelin
Date: Mon, 6 Jun 2022 14:46:34 +0300
Message-ID: <20220606114650.209612-2-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
List-Id: DPDK patches and discussions

The driver wrongly took the capability value as the number of virtq
pairs instead of the number of virtqs. Adjust all usages of it to be
the number of virtqs.

Fixes: c2eb33a ("vdpa/mlx5: manage virtqs by array")
Cc: stable@dpdk.org

Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 12 ++++++------
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c |  6 +++---
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 76fa5d4299..ee71339b78 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -84,7 +84,7 @@ mlx5_vdpa_get_queue_num(struct rte_vdpa_device *vdev, uint32_t *queue_num)
 		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
 		return -1;
 	}
-	*queue_num = priv->caps.max_num_virtio_queues;
+	*queue_num = priv->caps.max_num_virtio_queues / 2;
 	return 0;
 }
 
@@ -141,7 +141,7 @@ mlx5_vdpa_set_vring_state(int vid, int vring, int state)
 		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
 		return -EINVAL;
 	}
-	if (vring >= (int)priv->caps.max_num_virtio_queues * 2) {
+	if (vring >= (int)priv->caps.max_num_virtio_queues) {
 		DRV_LOG(ERR, "Too big vring id: %d.", vring);
 		return -E2BIG;
 	}
@@ -388,7 +388,7 @@ mlx5_vdpa_get_stats(struct rte_vdpa_device *vdev, int qid,
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (qid >= (int)priv->caps.max_num_virtio_queues * 2) {
+	if (qid >= (int)priv->caps.max_num_virtio_queues) {
 		DRV_LOG(ERR, "Too big vring id: %d for device %s.",
 			qid, vdev->device->name);
 		return -E2BIG;
@@ -411,7 +411,7 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid)
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (qid >= (int)priv->caps.max_num_virtio_queues * 2) {
+	if (qid >= (int)priv->caps.max_num_virtio_queues) {
 		DRV_LOG(ERR, "Too big vring id: %d for device %s.",
 			qid, vdev->device->name);
 		return -E2BIG;
@@ -624,7 +624,7 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 		DRV_LOG(DEBUG, "No capability to support virtq statistics.");
 	priv = rte_zmalloc("mlx5 vDPA device private",
 			   sizeof(*priv) + sizeof(struct mlx5_vdpa_virtq) *
-			   attr->vdpa.max_num_virtio_queues * 2,
+			   attr->vdpa.max_num_virtio_queues,
 			   RTE_CACHE_LINE_SIZE);
 	if (!priv) {
 		DRV_LOG(ERR, "Failed to allocate private memory.");
@@ -685,7 +685,7 @@ mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv)
 	uint32_t i;
 
 	mlx5_vdpa_dev_cache_clean(priv);
-	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
+	for (i = 0; i < priv->caps.max_num_virtio_queues; i++) {
 		if (!priv->virtqs[i].counters)
 			continue;
 		claim_zero(mlx5_devx_cmd_destroy(priv->virtqs[i].counters));
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index e025be47d2..c258eb3024 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -72,7 +72,7 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 {
 	unsigned int i, j;
 
-	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
+	for (i = 0; i < priv->caps.max_num_virtio_queues; i++) {
 		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
 
 		for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
@@ -492,9 +492,9 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 		DRV_LOG(INFO, "TSO is enabled without CSUM, force CSUM.");
 		priv->features |= (1ULL << VIRTIO_NET_F_CSUM);
 	}
-	if (nr_vring > priv->caps.max_num_virtio_queues * 2) {
+	if (nr_vring > priv->caps.max_num_virtio_queues) {
 		DRV_LOG(ERR, "Do not support more than %d virtqs(%d).",
-			(int)priv->caps.max_num_virtio_queues * 2,
+			(int)priv->caps.max_num_virtio_queues,
 			(int)nr_vring);
 		return -1;
 	}

From patchwork Mon Jun 6 11:46:35 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id:
112376
X-Patchwork-Delegate: maxime.coquelin@redhat.com
Subject: [PATCH v1 02/17] eal: add device removal in rte cleanup
From: Li Zhang
To: Bruce Richardson, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam
CC: Yajun Wu
Date: Mon, 6 Jun 2022 14:46:35 +0300
Message-ID: <20220606114650.209612-3-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
From: Yajun Wu

Add device removal to rte_eal_cleanup. This is the last chance for
device removal to be invoked, as a final sanity cleanup. Loop over the
vdev bus first, and then over all buses for every remaining device,
calling rte_dev_remove.
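The removal loop above iterates with a "safe" foreach that fetches the next device before the loop body removes the current one, so removal cannot invalidate the iterator. A minimal self-contained sketch of that safe-iteration pattern (a plain C linked list; all names here are illustrative stand-ins, not DPDK's APIs):

```c
#include <stdlib.h>

/* Illustrative stand-in for a bus's device list. */
struct node {
	int id;
	struct node *next;
};

/* Safe traversal: capture the successor in 'tmp' before the loop body
 * may free 'cur'. Same shape as RTE_DEV_FOREACH_SAFE in the patch. */
#define LIST_FOREACH_SAFE(cur, head, tmp) \
	for ((cur) = (head); (cur) && (((tmp) = (cur)->next), 1); (cur) = (tmp))

/* Build a list of n nodes with ids 0..n-1. */
static struct node *
make_list(int n)
{
	struct node *head = NULL;

	while (n-- > 0) {
		struct node *nd = malloc(sizeof(*nd));

		nd->id = n;
		nd->next = head;
		head = nd;
	}
	return head;
}

/* Free every node while iterating; returns how many were removed.
 * Without the _SAFE form, cur->next would be read after free(cur). */
static int
remove_all(struct node *head)
{
	struct node *cur, *tmp;
	int removed = 0;

	LIST_FOREACH_SAFE(cur, head, tmp) {
		free(cur);
		removed++;
	}
	return removed;
}
```

The same reasoning is why the patch introduces RTE_DEV_FOREACH_SAFE instead of reusing the plain iterator: rte_dev_remove frees the current device, so the next pointer must already be in hand.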
Cc: stable@dpdk.org

Signed-off-by: Yajun Wu
---
 lib/eal/freebsd/eal.c     | 33 +++++++++++++++++++++++++++++++++
 lib/eal/include/rte_dev.h |  6 ++++++
 lib/eal/linux/eal.c       | 33 +++++++++++++++++++++++++++++++++
 lib/eal/windows/eal.c     | 33 +++++++++++++++++++++++++++++++++
 4 files changed, 105 insertions(+)

diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index a6b20960f2..5ffd9146b6 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -886,11 +886,44 @@ rte_eal_init(int argc, char **argv)
 	return fctret;
 }
 
+static int
+bus_match_all(const struct rte_bus *bus, const void *data)
+{
+	RTE_SET_USED(bus);
+	RTE_SET_USED(data);
+	return 0;
+}
+
+static void
+remove_all_device(void)
+{
+	struct rte_bus *start = NULL, *next;
+	struct rte_dev_iterator dev_iter = {0};
+	struct rte_device *dev = NULL;
+	struct rte_device *tdev = NULL;
+	char devstr[128];
+
+	RTE_DEV_FOREACH_SAFE(dev, "bus=vdev", &dev_iter, tdev) {
+		(void)rte_dev_remove(dev);
+	}
+	while ((next = rte_bus_find(start, bus_match_all, NULL)) != NULL) {
+		start = next;
+		/* Skip buses that don't have iterate method */
+		if (!next->dev_iterate || !next->name)
+			continue;
+		snprintf(devstr, sizeof(devstr), "bus=%s", next->name);
+		RTE_DEV_FOREACH_SAFE(dev, devstr, &dev_iter, tdev) {
+			(void)rte_dev_remove(dev);
+		}
+	};
+}
+
 int
 rte_eal_cleanup(void)
 {
 	struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
+	remove_all_device();
 	rte_service_finalize();
 	rte_mp_channel_cleanup();
 	/* after this point, any DPDK pointers will become dangling */
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index e6ff1218f9..382d548ea3 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -492,6 +492,12 @@ int
 rte_dev_dma_unmap(struct rte_device *dev, void *addr, uint64_t iova,
 		  size_t len);
 
+#define RTE_DEV_FOREACH_SAFE(dev, devstr, it, tdev) \
+	for (rte_dev_iterator_init(it, devstr), \
+	     (dev) = rte_dev_iterator_next(it); \
+	     (dev) && ((tdev) = rte_dev_iterator_next(it), 1); \
+	     (dev) = (tdev))
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 1ef263434a..30b295916e 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1248,6 +1248,38 @@ mark_freeable(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
 	return 0;
 }
 
+static int
+bus_match_all(const struct rte_bus *bus, const void *data)
+{
+	RTE_SET_USED(bus);
+	RTE_SET_USED(data);
+	return 0;
+}
+
+static void
+remove_all_device(void)
+{
+	struct rte_bus *start = NULL, *next;
+	struct rte_dev_iterator dev_iter = {0};
+	struct rte_device *dev = NULL;
+	struct rte_device *tdev = NULL;
+	char devstr[128];
+
+	RTE_DEV_FOREACH_SAFE(dev, "bus=vdev", &dev_iter, tdev) {
+		(void)rte_dev_remove(dev);
+	}
+	while ((next = rte_bus_find(start, bus_match_all, NULL)) != NULL) {
+		start = next;
+		/* Skip buses that don't have iterate method */
+		if (!next->dev_iterate || !next->name)
+			continue;
+		snprintf(devstr, sizeof(devstr), "bus=%s", next->name);
+		RTE_DEV_FOREACH_SAFE(dev, devstr, &dev_iter, tdev) {
+			(void)rte_dev_remove(dev);
+		}
+	};
+}
+
 int
 rte_eal_cleanup(void)
 {
@@ -1257,6 +1289,7 @@ rte_eal_cleanup(void)
 	struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
+	remove_all_device();
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
 			internal_conf->hugepage_file.unlink_existing)
 		rte_memseg_walk(mark_freeable, NULL);
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 122de2a319..3d7d411293 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -254,12 +254,45 @@ __rte_trace_point_register(rte_trace_point_t *trace, const char *name,
 	return -ENOTSUP;
 }
 
+static int
+bus_match_all(const struct rte_bus *bus, const void *data)
+{
+	RTE_SET_USED(bus);
+	RTE_SET_USED(data);
+	return 0;
+}
+
+static void
+remove_all_device(void)
+{
+	struct rte_bus *start = NULL, *next;
+	struct rte_dev_iterator dev_iter = {0};
+	struct rte_device *dev = NULL;
+	struct rte_device *tdev = NULL;
+	char devstr[128];
+
+	RTE_DEV_FOREACH_SAFE(dev, "bus=vdev", &dev_iter, tdev) {
+		(void)rte_dev_remove(dev);
+	}
+	while ((next = rte_bus_find(start, bus_match_all, NULL)) != NULL) {
+		start = next;
+		/* Skip buses that don't have iterate method */
+		if (!next->dev_iterate || !next->name)
+			continue;
+		snprintf(devstr, sizeof(devstr), "bus=%s", next->name);
+		RTE_DEV_FOREACH_SAFE(dev, devstr, &dev_iter, tdev) {
+			(void)rte_dev_remove(dev);
+		}
+	};
+}
+
 int
 rte_eal_cleanup(void)
 {
 	struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
+	remove_all_device();
 	eal_intr_thread_cancel();
 	eal_mem_virt2iova_cleanup();
 	/* after this point, any DPDK pointers will become dangling */

From patchwork Mon Jun 6 11:46:36 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112377
X-Patchwork-Delegate: maxime.coquelin@redhat.com
Subject: [PATCH v1 03/17] examples/vdpa: fix devices cleanup
From: Li Zhang
To: Maxime Coquelin, Chenbo Xia, Chengchang Tang
CC: Yajun Wu
Date: Mon, 6 Jun 2022 14:46:36 +0300
Message-ID: <20220606114650.209612-4-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
From: Yajun Wu

Move rte_eal_cleanup into vdpa_sample_quit, which handles all the
example app's quit paths. Otherwise rte_eal_cleanup is not called when
a signal such as SIGINT (Ctrl+C) is received.

Fixes: 10aa3757 ("examples: add eal cleanup to examples")
Cc: stable@dpdk.org

Signed-off-by: Yajun Wu
---
 examples/vdpa/main.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
index 7e11ef4e26..62e32b633d 100644
--- a/examples/vdpa/main.c
+++ b/examples/vdpa/main.c
@@ -286,6 +286,8 @@ vdpa_sample_quit(void)
 		if (vports[i].ifname[0] != '\0')
 			close_vdpa(&vports[i]);
 	}
+	/* clean up the EAL */
+	rte_eal_cleanup();
 }
 
 static void
@@ -632,8 +634,5 @@ main(int argc, char *argv[])
 		vdpa_sample_quit();
 	}
 
-	/* clean up the EAL */
-	rte_eal_cleanup();
-
 	return 0;
 }

From patchwork Mon Jun 6 11:46:37 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112378
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
CC: Yajun Wu
Subject: [PATCH v1 04/17] vdpa/mlx5: support pre create virtq resource
Date: Mon, 6 Jun 2022 14:46:37 +0300
Message-ID: <20220606114650.209612-5-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
From: Yajun Wu

The motivation of this change is to reduce vDPA device queue creation
time by creating some queue resources at vDPA device probe stage.

In a VM live migration scenario, this saves about 0.8 ms per queue
creation and thus reduces the LM network downtime.

To create queue resources (umem/counter) in advance, the driver needs to
know the virtio queue depth and the maximum number of queues the VM will
use. Introduce two new devargs: queues (max queue pair number) and
queue_size (queue depth). Both arguments must be provided; if only one
of them is given, it is ignored and no pre-creation is done.

The queues and queue_size values must also match the vhost configuration
the driver receives later. Otherwise the pre-created resources are either
wasted or insufficient, or they must be destroyed and recreated (in case
of a queue_size mismatch).

Pre-created umem/counter resources are kept alive until vDPA device
removal.
Signed-off-by: Yajun Wu
---
 doc/guides/vdpadevs/mlx5.rst  | 14 +++++++
 drivers/vdpa/mlx5/mlx5_vdpa.c | 75 ++++++++++++++++++++++++++++++++++-
 drivers/vdpa/mlx5/mlx5_vdpa.h |  2 +
 3 files changed, 89 insertions(+), 2 deletions(-)

diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst
index 3ded142311..0ad77bf535 100644
--- a/doc/guides/vdpadevs/mlx5.rst
+++ b/doc/guides/vdpadevs/mlx5.rst
@@ -101,6 +101,20 @@ for an additional list of options shared with other mlx5 drivers.
 
   - 0, HW default.
 
+- ``queue_size`` parameter [int]
+
+  - 1 - 1024, Virtio queue depth for pre-creating queue resources to speed up
+    first-time queue creation. Set it together with the queues devarg.
+
+  - 0, default value, no pre-created virtq resources.
+
+- ``queues`` parameter [int]
+
+  - 1 - 128, max number of virtio queue pairs (including 1 Rx queue and 1 Tx
+    queue) for pre-creating queue resources to speed up first-time queue
+    creation. Set it together with the queue_size devarg.
+
+  - 0, default value, no pre-created virtq resources.
 
 Error handling
 ^^^^^^^^^^^^^^
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index ee71339b78..faf833ee2f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -244,7 +244,9 @@ mlx5_vdpa_mtu_set(struct mlx5_vdpa_priv *priv)
 static void
 mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv)
 {
-	mlx5_vdpa_virtqs_cleanup(priv);
+	/* Clean pre-created resource in dev removal only. */
+	if (!priv->queues)
+		mlx5_vdpa_virtqs_cleanup(priv);
 	mlx5_vdpa_mem_dereg(priv);
 }
 
@@ -494,6 +496,12 @@ mlx5_vdpa_args_check_handler(const char *key, const char *val, void *opaque)
 		priv->hw_max_latency_us = (uint32_t)tmp;
 	} else if (strcmp(key, "hw_max_pending_comp") == 0) {
 		priv->hw_max_pending_comp = (uint32_t)tmp;
+	} else if (strcmp(key, "queue_size") == 0) {
+		priv->queue_size = (uint16_t)tmp;
+	} else if (strcmp(key, "queues") == 0) {
+		priv->queues = (uint16_t)tmp;
+	} else {
+		DRV_LOG(WARNING, "Invalid key %s.", key);
 	}
 	return 0;
 }
@@ -524,9 +532,68 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 	if (!priv->event_us &&
 	    priv->event_mode == MLX5_VDPA_EVENT_MODE_DYNAMIC_TIMER)
 		priv->event_us = MLX5_VDPA_DEFAULT_TIMER_STEP_US;
+	if ((priv->queue_size && !priv->queues) ||
+	    (!priv->queue_size && priv->queues)) {
+		priv->queue_size = 0;
+		priv->queues = 0;
+		DRV_LOG(WARNING, "Please provide both queue_size and queues.");
+	}
 	DRV_LOG(DEBUG, "event mode is %d.", priv->event_mode);
 	DRV_LOG(DEBUG, "event_us is %u us.", priv->event_us);
 	DRV_LOG(DEBUG, "no traffic max is %u.", priv->no_traffic_max);
+	DRV_LOG(DEBUG, "queues is %u, queue_size is %u.", priv->queues,
+		priv->queue_size);
+}
+
+static int
+mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
+{
+	uint32_t index;
+	uint32_t i;
+
+	if (!priv->queues)
+		return 0;
+	for (index = 0; index < (priv->queues * 2); ++index) {
+		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
+
+		if (priv->caps.queue_counters_valid) {
+			if (!virtq->counters)
+				virtq->counters =
+				mlx5_devx_cmd_create_virtio_q_counters
+						(priv->cdev->ctx);
+			if (!virtq->counters) {
+				DRV_LOG(ERR, "Failed to create virtq counters for virtq"
+					" %d.", index);
+				return -1;
+			}
+		}
+		for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
+			uint32_t size;
+			void *buf;
+			struct mlx5dv_devx_umem *obj;
+
+			size = priv->caps.umems[i].a * priv->queue_size +
+					priv->caps.umems[i].b;
+			buf = rte_zmalloc(__func__, size, 4096);
+			if (buf == NULL) {
+				DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq"
+					" %u.", i, index);
+				return -1;
+			}
+			obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, buf,
+					size, IBV_ACCESS_LOCAL_WRITE);
+			if (obj == NULL) {
+				rte_free(buf);
+				DRV_LOG(ERR, "Failed to register umem %d for virtq %u.",
+					i, index);
+				return -1;
+			}
+			virtq->umems[i].size = size;
+			virtq->umems[i].buf = buf;
+			virtq->umems[i].obj = obj;
+		}
+	}
+	return 0;
 }
 
 static int
@@ -604,6 +671,8 @@ mlx5_vdpa_create_dev_resources(struct mlx5_vdpa_priv *priv)
 		return -rte_errno;
 	if (mlx5_vdpa_event_qp_global_prepare(priv))
 		return -rte_errno;
+	if (mlx5_vdpa_virtq_resource_prepare(priv))
+		return -rte_errno;
 	return 0;
 }
 
@@ -638,6 +707,7 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 	priv->num_lag_ports = 1;
 	pthread_mutex_init(&priv->vq_config_lock, NULL);
 	priv->cdev = cdev;
+	mlx5_vdpa_config_get(mkvlist, priv);
 	if (mlx5_vdpa_create_dev_resources(priv))
 		goto error;
 	priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops);
@@ -646,7 +716,6 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 		rte_errno = rte_errno ? rte_errno : EINVAL;
 		goto error;
 	}
-	mlx5_vdpa_config_get(mkvlist, priv);
 	SLIST_INIT(&priv->mr_list);
 	pthread_mutex_lock(&priv_list_lock);
 	TAILQ_INSERT_TAIL(&priv_list, priv, next);
@@ -684,6 +753,8 @@ mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv)
 {
 	uint32_t i;
 
+	if (priv->queues)
+		mlx5_vdpa_virtqs_cleanup(priv);
 	mlx5_vdpa_dev_cache_clean(priv);
 	for (i = 0; i < priv->caps.max_num_virtio_queues; i++) {
 		if (!priv->virtqs[i].counters)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index e7f3319f89..f6719a3c60 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -135,6 +135,8 @@ struct mlx5_vdpa_priv {
 	uint8_t hw_latency_mode; /* Hardware CQ moderation mode. */
 	uint16_t hw_max_latency_us; /* Hardware CQ moderation period in usec. */
 	uint16_t hw_max_pending_comp; /* Hardware CQ moderation counter. */
+	uint16_t queue_size; /* virtq depth for pre-creating virtq resource */
+	uint16_t queues; /* Max virtq pair for pre-creating virtq resource */
 	struct rte_vdpa_device *vdev; /* vDPA device. */
 	struct mlx5_common_device *cdev; /* Backend mlx5 device. */
 	int vid; /* vhost device id. */

From patchwork Mon Jun 6 11:46:38 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112379
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
CC: Yajun Wu
Subject: [PATCH v1 05/17] common/mlx5: add DevX API to move QP to reset state
Date: Mon, 6 Jun 2022 14:46:38 +0300
Message-ID: <20220606114650.209612-6-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
From: Yajun Wu

Support setting a QP to the RESET state.

Signed-off-by: Yajun Wu
---
 drivers/common/mlx5/mlx5_devx_cmds.c |  7 +++++++
 drivers/common/mlx5/mlx5_prm.h       | 17 +++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index c6bdbc12bb..1d6d6578d6 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -2264,11 +2264,13 @@ mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op,
 		uint32_t rst2init[MLX5_ST_SZ_DW(rst2init_qp_in)];
 		uint32_t init2rtr[MLX5_ST_SZ_DW(init2rtr_qp_in)];
 		uint32_t rtr2rts[MLX5_ST_SZ_DW(rtr2rts_qp_in)];
+		uint32_t qp2rst[MLX5_ST_SZ_DW(2rst_qp_in)];
 	} in;
 	union {
 		uint32_t rst2init[MLX5_ST_SZ_DW(rst2init_qp_out)];
 		uint32_t init2rtr[MLX5_ST_SZ_DW(init2rtr_qp_out)];
 		uint32_t rtr2rts[MLX5_ST_SZ_DW(rtr2rts_qp_out)];
+		uint32_t qp2rst[MLX5_ST_SZ_DW(2rst_qp_out)];
 	} out;
 	void *qpc;
 	int ret;
@@ -2311,6 +2313,11 @@ mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op,
 		inlen = sizeof(in.rtr2rts);
 		outlen = sizeof(out.rtr2rts);
 		break;
+	case MLX5_CMD_OP_QP_2RST:
+		MLX5_SET(2rst_qp_in, &in, qpn, qp->id);
+		inlen = sizeof(in.qp2rst);
+		outlen = sizeof(out.qp2rst);
+		break;
 	default:
 		DRV_LOG(ERR, "Invalid or unsupported QP modify op %u.",
 			qp_st_mod_op);
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index bc3e70a1d1..8a2f55c33e 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3657,6 +3657,23 @@ struct mlx5_ifc_init2init_qp_in_bits {
 	u8 reserved_at_800[0x80];
 };
 
+struct mlx5_ifc_2rst_qp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_2rst_qp_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 vhca_tunnel_id[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_80[0x8];
+	u8 qpn[0x18];
+	u8 reserved_at_a0[0x20];
+};
+
 struct mlx5_ifc_dealloc_pd_out_bits {
 	u8 status[0x8];
 	u8 reserved_0[0x18];

From patchwork Mon Jun 6 11:46:39 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112380
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
CC: Yajun Wu
Subject: [PATCH v1 06/17] vdpa/mlx5: support event qp reuse
Date: Mon, 6 Jun 2022 14:46:39 +0300
Message-ID: <20220606114650.209612-7-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
From: Yajun Wu

To speed up queue creation, the event QP and CQ are created only once;
each virtq creation then reuses the same event QP and CQ.

Because the FW sets the event QP to the error state during virtq
destroy, the event QP must be modified to the RESET state and then back
to the RTS state as usual. This saves about 1.5 ms per virtq creation.

After a SW QP reset, the QP PI/CI both become 0 while the CQ PI/CI keep
their previous values. Add a new variable qp_ci to save the SW QP CI and
move the QP PI independently of the CQ CI.

Add a new function mlx5_vdpa_drain_cq() to drain the CQ CQEs after
virtq release.

Signed-off-by: Yajun Wu
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       |  8 ++++
 drivers/vdpa/mlx5/mlx5_vdpa.h       | 12 +++++-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 60 +++++++++++++++++++++++++++--
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c |  6 +--
 4 files changed, 78 insertions(+), 8 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index faf833ee2f..ee99952e11 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -269,6 +269,7 @@ mlx5_vdpa_dev_close(int vid)
 	}
 	mlx5_vdpa_steer_unset(priv);
 	mlx5_vdpa_virtqs_release(priv);
+	mlx5_vdpa_drain_cq(priv);
 	if (priv->lm_mr.addr)
 		mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
 	priv->state = MLX5_VDPA_STATE_PROBED;
@@ -555,7 +556,14 @@ mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
 		return 0;
 	for (index = 0; index < (priv->queues * 2); ++index) {
 		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
+		int ret = mlx5_vdpa_event_qp_prepare(priv, priv->queue_size,
+					-1, &virtq->eqp);
 
+		if (ret) {
+			DRV_LOG(ERR, "Failed to create event QPs for virtq %d.",
+				index);
+			return -1;
+		}
 		if (priv->caps.queue_counters_valid) {
 			if (!virtq->counters)
 				virtq->counters =
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index f6719a3c60..bf82026e37 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -55,6 +55,7 @@ struct mlx5_vdpa_event_qp {
 	struct mlx5_vdpa_cq cq;
 	struct mlx5_devx_obj *fw_qp;
 	struct mlx5_devx_qp sw_qp;
+	uint16_t qp_pi;
 };
 
 struct mlx5_vdpa_query_mr {
@@ -226,7 +227,7 @@ int mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv);
  * @return
  *   0 on success, -1 otherwise and rte_errno is set.
  */
-int mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
+int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 		int callfd, struct mlx5_vdpa_event_qp *eqp);
 
@@ -479,4 +480,13 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
  */
 int
 mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid);
+
+/**
+ * Drain virtq CQ CQE.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ */
+void
+mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 7167a98db0..b43dca9255 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -137,7 +137,7 @@ mlx5_vdpa_cq_poll(struct mlx5_vdpa_cq *cq)
 		};
 		uint32_t word;
 	} last_word;
-	uint16_t next_wqe_counter = cq->cq_ci;
+	uint16_t next_wqe_counter = eqp->qp_pi;
 	uint16_t cur_wqe_counter;
 	uint16_t comp;
 
@@ -156,9 +156,10 @@ mlx5_vdpa_cq_poll(struct mlx5_vdpa_cq *cq)
 		rte_io_wmb();
 		/* Ring CQ doorbell record. */
 		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+		eqp->qp_pi += comp;
 		rte_io_wmb();
 		/* Ring SW QP doorbell record. */
-		eqp->sw_qp.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci + cq_size);
+		eqp->sw_qp.db_rec[0] = rte_cpu_to_be_32(eqp->qp_pi + cq_size);
 	}
 	return comp;
 }
@@ -232,6 +233,25 @@ mlx5_vdpa_queues_complete(struct mlx5_vdpa_priv *priv)
 	return max;
 }
 
+void
+mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv)
+{
+	unsigned int i;
+
+	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
+		struct mlx5_vdpa_cq *cq = &priv->virtqs[i].eqp.cq;
+
+		mlx5_vdpa_queue_complete(cq);
+		if (cq->cq_obj.cq) {
+			cq->cq_obj.cqes[0].wqe_counter =
+				rte_cpu_to_be_16(UINT16_MAX);
+			priv->virtqs[i].eqp.qp_pi = 0;
+			if (!cq->armed)
+				mlx5_vdpa_cq_arm(priv, cq);
+		}
+	}
+}
+
 /* Wait on all CQs channel for completion event. */
 static struct mlx5_vdpa_cq *
 mlx5_vdpa_event_wait(struct mlx5_vdpa_priv *priv __rte_unused)
@@ -574,14 +594,44 @@ mlx5_vdpa_qps2rts(struct mlx5_vdpa_event_qp *eqp)
 	return 0;
 }
 
+static int
+mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp)
+{
+	if (mlx5_devx_cmd_modify_qp_state(eqp->fw_qp, MLX5_CMD_OP_QP_2RST,
+	    eqp->sw_qp.qp->id)) {
+		DRV_LOG(ERR, "Failed to modify FW QP to RST state(%u).",
+			rte_errno);
+		return -1;
+	}
+	if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp.qp,
+	    MLX5_CMD_OP_QP_2RST, eqp->fw_qp->id)) {
+		DRV_LOG(ERR, "Failed to modify SW QP to RST state(%u).",
+			rte_errno);
+		return -1;
+	}
+	return mlx5_vdpa_qps2rts(eqp);
+}
+
 int
-mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
+mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 		int callfd, struct mlx5_vdpa_event_qp *eqp)
 {
 	struct mlx5_devx_qp_attr attr = {0};
 	uint16_t log_desc_n = rte_log2_u32(desc_n);
 	uint32_t ret;
 
+	if (eqp->cq.cq_obj.cq != NULL && log_desc_n == eqp->cq.log_desc_n) {
+		/* Reuse existing resources. */
+		eqp->cq.callfd = callfd;
+		/* FW will set event qp to error state in q destroy.
*/ + if (!mlx5_vdpa_qps2rst2rts(eqp)) { + rte_write32(rte_cpu_to_be_32(RTE_BIT32(log_desc_n)), + &eqp->sw_qp.db_rec[0]); + return 0; + } + } + if (eqp->fw_qp) + mlx5_vdpa_event_qp_destroy(eqp); if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, &eqp->cq)) return -1; attr.pd = priv->cdev->pdn; @@ -608,8 +658,10 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n, } if (mlx5_vdpa_qps2rts(eqp)) goto error; + eqp->qp_pi = 0; /* First ringing. */ - rte_write32(rte_cpu_to_be_32(RTE_BIT32(log_desc_n)), + if (eqp->sw_qp.db_rec) + rte_write32(rte_cpu_to_be_32(RTE_BIT32(log_desc_n)), &eqp->sw_qp.db_rec[0]); return 0; error: diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index c258eb3024..6637ba1503 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -87,6 +87,8 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) } virtq->umems[j].size = 0; } + if (virtq->eqp.fw_qp) + mlx5_vdpa_event_qp_destroy(&virtq->eqp); } } @@ -117,8 +119,6 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) claim_zero(mlx5_devx_cmd_destroy(virtq->virtq)); } virtq->virtq = NULL; - if (virtq->eqp.fw_qp) - mlx5_vdpa_event_qp_destroy(&virtq->eqp); virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED; return 0; } @@ -246,7 +246,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) MLX5_VIRTQ_EVENT_MODE_QP : MLX5_VIRTQ_EVENT_MODE_NO_MSIX; if (attr.event_mode == MLX5_VIRTQ_EVENT_MODE_QP) { - ret = mlx5_vdpa_event_qp_create(priv, vq.size, vq.callfd, + ret = mlx5_vdpa_event_qp_prepare(priv, vq.size, vq.callfd, &virtq->eqp); if (ret) { DRV_LOG(ERR, "Failed to create event QPs for virtq %d.", From patchwork Mon Jun 6 11:46:40 2022 X-Patchwork-Submitter: Li Zhang X-Patchwork-Id: 112381
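The recycling flow described in the patch above (reuse the event QP/CQ pair when the requested queue size matches, cycle the QP through RST back to RTS, and restart the SW producer index at zero) can be sketched with a toy model. All names here (eqp_model, eqp_can_reuse, eqp_recycle) are illustrative stand-ins, not the driver's DevX objects:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for struct mlx5_vdpa_event_qp: just the state the
 * reuse decision looks at. */
struct eqp_model {
	bool cq_valid;       /* a CQ object already exists */
	uint16_t log_desc_n; /* log2 of the descriptor count it was sized for */
	uint16_t qp_pi;      /* SW QP producer index, kept apart from cq_ci */
};

/* Reuse only when a CQ exists and the requested size matches what the
 * cached resources were created for; otherwise destroy and recreate. */
static bool
eqp_can_reuse(const struct eqp_model *eqp, uint16_t log_desc_n)
{
	return eqp->cq_valid && eqp->log_desc_n == log_desc_n;
}

/* On reuse the QP is cycled RST -> RTS, so the SW producer index
 * starts over from zero while the CQ indexes keep their values. */
static void
eqp_recycle(struct eqp_model *eqp)
{
	eqp->qp_pi = 0;
}
```

The point of tracking qp_pi separately is visible here: after a recycle the QP side restarts at 0, so deriving the QP doorbell value from cq_ci alone no longer works.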
From: Li Zhang
Subject: [PATCH v1 07/17] common/mlx5: extend virtq modifiable fields
Date: Mon, 6 Jun 2022 14:46:40 +0300
Message-ID: <20220606114650.209612-8-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
A virtq configuration can be modified after the virtq creation.
Added the following modifiable fields:
1. Address fields: desc_addr/used_addr/available_addr
2. hw_available_index
3. hw_used_index
4. virtio_q_type
5. version type
6. queue mkey
7. Feature bit mask: tso_ipv4/tso_ipv6/tx_csum/rx_csum
8. Event mode: event_mode/event_qpn_or_msix

Signed-off-by: Li Zhang --- drivers/common/mlx5/mlx5_devx_cmds.c | 70 +++++++++++++++++++++++----- drivers/common/mlx5/mlx5_devx_cmds.h | 6 ++- drivers/common/mlx5/mlx5_prm.h | 13 +++++- 3 files changed, 76 insertions(+), 13 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 1d6d6578d6..1b68c37092 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -545,6 +545,15 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx, vdpa_attr->log_doorbell_stride = MLX5_GET(virtio_emulation_cap, hcattr, log_doorbell_stride); + vdpa_attr->vnet_modify_ext = + MLX5_GET(virtio_emulation_cap, hcattr, + vnet_modify_ext); + vdpa_attr->virtio_net_q_addr_modify = + MLX5_GET(virtio_emulation_cap, hcattr, + virtio_net_q_addr_modify); + vdpa_attr->virtio_q_index_modify = + MLX5_GET(virtio_emulation_cap, hcattr, + virtio_q_index_modify); vdpa_attr->log_doorbell_bar_size = MLX5_GET(virtio_emulation_cap, hcattr, log_doorbell_bar_size); @@ -2074,27 +2083,66 @@ mlx5_devx_cmd_modify_virtq(struct mlx5_devx_obj *virtq_obj, MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type, MLX5_GENERAL_OBJ_TYPE_VIRTQ); MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, virtq_obj->id); - MLX5_SET64(virtio_net_q, virtq, modify_field_select, attr->type); + MLX5_SET64(virtio_net_q, virtq, modify_field_select, + attr->mod_fields_bitmap); MLX5_SET16(virtio_q, virtctx, queue_index, attr->queue_index); - switch (attr->type) { - case MLX5_VIRTQ_MODIFY_TYPE_STATE: + if (!attr->mod_fields_bitmap) { + DRV_LOG(ERR, "Failed to modify VIRTQ for no type set."); + rte_errno = EINVAL; + return -rte_errno; + } + if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_STATE)
MLX5_SET16(virtio_net_q, virtq, state, attr->state); - break; - case MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS: + if (attr->mod_fields_bitmap & + MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS) { MLX5_SET(virtio_net_q, virtq, dirty_bitmap_mkey, attr->dirty_bitmap_mkey); MLX5_SET64(virtio_net_q, virtq, dirty_bitmap_addr, attr->dirty_bitmap_addr); MLX5_SET(virtio_net_q, virtq, dirty_bitmap_size, attr->dirty_bitmap_size); - break; - case MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE: + } + if (attr->mod_fields_bitmap & + MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE) MLX5_SET(virtio_net_q, virtq, dirty_bitmap_dump_enable, attr->dirty_bitmap_dump_enable); - break; - default: - rte_errno = EINVAL; - return -rte_errno; + if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_QUEUE_PERIOD) { + MLX5_SET(virtio_q, virtctx, queue_period_mode, + attr->hw_latency_mode); + MLX5_SET(virtio_q, virtctx, queue_period_us, + attr->hw_max_latency_us); + MLX5_SET(virtio_q, virtctx, queue_max_count, + attr->hw_max_pending_comp); + } + if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_ADDR) { + MLX5_SET64(virtio_q, virtctx, desc_addr, attr->desc_addr); + MLX5_SET64(virtio_q, virtctx, used_addr, attr->used_addr); + MLX5_SET64(virtio_q, virtctx, available_addr, + attr->available_addr); + } + if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_HW_AVAILABLE_INDEX) + MLX5_SET16(virtio_net_q, virtq, hw_available_index, + attr->hw_available_index); + if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_HW_USED_INDEX) + MLX5_SET16(virtio_net_q, virtq, hw_used_index, + attr->hw_used_index); + if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_Q_TYPE) + MLX5_SET16(virtio_q, virtctx, virtio_q_type, attr->q_type); + if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_VERSION_1_0) + MLX5_SET16(virtio_q, virtctx, virtio_version_1_0, + attr->virtio_version_1_0); + if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_Q_MKEY) + MLX5_SET(virtio_q, virtctx, virtio_q_mkey, attr->mkey); + if 
(attr->mod_fields_bitmap & + MLX5_VIRTQ_MODIFY_TYPE_QUEUE_FEATURE_BIT_MASK) { + MLX5_SET16(virtio_net_q, virtq, tso_ipv4, attr->tso_ipv4); + MLX5_SET16(virtio_net_q, virtq, tso_ipv6, attr->tso_ipv6); + MLX5_SET16(virtio_net_q, virtq, tx_csum, attr->tx_csum); + MLX5_SET16(virtio_net_q, virtq, rx_csum, attr->rx_csum); + } + if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE) { + MLX5_SET16(virtio_q, virtctx, event_mode, attr->event_mode); + MLX5_SET(virtio_q, virtctx, event_qpn_or_msix, attr->qp_id); } ret = mlx5_glue->devx_obj_modify(virtq_obj->obj, in, sizeof(in), out, sizeof(out)); diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index 3747ef9e33..ec6467d927 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -74,6 +74,9 @@ struct mlx5_hca_vdpa_attr { uint32_t log_doorbell_stride:5; uint32_t log_doorbell_bar_size:5; uint32_t queue_counters_valid:1; + uint32_t vnet_modify_ext:1; + uint32_t virtio_net_q_addr_modify:1; + uint32_t virtio_q_index_modify:1; uint32_t max_num_virtio_queues; struct { uint32_t a; @@ -465,7 +468,7 @@ struct mlx5_devx_virtq_attr { uint32_t tis_id; uint32_t counters_obj_id; uint64_t dirty_bitmap_addr; - uint64_t type; + uint64_t mod_fields_bitmap; uint64_t desc_addr; uint64_t used_addr; uint64_t available_addr; @@ -475,6 +478,7 @@ struct mlx5_devx_virtq_attr { uint64_t offset; } umems[3]; uint8_t error_type; + uint8_t q_type; }; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 8a2f55c33e..5f58a6ee1d 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1802,7 +1802,9 @@ struct mlx5_ifc_virtio_emulation_cap_bits { u8 virtio_queue_type[0x8]; u8 reserved_at_20[0x13]; u8 log_doorbell_stride[0x5]; - u8 reserved_at_3b[0x3]; + u8 vnet_modify_ext[0x1]; + u8 virtio_net_q_addr_modify[0x1]; + u8 virtio_q_index_modify[0x1]; u8 log_doorbell_bar_size[0x5]; u8 doorbell_bar_offset[0x40]; u8 
reserved_at_80[0x8]; @@ -3024,6 +3026,15 @@ enum { MLX5_VIRTQ_MODIFY_TYPE_STATE = (1UL << 0), MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS = (1UL << 3), MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE = (1UL << 4), + MLX5_VIRTQ_MODIFY_TYPE_QUEUE_PERIOD = (1UL << 5), + MLX5_VIRTQ_MODIFY_TYPE_ADDR = (1UL << 6), + MLX5_VIRTQ_MODIFY_TYPE_HW_AVAILABLE_INDEX = (1UL << 7), + MLX5_VIRTQ_MODIFY_TYPE_HW_USED_INDEX = (1UL << 8), + MLX5_VIRTQ_MODIFY_TYPE_Q_TYPE = (1UL << 9), + MLX5_VIRTQ_MODIFY_TYPE_VERSION_1_0 = (1UL << 10), + MLX5_VIRTQ_MODIFY_TYPE_Q_MKEY = (1UL << 11), + MLX5_VIRTQ_MODIFY_TYPE_QUEUE_FEATURE_BIT_MASK = (1UL << 12), + MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE = (1UL << 13), }; struct mlx5_ifc_virtio_q_bits { From patchwork Mon Jun 6 11:46:41 2022 X-Patchwork-Submitter: Li Zhang X-Patchwork-Id: 112382
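With this change, callers of mlx5_devx_cmd_modify_virtq OR together every field they want to change in modify_field_select instead of passing one type at a time. The sketch below reuses the bit positions from the mlx5_prm.h hunk above under shortened, illustrative macro names; the particular combination is an arbitrary example, not a specific driver call:

```c
#include <assert.h>
#include <stdint.h>

/* Bit positions mirror the MLX5_VIRTQ_MODIFY_TYPE_* values added in
 * the patch; the short names are illustrative. */
#define MOD_STATE              (1UL << 0)
#define MOD_ADDR               (1UL << 6)
#define MOD_HW_AVAILABLE_INDEX (1UL << 7)
#define MOD_HW_USED_INDEX      (1UL << 8)
#define MOD_EVENT_MODE         (1UL << 13)

/* Example bitmap: restore the ring addresses and both HW indexes and
 * set the state, all in a single modify command. */
static uint64_t
example_restore_bitmap(void)
{
	return MOD_STATE | MOD_ADDR | MOD_HW_AVAILABLE_INDEX |
	       MOD_HW_USED_INDEX;
}
```

Because the bitmap replaces the old one-type-per-call switch, several previously separate modify commands collapse into one, which matters in time-critical paths such as live migration.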
From: Li Zhang
Subject: [PATCH v1 08/17] vdpa/mlx5: pre-create virtq in the prob
Date: Mon, 6 Jun 2022 14:46:41 +0300
Message-ID: <20220606114650.209612-9-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
The dev_config operation is called during live migration (LM). LM time is critical because all the VM packets are dropped directly during that window. Move the virtq creation to probe time, and in the dev_config stage only modify the configuration, using the new ability to modify the virtq. This optimization accelerates the LM process and reduces its time by 70%.

Signed-off-by: Li Zhang --- drivers/vdpa/mlx5/mlx5_vdpa.h | 4 + drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 13 +- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 257 +++++++++++++++++----- 3 files changed, 170 insertions(+), 104 deletions(-) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index bf82026e37..e5553079fe 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -80,6 +80,7 @@ struct mlx5_vdpa_virtq { uint16_t vq_size; uint8_t notifier_state; bool stopped; + uint32_t configured:1; uint32_t version; struct mlx5_vdpa_priv *priv; struct mlx5_devx_obj *virtq; @@ -489,4 +490,7 @@ mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid); */ void mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv); + +bool +mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv); #endif /* RTE_PMD_MLX5_VDPA_H_ */ diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c index 43a2b98255..a8faf0c116 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c @@ -12,14 +12,17 @@ int mlx5_vdpa_logging_enable(struct mlx5_vdpa_priv *priv, int enable) { struct mlx5_devx_virtq_attr attr = { - .type = MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE, + .mod_fields_bitmap = + MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE, .dirty_bitmap_dump_enable = enable, }; + struct
mlx5_vdpa_virtq *virtq; int i; for (i = 0; i < priv->nr_virtqs; ++i) { attr.queue_index = i; - if (!priv->virtqs[i].virtq) { + virtq = &priv->virtqs[i]; + if (!virtq->configured) { DRV_LOG(DEBUG, "virtq %d is invalid for dirty bitmap " "enabling.", i); } else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, @@ -37,10 +40,11 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, uint64_t log_size) { struct mlx5_devx_virtq_attr attr = { - .type = MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS, + .mod_fields_bitmap = MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS, .dirty_bitmap_addr = log_base, .dirty_bitmap_size = log_size, }; + struct mlx5_vdpa_virtq *virtq; int i; int ret = mlx5_os_wrapped_mkey_create(priv->cdev->ctx, priv->cdev->pd, priv->cdev->pdn, @@ -54,7 +58,8 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, attr.dirty_bitmap_mkey = priv->lm_mr.lkey; for (i = 0; i < priv->nr_virtqs; ++i) { attr.queue_index = i; - if (!priv->virtqs[i].virtq) { + virtq = &priv->virtqs[i]; + if (!virtq->configured) { DRV_LOG(DEBUG, "virtq %d is invalid for LM.", i); } else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, &attr)) { diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index 6637ba1503..55cbc9fad2 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -75,6 +75,7 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) for (i = 0; i < priv->caps.max_num_virtio_queues; i++) { struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i]; + virtq->configured = 0; for (j = 0; j < RTE_DIM(virtq->umems); ++j) { if (virtq->umems[j].obj) { claim_zero(mlx5_glue->devx_umem_dereg @@ -111,11 +112,12 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) rte_intr_fd_set(virtq->intr_handle, -1); } rte_intr_instance_free(virtq->intr_handle); - if (virtq->virtq) { + if (virtq->configured) { ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index); if (ret) 
DRV_LOG(WARNING, "Failed to stop virtq %d.", virtq->index); + virtq->configured = 0; claim_zero(mlx5_devx_cmd_destroy(virtq->virtq)); } virtq->virtq = NULL; @@ -138,7 +140,7 @@ int mlx5_vdpa_virtq_modify(struct mlx5_vdpa_virtq *virtq, int state) { struct mlx5_devx_virtq_attr attr = { - .type = MLX5_VIRTQ_MODIFY_TYPE_STATE, + .mod_fields_bitmap = MLX5_VIRTQ_MODIFY_TYPE_STATE, .state = state ? MLX5_VIRTQ_STATE_RDY : MLX5_VIRTQ_STATE_SUSPEND, .queue_index = virtq->index, @@ -153,7 +155,7 @@ mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index) struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index]; int ret; - if (virtq->stopped) + if (virtq->stopped || !virtq->configured) return 0; ret = mlx5_vdpa_virtq_modify(virtq, 0); if (ret) @@ -209,51 +211,54 @@ mlx5_vdpa_hva_to_gpa(struct rte_vhost_memory *mem, uint64_t hva) } static int -mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) +mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, + struct mlx5_devx_virtq_attr *attr, + struct rte_vhost_vring *vq, int index) { struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index]; - struct rte_vhost_vring vq; - struct mlx5_devx_virtq_attr attr = {0}; uint64_t gpa; int ret; unsigned int i; - uint16_t last_avail_idx; - uint16_t last_used_idx; - uint16_t event_num = MLX5_EVENT_TYPE_OBJECT_CHANGE; - uint64_t cookie; - - ret = rte_vhost_get_vhost_vring(priv->vid, index, &vq); - if (ret) - return -1; - if (vq.size == 0) - return 0; - virtq->index = index; - virtq->vq_size = vq.size; - attr.tso_ipv4 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4)); - attr.tso_ipv6 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO6)); - attr.tx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_CSUM)); - attr.rx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)); - attr.virtio_version_1_0 = !!(priv->features & (1ULL << - VIRTIO_F_VERSION_1)); - attr.type = (priv->features & (1ULL << VIRTIO_F_RING_PACKED)) ? 
+ uint16_t last_avail_idx = 0; + uint16_t last_used_idx = 0; + + if (virtq->virtq) + attr->mod_fields_bitmap = MLX5_VIRTQ_MODIFY_TYPE_STATE | + MLX5_VIRTQ_MODIFY_TYPE_ADDR | + MLX5_VIRTQ_MODIFY_TYPE_HW_AVAILABLE_INDEX | + MLX5_VIRTQ_MODIFY_TYPE_HW_USED_INDEX | + MLX5_VIRTQ_MODIFY_TYPE_VERSION_1_0 | + MLX5_VIRTQ_MODIFY_TYPE_Q_TYPE | + MLX5_VIRTQ_MODIFY_TYPE_Q_MKEY | + MLX5_VIRTQ_MODIFY_TYPE_QUEUE_FEATURE_BIT_MASK | + MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE; + attr->tso_ipv4 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4)); + attr->tso_ipv6 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO6)); + attr->tx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_CSUM)); + attr->rx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)); + attr->virtio_version_1_0 = + !!(priv->features & (1ULL << VIRTIO_F_VERSION_1)); + attr->q_type = + (priv->features & (1ULL << VIRTIO_F_RING_PACKED)) ? MLX5_VIRTQ_TYPE_PACKED : MLX5_VIRTQ_TYPE_SPLIT; /* * No need event QPs creation when the guest in poll mode or when the * capability allows it. */ - attr.event_mode = vq.callfd != -1 || !(priv->caps.event_mode & (1 << - MLX5_VIRTQ_EVENT_MODE_NO_MSIX)) ? - MLX5_VIRTQ_EVENT_MODE_QP : - MLX5_VIRTQ_EVENT_MODE_NO_MSIX; - if (attr.event_mode == MLX5_VIRTQ_EVENT_MODE_QP) { - ret = mlx5_vdpa_event_qp_prepare(priv, vq.size, vq.callfd, - &virtq->eqp); + attr->event_mode = vq->callfd != -1 || + !(priv->caps.event_mode & (1 << MLX5_VIRTQ_EVENT_MODE_NO_MSIX)) ? 
+ MLX5_VIRTQ_EVENT_MODE_QP : MLX5_VIRTQ_EVENT_MODE_NO_MSIX; + if (attr->event_mode == MLX5_VIRTQ_EVENT_MODE_QP) { + ret = mlx5_vdpa_event_qp_prepare(priv, + vq->size, vq->callfd, &virtq->eqp); if (ret) { - DRV_LOG(ERR, "Failed to create event QPs for virtq %d.", + DRV_LOG(ERR, + "Failed to create event QPs for virtq %d.", index); return -1; } - attr.qp_id = virtq->eqp.fw_qp->id; + attr->mod_fields_bitmap |= MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE; + attr->qp_id = virtq->eqp.fw_qp->id; } else { DRV_LOG(INFO, "Virtq %d is, for sure, working by poll mode, no" " need event QPs and event mechanism.", index); @@ -265,77 +270,82 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) if (!virtq->counters) { DRV_LOG(ERR, "Failed to create virtq couners for virtq" " %d.", index); - goto error; + return -1; } - attr.counters_obj_id = virtq->counters->id; + attr->counters_obj_id = virtq->counters->id; } /* Setup 3 UMEMs for each virtq. */ - for (i = 0; i < RTE_DIM(virtq->umems); ++i) { - uint32_t size; - void *buf; - struct mlx5dv_devx_umem *obj; - - size = priv->caps.umems[i].a * vq.size + priv->caps.umems[i].b; - if (virtq->umems[i].size == size && - virtq->umems[i].obj != NULL) { - /* Reuse registered memory. */ - memset(virtq->umems[i].buf, 0, size); - goto reuse; - } - if (virtq->umems[i].obj) - claim_zero(mlx5_glue->devx_umem_dereg + if (virtq->virtq) { + for (i = 0; i < RTE_DIM(virtq->umems); ++i) { + uint32_t size; + void *buf; + struct mlx5dv_devx_umem *obj; + + size = + priv->caps.umems[i].a * vq->size + priv->caps.umems[i].b; + if (virtq->umems[i].size == size && + virtq->umems[i].obj != NULL) { + /* Reuse registered memory. 
*/ + memset(virtq->umems[i].buf, 0, size); + goto reuse; + } + if (virtq->umems[i].obj) + claim_zero(mlx5_glue->devx_umem_dereg (virtq->umems[i].obj)); - if (virtq->umems[i].buf) - rte_free(virtq->umems[i].buf); - virtq->umems[i].size = 0; - virtq->umems[i].obj = NULL; - virtq->umems[i].buf = NULL; - buf = rte_zmalloc(__func__, size, 4096); - if (buf == NULL) { - DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq" + if (virtq->umems[i].buf) + rte_free(virtq->umems[i].buf); + virtq->umems[i].size = 0; + virtq->umems[i].obj = NULL; + virtq->umems[i].buf = NULL; + buf = rte_zmalloc(__func__, + size, 4096); + if (buf == NULL) { + DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq" " %u.", i, index); - goto error; - } - obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, buf, size, - IBV_ACCESS_LOCAL_WRITE); - if (obj == NULL) { - DRV_LOG(ERR, "Failed to register umem %d for virtq %u.", + return -1; + } + obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, + buf, size, IBV_ACCESS_LOCAL_WRITE); + if (obj == NULL) { + DRV_LOG(ERR, "Failed to register umem %d for virtq %u.", i, index); - goto error; - } - virtq->umems[i].size = size; - virtq->umems[i].buf = buf; - virtq->umems[i].obj = obj; + rte_free(buf); + return -1; + } + virtq->umems[i].size = size; + virtq->umems[i].buf = buf; + virtq->umems[i].obj = obj; reuse: - attr.umems[i].id = virtq->umems[i].obj->umem_id; - attr.umems[i].offset = 0; - attr.umems[i].size = virtq->umems[i].size; + attr->umems[i].id = virtq->umems[i].obj->umem_id; + attr->umems[i].offset = 0; + attr->umems[i].size = virtq->umems[i].size; + } } - if (attr.type == MLX5_VIRTQ_TYPE_SPLIT) { + if (attr->q_type == MLX5_VIRTQ_TYPE_SPLIT) { gpa = mlx5_vdpa_hva_to_gpa(priv->vmem, - (uint64_t)(uintptr_t)vq.desc); + (uint64_t)(uintptr_t)vq->desc); if (!gpa) { DRV_LOG(ERR, "Failed to get descriptor ring GPA."); - goto error; + return -1; } - attr.desc_addr = gpa; + attr->desc_addr = gpa; gpa = mlx5_vdpa_hva_to_gpa(priv->vmem, - (uint64_t)(uintptr_t)vq.used); 
+ (uint64_t)(uintptr_t)vq->used); if (!gpa) { DRV_LOG(ERR, "Failed to get GPA for used ring."); - goto error; + return -1; } - attr.used_addr = gpa; + attr->used_addr = gpa; gpa = mlx5_vdpa_hva_to_gpa(priv->vmem, - (uint64_t)(uintptr_t)vq.avail); + (uint64_t)(uintptr_t)vq->avail); if (!gpa) { DRV_LOG(ERR, "Failed to get GPA for available ring."); - goto error; + return -1; } - attr.available_addr = gpa; + attr->available_addr = gpa; } - ret = rte_vhost_get_vring_base(priv->vid, index, &last_avail_idx, - &last_used_idx); + ret = rte_vhost_get_vring_base(priv->vid, + index, &last_avail_idx, &last_used_idx); if (ret) { last_avail_idx = 0; last_used_idx = 0; @@ -345,24 +355,71 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) "virtq %d.", priv->vid, last_avail_idx, last_used_idx, index); } - attr.hw_available_index = last_avail_idx; - attr.hw_used_index = last_used_idx; - attr.q_size = vq.size; - attr.mkey = priv->gpa_mkey_index; - attr.tis_id = priv->tiss[(index / 2) % priv->num_lag_ports]->id; - attr.queue_index = index; - attr.pd = priv->cdev->pdn; - attr.hw_latency_mode = priv->hw_latency_mode; - attr.hw_max_latency_us = priv->hw_max_latency_us; - attr.hw_max_pending_comp = priv->hw_max_pending_comp; - virtq->virtq = mlx5_devx_cmd_create_virtq(priv->cdev->ctx, &attr); + attr->hw_available_index = last_avail_idx; + attr->hw_used_index = last_used_idx; + attr->q_size = vq->size; + attr->mkey = priv->gpa_mkey_index; + attr->tis_id = priv->tiss[(index / 2) % priv->num_lag_ports]->id; + attr->queue_index = index; + attr->pd = priv->cdev->pdn; + attr->hw_latency_mode = priv->hw_latency_mode; + attr->hw_max_latency_us = priv->hw_max_latency_us; + attr->hw_max_pending_comp = priv->hw_max_pending_comp; + if (attr->hw_latency_mode || attr->hw_max_latency_us || + attr->hw_max_pending_comp) + attr->mod_fields_bitmap |= MLX5_VIRTQ_MODIFY_TYPE_QUEUE_PERIOD; + return 0; +} + +bool +mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv) +{ + return 
(priv->caps.vnet_modify_ext && + priv->caps.virtio_net_q_addr_modify && + priv->caps.virtio_q_index_modify) ? true : false; +} + +static int +mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) +{ + struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index]; + struct rte_vhost_vring vq; + struct mlx5_devx_virtq_attr attr = {0}; + int ret; + uint16_t event_num = MLX5_EVENT_TYPE_OBJECT_CHANGE; + uint64_t cookie; + + ret = rte_vhost_get_vhost_vring(priv->vid, index, &vq); + if (ret) + return -1; + if (vq.size == 0) + return 0; virtq->priv = priv; - if (!virtq->virtq) + virtq->stopped = 0; + ret = mlx5_vdpa_virtq_sub_objs_prepare(priv, &attr, + &vq, index); + if (ret) { + DRV_LOG(ERR, "Failed to setup update virtq attr" + " %d.", index); goto error; - claim_zero(rte_vhost_enable_guest_notification(priv->vid, index, 1)); - if (mlx5_vdpa_virtq_modify(virtq, 1)) + } + if (!virtq->virtq) { + virtq->index = index; + virtq->vq_size = vq.size; + virtq->virtq = mlx5_devx_cmd_create_virtq(priv->cdev->ctx, + &attr); + if (!virtq->virtq) + goto error; + attr.mod_fields_bitmap = MLX5_VIRTQ_MODIFY_TYPE_STATE; + } + attr.state = MLX5_VIRTQ_STATE_RDY; + ret = mlx5_devx_cmd_modify_virtq(virtq->virtq, &attr); + if (ret) { + DRV_LOG(ERR, "Failed to modify virtq %d.", index); goto error; - virtq->priv = priv; + } + claim_zero(rte_vhost_enable_guest_notification(priv->vid, index, 1)); + virtq->configured = 1; rte_write32(virtq->index, priv->virtq_db_addr); /* Setup doorbell mapping. 
*/ virtq->intr_handle = @@ -553,7 +610,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable) return 0; DRV_LOG(INFO, "Virtq %d was modified, recreate it.", index); } - if (virtq->virtq) { + if (virtq->configured) { virtq->enable = 0; if (is_virtq_recvq(virtq->index, priv->nr_virtqs)) { ret = mlx5_vdpa_steer_update(priv);
From patchwork Mon Jun 6 11:46:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Li Zhang X-Patchwork-Id: 112383 X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang To: , , , CC: , , , Subject: [PATCH v1 09/17] vdpa/mlx5: optimize datapath-control synchronization Date: Mon, 6 Jun 2022 14:46:42 +0300 Message-ID: <20220606114650.209612-10-lizh@nvidia.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com> References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
The driver used a single global lock for all synchronization needed between the datapath and the control path. It is better to group each critical section with only the state it actually protects. Replace the global lock with the following locks:
1. Virtq locks (per virtq) synchronize datapath polling and parallel configuration on the same virtq.
2. A doorbell lock synchronizes doorbell updates, which are shared by all the virtqs of the device.
3. A steering lock for updates of the shared steering objects.
Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.c | 24 ++++---
 drivers/vdpa/mlx5/mlx5_vdpa.h | 13 ++--
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 97 ++++++++++++++++++-----------
 drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 34 +++++++---
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c | 7 ++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 88 +++++++++++++++++++-------
 6 files changed, 184 insertions(+), 79 deletions(-)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index ee99952e11..e5a11f72fd 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -135,6 +135,7 @@ mlx5_vdpa_set_vring_state(int vid, int vring, int state) struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid); struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_vdev(vdev); + struct mlx5_vdpa_virtq *virtq; int ret; if (priv == NULL) { @@ -145,9 +146,10 @@ mlx5_vdpa_set_vring_state(int vid, int vring, int state) DRV_LOG(ERR, "Too big vring id: %d.", vring); return -E2BIG; } - pthread_mutex_lock(&priv->vq_config_lock); + virtq = &priv->virtqs[vring]; + pthread_mutex_lock(&virtq->virtq_lock); ret = mlx5_vdpa_virtq_enable(priv, vring, state); - pthread_mutex_unlock(&priv->vq_config_lock); + pthread_mutex_unlock(&virtq->virtq_lock); return ret; } @@ -267,7 +269,9 @@
mlx5_vdpa_dev_close(int vid) ret |= mlx5_vdpa_lm_log(priv); priv->state = MLX5_VDPA_STATE_IN_PROGRESS; } + pthread_mutex_lock(&priv->steer_update_lock); mlx5_vdpa_steer_unset(priv); + pthread_mutex_unlock(&priv->steer_update_lock); mlx5_vdpa_virtqs_release(priv); mlx5_vdpa_drain_cq(priv); if (priv->lm_mr.addr) @@ -276,8 +280,6 @@ mlx5_vdpa_dev_close(int vid) if (!priv->connected) mlx5_vdpa_dev_cache_clean(priv); priv->vid = 0; - /* The mutex may stay locked after event thread cancel - initiate it. */ - pthread_mutex_init(&priv->vq_config_lock, NULL); DRV_LOG(INFO, "vDPA device %d was closed.", vid); return ret; } @@ -549,15 +551,21 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist, static int mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv) { + struct mlx5_vdpa_virtq *virtq; uint32_t index; uint32_t i; + for (index = 0; index < priv->caps.max_num_virtio_queues * 2; + index++) { + virtq = &priv->virtqs[index]; + pthread_mutex_init(&virtq->virtq_lock, NULL); + } if (!priv->queues) return 0; for (index = 0; index < (priv->queues * 2); ++index) { - struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index]; + virtq = &priv->virtqs[index]; int ret = mlx5_vdpa_event_qp_prepare(priv, priv->queue_size, - -1, &virtq->eqp); + -1, virtq); if (ret) { DRV_LOG(ERR, "Failed to create event QPs for virtq %d.", @@ -713,7 +721,8 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev, priv->num_lag_ports = attr->num_lag_ports; if (attr->num_lag_ports == 0) priv->num_lag_ports = 1; - pthread_mutex_init(&priv->vq_config_lock, NULL); + rte_spinlock_init(&priv->db_lock); + pthread_mutex_init(&priv->steer_update_lock, NULL); priv->cdev = cdev; mlx5_vdpa_config_get(mkvlist, priv); if (mlx5_vdpa_create_dev_resources(priv)) @@ -797,7 +806,6 @@ mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv) mlx5_vdpa_release_dev_resources(priv); if (priv->vdev) rte_vdpa_unregister_device(priv->vdev); - pthread_mutex_destroy(&priv->vq_config_lock); rte_free(priv); } diff --git 
a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index e5553079fe..3fd5eefc5e 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -82,6 +82,7 @@ struct mlx5_vdpa_virtq { bool stopped; uint32_t configured:1; uint32_t version; + pthread_mutex_t virtq_lock; struct mlx5_vdpa_priv *priv; struct mlx5_devx_obj *virtq; struct mlx5_devx_obj *counters; @@ -126,7 +127,8 @@ struct mlx5_vdpa_priv { TAILQ_ENTRY(mlx5_vdpa_priv) next; bool connected; enum mlx5_dev_state state; - pthread_mutex_t vq_config_lock; + rte_spinlock_t db_lock; + pthread_mutex_t steer_update_lock; uint64_t no_traffic_counter; pthread_t timer_tid; int event_mode; @@ -222,14 +224,15 @@ int mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv); * Number of descriptors. * @param[in] callfd * The guest notification file descriptor. - * @param[in/out] eqp - * Pointer to the event QP structure. + * @param[in/out] virtq + * Pointer to the virt-queue structure. * * @return * 0 on success, -1 otherwise and rte_errno is set. */ -int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, - int callfd, struct mlx5_vdpa_event_qp *eqp); +int +mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, + int callfd, struct mlx5_vdpa_virtq *virtq); /** * Destroy an event QP and all its related resources. 
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c index b43dca9255..2b0f5936d1 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c @@ -85,12 +85,13 @@ mlx5_vdpa_cq_arm(struct mlx5_vdpa_priv *priv, struct mlx5_vdpa_cq *cq) static int mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n, - int callfd, struct mlx5_vdpa_cq *cq) + int callfd, struct mlx5_vdpa_virtq *virtq) { struct mlx5_devx_cq_attr attr = { .use_first_only = 1, .uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar.obj), }; + struct mlx5_vdpa_cq *cq = &virtq->eqp.cq; uint16_t event_nums[1] = {0}; int ret; @@ -102,10 +103,11 @@ mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n, cq->log_desc_n = log_desc_n; rte_spinlock_init(&cq->sl); /* Subscribe CQ event to the event channel controlled by the driver. */ - ret = mlx5_os_devx_subscribe_devx_event(priv->eventc, - cq->cq_obj.cq->obj, - sizeof(event_nums), event_nums, - (uint64_t)(uintptr_t)cq); + ret = mlx5_glue->devx_subscribe_devx_event(priv->eventc, + cq->cq_obj.cq->obj, + sizeof(event_nums), + event_nums, + (uint64_t)(uintptr_t)virtq); if (ret) { DRV_LOG(ERR, "Failed to subscribe CQE event."); rte_errno = errno; @@ -167,13 +169,17 @@ mlx5_vdpa_cq_poll(struct mlx5_vdpa_cq *cq) static void mlx5_vdpa_arm_all_cqs(struct mlx5_vdpa_priv *priv) { + struct mlx5_vdpa_virtq *virtq; struct mlx5_vdpa_cq *cq; int i; for (i = 0; i < priv->nr_virtqs; i++) { + virtq = &priv->virtqs[i]; + pthread_mutex_lock(&virtq->virtq_lock); cq = &priv->virtqs[i].eqp.cq; if (cq->cq_obj.cq && !cq->armed) mlx5_vdpa_cq_arm(priv, cq); + pthread_mutex_unlock(&virtq->virtq_lock); } } @@ -220,13 +226,18 @@ mlx5_vdpa_queue_complete(struct mlx5_vdpa_cq *cq) static uint32_t mlx5_vdpa_queues_complete(struct mlx5_vdpa_priv *priv) { - int i; + struct mlx5_vdpa_virtq *virtq; + struct mlx5_vdpa_cq *cq; uint32_t max = 0; + uint32_t comp; + int i; for (i = 0; i < priv->nr_virtqs; i++) { - 
struct mlx5_vdpa_cq *cq = &priv->virtqs[i].eqp.cq; - uint32_t comp = mlx5_vdpa_queue_complete(cq); - + virtq = &priv->virtqs[i]; + pthread_mutex_lock(&virtq->virtq_lock); + cq = &virtq->eqp.cq; + comp = mlx5_vdpa_queue_complete(cq); + pthread_mutex_unlock(&virtq->virtq_lock); if (comp > max) max = comp; } @@ -253,7 +264,7 @@ mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv) } /* Wait on all CQs channel for completion event. */ -static struct mlx5_vdpa_cq * +static struct mlx5_vdpa_virtq * mlx5_vdpa_event_wait(struct mlx5_vdpa_priv *priv __rte_unused) { #ifdef HAVE_IBV_DEVX_EVENT @@ -265,7 +276,8 @@ mlx5_vdpa_event_wait(struct mlx5_vdpa_priv *priv __rte_unused) sizeof(out.buf)); if (ret >= 0) - return (struct mlx5_vdpa_cq *)(uintptr_t)out.event_resp.cookie; + return (struct mlx5_vdpa_virtq *) + (uintptr_t)out.event_resp.cookie; DRV_LOG(INFO, "Got error in devx_get_event, ret = %d, errno = %d.", ret, errno); #endif @@ -276,7 +288,7 @@ static void * mlx5_vdpa_event_handle(void *arg) { struct mlx5_vdpa_priv *priv = arg; - struct mlx5_vdpa_cq *cq; + struct mlx5_vdpa_virtq *virtq; uint32_t max; switch (priv->event_mode) { @@ -284,7 +296,6 @@ mlx5_vdpa_event_handle(void *arg) case MLX5_VDPA_EVENT_MODE_FIXED_TIMER: priv->timer_delay_us = priv->event_us; while (1) { - pthread_mutex_lock(&priv->vq_config_lock); max = mlx5_vdpa_queues_complete(priv); if (max == 0 && priv->no_traffic_counter++ >= priv->no_traffic_max) { @@ -292,32 +303,37 @@ mlx5_vdpa_event_handle(void *arg) priv->vdev->device->name); mlx5_vdpa_arm_all_cqs(priv); do { - pthread_mutex_unlock - (&priv->vq_config_lock); - cq = mlx5_vdpa_event_wait(priv); - pthread_mutex_lock - (&priv->vq_config_lock); - if (cq == NULL || - mlx5_vdpa_queue_complete(cq) > 0) + virtq = mlx5_vdpa_event_wait(priv); + if (virtq == NULL) break; + pthread_mutex_lock( + &virtq->virtq_lock); + if (mlx5_vdpa_queue_complete( + &virtq->eqp.cq) > 0) { + pthread_mutex_unlock( + &virtq->virtq_lock); + break; + } + pthread_mutex_unlock( + 
&virtq->virtq_lock); } while (1); priv->timer_delay_us = priv->event_us; priv->no_traffic_counter = 0; } else if (max != 0) { priv->no_traffic_counter = 0; } - pthread_mutex_unlock(&priv->vq_config_lock); mlx5_vdpa_timer_sleep(priv, max); } return NULL; case MLX5_VDPA_EVENT_MODE_ONLY_INTERRUPT: do { - cq = mlx5_vdpa_event_wait(priv); - if (cq != NULL) { - pthread_mutex_lock(&priv->vq_config_lock); - if (mlx5_vdpa_queue_complete(cq) > 0) - mlx5_vdpa_cq_arm(priv, cq); - pthread_mutex_unlock(&priv->vq_config_lock); + virtq = mlx5_vdpa_event_wait(priv); + if (virtq != NULL) { + pthread_mutex_lock(&virtq->virtq_lock); + if (mlx5_vdpa_queue_complete( + &virtq->eqp.cq) > 0) + mlx5_vdpa_cq_arm(priv, &virtq->eqp.cq); + pthread_mutex_unlock(&virtq->virtq_lock); } } while (1); return NULL; @@ -339,7 +355,6 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused) struct mlx5_vdpa_virtq *virtq; uint64_t sec; - pthread_mutex_lock(&priv->vq_config_lock); while (mlx5_glue->devx_get_event(priv->err_chnl, &out.event_resp, sizeof(out.buf)) >= (ssize_t)sizeof(out.event_resp.cookie)) { @@ -351,10 +366,11 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused) continue; } virtq = &priv->virtqs[vq_index]; + pthread_mutex_lock(&virtq->virtq_lock); if (!virtq->enable || virtq->version != version) - continue; + goto unlock; if (rte_rdtsc() / rte_get_tsc_hz() < MLX5_VDPA_ERROR_TIME_SEC) - continue; + goto unlock; virtq->stopped = true; /* Query error info. 
*/ if (mlx5_vdpa_virtq_query(priv, vq_index)) @@ -384,8 +400,9 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused) for (i = 1; i < RTE_DIM(virtq->err_time); i++) virtq->err_time[i - 1] = virtq->err_time[i]; virtq->err_time[RTE_DIM(virtq->err_time) - 1] = rte_rdtsc(); +unlock: + pthread_mutex_unlock(&virtq->virtq_lock); } - pthread_mutex_unlock(&priv->vq_config_lock); #endif } @@ -533,11 +550,18 @@ mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv) void mlx5_vdpa_cqe_event_unset(struct mlx5_vdpa_priv *priv) { + struct mlx5_vdpa_virtq *virtq; void *status; + int i; if (priv->timer_tid) { pthread_cancel(priv->timer_tid); pthread_join(priv->timer_tid, &status); + /* The mutex may stay locked after event thread cancel, initiate it. */ + for (i = 0; i < priv->nr_virtqs; i++) { + virtq = &priv->virtqs[i]; + pthread_mutex_init(&virtq->virtq_lock, NULL); + } } priv->timer_tid = 0; } @@ -614,8 +638,9 @@ mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp) int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, - int callfd, struct mlx5_vdpa_event_qp *eqp) + int callfd, struct mlx5_vdpa_virtq *virtq) { + struct mlx5_vdpa_event_qp *eqp = &virtq->eqp; struct mlx5_devx_qp_attr attr = {0}; uint16_t log_desc_n = rte_log2_u32(desc_n); uint32_t ret; @@ -632,7 +657,8 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, } if (eqp->fw_qp) mlx5_vdpa_event_qp_destroy(eqp); - if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, &eqp->cq)) + if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, virtq) || + !eqp->cq.cq_obj.cq) return -1; attr.pd = priv->cdev->pdn; attr.ts_format = @@ -650,8 +676,8 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, attr.ts_format = mlx5_ts_format_conv(priv->cdev->config.hca_attr.qp_ts_format); ret = mlx5_devx_qp_create(priv->cdev->ctx, &(eqp->sw_qp), - attr.num_of_receive_wqes * - MLX5_WSEG_SIZE, &attr, SOCKET_ID_ANY); + attr.num_of_receive_wqes * MLX5_WSEG_SIZE, + &attr, 
SOCKET_ID_ANY); if (ret) { DRV_LOG(ERR, "Failed to create SW QP(%u).", rte_errno); goto error; @@ -668,3 +694,4 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, mlx5_vdpa_event_qp_destroy(eqp); return -1; } + diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c index a8faf0c116..efebf364d0 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c @@ -25,11 +25,18 @@ mlx5_vdpa_logging_enable(struct mlx5_vdpa_priv *priv, int enable) if (!virtq->configured) { DRV_LOG(DEBUG, "virtq %d is invalid for dirty bitmap " "enabling.", i); - } else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, + } else { + struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i]; + + pthread_mutex_lock(&virtq->virtq_lock); + if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, &attr)) { - DRV_LOG(ERR, "Failed to modify virtq %d for dirty " + pthread_mutex_unlock(&virtq->virtq_lock); + DRV_LOG(ERR, "Failed to modify virtq %d for dirty " "bitmap enabling.", i); - return -1; + return -1; + } + pthread_mutex_unlock(&virtq->virtq_lock); } } return 0; @@ -61,10 +68,19 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, virtq = &priv->virtqs[i]; if (!virtq->configured) { DRV_LOG(DEBUG, "virtq %d is invalid for LM.", i); - } else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, - &attr)) { - DRV_LOG(ERR, "Failed to modify virtq %d for LM.", i); - goto err; + } else { + struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i]; + + pthread_mutex_lock(&virtq->virtq_lock); + if (mlx5_devx_cmd_modify_virtq( + priv->virtqs[i].virtq, + &attr)) { + pthread_mutex_unlock(&virtq->virtq_lock); + DRV_LOG(ERR, + "Failed to modify virtq %d for LM.", i); + goto err; + } + pthread_mutex_unlock(&virtq->virtq_lock); } } return 0; @@ -79,6 +95,7 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, int mlx5_vdpa_lm_log(struct mlx5_vdpa_priv *priv) { + struct mlx5_vdpa_virtq *virtq; uint64_t 
features; int ret = rte_vhost_get_negotiated_features(priv->vid, &features); int i; @@ -90,10 +107,13 @@ mlx5_vdpa_lm_log(struct mlx5_vdpa_priv *priv) if (!RTE_VHOST_NEED_LOG(features)) return 0; for (i = 0; i < priv->nr_virtqs; ++i) { + virtq = &priv->virtqs[i]; if (!priv->virtqs[i].virtq) { DRV_LOG(DEBUG, "virtq %d is invalid for LM log.", i); } else { + pthread_mutex_lock(&virtq->virtq_lock); ret = mlx5_vdpa_virtq_stop(priv, i); + pthread_mutex_unlock(&virtq->virtq_lock); if (ret) { DRV_LOG(ERR, "Failed to stop virtq %d for LM " "log.", i); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c index d4b4375c88..4cbf09784e 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c @@ -237,19 +237,24 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv) int mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv) { - int ret = mlx5_vdpa_rqt_prepare(priv); + int ret; + pthread_mutex_lock(&priv->steer_update_lock); + ret = mlx5_vdpa_rqt_prepare(priv); if (ret == 0) { mlx5_vdpa_steer_unset(priv); } else if (ret < 0) { + pthread_mutex_unlock(&priv->steer_update_lock); return ret; } else if (!priv->steer.rss[0].flow) { ret = mlx5_vdpa_rss_flows_create(priv); if (ret) { DRV_LOG(ERR, "Cannot create RSS flows."); + pthread_mutex_unlock(&priv->steer_update_lock); return -1; } } + pthread_mutex_unlock(&priv->steer_update_lock); return 0; } diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index 55cbc9fad2..138b7bdbc5 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -24,13 +24,17 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) int nbytes; int retry; + pthread_mutex_lock(&virtq->virtq_lock); if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) { + pthread_mutex_unlock(&virtq->virtq_lock); DRV_LOG(ERR, "device %d queue %d down, skip kick handling", priv->vid, virtq->index); return; } - if 
(rte_intr_fd_get(virtq->intr_handle) < 0) + if (rte_intr_fd_get(virtq->intr_handle) < 0) { + pthread_mutex_unlock(&virtq->virtq_lock); return; + } for (retry = 0; retry < 3; ++retry) { nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf, 8); @@ -44,9 +48,14 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) } break; } - if (nbytes < 0) + if (nbytes < 0) { + pthread_mutex_unlock(&virtq->virtq_lock); return; + } + rte_spinlock_lock(&priv->db_lock); rte_write32(virtq->index, priv->virtq_db_addr); + rte_spinlock_unlock(&priv->db_lock); + pthread_mutex_unlock(&virtq->virtq_lock); if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) { DRV_LOG(ERR, "device %d queue %d down, skip kick handling", priv->vid, virtq->index); @@ -66,6 +75,33 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) DRV_LOG(DEBUG, "Ring virtq %u doorbell.", virtq->index); } +/* Virtq must be locked before calling this function. */ +static void +mlx5_vdpa_virtq_unregister_intr_handle(struct mlx5_vdpa_virtq *virtq) +{ + int ret = -EAGAIN; + + if (!virtq->intr_handle) + return; + if (rte_intr_fd_get(virtq->intr_handle) >= 0) { + while (ret == -EAGAIN) { + ret = rte_intr_callback_unregister(virtq->intr_handle, + mlx5_vdpa_virtq_kick_handler, virtq); + if (ret == -EAGAIN) { + DRV_LOG(DEBUG, "Try again to unregister fd %d of virtq %hu interrupt", + rte_intr_fd_get(virtq->intr_handle), + virtq->index); + pthread_mutex_unlock(&virtq->virtq_lock); + usleep(MLX5_VDPA_INTR_RETRIES_USEC); + pthread_mutex_lock(&virtq->virtq_lock); + } + } + (void)rte_intr_fd_set(virtq->intr_handle, -1); + } + rte_intr_instance_free(virtq->intr_handle); + virtq->intr_handle = NULL; +} + /* Release cached VQ resources. 
*/ void mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) @@ -75,6 +111,7 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) for (i = 0; i < priv->caps.max_num_virtio_queues; i++) { struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i]; + pthread_mutex_lock(&virtq->virtq_lock); virtq->configured = 0; for (j = 0; j < RTE_DIM(virtq->umems); ++j) { if (virtq->umems[j].obj) { @@ -90,28 +127,17 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) } if (virtq->eqp.fw_qp) mlx5_vdpa_event_qp_destroy(&virtq->eqp); + pthread_mutex_unlock(&virtq->virtq_lock); } } + static int mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) { int ret = -EAGAIN; - if (rte_intr_fd_get(virtq->intr_handle) >= 0) { - while (ret == -EAGAIN) { - ret = rte_intr_callback_unregister(virtq->intr_handle, - mlx5_vdpa_virtq_kick_handler, virtq); - if (ret == -EAGAIN) { - DRV_LOG(DEBUG, "Try again to unregister fd %d of virtq %hu interrupt", - rte_intr_fd_get(virtq->intr_handle), - virtq->index); - usleep(MLX5_VDPA_INTR_RETRIES_USEC); - } - } - rte_intr_fd_set(virtq->intr_handle, -1); - } - rte_intr_instance_free(virtq->intr_handle); + mlx5_vdpa_virtq_unregister_intr_handle(virtq); if (virtq->configured) { ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index); if (ret) @@ -128,10 +154,15 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) void mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv) { + struct mlx5_vdpa_virtq *virtq; int i; - for (i = 0; i < priv->nr_virtqs; i++) - mlx5_vdpa_virtq_unset(&priv->virtqs[i]); + for (i = 0; i < priv->nr_virtqs; i++) { + virtq = &priv->virtqs[i]; + pthread_mutex_lock(&virtq->virtq_lock); + mlx5_vdpa_virtq_unset(virtq); + pthread_mutex_unlock(&virtq->virtq_lock); + } priv->features = 0; priv->nr_virtqs = 0; } @@ -250,7 +281,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, MLX5_VIRTQ_EVENT_MODE_QP : MLX5_VIRTQ_EVENT_MODE_NO_MSIX; if (attr->event_mode == MLX5_VIRTQ_EVENT_MODE_QP) { ret = mlx5_vdpa_event_qp_prepare(priv, - vq->size, 
vq->callfd, &virtq->eqp); + vq->size, vq->callfd, virtq); if (ret) { DRV_LOG(ERR, "Failed to create event QPs for virtq %d.", @@ -420,7 +451,9 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) } claim_zero(rte_vhost_enable_guest_notification(priv->vid, index, 1)); virtq->configured = 1; + rte_spinlock_lock(&priv->db_lock); rte_write32(virtq->index, priv->virtq_db_addr); + rte_spinlock_unlock(&priv->db_lock); /* Setup doorbell mapping. */ virtq->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); @@ -441,7 +474,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) if (rte_intr_callback_register(virtq->intr_handle, mlx5_vdpa_virtq_kick_handler, virtq)) { - rte_intr_fd_set(virtq->intr_handle, -1); + (void)rte_intr_fd_set(virtq->intr_handle, -1); DRV_LOG(ERR, "Failed to register virtq %d interrupt.", index); goto error; @@ -537,6 +570,7 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) uint32_t i; uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid); int ret = rte_vhost_get_negotiated_features(priv->vid, &priv->features); + struct mlx5_vdpa_virtq *virtq; if (ret || mlx5_vdpa_features_validate(priv)) { DRV_LOG(ERR, "Failed to configure negotiated features."); @@ -556,9 +590,17 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) return -1; } priv->nr_virtqs = nr_vring; - for (i = 0; i < nr_vring; i++) - if (priv->virtqs[i].enable && mlx5_vdpa_virtq_setup(priv, i)) - goto error; + for (i = 0; i < nr_vring; i++) { + virtq = &priv->virtqs[i]; + if (virtq->enable) { + pthread_mutex_lock(&virtq->virtq_lock); + if (mlx5_vdpa_virtq_setup(priv, i)) { + pthread_mutex_unlock(&virtq->virtq_lock); + goto error; + } + pthread_mutex_unlock(&virtq->virtq_lock); + } + } return 0; error: mlx5_vdpa_virtqs_release(priv); From patchwork Mon Jun 6 11:46:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Li Zhang X-Patchwork-Id: 112384 X-Patchwork-Delegate: 
maxime.coquelin@redhat.com
From: Li Zhang To: , , , CC: , , , Subject: [PATCH v1 10/17] vdpa/mlx5: add multi-thread management for configuration Date: Mon, 6 Jun 2022 14:46:43 +0300 Message-ID: <20220606114650.209612-11-lizh@nvidia.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com> References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com> MIME-Version: 1.0
X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions Errors-To: dev-bounces@dpdk.org

The LM process includes many object creations and destructions on both the source and the destination servers. As the LM time increases, so does the packet drop of the VM. To improve the LM time, the mlx5 FW configurations need to be done in parallel, so add internal multi-thread management in the driver for it. A new devarg defines the number of threads and their CPU core. The management is shared between all the devices of the driver. Since the event_core devarg also affects the datapath events thread, reduce the priority of the datapath event thread to allow fast configuration of the devices doing the LM.
Signed-off-by: Li Zhang --- doc/guides/vdpadevs/mlx5.rst | 11 +++ drivers/vdpa/mlx5/meson.build | 1 + drivers/vdpa/mlx5/mlx5_vdpa.c | 41 ++++++++ drivers/vdpa/mlx5/mlx5_vdpa.h | 36 +++++++ drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 129 ++++++++++++++++++++++++++ drivers/vdpa/mlx5/mlx5_vdpa_event.c | 2 +- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 8 +- 7 files changed, 223 insertions(+), 5 deletions(-) create mode 100644 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst index 0ad77bf535..b75a01688d 100644 --- a/doc/guides/vdpadevs/mlx5.rst +++ b/doc/guides/vdpadevs/mlx5.rst @@ -78,6 +78,17 @@ for an additional list of options shared with other mlx5 drivers. CPU core number to set polling thread affinity to, default to control plane cpu. +- ``max_conf_threads`` parameter [int] + + Allow the driver to use internal threads to speed up configuration. + All the threads will be opened on the same core as the event completion queue scheduling thread. + + - 0, default, don't use internal threads for configuration. + + - 1 - 256, number of internal threads in addition to the caller thread (8 is suggested). + This value, if not 0, should be the same for all the devices; + the first probed device takes it, together with the event_core, for all the multi-thread configurations in the driver.
+ - ``hw_latency_mode`` parameter [int] The completion queue moderation mode: diff --git a/drivers/vdpa/mlx5/meson.build b/drivers/vdpa/mlx5/meson.build index 0fa82ad257..9d8dbb1a82 100644 --- a/drivers/vdpa/mlx5/meson.build +++ b/drivers/vdpa/mlx5/meson.build @@ -15,6 +15,7 @@ sources = files( 'mlx5_vdpa_virtq.c', 'mlx5_vdpa_steer.c', 'mlx5_vdpa_lm.c', + 'mlx5_vdpa_cthread.c', ) cflags_options = [ '-std=c11', diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index e5a11f72fd..a9d023ed08 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -50,6 +50,8 @@ TAILQ_HEAD(mlx5_vdpa_privs, mlx5_vdpa_priv) priv_list = TAILQ_HEAD_INITIALIZER(priv_list); static pthread_mutex_t priv_list_lock = PTHREAD_MUTEX_INITIALIZER; +struct mlx5_vdpa_conf_thread_mng conf_thread_mng; + static void mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv); static struct mlx5_vdpa_priv * @@ -493,6 +495,29 @@ mlx5_vdpa_args_check_handler(const char *key, const char *val, void *opaque) DRV_LOG(WARNING, "Invalid event_core %s.", val); else priv->event_core = tmp; + } else if (strcmp(key, "max_conf_threads") == 0) { + if (tmp) { + priv->use_c_thread = true; + if (!conf_thread_mng.initializer_priv) { + conf_thread_mng.initializer_priv = priv; + if (tmp > MLX5_VDPA_MAX_C_THRD) { + DRV_LOG(WARNING, + "Invalid max_conf_threads %s " + "and set max_conf_threads to %d", + val, MLX5_VDPA_MAX_C_THRD); + tmp = MLX5_VDPA_MAX_C_THRD; + } + conf_thread_mng.max_thrds = tmp; + } else if (tmp != conf_thread_mng.max_thrds) { + DRV_LOG(WARNING, + "max_conf_threads is PMD argument and not per device, " + "only the first device configuration set it, current value is %d " + "and will not be changed to %d.", + conf_thread_mng.max_thrds, (int)tmp); + } + } else { + priv->use_c_thread = false; + } } else if (strcmp(key, "hw_latency_mode") == 0) { priv->hw_latency_mode = (uint32_t)tmp; } else if (strcmp(key, "hw_max_latency_us") == 0) { @@ -521,6 +546,9 @@ 
mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist, "hw_max_latency_us", "hw_max_pending_comp", "no_traffic_time", + "queue_size", + "queues", + "max_conf_threads", NULL, }; @@ -725,6 +753,13 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev, pthread_mutex_init(&priv->steer_update_lock, NULL); priv->cdev = cdev; mlx5_vdpa_config_get(mkvlist, priv); + if (priv->use_c_thread) { + if (conf_thread_mng.initializer_priv == priv) + if (mlx5_vdpa_mult_threads_create(priv->event_core)) + goto error; + __atomic_fetch_add(&conf_thread_mng.refcnt, 1, + __ATOMIC_RELAXED); + } if (mlx5_vdpa_create_dev_resources(priv)) goto error; priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops); @@ -739,6 +774,8 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev, pthread_mutex_unlock(&priv_list_lock); return 0; error: + if (conf_thread_mng.initializer_priv == priv) + mlx5_vdpa_mult_threads_destroy(false); if (priv) mlx5_vdpa_dev_release(priv); return -rte_errno; @@ -806,6 +843,10 @@ mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv) mlx5_vdpa_release_dev_resources(priv); if (priv->vdev) rte_vdpa_unregister_device(priv->vdev); + if (priv->use_c_thread) + if (__atomic_fetch_sub(&conf_thread_mng.refcnt, + 1, __ATOMIC_RELAXED) == 1) + mlx5_vdpa_mult_threads_destroy(true); rte_free(priv); } diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index 3fd5eefc5e..4e7c2557b7 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -73,6 +73,22 @@ enum { MLX5_VDPA_NOTIFIER_STATE_ERR }; +#define MLX5_VDPA_MAX_C_THRD 256 + +/* Generic mlx5_vdpa_c_thread information. 
*/ +struct mlx5_vdpa_c_thread { + pthread_t tid; +}; + +struct mlx5_vdpa_conf_thread_mng { + void *initializer_priv; + uint32_t refcnt; + uint32_t max_thrds; + pthread_mutex_t cthrd_lock; + struct mlx5_vdpa_c_thread cthrd[MLX5_VDPA_MAX_C_THRD]; +}; +extern struct mlx5_vdpa_conf_thread_mng conf_thread_mng; + struct mlx5_vdpa_virtq { SLIST_ENTRY(mlx5_vdpa_virtq) next; uint8_t enable; @@ -126,6 +142,7 @@ enum mlx5_dev_state { struct mlx5_vdpa_priv { TAILQ_ENTRY(mlx5_vdpa_priv) next; bool connected; + bool use_c_thread; enum mlx5_dev_state state; rte_spinlock_t db_lock; pthread_mutex_t steer_update_lock; @@ -496,4 +513,23 @@ mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv); bool mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv); + +/** + * Create configuration multi-threads resource + * + * @param[in] cpu_core + * CPU core number to set configuration threads affinity to. + * + * @return + * 0 on success, a negative value otherwise. + */ +int +mlx5_vdpa_mult_threads_create(int cpu_core); + +/** + * Destroy configuration multi-threads resource + * + */ +void +mlx5_vdpa_mult_threads_destroy(bool need_unlock); #endif /* RTE_PMD_MLX5_VDPA_H_ */ diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c new file mode 100644 index 0000000000..ba7d8b63b3 --- /dev/null +++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c @@ -0,0 +1,129 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include + +#include "mlx5_vdpa_utils.h" +#include "mlx5_vdpa.h" + +static void * +mlx5_vdpa_c_thread_handle(void *arg) +{ + /* To be added later. 
*/ + return arg; +} + +static void +mlx5_vdpa_c_thread_destroy(uint32_t thrd_idx, bool need_unlock) +{ + if (conf_thread_mng.cthrd[thrd_idx].tid) { + pthread_cancel(conf_thread_mng.cthrd[thrd_idx].tid); + pthread_join(conf_thread_mng.cthrd[thrd_idx].tid, NULL); + conf_thread_mng.cthrd[thrd_idx].tid = 0; + if (need_unlock) + pthread_mutex_init(&conf_thread_mng.cthrd_lock, NULL); + } +} + +static int +mlx5_vdpa_c_thread_create(int cpu_core) +{ + const struct sched_param sp = { + .sched_priority = sched_get_priority_max(SCHED_RR), + }; + rte_cpuset_t cpuset; + pthread_attr_t attr; + uint32_t thrd_idx; + char name[32]; + int ret; + + pthread_mutex_lock(&conf_thread_mng.cthrd_lock); + pthread_attr_init(&attr); + ret = pthread_attr_setschedpolicy(&attr, SCHED_RR); + if (ret) { + DRV_LOG(ERR, "Failed to set thread sched policy = RR."); + goto c_thread_err; + } + ret = pthread_attr_setschedparam(&attr, &sp); + if (ret) { + DRV_LOG(ERR, "Failed to set thread priority."); + goto c_thread_err; + } + for (thrd_idx = 0; thrd_idx < conf_thread_mng.max_thrds; + thrd_idx++) { + ret = pthread_create(&conf_thread_mng.cthrd[thrd_idx].tid, + &attr, mlx5_vdpa_c_thread_handle, + (void *)&conf_thread_mng); + if (ret) { + DRV_LOG(ERR, "Failed to create vdpa multi-threads %d.", + thrd_idx); + goto c_thread_err; + } + CPU_ZERO(&cpuset); + if (cpu_core != -1) + CPU_SET(cpu_core, &cpuset); + else + cpuset = rte_lcore_cpuset(rte_get_main_lcore()); + ret = pthread_setaffinity_np( + conf_thread_mng.cthrd[thrd_idx].tid, + sizeof(cpuset), &cpuset); + if (ret) { + DRV_LOG(ERR, "Failed to set thread affinity for " + "vdpa multi-threads %d.", thrd_idx); + goto c_thread_err; + } + snprintf(name, sizeof(name), "vDPA-mthread-%d", thrd_idx); + ret = pthread_setname_np( + conf_thread_mng.cthrd[thrd_idx].tid, name); + if (ret) + DRV_LOG(ERR, "Failed to set vdpa multi-threads name %s.", + name); + else + DRV_LOG(DEBUG, "Thread name: %s.", name); + } + pthread_mutex_unlock(&conf_thread_mng.cthrd_lock); + 
return 0; +c_thread_err: + for (thrd_idx = 0; thrd_idx < conf_thread_mng.max_thrds; + thrd_idx++) + mlx5_vdpa_c_thread_destroy(thrd_idx, false); + pthread_mutex_unlock(&conf_thread_mng.cthrd_lock); + return -1; +} + +int +mlx5_vdpa_mult_threads_create(int cpu_core) +{ + pthread_mutex_init(&conf_thread_mng.cthrd_lock, NULL); + if (mlx5_vdpa_c_thread_create(cpu_core)) { + DRV_LOG(ERR, "Cannot create vDPA configuration threads."); + mlx5_vdpa_mult_threads_destroy(false); + return -1; + } + return 0; +} + +void +mlx5_vdpa_mult_threads_destroy(bool need_unlock) +{ + uint32_t thrd_idx; + + if (!conf_thread_mng.initializer_priv) + return; + for (thrd_idx = 0; thrd_idx < conf_thread_mng.max_thrds; + thrd_idx++) + mlx5_vdpa_c_thread_destroy(thrd_idx, need_unlock); + pthread_mutex_destroy(&conf_thread_mng.cthrd_lock); + memset(&conf_thread_mng, 0, sizeof(struct mlx5_vdpa_conf_thread_mng)); +} diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c index 2b0f5936d1..b45fbac146 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c @@ -507,7 +507,7 @@ mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv) pthread_attr_t attr; char name[16]; const struct sched_param sp = { - .sched_priority = sched_get_priority_max(SCHED_RR), + .sched_priority = sched_get_priority_max(SCHED_RR) - 1, }; if (!priv->eventc) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index 138b7bdbc5..599809b09b 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -43,7 +43,7 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) errno == EWOULDBLOCK || errno == EAGAIN) continue; - DRV_LOG(ERR, "Failed to read kickfd of virtq %d: %s", + DRV_LOG(ERR, "Failed to read kickfd of virtq %d: %s.", virtq->index, strerror(errno)); } break; @@ -57,7 +57,7 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) rte_spinlock_unlock(&priv->db_lock); pthread_mutex_unlock(&virtq->virtq_lock); if 
(priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) { - DRV_LOG(ERR, "device %d queue %d down, skip kick handling", + DRV_LOG(ERR, "device %d queue %d down, skip kick handling.", priv->vid, virtq->index); return; } @@ -218,7 +218,7 @@ mlx5_vdpa_virtq_query(struct mlx5_vdpa_priv *priv, int index) return -1; } if (attr.state == MLX5_VIRTQ_STATE_ERROR) - DRV_LOG(WARNING, "vid %d vring %d hw error=%hhu", + DRV_LOG(WARNING, "vid %d vring %d hw error=%hhu.", priv->vid, index, attr.error_type); return 0; } @@ -380,7 +380,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, if (ret) { last_avail_idx = 0; last_used_idx = 0; - DRV_LOG(WARNING, "Couldn't get vring base, idx are set to 0"); + DRV_LOG(WARNING, "Couldn't get vring base, idx are set to 0."); } else { DRV_LOG(INFO, "vid %d: Init last_avail_idx=%d, last_used_idx=%d for " "virtq %d.", priv->vid, last_avail_idx, From patchwork Mon Jun 6 11:46:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Li Zhang X-Patchwork-Id: 112385 X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang To: , , , CC: , , , Subject: [PATCH v1 11/17] vdpa/mlx5: add task ring for MT management Date: Mon, 6 Jun 2022 14:46:44 +0300 Message-ID: <20220606114650.209612-12-lizh@nvidia.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com> References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com> MIME-Version: 1.0
List-Id: DPDK patches and discussions

The configuration thread's tasks need a container that supports multiple
tasks assigned to a thread in parallel. Use an rte_ring container per
thread to manage the thread tasks without locks. The caller thread from
the user context opens a task to a thread and enqueues it to the
thread's ring. The thread polls its ring and dequeues tasks. That is why
the ring should be in multi-producer and single-consumer mode. An atomic
counter manages the task completion notification. The threads report
errors to the caller through a dedicated error counter per task.

Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  17 ++++
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 115 +++++++++++++++++++++++++-
 2 files changed, 130 insertions(+), 2 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 4e7c2557b7..2bbb868ec6 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -74,10 +74,22 @@ enum {
 };
 
 #define MLX5_VDPA_MAX_C_THRD 256
+#define MLX5_VDPA_MAX_TASKS_PER_THRD 4096
+#define MLX5_VDPA_TASKS_PER_DEV 64
+
+/* Generic task information and size must be multiple of 4B. */
+struct mlx5_vdpa_task {
+	struct mlx5_vdpa_priv *priv;
+	uint32_t *remaining_cnt;
+	uint32_t *err_cnt;
+	uint32_t idx;
+} __rte_packed __rte_aligned(4);
 
 /* Generic mlx5_vdpa_c_thread information.
  */
 struct mlx5_vdpa_c_thread {
 	pthread_t tid;
+	struct rte_ring *rng;
+	pthread_cond_t c_cond;
 };
 
 struct mlx5_vdpa_conf_thread_mng {
@@ -532,4 +544,9 @@ mlx5_vdpa_mult_threads_create(int cpu_core);
  */
 void
 mlx5_vdpa_mult_threads_destroy(bool need_unlock);
+
+bool
+mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
+		uint32_t thrd_idx,
+		uint32_t num);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index ba7d8b63b3..1fdc92d3ad 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -11,17 +11,103 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "mlx5_vdpa_utils.h"
 #include "mlx5_vdpa.h"
 
+static inline uint32_t
+mlx5_vdpa_c_thrd_ring_dequeue_bulk(struct rte_ring *r,
+		void **obj, uint32_t n, uint32_t *avail)
+{
+	uint32_t m;
+
+	m = rte_ring_dequeue_bulk_elem_start(r, obj,
+		sizeof(struct mlx5_vdpa_task), n, avail);
+	n = (m == n) ? n : 0;
+	rte_ring_dequeue_elem_finish(r, n);
+	return n;
+}
+
+static inline uint32_t
+mlx5_vdpa_c_thrd_ring_enqueue_bulk(struct rte_ring *r,
+		void * const *obj, uint32_t n, uint32_t *free)
+{
+	uint32_t m;
+
+	m = rte_ring_enqueue_bulk_elem_start(r, n, free);
+	n = (m == n) ? n : 0;
+	rte_ring_enqueue_elem_finish(r, obj,
+		sizeof(struct mlx5_vdpa_task), n);
+	return n;
+}
+
+bool
+mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
+		uint32_t thrd_idx,
+		uint32_t num)
+{
+	struct rte_ring *rng = conf_thread_mng.cthrd[thrd_idx].rng;
+	struct mlx5_vdpa_task task[MLX5_VDPA_TASKS_PER_DEV];
+	uint32_t i;
+
+	MLX5_ASSERT(num <= MLX5_VDPA_TASKS_PER_DEV);
+	for (i = 0 ; i < num; i++) {
+		task[i].priv = priv;
+		/* To be added later. */
+	}
+	if (!mlx5_vdpa_c_thrd_ring_enqueue_bulk(rng, (void **)&task, num, NULL))
+		return -1;
+	for (i = 0 ; i < num; i++)
+		if (task[i].remaining_cnt)
+			__atomic_fetch_add(task[i].remaining_cnt, 1,
+				__ATOMIC_RELAXED);
+	/* wake up conf thread.
+	 */
+	pthread_mutex_lock(&conf_thread_mng.cthrd_lock);
+	pthread_cond_signal(&conf_thread_mng.cthrd[thrd_idx].c_cond);
+	pthread_mutex_unlock(&conf_thread_mng.cthrd_lock);
+	return 0;
+}
+
 static void *
 mlx5_vdpa_c_thread_handle(void *arg)
 {
-	/* To be added later. */
-	return arg;
+	struct mlx5_vdpa_conf_thread_mng *multhrd = arg;
+	pthread_t thread_id = pthread_self();
+	struct mlx5_vdpa_priv *priv;
+	struct mlx5_vdpa_task task;
+	struct rte_ring *rng;
+	uint32_t thrd_idx;
+	uint32_t task_num;
+
+	for (thrd_idx = 0; thrd_idx < multhrd->max_thrds;
+		thrd_idx++)
+		if (multhrd->cthrd[thrd_idx].tid == thread_id)
+			break;
+	if (thrd_idx >= multhrd->max_thrds)
+		return NULL;
+	rng = multhrd->cthrd[thrd_idx].rng;
+	while (1) {
+		task_num = mlx5_vdpa_c_thrd_ring_dequeue_bulk(rng,
+			(void **)&task, 1, NULL);
+		if (!task_num) {
+			/* No task and condition wait. */
+			pthread_mutex_lock(&multhrd->cthrd_lock);
+			pthread_cond_wait(
+				&multhrd->cthrd[thrd_idx].c_cond,
+				&multhrd->cthrd_lock);
+			pthread_mutex_unlock(&multhrd->cthrd_lock);
+		}
+		priv = task.priv;
+		if (priv == NULL)
+			continue;
+		__atomic_fetch_sub(task.remaining_cnt,
+			1, __ATOMIC_RELAXED);
+		/* To be added later.
+		 */
+	}
+	return NULL;
 }
 
 static void
@@ -34,6 +120,10 @@ mlx5_vdpa_c_thread_destroy(uint32_t thrd_idx, bool need_unlock)
 		if (need_unlock)
 			pthread_mutex_init(&conf_thread_mng.cthrd_lock, NULL);
 	}
+	if (conf_thread_mng.cthrd[thrd_idx].rng) {
+		rte_ring_free(conf_thread_mng.cthrd[thrd_idx].rng);
+		conf_thread_mng.cthrd[thrd_idx].rng = NULL;
+	}
 }
 
 static int
@@ -45,6 +135,7 @@ mlx5_vdpa_c_thread_create(int cpu_core)
 	rte_cpuset_t cpuset;
 	pthread_attr_t attr;
 	uint32_t thrd_idx;
+	uint32_t ring_num;
 	char name[32];
 	int ret;
 
@@ -60,8 +151,26 @@ mlx5_vdpa_c_thread_create(int cpu_core)
 		DRV_LOG(ERR, "Failed to set thread priority.");
 		goto c_thread_err;
 	}
+	ring_num = MLX5_VDPA_MAX_TASKS_PER_THRD / conf_thread_mng.max_thrds;
+	if (!ring_num) {
+		DRV_LOG(ERR, "Invalid ring number for thread.");
+		goto c_thread_err;
+	}
 	for (thrd_idx = 0; thrd_idx < conf_thread_mng.max_thrds;
		thrd_idx++) {
+		snprintf(name, sizeof(name), "vDPA-mthread-ring-%d",
+			thrd_idx);
+		conf_thread_mng.cthrd[thrd_idx].rng = rte_ring_create_elem(name,
+			sizeof(struct mlx5_vdpa_task), ring_num,
+			rte_socket_id(),
+			RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ |
+			RING_F_EXACT_SZ);
+		if (!conf_thread_mng.cthrd[thrd_idx].rng) {
+			DRV_LOG(ERR,
+				"Failed to create vdpa multi-threads %d ring.",
+				thrd_idx);
+			goto c_thread_err;
+		}
 		ret = pthread_create(&conf_thread_mng.cthrd[thrd_idx].tid,
			&attr, mlx5_vdpa_c_thread_handle,
			(void *)&conf_thread_mng);
@@ -91,6 +200,8 @@ mlx5_vdpa_c_thread_create(int cpu_core)
 				name);
 		else
 			DRV_LOG(DEBUG, "Thread name: %s.", name);
+		pthread_cond_init(&conf_thread_mng.cthrd[thrd_idx].c_cond,
+			NULL);
 	}
 	pthread_mutex_unlock(&conf_thread_mng.cthrd_lock);
 	return 0;

From patchwork Mon Jun 6 11:46:45 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112386
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [PATCH v1 12/17] vdpa/mlx5: add MT task for VM memory registration
Date: Mon, 6 Jun 2022 14:46:45 +0300
Message-ID: <20220606114650.209612-13-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
The driver creates a direct MR object in the HW for each VM memory
region, which maps the VM physical address to the actual physical
address. Later, after all the MRs are ready, the driver creates an
indirect MR to group all the direct MRs into one virtual space from the
HW perspective.

Create the direct MRs in parallel using the MT mechanism. After
completion, the primary thread creates the indirect MR needed for the
following virtq configurations. This optimization accelerates the LM
process and reduces its time by 5%.
Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.c         |   1 -
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  31 ++-
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c |  47 ++++-
 drivers/vdpa/mlx5/mlx5_vdpa_mem.c     | 270 ++++++++++++++++++--------
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   |   6 +-
 5 files changed, 258 insertions(+), 97 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index a9d023ed08..e3b32fa087 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -768,7 +768,6 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 		rte_errno = rte_errno ? rte_errno : EINVAL;
 		goto error;
 	}
-	SLIST_INIT(&priv->mr_list);
 	pthread_mutex_lock(&priv_list_lock);
 	TAILQ_INSERT_TAIL(&priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 2bbb868ec6..3316ce42be 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -59,7 +59,6 @@ struct mlx5_vdpa_event_qp {
 };
 
 struct mlx5_vdpa_query_mr {
-	SLIST_ENTRY(mlx5_vdpa_query_mr) next;
 	union {
 		struct ibv_mr *mr;
 		struct mlx5_devx_obj *mkey;
@@ -76,10 +75,17 @@ enum {
 #define MLX5_VDPA_MAX_C_THRD 256
 #define MLX5_VDPA_MAX_TASKS_PER_THRD 4096
 #define MLX5_VDPA_TASKS_PER_DEV 64
+#define MLX5_VDPA_MAX_MRS 0xFFFF
+
+/* Vdpa task types. */
+enum mlx5_vdpa_task_type {
+	MLX5_VDPA_TASK_REG_MR = 1,
+};
 
 /* Generic task information and size must be multiple of 4B.
  */
 struct mlx5_vdpa_task {
 	struct mlx5_vdpa_priv *priv;
+	enum mlx5_vdpa_task_type type;
 	uint32_t *remaining_cnt;
 	uint32_t *err_cnt;
 	uint32_t idx;
@@ -101,6 +107,14 @@ struct mlx5_vdpa_conf_thread_mng {
 };
 extern struct mlx5_vdpa_conf_thread_mng conf_thread_mng;
 
+struct mlx5_vdpa_vmem_info {
+	struct rte_vhost_memory *vmem;
+	uint32_t entries_num;
+	uint64_t gcd;
+	uint64_t size;
+	uint8_t mode;
+};
+
 struct mlx5_vdpa_virtq {
 	SLIST_ENTRY(mlx5_vdpa_virtq) next;
 	uint8_t enable;
@@ -176,7 +190,7 @@ struct mlx5_vdpa_priv {
 	struct mlx5_hca_vdpa_attr caps;
 	uint32_t gpa_mkey_index;
 	struct ibv_mr *null_mr;
-	struct rte_vhost_memory *vmem;
+	struct mlx5_vdpa_vmem_info vmem_info;
 	struct mlx5dv_devx_event_channel *eventc;
 	struct mlx5dv_devx_event_channel *err_chnl;
 	struct mlx5_uar uar;
@@ -187,11 +201,13 @@ struct mlx5_vdpa_priv {
 	uint8_t num_lag_ports;
 	uint64_t features; /* Negotiated features. */
 	uint16_t log_max_rqt_size;
+	uint16_t last_c_thrd_idx;
+	uint16_t num_mrs; /* Number of memory regions.
+		*/
 	struct mlx5_vdpa_steer steer;
 	struct mlx5dv_var *var;
 	void *virtq_db_addr;
 	struct mlx5_pmd_wrapped_mr lm_mr;
-	SLIST_HEAD(mr_list, mlx5_vdpa_query_mr) mr_list;
+	struct mlx5_vdpa_query_mr **mrs;
 	struct mlx5_vdpa_virtq virtqs[];
 };
 
@@ -548,5 +564,12 @@ mlx5_vdpa_mult_threads_destroy(bool need_unlock);
 bool
 mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 		uint32_t thrd_idx,
-		uint32_t num);
+		enum mlx5_vdpa_task_type task_type,
+		uint32_t *bulk_refcnt, uint32_t *bulk_err_cnt,
+		void **task_data, uint32_t num);
+int
+mlx5_vdpa_register_mr(struct mlx5_vdpa_priv *priv, uint32_t idx);
+bool
+mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt,
+		uint32_t *err_cnt, uint32_t sleep_time);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index 1fdc92d3ad..10391931ae 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -47,16 +47,23 @@ mlx5_vdpa_c_thrd_ring_enqueue_bulk(struct rte_ring *r,
 bool
 mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 		uint32_t thrd_idx,
-		uint32_t num)
+		enum mlx5_vdpa_task_type task_type,
+		uint32_t *remaining_cnt, uint32_t *err_cnt,
+		void **task_data, uint32_t num)
 {
 	struct rte_ring *rng = conf_thread_mng.cthrd[thrd_idx].rng;
 	struct mlx5_vdpa_task task[MLX5_VDPA_TASKS_PER_DEV];
+	uint32_t *data = (uint32_t *)task_data;
 	uint32_t i;
 
 	MLX5_ASSERT(num <= MLX5_VDPA_TASKS_PER_DEV);
 	for (i = 0 ; i < num; i++) {
 		task[i].priv = priv;
 		/* To be added later. */
+		task[i].type = task_type;
+		task[i].remaining_cnt = remaining_cnt;
+		task[i].err_cnt = err_cnt;
+		task[i].idx = data[i];
 	}
 	if (!mlx5_vdpa_c_thrd_ring_enqueue_bulk(rng, (void **)&task, num, NULL))
 		return -1;
@@ -71,6 +78,23 @@ mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 	return 0;
 }
 
+bool
+mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt,
+		uint32_t *err_cnt, uint32_t sleep_time)
+{
+	/* Check and wait all tasks done.
+	 */
+	while (__atomic_load_n(remaining_cnt,
+		__ATOMIC_RELAXED) != 0) {
+		rte_delay_us_sleep(sleep_time);
+	}
+	if (__atomic_load_n(err_cnt,
+		__ATOMIC_RELAXED)) {
+		DRV_LOG(ERR, "Tasks done with error.");
+		return true;
+	}
+	return false;
+}
+
 static void *
 mlx5_vdpa_c_thread_handle(void *arg)
 {
@@ -81,6 +105,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
 	struct rte_ring *rng;
 	uint32_t thrd_idx;
 	uint32_t task_num;
+	int ret;
 
 	for (thrd_idx = 0; thrd_idx < multhrd->max_thrds;
 		thrd_idx++)
@@ -99,13 +124,29 @@ mlx5_vdpa_c_thread_handle(void *arg)
 			&multhrd->cthrd[thrd_idx].c_cond,
 			&multhrd->cthrd_lock);
 			pthread_mutex_unlock(&multhrd->cthrd_lock);
+			continue;
 		}
 		priv = task.priv;
 		if (priv == NULL)
 			continue;
-		__atomic_fetch_sub(task.remaining_cnt,
+		switch (task.type) {
+		case MLX5_VDPA_TASK_REG_MR:
+			ret = mlx5_vdpa_register_mr(priv, task.idx);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to register mr %d.", task.idx);
+				__atomic_fetch_add(task.err_cnt, 1,
+					__ATOMIC_RELAXED);
+			}
+			break;
+		default:
+			DRV_LOG(ERR, "Invalid vdpa task type %d.",
+				task.type);
+			break;
+		}
+		if (task.remaining_cnt)
+			__atomic_fetch_sub(task.remaining_cnt,
 			1, __ATOMIC_RELAXED);
-		/* To be added later.
-		 */
 	}
 	return NULL;
 }

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
index d6e3dd664b..e333f0bca6 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
@@ -17,25 +17,33 @@
 void
 mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv)
 {
+	struct mlx5_vdpa_query_mr *mrs =
+		(struct mlx5_vdpa_query_mr *)priv->mrs;
 	struct mlx5_vdpa_query_mr *entry;
-	struct mlx5_vdpa_query_mr *next;
+	int i;
 
-	entry = SLIST_FIRST(&priv->mr_list);
-	while (entry) {
-		next = SLIST_NEXT(entry, next);
-		if (entry->is_indirect)
-			claim_zero(mlx5_devx_cmd_destroy(entry->mkey));
-		else
-			claim_zero(mlx5_glue->dereg_mr(entry->mr));
-		SLIST_REMOVE(&priv->mr_list, entry, mlx5_vdpa_query_mr, next);
-		rte_free(entry);
-		entry = next;
+	if (priv->mrs) {
+		for (i = priv->num_mrs - 1; i >= 0; i--) {
+			entry = &mrs[i];
+			if (entry->is_indirect) {
+				if (entry->mkey)
+					claim_zero(
+					mlx5_devx_cmd_destroy(entry->mkey));
+			} else {
+				if (entry->mr)
+					claim_zero(
+					mlx5_glue->dereg_mr(entry->mr));
+			}
+		}
+		rte_free(priv->mrs);
+		priv->mrs = NULL;
+		priv->num_mrs = 0;
 	}
-	SLIST_INIT(&priv->mr_list);
-	if (priv->vmem) {
-		free(priv->vmem);
-		priv->vmem = NULL;
+	if (priv->vmem_info.vmem) {
+		free(priv->vmem_info.vmem);
+		priv->vmem_info.vmem = NULL;
 	}
+	priv->gpa_mkey_index = 0;
 }
 
 static int
@@ -167,72 +175,29 @@ mlx5_vdpa_mem_cmp(struct rte_vhost_memory *mem1, struct rte_vhost_memory *mem2)
 #define KLM_SIZE_MAX_ALIGN(sz) ((sz) > MLX5_MAX_KLM_BYTE_COUNT ? \
 				MLX5_MAX_KLM_BYTE_COUNT : (sz))
 
-/*
- * The target here is to group all the physical memory regions of the
- * virtio device in one indirect mkey.
- * For KLM Fixed Buffer Size mode (HW find the translation entry in one
- * read according to the guest physical address):
- * All the sub-direct mkeys of it must be in the same size, hence, each
- * one of them should be in the GCD size of all the virtio memory
- * regions and the holes between them.
- * For KLM mode (each entry may be in different size so HW must iterate
- * the entries):
- * Each virtio memory region and each hole between them have one entry,
- * just need to cover the maximum allowed size(2G) by splitting entries
- * which their associated memory regions are bigger than 2G.
- * It means that each virtio memory region may be mapped to more than
- * one direct mkey in the 2 modes.
- * All the holes of invalid memory between the virtio memory regions
- * will be mapped to the null memory region for security.
- */
-int
-mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
+static int
+mlx5_vdpa_create_indirect_mkey(struct mlx5_vdpa_priv *priv)
 {
 	struct mlx5_devx_mkey_attr mkey_attr;
-	struct mlx5_vdpa_query_mr *entry = NULL;
-	struct rte_vhost_mem_region *reg = NULL;
-	uint8_t mode = 0;
-	uint32_t entries_num = 0;
-	uint32_t i;
-	uint64_t gcd = 0;
+	struct mlx5_vdpa_query_mr *mrs =
+		(struct mlx5_vdpa_query_mr *)priv->mrs;
+	struct mlx5_vdpa_query_mr *entry;
+	struct rte_vhost_mem_region *reg;
+	uint8_t mode = priv->vmem_info.mode;
+	uint32_t entries_num = priv->vmem_info.entries_num;
+	struct rte_vhost_memory *mem = priv->vmem_info.vmem;
+	struct mlx5_klm klm_array[entries_num];
+	uint64_t gcd = priv->vmem_info.gcd;
+	int ret = -rte_errno;
 	uint64_t klm_size;
-	uint64_t mem_size;
-	uint64_t k;
 	int klm_index = 0;
-	int ret;
-	struct rte_vhost_memory *mem = mlx5_vdpa_vhost_mem_regions_prepare
-			(priv->vid, &mode, &mem_size, &gcd, &entries_num);
-	struct mlx5_klm klm_array[entries_num];
+	uint64_t k;
+	uint32_t i;
 
-	if (!mem)
-		return -rte_errno;
-	if (priv->vmem != NULL) {
-		if (mlx5_vdpa_mem_cmp(mem, priv->vmem) == 0) {
-			/* VM memory not changed, reuse resources. */
-			free(mem);
-			return 0;
-		}
-		mlx5_vdpa_mem_dereg(priv);
-	}
-	priv->vmem = mem;
+	/* If it is the last entry, create indirect mkey.
+	 */
 	for (i = 0; i < mem->nregions; i++) {
+		entry = &mrs[i];
 		reg = &mem->regions[i];
-		entry = rte_zmalloc(__func__, sizeof(*entry), 0);
-		if (!entry) {
-			ret = -ENOMEM;
-			DRV_LOG(ERR, "Failed to allocate mem entry memory.");
-			goto error;
-		}
-		entry->mr = mlx5_glue->reg_mr_iova(priv->cdev->pd,
-			(void *)(uintptr_t)(reg->host_user_addr),
-			reg->size, reg->guest_phys_addr,
-			IBV_ACCESS_LOCAL_WRITE);
-		if (!entry->mr) {
-			DRV_LOG(ERR, "Failed to create direct Mkey.");
-			ret = -rte_errno;
-			goto error;
-		}
-		entry->is_indirect = 0;
 		if (i > 0) {
 			uint64_t sadd;
 			uint64_t empty_region_sz = reg->guest_phys_addr -
@@ -265,11 +230,10 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
 			klm_array[klm_index].address = reg->guest_phys_addr + k;
 			klm_index++;
 		}
-		SLIST_INSERT_HEAD(&priv->mr_list, entry, next);
 	}
 	memset(&mkey_attr, 0, sizeof(mkey_attr));
 	mkey_attr.addr = (uintptr_t)(mem->regions[0].guest_phys_addr);
-	mkey_attr.size = mem_size;
+	mkey_attr.size = priv->vmem_info.size;
 	mkey_attr.pd = priv->cdev->pdn;
 	mkey_attr.umem_id = 0;
 	/* Must be zero for KLM mode. */
@@ -278,25 +242,159 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
 	mkey_attr.pg_access = 0;
 	mkey_attr.klm_array = klm_array;
 	mkey_attr.klm_num = klm_index;
-	entry = rte_zmalloc(__func__, sizeof(*entry), 0);
-	if (!entry) {
-		DRV_LOG(ERR, "Failed to allocate memory for indirect entry.");
-		ret = -ENOMEM;
-		goto error;
-	}
+	entry = &mrs[mem->nregions];
 	entry->mkey = mlx5_devx_cmd_mkey_create(priv->cdev->ctx, &mkey_attr);
 	if (!entry->mkey) {
 		DRV_LOG(ERR, "Failed to create indirect Mkey.");
-		ret = -rte_errno;
-		goto error;
+		rte_errno = -ret;
+		return ret;
 	}
 	entry->is_indirect = 1;
-	SLIST_INSERT_HEAD(&priv->mr_list, entry, next);
 	priv->gpa_mkey_index = entry->mkey->id;
 	return 0;
+}
+
+/*
+ * The target here is to group all the physical memory regions of the
+ * virtio device in one indirect mkey.
+ * For KLM Fixed Buffer Size mode (HW find the translation entry in one
+ * read according to the guest physical address):
+ * All the sub-direct mkeys of it must be in the same size, hence, each
+ * one of them should be in the GCD size of all the virtio memory
+ * regions and the holes between them.
+ * For KLM mode (each entry may be in different size so HW must iterate
+ * the entries):
+ * Each virtio memory region and each hole between them have one entry,
+ * just need to cover the maximum allowed size(2G) by splitting entries
+ * which their associated memory regions are bigger than 2G.
+ * It means that each virtio memory region may be mapped to more than
+ * one direct mkey in the 2 modes.
+ * All the holes of invalid memory between the virtio memory regions
+ * will be mapped to the null memory region for security.
+ */
+int
+mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
+{
+	void *mrs;
+	uint8_t mode = 0;
+	int ret = -rte_errno;
+	uint32_t i, thrd_idx, data[1];
+	uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
+	struct rte_vhost_memory *mem = mlx5_vdpa_vhost_mem_regions_prepare
+		(priv->vid, &mode, &priv->vmem_info.size,
+		&priv->vmem_info.gcd, &priv->vmem_info.entries_num);
+
+	if (!mem)
+		return -rte_errno;
+	if (priv->vmem_info.vmem != NULL) {
+		if (mlx5_vdpa_mem_cmp(mem, priv->vmem_info.vmem) == 0) {
+			/* VM memory not changed, reuse resources. */
+			free(mem);
+			return 0;
+		}
+		mlx5_vdpa_mem_dereg(priv);
+	}
+	priv->vmem_info.vmem = mem;
+	priv->vmem_info.mode = mode;
+	priv->num_mrs = mem->nregions;
+	if (!priv->num_mrs || priv->num_mrs >= MLX5_VDPA_MAX_MRS) {
+		DRV_LOG(ERR,
+			"Invalid number of memory regions.");
+		goto error;
+	}
+	/* The last one is indirect mkey entry.
+	 */
+	priv->num_mrs++;
+	mrs = rte_zmalloc("mlx5 vDPA memory regions",
+		sizeof(struct mlx5_vdpa_query_mr) * priv->num_mrs, 0);
+	priv->mrs = mrs;
+	if (!priv->mrs) {
+		DRV_LOG(ERR, "Failed to allocate private memory regions.");
+		goto error;
+	}
+	if (priv->use_c_thread) {
+		uint32_t main_task_idx[mem->nregions];
+
+		for (i = 0; i < mem->nregions; i++) {
+			thrd_idx = i % (conf_thread_mng.max_thrds + 1);
+			if (!thrd_idx) {
+				main_task_idx[task_num] = i;
+				task_num++;
+				continue;
+			}
+			thrd_idx = priv->last_c_thrd_idx + 1;
+			if (thrd_idx >= conf_thread_mng.max_thrds)
+				thrd_idx = 0;
+			priv->last_c_thrd_idx = thrd_idx;
+			data[0] = i;
+			if (mlx5_vdpa_task_add(priv, thrd_idx,
+				MLX5_VDPA_TASK_REG_MR,
+				&remaining_cnt, &err_cnt,
+				(void **)&data, 1)) {
+				DRV_LOG(ERR,
+				"Fail to add task mem region (%d)", i);
+				main_task_idx[task_num] = i;
+				task_num++;
+			}
+		}
+		for (i = 0; i < task_num; i++) {
+			ret = mlx5_vdpa_register_mr(priv,
+					main_task_idx[i]);
+			if (ret) {
+				DRV_LOG(ERR,
+				"Failed to register mem region %d.", i);
+				goto error;
+			}
+		}
+		if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+			&err_cnt, 100)) {
+			DRV_LOG(ERR,
+			"Failed to wait register mem region tasks ready.");
+			goto error;
+		}
+	} else {
+		for (i = 0; i < mem->nregions; i++) {
+			ret = mlx5_vdpa_register_mr(priv, i);
+			if (ret) {
+				DRV_LOG(ERR,
+				"Failed to register mem region %d.", i);
+				goto error;
+			}
+		}
+	}
+	ret = mlx5_vdpa_create_indirect_mkey(priv);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to create indirect mkey.");
+		goto error;
+	}
+	return 0;
 error:
-	rte_free(entry);
 	mlx5_vdpa_mem_dereg(priv);
 	rte_errno = -ret;
 	return ret;
 }
+
+int
+mlx5_vdpa_register_mr(struct mlx5_vdpa_priv *priv, uint32_t idx)
+{
+	struct rte_vhost_memory *mem = priv->vmem_info.vmem;
+	struct mlx5_vdpa_query_mr *mrs =
+		(struct mlx5_vdpa_query_mr *)priv->mrs;
+	struct mlx5_vdpa_query_mr *entry;
+	struct rte_vhost_mem_region *reg;
+	int ret;
+
+	reg = &mem->regions[idx];
+	entry = &mrs[idx];
+	entry->mr =
+		mlx5_glue->reg_mr_iova
+		(priv->cdev->pd,
+		(void *)(uintptr_t)(reg->host_user_addr),
+		reg->size, reg->guest_phys_addr,
+		IBV_ACCESS_LOCAL_WRITE);
+	if (!entry->mr) {
+		DRV_LOG(ERR, "Failed to create direct Mkey.");
+		ret = -rte_errno;
+		return ret;
+	}
+	entry->is_indirect = 0;
+	return 0;
+}

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 599809b09b..0b317655db 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -353,21 +353,21 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 		}
 	}
 	if (attr->q_type == MLX5_VIRTQ_TYPE_SPLIT) {
-		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem,
+		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem_info.vmem,
 			(uint64_t)(uintptr_t)vq->desc);
 		if (!gpa) {
 			DRV_LOG(ERR, "Failed to get descriptor ring GPA.");
 			return -1;
 		}
 		attr->desc_addr = gpa;
-		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem,
+		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem_info.vmem,
 			(uint64_t)(uintptr_t)vq->used);
 		if (!gpa) {
 			DRV_LOG(ERR, "Failed to get GPA for used ring.");
 			return -1;
 		}
 		attr->used_addr = gpa;
-		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem,
+		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem_info.vmem,
 			(uint64_t)(uintptr_t)vq->avail);
 		if (!gpa) {
 			DRV_LOG(ERR, "Failed to get GPA for available ring.");

From patchwork Mon Jun 6 11:46:46 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112387
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [PATCH v1 13/17] vdpa/mlx5: add virtq creation task for MT management
Date: Mon, 6 Jun 2022 14:46:46 +0300
Message-ID: <20220606114650.209612-14-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
 <20220606114650.209612-1-lizh@nvidia.com>
List-Id: DPDK patches and discussions

Creating the virtq object and all its sub-resources issues many FW
commands, so it benefits from the multi-thread (MT) task management.
Split the virtq creation work between the configuration threads.
This accelerates the live migration (LM) process and reduces its time
by 20%.
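The split described above hands each enabled queue either to the main
thread or to a worker picked round-robin: every (max_thrds + 1)-th queue
is processed inline by the caller, and the rest cycle through the worker
configuration threads. A minimal standalone sketch of that policy
(illustrative names, not the driver's real API):

```c
#include <stdint.h>

/* Queue-to-thread distribution policy used by the task-add loop in
 * mlx5_vdpa_virtqs_prepare(): every (max_thrds + 1)-th queue stays on
 * the main (calling) thread, the others go to worker threads in
 * round-robin order. */
#define MAIN_THREAD (-1)

static uint32_t last_thrd; /* mirrors priv->last_c_thrd_idx */

int
pick_thread(uint32_t queue_idx, uint32_t max_thrds)
{
	/* Every (max_thrds + 1)-th queue is handled inline by the caller. */
	if (queue_idx % (max_thrds + 1) == 0)
		return MAIN_THREAD;
	/* Advance the round-robin cursor over the workers. */
	last_thrd = (last_thrd + 1 >= max_thrds) ? 0 : last_thrd + 1;
	return (int)last_thrd;
}
```

With three workers, queues 0, 4, 8, ... stay on the main thread while
queues 1, 2, 3, 5, ... cycle through workers 0..2, which keeps the main
thread busy instead of idle while the workers run.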
Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.h         |   9 +-
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c |  14 +++
 drivers/vdpa/mlx5/mlx5_vdpa_event.c   |   2 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   | 149 +++++++++++++++++++-------
 4 files changed, 134 insertions(+), 40 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 3316ce42be..35221f5ddc 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -80,6 +80,7 @@ enum {
 /* Vdpa task types. */
 enum mlx5_vdpa_task_type {
 	MLX5_VDPA_TASK_REG_MR = 1,
+	MLX5_VDPA_TASK_SETUP_VIRTQ,
 };
 
 /* Generic task information and size must be multiple of 4B. */
@@ -117,12 +118,12 @@ struct mlx5_vdpa_vmem_info {
 
 struct mlx5_vdpa_virtq {
 	SLIST_ENTRY(mlx5_vdpa_virtq) next;
-	uint8_t enable;
 	uint16_t index;
 	uint16_t vq_size;
 	uint8_t notifier_state;
-	bool stopped;
 	uint32_t configured:1;
+	uint32_t enable:1;
+	uint32_t stopped:1;
 	uint32_t version;
 	pthread_mutex_t virtq_lock;
 	struct mlx5_vdpa_priv *priv;
@@ -565,11 +566,13 @@ bool
 mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 		uint32_t thrd_idx,
 		enum mlx5_vdpa_task_type task_type,
-		uint32_t *bulk_refcnt, uint32_t *bulk_err_cnt,
+		uint32_t *remaining_cnt, uint32_t *err_cnt,
 		void **task_data, uint32_t num);
 int
 mlx5_vdpa_register_mr(struct mlx5_vdpa_priv *priv, uint32_t idx);
 bool
 mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt,
 		uint32_t *err_cnt, uint32_t sleep_time);
+int
+mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index 10391931ae..1389d369ae 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -100,6 +100,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
 {
 	struct mlx5_vdpa_conf_thread_mng *multhrd = arg;
 	pthread_t thread_id = pthread_self();
+	struct mlx5_vdpa_virtq *virtq;
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_vdpa_task task;
 	struct rte_ring *rng;
@@ -139,6 +140,19 @@ mlx5_vdpa_c_thread_handle(void *arg)
 					__ATOMIC_RELAXED);
 			}
 			break;
+		case MLX5_VDPA_TASK_SETUP_VIRTQ:
+			virtq = &priv->virtqs[task.idx];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			ret = mlx5_vdpa_virtq_setup(priv,
+				task.idx, false);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to setup virtq %d.", task.idx);
+				__atomic_fetch_add(
+					task.err_cnt, 1, __ATOMIC_RELAXED);
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
+			break;
 		default:
 			DRV_LOG(ERR, "Invalid vdpa task type %d.",
 				task.type);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index b45fbac146..f782b6b832 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -371,7 +371,7 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused)
 		goto unlock;
 	if (rte_rdtsc() / rte_get_tsc_hz() < MLX5_VDPA_ERROR_TIME_SEC)
 		goto unlock;
-	virtq->stopped = true;
+	virtq->stopped = 1;
 	/* Query error info. */
 	if (mlx5_vdpa_virtq_query(priv, vq_index))
 		goto log;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 0b317655db..db05220e76 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -111,8 +111,9 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 	for (i = 0; i < priv->caps.max_num_virtio_queues; i++) {
 		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
 
+		if (virtq->index != i)
+			continue;
 		pthread_mutex_lock(&virtq->virtq_lock);
-		virtq->configured = 0;
 		for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
 			if (virtq->umems[j].obj) {
 				claim_zero(mlx5_glue->devx_umem_dereg
@@ -131,7 +132,6 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 	}
 }
 
-
 static int
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
@@ -191,7 +191,7 @@ mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index)
 	ret = mlx5_vdpa_virtq_modify(virtq, 0);
 	if (ret)
 		return -1;
-	virtq->stopped = true;
+	virtq->stopped = 1;
 	DRV_LOG(DEBUG, "vid %u virtq %u was stopped.", priv->vid, index);
 	return mlx5_vdpa_virtq_query(priv, index);
 }
@@ -411,7 +411,38 @@ mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv)
 }
 
 static int
-mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
+mlx5_vdpa_virtq_doorbell_setup(struct mlx5_vdpa_virtq *virtq,
+		struct rte_vhost_vring *vq, int index)
+{
+	virtq->intr_handle =
+		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+	if (virtq->intr_handle == NULL) {
+		DRV_LOG(ERR, "Fail to allocate intr_handle");
+		return -1;
+	}
+	if (rte_intr_fd_set(virtq->intr_handle, vq->kickfd))
+		return -1;
+	if (rte_intr_fd_get(virtq->intr_handle) == -1) {
+		DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
+	} else {
+		if (rte_intr_type_set(virtq->intr_handle,
+			RTE_INTR_HANDLE_EXT))
+			return -1;
+		if (rte_intr_callback_register(virtq->intr_handle,
+			mlx5_vdpa_virtq_kick_handler, virtq)) {
+			(void)rte_intr_fd_set(virtq->intr_handle, -1);
+			DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
+				index);
+			return -1;
+		}
+		DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
+			rte_intr_fd_get(virtq->intr_handle), index);
+	}
+	return 0;
+}
+
+int
+mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick)
 {
 	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
 	struct rte_vhost_vring vq;
@@ -455,33 +486,11 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 	rte_write32(virtq->index, priv->virtq_db_addr);
 	rte_spinlock_unlock(&priv->db_lock);
 	/* Setup doorbell mapping. */
-	virtq->intr_handle =
-		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
-	if (virtq->intr_handle == NULL) {
-		DRV_LOG(ERR, "Fail to allocate intr_handle");
-		goto error;
-	}
-
-	if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
-		goto error;
-
-	if (rte_intr_fd_get(virtq->intr_handle) == -1) {
-		DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
-	} else {
-		if (rte_intr_type_set(virtq->intr_handle, RTE_INTR_HANDLE_EXT))
-			goto error;
-
-		if (rte_intr_callback_register(virtq->intr_handle,
-					       mlx5_vdpa_virtq_kick_handler,
-					       virtq)) {
-			(void)rte_intr_fd_set(virtq->intr_handle, -1);
+	if (reg_kick) {
+		if (mlx5_vdpa_virtq_doorbell_setup(virtq, &vq, index)) {
 			DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
 				index);
 			goto error;
-		} else {
-			DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
-				rte_intr_fd_get(virtq->intr_handle),
-				index);
 		}
 	}
 	/* Subscribe virtq error event. */
@@ -497,7 +506,6 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 		rte_errno = errno;
 		goto error;
 	}
-	virtq->stopped = false;
 	/* Initial notification to ask Qemu handling completed buffers. */
 	if (virtq->eqp.cq.callfd != -1)
 		eventfd_write(virtq->eqp.cq.callfd, (eventfd_t)1);
@@ -567,10 +575,12 @@ mlx5_vdpa_features_validate(struct mlx5_vdpa_priv *priv)
 int
 mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 {
-	uint32_t i;
-	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
 	int ret = rte_vhost_get_negotiated_features(priv->vid, &priv->features);
+	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
+	uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
+	uint32_t i, thrd_idx, data[1];
 	struct mlx5_vdpa_virtq *virtq;
+	struct rte_vhost_vring vq;
 
 	if (ret || mlx5_vdpa_features_validate(priv)) {
 		DRV_LOG(ERR, "Failed to configure negotiated features.");
@@ -590,16 +600,83 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 		return -1;
 	}
 	priv->nr_virtqs = nr_vring;
-	for (i = 0; i < nr_vring; i++) {
-		virtq = &priv->virtqs[i];
-		if (virtq->enable) {
+	if (priv->use_c_thread) {
+		uint32_t main_task_idx[nr_vring];
+
+		for (i = 0; i < nr_vring; i++) {
+			virtq = &priv->virtqs[i];
+			if (!virtq->enable)
+				continue;
+			thrd_idx = i % (conf_thread_mng.max_thrds + 1);
+			if (!thrd_idx) {
+				main_task_idx[task_num] = i;
+				task_num++;
+				continue;
+			}
+			thrd_idx = priv->last_c_thrd_idx + 1;
+			if (thrd_idx >= conf_thread_mng.max_thrds)
+				thrd_idx = 0;
+			priv->last_c_thrd_idx = thrd_idx;
+			data[0] = i;
+			if (mlx5_vdpa_task_add(priv, thrd_idx,
+				MLX5_VDPA_TASK_SETUP_VIRTQ,
+				&remaining_cnt, &err_cnt,
+				(void **)&data, 1)) {
+				DRV_LOG(ERR, "Fail to add "
+					"task setup virtq (%d).", i);
+				main_task_idx[task_num] = i;
+				task_num++;
+			}
+		}
+		for (i = 0; i < task_num; i++) {
+			virtq = &priv->virtqs[main_task_idx[i]];
 			pthread_mutex_lock(&virtq->virtq_lock);
-			if (mlx5_vdpa_virtq_setup(priv, i)) {
+			if (mlx5_vdpa_virtq_setup(priv,
+				main_task_idx[i], false)) {
 				pthread_mutex_unlock(&virtq->virtq_lock);
 				goto error;
 			}
 			pthread_mutex_unlock(&virtq->virtq_lock);
 		}
+		if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+			&err_cnt, 2000)) {
+			DRV_LOG(ERR,
+			"Failed to wait virt-queue setup tasks ready.");
+			goto error;
+		}
+		for (i = 0; i < nr_vring; i++) {
+			/* Setup doorbell mapping in order for Qume. */
+			virtq = &priv->virtqs[i];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (!virtq->enable || !virtq->configured) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				continue;
+			}
+			if (rte_vhost_get_vhost_vring(priv->vid, i, &vq)) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				goto error;
+			}
+			if (mlx5_vdpa_virtq_doorbell_setup(virtq, &vq, i)) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(ERR,
+				"Failed to register virtq %d interrupt.", i);
+				goto error;
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
+		}
+	} else {
+		for (i = 0; i < nr_vring; i++) {
+			virtq = &priv->virtqs[i];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (virtq->enable) {
+				if (mlx5_vdpa_virtq_setup(priv, i, true)) {
+					pthread_mutex_unlock(
+						&virtq->virtq_lock);
+					goto error;
+				}
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
+		}
 	}
 	return 0;
 error:
@@ -663,7 +740,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
 		mlx5_vdpa_virtq_unset(virtq);
 	}
 	if (enable) {
-		ret = mlx5_vdpa_virtq_setup(priv, index);
+		ret = mlx5_vdpa_virtq_setup(priv, index, true);
 		if (ret) {
 			DRV_LOG(ERR, "Failed to setup virtq %d.", index);
 			return ret;

From patchwork Mon Jun 6 11:46:47 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112388
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [PATCH v1 14/17] vdpa/mlx5: add virtq LM log task
Date: Mon, 6 Jun 2022 14:46:47 +0300
Message-ID: <20220606114650.209612-15-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
 <20220606114650.209612-1-lizh@nvidia.com>
Split the virtqs LM log between the configuration threads.
This accelerates the LM process and reduces its time by 20%.
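After stopping a queue, the dirty-log step marks the whole used ring as
dirty. The region size comes from the MLX5_VDPA_USED_RING_LEN macro this
patch moves into mlx5_vdpa.h: one vring_used_elem per descriptor plus
three uint16_t words (the ring's flags and idx headers and the trailing
avail_event). A small self-contained check of that arithmetic, using a
local struct that mirrors the 8-byte vring_used_elem:

```c
#include <stddef.h>
#include <stdint.h>

/* Local mirror of struct vring_used_elem (two 32-bit fields). */
struct used_elem {
	uint32_t id;  /* head index of the used descriptor chain */
	uint32_t len; /* total bytes written to the chain */
};

/* Same arithmetic as MLX5_VDPA_USED_RING_LEN(size): the element array
 * plus flags, idx and the trailing avail_event word (3 x uint16_t). */
size_t
used_ring_len(uint16_t vq_size)
{
	return (size_t)vq_size * sizeof(struct used_elem) +
	       sizeof(uint16_t) * 3;
}
```

For a typical 256-entry queue this logs 256 * 8 + 6 = 2054 bytes per
virtq, a bounded cost per LM log round.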
Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  3 +
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 34 +++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_lm.c      | 85 +++++++++++++++++++------
 3 files changed, 105 insertions(+), 17 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 35221f5ddc..e08931719f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -72,6 +72,8 @@ enum {
 	MLX5_VDPA_NOTIFIER_STATE_ERR
 };
 
+#define MLX5_VDPA_USED_RING_LEN(size) \
+	((size) * sizeof(struct vring_used_elem) + sizeof(uint16_t) * 3)
 #define MLX5_VDPA_MAX_C_THRD 256
 #define MLX5_VDPA_MAX_TASKS_PER_THRD 4096
 #define MLX5_VDPA_TASKS_PER_DEV 64
@@ -81,6 +83,7 @@ enum {
 enum mlx5_vdpa_task_type {
 	MLX5_VDPA_TASK_REG_MR = 1,
 	MLX5_VDPA_TASK_SETUP_VIRTQ,
+	MLX5_VDPA_TASK_STOP_VIRTQ,
 };
 
 /* Generic task information and size must be multiple of 4B. */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index 1389d369ae..98369f0887 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -104,6 +104,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_vdpa_task task;
 	struct rte_ring *rng;
+	uint64_t features;
 	uint32_t thrd_idx;
 	uint32_t task_num;
 	int ret;
@@ -153,6 +154,39 @@ mlx5_vdpa_c_thread_handle(void *arg)
 			}
 			pthread_mutex_unlock(&virtq->virtq_lock);
 			break;
+		case MLX5_VDPA_TASK_STOP_VIRTQ:
+			virtq = &priv->virtqs[task.idx];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			ret = mlx5_vdpa_virtq_stop(priv,
+					task.idx);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to stop virtq %d.",
+					task.idx);
+				__atomic_fetch_add(
+					task.err_cnt, 1,
+					__ATOMIC_RELAXED);
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				break;
+			}
+			ret = rte_vhost_get_negotiated_features(
+				priv->vid, &features);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to get negotiated features virtq %d.",
+					task.idx);
+				__atomic_fetch_add(
+					task.err_cnt, 1,
+					__ATOMIC_RELAXED);
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				break;
+			}
+			if (RTE_VHOST_NEED_LOG(features))
+				rte_vhost_log_used_vring(
+					priv->vid, task.idx, 0,
+					MLX5_VDPA_USED_RING_LEN(virtq->vq_size));
+			pthread_mutex_unlock(&virtq->virtq_lock);
+			break;
 		default:
 			DRV_LOG(ERR, "Invalid vdpa task type %d.",
 				task.type);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
index efebf364d0..c2e78218ca 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
@@ -89,39 +89,90 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
 	return -1;
 }
 
-#define MLX5_VDPA_USED_RING_LEN(size) \
-	((size) * sizeof(struct vring_used_elem) + sizeof(uint16_t) * 3)
-
 int
 mlx5_vdpa_lm_log(struct mlx5_vdpa_priv *priv)
 {
+	uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
+	uint32_t i, thrd_idx, data[1];
 	struct mlx5_vdpa_virtq *virtq;
 	uint64_t features;
-	int ret = rte_vhost_get_negotiated_features(priv->vid, &features);
-	int i;
+	int ret;
 
+	ret = rte_vhost_get_negotiated_features(priv->vid, &features);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to get negotiated features.");
 		return -1;
 	}
-	if (!RTE_VHOST_NEED_LOG(features))
-		return 0;
-	for (i = 0; i < priv->nr_virtqs; ++i) {
-		virtq = &priv->virtqs[i];
-		if (!priv->virtqs[i].virtq) {
-			DRV_LOG(DEBUG, "virtq %d is invalid for LM log.", i);
-		} else {
+	if (priv->use_c_thread && priv->nr_virtqs) {
+		uint32_t main_task_idx[priv->nr_virtqs];
+
+		for (i = 0; i < priv->nr_virtqs; i++) {
+			virtq = &priv->virtqs[i];
+			if (!virtq->configured)
+				continue;
+			thrd_idx = i % (conf_thread_mng.max_thrds + 1);
+			if (!thrd_idx) {
+				main_task_idx[task_num] = i;
+				task_num++;
+				continue;
+			}
+			thrd_idx = priv->last_c_thrd_idx + 1;
+			if (thrd_idx >= conf_thread_mng.max_thrds)
+				thrd_idx = 0;
+			priv->last_c_thrd_idx = thrd_idx;
+			data[0] = i;
+			if (mlx5_vdpa_task_add(priv, thrd_idx,
+				MLX5_VDPA_TASK_STOP_VIRTQ,
+				&remaining_cnt, &err_cnt,
+				(void **)&data, 1)) {
+				DRV_LOG(ERR, "Fail to add "
+					"task stop virtq (%d).", i);
+				main_task_idx[task_num] = i;
+				task_num++;
+			}
+		}
+		for (i = 0; i < task_num; i++) {
+			virtq = &priv->virtqs[main_task_idx[i]];
 			pthread_mutex_lock(&virtq->virtq_lock);
-			ret = mlx5_vdpa_virtq_stop(priv, i);
+			ret = mlx5_vdpa_virtq_stop(priv,
+					main_task_idx[i]);
+			if (ret) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(ERR,
+					"Failed to stop virtq %d.", i);
+				return -1;
+			}
+			if (RTE_VHOST_NEED_LOG(features))
+				rte_vhost_log_used_vring(priv->vid, i, 0,
+				MLX5_VDPA_USED_RING_LEN(virtq->vq_size));
 			pthread_mutex_unlock(&virtq->virtq_lock);
+		}
+		if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+			&err_cnt, 2000)) {
+			DRV_LOG(ERR,
+			"Failed to wait virt-queue setup tasks ready.");
+			return -1;
+		}
+	} else {
+		for (i = 0; i < priv->nr_virtqs; i++) {
+			virtq = &priv->virtqs[i];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (!virtq->configured) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				continue;
+			}
+			ret = mlx5_vdpa_virtq_stop(priv, i);
 			if (ret) {
-				DRV_LOG(ERR, "Failed to stop virtq %d for LM "
-					"log.", i);
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(ERR,
+					"Failed to stop virtq %d for LM log.", i);
 				return -1;
 			}
+			if (RTE_VHOST_NEED_LOG(features))
+				rte_vhost_log_used_vring(priv->vid, i, 0,
+				MLX5_VDPA_USED_RING_LEN(virtq->vq_size));
+			pthread_mutex_unlock(&virtq->virtq_lock);
 		}
-		rte_vhost_log_used_vring(priv->vid, i, 0,
-			MLX5_VDPA_USED_RING_LEN(priv->virtqs[i].vq_size));
 	}
 	return 0;
 }

From patchwork Mon Jun 6 11:46:48 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112389
X-Patchwork-Delegate: maxime.coquelin@redhat.com
b=mqCPSNOJo1IbqGiavPyM9og9CZLS5Uq9pRv9mK3Up5Y6LxF+MRxWlrFKwUbnwlmtJ3DBw7TlTcyiVx6t0MEcpVGoHH21xuQICrZg5URlCa9jC35QVywDbFatpiQT5xmAMEiJnijoX+H9W3Zy90ZDBYWVmi/HrZxgWFXskDW98MjDSN0VLeI8YgdCvA3rSQONOVINORAz2+O1GKHv/3FNwunEVxuVtYkFd/DAOwHxrYhwD+f2u2Ws8D1BxjYACmEsfrxO7y5I3vB/cb14D37NF5dBgcXrk3wyisM7M1YKm/oHEVX+5Snb1SLvanHs4YCdHW5OmJwpJY07Q5rJP5UdKA== Received: from MWHPR07CA0007.namprd07.prod.outlook.com (2603:10b6:300:116::17) by CY5PR12MB6251.namprd12.prod.outlook.com (2603:10b6:930:21::5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Mon, 6 Jun 2022 11:47:59 +0000 Received: from CO1NAM11FT036.eop-nam11.prod.protection.outlook.com (2603:10b6:300:116:cafe::a9) by MWHPR07CA0007.outlook.office365.com (2603:10b6:300:116::17) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 11:47:59 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 12.22.5.234) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 12.22.5.234 as permitted sender) receiver=protection.outlook.com; client-ip=12.22.5.234; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (12.22.5.234) by CO1NAM11FT036.mail.protection.outlook.com (10.13.174.124) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 11:47:59 +0000 Received: from rnnvmail201.nvidia.com (10.129.68.8) by DRHQMAIL101.nvidia.com (10.27.9.10) with Microsoft SMTP Server (TLS) id 15.0.1497.32; Mon, 6 Jun 2022 11:47:58 +0000 Received: from nvidia.com (10.126.231.35) by rnnvmail201.nvidia.com (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.22; Mon, 6 Jun 2022 04:47:56 -0700 From: 
From: Li Zhang
To: , , ,
CC: , , ,
Subject: [PATCH v1 15/17] vdpa/mlx5: add device close task
Date: Mon, 6 Jun 2022 14:46:48 +0300
Message-ID: <20220606114650.209612-16-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
List-Id: DPDK patches and discussions

Split the virtq device close tasks among the configuration threads after the virt-queues are stopped. This accelerates the LM (live migration) process and reduces its time by 50%.
Signed-off-by: Li Zhang --- drivers/vdpa/mlx5/mlx5_vdpa.c | 56 +++++++++++++++++++++++++-- drivers/vdpa/mlx5/mlx5_vdpa.h | 8 ++++ drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 20 +++++++++- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 14 +++++++ 4 files changed, 94 insertions(+), 4 deletions(-) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index e3b32fa087..d000854c08 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -245,7 +245,7 @@ mlx5_vdpa_mtu_set(struct mlx5_vdpa_priv *priv) return kern_mtu == vhost_mtu ? 0 : -1; } -static void +void mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv) { /* Clean pre-created resource in dev removal only. */ @@ -254,6 +254,26 @@ mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv) mlx5_vdpa_mem_dereg(priv); } +static bool +mlx5_vdpa_wait_dev_close_tasks_done(struct mlx5_vdpa_priv *priv) +{ + uint32_t timeout = 0; + + /* Check and wait all close tasks done. */ + while (__atomic_load_n(&priv->dev_close_progress, + __ATOMIC_RELAXED) != 0 && timeout < 1000) { + rte_delay_us_sleep(10000); + timeout++; + } + if (priv->dev_close_progress) { + DRV_LOG(ERR, + "Failed to wait close device tasks done vid %d.", + priv->vid); + return true; + } + return false; +} + static int mlx5_vdpa_dev_close(int vid) { @@ -271,6 +291,27 @@ mlx5_vdpa_dev_close(int vid) ret |= mlx5_vdpa_lm_log(priv); priv->state = MLX5_VDPA_STATE_IN_PROGRESS; } + if (priv->use_c_thread) { + if (priv->last_c_thrd_idx >= + (conf_thread_mng.max_thrds - 1)) + priv->last_c_thrd_idx = 0; + else + priv->last_c_thrd_idx++; + __atomic_store_n(&priv->dev_close_progress, + 1, __ATOMIC_RELAXED); + if (mlx5_vdpa_task_add(priv, + priv->last_c_thrd_idx, + MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT, + NULL, NULL, NULL, 1)) { + DRV_LOG(ERR, + "Fail to add dev close task. 
"); + goto single_thrd; + } + priv->state = MLX5_VDPA_STATE_PROBED; + DRV_LOG(INFO, "vDPA device %d was closed.", vid); + return ret; + } +single_thrd: pthread_mutex_lock(&priv->steer_update_lock); mlx5_vdpa_steer_unset(priv); pthread_mutex_unlock(&priv->steer_update_lock); @@ -278,10 +319,12 @@ mlx5_vdpa_dev_close(int vid) mlx5_vdpa_drain_cq(priv); if (priv->lm_mr.addr) mlx5_os_wrapped_mkey_destroy(&priv->lm_mr); - priv->state = MLX5_VDPA_STATE_PROBED; if (!priv->connected) mlx5_vdpa_dev_cache_clean(priv); priv->vid = 0; + __atomic_store_n(&priv->dev_close_progress, 0, + __ATOMIC_RELAXED); + priv->state = MLX5_VDPA_STATE_PROBED; DRV_LOG(INFO, "vDPA device %d was closed.", vid); return ret; } @@ -302,6 +345,8 @@ mlx5_vdpa_dev_config(int vid) DRV_LOG(ERR, "Failed to reconfigure vid %d.", vid); return -1; } + if (mlx5_vdpa_wait_dev_close_tasks_done(priv)) + return -1; priv->vid = vid; priv->connected = true; if (mlx5_vdpa_mtu_set(priv)) @@ -444,8 +489,11 @@ mlx5_vdpa_dev_cleanup(int vid) DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name); return -1; } - if (priv->state == MLX5_VDPA_STATE_PROBED) + if (priv->state == MLX5_VDPA_STATE_PROBED) { + if (priv->use_c_thread) + mlx5_vdpa_wait_dev_close_tasks_done(priv); mlx5_vdpa_dev_cache_clean(priv); + } priv->connected = false; return 0; } @@ -839,6 +887,8 @@ mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv) { if (priv->state == MLX5_VDPA_STATE_CONFIGURED) mlx5_vdpa_dev_close(priv->vid); + if (priv->use_c_thread) + mlx5_vdpa_wait_dev_close_tasks_done(priv); mlx5_vdpa_release_dev_resources(priv); if (priv->vdev) rte_vdpa_unregister_device(priv->vdev); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index e08931719f..b6392b9d66 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -84,6 +84,7 @@ enum mlx5_vdpa_task_type { MLX5_VDPA_TASK_REG_MR = 1, MLX5_VDPA_TASK_SETUP_VIRTQ, MLX5_VDPA_TASK_STOP_VIRTQ, + MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT, }; /* Generic task 
information and size must be multiple of 4B. */ @@ -206,6 +207,7 @@ struct mlx5_vdpa_priv { uint64_t features; /* Negotiated features. */ uint16_t log_max_rqt_size; uint16_t last_c_thrd_idx; + uint16_t dev_close_progress; uint16_t num_mrs; /* Number of memory regions. */ struct mlx5_vdpa_steer steer; struct mlx5dv_var *var; @@ -578,4 +580,10 @@ mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt, uint32_t *err_cnt, uint32_t sleep_time); int mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick); +void +mlx5_vdpa_vq_destroy(struct mlx5_vdpa_virtq *virtq); +void +mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv); +void +mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv); #endif /* RTE_PMD_MLX5_VDPA_H_ */ diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c index 98369f0887..bb2279440b 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c @@ -63,7 +63,8 @@ mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv, task[i].type = task_type; task[i].remaining_cnt = remaining_cnt; task[i].err_cnt = err_cnt; - task[i].idx = data[i]; + if (data) + task[i].idx = data[i]; } if (!mlx5_vdpa_c_thrd_ring_enqueue_bulk(rng, (void **)&task, num, NULL)) return -1; @@ -187,6 +188,23 @@ mlx5_vdpa_c_thread_handle(void *arg) MLX5_VDPA_USED_RING_LEN(virtq->vq_size)); pthread_mutex_unlock(&virtq->virtq_lock); break; + case MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT: + mlx5_vdpa_virtq_unreg_intr_handle_all(priv); + pthread_mutex_lock(&priv->steer_update_lock); + mlx5_vdpa_steer_unset(priv); + pthread_mutex_unlock(&priv->steer_update_lock); + mlx5_vdpa_virtqs_release(priv); + mlx5_vdpa_drain_cq(priv); + if (priv->lm_mr.addr) + mlx5_os_wrapped_mkey_destroy( + &priv->lm_mr); + if (!priv->connected) + mlx5_vdpa_dev_cache_clean(priv); + priv->vid = 0; + __atomic_store_n( + &priv->dev_close_progress, 0, + __ATOMIC_RELAXED); + break; default: DRV_LOG(ERR, "Invalid vdpa task type %d.", 
task.type); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index db05220e76..a08c854b14 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -102,6 +102,20 @@ mlx5_vdpa_virtq_unregister_intr_handle(struct mlx5_vdpa_virtq *virtq) virtq->intr_handle = NULL; } +void +mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv) +{ + uint32_t i; + struct mlx5_vdpa_virtq *virtq; + + for (i = 0; i < priv->nr_virtqs; i++) { + virtq = &priv->virtqs[i]; + pthread_mutex_lock(&virtq->virtq_lock); + mlx5_vdpa_virtq_unregister_intr_handle(virtq); + pthread_mutex_unlock(&virtq->virtq_lock); + } +} + /* Release cached VQ resources. */ void mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)

From patchwork Mon Jun 6 11:46:49 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112390
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
To: , , ,
CC: , , , , Yajun Wu
Subject: [PATCH v1 16/17] vdpa/mlx5: add virtq sub-resources creation
Date: Mon, 6 Jun 2022 14:46:49 +0300
Message-ID: <20220606114650.209612-17-lizh@nvidia.com>
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
Pre-create the virt-queue sub-resources in the device probe stage, then modify the virtqueues in the device config stage. The steering table also needs to support dummy virt-queues. This accelerates the LM process and reduces its time by 40%.

Signed-off-by: Li Zhang Signed-off-by: Yajun Wu --- drivers/vdpa/mlx5/mlx5_vdpa.c | 72 +++++++-------------- drivers/vdpa/mlx5/mlx5_vdpa.h | 17 +++-- drivers/vdpa/mlx5/mlx5_vdpa_event.c | 11 ++-- drivers/vdpa/mlx5/mlx5_vdpa_steer.c | 17 +++-- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 99 +++++++++++++++++++++-------- 5 files changed, 123 insertions(+), 93 deletions(-) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index d000854c08..f006a9cd3f 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -627,65 +627,39 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist, static int mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv) { - struct mlx5_vdpa_virtq *virtq; + uint32_t max_queues; uint32_t index; - uint32_t i; + struct mlx5_vdpa_virtq *virtq; - for (index = 0; index < priv->caps.max_num_virtio_queues * 2; + for (index = 0; index < priv->caps.max_num_virtio_queues; index++) { virtq = &priv->virtqs[index]; pthread_mutex_init(&virtq->virtq_lock, NULL); } - if (!priv->queues) + if (!priv->queues || !priv->queue_size) return 0; - for (index = 0; index < (priv->queues * 2);
++index) { + max_queues = (priv->queues < priv->caps.max_num_virtio_queues) ? + (priv->queues * 2) : (priv->caps.max_num_virtio_queues); + for (index = 0; index < max_queues; ++index) + if (mlx5_vdpa_virtq_single_resource_prepare(priv, + index)) + goto error; + if (mlx5_vdpa_is_modify_virtq_supported(priv)) + if (mlx5_vdpa_steer_update(priv, true)) + goto error; + return 0; +error: + for (index = 0; index < max_queues; ++index) { virtq = &priv->virtqs[index]; - int ret = mlx5_vdpa_event_qp_prepare(priv, priv->queue_size, - -1, virtq); - - if (ret) { - DRV_LOG(ERR, "Failed to create event QPs for virtq %d.", - index); - return -1; - } - if (priv->caps.queue_counters_valid) { - if (!virtq->counters) - virtq->counters = - mlx5_devx_cmd_create_virtio_q_counters - (priv->cdev->ctx); - if (!virtq->counters) { - DRV_LOG(ERR, "Failed to create virtq couners for virtq" - " %d.", index); - return -1; - } - } - for (i = 0; i < RTE_DIM(virtq->umems); ++i) { - uint32_t size; - void *buf; - struct mlx5dv_devx_umem *obj; - - size = priv->caps.umems[i].a * priv->queue_size + - priv->caps.umems[i].b; - buf = rte_zmalloc(__func__, size, 4096); - if (buf == NULL) { - DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq" - " %u.", i, index); - return -1; - } - obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, buf, - size, IBV_ACCESS_LOCAL_WRITE); - if (obj == NULL) { - rte_free(buf); - DRV_LOG(ERR, "Failed to register umem %d for virtq %u.", - i, index); - return -1; - } - virtq->umems[i].size = size; - virtq->umems[i].buf = buf; - virtq->umems[i].obj = obj; + if (virtq->virtq) { + pthread_mutex_lock(&virtq->virtq_lock); + mlx5_vdpa_virtq_unset(virtq); + pthread_mutex_unlock(&virtq->virtq_lock); } } - return 0; + if (mlx5_vdpa_is_modify_virtq_supported(priv)) + mlx5_vdpa_steer_unset(priv); + return -1; } static int diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index b6392b9d66..f353db62ac 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ 
b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -277,13 +277,15 @@ int mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv); * The guest notification file descriptor. * @param[in/out] virtq * Pointer to the virt-queue structure. + * @param[in] reset + * If true, it will reset event qp. * * @return * 0 on success, -1 otherwise and rte_errno is set. */ int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, - int callfd, struct mlx5_vdpa_virtq *virtq); + int callfd, struct mlx5_vdpa_virtq *virtq, bool reset); /** * Destroy an event QP and all its related resources. @@ -403,11 +405,13 @@ void mlx5_vdpa_steer_unset(struct mlx5_vdpa_priv *priv); * * @param[in] priv * The vdpa driver private structure. + * @param[in] is_dummy + * If set, it is updated with dummy queue for prepare resource. * * @return * 0 on success, a negative value otherwise. */ -int mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv); +int mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv, bool is_dummy); /** * Setup steering and all its related resources to enable RSS traffic from the @@ -581,9 +585,14 @@ mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt, int mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick); void -mlx5_vdpa_vq_destroy(struct mlx5_vdpa_virtq *virtq); -void mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv); void mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv); +bool +mlx5_vdpa_virtq_single_resource_prepare(struct mlx5_vdpa_priv *priv, + int index); +int +mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp); +void +mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq); #endif /* RTE_PMD_MLX5_VDPA_H_ */ diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c index f782b6b832..22f0920c88 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c @@ -249,7 +249,7 @@ mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv) { unsigned int i; - for (i = 0; i < 
priv->caps.max_num_virtio_queues * 2; i++) { + for (i = 0; i < priv->caps.max_num_virtio_queues; i++) { struct mlx5_vdpa_cq *cq = &priv->virtqs[i].eqp.cq; mlx5_vdpa_queue_complete(cq); @@ -618,7 +618,7 @@ mlx5_vdpa_qps2rts(struct mlx5_vdpa_event_qp *eqp) return 0; } -static int +int mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp) { if (mlx5_devx_cmd_modify_qp_state(eqp->fw_qp, MLX5_CMD_OP_QP_2RST, @@ -638,7 +638,7 @@ mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp) int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, - int callfd, struct mlx5_vdpa_virtq *virtq) + int callfd, struct mlx5_vdpa_virtq *virtq, bool reset) { struct mlx5_vdpa_event_qp *eqp = &virtq->eqp; struct mlx5_devx_qp_attr attr = {0}; @@ -649,11 +649,10 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, /* Reuse existing resources. */ eqp->cq.callfd = callfd; /* FW will set event qp to error state in q destroy. */ - if (!mlx5_vdpa_qps2rst2rts(eqp)) { + if (reset && !mlx5_vdpa_qps2rst2rts(eqp)) rte_write32(rte_cpu_to_be_32(RTE_BIT32(log_desc_n)), &eqp->sw_qp.db_rec[0]); - return 0; - } + return 0; } if (eqp->fw_qp) mlx5_vdpa_event_qp_destroy(eqp); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c index 4cbf09784e..c2e0a17ace 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c @@ -57,7 +57,7 @@ mlx5_vdpa_steer_unset(struct mlx5_vdpa_priv *priv) * -1 on error. */ static int -mlx5_vdpa_rqt_prepare(struct mlx5_vdpa_priv *priv) +mlx5_vdpa_rqt_prepare(struct mlx5_vdpa_priv *priv, bool is_dummy) { int i; uint32_t rqt_n = RTE_MIN(MLX5_VDPA_DEFAULT_RQT_SIZE, @@ -67,15 +67,20 @@ mlx5_vdpa_rqt_prepare(struct mlx5_vdpa_priv *priv) sizeof(uint32_t), 0); uint32_t k = 0, j; int ret = 0, num; + uint16_t nr_vring = is_dummy ? + (((priv->queues * 2) < priv->caps.max_num_virtio_queues) ? 
+ (priv->queues * 2) : priv->caps.max_num_virtio_queues) : priv->nr_virtqs; if (!attr) { DRV_LOG(ERR, "Failed to allocate RQT attributes memory."); rte_errno = ENOMEM; return -ENOMEM; } - for (i = 0; i < priv->nr_virtqs; i++) { + for (i = 0; i < nr_vring; i++) { if (is_virtq_recvq(i, priv->nr_virtqs) && - priv->virtqs[i].enable && priv->virtqs[i].virtq) { + (is_dummy || (priv->virtqs[i].enable && + priv->virtqs[i].configured)) && + priv->virtqs[i].virtq) { attr->rq_list[k] = priv->virtqs[i].virtq->id; k++; } @@ -235,12 +240,12 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv) } int -mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv) +mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv, bool is_dummy) { int ret; pthread_mutex_lock(&priv->steer_update_lock); - ret = mlx5_vdpa_rqt_prepare(priv); + ret = mlx5_vdpa_rqt_prepare(priv, is_dummy); if (ret == 0) { mlx5_vdpa_steer_unset(priv); } else if (ret < 0) { @@ -261,7 +266,7 @@ mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv) int mlx5_vdpa_steer_setup(struct mlx5_vdpa_priv *priv) { - if (mlx5_vdpa_steer_update(priv)) + if (mlx5_vdpa_steer_update(priv, false)) goto error; return 0; error: diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index a08c854b14..20ce382487 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -146,10 +146,10 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) } } -static int +void mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) { - int ret = -EAGAIN; + int ret; mlx5_vdpa_virtq_unregister_intr_handle(virtq); if (virtq->configured) { @@ -157,12 +157,12 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) if (ret) DRV_LOG(WARNING, "Failed to stop virtq %d.", virtq->index); - virtq->configured = 0; claim_zero(mlx5_devx_cmd_destroy(virtq->virtq)); + virtq->index = 0; + virtq->virtq = NULL; + virtq->configured = 0; } - virtq->virtq = NULL; virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED; - 
return 0; } void @@ -175,6 +175,9 @@ mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv) virtq = &priv->virtqs[i]; pthread_mutex_lock(&virtq->virtq_lock); mlx5_vdpa_virtq_unset(virtq); + if (i < (priv->queues * 2)) + mlx5_vdpa_virtq_single_resource_prepare( + priv, i); pthread_mutex_unlock(&virtq->virtq_lock); } priv->features = 0; @@ -258,7 +261,8 @@ mlx5_vdpa_hva_to_gpa(struct rte_vhost_memory *mem, uint64_t hva) static int mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, struct mlx5_devx_virtq_attr *attr, - struct rte_vhost_vring *vq, int index) + struct rte_vhost_vring *vq, + int index, bool is_prepare) { struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index]; uint64_t gpa; @@ -277,11 +281,15 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, MLX5_VIRTQ_MODIFY_TYPE_Q_MKEY | MLX5_VIRTQ_MODIFY_TYPE_QUEUE_FEATURE_BIT_MASK | MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE; - attr->tso_ipv4 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4)); - attr->tso_ipv6 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO6)); - attr->tx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_CSUM)); - attr->rx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)); - attr->virtio_version_1_0 = + attr->tso_ipv4 = is_prepare ? 1 : + !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4)); + attr->tso_ipv6 = is_prepare ? 1 : + !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO6)); + attr->tx_csum = is_prepare ? 1 : + !!(priv->features & (1ULL << VIRTIO_NET_F_CSUM)); + attr->rx_csum = is_prepare ? 1 : + !!(priv->features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)); + attr->virtio_version_1_0 = is_prepare ? 1 : !!(priv->features & (1ULL << VIRTIO_F_VERSION_1)); attr->q_type = (priv->features & (1ULL << VIRTIO_F_RING_PACKED)) ? @@ -290,12 +298,12 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, * No need event QPs creation when the guest in poll mode or when the * capability allows it. 
*/ - attr->event_mode = vq->callfd != -1 || + attr->event_mode = is_prepare || vq->callfd != -1 || !(priv->caps.event_mode & (1 << MLX5_VIRTQ_EVENT_MODE_NO_MSIX)) ? MLX5_VIRTQ_EVENT_MODE_QP : MLX5_VIRTQ_EVENT_MODE_NO_MSIX; if (attr->event_mode == MLX5_VIRTQ_EVENT_MODE_QP) { - ret = mlx5_vdpa_event_qp_prepare(priv, - vq->size, vq->callfd, virtq); + ret = mlx5_vdpa_event_qp_prepare(priv, vq->size, + vq->callfd, virtq, !virtq->virtq); if (ret) { DRV_LOG(ERR, "Failed to create event QPs for virtq %d.", @@ -320,7 +328,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, attr->counters_obj_id = virtq->counters->id; } /* Setup 3 UMEMs for each virtq. */ - if (virtq->virtq) { + if (!virtq->virtq) { for (i = 0; i < RTE_DIM(virtq->umems); ++i) { uint32_t size; void *buf; @@ -345,7 +353,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, buf = rte_zmalloc(__func__, size, 4096); if (buf == NULL) { - DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq" + DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq." 
" %u.", i, index); return -1; } @@ -366,7 +374,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, attr->umems[i].size = virtq->umems[i].size; } } - if (attr->q_type == MLX5_VIRTQ_TYPE_SPLIT) { + if (!is_prepare && attr->q_type == MLX5_VIRTQ_TYPE_SPLIT) { gpa = mlx5_vdpa_hva_to_gpa(priv->vmem_info.vmem, (uint64_t)(uintptr_t)vq->desc); if (!gpa) { @@ -389,21 +397,23 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, } attr->available_addr = gpa; } - ret = rte_vhost_get_vring_base(priv->vid, + if (!is_prepare) { + ret = rte_vhost_get_vring_base(priv->vid, index, &last_avail_idx, &last_used_idx); - if (ret) { - last_avail_idx = 0; - last_used_idx = 0; - DRV_LOG(WARNING, "Couldn't get vring base, idx are set to 0."); - } else { - DRV_LOG(INFO, "vid %d: Init last_avail_idx=%d, last_used_idx=%d for " + if (ret) { + last_avail_idx = 0; + last_used_idx = 0; + DRV_LOG(WARNING, "Couldn't get vring base, idx are set to 0."); + } else { + DRV_LOG(INFO, "vid %d: Init last_avail_idx=%d, last_used_idx=%d for " "virtq %d.", priv->vid, last_avail_idx, last_used_idx, index); + } } attr->hw_available_index = last_avail_idx; attr->hw_used_index = last_used_idx; attr->q_size = vq->size; - attr->mkey = priv->gpa_mkey_index; + attr->mkey = is_prepare ? 
0 : priv->gpa_mkey_index; attr->tis_id = priv->tiss[(index / 2) % priv->num_lag_ports]->id; attr->queue_index = index; attr->pd = priv->cdev->pdn; @@ -416,6 +426,39 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, return 0; } +bool +mlx5_vdpa_virtq_single_resource_prepare(struct mlx5_vdpa_priv *priv, + int index) +{ + struct mlx5_devx_virtq_attr attr = {0}; + struct mlx5_vdpa_virtq *virtq; + struct rte_vhost_vring vq = { + .size = priv->queue_size, + .callfd = -1, + }; + int ret; + + virtq = &priv->virtqs[index]; + virtq->index = index; + virtq->vq_size = vq.size; + virtq->configured = 0; + virtq->virtq = NULL; + ret = mlx5_vdpa_virtq_sub_objs_prepare(priv, &attr, &vq, index, true); + if (ret) { + DRV_LOG(ERR, + "Cannot prepare setup resource for virtq %d.", index); + return true; + } + if (mlx5_vdpa_is_modify_virtq_supported(priv)) { + virtq->virtq = + mlx5_devx_cmd_create_virtq(priv->cdev->ctx, &attr); + virtq->priv = priv; + if (!virtq->virtq) + return true; + } + return false; +} + bool mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv) { @@ -473,7 +516,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick) virtq->priv = priv; virtq->stopped = 0; ret = mlx5_vdpa_virtq_sub_objs_prepare(priv, &attr, - &vq, index); + &vq, index, false); if (ret) { DRV_LOG(ERR, "Failed to setup update virtq attr" " %d.", index); @@ -746,7 +789,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable) if (virtq->configured) { virtq->enable = 0; if (is_virtq_recvq(virtq->index, priv->nr_virtqs)) { - ret = mlx5_vdpa_steer_update(priv); + ret = mlx5_vdpa_steer_update(priv, false); if (ret) DRV_LOG(WARNING, "Failed to disable steering " "for virtq %d.", index); @@ -761,7 +804,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable) } virtq->enable = 1; if (is_virtq_recvq(virtq->index, priv->nr_virtqs)) { - ret = mlx5_vdpa_steer_update(priv); + ret = mlx5_vdpa_steer_update(priv, false); 
        if (ret)
            DRV_LOG(WARNING, "Failed to enable steering "
                "for virtq %d.", index);

From patchwork Mon Jun 6 11:46:50 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 112391
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [PATCH v1 17/17] vdpa/mlx5: prepare virtqueue resource creation
Date: Mon, 6 Jun 2022 14:46:50 +0300
Message-ID: <20220606114650.209612-18-lizh@nvidia.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
 <20220606114650.209612-1-lizh@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

Split the creation of the virtq resources between the configuration
threads. The pre-created virtq resources also need to be restored after
a virtq is destroyed. This accelerates the LM (live migration) process
and reduces its time by 30%.

Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.c         | 115 ++++++++++++++++++++------
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  12 ++-
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c |  15 +++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   | 111 +++++++++++++++++++++----
 4 files changed, 208 insertions(+), 45 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index f006a9cd3f..c5d82872c7 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -275,23 +275,18 @@ mlx5_vdpa_wait_dev_close_tasks_done(struct mlx5_vdpa_priv *priv)
 }
 
 static int
-mlx5_vdpa_dev_close(int vid)
+_internal_mlx5_vdpa_dev_close(struct mlx5_vdpa_priv *priv,
+        bool release_resource)
 {
-    struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid);
-    struct mlx5_vdpa_priv *priv =
-        mlx5_vdpa_find_priv_resource_by_vdev(vdev);
    int ret = 0;
+    int vid = priv->vid;
 
-    if (priv == NULL) {
-        DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
-        return -1;
-    }
    mlx5_vdpa_cqe_event_unset(priv);
    if (priv->state == MLX5_VDPA_STATE_CONFIGURED) {
        ret |= mlx5_vdpa_lm_log(priv);
        priv->state = MLX5_VDPA_STATE_IN_PROGRESS;
    }
-    if (priv->use_c_thread) {
+    if (priv->use_c_thread && !release_resource) {
        if (priv->last_c_thrd_idx >=
            (conf_thread_mng.max_thrds - 1))
            priv->last_c_thrd_idx = 0;
@@ -315,7 +310,7 @@ mlx5_vdpa_dev_close(int vid)
    pthread_mutex_lock(&priv->steer_update_lock);
    mlx5_vdpa_steer_unset(priv);
    pthread_mutex_unlock(&priv->steer_update_lock);
-    mlx5_vdpa_virtqs_release(priv);
+    mlx5_vdpa_virtqs_release(priv, release_resource);
    mlx5_vdpa_drain_cq(priv);
    if (priv->lm_mr.addr)
        mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
@@ -329,6 +324,24 @@ mlx5_vdpa_dev_close(int vid)
    return ret;
 }
 
+static int
+mlx5_vdpa_dev_close(int vid)
+{
+    struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid);
+    struct mlx5_vdpa_priv *priv;
+
+    if (!vdev) {
+        DRV_LOG(ERR, "Invalid vDPA device.");
+        return -1;
+    }
+    priv = mlx5_vdpa_find_priv_resource_by_vdev(vdev);
+    if (priv == NULL) {
+        DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
+        return -1;
+    }
+    return _internal_mlx5_vdpa_dev_close(priv, false);
+}
+
 static int
 mlx5_vdpa_dev_config(int vid)
 {
@@ -624,11 +637,33 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist,
            priv->queue_size);
 }
 
+void
+mlx5_vdpa_prepare_virtq_destroy(struct mlx5_vdpa_priv *priv)
+{
+    uint32_t max_queues, index;
+    struct mlx5_vdpa_virtq *virtq;
+
+    if (!priv->queues || !priv->queue_size)
+        return;
+    max_queues = ((priv->queues * 2) < priv->caps.max_num_virtio_queues) ?
+        (priv->queues * 2) : (priv->caps.max_num_virtio_queues);
+    if (mlx5_vdpa_is_modify_virtq_supported(priv))
+        mlx5_vdpa_steer_unset(priv);
+    for (index = 0; index < max_queues; ++index) {
+        virtq = &priv->virtqs[index];
+        if (virtq->virtq) {
+            pthread_mutex_lock(&virtq->virtq_lock);
+            mlx5_vdpa_virtq_unset(virtq);
+            pthread_mutex_unlock(&virtq->virtq_lock);
+        }
+    }
+}
+
 static int
 mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
 {
-    uint32_t max_queues;
-    uint32_t index;
+    uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
+    uint32_t max_queues, index, thrd_idx, data[1];
    struct mlx5_vdpa_virtq *virtq;
 
    for (index = 0; index < priv->caps.max_num_virtio_queues;
@@ -640,25 +675,53 @@ mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
        return 0;
    max_queues = (priv->queues < priv->caps.max_num_virtio_queues) ?
        (priv->queues * 2) : (priv->caps.max_num_virtio_queues);
-    for (index = 0; index < max_queues; ++index)
-        if (mlx5_vdpa_virtq_single_resource_prepare(priv,
-            index))
+    if (priv->use_c_thread) {
+        uint32_t main_task_idx[max_queues];
+
+        for (index = 0; index < max_queues; ++index) {
+            thrd_idx = index % (conf_thread_mng.max_thrds + 1);
+            if (!thrd_idx) {
+                main_task_idx[task_num] = index;
+                task_num++;
+                continue;
+            }
+            thrd_idx = priv->last_c_thrd_idx + 1;
+            if (thrd_idx >= conf_thread_mng.max_thrds)
+                thrd_idx = 0;
+            priv->last_c_thrd_idx = thrd_idx;
+            data[0] = index;
+            if (mlx5_vdpa_task_add(priv, thrd_idx,
+                MLX5_VDPA_TASK_PREPARE_VIRTQ,
+                &remaining_cnt, &err_cnt,
+                (void **)&data, 1)) {
+                DRV_LOG(ERR, "Fail to add "
+                "task prepare virtq (%d).", index);
+                main_task_idx[task_num] = index;
+                task_num++;
+            }
+        }
+        for (index = 0; index < task_num; ++index)
+            if (mlx5_vdpa_virtq_single_resource_prepare(priv,
+                main_task_idx[index]))
+                goto error;
+        if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+            &err_cnt, 2000)) {
+            DRV_LOG(ERR,
+            "Failed to wait virt-queue prepare tasks ready.");
            goto error;
+        }
+    } else {
+        for (index = 0; index < max_queues; ++index)
+            if (mlx5_vdpa_virtq_single_resource_prepare(priv,
+                index))
+                goto error;
+    }
    if (mlx5_vdpa_is_modify_virtq_supported(priv))
        if (mlx5_vdpa_steer_update(priv, true))
            goto error;
    return 0;
 error:
-    for (index = 0; index < max_queues; ++index) {
-        virtq = &priv->virtqs[index];
-        if (virtq->virtq) {
-            pthread_mutex_lock(&virtq->virtq_lock);
-            mlx5_vdpa_virtq_unset(virtq);
-            pthread_mutex_unlock(&virtq->virtq_lock);
-        }
-    }
-    if (mlx5_vdpa_is_modify_virtq_supported(priv))
-        mlx5_vdpa_steer_unset(priv);
+    mlx5_vdpa_prepare_virtq_destroy(priv);
    return -1;
 }
 
@@ -860,7 +923,7 @@ static void
 mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv)
 {
    if (priv->state == MLX5_VDPA_STATE_CONFIGURED)
-        mlx5_vdpa_dev_close(priv->vid);
+        _internal_mlx5_vdpa_dev_close(priv, true);
    if (priv->use_c_thread)
        mlx5_vdpa_wait_dev_close_tasks_done(priv);
    mlx5_vdpa_release_dev_resources(priv);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index f353db62ac..dc4dfba5ed 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -85,6 +85,7 @@ enum mlx5_vdpa_task_type {
    MLX5_VDPA_TASK_SETUP_VIRTQ,
    MLX5_VDPA_TASK_STOP_VIRTQ,
    MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT,
+    MLX5_VDPA_TASK_PREPARE_VIRTQ,
 };
 
 /* Generic task information and size must be multiple of 4B. */
@@ -128,6 +129,9 @@ struct mlx5_vdpa_virtq {
    uint32_t configured:1;
    uint32_t enable:1;
    uint32_t stopped:1;
+    uint32_t rx_csum:1;
+    uint32_t virtio_version_1_0:1;
+    uint32_t event_mode:3;
    uint32_t version;
    pthread_mutex_t virtq_lock;
    struct mlx5_vdpa_priv *priv;
@@ -355,8 +359,12 @@ void mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv);
  *
  * @param[in] priv
  *   The vdpa driver private structure.
+ * @param[in] release_resource
+ *   The vdpa driver release resource without prepare resource.
  */
-void mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv);
+void
+mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv,
+        bool release_resource);
 
 /**
  * Cleanup cached resources of all virtqs.
@@ -595,4 +603,6 @@ int mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp);
 void
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq);
 
+void
+mlx5_vdpa_prepare_virtq_destroy(struct mlx5_vdpa_priv *priv);
+
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index bb2279440b..6e6624e5a3 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -153,6 +153,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
                __atomic_fetch_add(
                    task.err_cnt, 1,
                    __ATOMIC_RELAXED);
            }
+            virtq->enable = 1;
            pthread_mutex_unlock(&virtq->virtq_lock);
            break;
        case MLX5_VDPA_TASK_STOP_VIRTQ:
@@ -193,7 +194,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
            pthread_mutex_lock(&priv->steer_update_lock);
            mlx5_vdpa_steer_unset(priv);
            pthread_mutex_unlock(&priv->steer_update_lock);
-            mlx5_vdpa_virtqs_release(priv);
+            mlx5_vdpa_virtqs_release(priv, false);
            mlx5_vdpa_drain_cq(priv);
            if (priv->lm_mr.addr)
                mlx5_os_wrapped_mkey_destroy(
@@ -205,6 +206,18 @@ mlx5_vdpa_c_thread_handle(void *arg)
                &priv->dev_close_progress, 0,
                __ATOMIC_RELAXED);
            break;
+        case MLX5_VDPA_TASK_PREPARE_VIRTQ:
+            ret = mlx5_vdpa_virtq_single_resource_prepare(
+                    priv, task.idx);
+            if (ret) {
+                DRV_LOG(ERR,
+                "Failed to prepare virtq %d.",
+                task.idx);
+                __atomic_fetch_add(
+                task.err_cnt, 1,
+                __ATOMIC_RELAXED);
+            }
+            break;
        default:
            DRV_LOG(ERR, "Invalid vdpa task type %d.",
                task.type);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 20ce382487..d4dd73f861 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -116,18 +116,29 @@ mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv)
    }
 }
 
+static void
+mlx5_vdpa_vq_destroy(struct mlx5_vdpa_virtq *virtq)
+{
+    /* Clean pre-created resource in dev removal only */
+    claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
+    virtq->index = 0;
+    virtq->virtq = NULL;
+    virtq->configured = 0;
+}
+
 /* Release cached VQ resources. */
 void
 mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 {
    unsigned int i, j;
 
+    mlx5_vdpa_steer_unset(priv);
    for (i = 0; i < priv->caps.max_num_virtio_queues; i++) {
        struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
 
-        if (virtq->index != i)
-            continue;
        pthread_mutex_lock(&virtq->virtq_lock);
+        if (virtq->virtq)
+            mlx5_vdpa_vq_destroy(virtq);
        for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
            if (virtq->umems[j].obj) {
                claim_zero(mlx5_glue->devx_umem_dereg
@@ -157,29 +168,37 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
        if (ret)
            DRV_LOG(WARNING, "Failed to stop virtq %d.",
                virtq->index);
-        claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
-        virtq->index = 0;
-        virtq->virtq = NULL;
-        virtq->configured = 0;
    }
+    mlx5_vdpa_vq_destroy(virtq);
    virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED;
 }
 
 void
-mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
+mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv,
+        bool release_resource)
 {
    struct mlx5_vdpa_virtq *virtq;
-    int i;
-
-    for (i = 0; i < priv->nr_virtqs; i++) {
+    uint32_t i, max_virtq, valid_vq_num;
+
+    valid_vq_num = ((priv->queues * 2) < priv->caps.max_num_virtio_queues) ?
+        (priv->queues * 2) : priv->caps.max_num_virtio_queues;
+    max_virtq = (release_resource &&
+        (valid_vq_num) > priv->nr_virtqs) ?
+        (valid_vq_num) : priv->nr_virtqs;
+    for (i = 0; i < max_virtq; i++) {
        virtq = &priv->virtqs[i];
        pthread_mutex_lock(&virtq->virtq_lock);
        mlx5_vdpa_virtq_unset(virtq);
-        if (i < (priv->queues * 2))
+        virtq->enable = 0;
+        if (!release_resource && i < valid_vq_num)
            mlx5_vdpa_virtq_single_resource_prepare(
                    priv, i);
        pthread_mutex_unlock(&virtq->virtq_lock);
    }
+    if (!release_resource && priv->queues &&
+        mlx5_vdpa_is_modify_virtq_supported(priv))
+        if (mlx5_vdpa_steer_update(priv, true))
+            mlx5_vdpa_steer_unset(priv);
    priv->features = 0;
    priv->nr_virtqs = 0;
 }
@@ -455,6 +474,9 @@ mlx5_vdpa_virtq_single_resource_prepare(struct mlx5_vdpa_priv *priv,
        virtq->priv = priv;
        if (!virtq->virtq)
            return true;
+        virtq->rx_csum = attr.rx_csum;
+        virtq->virtio_version_1_0 = attr.virtio_version_1_0;
+        virtq->event_mode = attr.event_mode;
    }
    return false;
 }
@@ -538,6 +560,9 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick)
        goto error;
    }
    claim_zero(rte_vhost_enable_guest_notification(priv->vid, index, 1));
+    virtq->rx_csum = attr.rx_csum;
+    virtq->virtio_version_1_0 = attr.virtio_version_1_0;
+    virtq->event_mode = attr.event_mode;
    virtq->configured = 1;
    rte_spinlock_lock(&priv->db_lock);
    rte_write32(virtq->index, priv->virtq_db_addr);
@@ -629,6 +654,31 @@ mlx5_vdpa_features_validate(struct mlx5_vdpa_priv *priv)
    return 0;
 }
 
+static bool
+mlx5_vdpa_is_pre_created_vq_mismatch(struct mlx5_vdpa_priv *priv,
+        struct mlx5_vdpa_virtq *virtq)
+{
+    struct rte_vhost_vring vq;
+    uint32_t event_mode;
+
+    if (virtq->rx_csum !=
+        !!(priv->features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)))
+        return true;
+    if (virtq->virtio_version_1_0 !=
+        !!(priv->features & (1ULL << VIRTIO_F_VERSION_1)))
+        return true;
+    if (rte_vhost_get_vhost_vring(priv->vid, virtq->index, &vq))
+        return true;
+    if (vq.size != virtq->vq_size)
+        return true;
+    event_mode = vq.callfd != -1 || !(priv->caps.event_mode &
+        (1 << MLX5_VIRTQ_EVENT_MODE_NO_MSIX)) ?
+        MLX5_VIRTQ_EVENT_MODE_QP : MLX5_VIRTQ_EVENT_MODE_NO_MSIX;
+    if (virtq->event_mode != event_mode)
+        return true;
+    return false;
+}
+
 int
 mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 {
@@ -664,6 +714,15 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
        virtq = &priv->virtqs[i];
        if (!virtq->enable)
            continue;
+        if (priv->queues && virtq->virtq) {
+            if (mlx5_vdpa_is_pre_created_vq_mismatch(priv, virtq)) {
+                mlx5_vdpa_prepare_virtq_destroy(priv);
+                i = 0;
+                virtq = &priv->virtqs[i];
+                if (!virtq->enable)
+                    continue;
+            }
+        }
        thrd_idx = i % (conf_thread_mng.max_thrds + 1);
        if (!thrd_idx) {
            main_task_idx[task_num] = i;
@@ -693,6 +752,7 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
            pthread_mutex_unlock(&virtq->virtq_lock);
            goto error;
        }
+        virtq->enable = 1;
        pthread_mutex_unlock(&virtq->virtq_lock);
    }
    if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
@@ -724,20 +784,32 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
    } else {
        for (i = 0; i < nr_vring; i++) {
            virtq = &priv->virtqs[i];
+            if (!virtq->enable)
+                continue;
+            if (priv->queues && virtq->virtq) {
+                if (mlx5_vdpa_is_pre_created_vq_mismatch(priv,
+                    virtq)) {
+                    mlx5_vdpa_prepare_virtq_destroy(
+                    priv);
+                    i = 0;
+                    virtq = &priv->virtqs[i];
+                    if (!virtq->enable)
+                        continue;
+                }
+            }
            pthread_mutex_lock(&virtq->virtq_lock);
-            if (virtq->enable) {
-                if (mlx5_vdpa_virtq_setup(priv, i, true)) {
-                    pthread_mutex_unlock(
+            if (mlx5_vdpa_virtq_setup(priv, i, true)) {
+                pthread_mutex_unlock(
                    &virtq->virtq_lock);
-                    goto error;
-                }
+                goto error;
            }
+            virtq->enable = 1;
            pthread_mutex_unlock(&virtq->virtq_lock);
        }
    }
    return 0;
 error:
-    mlx5_vdpa_virtqs_release(priv);
+    mlx5_vdpa_virtqs_release(priv, true);
    return -1;
 }
 
@@ -795,6 +867,11 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
                "for virtq %d.", index);
        }
        mlx5_vdpa_virtq_unset(virtq);
+    } else {
+        if (virtq->virtq &&
+            mlx5_vdpa_is_pre_created_vq_mismatch(priv, virtq))
+            DRV_LOG(WARNING,
+            "Configuration mismatch dummy virtq %d.", index);
    }
    if (enable) {
        ret = mlx5_vdpa_virtq_setup(priv, index, true);