From patchwork Mon Apr 26 12:48:10 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 92178
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
CC: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Mon, 26 Apr 2021 15:48:10 +0300
Message-ID: <20210426124810.43210-1-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <1616061368-29768-1-git-send-email-michaelba@nvidia.com>
References: <1616061368-29768-1-git-send-email-michaelba@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v2] net/mlx5: workaround ASO memory region creation
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Due to a kernel issue in direct MKEY creation using the DevX API for
physical memory, this patch replaces the ASO MR creation with the
Verbs API.

Fixes: f935ed4b645a ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
v2: The second patch in the series did not work due to a FW issue; this
issue does not exist in this patch.

 drivers/common/mlx5/linux/mlx5_common_verbs.c |  1 -
 drivers/common/mlx5/windows/mlx5_common_os.c  | 23 ++++---
 drivers/net/mlx5/mlx5.h                       | 10 +--
 drivers/net/mlx5/mlx5_flow_aso.c              | 92 +++++++++++----------------
 4 files changed, 52 insertions(+), 74 deletions(-)

diff --git a/drivers/common/mlx5/linux/mlx5_common_verbs.c b/drivers/common/mlx5/linux/mlx5_common_verbs.c
index 339535d..aa560f0 100644
--- a/drivers/common/mlx5/linux/mlx5_common_verbs.c
+++ b/drivers/common/mlx5/linux/mlx5_common_verbs.c
@@ -37,7 +37,6 @@
 {
 	struct ibv_mr *ibv_mr;
 
-	memset(pmd_mr, 0, sizeof(*pmd_mr));
 	ibv_mr = mlx5_glue->reg_mr(pd, addr, length,
 				   IBV_ACCESS_LOCAL_WRITE |
 				   (haswell_broadwell_cpu ? 0 :
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c
index f2d781a..cebf42d 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.c
+++ b/drivers/common/mlx5/windows/mlx5_common_os.c
@@ -155,23 +155,22 @@
 	struct mlx5_devx_mkey_attr mkey_attr;
 	struct mlx5_pd *mlx5_pd = (struct mlx5_pd *)pd;
 	struct mlx5_hca_attr attr;
+	struct mlx5_devx_obj *mkey;
+	void *obj;
 
 	if (!pd || !addr) {
 		rte_errno = EINVAL;
 		return -1;
 	}
-	memset(pmd_mr, 0, sizeof(*pmd_mr));
 	if (mlx5_devx_cmd_query_hca_attr(mlx5_pd->devx_ctx, &attr))
 		return -1;
-	pmd_mr->addr = addr;
-	pmd_mr->len = length;
-	pmd_mr->obj = mlx5_os_umem_reg(mlx5_pd->devx_ctx, pmd_mr->addr,
-				       pmd_mr->len, IBV_ACCESS_LOCAL_WRITE);
-	if (!pmd_mr->obj)
+	obj = mlx5_os_umem_reg(mlx5_pd->devx_ctx, addr, length,
+			       IBV_ACCESS_LOCAL_WRITE);
+	if (!obj)
 		return -1;
 	mkey_attr.addr = (uintptr_t)addr;
 	mkey_attr.size = length;
-	mkey_attr.umem_id = ((struct mlx5_devx_umem *)(pmd_mr->obj))->umem_id;
+	mkey_attr.umem_id = ((struct mlx5_devx_umem *)(obj))->umem_id;
 	mkey_attr.pd = mlx5_pd->pdn;
 	mkey_attr.log_entity_size = 0;
 	mkey_attr.pg_access = 0;
@@ -183,11 +182,15 @@
 		mkey_attr.relaxed_ordering_write = attr.relaxed_ordering_write;
 		mkey_attr.relaxed_ordering_read = attr.relaxed_ordering_read;
 	}
-	pmd_mr->mkey = mlx5_devx_cmd_mkey_create(mlx5_pd->devx_ctx, &mkey_attr);
-	if (!pmd_mr->mkey) {
-		claim_zero(mlx5_os_umem_dereg(pmd_mr->obj));
+	mkey = mlx5_devx_cmd_mkey_create(mlx5_pd->devx_ctx, &mkey_attr);
+	if (!mkey) {
+		claim_zero(mlx5_os_umem_dereg(obj));
 		return -1;
 	}
+	pmd_mr->addr = addr;
+	pmd_mr->len = length;
+	pmd_mr->obj = obj;
+	pmd_mr->mkey = mkey;
 	pmd_mr->lkey = pmd_mr->mkey->id;
 	return 0;
 }
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 378b68e..a29b8d6 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -483,14 +483,6 @@ struct mlx5_aso_cq {
 	uint64_t errors;
 };
 
-struct mlx5_aso_devx_mr {
-	void *buf;
-	uint64_t length;
-	struct mlx5dv_devx_umem *umem;
-	struct mlx5_devx_obj *mkey;
-	bool is_indirect;
-};
-
 struct mlx5_aso_sq_elem {
 	union {
 		struct {
@@ -507,7 +499,7 @@ struct mlx5_aso_sq {
 	struct mlx5_aso_cq cq;
 	struct mlx5_devx_sq sq_obj;
 	volatile uint64_t *uar_addr;
-	struct mlx5_aso_devx_mr mr;
+	struct mlx5_pmd_mr mr;
 	uint16_t pi;
 	uint32_t head;
 	uint32_t tail;
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 20cd4fe..d9f8b14 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -60,76 +60,56 @@
 /**
  * Free MR resources.
  *
+ * @param[in] sh
+ *   Pointer to shared device context.
  * @param[in] mr
  *   MR to free.
  */
 static void
-mlx5_aso_devx_dereg_mr(struct mlx5_aso_devx_mr *mr)
+mlx5_aso_dereg_mr(struct mlx5_dev_ctx_shared *sh, struct mlx5_pmd_mr *mr)
 {
-	claim_zero(mlx5_devx_cmd_destroy(mr->mkey));
-	if (!mr->is_indirect && mr->umem)
-		claim_zero(mlx5_glue->devx_umem_dereg(mr->umem));
-	mlx5_free(mr->buf);
+	void *addr = mr->addr;
+
+	sh->share_cache.dereg_mr_cb(mr);
+	mlx5_free(addr);
 	memset(mr, 0, sizeof(*mr));
 }
 
 /**
  * Register Memory Region.
  *
- * @param[in] ctx
- *   Context returned from mlx5 open_device() glue function.
+ * @param[in] sh
+ *   Pointer to shared device context.
  * @param[in] length
  *   Size of MR buffer.
  * @param[in/out] mr
  *   Pointer to MR to create.
  * @param[in] socket
  *   Socket to use for allocation.
- * @param[in] pdn
- *   Protection Domain number to use.
  *
 * @return
 *   0 on success, a negative errno value otherwise and rte_errno is set.
 */
 static int
-mlx5_aso_devx_reg_mr(void *ctx, size_t length, struct mlx5_aso_devx_mr *mr,
-		     int socket, int pdn)
+mlx5_aso_reg_mr(struct mlx5_dev_ctx_shared *sh, size_t length,
+		struct mlx5_pmd_mr *mr, int socket)
 {
-	struct mlx5_devx_mkey_attr mkey_attr;
-
-	mr->buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, length, 4096,
-			      socket);
-	if (!mr->buf) {
-		DRV_LOG(ERR, "Failed to create ASO bits mem for MR by Devx.");
+	int ret;
+
+	mr->addr = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, length, 4096,
+			       socket);
+	if (!mr->addr) {
+		DRV_LOG(ERR, "Failed to create ASO bits mem for MR.");
 		return -1;
 	}
-	mr->umem = mlx5_os_umem_reg(ctx, mr->buf, length,
-				    IBV_ACCESS_LOCAL_WRITE);
-	if (!mr->umem) {
-		DRV_LOG(ERR, "Failed to register Umem for MR by Devx.");
-		goto error;
-	}
-	mkey_attr.addr = (uintptr_t)mr->buf;
-	mkey_attr.size = length;
-	mkey_attr.umem_id = mlx5_os_get_umem_id(mr->umem);
-	mkey_attr.pd = pdn;
-	mkey_attr.pg_access = 1;
-	mkey_attr.klm_array = NULL;
-	mkey_attr.klm_num = 0;
-	mkey_attr.relaxed_ordering_read = 0;
-	mkey_attr.relaxed_ordering_write = 0;
-	mr->mkey = mlx5_devx_cmd_mkey_create(ctx, &mkey_attr);
-	if (!mr->mkey) {
+	ret = sh->share_cache.reg_mr_cb(sh->pd, mr->addr, length, mr);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create direct Mkey.");
-		goto error;
+		mlx5_free(mr->addr);
+		return -1;
 	}
-	mr->length = length;
-	mr->is_indirect = false;
 	return 0;
-error:
-	if (mr->umem)
-		claim_zero(mlx5_glue->devx_umem_dereg(mr->umem));
-	mlx5_free(mr->buf);
-	return -1;
 }
 
 /**
@@ -164,8 +144,8 @@
 	for (i = 0, wqe = &sq->sq_obj.aso_wqes[0]; i < size; ++i, ++wqe) {
 		wqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) |
 							   (sizeof(*wqe) >> 4));
-		wqe->aso_cseg.lkey = rte_cpu_to_be_32(sq->mr.mkey->id);
-		addr = (uint64_t)((uint64_t *)sq->mr.buf + i *
+		wqe->aso_cseg.lkey = rte_cpu_to_be_32(sq->mr.lkey);
+		addr = (uint64_t)((uint64_t *)sq->mr.addr + i *
 				  MLX5_ASO_AGE_ACTIONS_PER_POOL / 64);
 		wqe->aso_cseg.va_h = rte_cpu_to_be_32((uint32_t)(addr >> 32));
 		wqe->aso_cseg.va_l_r = rte_cpu_to_be_32((uint32_t)addr | 1u);
@@ -227,14 +207,15 @@
  *   Protection Domain number to use.
  * @param[in] log_desc_n
  *   Log of number of descriptors in queue.
+ * @param[in] ts_format
+ *   timestamp format supported by the queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_aso_sq_create(void *ctx, struct mlx5_aso_sq *sq, int socket,
-		   void *uar, uint32_t pdn, uint16_t log_desc_n,
-		   uint32_t ts_format)
+mlx5_aso_sq_create(void *ctx, struct mlx5_aso_sq *sq, int socket, void *uar,
+		   uint32_t pdn, uint16_t log_desc_n, uint32_t ts_format)
 {
 	struct mlx5_devx_create_sq_attr attr = {
 		.user_index = 0xFFFF,
@@ -286,26 +267,27 @@
  *
  * @param[in] sh
  *   Pointer to shared device context.
+ * @param[in] aso_opc_mod
+ *   Mode of ASO feature.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
 mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh,
-		enum mlx5_access_aso_opc_mod aso_opc_mod)
+		    enum mlx5_access_aso_opc_mod aso_opc_mod)
 {
 	uint32_t sq_desc_n = 1 << MLX5_ASO_QUEUE_LOG_DESC;
 
 	switch (aso_opc_mod) {
 	case ASO_OPC_MOD_FLOW_HIT:
-		if (mlx5_aso_devx_reg_mr(sh->ctx,
-					 (MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) *
-					 sq_desc_n, &sh->aso_age_mng->aso_sq.mr, 0, sh->pdn))
+		if (mlx5_aso_reg_mr(sh, (MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) *
+				    sq_desc_n, &sh->aso_age_mng->aso_sq.mr, 0))
 			return -1;
 		if (mlx5_aso_sq_create(sh->ctx, &sh->aso_age_mng->aso_sq, 0,
 				       sh->tx_uar, sh->pdn, MLX5_ASO_QUEUE_LOG_DESC,
 				       sh->sq_ts_format)) {
-			mlx5_aso_devx_dereg_mr(&sh->aso_age_mng->aso_sq.mr);
+			mlx5_aso_dereg_mr(sh, &sh->aso_age_mng->aso_sq.mr);
 			return -1;
 		}
 		mlx5_aso_age_init_sq(&sh->aso_age_mng->aso_sq);
@@ -329,16 +311,18 @@
  *
  * @param[in] sh
  *   Pointer to shared device context.
+ * @param[in] aso_opc_mod
+ *   Mode of ASO feature.
  */
 void
 mlx5_aso_queue_uninit(struct mlx5_dev_ctx_shared *sh,
-		enum mlx5_access_aso_opc_mod aso_opc_mod)
+		      enum mlx5_access_aso_opc_mod aso_opc_mod)
 {
 	struct mlx5_aso_sq *sq;
 
 	switch (aso_opc_mod) {
 	case ASO_OPC_MOD_FLOW_HIT:
-		mlx5_aso_devx_dereg_mr(&sh->aso_age_mng->aso_sq.mr);
+		mlx5_aso_dereg_mr(sh, &sh->aso_age_mng->aso_sq.mr);
 		sq = &sh->aso_age_mng->aso_sq;
 		break;
 	case ASO_OPC_MOD_POLICER:
@@ -478,7 +462,7 @@
 		uint16_t idx = (sq->tail + i) & mask;
 		struct mlx5_aso_age_pool *pool = sq->elts[idx].pool;
 		uint64_t diff = curr - pool->time_of_last_age_check;
-		uint64_t *addr = sq->mr.buf;
+		uint64_t *addr = sq->mr.addr;
 		int j;
 
 		addr += idx * MLX5_ASO_AGE_ACTIONS_PER_POOL / 64;
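
For context on what the patch switches to: on Linux, the shared-cache
reg_mr_cb used by mlx5_aso_reg_mr() ends up in the Verbs-based helper
touched in the first hunk above, so the ASO bitmap buffer is registered
through a plain ibv_reg_mr() call and the WQEs consume the returned
lkey instead of a DevX-created MKEY. The snippet below is a minimal,
standalone sketch of that Verbs path against libibverbs; the device
selection, 4 KB buffer size, and error handling are illustrative
assumptions, not part of the patch.

/*
 * Illustrative only: register an anonymous buffer with a Verbs PD and
 * read back the lkey, mirroring what the PMD's reg_mr_cb does for the
 * ASO buffer on Linux. Build with: gcc example.c -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
	struct ibv_device **dev_list = ibv_get_device_list(NULL);
	struct ibv_context *ctx;
	struct ibv_pd *pd;
	struct ibv_mr *mr;
	size_t len = 4096;	/* assumed buffer size */
	void *buf;

	if (!dev_list || !dev_list[0])
		return EXIT_FAILURE;
	ctx = ibv_open_device(dev_list[0]);	/* first device, for the demo */
	if (!ctx)
		return EXIT_FAILURE;
	pd = ibv_alloc_pd(ctx);
	buf = aligned_alloc(4096, len);
	if (!pd || !buf)
		return EXIT_FAILURE;
	/* Verbs MR creation: the kernel builds the MKEY, no DevX command. */
	mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
	if (!mr)
		return EXIT_FAILURE;
	printf("lkey = 0x%x\n", mr->lkey);	/* what the ASO WQEs consume */
	ibv_dereg_mr(mr);
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(dev_list);
	free(buf);
	return EXIT_SUCCESS;
}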