From patchwork Mon Jan 11 13:59:17 2021
X-Patchwork-Submitter: Matan Azrad
X-Patchwork-Id: 86289
X-Patchwork-Delegate: gakhil@marvell.com
From: Matan Azrad
To: dev@dpdk.org
Cc: Thomas Monjalon, Ashish Gupta, Fiona Trahe
Date: Mon, 11 Jan 2021 13:59:17 +0000
Message-Id: <1610373560-253158-8-git-send-email-matan@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1610373560-253158-1-git-send-email-matan@nvidia.com>
References: <1610373560-253158-1-git-send-email-matan@nvidia.com>
Subject: [dpdk-dev] [PATCH 07/10] compress/mlx5: add memory region management

Mellanox user space drivers do not deal with physical addresses; any
mbuf virtual address is written directly to the HW descriptor (WQE).
The mapping from the virtual address to the physical address is saved
in an MR configured by the kernel to the HW. Each MR has a key that
the SW must also write into the WQE. When the SW sees an address that
is not mapped yet, it extends the address range and creates an MR for
it using a system call.

Add memory region cache management:
	- A 2-level cache per queue-pair, accessed without locks.
	- 1 cache shared between all the queues, protected by a lock.

This way, the MR key search per data-path address is optimized.

Signed-off-by: Matan Azrad <matan@nvidia.com>
---
 drivers/compress/mlx5/mlx5_compress.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index 132837e..ab24a84 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -46,6 +46,7 @@ struct mlx5_compress_priv {
 	struct rte_compressdev_config dev_config;
 	SLIST_HEAD(xform_list, mlx5_compress_xform) xform_list;
 	rte_spinlock_t xform_sl;
+	struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */
 };
 
 struct mlx5_compress_qp {
@@ -54,6 +55,7 @@ struct mlx5_compress_qp {
 	uint16_t pi;
 	uint16_t ci;
 	volatile uint64_t *uar_addr;
+	struct mlx5_mr_ctrl mr_ctrl;
 	int socket_id;
 	struct mlx5_devx_cq cq;
 	struct mlx5_devx_sq sq;
@@ -118,6 +120,7 @@ struct mlx5_compress_qp {
 		if (opaq != NULL)
 			rte_free(opaq);
 	}
+	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
 	rte_free(qp);
 	dev->data->queue_pairs[qp_id] = NULL;
 	return 0;
@@ -184,6 +187,13 @@ struct mlx5_compress_qp {
 		rte_errno = ENOMEM;
 		goto err;
 	}
+	if (mlx5_mr_btree_init(&qp->mr_ctrl.cache_bh, MLX5_MR_BTREE_CACHE_N,
+			       priv->dev_config.socket_id)) {
+		DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.",
+			(uint32_t)qp_id);
+		rte_errno = ENOMEM;
+		goto err;
+	}
 	qp->entries_n = 1 << log_ops_n;
 	qp->socket_id = socket_id;
 	qp->qp_id = qp_id;
@@ -513,6 +523,17 @@ struct mlx5_compress_qp {
 		claim_zero(mlx5_glue->close_device(priv->ctx));
 		return -1;
 	}
+	if (mlx5_mr_btree_init(&priv->mr_scache.cache,
+			       MLX5_MR_BTREE_CACHE_N * 2, rte_socket_id()) != 0) {
+		DRV_LOG(ERR, "Failed to allocate shared cache MR memory.");
+		mlx5_compress_hw_global_release(priv);
+		rte_compressdev_pmd_destroy(priv->cdev);
+		claim_zero(mlx5_glue->close_device(priv->ctx));
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	priv->mr_scache.reg_mr_cb = mlx5_common_verbs_reg_mr;
+	priv->mr_scache.dereg_mr_cb = mlx5_common_verbs_dereg_mr;
 	pthread_mutex_lock(&priv_list_lock);
 	TAILQ_INSERT_TAIL(&mlx5_compress_priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
@@ -547,6 +568,7 @@ struct mlx5_compress_qp {
 	TAILQ_REMOVE(&mlx5_compress_priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
 	if (found != 0) {
+		mlx5_mr_release_cache(&priv->mr_scache);
 		mlx5_compress_hw_global_release(priv);
 		rte_compressdev_pmd_destroy(priv->cdev);
 		claim_zero(mlx5_glue->close_device(priv->ctx));
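
The two-level lookup described in the commit message can be pictured with the
standalone C sketch below. It only illustrates the lookup order (queue-local
cache first with no lock, the shared cache under a lock next, registration on
a full miss); every name in it (qp_mr_cache, global_mr_cache, mr_lookup,
register_new_mr, the cache sizes and the dummy lkey) is hypothetical, and the
driver itself relies on the B-tree based mlx5 common MR code (mlx5_mr_ctrl,
mlx5_mr_share_cache, mlx5_mr_btree_init seen in the diff), not on linear
arrays like these.

/*
 * Illustrative two-level MR lookup (hypothetical, simplified; not the
 * DPDK mlx5 common MR implementation).
 * Level 1: small per-queue cache, no lock, filled and read only by the
 *          queue's datapath thread.
 * Level 2: global cache shared by all queues, protected by a mutex.
 * Full miss: the address range is "registered" (in the real driver this
 *            is the system call creating a new MR) and both levels are
 *            updated.
 */
#include <inttypes.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct mr_entry {
	uintptr_t start;
	uintptr_t end;
	uint32_t lkey; /* MR key the SW writes into the WQE. */
};

#define QP_CACHE_SIZE 8
#define GLOBAL_CACHE_SIZE 64

struct qp_mr_cache { /* Level 1: per queue-pair, lock-free. */
	struct mr_entry ent[QP_CACHE_SIZE];
	unsigned int n;
};

struct global_mr_cache { /* Level 2: shared by all queues. */
	pthread_mutex_t lock;
	struct mr_entry ent[GLOBAL_CACHE_SIZE];
	unsigned int n;
};

static struct global_mr_cache g_cache = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* Stand-in for the real registration path (kernel system call). */
static void register_new_mr(uintptr_t addr, struct mr_entry *out)
{
	out->start = addr & ~(uintptr_t)4095;
	out->end = out->start + 4096;
	out->lkey = 0x1234; /* Dummy key for the sketch. */
}

static uint32_t mr_lookup(struct qp_mr_cache *qc, uintptr_t addr)
{
	struct mr_entry e;
	unsigned int i;

	/* Level 1: expected fast path, no lock taken. */
	for (i = 0; i < qc->n; i++)
		if (addr >= qc->ent[i].start && addr < qc->ent[i].end)
			return qc->ent[i].lkey;
	/* Level 2: shared cache, needs the lock. */
	pthread_mutex_lock(&g_cache.lock);
	for (i = 0; i < g_cache.n; i++) {
		if (addr >= g_cache.ent[i].start && addr < g_cache.ent[i].end) {
			e = g_cache.ent[i];
			pthread_mutex_unlock(&g_cache.lock);
			goto fill_level1;
		}
	}
	/* Full miss: register the range and publish it in the shared cache. */
	register_new_mr(addr, &e);
	if (g_cache.n < GLOBAL_CACHE_SIZE)
		g_cache.ent[g_cache.n++] = e;
	pthread_mutex_unlock(&g_cache.lock);
fill_level1:
	/* Refill level 1 so the next lookup on this queue stays lock-free. */
	if (qc->n < QP_CACHE_SIZE)
		qc->ent[qc->n++] = e;
	return e.lkey;
}

int main(void)
{
	struct qp_mr_cache qc = { .n = 0 };
	char buf[64];

	/* First call misses both levels, second one hits level 1. */
	printf("lkey=0x%" PRIx32 "\n", mr_lookup(&qc, (uintptr_t)buf));
	printf("lkey=0x%" PRIx32 "\n", mr_lookup(&qc, (uintptr_t)buf));
	return 0;
}

The design point the sketch shows is the same one the commit message relies
on: the per-queue level can stay lock-free because only that queue's datapath
thread fills and reads it, so the lock on the shared level is taken only on a
per-queue miss.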