From patchwork Wed Aug 18 15:14:39 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Raja Zidane
X-Patchwork-Id: 97060
X-Patchwork-Delegate: thomas@monjalon.net
From: Raja Zidane
Date: Wed, 18 Aug 2021 18:14:39 +0300
Message-ID: <20210818151441.12400-2-rzidane@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210818151441.12400-1-rzidane@nvidia.com>
References: <20210818151441.12400-1-rzidane@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [RFC 1/3] common/mlx5: add common qp_create

Add common functions to create and destroy a DevX queue pair (QP),
including its WQE buffer, umem registration and doorbell record, and
use them in the mlx5 crypto and vdpa drivers instead of their
duplicated QP creation code.

Signed-off-by: Raja Zidane
---
 drivers/common/mlx5/mlx5_common_devx.c | 111 +++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h |  20 +++++
 drivers/common/mlx5/version.map        |   2 +
 drivers/crypto/mlx5/mlx5_crypto.c      |  80 +++++++-----------
 drivers/crypto/mlx5/mlx5_crypto.h      |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa.h          |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c    |  58 ++++---------
 7 files changed, 181 insertions(+), 100 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 22c8d356c4..640fe3bbb9 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -271,6 +271,117 @@ mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
         return -rte_errno;
 }
 
+/**
+ * Destroy DevX Queue Pair.
+ *
+ * @param[in] qp
+ *   DevX QP to destroy.
+ */
+void
+mlx5_devx_qp_destroy(struct mlx5_devx_qp *qp)
+{
+        if (qp->qp)
+                claim_zero(mlx5_devx_cmd_destroy(qp->qp));
+        if (qp->umem_obj)
+                claim_zero(mlx5_os_umem_dereg(qp->umem_obj));
+        if (qp->umem_buf)
+                mlx5_free((void *)(uintptr_t)qp->umem_buf);
+}
+
+/**
+ * Create Queue Pair using DevX API.
+ *
+ * Get a pointer to partially initialized attributes structure, and updates the
+ * following fields:
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_address
+ *   sq_size
+ *   log_page_size
+ *   rq_size
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] qp_obj
+ *   Pointer to QP to create.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to QP attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj, uint16_t log_wqbb_n,
+                    struct mlx5_devx_qp_attr *attr, int socket)
+{
+        struct mlx5_devx_obj *qp = NULL;
+        struct mlx5dv_devx_umem *umem_obj = NULL;
+        void *umem_buf = NULL;
+        size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+        uint32_t umem_size, umem_dbrec;
+        uint16_t qp_size = 1 << log_wqbb_n;
+        int ret;
+
+        if (alignment == (size_t)-1) {
+                DRV_LOG(ERR, "Failed to get WQE buf alignment.");
+                rte_errno = ENOMEM;
+                return -rte_errno;
+        }
+        /* Allocate memory buffer for WQEs and doorbell record. */
+        umem_size = MLX5_WQE_SIZE * qp_size;
+        umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+        umem_size += MLX5_DBR_SIZE;
+        umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+                               alignment, socket);
+        if (!umem_buf) {
+                DRV_LOG(ERR, "Failed to allocate memory for QP.");
+                rte_errno = ENOMEM;
+                return -rte_errno;
+        }
+        /* Register allocated buffer in user space with DevX. */
+        umem_obj = mlx5_os_umem_reg(ctx, (void *)(uintptr_t)umem_buf, umem_size,
+                                    IBV_ACCESS_LOCAL_WRITE);
+        if (!umem_obj) {
+                DRV_LOG(ERR, "Failed to register umem for QP.");
+                rte_errno = errno;
+                goto error;
+        }
+        /* Fill attributes for SQ object creation. */
+        attr->wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+        attr->wq_umem_offset = 0;
+        attr->dbr_umem_valid = 1;
+        attr->dbr_umem_id = attr->wq_umem_id;
+        attr->dbr_address = umem_dbrec;
+        attr->log_page_size = MLX5_LOG_PAGE_SIZE;
+        /* Create send queue object with DevX. */
+        qp = mlx5_devx_cmd_create_qp(ctx, attr);
+        if (!qp) {
+                DRV_LOG(ERR, "Can't create DevX QP object.");
+                rte_errno = ENOMEM;
+                goto error;
+        }
+        qp_obj->umem_buf = umem_buf;
+        qp_obj->umem_obj = umem_obj;
+        qp_obj->qp = qp;
+        qp_obj->db_rec = RTE_PTR_ADD(qp_obj->umem_buf, umem_dbrec);
+        return 0;
+error:
+        ret = rte_errno;
+        if (umem_obj)
+                claim_zero(mlx5_os_umem_dereg(umem_obj));
+        if (umem_buf)
+                mlx5_free((void *)(uintptr_t)umem_buf);
+        rte_errno = ret;
+        return -rte_errno;
+}
+
 /**
  * Destroy DevX Receive Queue.
  *
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index aad0184e5a..b05260b401 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -33,6 +33,18 @@ struct mlx5_devx_sq {
         volatile uint32_t *db_rec; /* The SQ doorbell record. */
 };
 
+/* DevX Queue Pair structure. */
+struct mlx5_devx_qp {
+        struct mlx5_devx_obj *qp; /* The QP DevX object. */
+        void *umem_obj; /* The QP umem object. */
+        union {
+                void *umem_buf;
+                struct mlx5_wqe *wqes; /* The QP ring buffer. */
+                struct mlx5_aso_wqe *aso_wqes;
+        };
+        volatile uint32_t *db_rec; /* The QP doorbell record. */
+};
+
 /* DevX Receive Queue structure. */
 struct mlx5_devx_rq {
         struct mlx5_devx_obj *rq; /* The RQ DevX object. */
@@ -59,6 +71,14 @@ int mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj,
                         uint16_t log_wqbb_n,
                         struct mlx5_devx_create_sq_attr *attr, int socket);
 
+__rte_internal
+void mlx5_devx_qp_destroy(struct mlx5_devx_qp *qp);
+
+__rte_internal
+int mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj,
+                        uint16_t log_wqbb_n,
+                        struct mlx5_devx_qp_attr *attr, int socket);
+
 __rte_internal
 void mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq);
 
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index e5cb6b7060..9487f787b6 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -71,6 +71,8 @@ INTERNAL {
         mlx5_devx_rq_destroy;
         mlx5_devx_sq_create;
         mlx5_devx_sq_destroy;
+        mlx5_devx_qp_create;
+        mlx5_devx_qp_destroy;
 
         mlx5_free;
 
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index b3d5200ca3..c66a3a7add 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -257,12 +257,12 @@ mlx5_crypto_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
 {
         struct mlx5_crypto_qp *qp = dev->data->queue_pairs[qp_id];
 
-        if (qp->qp_obj != NULL)
-                claim_zero(mlx5_devx_cmd_destroy(qp->qp_obj));
-        if (qp->umem_obj != NULL)
-                claim_zero(mlx5_glue->devx_umem_dereg(qp->umem_obj));
-        if (qp->umem_buf != NULL)
-                rte_free(qp->umem_buf);
+        if (qp->qp_obj.qp != NULL)
+                claim_zero(mlx5_devx_cmd_destroy(qp->qp_obj.qp));
+        if (qp->qp_obj.umem_obj != NULL)
+                claim_zero(mlx5_glue->devx_umem_dereg(qp->qp_obj.umem_obj));
+        if (qp->qp_obj.umem_buf != NULL)
+                rte_free(qp->qp_obj.umem_buf);
         mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
         mlx5_devx_cq_destroy(&qp->cq_obj);
         rte_free(qp);
@@ -277,20 +277,20 @@ mlx5_crypto_qp2rts(struct mlx5_crypto_qp *qp)
          * In Order to configure self loopback, when calling these functions the
          * remote QP id that is used is the id of the same QP.
          */
-        if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj, MLX5_CMD_OP_RST2INIT_QP,
-                                          qp->qp_obj->id)) {
+        if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj.qp, MLX5_CMD_OP_RST2INIT_QP,
+                                          qp->qp_obj.qp->id)) {
                 DRV_LOG(ERR, "Failed to modify QP to INIT state(%u).",
                         rte_errno);
                 return -1;
         }
-        if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj, MLX5_CMD_OP_INIT2RTR_QP,
-                                          qp->qp_obj->id)) {
+        if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj.qp, MLX5_CMD_OP_INIT2RTR_QP,
+                                          qp->qp_obj.qp->id)) {
                 DRV_LOG(ERR, "Failed to modify QP to RTR state(%u).",
                         rte_errno);
                 return -1;
         }
-        if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj, MLX5_CMD_OP_RTR2RTS_QP,
-                                          qp->qp_obj->id)) {
+        if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj.qp, MLX5_CMD_OP_RTR2RTS_QP,
+                                          qp->qp_obj.qp->id)) {
                 DRV_LOG(ERR, "Failed to modify QP to RTS state(%u).",
                         rte_errno);
                 return -1;
@@ -452,7 +452,7 @@ mlx5_crypto_wqe_set(struct mlx5_crypto_priv *priv,
                 memcpy(klms, &umr->kseg[0], sizeof(*klms) * klm_n);
         }
         ds = 2 + klm_n;
-        cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj->id << 8) | ds);
+        cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) | ds);
         cseg->opcode = rte_cpu_to_be_32((qp->db_pi << 8) |
                                                         MLX5_OPCODE_RDMA_WRITE);
         ds = RTE_ALIGN(ds, 4);
@@ -461,7 +461,7 @@ mlx5_crypto_wqe_set(struct mlx5_crypto_priv *priv,
         if (priv->max_rdmar_ds > ds) {
                 cseg += ds;
                 ds = priv->max_rdmar_ds - ds;
-                cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj->id << 8) | ds);
+                cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) | ds);
                 cseg->opcode = rte_cpu_to_be_32((qp->db_pi << 8) |
                                                 MLX5_OPCODE_NOP);
                 qp->db_pi += ds >> 2; /* Here, DS is 4 aligned for sure. */
@@ -503,7 +503,7 @@ mlx5_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
                 return 0;
         do {
                 op = *ops++;
-                umr = RTE_PTR_ADD(qp->umem_buf, priv->wqe_set_size * qp->pi);
+                umr = RTE_PTR_ADD(qp->qp_obj.umem_buf, priv->wqe_set_size * qp->pi);
                 if (unlikely(mlx5_crypto_wqe_set(priv, qp, op, umr) == 0)) {
                         qp->stats.enqueue_err_count++;
                         if (remain != nb_ops) {
@@ -517,7 +517,7 @@ mlx5_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
         } while (--remain);
         qp->stats.enqueued_count += nb_ops;
         rte_io_wmb();
-        qp->db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->db_pi);
+        qp->qp_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->db_pi);
         rte_wmb();
         mlx5_crypto_uar_write(*(volatile uint64_t *)qp->wqe, qp->priv);
         rte_wmb();
@@ -583,7 +583,7 @@ mlx5_crypto_qp_init(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp)
         uint32_t i;
 
         for (i = 0 ; i < qp->entries_n; i++) {
-                struct mlx5_wqe_cseg *cseg = RTE_PTR_ADD(qp->umem_buf, i *
+                struct mlx5_wqe_cseg *cseg = RTE_PTR_ADD(qp->qp_obj.umem_buf, i *
                         priv->wqe_set_size);
                 struct mlx5_wqe_umr_cseg *ucseg = (struct mlx5_wqe_umr_cseg *)
                                                                      (cseg + 1);
@@ -593,7 +593,7 @@ mlx5_crypto_qp_init(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp)
                 struct mlx5_wqe_rseg *rseg;
 
                 /* Init UMR WQE. */
-                cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj->id << 8) |
+                cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) |
                                  (priv->umr_wqe_size / MLX5_WSEG_SIZE));
                 cseg->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
                                        MLX5_COMP_MODE_OFFSET);
@@ -628,7 +628,7 @@ mlx5_crypto_indirect_mkeys_prepare(struct mlx5_crypto_priv *priv,
                 .klm_num = RTE_ALIGN(priv->max_segs_num, 4),
         };
 
-        for (umr = (struct mlx5_umr_wqe *)qp->umem_buf, i = 0;
+        for (umr = (struct mlx5_umr_wqe *)qp->qp_obj.umem_buf, i = 0;
              i < qp->entries_n; i++, umr = RTE_PTR_ADD(umr, priv->wqe_set_size)) {
                 attr.klm_array = (struct mlx5_klm *)&umr->kseg[0];
                 qp->mkey[i] = mlx5_devx_cmd_mkey_create(priv->ctx, &attr);
@@ -649,9 +649,7 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
         struct mlx5_devx_qp_attr attr = {0};
         struct mlx5_crypto_qp *qp;
         uint16_t log_nb_desc = rte_log2_u32(qp_conf->nb_descriptors);
-        uint32_t umem_size = RTE_BIT32(log_nb_desc) *
-                        priv->wqe_set_size +
-                        sizeof(*qp->db_rec) * 2;
+        uint32_t ret;
         uint32_t alloc_size = sizeof(*qp);
         struct mlx5_devx_cq_attr cq_attr = {
                 .uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar),
@@ -675,18 +673,15 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
                 DRV_LOG(ERR, "Failed to create CQ.");
                 goto error;
         }
-        qp->umem_buf = rte_zmalloc_socket(__func__, umem_size, 4096, socket_id);
-        if (qp->umem_buf == NULL) {
-                DRV_LOG(ERR, "Failed to allocate QP umem.");
-                rte_errno = ENOMEM;
-                goto error;
-        }
-        qp->umem_obj = mlx5_glue->devx_umem_reg(priv->ctx,
-                                               (void *)(uintptr_t)qp->umem_buf,
-                                               umem_size,
-                                               IBV_ACCESS_LOCAL_WRITE);
-        if (qp->umem_obj == NULL) {
-                DRV_LOG(ERR, "Failed to register QP umem.");
+        /* fill attributes*/
+        attr.pd = priv->pdn;
+        attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar);
+        attr.cqn = qp->cq_obj.cq->id;
+        attr.rq_size = 0;
+        attr.sq_size = RTE_BIT32(log_nb_desc);
+        ret = mlx5_devx_qp_create(priv->ctx, &qp->qp_obj, log_nb_desc, &attr, socket_id);
+        if(ret) {
+                DRV_LOG(ERR, "Failed to create QP");
                 goto error;
         }
         if (mlx5_mr_btree_init(&qp->mr_ctrl.cache_bh, MLX5_MR_BTREE_CACHE_N,
@@ -697,23 +692,6 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
                 goto error;
         }
         qp->mr_ctrl.dev_gen_ptr = &priv->mr_scache.dev_gen;
-        attr.pd = priv->pdn;
-        attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar);
-        attr.cqn = qp->cq_obj.cq->id;
-        attr.log_page_size = rte_log2_u32(sysconf(_SC_PAGESIZE));
-        attr.rq_size = 0;
-        attr.sq_size = RTE_BIT32(log_nb_desc);
-        attr.dbr_umem_valid = 1;
-        attr.wq_umem_id = qp->umem_obj->umem_id;
-        attr.wq_umem_offset = 0;
-        attr.dbr_umem_id = qp->umem_obj->umem_id;
-        attr.dbr_address = RTE_BIT64(log_nb_desc) * priv->wqe_set_size;
-        qp->qp_obj = mlx5_devx_cmd_create_qp(priv->ctx, &attr);
-        if (qp->qp_obj == NULL) {
-                DRV_LOG(ERR, "Failed to create QP(%u).", rte_errno);
-                goto error;
-        }
-        qp->db_rec = RTE_PTR_ADD(qp->umem_buf, (uintptr_t)attr.dbr_address);
         if (mlx5_crypto_qp2rts(qp))
                 goto error;
         qp->mkey = (struct mlx5_devx_obj **)RTE_ALIGN((uintptr_t)(qp + 1),
diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index d49b0001f0..013eed30b5 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -43,11 +43,8 @@ struct mlx5_crypto_priv {
 struct mlx5_crypto_qp {
         struct mlx5_crypto_priv *priv;
         struct mlx5_devx_cq cq_obj;
-        struct mlx5_devx_obj *qp_obj;
+        struct mlx5_devx_qp qp_obj;
         struct rte_cryptodev_stats stats;
-        struct mlx5dv_devx_umem *umem_obj;
-        void *umem_buf;
-        volatile uint32_t *db_rec;
         struct rte_crypto_op **ops;
         struct mlx5_devx_obj **mkey; /* WQE's indirect mekys. */
         struct mlx5_mr_ctrl mr_ctrl;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 2a04e36607..a27f3fdadb 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -54,10 +54,7 @@ struct mlx5_vdpa_cq {
 struct mlx5_vdpa_event_qp {
         struct mlx5_vdpa_cq cq;
         struct mlx5_devx_obj *fw_qp;
-        struct mlx5_devx_obj *sw_qp;
-        struct mlx5dv_devx_umem *umem_obj;
-        void *umem_buf;
-        volatile uint32_t *db_rec;
+        struct mlx5_devx_qp sw_qp;
 };
 
 struct mlx5_vdpa_query_mr {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 3541c652ce..d327a605fa 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -179,7 +179,7 @@ mlx5_vdpa_cq_poll(struct mlx5_vdpa_cq *cq)
                 cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
                 rte_io_wmb();
                 /* Ring SW QP doorbell record. */
-                eqp->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci + cq_size);
+                eqp->sw_qp.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci + cq_size);
         }
         return comp;
 }
@@ -531,12 +531,12 @@ mlx5_vdpa_cqe_event_unset(struct mlx5_vdpa_priv *priv)
 void
 mlx5_vdpa_event_qp_destroy(struct mlx5_vdpa_event_qp *eqp)
 {
-        if (eqp->sw_qp)
-                claim_zero(mlx5_devx_cmd_destroy(eqp->sw_qp));
-        if (eqp->umem_obj)
-                claim_zero(mlx5_glue->devx_umem_dereg(eqp->umem_obj));
-        if (eqp->umem_buf)
-                rte_free(eqp->umem_buf);
+        if (eqp->sw_qp.qp)
+                claim_zero(mlx5_devx_cmd_destroy(eqp->sw_qp.qp));
+        if (eqp->sw_qp.umem_obj)
+                claim_zero(mlx5_glue->devx_umem_dereg(eqp->sw_qp.umem_obj));
+        if (eqp->sw_qp.umem_buf)
+                rte_free(eqp->sw_qp.umem_buf);
         if (eqp->fw_qp)
                 claim_zero(mlx5_devx_cmd_destroy(eqp->fw_qp));
         mlx5_vdpa_cq_destroy(&eqp->cq);
@@ -547,36 +547,36 @@ static int
 mlx5_vdpa_qps2rts(struct mlx5_vdpa_event_qp *eqp)
 {
         if (mlx5_devx_cmd_modify_qp_state(eqp->fw_qp, MLX5_CMD_OP_RST2INIT_QP,
-                                          eqp->sw_qp->id)) {
+                                          eqp->sw_qp.qp->id)) {
                 DRV_LOG(ERR, "Failed to modify FW QP to INIT state(%u).",
                         rte_errno);
                 return -1;
         }
-        if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp, MLX5_CMD_OP_RST2INIT_QP,
+        if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp.qp, MLX5_CMD_OP_RST2INIT_QP,
                                           eqp->fw_qp->id)) {
                 DRV_LOG(ERR, "Failed to modify SW QP to INIT state(%u).",
                         rte_errno);
                 return -1;
         }
         if (mlx5_devx_cmd_modify_qp_state(eqp->fw_qp, MLX5_CMD_OP_INIT2RTR_QP,
-                                          eqp->sw_qp->id)) {
+                                          eqp->sw_qp.qp->id)) {
                 DRV_LOG(ERR, "Failed to modify FW QP to RTR state(%u).",
                         rte_errno);
                 return -1;
         }
-        if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp, MLX5_CMD_OP_INIT2RTR_QP,
+        if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp.qp, MLX5_CMD_OP_INIT2RTR_QP,
                                           eqp->fw_qp->id)) {
                 DRV_LOG(ERR, "Failed to modify SW QP to RTR state(%u).",
                         rte_errno);
                 return -1;
         }
         if (mlx5_devx_cmd_modify_qp_state(eqp->fw_qp, MLX5_CMD_OP_RTR2RTS_QP,
-                                          eqp->sw_qp->id)) {
+                                          eqp->sw_qp.qp->id)) {
                 DRV_LOG(ERR, "Failed to modify FW QP to RTS state(%u).",
                         rte_errno);
                 return -1;
         }
-        if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp, MLX5_CMD_OP_RTR2RTS_QP,
+        if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp.qp, MLX5_CMD_OP_RTR2RTS_QP,
                                           eqp->fw_qp->id)) {
                 DRV_LOG(ERR, "Failed to modify SW QP to RTS state(%u).",
                         rte_errno);
@@ -591,8 +591,7 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 {
         struct mlx5_devx_qp_attr attr = {0};
         uint16_t log_desc_n = rte_log2_u32(desc_n);
-        uint32_t umem_size = (1 << log_desc_n) * MLX5_WSEG_SIZE +
-                             sizeof(*eqp->db_rec) * 2;
+        uint32_t ret;
 
         if (mlx5_vdpa_event_qp_global_prepare(priv))
                 return -1;
@@ -605,42 +604,19 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
                 DRV_LOG(ERR, "Failed to create FW QP(%u).", rte_errno);
                 goto error;
         }
-        eqp->umem_buf = rte_zmalloc(__func__, umem_size, 4096);
-        if (!eqp->umem_buf) {
-                DRV_LOG(ERR, "Failed to allocate memory for SW QP.");
-                rte_errno = ENOMEM;
-                goto error;
-        }
-        eqp->umem_obj = mlx5_glue->devx_umem_reg(priv->ctx,
-                                               (void *)(uintptr_t)eqp->umem_buf,
-                                               umem_size,
-                                               IBV_ACCESS_LOCAL_WRITE);
-        if (!eqp->umem_obj) {
-                DRV_LOG(ERR, "Failed to register umem for SW QP.");
-                goto error;
-        }
-        attr.uar_index = priv->uar->page_id;
-        attr.cqn = eqp->cq.cq_obj.cq->id;
-        attr.log_page_size = rte_log2_u32(sysconf(_SC_PAGESIZE));
         attr.rq_size = 1 << log_desc_n;
         attr.log_rq_stride = rte_log2_u32(MLX5_WSEG_SIZE);
         attr.sq_size = 0; /* No need SQ. */
-        attr.dbr_umem_valid = 1;
-        attr.wq_umem_id = eqp->umem_obj->umem_id;
-        attr.wq_umem_offset = 0;
-        attr.dbr_umem_id = eqp->umem_obj->umem_id;
         attr.ts_format = mlx5_ts_format_conv(priv->qp_ts_format);
-        attr.dbr_address = RTE_BIT64(log_desc_n) * MLX5_WSEG_SIZE;
-        eqp->sw_qp = mlx5_devx_cmd_create_qp(priv->ctx, &attr);
-        if (!eqp->sw_qp) {
+        ret = mlx5_devx_qp_create(priv->ctx, &(eqp->sw_qp), log_desc_n, &attr, SOCKET_ID_ANY);
+        if (ret) {
                 DRV_LOG(ERR, "Failed to create SW QP(%u).", rte_errno);
                 goto error;
         }
-        eqp->db_rec = RTE_PTR_ADD(eqp->umem_buf, (uintptr_t)attr.dbr_address);
         if (mlx5_vdpa_qps2rts(eqp))
                 goto error;
         /* First ringing. */
-        rte_write32(rte_cpu_to_be_32(1 << log_desc_n), &eqp->db_rec[0]);
+        rte_write32(rte_cpu_to_be_32(1 << log_desc_n), &eqp->sw_qp.db_rec[0]);
         return 0;
 error:
         mlx5_vdpa_event_qp_destroy(eqp);
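
Note for reviewers: below is a minimal usage sketch (not part of the patch) of
how a driver queue could consume the new helpers. Only mlx5_devx_qp_create(),
mlx5_devx_qp_destroy(), struct mlx5_devx_qp and the attr fields used in this
patch are real; example_qp_setup(), example_qp_release(), ctx, pdn, uar_id and
cqn are hypothetical placeholders for values obtained during device probe.

/*
 * Sketch only: caller fills PD/UAR/CQ and queue sizes, while the umem and
 * doorbell fields (wq_umem_id, dbr_umem_id, dbr_address, ...) are filled by
 * mlx5_devx_qp_create() itself.
 */
#include <stdint.h>
#include <rte_bitops.h>
#include <rte_errno.h>
#include "mlx5_common_devx.h"
#include "mlx5_devx_cmds.h"

static int
example_qp_setup(void *ctx, uint32_t pdn, uint32_t uar_id, uint32_t cqn,
                 uint16_t log_desc_n, int socket, struct mlx5_devx_qp *qp)
{
        struct mlx5_devx_qp_attr attr = {0};

        attr.pd = pdn;
        attr.uar_index = uar_id;
        attr.cqn = cqn;
        attr.sq_size = RTE_BIT32(log_desc_n);
        attr.rq_size = 0;
        /* Allocates the WQE buffer, registers the umem and creates the QP. */
        if (mlx5_devx_qp_create(ctx, qp, log_desc_n, &attr, socket))
                return -rte_errno; /* rte_errno is set by the helper. */
        /* Move the QP to RTS, then post to qp->wqes and ring qp->db_rec. */
        return 0;
}

static void
example_qp_release(struct mlx5_devx_qp *qp)
{
        /* Releases the QP object, the umem registration and the buffer. */
        mlx5_devx_qp_destroy(qp);
}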