From patchwork Wed Aug 18 15:14:39 2021
X-Patchwork-Submitter: Raja Zidane
X-Patchwork-Id: 97060
X-Patchwork-Delegate: thomas@monjalon.net
From: Raja Zidane
Date: Wed, 18 Aug 2021 18:14:39 +0300
Message-ID: <20210818151441.12400-2-rzidane@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210818151441.12400-1-rzidane@nvidia.com>
References: <20210818151441.12400-1-rzidane@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [RFC 1/3] common/mlx5: add common qp_create
List-Id: DPDK patches and discussions

Signed-off-by: Raja Zidane
---
 drivers/common/mlx5/mlx5_common_devx.c | 111 +++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h |  20 +++++
 drivers/common/mlx5/version.map        |   2 +
 drivers/crypto/mlx5/mlx5_crypto.c      |  80 +++++++-----------
 drivers/crypto/mlx5/mlx5_crypto.h      |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa.h          |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c    |  58 ++++---------
 7 files changed, 181 insertions(+), 100 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 22c8d356c4..640fe3bbb9 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -271,6 +271,117 @@ mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
 	return -rte_errno;
 }
 
+/**
+ * Destroy DevX Queue Pair.
+ *
+ * @param[in] qp
+ *   DevX QP to destroy.
+ */
+void
+mlx5_devx_qp_destroy(struct mlx5_devx_qp *qp)
+{
+	if (qp->qp)
+		claim_zero(mlx5_devx_cmd_destroy(qp->qp));
+	if (qp->umem_obj)
+		claim_zero(mlx5_os_umem_dereg(qp->umem_obj));
+	if (qp->umem_buf)
+		mlx5_free((void *)(uintptr_t)qp->umem_buf);
+}
+
+/**
+ * Create Queue Pair using DevX API.
+ *
+ * Gets a pointer to a partially initialized attributes structure and updates
+ * the following fields:
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_address
+ *   log_page_size
+ * All other fields must be filled in by the caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in,out] qp_obj
+ *   Pointer to QP to create.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to QP attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj, uint16_t log_wqbb_n,
+		    struct mlx5_devx_qp_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *qp = NULL;
+	struct mlx5dv_devx_umem *umem_obj = NULL;
+	void *umem_buf = NULL;
+	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+	uint32_t umem_size, umem_dbrec;
+	uint16_t qp_size = 1 << log_wqbb_n;
+	int ret;
+
+	if (alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get WQE buf alignment.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Allocate memory buffer for WQEs and doorbell record. */
+	umem_size = MLX5_WQE_SIZE * qp_size;
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size += MLX5_DBR_SIZE;
+	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+			       alignment, socket);
+	if (!umem_buf) {
+		DRV_LOG(ERR, "Failed to allocate memory for QP.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	umem_obj = mlx5_os_umem_reg(ctx, (void *)(uintptr_t)umem_buf, umem_size,
+				    IBV_ACCESS_LOCAL_WRITE);
+	if (!umem_obj) {
+		DRV_LOG(ERR, "Failed to register umem for QP.");
+		rte_errno = errno;
+		goto error;
+	}
+	/* Fill attributes for QP object creation. */
+	attr->wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	attr->wq_umem_offset = 0;
+	attr->dbr_umem_valid = 1;
+	attr->dbr_umem_id = attr->wq_umem_id;
+	attr->dbr_address = umem_dbrec;
+	attr->log_page_size = MLX5_LOG_PAGE_SIZE;
+	/* Create QP object with DevX. */
+	qp = mlx5_devx_cmd_create_qp(ctx, attr);
+	if (!qp) {
+		DRV_LOG(ERR, "Can't create DevX QP object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	qp_obj->umem_buf = umem_buf;
+	qp_obj->umem_obj = umem_obj;
+	qp_obj->qp = qp;
+	qp_obj->db_rec = RTE_PTR_ADD(qp_obj->umem_buf, umem_dbrec);
+	return 0;
+error:
+	ret = rte_errno;
+	if (umem_obj)
+		claim_zero(mlx5_os_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
 /**
  * Destroy DevX Receive Queue.
  *
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index aad0184e5a..b05260b401 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -33,6 +33,18 @@ struct mlx5_devx_sq {
 	volatile uint32_t *db_rec; /* The SQ doorbell record. */
 };
 
+/* DevX Queue Pair structure. */
+struct mlx5_devx_qp {
+	struct mlx5_devx_obj *qp; /* The QP DevX object. */
+	void *umem_obj; /* The QP umem object. */
+	union {
+		void *umem_buf;
+		struct mlx5_wqe *wqes; /* The QP ring buffer. */
+		struct mlx5_aso_wqe *aso_wqes;
+	};
+	volatile uint32_t *db_rec; /* The QP doorbell record. */
+};
+
 /* DevX Receive Queue structure. */
 struct mlx5_devx_rq {
 	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
@@ -59,6 +71,14 @@ int mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj,
 			uint16_t log_wqbb_n,
 			struct mlx5_devx_create_sq_attr *attr, int socket);
 
+__rte_internal
+void mlx5_devx_qp_destroy(struct mlx5_devx_qp *qp);
+
+__rte_internal
+int mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj,
+			uint16_t log_wqbb_n,
+			struct mlx5_devx_qp_attr *attr, int socket);
+
 __rte_internal
 void mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq);
 
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index e5cb6b7060..9487f787b6 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -71,6 +71,8 @@ INTERNAL {
 	mlx5_devx_rq_destroy;
 	mlx5_devx_sq_create;
 	mlx5_devx_sq_destroy;
+	mlx5_devx_qp_create;
+	mlx5_devx_qp_destroy;
 
 	mlx5_free;
 
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index b3d5200ca3..c66a3a7add 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -257,12 +257,12 @@ mlx5_crypto_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
 {
 	struct mlx5_crypto_qp *qp = dev->data->queue_pairs[qp_id];
 
-	if (qp->qp_obj != NULL)
-		claim_zero(mlx5_devx_cmd_destroy(qp->qp_obj));
-	if (qp->umem_obj != NULL)
-		claim_zero(mlx5_glue->devx_umem_dereg(qp->umem_obj));
-	if (qp->umem_buf != NULL)
-		rte_free(qp->umem_buf);
+	if (qp->qp_obj.qp != NULL)
+		claim_zero(mlx5_devx_cmd_destroy(qp->qp_obj.qp));
+	if (qp->qp_obj.umem_obj != NULL)
+		claim_zero(mlx5_glue->devx_umem_dereg(qp->qp_obj.umem_obj));
+	if (qp->qp_obj.umem_buf != NULL)
+		rte_free(qp->qp_obj.umem_buf);
 	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
 	mlx5_devx_cq_destroy(&qp->cq_obj);
 	rte_free(qp);
@@ -277,20 +277,20 @@ mlx5_crypto_qp2rts(struct mlx5_crypto_qp *qp)
 	/*
	 * In order to configure self loopback, when calling these functions the
	 * remote QP id that is used is the id of the same QP.
	 */
-	if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj, MLX5_CMD_OP_RST2INIT_QP,
-					  qp->qp_obj->id)) {
+	if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj.qp, MLX5_CMD_OP_RST2INIT_QP,
+					  qp->qp_obj.qp->id)) {
 		DRV_LOG(ERR, "Failed to modify QP to INIT state(%u).",
 			rte_errno);
 		return -1;
 	}
-	if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj, MLX5_CMD_OP_INIT2RTR_QP,
-					  qp->qp_obj->id)) {
+	if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj.qp, MLX5_CMD_OP_INIT2RTR_QP,
+					  qp->qp_obj.qp->id)) {
 		DRV_LOG(ERR, "Failed to modify QP to RTR state(%u).",
 			rte_errno);
 		return -1;
 	}
-	if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj, MLX5_CMD_OP_RTR2RTS_QP,
-					  qp->qp_obj->id)) {
+	if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj.qp, MLX5_CMD_OP_RTR2RTS_QP,
+					  qp->qp_obj.qp->id)) {
 		DRV_LOG(ERR, "Failed to modify QP to RTS state(%u).",
 			rte_errno);
 		return -1;
@@ -452,7 +452,7 @@ mlx5_crypto_wqe_set(struct mlx5_crypto_priv *priv,
 		memcpy(klms, &umr->kseg[0], sizeof(*klms) * klm_n);
 	}
 	ds = 2 + klm_n;
-	cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj->id << 8) | ds);
+	cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) | ds);
 	cseg->opcode = rte_cpu_to_be_32((qp->db_pi << 8) |
 							MLX5_OPCODE_RDMA_WRITE);
 	ds = RTE_ALIGN(ds, 4);
@@ -461,7 +461,7 @@ mlx5_crypto_wqe_set(struct mlx5_crypto_priv *priv,
 	if (priv->max_rdmar_ds > ds) {
 		cseg += ds;
 		ds = priv->max_rdmar_ds - ds;
-		cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj->id << 8) | ds);
+		cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) | ds);
 		cseg->opcode = rte_cpu_to_be_32((qp->db_pi << 8) |
 						MLX5_OPCODE_NOP);
 		qp->db_pi += ds >> 2; /* Here, DS is 4 aligned for sure. */
@@ -503,7 +503,7 @@ mlx5_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
 		return 0;
 	do {
 		op = *ops++;
-		umr = RTE_PTR_ADD(qp->umem_buf, priv->wqe_set_size * qp->pi);
+		umr = RTE_PTR_ADD(qp->qp_obj.umem_buf, priv->wqe_set_size * qp->pi);
 		if (unlikely(mlx5_crypto_wqe_set(priv, qp, op, umr) == 0)) {
 			qp->stats.enqueue_err_count++;
 			if (remain != nb_ops) {
@@ -517,7 +517,7 @@ mlx5_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
 	} while (--remain);
 	qp->stats.enqueued_count += nb_ops;
 	rte_io_wmb();
-	qp->db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->db_pi);
+	qp->qp_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->db_pi);
 	rte_wmb();
 	mlx5_crypto_uar_write(*(volatile uint64_t *)qp->wqe, qp->priv);
 	rte_wmb();
@@ -583,7 +583,7 @@ mlx5_crypto_qp_init(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp)
 	uint32_t i;
 
 	for (i = 0 ; i < qp->entries_n; i++) {
-		struct mlx5_wqe_cseg *cseg = RTE_PTR_ADD(qp->umem_buf, i *
+		struct mlx5_wqe_cseg *cseg = RTE_PTR_ADD(qp->qp_obj.umem_buf, i *
 			priv->wqe_set_size);
 		struct mlx5_wqe_umr_cseg *ucseg = (struct mlx5_wqe_umr_cseg *)
 								(cseg + 1);
@@ -593,7 +593,7 @@ mlx5_crypto_qp_init(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp)
 		struct mlx5_wqe_rseg *rseg;
 
 		/* Init UMR WQE. */
-		cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj->id << 8) |
+		cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) |
 					 (priv->umr_wqe_size / MLX5_WSEG_SIZE));
 		cseg->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
 				       MLX5_COMP_MODE_OFFSET);
@@ -628,7 +628,7 @@ mlx5_crypto_indirect_mkeys_prepare(struct mlx5_crypto_priv *priv,
 		.klm_num = RTE_ALIGN(priv->max_segs_num, 4),
 	};
 
-	for (umr = (struct mlx5_umr_wqe *)qp->umem_buf, i = 0;
+	for (umr = (struct mlx5_umr_wqe *)qp->qp_obj.umem_buf, i = 0;
 	     i < qp->entries_n; i++, umr = RTE_PTR_ADD(umr, priv->wqe_set_size)) {
 		attr.klm_array = (struct mlx5_klm *)&umr->kseg[0];
 		qp->mkey[i] = mlx5_devx_cmd_mkey_create(priv->ctx, &attr);
@@ -649,9 +649,7 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	struct mlx5_devx_qp_attr attr = {0};
 	struct mlx5_crypto_qp *qp;
 	uint16_t log_nb_desc = rte_log2_u32(qp_conf->nb_descriptors);
-	uint32_t umem_size = RTE_BIT32(log_nb_desc) *
-			priv->wqe_set_size +
-			sizeof(*qp->db_rec) * 2;
+	uint32_t ret;
 	uint32_t alloc_size = sizeof(*qp);
 	struct mlx5_devx_cq_attr cq_attr = {
 		.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar),
@@ -675,18 +673,15 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		DRV_LOG(ERR, "Failed to create CQ.");
 		goto error;
 	}
-	qp->umem_buf = rte_zmalloc_socket(__func__, umem_size, 4096, socket_id);
-	if (qp->umem_buf == NULL) {
-		DRV_LOG(ERR, "Failed to allocate QP umem.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	qp->umem_obj = mlx5_glue->devx_umem_reg(priv->ctx,
-					(void *)(uintptr_t)qp->umem_buf,
-					umem_size,
-					IBV_ACCESS_LOCAL_WRITE);
-	if (qp->umem_obj == NULL) {
-		DRV_LOG(ERR, "Failed to register QP umem.");
+	/* Fill QP attributes. */
+	attr.pd = priv->pdn;
+	attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar);
+	attr.cqn = qp->cq_obj.cq->id;
+	attr.rq_size = 0;
+	attr.sq_size = RTE_BIT32(log_nb_desc);
+	ret = mlx5_devx_qp_create(priv->ctx, &qp->qp_obj, log_nb_desc, &attr,
+				  socket_id);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to create QP");
 		goto error;
 	}
 	if (mlx5_mr_btree_init(&qp->mr_ctrl.cache_bh, MLX5_MR_BTREE_CACHE_N,
@@ -697,23 +692,6 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		goto error;
 	}
 	qp->mr_ctrl.dev_gen_ptr = &priv->mr_scache.dev_gen;
-	attr.pd = priv->pdn;
-	attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar);
-	attr.cqn = qp->cq_obj.cq->id;
-	attr.log_page_size = rte_log2_u32(sysconf(_SC_PAGESIZE));
-	attr.rq_size = 0;
-	attr.sq_size = RTE_BIT32(log_nb_desc);
-	attr.dbr_umem_valid = 1;
-	attr.wq_umem_id = qp->umem_obj->umem_id;
-	attr.wq_umem_offset = 0;
-	attr.dbr_umem_id = qp->umem_obj->umem_id;
-	attr.dbr_address = RTE_BIT64(log_nb_desc) * priv->wqe_set_size;
-	qp->qp_obj = mlx5_devx_cmd_create_qp(priv->ctx, &attr);
-	if (qp->qp_obj == NULL) {
-		DRV_LOG(ERR, "Failed to create QP(%u).", rte_errno);
-		goto error;
-	}
-	qp->db_rec = RTE_PTR_ADD(qp->umem_buf, (uintptr_t)attr.dbr_address);
 	if (mlx5_crypto_qp2rts(qp))
 		goto error;
 	qp->mkey = (struct mlx5_devx_obj **)RTE_ALIGN((uintptr_t)(qp + 1),
diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index d49b0001f0..013eed30b5 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -43,11 +43,8 @@ struct mlx5_crypto_priv {
 struct mlx5_crypto_qp {
 	struct mlx5_crypto_priv *priv;
 	struct mlx5_devx_cq cq_obj;
-	struct mlx5_devx_obj *qp_obj;
+	struct mlx5_devx_qp qp_obj;
 	struct rte_cryptodev_stats stats;
-	struct mlx5dv_devx_umem *umem_obj;
-	void *umem_buf;
-	volatile uint32_t *db_rec;
 	struct rte_crypto_op **ops;
 	struct mlx5_devx_obj **mkey; /* WQE's indirect mkeys. */
 	struct mlx5_mr_ctrl mr_ctrl;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 2a04e36607..a27f3fdadb 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -54,10 +54,7 @@ struct mlx5_vdpa_cq {
 struct mlx5_vdpa_event_qp {
 	struct mlx5_vdpa_cq cq;
 	struct mlx5_devx_obj *fw_qp;
-	struct mlx5_devx_obj *sw_qp;
-	struct mlx5dv_devx_umem *umem_obj;
-	void *umem_buf;
-	volatile uint32_t *db_rec;
+	struct mlx5_devx_qp sw_qp;
 };
 
 struct mlx5_vdpa_query_mr {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 3541c652ce..d327a605fa 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -179,7 +179,7 @@ mlx5_vdpa_cq_poll(struct mlx5_vdpa_cq *cq)
 		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
 		rte_io_wmb();
 		/* Ring SW QP doorbell record. */
-		eqp->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci + cq_size);
+		eqp->sw_qp.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci + cq_size);
 	}
 	return comp;
 }
@@ -531,12 +531,12 @@ mlx5_vdpa_cqe_event_unset(struct mlx5_vdpa_priv *priv)
 void
 mlx5_vdpa_event_qp_destroy(struct mlx5_vdpa_event_qp *eqp)
 {
-	if (eqp->sw_qp)
-		claim_zero(mlx5_devx_cmd_destroy(eqp->sw_qp));
-	if (eqp->umem_obj)
-		claim_zero(mlx5_glue->devx_umem_dereg(eqp->umem_obj));
-	if (eqp->umem_buf)
-		rte_free(eqp->umem_buf);
+	if (eqp->sw_qp.qp)
+		claim_zero(mlx5_devx_cmd_destroy(eqp->sw_qp.qp));
+	if (eqp->sw_qp.umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(eqp->sw_qp.umem_obj));
+	if (eqp->sw_qp.umem_buf)
+		rte_free(eqp->sw_qp.umem_buf);
 	if (eqp->fw_qp)
 		claim_zero(mlx5_devx_cmd_destroy(eqp->fw_qp));
 	mlx5_vdpa_cq_destroy(&eqp->cq);
@@ -547,36 +547,36 @@ static int
 mlx5_vdpa_qps2rts(struct mlx5_vdpa_event_qp *eqp)
 {
 	if (mlx5_devx_cmd_modify_qp_state(eqp->fw_qp, MLX5_CMD_OP_RST2INIT_QP,
-					  eqp->sw_qp->id)) {
+					  eqp->sw_qp.qp->id)) {
 		DRV_LOG(ERR, "Failed to modify FW QP to INIT state(%u).",
 			rte_errno);
 		return -1;
 	}
-	if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp, MLX5_CMD_OP_RST2INIT_QP,
+	if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp.qp, MLX5_CMD_OP_RST2INIT_QP,
 					  eqp->fw_qp->id)) {
 		DRV_LOG(ERR, "Failed to modify SW QP to INIT state(%u).",
 			rte_errno);
 		return -1;
 	}
 	if (mlx5_devx_cmd_modify_qp_state(eqp->fw_qp, MLX5_CMD_OP_INIT2RTR_QP,
-					  eqp->sw_qp->id)) {
+					  eqp->sw_qp.qp->id)) {
 		DRV_LOG(ERR, "Failed to modify FW QP to RTR state(%u).",
 			rte_errno);
 		return -1;
 	}
-	if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp, MLX5_CMD_OP_INIT2RTR_QP,
+	if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp.qp, MLX5_CMD_OP_INIT2RTR_QP,
 					  eqp->fw_qp->id)) {
 		DRV_LOG(ERR, "Failed to modify SW QP to RTR state(%u).",
 			rte_errno);
 		return -1;
 	}
 	if (mlx5_devx_cmd_modify_qp_state(eqp->fw_qp, MLX5_CMD_OP_RTR2RTS_QP,
-					  eqp->sw_qp->id)) {
+					  eqp->sw_qp.qp->id)) {
 		DRV_LOG(ERR, "Failed to modify FW QP to RTS state(%u).",
 			rte_errno);
 		return -1;
 	}
-	if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp, MLX5_CMD_OP_RTR2RTS_QP,
+	if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp.qp, MLX5_CMD_OP_RTR2RTS_QP,
 					  eqp->fw_qp->id)) {
 		DRV_LOG(ERR, "Failed to modify SW QP to RTS state(%u).",
 			rte_errno);
@@ -591,8 +591,7 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 {
 	struct mlx5_devx_qp_attr attr = {0};
 	uint16_t log_desc_n = rte_log2_u32(desc_n);
-	uint32_t umem_size = (1 << log_desc_n) * MLX5_WSEG_SIZE +
-			sizeof(*eqp->db_rec) * 2;
+	uint32_t ret;
 
 	if (mlx5_vdpa_event_qp_global_prepare(priv))
 		return -1;
@@ -605,42 +604,19 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 		DRV_LOG(ERR, "Failed to create FW QP(%u).", rte_errno);
 		goto error;
 	}
-	eqp->umem_buf = rte_zmalloc(__func__, umem_size, 4096);
-	if (!eqp->umem_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for SW QP.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	eqp->umem_obj = mlx5_glue->devx_umem_reg(priv->ctx,
-					(void *)(uintptr_t)eqp->umem_buf,
-					umem_size,
-					IBV_ACCESS_LOCAL_WRITE);
-	if (!eqp->umem_obj) {
-		DRV_LOG(ERR, "Failed to register umem for SW QP.");
-		goto error;
-	}
-	attr.uar_index = priv->uar->page_id;
-	attr.cqn = eqp->cq.cq_obj.cq->id;
-	attr.log_page_size = rte_log2_u32(sysconf(_SC_PAGESIZE));
 	attr.rq_size = 1 << log_desc_n;
 	attr.log_rq_stride = rte_log2_u32(MLX5_WSEG_SIZE);
 	attr.sq_size = 0; /* No need SQ. */
-	attr.dbr_umem_valid = 1;
-	attr.wq_umem_id = eqp->umem_obj->umem_id;
-	attr.wq_umem_offset = 0;
-	attr.dbr_umem_id = eqp->umem_obj->umem_id;
 	attr.ts_format = mlx5_ts_format_conv(priv->qp_ts_format);
-	attr.dbr_address = RTE_BIT64(log_desc_n) * MLX5_WSEG_SIZE;
-	eqp->sw_qp = mlx5_devx_cmd_create_qp(priv->ctx, &attr);
-	if (!eqp->sw_qp) {
+	ret = mlx5_devx_qp_create(priv->ctx, &eqp->sw_qp, log_desc_n, &attr,
+				  SOCKET_ID_ANY);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create SW QP(%u).", rte_errno);
 		goto error;
 	}
-	eqp->db_rec = RTE_PTR_ADD(eqp->umem_buf, (uintptr_t)attr.dbr_address);
 	if (mlx5_vdpa_qps2rts(eqp))
 		goto error;
 	/* First ringing. */
-	rte_write32(rte_cpu_to_be_32(1 << log_desc_n), &eqp->db_rec[0]);
+	rte_write32(rte_cpu_to_be_32(1 << log_desc_n), &eqp->sw_qp.db_rec[0]);
 	return 0;
 error:
 	mlx5_vdpa_event_qp_destroy(eqp);

From patchwork Wed Aug 18 15:14:40 2021
X-Patchwork-Submitter: Raja Zidane
X-Patchwork-Id: 97061
X-Patchwork-Delegate: thomas@monjalon.net
From: Raja Zidane
Date: Wed, 18 Aug 2021 18:14:40 +0300
Message-ID: <20210818151441.12400-3-rzidane@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210818151441.12400-1-rzidane@nvidia.com>
References: <20210818151441.12400-1-rzidane@nvidia.com>
MIME-Version: 1.0
BN8NAM11FT036.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR12MB3574 Subject: [dpdk-dev] [RFC 2/3] compress/mlx5: refactor queue creation in mlx5 add support to compress and regex drivers in BlueField3 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Signed-off-by: Raja Zidane --- drivers/common/mlx5/mlx5_devx_cmds.c | 14 ++++- drivers/common/mlx5/mlx5_devx_cmds.h | 10 ++- drivers/common/mlx5/mlx5_prm.h | 42 +++++++++++-- drivers/compress/mlx5/mlx5_compress.c | 91 ++++++++++++++++++--------- 4 files changed, 116 insertions(+), 41 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 56407cc332..347ae75d37 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -858,9 +858,12 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->log_max_srq_sz = MLX5_GET(cmd_hca_cap, hcattr, log_max_srq_sz); attr->reg_c_preserve = MLX5_GET(cmd_hca_cap, hcattr, reg_c_preserve); - attr->mmo_dma_en = MLX5_GET(cmd_hca_cap, hcattr, dma_mmo); - attr->mmo_compress_en = MLX5_GET(cmd_hca_cap, hcattr, compress); - attr->mmo_decompress_en = MLX5_GET(cmd_hca_cap, hcattr, decompress); + attr->mmo_dma_sq_en = MLX5_GET(cmd_hca_cap, hcattr, dma_mmo_sq); + attr->mmo_compress_sq_en = MLX5_GET(cmd_hca_cap, hcattr, compress_mmo_sq); + attr->mmo_decompress_sq_en = MLX5_GET(cmd_hca_cap, hcattr, decompress_mmo_sq); + attr->mmo_dma_qp_en = MLX5_GET(cmd_hca_cap, hcattr, dma_mmo_qp); + attr->mmo_compress_qp_en = MLX5_GET(cmd_hca_cap, hcattr, compress_mmo_qp); + attr->mmo_decompress_qp_en = MLX5_GET(cmd_hca_cap, hcattr, decompress_mmo_qp); attr->compress_min_block_size = MLX5_GET(cmd_hca_cap, hcattr, 
compress_min_block_size); attr->log_max_mmo_dma = MLX5_GET(cmd_hca_cap, hcattr, log_dma_mmo_size); @@ -2022,6 +2025,11 @@ mlx5_devx_cmd_create_qp(void *ctx, MLX5_SET(qpc, qpc, pd, attr->pd); MLX5_SET(qpc, qpc, ts_format, attr->ts_format); if (attr->uar_index) { + if(attr->mmo) { + void *qpc_ext_and_pas_list = MLX5_ADDR_OF(create_qp_in, in, qpc_extension_and_pas_list); + void* qpc_ext = MLX5_ADDR_OF(qpc_extension_and_pas_list, qpc_ext_and_pas_list, qpc_data_extension); + MLX5_SET(qpc_extension, qpc_ext, mmo, 1); + } MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED); MLX5_SET(qpc, qpc, uar_page, attr->uar_index); if (attr->log_page_size > MLX5_ADAPTER_PAGE_SHIFT) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index e576e30f24..f993b511dc 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -173,9 +173,12 @@ struct mlx5_hca_attr { uint32_t log_max_srq; uint32_t log_max_srq_sz; uint32_t rss_ind_tbl_cap; - uint32_t mmo_dma_en:1; - uint32_t mmo_compress_en:1; - uint32_t mmo_decompress_en:1; + uint32_t mmo_dma_sq_en:1; + uint32_t mmo_compress_sq_en:1; + uint32_t mmo_decompress_sq_en:1; + uint32_t mmo_dma_qp_en:1; + uint32_t mmo_compress_qp_en:1; + uint32_t mmo_decompress_qp_en:1; uint32_t compress_min_block_size:4; uint32_t log_max_mmo_dma:5; uint32_t log_max_mmo_compress:5; @@ -397,6 +400,7 @@ struct mlx5_devx_qp_attr { uint64_t dbr_address; uint32_t wq_umem_id; uint64_t wq_umem_offset; + uint32_t mmo; }; struct mlx5_devx_virtio_q_couners_attr { diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index fdb20f5d49..d0c75b97df 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1385,10 +1385,10 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 rtr2rts_qp_counters_set_id[0x1]; u8 rts2rts_udp_sport[0x1]; u8 rts2rts_lag_tx_port_affinity[0x1]; - u8 dma_mmo[0x1]; + u8 dma_mmo_sq[0x1]; u8 compress_min_block_size[0x4]; - u8 compress[0x1]; - u8 
decompress[0x1]; + u8 compress_mmo_sq[0x1]; + u8 decompress_mmo_sq[0x1]; u8 log_max_ra_res_qp[0x6]; u8 end_pad[0x1]; u8 cc_query_allowed[0x1]; @@ -1631,7 +1631,12 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 num_vhca_ports[0x8]; u8 reserved_at_618[0x6]; u8 sw_owner_id[0x1]; - u8 reserved_at_61f[0x1e1]; + u8 reserved_at_61f[0x109]; + u8 dma_mmo_qp[0x1]; + u8 reserved_at_621[0x1]; + u8 compress_mmo_qp[0x1]; + u8 decompress_mmo_qp[0x1]; + u8 reserved_at_624[0xd4]; }; struct mlx5_ifc_qos_cap_bits { @@ -3235,6 +3240,27 @@ struct mlx5_ifc_create_qp_out_bits { u8 reserved_at_60[0x20]; }; +struct mlx5_ifc_qpc_extension_bits { + u8 reserved_at_0[0x2]; + u8 mmo[0x1]; + u8 reserved_at_3[0x5fd]; +}; + +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif +struct mlx5_ifc_qpc_pas_list_bits { + u8 pas[0][0x40]; +}; + +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif +struct mlx5_ifc_qpc_extension_and_pas_list_bits { + struct mlx5_ifc_qpc_extension_bits qpc_data_extension; + u8 pas[0][0x40]; +}; + #ifdef PEDANTIC #pragma GCC diagnostic ignored "-Wpedantic" #endif @@ -3243,7 +3269,8 @@ struct mlx5_ifc_create_qp_in_bits { u8 uid[0x10]; u8 reserved_at_20[0x10]; u8 op_mod[0x10]; - u8 reserved_at_40[0x40]; + u8 qpc_ext[0x1]; + u8 reserved_at_41[0x3f]; u8 opt_param_mask[0x20]; u8 reserved_at_a0[0x20]; struct mlx5_ifc_qpc_bits qpc; @@ -3251,7 +3278,10 @@ struct mlx5_ifc_create_qp_in_bits { u8 wq_umem_id[0x20]; u8 wq_umem_valid[0x1]; u8 reserved_at_861[0x1f]; - u8 pas[0][0x40]; + union { + struct mlx5_ifc_qpc_pas_list_bits qpc_pas_list; + struct mlx5_ifc_qpc_extension_and_pas_list_bits qpc_extension_and_pas_list; + }; }; #ifdef PEDANTIC #pragma GCC diagnostic error "-Wpedantic" diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 883e720ec1..05e75adb1c 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -48,6 +48,7 @@ struct mlx5_compress_priv { rte_spinlock_t 
xform_sl; struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */ volatile uint64_t *uar_addr; + uint8_t mmo_caps; /* bitmap 0->5: decomp_sq, decomp_qp, comp_sq, comp_qp, dma_sq, dma_qp */ #ifndef RTE_ARCH_64 rte_spinlock_t uar32_sl; #endif /* RTE_ARCH_64 */ @@ -61,7 +62,7 @@ struct mlx5_compress_qp { struct mlx5_mr_ctrl mr_ctrl; int socket_id; struct mlx5_devx_cq cq; - struct mlx5_devx_sq sq; + struct mlx5_devx_qp qp; struct mlx5_pmd_mr opaque_mr; struct rte_comp_op **ops; struct mlx5_compress_priv *priv; @@ -134,8 +135,8 @@ mlx5_compress_qp_release(struct rte_compressdev *dev, uint16_t qp_id) { struct mlx5_compress_qp *qp = dev->data->queue_pairs[qp_id]; - if (qp->sq.sq != NULL) - mlx5_devx_sq_destroy(&qp->sq); + if (qp->qp.qp != NULL) + mlx5_devx_qp_destroy(&qp->qp); if (qp->cq.cq != NULL) mlx5_devx_cq_destroy(&qp->cq); if (qp->opaque_mr.obj != NULL) { @@ -152,12 +153,12 @@ mlx5_compress_qp_release(struct rte_compressdev *dev, uint16_t qp_id) } static void -mlx5_compress_init_sq(struct mlx5_compress_qp *qp) +mlx5_compress_init_qp(struct mlx5_compress_qp *qp) { volatile struct mlx5_gga_wqe *restrict wqe = - (volatile struct mlx5_gga_wqe *)qp->sq.wqes; + (volatile struct mlx5_gga_wqe *)qp->qp.wqes; volatile struct mlx5_gga_compress_opaque *opaq = qp->opaque_mr.addr; - const uint32_t sq_ds = rte_cpu_to_be_32((qp->sq.sq->id << 8) | 4u); + const uint32_t sq_ds = rte_cpu_to_be_32((qp->qp.qp->id << 8) | 4u); const uint32_t flags = RTE_BE32(MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET); const uint32_t opaq_lkey = rte_cpu_to_be_32(qp->opaque_mr.lkey); @@ -173,6 +174,35 @@ mlx5_compress_init_sq(struct mlx5_compress_qp *qp) } } +static int +mlx5_compress_qp2rts(struct mlx5_compress_qp *qp) +{ + /* + * In Order to configure self loopback, when calling these functions the + * remote QP id that is used is the id of the same QP. 
+ */ + if (mlx5_devx_cmd_modify_qp_state(qp->qp.qp, MLX5_CMD_OP_RST2INIT_QP, + qp->qp.qp->id)) { + DRV_LOG(ERR, "Failed to modify QP to INIT state(%u).", + rte_errno); + return -1; + } + if (mlx5_devx_cmd_modify_qp_state(qp->qp.qp, MLX5_CMD_OP_INIT2RTR_QP, + qp->qp.qp->id)) { + DRV_LOG(ERR, "Failed to modify QP to RTR state(%u).", + rte_errno); + return -1; + } + if (mlx5_devx_cmd_modify_qp_state(qp->qp.qp, MLX5_CMD_OP_RTR2RTS_QP, + qp->qp.qp->id)) { + DRV_LOG(ERR, "Failed to modify QP to RTS state(%u).", + rte_errno); + return -1; + } + return 0; +} + + static int mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, uint32_t max_inflight_ops, int socket_id) @@ -182,15 +212,9 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, struct mlx5_devx_cq_attr cq_attr = { .uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar), }; - struct mlx5_devx_create_sq_attr sq_attr = { - .user_index = qp_id, - .wq_attr = (struct mlx5_devx_wq_attr){ - .pd = priv->pdn, - .uar_page = mlx5_os_get_devx_uar_page_id(priv->uar), - }, - }; - struct mlx5_devx_modify_sq_attr modify_attr = { - .state = MLX5_SQC_STATE_RDY, + struct mlx5_devx_qp_attr qp_attr = { + .pd = priv->pdn, + .uar_index = mlx5_os_get_devx_uar_page_id(priv->uar), }; uint32_t log_ops_n = rte_log2_u32(max_inflight_ops); uint32_t alloc_size = sizeof(*qp); @@ -242,24 +266,26 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, DRV_LOG(ERR, "Failed to create CQ."); goto err; } - sq_attr.cqn = qp->cq.cq->id; - sq_attr.ts_format = mlx5_ts_format_conv(priv->sq_ts_format); - ret = mlx5_devx_sq_create(priv->ctx, &qp->sq, log_ops_n, &sq_attr, + qp_attr.cqn = qp->cq.cq->id; + qp_attr.ts_format = mlx5_ts_format_conv(priv->sq_ts_format); + qp_attr.rq_size = 0; + qp_attr.sq_size = 1 << log_ops_n; + qp_attr.mmo = (priv->mmo_caps & (1<<1)) && (priv->mmo_caps & (1<<3)) && (priv->mmo_caps & (1<<5)); + ret = mlx5_devx_qp_create(priv->ctx, &qp->qp, log_ops_n, &qp_attr, socket_id); if (ret != 0) 
{ - DRV_LOG(ERR, "Failed to create SQ."); + DRV_LOG(ERR, "Failed to create QP."); goto err; } - mlx5_compress_init_sq(qp); - ret = mlx5_devx_cmd_modify_sq(qp->sq.sq, &modify_attr); - if (ret != 0) { - DRV_LOG(ERR, "Can't change SQ state to ready."); + ret = mlx5_compress_qp2rts(qp); + if(ret) { goto err; } + mlx5_compress_init_qp(qp); /* Save pointer of global generation number to check memory event. */ qp->mr_ctrl.dev_gen_ptr = &priv->mr_scache.dev_gen; DRV_LOG(INFO, "QP %u: SQN=0x%X CQN=0x%X entries num = %u", - (uint32_t)qp_id, qp->sq.sq->id, qp->cq.cq->id, qp->entries_n); + (uint32_t)qp_id, qp->qp.qp->id, qp->cq.cq->id, qp->entries_n); return 0; err: mlx5_compress_qp_release(dev, qp_id); @@ -508,7 +534,7 @@ mlx5_compress_enqueue_burst(void *queue_pair, struct rte_comp_op **ops, { struct mlx5_compress_qp *qp = queue_pair; volatile struct mlx5_gga_wqe *wqes = (volatile struct mlx5_gga_wqe *) - qp->sq.wqes, *wqe; + qp->qp.wqes, *wqe; struct mlx5_compress_xform *xform; struct rte_comp_op *op; uint16_t mask = qp->entries_n - 1; @@ -563,7 +589,7 @@ mlx5_compress_enqueue_burst(void *queue_pair, struct rte_comp_op **ops, } while (--remain); qp->stats.enqueued_count += nb_ops; rte_io_wmb(); - qp->sq.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->pi); + qp->qp.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->pi); rte_wmb(); mlx5_compress_uar_write(*(volatile uint64_t *)wqe, qp->priv); rte_wmb(); @@ -598,7 +624,7 @@ mlx5_compress_cqe_err_handle(struct mlx5_compress_qp *qp, volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *) &qp->cq.cqes[idx]; volatile struct mlx5_gga_wqe *wqes = (volatile struct mlx5_gga_wqe *) - qp->sq.wqes; + qp->qp.wqes; volatile struct mlx5_gga_compress_opaque *opaq = qp->opaque_mr.addr; op->status = RTE_COMP_OP_STATUS_ERROR; @@ -813,8 +839,9 @@ mlx5_compress_dev_probe(struct rte_device *dev) return -rte_errno; } if (mlx5_devx_cmd_query_hca_attr(ctx, &att) != 0 || - att.mmo_compress_en == 0 || att.mmo_decompress_en == 0 || - att.mmo_dma_en == 
0) { + ((att.mmo_compress_sq_en == 0 || att.mmo_decompress_sq_en == 0 || + att.mmo_dma_sq_en == 0) && (att.mmo_compress_qp_en == 0 || + att.mmo_decompress_qp_en == 0 || att.mmo_dma_qp_en == 0))) { DRV_LOG(ERR, "Not enough capabilities to support compress " "operations, maybe old FW/OFED version?"); claim_zero(mlx5_glue->close_device(ctx)); @@ -835,6 +862,12 @@ mlx5_compress_dev_probe(struct rte_device *dev) cdev->enqueue_burst = mlx5_compress_enqueue_burst; cdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED; priv = cdev->data->dev_private; + priv->mmo_caps = 0 | att.mmo_decompress_sq_en; + priv->mmo_caps |= att.mmo_decompress_qp_en << 1; + priv->mmo_caps |= att.mmo_compress_sq_en << 2; + priv->mmo_caps |= att.mmo_compress_qp_en << 3; + priv->mmo_caps |= att.mmo_dma_sq_en << 4; + priv->mmo_caps |= att.mmo_dma_qp_en << 5; priv->ctx = ctx; priv->cdev = cdev; priv->min_block_size = att.compress_min_block_size; From patchwork Wed Aug 18 15:14:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raja Zidane X-Patchwork-Id: 97062 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 428D3A0C52; Wed, 18 Aug 2021 17:15:33 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A7E98411E9; Wed, 18 Aug 2021 17:15:17 +0200 (CEST) Received: from NAM12-BN8-obe.outbound.protection.outlook.com (mail-bn8nam12on2083.outbound.protection.outlook.com [40.107.237.83]) by mails.dpdk.org (Postfix) with ESMTP id F0BE0411DD for ; Wed, 18 Aug 2021 17:15:14 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From: Raja Zidane
Date: Wed, 18 Aug 2021 18:14:41 +0300
Message-ID: <20210818151441.12400-4-rzidane@nvidia.com>
In-Reply-To: <20210818151441.12400-1-rzidane@nvidia.com>
References: <20210818151441.12400-1-rzidane@nvidia.com>
Subject: [dpdk-dev] [RFC 3/3] regex/mlx5: refactor queue creation in mlx5; add support to compress and regex drivers in BlueField-3
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Signed-off-by: Raja Zidane
---
 drivers/common/mlx5/mlx5_common_devx.c  | 28 ++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h  |  3 ++
 drivers/common/mlx5/version.map         |  1 +
 drivers/compress/mlx5/mlx5_compress.c   | 31 +------------
 drivers/crypto/mlx5/mlx5_crypto.c       | 30 +------------
 drivers/regex/mlx5/mlx5_regex.h         |  6 +--
 drivers/regex/mlx5/mlx5_regex_control.c | 60 ++++++++++++-------------
 7 files changed, 65 insertions(+), 94 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 640fe3bbb9..0baf0831e8 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -496,3 +496,31 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 	return -rte_errno;
 }

+int
+mlx5_devx_qp2rts(struct mlx5_devx_qp *qp)
+{
+	/*
+	 * In Order to configure self loopback, when calling these functions the
+	 * remote QP id that is used is the id of the same QP.
+	 */
+	if (mlx5_devx_cmd_modify_qp_state(qp->qp, MLX5_CMD_OP_RST2INIT_QP,
+					  qp->qp->id)) {
+		DRV_LOG(ERR, "Failed to modify QP to INIT state(%u).",
+			rte_errno);
+		return -1;
+	}
+	if (mlx5_devx_cmd_modify_qp_state(qp->qp, MLX5_CMD_OP_INIT2RTR_QP,
+					  qp->qp->id)) {
+		DRV_LOG(ERR, "Failed to modify QP to RTR state(%u).",
+			rte_errno);
+		return -1;
+	}
+	if (mlx5_devx_cmd_modify_qp_state(qp->qp, MLX5_CMD_OP_RTR2RTS_QP,
+					  qp->qp->id)) {
+		DRV_LOG(ERR, "Failed to modify QP to RTS state(%u).",
+			rte_errno);
+		return -1;
+	}
+	return 0;
+}
+
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index b05260b401..81036f92ff 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -87,4 +87,7 @@ int mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
 			uint32_t wqe_size, uint16_t log_wqbb_n,
 			struct mlx5_devx_create_rq_attr *attr, int socket);

+__rte_internal
+int mlx5_devx_qp2rts(struct mlx5_devx_qp *qp);
+
 #endif /* RTE_PMD_MLX5_COMMON_DEVX_H_ */
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 9487f787b6..e61673dcb0 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -73,6 +73,7 @@ INTERNAL {
 	mlx5_devx_sq_destroy;
 	mlx5_devx_qp_create;
 	mlx5_devx_qp_destroy;
+	mlx5_devx_qp2rts;

 	mlx5_free;
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index 05e75adb1c..9cf75a9193 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -174,35 +174,6 @@ mlx5_compress_init_qp(struct mlx5_compress_qp *qp)
 	}
 }

-static int
-mlx5_compress_qp2rts(struct mlx5_compress_qp *qp)
-{
-	/*
-	 * In Order to configure self loopback, when calling these functions the
-	 * remote QP id that is used is the id of the same QP.
-	 */
-	if (mlx5_devx_cmd_modify_qp_state(qp->qp.qp, MLX5_CMD_OP_RST2INIT_QP,
-					  qp->qp.qp->id)) {
-		DRV_LOG(ERR, "Failed to modify QP to INIT state(%u).",
-			rte_errno);
-		return -1;
-	}
-	if (mlx5_devx_cmd_modify_qp_state(qp->qp.qp, MLX5_CMD_OP_INIT2RTR_QP,
-					  qp->qp.qp->id)) {
-		DRV_LOG(ERR, "Failed to modify QP to RTR state(%u).",
-			rte_errno);
-		return -1;
-	}
-	if (mlx5_devx_cmd_modify_qp_state(qp->qp.qp, MLX5_CMD_OP_RTR2RTS_QP,
-					  qp->qp.qp->id)) {
-		DRV_LOG(ERR, "Failed to modify QP to RTS state(%u).",
-			rte_errno);
-		return -1;
-	}
-	return 0;
-}
-
-
 static int
 mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		       uint32_t max_inflight_ops, int socket_id)
@@ -277,7 +248,7 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		DRV_LOG(ERR, "Failed to create QP.");
 		goto err;
 	}
-	ret = mlx5_compress_qp2rts(qp);
+	ret = mlx5_devx_qp2rts(&qp->qp);
 	if (ret) {
 		goto err;
 	}
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index c66a3a7add..94023e4844 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -270,34 +270,6 @@ mlx5_crypto_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
 	return 0;
 }

-static int
-mlx5_crypto_qp2rts(struct mlx5_crypto_qp *qp)
-{
-	/*
-	 * In Order to configure self loopback, when calling these functions the
-	 * remote QP id that is used is the id of the same QP.
-	 */
-	if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj.qp, MLX5_CMD_OP_RST2INIT_QP,
-					  qp->qp_obj.qp->id)) {
-		DRV_LOG(ERR, "Failed to modify QP to INIT state(%u).",
-			rte_errno);
-		return -1;
-	}
-	if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj.qp, MLX5_CMD_OP_INIT2RTR_QP,
-					  qp->qp_obj.qp->id)) {
-		DRV_LOG(ERR, "Failed to modify QP to RTR state(%u).",
-			rte_errno);
-		return -1;
-	}
-	if (mlx5_devx_cmd_modify_qp_state(qp->qp_obj.qp, MLX5_CMD_OP_RTR2RTS_QP,
-					  qp->qp_obj.qp->id)) {
-		DRV_LOG(ERR, "Failed to modify QP to RTS state(%u).",
-			rte_errno);
-		return -1;
-	}
-	return 0;
-}
-
 static __rte_noinline uint32_t
 mlx5_crypto_get_block_size(struct rte_crypto_op *op)
 {
@@ -692,7 +664,7 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		goto error;
 	}
 	qp->mr_ctrl.dev_gen_ptr = &priv->mr_scache.dev_gen;
-	if (mlx5_crypto_qp2rts(qp))
+	if (mlx5_devx_qp2rts(&qp->qp_obj))
 		goto error;
 	qp->mkey = (struct mlx5_devx_obj **)RTE_ALIGN((uintptr_t)(qp + 1),
 						      RTE_CACHE_LINE_SIZE);
diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 514f3408f9..41ed58a6af 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -17,9 +17,9 @@
 #include "mlx5_rxp.h"
 #include "mlx5_regex_utils.h"

-struct mlx5_regex_sq {
+struct mlx5_regex_inner_qp {
 	uint16_t log_nb_desc; /* Log 2 number of desc for this object. */
-	struct mlx5_devx_sq sq_obj; /* The SQ DevX object. */
+	struct mlx5_devx_qp qp_obj; /* The QP DevX object. */
 	size_t pi, db_pi;
 	size_t ci;
 	uint32_t sqn;
@@ -34,7 +34,7 @@ struct mlx5_regex_cq {
 struct mlx5_regex_qp {
 	uint32_t flags; /* QP user flags. */
 	uint32_t nb_desc; /* Total number of desc for this qp. */
-	struct mlx5_regex_sq *sqs; /* Pointer to sq array. */
+	struct mlx5_regex_inner_qp *qps; /* Pointer to qp array. */
 	uint16_t nb_obj; /* Number of sq objects. */
 	struct mlx5_regex_cq cq; /* CQ struct. */
 	uint32_t free_sqs;
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index 8ce2dabb55..353d6aec97 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -106,12 +106,12 @@ regex_ctrl_create_cq(struct mlx5_regex_priv *priv, struct mlx5_regex_cq *cq)
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-regex_ctrl_destroy_sq(struct mlx5_regex_qp *qp, uint16_t q_ind)
+regex_ctrl_destroy_inner_qp(struct mlx5_regex_qp *qp, uint16_t q_ind)
 {
-	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
+	struct mlx5_regex_inner_qp *qp_obj = &qp->qps[q_ind];

-	mlx5_devx_sq_destroy(&sq->sq_obj);
-	memset(sq, 0, sizeof(*sq));
+	mlx5_devx_qp_destroy(&qp_obj->qp_obj);
+	memset(qp_obj, 0, sizeof(*qp_obj));
 	return 0;
 }

@@ -131,45 +131,41 @@ regex_ctrl_destroy_sq(struct mlx5_regex_qp *qp, uint16_t q_ind)
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-regex_ctrl_create_sq(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
+regex_ctrl_create_inner_qp(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 		     uint16_t q_ind, uint16_t log_nb_desc)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	struct mlx5_devx_create_sq_attr attr = {
-		.user_index = q_ind,
+	struct mlx5_devx_qp_attr attr = {
 		.cqn = qp->cq.cq_obj.cq->id,
-		.wq_attr = (struct mlx5_devx_wq_attr){
-			.uar_page = priv->uar->page_id,
-		},
+		.uar_index = priv->uar->page_id,
 		.ts_format = mlx5_ts_format_conv(priv->sq_ts_format),
 	};
-	struct mlx5_devx_modify_sq_attr modify_attr = {
-		.state = MLX5_SQC_STATE_RDY,
-	};
-	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
+	struct mlx5_regex_inner_qp *qp_obj = &qp->qps[q_ind];
 	uint32_t pd_num = 0;
 	int ret;

-	sq->log_nb_desc = log_nb_desc;
-	sq->sqn = q_ind;
-	sq->ci = 0;
-	sq->pi = 0;
+	qp_obj->log_nb_desc = log_nb_desc;
+	qp_obj->sqn = q_ind;
+	qp_obj->ci = 0;
+	qp_obj->pi = 0;
 	ret = regex_get_pdn(priv->pd, &pd_num);
 	if (ret)
 		return ret;
-	attr.wq_attr.pd = pd_num;
-	ret = mlx5_devx_sq_create(priv->ctx, &sq->sq_obj,
+	attr.pd = pd_num;
+	attr.rq_size = 0;
+	attr.sq_size = 1 << log_nb_desc;
+	ret = mlx5_devx_qp_create(priv->ctx, &qp_obj->qp_obj,
 			MLX5_REGEX_WQE_LOG_NUM(priv->has_umr, log_nb_desc),
 			&attr, SOCKET_ID_ANY);
 	if (ret) {
-		DRV_LOG(ERR, "Can't create SQ object.");
+		DRV_LOG(ERR, "Can't create QP object.");
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	ret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);
+	ret = mlx5_devx_qp2rts(&qp_obj->qp_obj);
 	if (ret) {
-		DRV_LOG(ERR, "Can't change SQ state to ready.");
-		regex_ctrl_destroy_sq(qp, q_ind);
+		DRV_LOG(ERR, "Can't change QP state to RTS.");
+		regex_ctrl_destroy_inner_qp(qp, q_ind);
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
@@ -224,10 +220,10 @@ mlx5_regex_qp_setup(struct rte_regexdev *dev, uint16_t qp_ind,
 		    (1 << MLX5_REGEX_WQE_LOG_NUM(priv->has_umr, log_desc));
 	else
 		qp->nb_obj = 1;
-	qp->sqs = rte_malloc(NULL,
-			qp->nb_obj * sizeof(struct mlx5_regex_sq), 64);
-	if (!qp->sqs) {
-		DRV_LOG(ERR, "Can't allocate sq array memory.");
+	qp->qps = rte_malloc(NULL,
+			qp->nb_obj * sizeof(struct mlx5_regex_inner_qp), 64);
+	if (!qp->qps) {
+		DRV_LOG(ERR, "Can't allocate qp array memory.");
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
@@ -238,9 +234,9 @@ mlx5_regex_qp_setup(struct rte_regexdev *dev, uint16_t qp_ind,
 		goto err_cq;
 	}
 	for (i = 0; i < qp->nb_obj; i++) {
-		ret = regex_ctrl_create_sq(priv, qp, i, log_desc);
+		ret = regex_ctrl_create_inner_qp(priv, qp, i, log_desc);
 		if (ret) {
-			DRV_LOG(ERR, "Can't create sq.");
+			DRV_LOG(ERR, "Can't create qp object.");
 			goto err_btree;
 		}
 		nb_sq_config++;
@@ -266,9 +262,9 @@ mlx5_regex_qp_setup(struct rte_regexdev *dev, uint16_t qp_ind,
 	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
 err_btree:
 	for (i = 0; i < nb_sq_config; i++)
-		regex_ctrl_destroy_sq(qp, i);
+		regex_ctrl_destroy_inner_qp(qp, i);
 	regex_ctrl_destroy_cq(&qp->cq);
 err_cq:
-	rte_free(qp->sqs);
+	rte_free(qp->qps);
 	return ret;
 }