From patchwork Sun Sep 26 11:18:57 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99687
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sun, 26 Sep 2021 19:18:57 +0800
Message-ID: <20210926111904.237736-5-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
References: <20210926111904.237736-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH 04/11] net/mlx5: split multiple packet Rq memory pool
Port info is not visible from a shared Rx queue, so split the
Multi-Packet RQ (MPRQ) mempool from the device into per-Rx-queue
mempools, and change the pool flag to single-consumer get
(MEMPOOL_F_SC_GET).

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5.c         |   1 -
 drivers/net/mlx5/mlx5_rx.h      |   4 +-
 drivers/net/mlx5/mlx5_rxq.c     | 109 ++++++++++++--------------------
 drivers/net/mlx5/mlx5_trigger.c |  10 ++-
 4 files changed, 47 insertions(+), 77 deletions(-)
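Note for reviewers (illustrative sketch only, not part of the patch): after
this change every standard Rx queue with MPRQ enabled owns a private mempool,
named after the port and queue index and created with MEMPOOL_F_SC_GET,
because only that queue's datapath gets buffers from it. The snippet below is
condensed from the mlx5_mprq_alloc_mp() hunk further down and reuses the same
identifiers:

	/* Per-queue MPRQ pool: the name embeds the Rx queue index and the
	 * pool is single-consumer (MEMPOOL_F_SC_GET) since only this
	 * queue gets buffers from it.
	 */
	snprintf(name, sizeof(name), "port-%u-queue-%hu-mprq",
		 dev->data->port_id, rxq->idx);
	mp = rte_mempool_create(name, obj_num, obj_size, MLX5_MPRQ_MP_CACHE_SZ,
				0, NULL, NULL, mlx5_mprq_buf_init,
				(void *)(uintptr_t)(1 << strd_num_n),
				dev->device->numa_node, MEMPOOL_F_SC_GET);
	if (mp == NULL) {
		rte_errno = ENOMEM;
		return -rte_errno;
	}
	rxq->mprq_mp = mp;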
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe71..3abb8c97e76 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1602,7 +1602,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	mlx5_drop_action_destroy(dev);
 	if (priv->mreg_cp_tbl)
 		mlx5_hlist_destroy(priv->mreg_cp_tbl);
-	mlx5_mprq_free_mp(dev);
 	if (priv->sh->ct_mng)
 		mlx5_flow_aso_ct_mng_close(priv->sh);
 	mlx5_os_free_shared_dr(priv);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index d44c8078dea..a8e0c3162b0 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -179,8 +179,8 @@ struct mlx5_rxq_ctrl {
 extern uint8_t rss_hash_default_key[];
 
 unsigned int mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data);
-int mlx5_mprq_free_mp(struct rte_eth_dev *dev);
-int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev);
+int mlx5_mprq_free_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl);
+int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl);
 int mlx5_rx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
 int mlx5_rx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
 int mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t queue_id);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 7e97cdd4bc0..14de8d0e6a4 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1087,7 +1087,7 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
 }
 
 /**
- * Free mempool of Multi-Packet RQ.
+ * Free RXQ mempool of Multi-Packet RQ.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -1096,16 +1096,15 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
  *   0 on success, negative errno value on failure.
  */
 int
-mlx5_mprq_free_mp(struct rte_eth_dev *dev)
+mlx5_mprq_free_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_mempool *mp = priv->mprq_mp;
-	unsigned int i;
+	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
+	struct rte_mempool *mp = rxq->mprq_mp;
 
 	if (mp == NULL)
 		return 0;
-	DRV_LOG(DEBUG, "port %u freeing mempool (%s) for Multi-Packet RQ",
-		dev->data->port_id, mp->name);
+	DRV_LOG(DEBUG, "port %u queue %hu freeing mempool (%s) for Multi-Packet RQ",
+		dev->data->port_id, rxq->idx, mp->name);
 	/*
 	 * If a buffer in the pool has been externally attached to a mbuf and it
 	 * is still in use by application, destroying the Rx queue can spoil
@@ -1123,34 +1122,28 @@ mlx5_mprq_free_mp(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	rte_mempool_free(mp);
-	/* Unset mempool for each Rx queue. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-
-		if (rxq == NULL)
-			continue;
-		rxq->mprq_mp = NULL;
-	}
-	priv->mprq_mp = NULL;
+	rxq->mprq_mp = NULL;
 	return 0;
 }
 
 /**
- * Allocate a mempool for Multi-Packet RQ. All configured Rx queues share the
- * mempool. If already allocated, reuse it if there're enough elements.
+ * Allocate RXQ a mempool for Multi-Packet RQ.
+ * If already allocated, reuse it if there're enough elements.
  * Otherwise, resize it.
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param rxq_ctrl
+ *   Pointer to RXQ.
  *
  * @return
  *   0 on success, negative errno value on failure.
  */
 int
-mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
+mlx5_mprq_alloc_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_mempool *mp = priv->mprq_mp;
+	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
+	struct rte_mempool *mp = rxq->mprq_mp;
 	char name[RTE_MEMPOOL_NAMESIZE];
 	unsigned int desc = 0;
 	unsigned int buf_len;
@@ -1158,28 +1151,15 @@
 	unsigned int obj_size;
 	unsigned int strd_num_n = 0;
 	unsigned int strd_sz_n = 0;
-	unsigned int i;
-	unsigned int n_ibv = 0;
 
-	if (!mlx5_mprq_enabled(dev))
+	if (rxq_ctrl == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
 		return 0;
-	/* Count the total number of descriptors configured. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
-
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
-			continue;
-		n_ibv++;
-		desc += 1 << rxq->elts_n;
-		/* Get the max number of strides. */
-		if (strd_num_n < rxq->strd_num_n)
-			strd_num_n = rxq->strd_num_n;
-		/* Get the max size of a stride. */
-		if (strd_sz_n < rxq->strd_sz_n)
-			strd_sz_n = rxq->strd_sz_n;
-	}
+	/* Number of descriptors configured. */
+	desc = 1 << rxq->elts_n;
+	/* Get the max number of strides. */
+	strd_num_n = rxq->strd_num_n;
+	/* Get the max size of a stride. */
+	strd_sz_n = rxq->strd_sz_n;
 	MLX5_ASSERT(strd_num_n && strd_sz_n);
 	buf_len = (1 << strd_num_n) * (1 << strd_sz_n);
 	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len + (1 << strd_num_n) *
@@ -1196,7 +1176,7 @@
 	 * this Mempool gets available again.
 	 */
 	desc *= 4;
-	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ * n_ibv;
+	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ;
 	/*
 	 * rte_mempool_create_empty() has sanity check to refuse large cache
 	 * size compared to the number of elements.
@@ -1209,50 +1189,41 @@
 		DRV_LOG(DEBUG, "port %u mempool %s is being reused",
 			dev->data->port_id, mp->name);
 		/* Reuse. */
-		goto exit;
-	} else if (mp != NULL) {
-		DRV_LOG(DEBUG, "port %u mempool %s should be resized, freeing it",
-			dev->data->port_id, mp->name);
+		return 0;
+	}
+	if (mp != NULL) {
+		DRV_LOG(DEBUG, "port %u queue %u mempool %s should be resized, freeing it",
+			dev->data->port_id, rxq->idx, mp->name);
 		/*
 		 * If failed to free, which means it may be still in use, no way
 		 * but to keep using the existing one. On buffer underrun,
 		 * packets will be memcpy'd instead of external buffer
 		 * attachment.
 		 */
-		if (mlx5_mprq_free_mp(dev)) {
+		if (mlx5_mprq_free_mp(dev, rxq_ctrl) != 0) {
 			if (mp->elt_size >= obj_size)
-				goto exit;
+				return 0;
 			else
 				return -rte_errno;
 		}
 	}
-	snprintf(name, sizeof(name), "port-%u-mprq", dev->data->port_id);
+	snprintf(name, sizeof(name), "port-%u-queue-%hu-mprq",
+		 dev->data->port_id, rxq->idx);
 	mp = rte_mempool_create(name, obj_num, obj_size, MLX5_MPRQ_MP_CACHE_SZ,
 				0, NULL, NULL, mlx5_mprq_buf_init,
-				(void *)((uintptr_t)1 << strd_num_n),
-				dev->device->numa_node, 0);
+				(void *)(uintptr_t)(1 << strd_num_n),
+				dev->device->numa_node, MEMPOOL_F_SC_GET);
 	if (mp == NULL) {
 		DRV_LOG(ERR,
-			"port %u failed to allocate a mempool for"
+			"port %u queue %hu failed to allocate a mempool for"
 			" Multi-Packet RQ, count=%u, size=%u",
-			dev->data->port_id, obj_num, obj_size);
+			dev->data->port_id, rxq->idx, obj_num, obj_size);
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	priv->mprq_mp = mp;
-exit:
-	/* Set mempool for each Rx queue. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
-
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
-			continue;
-		rxq->mprq_mp = mp;
-	}
-	DRV_LOG(INFO, "port %u Multi-Packet RQ is configured",
-		dev->data->port_id);
+	rxq->mprq_mp = mp;
+	DRV_LOG(INFO, "port %u queue %hu Multi-Packet RQ is configured",
+		dev->data->port_id, rxq->idx);
 	return 0;
 }
 
@@ -1717,8 +1688,10 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+			mlx5_mprq_free_mp(dev, rxq_ctrl);
+		}
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index c3adf5082e6..0753dbad053 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -138,11 +138,6 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 	unsigned int i;
 	int ret = 0;
 
-	/* Allocate/reuse/resize mempool for Multi-Packet RQ. */
-	if (mlx5_mprq_alloc_mp(dev)) {
-		/* Should not release Rx queues but return immediately. */
-		return -rte_errno;
-	}
 	DRV_LOG(DEBUG, "Port %u device_attr.max_qp_wr is %d.",
 		dev->data->port_id, priv->sh->device_attr.max_qp_wr);
 	DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
@@ -153,8 +148,11 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (!rxq_ctrl)
 			continue;
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-			/* Pre-register Rx mempools. */
 			if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
+				/* Allocate/reuse/resize mempool for MPRQ. */
+				if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0)
+					goto error;
+				/* Pre-register Rx mempools. */
 				mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
 						  rxq_ctrl->rxq.mprq_mp);
 			} else {
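
For reference, a sketch (not part of the patch) of the resulting MPRQ mempool
lifecycle, now tied to the Rx queue object instead of the device; it condenses
the mlx5_trigger.c and mlx5_rxq.c hunks above. MPRQ itself is still enabled
the usual way, e.g. with the mprq_en=1 devarg.

	/* Rx queue start (mlx5_rxq_start): create/reuse the per-queue pool
	 * and register it for memory region lookups.
	 */
	if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
		if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0)
			goto error;
		mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
				  rxq_ctrl->rxq.mprq_mp);
	}

	/* Rx queue release (mlx5_rxq_release): the pool is freed together
	 * with the queue instead of at device close.
	 */
	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
		mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
		mlx5_mprq_free_mp(dev, rxq_ctrl);
	}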