From patchwork Tue Jul 6 13:32:35 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95391
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Tue, 6 Jul 2021 16:32:35 +0300
Message-ID: <20210706133257.3353-5-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210706133257.3353-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
 <20210706133257.3353-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 04/26] net/mlx5: support index pool non-lcore operations
List-Id: DPDK patches and discussions

This commit supports index pool operations from non-lcore threads,
using an extra cache slot protected by an lcore lock.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_utils.c | 75 +++++++++++++++++++++++++----------
 drivers/net/mlx5/mlx5_utils.h |  3 +-
 2 files changed, 56 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 32f8d65073..f9557c09ff 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -275,6 +275,7 @@ mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg)
 		mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1);
 	if (!cfg->per_core_cache)
 		pool->free_list = TRUNK_INVALID;
+	rte_spinlock_init(&pool->lcore_lock);
 	return pool;
 }
 
@@ -515,20 +516,14 @@ mlx5_ipool_allocate_from_global(struct mlx5_indexed_pool *pool, int cidx)
 }
 
 static void *
-mlx5_ipool_get_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
+_mlx5_ipool_get_cache(struct mlx5_indexed_pool *pool, int cidx, uint32_t idx)
 {
 	struct mlx5_indexed_trunk *trunk;
 	struct mlx5_indexed_cache *lc;
 	uint32_t trunk_idx;
 	uint32_t entry_idx;
-	int cidx;
 
 	MLX5_ASSERT(idx);
-	cidx = rte_lcore_index(rte_lcore_id());
-	if (unlikely(cidx == -1)) {
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
 	if (unlikely(!pool->cache[cidx])) {
 		pool->cache[cidx] = pool->cfg.malloc(MLX5_MEM_ZERO,
 				sizeof(struct mlx5_ipool_per_lcore) +
@@ -549,15 +544,27 @@ mlx5_ipool_get_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
 }
 
 static void *
-mlx5_ipool_malloc_cache(struct mlx5_indexed_pool *pool, uint32_t *idx)
+mlx5_ipool_get_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
 {
+	void *entry;
 	int cidx;
 
 	cidx = rte_lcore_index(rte_lcore_id());
 	if (unlikely(cidx == -1)) {
-		rte_errno = ENOTSUP;
-		return NULL;
+		cidx = RTE_MAX_LCORE;
+		rte_spinlock_lock(&pool->lcore_lock);
 	}
+	entry = _mlx5_ipool_get_cache(pool, cidx, idx);
+	if (unlikely(cidx == RTE_MAX_LCORE))
+		rte_spinlock_unlock(&pool->lcore_lock);
+	return entry;
+}
+
+
+static void *
+_mlx5_ipool_malloc_cache(struct mlx5_indexed_pool *pool, int cidx,
+			 uint32_t *idx)
+{
 	if (unlikely(!pool->cache[cidx])) {
 		pool->cache[cidx] = pool->cfg.malloc(MLX5_MEM_ZERO,
 				sizeof(struct mlx5_ipool_per_lcore) +
@@ -570,29 +577,40 @@ mlx5_ipool_malloc_cache(struct mlx5_indexed_pool *pool, uint32_t *idx)
 	} else if (pool->cache[cidx]->len) {
 		pool->cache[cidx]->len--;
 		*idx = pool->cache[cidx]->idx[pool->cache[cidx]->len];
-		return mlx5_ipool_get_cache(pool, *idx);
+		return _mlx5_ipool_get_cache(pool, cidx, *idx);
 	}
 	/* Not enough idx in global cache. Keep fetching from global. */
 	*idx = mlx5_ipool_allocate_from_global(pool, cidx);
 	if (unlikely(!(*idx)))
 		return NULL;
-	return mlx5_ipool_get_cache(pool, *idx);
+	return _mlx5_ipool_get_cache(pool, cidx, *idx);
 }
 
-static void
-mlx5_ipool_free_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
+static void *
+mlx5_ipool_malloc_cache(struct mlx5_indexed_pool *pool, uint32_t *idx)
 {
+	void *entry;
 	int cidx;
+
+	cidx = rte_lcore_index(rte_lcore_id());
+	if (unlikely(cidx == -1)) {
+		cidx = RTE_MAX_LCORE;
+		rte_spinlock_lock(&pool->lcore_lock);
+	}
+	entry = _mlx5_ipool_malloc_cache(pool, cidx, idx);
+	if (unlikely(cidx == RTE_MAX_LCORE))
+		rte_spinlock_unlock(&pool->lcore_lock);
+	return entry;
+}
+
+static void
+_mlx5_ipool_free_cache(struct mlx5_indexed_pool *pool, int cidx, uint32_t idx)
+{
 	struct mlx5_ipool_per_lcore *ilc;
 	struct mlx5_indexed_cache *gc, *olc = NULL;
 	uint32_t reclaim_num = 0;
 
 	MLX5_ASSERT(idx);
-	cidx = rte_lcore_index(rte_lcore_id());
-	if (unlikely(cidx == -1)) {
-		rte_errno = ENOTSUP;
-		return;
-	}
 	/*
 	 * When index was allocated on core A but freed on core B. In this
 	 * case check if local cache on core B was allocated before.
@@ -635,6 +653,21 @@ mlx5_ipool_free_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
 	pool->cache[cidx]->len++;
 }
 
+static void
+mlx5_ipool_free_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
+{
+	int cidx;
+
+	cidx = rte_lcore_index(rte_lcore_id());
+	if (unlikely(cidx == -1)) {
+		cidx = RTE_MAX_LCORE;
+		rte_spinlock_lock(&pool->lcore_lock);
+	}
+	_mlx5_ipool_free_cache(pool, cidx, idx);
+	if (unlikely(cidx == RTE_MAX_LCORE))
+		rte_spinlock_unlock(&pool->lcore_lock);
+}
+
 void *
 mlx5_ipool_malloc(struct mlx5_indexed_pool *pool, uint32_t *idx)
 {
@@ -814,7 +847,7 @@ mlx5_ipool_destroy(struct mlx5_indexed_pool *pool)
 	MLX5_ASSERT(pool);
 	mlx5_ipool_lock(pool);
 	if (pool->cfg.per_core_cache) {
-		for (i = 0; i < RTE_MAX_LCORE; i++) {
+		for (i = 0; i <= RTE_MAX_LCORE; i++) {
 			/*
 			 * Free only old global cache. Pool gc will be
 			 * freed at last.
@@ -883,7 +916,7 @@ mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool)
 		for (i = 0; i < gc->len; i++)
 			rte_bitmap_clear(ibmp, gc->idx[i] - 1);
 	/* Clear core cache. */
-	for (i = 0; i < RTE_MAX_LCORE; i++) {
+	for (i = 0; i < RTE_MAX_LCORE + 1; i++) {
 		struct mlx5_ipool_per_lcore *ilc = pool->cache[i];
 
 		if (!ilc)
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 737dd7052d..a509b0a4eb 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -248,6 +248,7 @@ struct mlx5_ipool_per_lcore {
 struct mlx5_indexed_pool {
 	struct mlx5_indexed_pool_config cfg; /* Indexed pool configuration. */
 	rte_spinlock_t rsz_lock; /* Pool lock for multiple thread usage. */
+	rte_spinlock_t lcore_lock;
 	/* Dim of trunk pointer array. */
 	union {
 		struct {
@@ -259,7 +260,7 @@ struct mlx5_indexed_pool {
 		struct {
 			struct mlx5_indexed_cache *gc;
 			/* Global cache. */
-			struct mlx5_ipool_per_lcore *cache[RTE_MAX_LCORE];
+			struct mlx5_ipool_per_lcore *cache[RTE_MAX_LCORE + 1];
 			/* Local cache. */
 			struct rte_bitmap *ibmp;
 			void *bmp_mem;
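
For readers following the locking scheme: the sketch below condenses the
pattern the patch applies into a standalone form. It is illustrative only,
not driver code; the names demo_pool and demo_get are hypothetical, and the
real per-lcore cache work is reduced to a placeholder read. EAL lcores index
their own cache slot lock-free, while a non-EAL thread (for which
rte_lcore_index() returns -1) is redirected to the shared extra slot at index
RTE_MAX_LCORE and holds the spinlock for the duration of the operation.

	/* Minimal sketch, assuming hypothetical demo_* names. */
	#include <rte_branch_prediction.h>
	#include <rte_lcore.h>
	#include <rte_spinlock.h>

	struct demo_pool {
		rte_spinlock_t lcore_lock; /* Serializes the shared extra slot. */
		void *cache[RTE_MAX_LCORE + 1]; /* Last slot: non-EAL threads. */
	};

	static void *
	demo_get(struct demo_pool *pool)
	{
		void *entry;
		int cidx;

		/* Non-EAL threads have no lcore index, so fall back to the
		 * shared slot and take the lock only on that slow path.
		 */
		cidx = rte_lcore_index(rte_lcore_id());
		if (unlikely(cidx == -1)) {
			cidx = RTE_MAX_LCORE;
			rte_spinlock_lock(&pool->lcore_lock);
		}
		entry = pool->cache[cidx]; /* Placeholder for real cache work. */
		if (unlikely(cidx == RTE_MAX_LCORE))
			rte_spinlock_unlock(&pool->lcore_lock);
		return entry;
	}

This keeps the fast path unchanged for EAL lcores (a private slot, no lock),
so only control-path callers from non-EAL threads pay for the spinlock; and
since RTE_MAX_LCORE is never a valid lcore index, the extra array element
cannot collide with any real lcore's cache.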