From patchwork Wed Jun 30 12:46:08 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95075
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Wed, 30 Jun 2021 15:46:08 +0300
Message-ID: <20210630124609.8711-22-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210630124609.8711-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
 <20210630124609.8711-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v2 21/22] net/mlx5: support list non-lcore operations
List-Id: DPDK patches and discussions

This commit supports list operations from non-lcore threads, i.e.
threads for which rte_lcore_index() returns -1, with an extra
sub-list protected by a dedicated spinlock.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common_utils.c | 92 +++++++++++++++++--------
 drivers/common/mlx5/mlx5_common_utils.h |  9 ++-
 2 files changed, 71 insertions(+), 30 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index 858c8d8164..d58d0d08ab 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -20,8 +20,8 @@ mlx5_list_init(struct mlx5_list_inconst *l_inconst,
 {
 	rte_rwlock_init(&l_inconst->lock);
 	if (l_const->lcores_share) {
-		l_inconst->cache[RTE_MAX_LCORE] = gc;
-		LIST_INIT(&l_inconst->cache[RTE_MAX_LCORE]->h);
+		l_inconst->cache[MLX5_LIST_GLOBAL] = gc;
+		LIST_INIT(&l_inconst->cache[MLX5_LIST_GLOBAL]->h);
 	}
 	DRV_LOG(DEBUG, "mlx5 list %s initialized.", l_const->name);
 	return 0;
@@ -59,6 +59,7 @@ mlx5_list_create(const char *name, void *ctx, bool lcores_share,
 	list->l_const.cb_remove = cb_remove;
 	list->l_const.cb_clone = cb_clone;
 	list->l_const.cb_clone_free = cb_clone_free;
+	rte_spinlock_init(&list->l_const.nlcore_lock);
 	if (lcores_share)
 		gc = (struct mlx5_list_cache *)(list + 1);
 	if (mlx5_list_init(&list->l_inconst, &list->l_const, gc) != 0) {
@@ -85,11 +86,11 @@ __list_lookup(struct mlx5_list_inconst *l_inconst,
 			DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
 				l_const->name, (void *)entry,
 				entry->ref_cnt);
-		} else if (lcore_index < RTE_MAX_LCORE) {
+		} else if (lcore_index < MLX5_LIST_GLOBAL) {
 			ret = __atomic_load_n(&entry->ref_cnt,
 					      __ATOMIC_RELAXED);
 		}
-		if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
+		if (likely(ret != 0 || lcore_index == MLX5_LIST_GLOBAL))
 			return entry;
 		if (reuse && ret == 0)
 			entry->ref_cnt--; /* Invalid entry. */
@@ -107,10 +108,11 @@ _mlx5_list_lookup(struct mlx5_list_inconst *l_inconst,
 	int i;

 	rte_rwlock_read_lock(&l_inconst->lock);
-	for (i = 0; i < RTE_MAX_LCORE; i++) {
+	for (i = 0; i < MLX5_LIST_GLOBAL; i++) {
 		if (!l_inconst->cache[i])
 			continue;
-		entry = __list_lookup(l_inconst, l_const, i, ctx, false);
+		entry = __list_lookup(l_inconst, l_const, i,
+				      ctx, false);
 		if (entry)
 			break;
 	}
@@ -170,18 +172,11 @@ __list_cache_clean(struct mlx5_list_inconst *l_inconst,
 static inline struct mlx5_list_entry *
 _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 		    struct mlx5_list_const *l_const,
-		    void *ctx)
+		    void *ctx, int lcore_index)
 {
 	struct mlx5_list_entry *entry, *local_entry;
 	volatile uint32_t prev_gen_cnt = 0;
-	int lcore_index = rte_lcore_index(rte_lcore_id());
-
 	MLX5_ASSERT(l_inconst);
-	MLX5_ASSERT(lcore_index < RTE_MAX_LCORE);
-	if (unlikely(lcore_index == -1)) {
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
 	if (unlikely(!l_inconst->cache[lcore_index])) {
 		l_inconst->cache[lcore_index] = mlx5_malloc(0,
 					sizeof(struct mlx5_list_cache),
@@ -202,7 +197,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 	if (l_const->lcores_share) {
 		/* 2. Lookup with read lock on global list, reuse if found. */
 		rte_rwlock_read_lock(&l_inconst->lock);
-		entry = __list_lookup(l_inconst, l_const, RTE_MAX_LCORE,
+		entry = __list_lookup(l_inconst, l_const, MLX5_LIST_GLOBAL,
 				      ctx, true);
 		if (likely(entry)) {
 			rte_rwlock_read_unlock(&l_inconst->lock);
@@ -241,7 +236,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 	if (unlikely(prev_gen_cnt != l_inconst->gen_cnt)) {
 		struct mlx5_list_entry *oentry =
 					__list_lookup(l_inconst, l_const,
-						      RTE_MAX_LCORE,
+						      MLX5_LIST_GLOBAL,
 						      ctx, true);

 		if (unlikely(oentry)) {
@@ -255,7 +250,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 		}
 	}
 	/* 5. Update lists. */
-	LIST_INSERT_HEAD(&l_inconst->cache[RTE_MAX_LCORE]->h, entry, next);
+	LIST_INSERT_HEAD(&l_inconst->cache[MLX5_LIST_GLOBAL]->h, entry, next);
 	l_inconst->gen_cnt++;
 	rte_rwlock_write_unlock(&l_inconst->lock);
 	LIST_INSERT_HEAD(&l_inconst->cache[lcore_index]->h, local_entry, next);
@@ -268,21 +263,30 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
-	return _mlx5_list_register(&list->l_inconst, &list->l_const, ctx);
+	struct mlx5_list_entry *entry;
+	int lcore_index = rte_lcore_index(rte_lcore_id());
+
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&list->l_const.nlcore_lock);
+	}
+	entry = _mlx5_list_register(&list->l_inconst, &list->l_const, ctx,
+				    lcore_index);
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&list->l_const.nlcore_lock);
+	return entry;
 }

 static inline int
 _mlx5_list_unregister(struct mlx5_list_inconst *l_inconst,
 		      struct mlx5_list_const *l_const,
-		      struct mlx5_list_entry *entry)
+		      struct mlx5_list_entry *entry,
+		      int lcore_idx)
 {
 	struct mlx5_list_entry *gentry = entry->gentry;
-	int lcore_idx;

 	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
 		return 1;
-	lcore_idx = rte_lcore_index(rte_lcore_id());
-	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
 	if (entry->lcore_idx == (uint32_t)lcore_idx) {
 		LIST_REMOVE(entry, next);
 		if (l_const->lcores_share)
@@ -321,7 +325,19 @@
 int
 mlx5_list_unregister(struct mlx5_list *list, struct mlx5_list_entry *entry)
 {
-	return _mlx5_list_unregister(&list->l_inconst, &list->l_const, entry);
+	int ret;
+	int lcore_index = rte_lcore_index(rte_lcore_id());
+
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&list->l_const.nlcore_lock);
+	}
+	ret = _mlx5_list_unregister(&list->l_inconst, &list->l_const, entry,
+				    lcore_index);
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&list->l_const.nlcore_lock);
+	return ret;
+
 }

 static void
@@ -332,13 +348,13 @@ mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
 	int i;

 	MLX5_ASSERT(l_inconst);
-	for (i = 0; i <= RTE_MAX_LCORE; i++) {
+	for (i = 0; i < MLX5_LIST_MAX; i++) {
 		if (!l_inconst->cache[i])
 			continue;
 		while (!LIST_EMPTY(&l_inconst->cache[i]->h)) {
 			entry = LIST_FIRST(&l_inconst->cache[i]->h);
 			LIST_REMOVE(entry, next);
-			if (i == RTE_MAX_LCORE) {
+			if (i == MLX5_LIST_GLOBAL) {
 				l_const->cb_remove(l_const->ctx, entry);
 				DRV_LOG(DEBUG, "mlx5 list %s entry %p "
 					"destroyed.", l_const->name,
@@ -347,7 +363,7 @@ mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
 				l_const->cb_clone_free(l_const->ctx, entry);
 			}
 		}
-		if (i != RTE_MAX_LCORE)
+		if (i != MLX5_LIST_GLOBAL)
 			mlx5_free(l_inconst->cache[i]);
 	}
 }
@@ -416,6 +432,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 	h->l_const.cb_remove = cb_remove;
 	h->l_const.cb_clone = cb_clone;
 	h->l_const.cb_clone_free = cb_clone_free;
+	rte_spinlock_init(&h->l_const.nlcore_lock);
 	h->mask = act_size - 1;
 	h->direct_key = direct_key;
 	gc = (struct mlx5_list_cache *)&h->buckets[act_size];
@@ -449,28 +466,45 @@ mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx)
 {
 	uint32_t idx;
 	struct mlx5_list_entry *entry;
+	int lcore_index = rte_lcore_index(rte_lcore_id());

 	if (h->direct_key)
 		idx = (uint32_t)(key & h->mask);
 	else
 		idx = rte_hash_crc_8byte(key, 0) & h->mask;
-	entry = _mlx5_list_register(&h->buckets[idx].l, &h->l_const, ctx);
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&h->l_const.nlcore_lock);
+	}
+	entry = _mlx5_list_register(&h->buckets[idx].l, &h->l_const, ctx,
+				    lcore_index);
 	if (likely(entry)) {
 		if (h->l_const.lcores_share)
 			entry->gentry->bucket_idx = idx;
 		else
 			entry->bucket_idx = idx;
 	}
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&h->l_const.nlcore_lock);
 	return entry;
 }

 int
 mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry)
 {
+	int lcore_index = rte_lcore_index(rte_lcore_id());
+	int ret;
 	uint32_t idx = h->l_const.lcores_share ? entry->gentry->bucket_idx :
 							entry->bucket_idx;
-
-	return _mlx5_list_unregister(&h->buckets[idx].l, &h->l_const, entry);
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&h->l_const.nlcore_lock);
+	}
+	ret = _mlx5_list_unregister(&h->buckets[idx].l, &h->l_const, entry,
+				    lcore_index);
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&h->l_const.nlcore_lock);
+	return ret;
 }

 void
diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h
index 6acd0d3754..66ce27464c 100644
--- a/drivers/common/mlx5/mlx5_common_utils.h
+++ b/drivers/common/mlx5/mlx5_common_utils.h
@@ -11,6 +11,12 @@

 /** Maximum size of string for naming. */
 #define MLX5_NAME_SIZE			32
+/** Maximum size of list. */
+#define MLX5_LIST_MAX			(RTE_MAX_LCORE + 2)
+/** Global list index. */
+#define MLX5_LIST_GLOBAL		((MLX5_LIST_MAX) - 1)
+/** Non-lcore list index. */
+#define MLX5_LIST_NLCORE		((MLX5_LIST_MAX) - 2)

 struct mlx5_list;

@@ -87,6 +93,7 @@ struct mlx5_list_const {
 	char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */
 	void *ctx; /* user objects target to callback. */
 	bool lcores_share; /* Whether to share objects between the lcores. */
+	rte_spinlock_t nlcore_lock; /* Lock for non-lcore list. */
 	mlx5_list_create_cb cb_create; /**< entry create callback. */
 	mlx5_list_match_cb cb_match; /**< entry match callback. */
 	mlx5_list_remove_cb cb_remove; /**< entry remove callback. */
@@ -102,7 +109,7 @@ struct mlx5_list_inconst {
 	rte_rwlock_t lock; /* read/write lock. */
 	volatile uint32_t gen_cnt; /* List modification may update it. */
 	volatile uint32_t count; /* number of entries in list. */
-	struct mlx5_list_cache *cache[RTE_MAX_LCORE + 1];
+	struct mlx5_list_cache *cache[MLX5_LIST_MAX];
 	/* Lcore cache, last index is the global cache. */
 };
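
Editor's note, not part of the patch: all four public wrappers above follow
the same dispatch pattern. An EAL thread uses its own per-lcore cache slot
without locking, while a non-EAL thread (rte_lcore_index() returns -1) is
redirected to the single shared MLX5_LIST_NLCORE slot, serialized by
nlcore_lock. Below is a minimal standalone sketch of that pattern; the
example_* names are hypothetical stand-ins, and only rte_lcore_index(),
rte_lcore_id(), unlikely() and the rte_spinlock_*() calls are real DPDK APIs.

/* Sketch only: example_* identifiers are hypothetical stand-ins. */
#include <rte_branch_prediction.h>
#include <rte_lcore.h>
#include <rte_spinlock.h>

#define EXAMPLE_LIST_MAX	(RTE_MAX_LCORE + 2)      /* lcores + non-lcore + global */
#define EXAMPLE_LIST_NLCORE	((EXAMPLE_LIST_MAX) - 2) /* shared non-lcore slot */

struct example_list {
	rte_spinlock_t nlcore_lock;    /* serializes the shared non-lcore slot */
	void *cache[EXAMPLE_LIST_MAX]; /* per-index caches, as in mlx5_list_inconst */
};

/* Hypothetical inner worker, analogous to _mlx5_list_register(). */
void *example_register_on_index(struct example_list *l, void *ctx,
				int lcore_index);

static void *
example_list_register(struct example_list *l, void *ctx)
{
	void *entry;
	int lcore_index = rte_lcore_index(rte_lcore_id());

	if (unlikely(lcore_index == -1)) {
		/* Non-EAL thread: fall back to the shared slot, under lock. */
		lcore_index = EXAMPLE_LIST_NLCORE;
		rte_spinlock_lock(&l->nlcore_lock);
	}
	entry = example_register_on_index(l, ctx, lcore_index);
	if (unlikely(lcore_index == EXAMPLE_LIST_NLCORE))
		rte_spinlock_unlock(&l->nlcore_lock);
	return entry;
}

Each EAL lcore owns its slot, so the common fast path stays lock-free; only
the rare non-EAL callers pay for the spinlock. The global cache keeps the
last index (MLX5_LIST_GLOBAL), so the existing object-sharing logic is
otherwise unchanged.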