From patchwork Fri Jul 2 06:18:03 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95161
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:03 +0300
Message-ID: <20210702061816.10454-10-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
 <20210702061816.10454-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v3 09/22] net/mlx5: manage list cache entries release
List-Id: DPDK patches and discussions
From: Matan Azrad

When a cache entry is allocated by lcore A and released by lcore B,
access to the cache list of lcore A must be synchronized: lcore B
cannot simply remove the entry from a cache list it does not own.

The design decision is to manage a counter per lcore cache that is
increased atomically when a non-original lcore decreases the
reference counter of a cache entry to 0.

In the list register operation, before the running lcore starts the
lookup in its own cache, it checks this counter and, when it is
non-zero, first frees the invalidated entries from its cache.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_utils.c | 79 +++++++++++++++++++++++------------
 drivers/net/mlx5/mlx5_utils.h |  2 +
 2 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 772b352af5..7cdf44dcf7 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -47,36 +47,25 @@ __list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse)
 	uint32_t ret;
 
 	while (entry != NULL) {
-		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
-
-		if (list->cb_match(list, entry, ctx)) {
-			if (lcore_index < RTE_MAX_LCORE) {
+		if (list->cb_match(list, entry, ctx) == 0) {
+			if (reuse) {
+				ret = __atomic_add_fetch(&entry->ref_cnt, 1,
+							 __ATOMIC_ACQUIRE) - 1;
+				DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
+					list->name, (void *)entry,
+					entry->ref_cnt);
+			} else if (lcore_index < RTE_MAX_LCORE) {
 				ret = __atomic_load_n(&entry->ref_cnt,
 						      __ATOMIC_ACQUIRE);
-				if (ret == 0) {
-					LIST_REMOVE(entry, next);
-					list->cb_clone_free(list, entry);
-				}
-			}
-			entry = nentry;
-			continue;
-		}
-		if (reuse) {
-			ret = __atomic_add_fetch(&entry->ref_cnt, 1,
-						 __ATOMIC_ACQUIRE);
-			if (ret == 1u) {
-				/* Entry was invalid before, free it. */
-				LIST_REMOVE(entry, next);
-				list->cb_clone_free(list, entry);
-				entry = nentry;
-				continue;
 			}
-			DRV_LOG(DEBUG, "mlx5 list %s entry %p ref++: %u.",
-				list->name, (void *)entry, entry->ref_cnt);
+			if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
+				return entry;
+			if (reuse && ret == 0)
+				entry->ref_cnt--; /* Invalid entry. */
 		}
-		break;
+		entry = LIST_NEXT(entry, next);
 	}
-	return entry;
+	return NULL;
 }
 
 struct mlx5_list_entry *
@@ -105,10 +94,31 @@ mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index,
 		return NULL;
 	lentry->ref_cnt = 1u;
 	lentry->gentry = gentry;
+	lentry->lcore_idx = (uint32_t)lcore_index;
 	LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next);
 	return lentry;
 }
 
+static void
+__list_cache_clean(struct mlx5_list *list, int lcore_index)
+{
+	struct mlx5_list_cache *c = &list->cache[lcore_index];
+	struct mlx5_list_entry *entry = LIST_FIRST(&c->h);
+	uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0,
+					       __ATOMIC_RELAXED);
+
+	while (inv_cnt != 0 && entry != NULL) {
+		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
+
+		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
+			LIST_REMOVE(entry, next);
+			list->cb_clone_free(list, entry);
+			inv_cnt--;
+		}
+		entry = nentry;
+	}
+}
+
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
@@ -122,6 +132,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 		rte_errno = ENOTSUP;
 		return NULL;
 	}
+	/* 0. Free entries that was invalidated by other lcores. */
+	__list_cache_clean(list, lcore_index);
 	/* 1. Lookup in local cache. */
 	local_entry = __list_lookup(list, lcore_index, ctx, true);
 	if (local_entry)
@@ -147,6 +159,7 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 	entry->ref_cnt = 1u;
 	local_entry->ref_cnt = 1u;
 	local_entry->gentry = entry;
+	local_entry->lcore_idx = (uint32_t)lcore_index;
 	rte_rwlock_write_lock(&list->lock);
 	/* 4. Make sure the same entry was not created before the write lock. */
 	if (unlikely(prev_gen_cnt != list->gen_cnt)) {
@@ -169,8 +182,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 	rte_rwlock_write_unlock(&list->lock);
 	LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next);
 	__atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE);
-	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.",
-		list->name, (void *)entry, entry->ref_cnt);
+	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name,
+		(void *)entry, entry->ref_cnt);
 	return local_entry;
 }
 
@@ -179,9 +192,21 @@ mlx5_list_unregister(struct mlx5_list *list,
 		      struct mlx5_list_entry *entry)
 {
 	struct mlx5_list_entry *gentry = entry->gentry;
+	int lcore_idx;
 
 	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
 		return 1;
+	lcore_idx = rte_lcore_index(rte_lcore_id());
+	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
+	if (entry->lcore_idx == (uint32_t)lcore_idx) {
+		LIST_REMOVE(entry, next);
+		list->cb_clone_free(list, entry);
+	} else if (likely(lcore_idx != -1)) {
+		__atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1,
+				   __ATOMIC_RELAXED);
+	} else {
+		return 0;
+	}
 	if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
 		return 1;
 	rte_rwlock_write_lock(&list->lock);
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 684d1e8a2a..ffa9cd5142 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -310,11 +310,13 @@ struct mlx5_list;
 struct mlx5_list_entry {
 	LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */
 	uint32_t ref_cnt; /* 0 means, entry is invalid. */
+	uint32_t lcore_idx;
 	struct mlx5_list_entry *gentry;
 };
 
 struct mlx5_list_cache {
 	LIST_HEAD(mlx5_list_head, mlx5_list_entry) h;
+	uint32_t inv_cnt; /* Invalid entries counter. */
 } __rte_cache_aligned;
 
 /**
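
--

For reference, below is a minimal standalone sketch (not part of the patch)
of the lazy cross-lcore invalidation scheme the commit message describes.
It keeps only the per-lcore cache list and the invalidation counter; the
global entry list, the clone callbacks, the lookup revalidation and all
locking of the real driver are left out, and every name here (cache_entry,
lcore_cache, NUM_LCORES, entry_release, cache_clean) is an illustrative
stand-in, not an mlx5 symbol. It uses the same GCC __atomic builtins and
<sys/queue.h> LIST macros as the patch.

/* sketch.c: gcc -Wall -O2 sketch.c && ./a.out */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

#define NUM_LCORES 4 /* illustrative stand-in for RTE_MAX_LCORE */

struct cache_entry {
	LIST_ENTRY(cache_entry) next;
	uint32_t ref_cnt;   /* 0 means the entry is invalid. */
	uint32_t lcore_idx; /* lcore whose cache list owns this entry. */
};

struct lcore_cache {
	LIST_HEAD(, cache_entry) h;
	uint32_t inv_cnt; /* Entries invalidated here by other lcores. */
};

static struct lcore_cache caches[NUM_LCORES];

/*
 * Drop one reference from any lcore. Only the owning lcore unlinks and
 * frees immediately; a foreign lcore leaves the dead entry in place and
 * bumps the owner's invalidation counter instead.
 */
static void
entry_release(struct cache_entry *entry, uint32_t my_lcore)
{
	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
		return; /* Still referenced somewhere. */
	if (entry->lcore_idx == my_lcore) {
		LIST_REMOVE(entry, next);
		free(entry);
	} else {
		__atomic_add_fetch(&caches[entry->lcore_idx].inv_cnt, 1,
				   __ATOMIC_RELAXED);
	}
}

/*
 * "Step 0" of a register operation: the running lcore reaps the entries
 * that other lcores invalidated in its cache since the last call.
 */
static void
cache_clean(uint32_t my_lcore)
{
	struct lcore_cache *c = &caches[my_lcore];
	uint32_t inv = __atomic_exchange_n(&c->inv_cnt, 0, __ATOMIC_RELAXED);
	struct cache_entry *entry = LIST_FIRST(&c->h);

	while (inv != 0 && entry != NULL) {
		struct cache_entry *nentry = LIST_NEXT(entry, next);

		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
			LIST_REMOVE(entry, next);
			free(entry);
			inv--;
		}
		entry = nentry;
	}
}

int
main(void)
{
	struct cache_entry *e = calloc(1, sizeof(*e));

	/* Entry allocated by lcore 0 into its own cache. */
	e->ref_cnt = 1;
	e->lcore_idx = 0;
	LIST_INSERT_HEAD(&caches[0].h, e, next);
	/* Released by "lcore 1": the free is deferred to the owner. */
	entry_release(e, 1);
	printf("after foreign release: inv_cnt=%u\n", caches[0].inv_cnt);
	/* Lcore 0 reclaims lazily before its next lookup. */
	cache_clean(0);
	printf("after clean: empty=%d\n", LIST_EMPTY(&caches[0].h));
	return 0;
}

The point the sketch illustrates: a foreign lcore never unlinks an entry
from a cache list it does not own; it only publishes that the entry died
(inv_cnt), and the owning lcore reclaims it at the start of its next
register operation, so each cache list is modified by its owning lcore only.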