From patchwork Wed Jun 30 12:45:56 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95063
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Wed, 30 Jun 2021 15:45:56 +0300
Message-ID: <20210630124609.8711-10-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210630124609.8711-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
 <20210630124609.8711-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 09/22] net/mlx5: manage list cache entries release
List-Id: DPDK patches and discussions
From: Matan Azrad

When a cache entry is allocated by lcore A and released by lcore B, the
driver must synchronize access to the cache list of lcore A. The design
decision is to maintain a counter per lcore cache that is increased
atomically when a non-original lcore decreases the reference count of a
cache entry to 0. In the list register operation, before the running
lcore starts a lookup in its own cache, it checks this counter and frees
the invalidated entries in its cache.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_utils.c | 79 +++++++++++++++++++++++------------
 drivers/net/mlx5/mlx5_utils.h |  2 +
 2 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 772b352af5..7cdf44dcf7 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -47,36 +47,25 @@ __list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse)
 	uint32_t ret;
 
 	while (entry != NULL) {
-		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
-
-		if (list->cb_match(list, entry, ctx)) {
-			if (lcore_index < RTE_MAX_LCORE) {
+		if (list->cb_match(list, entry, ctx) == 0) {
+			if (reuse) {
+				ret = __atomic_add_fetch(&entry->ref_cnt, 1,
+							 __ATOMIC_ACQUIRE) - 1;
+				DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
+					list->name, (void *)entry,
+					entry->ref_cnt);
+			} else if (lcore_index < RTE_MAX_LCORE) {
 				ret = __atomic_load_n(&entry->ref_cnt,
 						      __ATOMIC_ACQUIRE);
-				if (ret == 0) {
-					LIST_REMOVE(entry, next);
-					list->cb_clone_free(list, entry);
-				}
-			}
-			entry = nentry;
-			continue;
-		}
-		if (reuse) {
-			ret = __atomic_add_fetch(&entry->ref_cnt, 1,
-						 __ATOMIC_ACQUIRE);
-			if (ret == 1u) {
-				/* Entry was invalid before, free it. */
-				LIST_REMOVE(entry, next);
-				list->cb_clone_free(list, entry);
-				entry = nentry;
-				continue;
 			}
-			DRV_LOG(DEBUG, "mlx5 list %s entry %p ref++: %u.",
-				list->name, (void *)entry, entry->ref_cnt);
+			if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
+				return entry;
+			if (reuse && ret == 0)
+				entry->ref_cnt--; /* Invalid entry. */
 		}
-		break;
+		entry = LIST_NEXT(entry, next);
 	}
-	return entry;
+	return NULL;
 }
 
 struct mlx5_list_entry *
@@ -105,10 +94,31 @@ mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index,
 		return NULL;
 	lentry->ref_cnt = 1u;
 	lentry->gentry = gentry;
+	lentry->lcore_idx = (uint32_t)lcore_index;
 	LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next);
 	return lentry;
 }
 
+static void
+__list_cache_clean(struct mlx5_list *list, int lcore_index)
+{
+	struct mlx5_list_cache *c = &list->cache[lcore_index];
+	struct mlx5_list_entry *entry = LIST_FIRST(&c->h);
+	uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0,
+					       __ATOMIC_RELAXED);
+
+	while (inv_cnt != 0 && entry != NULL) {
+		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
+
+		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
+			LIST_REMOVE(entry, next);
+			list->cb_clone_free(list, entry);
+			inv_cnt--;
+		}
+		entry = nentry;
+	}
+}
+
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
@@ -122,6 +132,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 		rte_errno = ENOTSUP;
 		return NULL;
 	}
+	/* 0. Free entries that were invalidated by other lcores. */
+	__list_cache_clean(list, lcore_index);
 	/* 1. Lookup in local cache. */
 	local_entry = __list_lookup(list, lcore_index, ctx, true);
 	if (local_entry)
@@ -147,6 +159,7 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 	entry->ref_cnt = 1u;
 	local_entry->ref_cnt = 1u;
 	local_entry->gentry = entry;
+	local_entry->lcore_idx = (uint32_t)lcore_index;
 	rte_rwlock_write_lock(&list->lock);
 	/* 4. Make sure the same entry was not created before the write lock. */
 	if (unlikely(prev_gen_cnt != list->gen_cnt)) {
@@ -169,8 +182,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 	rte_rwlock_write_unlock(&list->lock);
 	LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next);
 	__atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE);
-	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.",
-		list->name, (void *)entry, entry->ref_cnt);
+	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name,
+		(void *)entry, entry->ref_cnt);
 	return local_entry;
 }
 
@@ -179,9 +192,21 @@ mlx5_list_unregister(struct mlx5_list *list,
 		      struct mlx5_list_entry *entry)
 {
 	struct mlx5_list_entry *gentry = entry->gentry;
+	int lcore_idx;
 
 	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
 		return 1;
+	lcore_idx = rte_lcore_index(rte_lcore_id());
+	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
+	if (entry->lcore_idx == (uint32_t)lcore_idx) {
+		LIST_REMOVE(entry, next);
+		list->cb_clone_free(list, entry);
+	} else if (likely(lcore_idx != -1)) {
+		__atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1,
+				   __ATOMIC_RELAXED);
+	} else {
+		return 0;
+	}
 	if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
 		return 1;
 	rte_rwlock_write_lock(&list->lock);
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 684d1e8a2a..ffa9cd5142 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -310,11 +310,13 @@ struct mlx5_list;
 struct mlx5_list_entry {
 	LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */
 	uint32_t ref_cnt; /* 0 means, entry is invalid. */
+	uint32_t lcore_idx;
 	struct mlx5_list_entry *gentry;
 };
 
 struct mlx5_list_cache {
 	LIST_HEAD(mlx5_list_head, mlx5_list_entry) h;
+	uint32_t inv_cnt; /* Invalid entries counter. */
 } __rte_cache_aligned;
 
 /**
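
For readers outside the mlx5 tree, the deferred-free scheme in this patch
can be shown in isolation. Below is a minimal standalone sketch, not the
driver code: the names (entry_t, cache_t, caches, entry_release,
cache_clean, MAX_LCORE) are hypothetical, C11 atomics stand in for the
GCC __atomic builtins the driver uses, and it assumes each entry is
linked in its owner lcore's cache.

/*
 * Sketch: one cache list per lcore, plus a per-cache counter of entries
 * invalidated by other lcores.
 */
#include <stdatomic.h>
#include <stdlib.h>

#define MAX_LCORE 8 /* Hypothetical stand-in for RTE_MAX_LCORE. */

typedef struct entry {
	struct entry *next;
	atomic_uint ref_cnt;    /* 0 means the entry is invalid. */
	unsigned int lcore_idx; /* Index of the owning lcore cache. */
} entry_t;

typedef struct cache {
	entry_t *head;          /* Links written only by the owning lcore. */
	atomic_uint inv_cnt;    /* Entries invalidated by other lcores. */
} cache_t;

static cache_t caches[MAX_LCORE];

/* Drop one reference; any lcore may call this. */
static void
entry_release(entry_t *e, unsigned int my_lcore)
{
	if (atomic_fetch_sub(&e->ref_cnt, 1) != 1)
		return; /* Someone still references the entry. */
	if (e->lcore_idx == my_lcore) {
		/* Owner lcore: safe to unlink and free immediately. */
		entry_t **p = &caches[my_lcore].head;

		while (*p != e)
			p = &(*p)->next;
		*p = e->next;
		free(e);
	} else {
		/* Foreign lcore: never touch the owner's links, only
		 * publish that one more dead entry awaits cleanup. */
		atomic_fetch_add(&caches[e->lcore_idx].inv_cnt, 1);
	}
}

/* Owner lcore calls this before looking up in its own cache. */
static void
cache_clean(unsigned int my_lcore)
{
	cache_t *c = &caches[my_lcore];
	unsigned int inv = atomic_exchange(&c->inv_cnt, 0);
	entry_t **p = &c->head;

	while (inv != 0 && *p != NULL) {
		entry_t *e = *p;

		if (atomic_load(&e->ref_cnt) == 0) {
			*p = e->next; /* Unlink and free the dead entry. */
			free(e);
			inv--;
		} else {
			p = &e->next;
		}
	}
}

The design point is that a foreign lcore never modifies another lcore's
list links; it only publishes a count of dead entries, so every per-lcore
list keeps a single writer and needs no lock of its own.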