From patchwork Tue Jul 6 13:32:45 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95401
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Tue, 6 Jul 2021 16:32:45 +0300
Message-ID: <20210706133257.3353-15-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210706133257.3353-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
 <20210706133257.3353-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 14/26] common/mlx5: add list lcore share
List-Id: DPDK patches and discussions

Some SW-steering actions exist only in memory, so duplicate objects can
be allowed for them. For such lists there is no need to check whether
the same object already exists in the other per-lcore sub-lists;
searching only the local list is more efficient. This commit adds an
lcore share mode to the list to optimize list registration.
Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common_utils.c | 46 +++++++++++++++++++------
 drivers/common/mlx5/mlx5_common_utils.h | 16 ++++++---
 drivers/net/mlx5/linux/mlx5_os.c        | 11 +++---
 drivers/net/mlx5/mlx5_flow_dv.c         |  2 +-
 drivers/net/mlx5/windows/mlx5_os.c      |  2 +-
 5 files changed, 55 insertions(+), 22 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index 8bb8a6016d..6ac78ba97f 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -14,7 +14,7 @@
 /********************* mlx5 list ************************/

 struct mlx5_list *
-mlx5_list_create(const char *name, void *ctx,
+mlx5_list_create(const char *name, void *ctx, bool lcores_share,
		 mlx5_list_create_cb cb_create,
		 mlx5_list_match_cb cb_match,
		 mlx5_list_remove_cb cb_remove,
@@ -35,6 +35,7 @@ mlx5_list_create(const char *name, void *ctx,
	if (name)
		snprintf(list->name, sizeof(list->name), "%s", name);
	list->ctx = ctx;
+	list->lcores_share = lcores_share;
	list->cb_create = cb_create;
	list->cb_match = cb_match;
	list->cb_remove = cb_remove;
@@ -119,7 +120,10 @@ __list_cache_clean(struct mlx5_list *list, int lcore_index)
		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
			LIST_REMOVE(entry, next);
-			list->cb_clone_free(list, entry);
+			if (list->lcores_share)
+				list->cb_clone_free(list, entry);
+			else
+				list->cb_remove(list, entry);
			inv_cnt--;
		}
		entry = nentry;
@@ -145,25 +149,36 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
	local_entry = __list_lookup(list, lcore_index, ctx, true);
	if (local_entry)
		return local_entry;
-	/* 2. Lookup with read lock on global list, reuse if found. */
-	rte_rwlock_read_lock(&list->lock);
-	entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true);
-	if (likely(entry)) {
+	if (list->lcores_share) {
+		/* 2. Lookup with read lock on global list, reuse if found. */
+		rte_rwlock_read_lock(&list->lock);
+		entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true);
+		if (likely(entry)) {
+			rte_rwlock_read_unlock(&list->lock);
+			return mlx5_list_cache_insert(list, lcore_index, entry,
+						      ctx);
+		}
+		prev_gen_cnt = list->gen_cnt;
		rte_rwlock_read_unlock(&list->lock);
-		return mlx5_list_cache_insert(list, lcore_index, entry, ctx);
	}
-	prev_gen_cnt = list->gen_cnt;
-	rte_rwlock_read_unlock(&list->lock);
	/* 3. Prepare new entry for global list and for cache. */
	entry = list->cb_create(list, entry, ctx);
	if (unlikely(!entry))
		return NULL;
+	entry->ref_cnt = 1u;
+	if (!list->lcores_share) {
+		entry->lcore_idx = (uint32_t)lcore_index;
+		LIST_INSERT_HEAD(&list->cache[lcore_index].h, entry, next);
+		__atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED);
+		DRV_LOG(DEBUG, "MLX5 list %s c%d entry %p new: %u.",
+			list->name, lcore_index, (void *)entry, entry->ref_cnt);
+		return entry;
+	}
	local_entry = list->cb_clone(list, entry, ctx);
	if (unlikely(!local_entry)) {
		list->cb_remove(list, entry);
		return NULL;
	}
-	entry->ref_cnt = 1u;
	local_entry->ref_cnt = 1u;
	local_entry->gentry = entry;
	local_entry->lcore_idx = (uint32_t)lcore_index;
@@ -207,13 +222,22 @@ mlx5_list_unregister(struct mlx5_list *list,
	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
	if (entry->lcore_idx == (uint32_t)lcore_idx) {
		LIST_REMOVE(entry, next);
-		list->cb_clone_free(list, entry);
+		if (list->lcores_share)
+			list->cb_clone_free(list, entry);
+		else
+			list->cb_remove(list, entry);
	} else if (likely(lcore_idx != -1)) {
		__atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1,
				   __ATOMIC_RELAXED);
	} else {
		return 0;
	}
+	if (!list->lcores_share) {
+		__atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED);
+		DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.",
+			list->name, (void *)entry);
+		return 0;
+	}
	if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
		return 1;
	rte_rwlock_write_lock(&list->lock);
diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h
index 96add6d003..000279d236 100644
--- a/drivers/common/mlx5/mlx5_common_utils.h
+++ b/drivers/common/mlx5/mlx5_common_utils.h
@@ -100,11 +100,8 @@ typedef struct mlx5_list_entry *(*mlx5_list_create_cb)
 */
 struct mlx5_list {
	char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */
-	volatile uint32_t gen_cnt;
-	/* List modification will update generation count. */
-	volatile uint32_t count; /* number of entries in list. */
	void *ctx; /* user objects target to callback. */
-	rte_rwlock_t lock; /* read/write lock. */
+	bool lcores_share; /* Whether to share objects between the lcores. */
	mlx5_list_create_cb cb_create; /**< entry create callback. */
	mlx5_list_match_cb cb_match; /**< entry match callback. */
	mlx5_list_remove_cb cb_remove; /**< entry remove callback. */
@@ -112,17 +109,27 @@ struct mlx5_list {
	mlx5_list_clone_free_cb cb_clone_free;
	struct mlx5_list_cache cache[RTE_MAX_LCORE + 1];
	/* Lcore cache, last index is the global cache. */
+	volatile uint32_t gen_cnt; /* List modification may update it. */
+	volatile uint32_t count; /* number of entries in list. */
+	rte_rwlock_t lock; /* read/write lock. */
 };

 /**
 * Create a mlx5 list.
 *
+ * For SW-steering actions which reside in memory only, duplicate
+ * objects are allowed, so such lists need not check whether the
+ * same object already exists in the other lcore sub-lists;
+ * searching only the local list is more efficient.
+ *
 * @param list
 *   Pointer to the hast list table.
 * @param name
 *   Name of the mlx5 list.
 * @param ctx
 *   Pointer to the list context data.
+ * @param lcores_share
+ *   Whether to share objects between the lcores.
 * @param cb_create
 *   Callback function for entry create.
 * @param cb_match
@@ -134,6 +141,7 @@ struct mlx5_list {
 */
 __rte_internal
 struct mlx5_list *mlx5_list_create(const char *name, void *ctx,
+				   bool lcores_share,
				   mlx5_list_create_cb cb_create,
				   mlx5_list_match_cb cb_match,
				   mlx5_list_remove_cb cb_remove,
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index da4d2fdadc..ced88f5394 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -274,7 +274,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
	/* Init port id action list. */
	snprintf(s, sizeof(s), "%s_port_id_action_list", sh->ibdev_name);
-	sh->port_id_action_list = mlx5_list_create(s, sh,
+	sh->port_id_action_list = mlx5_list_create(s, sh, true,
						   flow_dv_port_id_create_cb,
						   flow_dv_port_id_match_cb,
						   flow_dv_port_id_remove_cb,
@@ -284,7 +284,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
		goto error;
	/* Init push vlan action list. */
	snprintf(s, sizeof(s), "%s_push_vlan_action_list", sh->ibdev_name);
-	sh->push_vlan_action_list = mlx5_list_create(s, sh,
+	sh->push_vlan_action_list = mlx5_list_create(s, sh, true,
						     flow_dv_push_vlan_create_cb,
						     flow_dv_push_vlan_match_cb,
						     flow_dv_push_vlan_remove_cb,
@@ -294,7 +294,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
		goto error;
	/* Init sample action list. */
	snprintf(s, sizeof(s), "%s_sample_action_list", sh->ibdev_name);
-	sh->sample_action_list = mlx5_list_create(s, sh,
+	sh->sample_action_list = mlx5_list_create(s, sh, true,
						  flow_dv_sample_create_cb,
						  flow_dv_sample_match_cb,
						  flow_dv_sample_remove_cb,
@@ -304,7 +304,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
		goto error;
	/* Init dest array action list. */
	snprintf(s, sizeof(s), "%s_dest_array_list", sh->ibdev_name);
-	sh->dest_array_list = mlx5_list_create(s, sh,
+	sh->dest_array_list = mlx5_list_create(s, sh, true,
					       flow_dv_dest_array_create_cb,
					       flow_dv_dest_array_match_cb,
					       flow_dv_dest_array_remove_cb,
@@ -1750,7 +1750,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
		err = ENOTSUP;
		goto error;
	}
-	priv->hrxqs = mlx5_list_create("hrxq", eth_dev, mlx5_hrxq_create_cb,
+	priv->hrxqs = mlx5_list_create("hrxq", eth_dev, true,
+				       mlx5_hrxq_create_cb,
				       mlx5_hrxq_match_cb,
				       mlx5_hrxq_remove_cb,
				       mlx5_hrxq_clone_cb,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 982056cb41..4f7cdb0622 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10047,7 +10047,7 @@ flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx)
	MKSTR(matcher_name, "%s_%s_%u_%u_matcher_list",
	      key.is_fdb ? "FDB" : "NIC", key.is_egress ? "egress" : "ingress",
	      key.level, key.id);
-	tbl_data->matchers = mlx5_list_create(matcher_name, sh,
+	tbl_data->matchers = mlx5_list_create(matcher_name, sh, true,
					      flow_dv_matcher_create_cb,
					      flow_dv_matcher_match_cb,
					      flow_dv_matcher_remove_cb,
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index e6176e70d2..a04f93e1d4 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -610,7 +610,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
		err = ENOTSUP;
		goto error;
	}
-	priv->hrxqs = mlx5_list_create("hrxq", eth_dev,
+	priv->hrxqs = mlx5_list_create("hrxq", eth_dev, true,
				       mlx5_hrxq_create_cb, mlx5_hrxq_match_cb,
				       mlx5_hrxq_remove_cb, mlx5_hrxq_clone_cb,
				       mlx5_hrxq_clone_free_cb);
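[Editor's illustration] The non-shared fast path this patch adds can be sketched in plain C. This is a minimal standalone model, not the mlx5 code: the names (`sketch_list`, `sketch_entry`, `sketch_register`, `sketch_unregister`) are invented for illustration, and the rwlock, clone callbacks, and RTE atomics are deliberately elided. It shows the key property: with `lcores_share == false`, registration touches only the caller's per-lcore cache, so two lcores may hold duplicate objects and no global lookup or lock is needed.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define SKETCH_MAX_LCORE 4 /* stands in for RTE_MAX_LCORE */

struct sketch_entry {
	struct sketch_entry *next;
	uint32_t ref_cnt;
	int key; /* stands in for the match context (ctx) */
};

struct sketch_list {
	bool lcores_share;
	uint32_t count; /* total entries, mirrors list->count */
	/* Last slot is the global cache, like cache[RTE_MAX_LCORE]. */
	struct sketch_entry *cache[SKETCH_MAX_LCORE + 1];
};

/* Look up a key in one cache sub-list, taking a reference on hit. */
static struct sketch_entry *
sketch_lookup(struct sketch_list *l, int idx, int key)
{
	struct sketch_entry *e;

	for (e = l->cache[idx]; e != NULL; e = e->next) {
		if (e->key == key) {
			e->ref_cnt++;
			return e;
		}
	}
	return NULL;
}

/* Register: local lookup first; only consult the global cache when
 * the list shares objects between lcores. (The real code also takes
 * a read lock and inserts a per-lcore clone on a global hit.) */
static struct sketch_entry *
sketch_register(struct sketch_list *l, int lcore, int key)
{
	struct sketch_entry *e = sketch_lookup(l, lcore, key);

	if (e != NULL)
		return e;
	if (l->lcores_share) {
		e = sketch_lookup(l, SKETCH_MAX_LCORE, key);
		if (e != NULL)
			return e; /* simplified: no local clone here */
	}
	e = calloc(1, sizeof(*e));
	if (e == NULL)
		return NULL;
	e->key = key;
	e->ref_cnt = 1;
	e->next = l->cache[lcore];
	l->cache[lcore] = e; /* non-shared: local insert only */
	l->count++;
	return e;
}

/* Unregister: on a non-shared list the entry lives in exactly one
 * lcore cache, so removal is purely local. Returns 1 while the
 * entry is still referenced, 0 once it is freed. */
static int
sketch_unregister(struct sketch_list *l, int lcore, struct sketch_entry *e)
{
	struct sketch_entry **p;

	if (--e->ref_cnt != 0)
		return 1;
	for (p = &l->cache[lcore]; *p != NULL; p = &(*p)->next) {
		if (*p == e) {
			*p = e->next;
			break;
		}
	}
	free(e);
	l->count--;
	return 0;
}
```

On a non-shared list, two lcores registering the same key end up with two distinct entries, which is exactly the duplication the commit message says is acceptable for memory-only SW-steering actions; the payoff is that neither registration crosses the global lock.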