From patchwork Tue Oct 6 11:48:44 2020 X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79756 X-Patchwork-Delegate: rasland@nvidia.com From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org Date: Tue, 6 Oct 2020 19:48:44 +0800 Message-Id: <1601984948-313027-2-git-send-email-suanmingm@nvidia.com> In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 01/25] net/mlx5: use thread safe index pool for flow objects As the mlx5 PMD is being made thread safe, all the flow-related sub-objects inside the PMD must be thread safe as well. This commit enables the lock in the index memory pools' configuration, which makes the index pools thread safe.
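For orientation, every hunk in this patch flips the same flag in the static table of index pool configurations in mlx5.c; after the change, one entry of that table looks like the sketch below. The .size and .type values are illustrative placeholders for whichever flow sub-object the entry describes, not values taken from a specific hunk:

    /* Sketch: one index pool configuration with locking enabled.
     * .size and .type are placeholders; the real table in mlx5.c has
     * one such entry per flow sub-object. Only .need_lock changes in
     * this patch (0 -> 1), so mlx5_ipool_malloc()/mlx5_ipool_free()
     * may be called concurrently from several threads.
     */
    static const struct mlx5_indexed_pool_config example_cfg = {
            .size = sizeof(struct mlx5_flow_handle), /* placeholder */
            .trunk_size = 64,
            .grow_trunk = 3,
            .grow_shift = 2,
            .need_lock = 1, /* was 0 */
            .release_mem_en = 1,
            .malloc = mlx5_malloc,
            .free = mlx5_free,
            .type = "example_ipool", /* placeholder */
    };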
Signed-off-by: Suanming Mou --- drivers/net/mlx5/mlx5.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 01ead6e..c9fc085 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -191,7 +191,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .trunk_size = 64, .grow_trunk = 3, .grow_shift = 2, - .need_lock = 0, + .need_lock = 1, .release_mem_en = 1, .malloc = mlx5_malloc, .free = mlx5_free, @@ -202,7 +202,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .trunk_size = 64, .grow_trunk = 3, .grow_shift = 2, - .need_lock = 0, + .need_lock = 1, .release_mem_en = 1, .malloc = mlx5_malloc, .free = mlx5_free, @@ -213,7 +213,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .trunk_size = 64, .grow_trunk = 3, .grow_shift = 2, - .need_lock = 0, + .need_lock = 1, .release_mem_en = 1, .malloc = mlx5_malloc, .free = mlx5_free, @@ -224,7 +224,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .trunk_size = 64, .grow_trunk = 3, .grow_shift = 2, - .need_lock = 0, + .need_lock = 1, .release_mem_en = 1, .malloc = mlx5_malloc, .free = mlx5_free, @@ -235,7 +235,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .trunk_size = 64, .grow_trunk = 3, .grow_shift = 2, - .need_lock = 0, + .need_lock = 1, .release_mem_en = 1, .malloc = mlx5_malloc, .free = mlx5_free, @@ -247,7 +247,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .trunk_size = 64, .grow_trunk = 3, .grow_shift = 2, - .need_lock = 0, + .need_lock = 1, .release_mem_en = 1, .malloc = mlx5_malloc, .free = mlx5_free, @@ -258,7 +258,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .trunk_size = 64, .grow_trunk = 3, .grow_shift = 2, - .need_lock = 0, + .need_lock = 1, .release_mem_en = 1, .malloc = mlx5_malloc, .free = mlx5_free, @@ -269,7 +269,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .trunk_size = 64, .grow_trunk = 3, .grow_shift = 2, - .need_lock = 0, + .need_lock = 1, .release_mem_en = 1, .malloc = mlx5_malloc, .free = mlx5_free, @@ -284,7 +284,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .trunk_size = 64, .grow_trunk = 3, .grow_shift = 2, - .need_lock = 0, + .need_lock = 1, .release_mem_en = 1, .malloc = mlx5_malloc, .free = mlx5_free, From patchwork Tue Oct 6 11:48:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79754 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5B42DA04BB; Tue, 6 Oct 2020 13:49:23 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 86F6C1C01; Tue, 6 Oct 2020 13:49:21 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id D31B51023 for ; Tue, 6 Oct 2020 13:49:19 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:17 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0M028553; Tue, 6 Oct 2020 14:49:16 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 
19:48:45 +0800 Message-Id: <1601984948-313027-3-git-send-email-suanmingm@nvidia.com> In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 02/25] net/mlx5: use thread specific flow context From: Xueming Li As part of multi-thread flow support, this patch moves the flow intermediate data into a thread-specific flow workspace context. The context is allocated per thread and destroyed along with the thread life-cycle. Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_os.c | 5 -- drivers/net/mlx5/mlx5.c | 2 - drivers/net/mlx5/mlx5.h | 6 -- drivers/net/mlx5/mlx5_flow.c | 134 +++++++++++++++++++++++++------------ drivers/net/mlx5/mlx5_flow.h | 15 ++++- drivers/net/mlx5/mlx5_flow_dv.c | 26 ++++--- drivers/net/mlx5/mlx5_flow_verbs.c | 24 ++++--- 7 files changed, 133 insertions(+), 79 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 188a6d4..4276964 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1330,11 +1330,6 @@ err = ENOTSUP; goto error; } - /* - * Allocate the buffer for flow creating, just once. - * The allocation must be done before any flow creating. - */ - mlx5_flow_alloc_intermediate(eth_dev); /* Query availability of metadata reg_c's. */ err = mlx5_flow_discover_mreg_c(eth_dev); if (err < 0) { diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index c9fc085..16719e6 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1375,8 +1375,6 @@ struct mlx5_dev_ctx_shared * */ mlx5_flow_list_flush(dev, &priv->flows, true); mlx5_flow_meter_flush(dev, NULL); - /* Free the intermediate buffers for flow creation. */ - mlx5_flow_free_intermediate(dev); /* Prevent crashes when queues are still in use. */ dev->rx_pkt_burst = removed_rx_burst; dev->tx_pkt_burst = removed_tx_burst; diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index bd91e16..0080ac8 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -793,10 +793,6 @@ struct mlx5_priv { struct mlx5_drop drop_queue; /* Flow drop queues. */ uint32_t flows; /* RTE Flow rules. */ uint32_t ctrl_flows; /* Control flow rules. */ - void *inter_flows; /* Intermediate resources for flow creation. */ - void *rss_desc; /* Intermediate rss description resources. */ - int flow_idx; /* Intermediate device flow index. */ - int flow_nested_idx; /* Intermediate device flow index, nested. */ struct mlx5_obj_ops obj_ops; /* HW objects operations. */ LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */ LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues.
*/ @@ -1020,8 +1016,6 @@ int mlx5_dev_filter_ctrl(struct rte_eth_dev *dev, void mlx5_flow_stop(struct rte_eth_dev *dev, uint32_t *list); int mlx5_flow_start_default(struct rte_eth_dev *dev); void mlx5_flow_stop_default(struct rte_eth_dev *dev); -void mlx5_flow_alloc_intermediate(struct rte_eth_dev *dev); -void mlx5_flow_free_intermediate(struct rte_eth_dev *dev); int mlx5_flow_verify(struct rte_eth_dev *dev); int mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, uint32_t queue); int mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev, diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index ffa7646..eeee546 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -306,6 +306,13 @@ struct mlx5_flow_tunnel_info { }, }; +/* Key of thread specific flow workspace data. */ +static pthread_key_t key_workspace; + +/* Thread specific flow workspace data once initialization data. */ +static pthread_once_t key_workspace_init; + + /** * Translate tag ID to register. * @@ -4348,16 +4355,18 @@ struct mlx5_flow_tunnel_info { uint8_t buffer[2048]; } items_tx; struct rte_flow_expand_rss *buf = &expand_buffer.buf; - struct mlx5_flow_rss_desc *rss_desc = &((struct mlx5_flow_rss_desc *) - priv->rss_desc)[!!priv->flow_idx]; + struct mlx5_flow_rss_desc *rss_desc; const struct rte_flow_action *p_actions_rx = actions; uint32_t i; uint32_t idx = 0; int hairpin_flow; uint32_t hairpin_id = 0; struct rte_flow_attr attr_tx = { .priority = 0 }; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); int ret; + MLX5_ASSERT(wks); + rss_desc = &wks->rss_desc[!!wks->flow_idx]; hairpin_flow = flow_check_hairpin_split(dev, attr, actions); ret = flow_drv_validate(dev, attr, items, p_actions_rx, external, hairpin_flow, error); @@ -4383,9 +4392,25 @@ struct mlx5_flow_tunnel_info { flow->hairpin_flow_id = hairpin_id; MLX5_ASSERT(flow->drv_type > MLX5_FLOW_TYPE_MIN && flow->drv_type < MLX5_FLOW_TYPE_MAX); - memset(rss_desc, 0, sizeof(*rss_desc)); + memset(rss_desc, 0, offsetof(struct mlx5_flow_rss_desc, queue)); rss = flow_get_rss_action(p_actions_rx); if (rss) { + /* Check if need more memory for the queue. */ + if (rss->queue_num > wks->rssq_num[!!wks->flow_idx]) { + /* Default memory is from workspace. No need to free. */ + if (wks->rssq_num[!!wks->flow_idx] == + MLX5_RSSQ_DEFAULT_NUM) + rss_desc->queue = NULL; + rss_desc->queue = mlx5_realloc(rss_desc->queue, + MLX5_MEM_ZERO, + sizeof(rss_desc->queue[0]) * rss->queue_num * 2, + 0, SOCKET_ID_ANY); + if (!rss_desc->queue) { + rte_errno = EINVAL; + return 0; + } + wks->rssq_num[!!wks->flow_idx] = rss->queue_num * 2; + } /* * The following information is required by * mlx5_flow_hashfields_adjust() in advance. @@ -4414,9 +4439,9 @@ struct mlx5_flow_tunnel_info { * need to be translated before another calling. * No need to use ping-pong buffer to save memory here. */ - if (priv->flow_idx) { - MLX5_ASSERT(!priv->flow_nested_idx); - priv->flow_nested_idx = priv->flow_idx; + if (wks->flow_idx) { + MLX5_ASSERT(!wks->flow_nested_idx); + wks->flow_nested_idx = wks->flow_idx; } for (i = 0; i < buf->entries; ++i) { /* @@ -4481,9 +4506,9 @@ struct mlx5_flow_tunnel_info { flow, next); flow_rxq_flags_set(dev, flow); /* Nested flow creation index recovery. 
*/ - priv->flow_idx = priv->flow_nested_idx; - if (priv->flow_nested_idx) - priv->flow_nested_idx = 0; + wks->flow_idx = wks->flow_nested_idx; + if (wks->flow_nested_idx) + wks->flow_nested_idx = 0; return idx; error: MLX5_ASSERT(flow); @@ -4498,9 +4523,9 @@ struct mlx5_flow_tunnel_info { mlx5_flow_id_release(priv->sh->flow_id_pool, hairpin_id); rte_errno = ret; - priv->flow_idx = priv->flow_nested_idx; - if (priv->flow_nested_idx) - priv->flow_nested_idx = 0; + wks->flow_idx = wks->flow_nested_idx; + if (wks->flow_nested_idx) + wks->flow_nested_idx = 0; return 0; } @@ -4782,48 +4807,69 @@ struct rte_flow * } /** - * Allocate intermediate resources for flow creation. - * - * @param dev - * Pointer to Ethernet device. + * Release key of thread specific flow workspace data. */ -void -mlx5_flow_alloc_intermediate(struct rte_eth_dev *dev) +static void +flow_release_workspace(void *data) { - struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_workspace *wks = data; - if (!priv->inter_flows) { - priv->inter_flows = mlx5_malloc(MLX5_MEM_ZERO, - MLX5_NUM_MAX_DEV_FLOWS * - sizeof(struct mlx5_flow) + - (sizeof(struct mlx5_flow_rss_desc) + - sizeof(uint16_t) * UINT16_MAX) * 2, 0, - SOCKET_ID_ANY); - if (!priv->inter_flows) { - DRV_LOG(ERR, "can't allocate intermediate memory."); - return; - } - } - priv->rss_desc = &((struct mlx5_flow *)priv->inter_flows) - [MLX5_NUM_MAX_DEV_FLOWS]; - /* Reset the index. */ - priv->flow_idx = 0; - priv->flow_nested_idx = 0; + if (!wks) + return; + if (wks->rssq_num[0] == MLX5_RSSQ_DEFAULT_NUM) + mlx5_free(wks->rss_desc[0].queue); + if (wks->rssq_num[1] == MLX5_RSSQ_DEFAULT_NUM) + mlx5_free(wks->rss_desc[1].queue); + mlx5_free(wks); + return; } /** - * Free intermediate resources for flows. + * Initialize key of thread specific flow workspace data. + */ +static void +flow_alloc_workspace(void) +{ + if (pthread_key_create(&key_workspace, flow_release_workspace)) + DRV_LOG(ERR, "can't create flow workspace data thread key."); +} + +/** + * Get thread specific flow workspace. * - * @param dev - * Pointer to Ethernet device. + * @return pointer to thread specific flowworkspace data, NULL on error. */ -void -mlx5_flow_free_intermediate(struct rte_eth_dev *dev) +struct mlx5_flow_workspace* +mlx5_flow_get_thread_workspace(void) { - struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_workspace *data; + + if (pthread_once(&key_workspace_init, flow_alloc_workspace)) { + DRV_LOG(ERR, "failed to init flow workspace data thread key."); + return NULL; + } - mlx5_free(priv->inter_flows); - priv->inter_flows = NULL; + data = pthread_getspecific(key_workspace); + if (!data) { + data = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*data) + + sizeof(uint16_t) * MLX5_RSSQ_DEFAULT_NUM * 2, + 0, SOCKET_ID_ANY); + if (!data) { + DRV_LOG(ERR, "failed to allocate flow workspace " + "memory."); + return NULL; + } + data->rss_desc[0].queue = (uint16_t *)(data + 1); + data->rss_desc[1].queue = data->rss_desc[0].queue + + MLX5_RSSQ_DEFAULT_NUM; + data->rssq_num[0] = MLX5_RSSQ_DEFAULT_NUM; + data->rssq_num[1] = MLX5_RSSQ_DEFAULT_NUM; + if (pthread_setspecific(key_workspace, data)) { + DRV_LOG(ERR, "failed to set flow workspace to thread."); + return NULL; + } + } + return data; } /** diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 279daf2..2685481 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -73,6 +73,9 @@ enum mlx5_feature_name { MLX5_MTR_SFX, }; +/* Default queue number. 
*/ +#define MLX5_RSSQ_DEFAULT_NUM 16 + /* Pattern outer Layer bits. */ #define MLX5_FLOW_LAYER_OUTER_L2 (1u << 0) #define MLX5_FLOW_LAYER_OUTER_L3_IPV4 (1u << 1) @@ -531,7 +534,7 @@ struct mlx5_flow_rss_desc { uint32_t queue_num; /**< Number of entries in @p queue. */ uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */ uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */ - uint16_t queue[]; /**< Destination queues to redirect traffic to. */ + uint16_t *queue; /**< Destination queues. */ }; /* PMD flow priority for tunnel */ @@ -856,6 +859,15 @@ struct rte_flow { uint16_t meter; /**< Holds flow meter id. */ } __rte_packed; +/* Thread specific flow workspace intermediate data. */ +struct mlx5_flow_workspace { + struct mlx5_flow flows[MLX5_NUM_MAX_DEV_FLOWS]; + struct mlx5_flow_rss_desc rss_desc[2]; + uint32_t rssq_num[2]; /* Allocated queue num in rss_desc. */ + int flow_idx; /* Intermediate device flow index. */ + int flow_nested_idx; /* Intermediate device flow index, nested. */ +}; + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], @@ -930,6 +942,7 @@ struct mlx5_flow_driver_ops { /* mlx5_flow.c */ +struct mlx5_flow_workspace *mlx5_flow_get_thread_workspace(void); struct mlx5_flow_id_pool *mlx5_flow_id_pool_alloc(uint32_t max_id); void mlx5_flow_id_pool_release(struct mlx5_flow_id_pool *pool); uint32_t mlx5_flow_id_get(struct mlx5_flow_id_pool *pool, uint32_t *id); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 79fdf34..ede7bf8 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -5939,9 +5939,11 @@ struct field_modify_info modify_tcp[] = { struct mlx5_flow *dev_flow; struct mlx5_flow_handle *dev_handle; struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); + MLX5_ASSERT(wks); /* In case of corrupting the memory. */ - if (priv->flow_idx >= MLX5_NUM_MAX_DEV_FLOWS) { + if (wks->flow_idx >= MLX5_NUM_MAX_DEV_FLOWS) { rte_flow_error_set(error, ENOSPC, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "not free temporary device flow"); @@ -5955,8 +5957,8 @@ struct field_modify_info modify_tcp[] = { "not enough memory to create flow handle"); return NULL; } - /* No multi-thread supporting. */ - dev_flow = &((struct mlx5_flow *)priv->inter_flows)[priv->flow_idx++]; + MLX5_ASSERT(wks->flow_idx + 1 < RTE_DIM(wks->flows)); + dev_flow = &wks->flows[wks->flow_idx++]; dev_flow->handle = dev_handle; dev_flow->handle_idx = handle_idx; /* @@ -8181,9 +8183,8 @@ struct field_modify_info modify_tcp[] = { struct mlx5_dev_config *dev_conf = &priv->config; struct rte_flow *flow = dev_flow->flow; struct mlx5_flow_handle *handle = dev_flow->handle; - struct mlx5_flow_rss_desc *rss_desc = &((struct mlx5_flow_rss_desc *) - priv->rss_desc) - [!!priv->flow_nested_idx]; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); + struct mlx5_flow_rss_desc *rss_desc; uint64_t item_flags = 0; uint64_t last_item = 0; uint64_t action_flags = 0; @@ -8216,6 +8217,8 @@ struct field_modify_info modify_tcp[] = { uint32_t table; int ret = 0; + MLX5_ASSERT(wks); + rss_desc = &wks->rss_desc[!!wks->flow_nested_idx]; mhdr_res->ft_type = attr->egress ? 
MLX5DV_FLOW_TABLE_TYPE_NIC_TX : MLX5DV_FLOW_TABLE_TYPE_NIC_RX; ret = mlx5_flow_group_to_table(attr, dev_flow->external, attr->group, @@ -8955,9 +8958,11 @@ struct field_modify_info modify_tcp[] = { int n; int err; int idx; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - for (idx = priv->flow_idx - 1; idx >= priv->flow_nested_idx; idx--) { - dev_flow = &((struct mlx5_flow *)priv->inter_flows)[idx]; + MLX5_ASSERT(wks); + for (idx = wks->flow_idx - 1; idx >= wks->flow_nested_idx; idx--) { + dev_flow = &wks->flows[idx]; dv = &dev_flow->dv; dh = dev_flow->handle; dv_h = &dh->dvh; @@ -8988,9 +8993,8 @@ struct field_modify_info modify_tcp[] = { } else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE) { struct mlx5_hrxq *hrxq; uint32_t hrxq_idx; - struct mlx5_flow_rss_desc *rss_desc = - &((struct mlx5_flow_rss_desc *)priv->rss_desc) - [!!priv->flow_nested_idx]; + struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc + [!!wks->flow_nested_idx]; MLX5_ASSERT(rss_desc->queue_num); hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key, diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c index 62c18b8..b649960 100644 --- a/drivers/net/mlx5/mlx5_flow_verbs.c +++ b/drivers/net/mlx5/mlx5_flow_verbs.c @@ -1632,7 +1632,9 @@ struct mlx5_flow *dev_flow; struct mlx5_flow_handle *dev_handle; struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); + MLX5_ASSERT(wks); size += flow_verbs_get_actions_size(actions); size += flow_verbs_get_items_size(items); if (size > MLX5_VERBS_MAX_SPEC_ACT_SIZE) { @@ -1642,7 +1644,7 @@ return NULL; } /* In case of corrupting the memory. */ - if (priv->flow_idx >= MLX5_NUM_MAX_DEV_FLOWS) { + if (wks->flow_idx >= MLX5_NUM_MAX_DEV_FLOWS) { rte_flow_error_set(error, ENOSPC, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "not free temporary device flow"); @@ -1656,8 +1658,8 @@ "not enough memory to create flow handle"); return NULL; } - /* No multi-thread supporting. */ - dev_flow = &((struct mlx5_flow *)priv->inter_flows)[priv->flow_idx++]; + MLX5_ASSERT(wks->flow_idx + 1 < RTE_DIM(wks->flows)); + dev_flow = &wks->flows[wks->flow_idx++]; dev_flow->handle = dev_handle; dev_flow->handle_idx = handle_idx; /* Memcpy is used, only size needs to be cleared to 0. 
*/ @@ -1701,10 +1703,11 @@ uint64_t priority = attr->priority; uint32_t subpriority = 0; struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_rss_desc *rss_desc = &((struct mlx5_flow_rss_desc *) - priv->rss_desc) - [!!priv->flow_nested_idx]; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); + struct mlx5_flow_rss_desc *rss_desc; + MLX5_ASSERT(wks); + rss_desc = &wks->rss_desc[!!wks->flow_nested_idx]; if (priority == MLX5_FLOW_PRIO_RSVD) priority = priv->config.flow_prio - 1; for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { @@ -1960,9 +1963,11 @@ uint32_t dev_handles; int err; int idx; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - for (idx = priv->flow_idx - 1; idx >= priv->flow_nested_idx; idx--) { - dev_flow = &((struct mlx5_flow *)priv->inter_flows)[idx]; + MLX5_ASSERT(wks); + for (idx = wks->flow_idx - 1; idx >= wks->flow_nested_idx; idx--) { + dev_flow = &wks->flows[idx]; handle = dev_flow->handle; if (handle->fate_action == MLX5_FLOW_FATE_DROP) { hrxq = mlx5_drop_action_create(dev); @@ -1976,8 +1981,7 @@ } else { uint32_t hrxq_idx; struct mlx5_flow_rss_desc *rss_desc = - &((struct mlx5_flow_rss_desc *)priv->rss_desc) - [!!priv->flow_nested_idx]; + &wks->rss_desc[!!wks->flow_nested_idx]; MLX5_ASSERT(rss_desc->queue_num); hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key, From patchwork Tue Oct 6 11:48:46 2020 X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79758 X-Patchwork-Delegate: rasland@nvidia.com From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:48:46 +0800 Message-Id: <1601984948-313027-4-git-send-email-suanmingm@nvidia.com> In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 03/25] net/mlx5: reuse flow Id as hairpin Id From: Xueming Li Hairpin flow matching requires a unique flow ID for matching. This patch reuses the flow ID as the hairpin flow ID, which saves the code that generated a separate hairpin ID and also saves flow memory by removing the hairpin ID field.
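The mlx5_flow.c hunk below carries the core of the change: the rte_flow is now allocated from the indexed pool before the hairpin split, so its pool index can double as the hairpin match ID. Condensed from the diff that follows, with error handling trimmed:

    /* Condensed flow_list_create() allocation order after this patch. */
    flow = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], &idx);
    if (!flow) {
            rte_errno = ENOMEM;
            return 0;
    }
    if (hairpin_flow > 0) {
            if (hairpin_flow > MLX5_MAX_SPLIT_ACTIONS)
                    goto error;
            /* The ipool index doubles as the hairpin flow ID. */
            flow_hairpin_split(dev, actions, actions_rx.actions,
                               actions_hairpin_tx.actions,
                               items_tx.items, idx);
    }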
Signed-off-by: Xueming Li --- drivers/net/mlx5/mlx5.c | 11 ----------- drivers/net/mlx5/mlx5.h | 1 - drivers/net/mlx5/mlx5_flow.c | 32 ++++++++++---------------------- drivers/net/mlx5/mlx5_flow.h | 5 +---- 4 files changed, 11 insertions(+), 38 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 16719e6..6c5c04d 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -952,13 +952,6 @@ struct mlx5_dev_ctx_shared * MLX5_ASSERT(sh->devx_rx_uar); MLX5_ASSERT(mlx5_os_get_devx_uar_base_addr(sh->devx_rx_uar)); } - sh->flow_id_pool = mlx5_flow_id_pool_alloc - ((1 << HAIRPIN_FLOW_ID_BITS) - 1); - if (!sh->flow_id_pool) { - DRV_LOG(ERR, "can't create flow id pool"); - err = ENOMEM; - goto error; - } #ifndef RTE_ARCH_64 /* Initialize UAR access locks for 32bit implementations. */ rte_spinlock_init(&sh->uar_lock_cq); @@ -1020,8 +1013,6 @@ struct mlx5_dev_ctx_shared * claim_zero(mlx5_glue->dealloc_pd(sh->pd)); if (sh->ctx) claim_zero(mlx5_glue->close_device(sh->ctx)); - if (sh->flow_id_pool) - mlx5_flow_id_pool_release(sh->flow_id_pool); mlx5_free(sh); MLX5_ASSERT(err > 0); rte_errno = err; @@ -1092,8 +1083,6 @@ struct mlx5_dev_ctx_shared * mlx5_glue->devx_free_uar(sh->devx_rx_uar); if (sh->ctx) claim_zero(mlx5_glue->close_device(sh->ctx)); - if (sh->flow_id_pool) - mlx5_flow_id_pool_release(sh->flow_id_pool); pthread_mutex_destroy(&sh->txpp.mutex); mlx5_free(sh); return; diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 0080ac8..a3ec994 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -652,7 +652,6 @@ struct mlx5_dev_ctx_shared { void *devx_comp; /* DEVX async comp obj. */ struct mlx5_devx_obj *tis; /* TIS object. */ struct mlx5_devx_obj *td; /* Transport domain. */ - struct mlx5_flow_id_pool *flow_id_pool; /* Flow ID pool. */ void *tx_uar; /* Tx/packet pacing shared UAR. */ struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX]; /* Flex parser profiles information. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index eeee546..f0a6a57 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -3426,9 +3426,8 @@ struct mlx5_flow_tunnel_info { struct rte_flow_action actions_rx[], struct rte_flow_action actions_tx[], struct rte_flow_item pattern_tx[], - uint32_t *flow_id) + uint32_t flow_id) { - struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_action_raw_encap *raw_encap; const struct rte_flow_action_raw_decap *raw_decap; struct mlx5_rte_flow_action_set_tag *set_tag; @@ -3438,7 +3437,6 @@ struct mlx5_flow_tunnel_info { char *addr; int encap = 0; - mlx5_flow_id_get(priv->sh->flow_id_pool, flow_id); for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { switch (actions->type) { case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: @@ -3507,7 +3505,7 @@ struct mlx5_flow_tunnel_info { set_tag = (void *)actions_rx; set_tag->id = mlx5_flow_get_reg_id(dev, MLX5_HAIRPIN_RX, 0, NULL); MLX5_ASSERT(set_tag->id > REG_NON); - set_tag->data = *flow_id; + set_tag->data = flow_id; tag_action->conf = set_tag; /* Create Tx item list. 
*/ rte_memcpy(actions_tx, actions, sizeof(struct rte_flow_action)); @@ -3516,7 +3514,7 @@ struct mlx5_flow_tunnel_info { item->type = (enum rte_flow_item_type) MLX5_RTE_FLOW_ITEM_TYPE_TAG; tag_item = (void *)addr; - tag_item->data = *flow_id; + tag_item->data = flow_id; tag_item->id = mlx5_flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL); MLX5_ASSERT(set_tag->id > REG_NON); item->spec = tag_item; @@ -4360,7 +4358,6 @@ struct mlx5_flow_tunnel_info { uint32_t i; uint32_t idx = 0; int hairpin_flow; - uint32_t hairpin_id = 0; struct rte_flow_attr attr_tx = { .priority = 0 }; struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); int ret; @@ -4372,24 +4369,22 @@ struct mlx5_flow_tunnel_info { external, hairpin_flow, error); if (ret < 0) return 0; + flow = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], &idx); + if (!flow) { + rte_errno = ENOMEM; + return 0; + } if (hairpin_flow > 0) { if (hairpin_flow > MLX5_MAX_SPLIT_ACTIONS) { rte_errno = EINVAL; - return 0; + goto error; } flow_hairpin_split(dev, actions, actions_rx.actions, actions_hairpin_tx.actions, items_tx.items, - &hairpin_id); + idx); p_actions_rx = actions_rx.actions; } - flow = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], &idx); - if (!flow) { - rte_errno = ENOMEM; - goto error_before_flow; - } flow->drv_type = flow_get_drv_type(dev, attr); - if (hairpin_id != 0) - flow->hairpin_flow_id = hairpin_id; MLX5_ASSERT(flow->drv_type > MLX5_FLOW_TYPE_MIN && flow->drv_type < MLX5_FLOW_TYPE_MAX); memset(rss_desc, 0, offsetof(struct mlx5_flow_rss_desc, queue)); @@ -4517,11 +4512,7 @@ struct mlx5_flow_tunnel_info { flow_drv_destroy(dev, flow); mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], idx); rte_errno = ret; /* Restore rte_errno. */ -error_before_flow: ret = rte_errno; - if (hairpin_id) - mlx5_flow_id_release(priv->sh->flow_id_pool, - hairpin_id); rte_errno = ret; wks->flow_idx = wks->flow_nested_idx; if (wks->flow_nested_idx) @@ -4662,9 +4653,6 @@ struct rte_flow * */ if (dev->data->dev_started) flow_rxq_flags_trim(dev, flow); - if (flow->hairpin_flow_id) - mlx5_flow_id_release(priv->sh->flow_id_pool, - flow->hairpin_flow_id); flow_drv_destroy(dev, flow); if (list) ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2685481..4a89524 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -841,8 +841,6 @@ struct mlx5_fdir_flow { uint32_t rix_flow; /* Index to flow. */ }; -#define HAIRPIN_FLOW_ID_BITS 28 - /* Flow structure. */ struct rte_flow { ILIST_ENTRY(uint32_t)next; /**< Index to the next flow structure. */ @@ -850,13 +848,12 @@ struct rte_flow { /**< Device flow handles that are part of the flow. */ uint32_t drv_type:2; /**< Driver type. */ uint32_t fdir:1; /**< Identifier of associated FDIR if any. */ - uint32_t hairpin_flow_id:HAIRPIN_FLOW_ID_BITS; /**< The flow id used for hairpin. */ uint32_t copy_applied:1; /**< The MARK copy Flow os applied. */ + uint32_t meter:16; /**< Holds flow meter id. */ uint32_t rix_mreg_copy; /**< Index to metadata register copy table resource. */ uint32_t counter; /**< Holds flow counter. */ - uint16_t meter; /**< Holds flow meter id. */ } __rte_packed; /* Thread specific flow workspace intermediate data. 
*/ From patchwork Tue Oct 6 11:48:47 2020 X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79757 X-Patchwork-Delegate: rasland@nvidia.com From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:48:47 +0800 Message-Id: <1601984948-313027-5-git-send-email-suanmingm@nvidia.com> In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 04/25] net/mlx5: indexed pool supports zero size entry From: Xueming Li To allow the indexed pool to be used as an ID generator, this patch allows the entry size to be zero.
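With a zero entry size, mlx5_ipool_zmalloc() skips the memset and the pool hands out only bitmap-backed indexes, which is what makes it usable as an ID generator. A minimal usage sketch under that assumption; the config fields mirror the "mlx5_flow_rss_id_ipool" entry added in the next patch, while the pool name here is a placeholder:

    /* Sketch: a zero-size indexed pool used purely as an ID allocator. */
    struct mlx5_indexed_pool_config cfg = {
            .size = 0,          /* no payload, indexes only */
            .need_lock = 1,     /* thread-safe ID generation */
            .type = "id_ipool", /* placeholder name */
    };
    struct mlx5_indexed_pool *pool = mlx5_ipool_create(&cfg);
    uint32_t id = 0;

    mlx5_ipool_malloc(pool, &id); /* id stays 0 on failure */
    if (id)
            mlx5_ipool_free(pool, id); /* return the ID to the bitmap */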
Signed-off-by: Xueming Li --- drivers/net/mlx5/mlx5_utils.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index fefe833..0a75fa6 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -230,7 +230,7 @@ struct mlx5_indexed_pool * struct mlx5_indexed_pool *pool; uint32_t i; - if (!cfg || !cfg->size || (!cfg->malloc ^ !cfg->free) || + if (!cfg || (!cfg->malloc ^ !cfg->free) || (cfg->trunk_size && ((cfg->trunk_size & (cfg->trunk_size - 1)) || ((__builtin_ffs(cfg->trunk_size) + TRUNK_IDX_BITS) > 32)))) return NULL; @@ -391,7 +391,7 @@ struct mlx5_indexed_pool * { void *entry = mlx5_ipool_malloc(pool, idx); - if (entry) + if (entry && pool->cfg.size) memset(entry, 0, pool->cfg.size); return entry; } From patchwork Tue Oct 6 11:48:48 2020 X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79759 X-Patchwork-Delegate: rasland@nvidia.com From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:48:48 +0800 Message-Id: <1601984948-313027-6-git-send-email-suanmingm@nvidia.com> In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 05/25] net/mlx5: use indexed pool for RSS flow ID From: Xueming Li The flow ID generation API used an integer pool to save released IDs; its only usage is to generate RSS flow IDs. To support multi-threaded flow operations, it has to be enhanced to be thread safe. An indexed pool can be used to generate unique IDs by setting the pool entry size to zero. Since a bitmap is used, an extra benefit is the memory saving, down to about one bit per entry. Furthermore, the indexed pool can be made thread safe by enabling its lock. This patch leverages the indexed pool to generate RSS flow IDs and removes the unused flow ID generation API.
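One subtlety preserved from the old qrss_id_pool is the range limit: the generated ID shares a match register with the meter color, so only the upper bits are usable. Condensed from the meter-split hunk in the diff below, with the failure paths collapsed for brevity:

    /* Condensed tag ID allocation for the meter split (see hunk below).
     * IDs share a REG_C register with the meter color, so only
     * 32 - MLX5_MTR_COLOR_BITS bits are usable; over-range IDs are
     * returned to the pool.
     */
    uint32_t tag_id = 0;

    mlx5_ipool_malloc(priv->sh->ipool[MLX5_IPOOL_RSS_ID], &tag_id);
    if (tag_id >= (1 << (sizeof(tag_id) * 8 - MLX5_MTR_COLOR_BITS))) {
            mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RSS_ID], tag_id);
            return 0; /* collapsed failure path */
    }
    if (!tag_id)
            return 0; /* pool exhausted */
    set_tag->data = tag_id << MLX5_MTR_COLOR_BITS;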
Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_os.c | 13 ---- drivers/net/mlx5/mlx5.c | 125 ++------------------------------------- drivers/net/mlx5/mlx5.h | 11 +--- drivers/net/mlx5/mlx5_flow.c | 47 +++++---------- drivers/net/mlx5/mlx5_flow.h | 5 -- 5 files changed, 21 insertions(+), 180 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 4276964..bfd5276 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1287,17 +1287,6 @@ err = mlx5_alloc_shared_dr(priv); if (err) goto error; - /* - * RSS id is shared with meter flow id. Meter flow id can only - * use the 24 MSB of the register. - */ - priv->qrss_id_pool = mlx5_flow_id_pool_alloc(UINT32_MAX >> - MLX5_MTR_COLOR_BITS); - if (!priv->qrss_id_pool) { - DRV_LOG(ERR, "can't create flow id pool"); - err = ENOMEM; - goto error; - } } if (config->devx && config->dv_flow_en && config->dest_tir) { priv->obj_ops = devx_obj_ops; @@ -1372,8 +1361,6 @@ close(priv->nl_socket_rdma); if (priv->vmwa_context) mlx5_vlan_vmwa_exit(priv->vmwa_context); - if (priv->qrss_id_pool) - mlx5_flow_id_pool_release(priv->qrss_id_pool); if (own_domain_id) claim_zero(rte_eth_switch_domain_free(priv->domain_id)); mlx5_free(priv); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 6c5c04d..b3d1638 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -299,6 +299,11 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .free = mlx5_free, .type = "rte_flow_ipool", }, + { + .size = 0, + .need_lock = 1, + .type = "mlx5_flow_rss_id_ipool", + }, }; @@ -307,126 +312,6 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = #define MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE 4096 -/** - * Allocate ID pool structure. - * - * @param[in] max_id - * The maximum id can be allocated from the pool. - * - * @return - * Pointer to pool object, NULL value otherwise. - */ -struct mlx5_flow_id_pool * -mlx5_flow_id_pool_alloc(uint32_t max_id) -{ - struct mlx5_flow_id_pool *pool; - void *mem; - - pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool), - RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); - if (!pool) { - DRV_LOG(ERR, "can't allocate id pool"); - rte_errno = ENOMEM; - return NULL; - } - mem = mlx5_malloc(MLX5_MEM_ZERO, - MLX5_FLOW_MIN_ID_POOL_SIZE * sizeof(uint32_t), - RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); - if (!mem) { - DRV_LOG(ERR, "can't allocate mem for id pool"); - rte_errno = ENOMEM; - goto error; - } - pool->free_arr = mem; - pool->curr = pool->free_arr; - pool->last = pool->free_arr + MLX5_FLOW_MIN_ID_POOL_SIZE; - pool->base_index = 0; - pool->max_id = max_id; - return pool; -error: - mlx5_free(pool); - return NULL; -} - -/** - * Release ID pool structure. - * - * @param[in] pool - * Pointer to flow id pool object to free. - */ -void -mlx5_flow_id_pool_release(struct mlx5_flow_id_pool *pool) -{ - mlx5_free(pool->free_arr); - mlx5_free(pool); -} - -/** - * Generate ID. - * - * @param[in] pool - * Pointer to flow id pool. - * @param[out] id - * The generated ID. - * - * @return - * 0 on success, error value otherwise. - */ -uint32_t -mlx5_flow_id_get(struct mlx5_flow_id_pool *pool, uint32_t *id) -{ - if (pool->curr == pool->free_arr) { - if (pool->base_index == pool->max_id) { - rte_errno = ENOMEM; - DRV_LOG(ERR, "no free id"); - return -rte_errno; - } - *id = ++pool->base_index; - return 0; - } - *id = *(--pool->curr); - return 0; -} - -/** - * Release ID. - * - * @param[in] pool - * Pointer to flow id pool. - * @param[out] id - * The generated ID. 
- * - * @return - * 0 on success, error value otherwise. - */ -uint32_t -mlx5_flow_id_release(struct mlx5_flow_id_pool *pool, uint32_t id) -{ - uint32_t size; - uint32_t size2; - void *mem; - - if (pool->curr == pool->last) { - size = pool->curr - pool->free_arr; - size2 = size * MLX5_ID_GENERATION_ARRAY_FACTOR; - MLX5_ASSERT(size2 > size); - mem = mlx5_malloc(0, size2 * sizeof(uint32_t), 0, - SOCKET_ID_ANY); - if (!mem) { - DRV_LOG(ERR, "can't allocate mem for id pool"); - rte_errno = ENOMEM; - return -rte_errno; - } - memcpy(mem, pool->free_arr, size * sizeof(uint32_t)); - mlx5_free(pool->free_arr); - pool->free_arr = mem; - pool->curr = pool->free_arr + size; - pool->last = pool->free_arr + size2; - } - *pool->curr = id; - pool->curr++; - return 0; -} /** * Initialize the shared aging list information per port. diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index a3ec994..0532577 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -45,6 +45,7 @@ enum mlx5_ipool_index { MLX5_IPOOL_HRXQ, /* Pool for hrxq resource. */ MLX5_IPOOL_MLX5_FLOW, /* Pool for mlx5 flow handle. */ MLX5_IPOOL_RTE_FLOW, /* Pool for rte_flow. */ + MLX5_IPOOL_RSS_ID, /* Pool for Queue/RSS flow ID. */ MLX5_IPOOL_MAX, }; @@ -513,15 +514,6 @@ struct mlx5_flow_tbl_resource { #define MLX5_FLOW_TABLE_LEVEL_SUFFIX (MLX5_MAX_TABLES - 3) #define MLX5_MAX_TABLES_FDB UINT16_MAX -/* ID generation structure. */ -struct mlx5_flow_id_pool { - uint32_t *free_arr; /**< Pointer to the a array of free values. */ - uint32_t base_index; - /**< The next index that can be used without any free elements. */ - uint32_t *curr; /**< Pointer to the index to pop. */ - uint32_t *last; /**< Pointer to the last element in the empty arrray. */ - uint32_t max_id; /**< Maximum id can be allocated from the pool. */ -}; /* Tx pacing queue structure - for Clock and Rearm queues. */ struct mlx5_txpp_wq { @@ -816,7 +808,6 @@ struct mlx5_priv { int nl_socket_route; /* Netlink socket (NETLINK_ROUTE). */ struct mlx5_dbr_page_list dbrpgs; /* Door-bell pages. */ struct mlx5_nl_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */ - struct mlx5_flow_id_pool *qrss_id_pool; struct mlx5_hlist *mreg_cp_tbl; /* Hash table of Rx metadata register copy table. */ uint8_t mtr_sfx_reg; /* Meter prefix-suffix flow match REG_C. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index f0a6a57..c1fbc80 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -2349,30 +2349,6 @@ struct mlx5_flow_tunnel_info { error); } -/* Allocate unique ID for the split Q/RSS subflows. */ -static uint32_t -flow_qrss_get_id(struct rte_eth_dev *dev) -{ - struct mlx5_priv *priv = dev->data->dev_private; - uint32_t qrss_id, ret; - - ret = mlx5_flow_id_get(priv->qrss_id_pool, &qrss_id); - if (ret) - return 0; - MLX5_ASSERT(qrss_id); - return qrss_id; -} - -/* Free unique ID for the split Q/RSS subflows. */ -static void -flow_qrss_free_id(struct rte_eth_dev *dev, uint32_t qrss_id) -{ - struct mlx5_priv *priv = dev->data->dev_private; - - if (qrss_id) - mlx5_flow_id_release(priv->qrss_id_pool, qrss_id); -} - /** * Release resource related QUEUE/RSS action split. 
* @@ -2392,7 +2368,8 @@ struct mlx5_flow_tunnel_info { SILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], flow->dev_handles, handle_idx, dev_handle, next) if (dev_handle->split_flow_id) - flow_qrss_free_id(dev, dev_handle->split_flow_id); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RSS_ID], + dev_handle->split_flow_id); } static int @@ -3629,6 +3606,7 @@ struct mlx5_flow_tunnel_info { struct rte_flow_action actions_sfx[], struct rte_flow_action actions_pre[]) { + struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_action *tag_action = NULL; struct rte_flow_item *tag_item; struct mlx5_rte_flow_action_set_tag *set_tag; @@ -3637,7 +3615,7 @@ struct mlx5_flow_tunnel_info { const struct rte_flow_action_raw_decap *raw_decap; struct mlx5_rte_flow_item_tag *tag_spec; struct mlx5_rte_flow_item_tag *tag_mask; - uint32_t tag_id; + uint32_t tag_id = 0; bool copy_vlan = false; /* Prepare the actions for prefix and suffix flow. */ @@ -3686,10 +3664,14 @@ struct mlx5_flow_tunnel_info { /* Set the tag. */ set_tag = (void *)actions_pre; set_tag->id = mlx5_flow_get_reg_id(dev, MLX5_MTR_SFX, 0, &error); - /* - * Get the id from the qrss_pool to make qrss share the id with meter. - */ - tag_id = flow_qrss_get_id(dev); + mlx5_ipool_malloc(priv->sh->ipool[MLX5_IPOOL_RSS_ID], &tag_id); + if (tag_id >= (1 << (sizeof(tag_id) * 8 - MLX5_MTR_COLOR_BITS))) { + DRV_LOG(ERR, "port %u meter flow id exceed max limit", + dev->data->port_id); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RSS_ID], tag_id); + } + if (!tag_id) + return 0; set_tag->data = tag_id << MLX5_MTR_COLOR_BITS; assert(tag_action); tag_action->conf = set_tag; @@ -3782,6 +3764,7 @@ struct mlx5_flow_tunnel_info { const struct rte_flow_action *qrss, int actions_n, struct rte_flow_error *error) { + struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_rte_flow_action_set_tag *set_tag; struct rte_flow_action_jump *jump; const int qrss_idx = qrss - actions; @@ -3813,7 +3796,7 @@ struct mlx5_flow_tunnel_info { * representors) domain even if they have coinciding * IDs. */ - flow_id = flow_qrss_get_id(dev); + mlx5_ipool_malloc(priv->sh->ipool[MLX5_IPOOL_RSS_ID], &flow_id); if (!flow_id) return rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION, @@ -4137,7 +4120,7 @@ struct mlx5_flow_tunnel_info { * These ones are included into parent flow list and will be destroyed * by flow_drv_destroy. 
*/ - flow_qrss_free_id(dev, qrss_id); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RSS_ID], qrss_id); mlx5_free(ext_actions); return ret; } diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 4a89524..85f2528 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -940,11 +940,6 @@ struct mlx5_flow_driver_ops { /* mlx5_flow.c */ struct mlx5_flow_workspace *mlx5_flow_get_thread_workspace(void); -struct mlx5_flow_id_pool *mlx5_flow_id_pool_alloc(uint32_t max_id); -void mlx5_flow_id_pool_release(struct mlx5_flow_id_pool *pool); -uint32_t mlx5_flow_id_get(struct mlx5_flow_id_pool *pool, uint32_t *id); -uint32_t mlx5_flow_id_release(struct mlx5_flow_id_pool *pool, - uint32_t id); int mlx5_flow_group_to_table(const struct rte_flow_attr *attributes, bool external, uint32_t group, bool fdb_def_rule, uint32_t *table, struct rte_flow_error *error); From patchwork Tue Oct 6 11:48:49 2020 X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79760 X-Patchwork-Delegate: rasland@nvidia.com From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:48:49 +0800 Message-Id: <1601984948-313027-7-git-send-email-suanmingm@nvidia.com> In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 06/25] net/mlx5: make rte flow list thread safe From: Xueming Li To support multi-thread flow operations, this patch introduces a list lock for the rte_flow list that manages all the rte_flow handlers.
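The locking pattern introduced below is deliberately narrow: only the linkage of the rte_flow list is serialized, while flow creation and destruction themselves stay outside the critical section. In isolation, the guarded insert from the diff reads:

    /* Spinlock-guarded insertion into the per-port rte_flow list. */
    if (list) {
            rte_spinlock_lock(&priv->flow_list_lock);
            ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list,
                         idx, flow, next);
            rte_spinlock_unlock(&priv->flow_list_lock);
    }

The destroy path takes the same lock around the matching ILIST_REMOVE, so two threads can build or tear down flows in parallel and only contend on the brief list update.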
Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_os.c | 1 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.c | 10 ++++++++-- 3 files changed, 10 insertions(+), 2 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index bfd5276..94c1e38 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1250,6 +1250,7 @@ MLX5_MAX_MAC_ADDRESSES); priv->flows = 0; priv->ctrl_flows = 0; + rte_spinlock_init(&priv->flow_list_lock); TAILQ_INIT(&priv->flow_meters); TAILQ_INIT(&priv->flow_meter_profiles); /* Hint libmlx5 to use PMD allocator for data plane resources */ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 0532577..464d2cf 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -784,6 +784,7 @@ struct mlx5_priv { struct mlx5_drop drop_queue; /* Flow drop queues. */ uint32_t flows; /* RTE Flow rules. */ uint32_t ctrl_flows; /* Control flow rules. */ + rte_spinlock_t flow_list_lock; struct mlx5_obj_ops obj_ops; /* HW objects operations. */ LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */ LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index c1fbc80..2790c32 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -4479,9 +4479,12 @@ struct mlx5_flow_tunnel_info { if (ret < 0) goto error; } - if (list) + if (list) { + rte_spinlock_lock(&priv->flow_list_lock); ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, idx, flow, next); + rte_spinlock_unlock(&priv->flow_list_lock); + } flow_rxq_flags_set(dev, flow); /* Nested flow creation index recovery. */ wks->flow_idx = wks->flow_nested_idx; @@ -4637,9 +4640,12 @@ struct rte_flow * if (dev->data->dev_started) flow_rxq_flags_trim(dev, flow); flow_drv_destroy(dev, flow); - if (list) + if (list) { + rte_spinlock_lock(&priv->flow_list_lock); ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, flow_idx, flow, next); + rte_spinlock_unlock(&priv->flow_list_lock); + } flow_mreg_del_copy_action(dev, flow); if (flow->fdir) { LIST_FOREACH(priv_fdir_flow, &priv->fdir_flows, next) { From patchwork Tue Oct 6 11:48:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79761 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id D9A3AA04BB; Tue, 6 Oct 2020 13:51:47 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 814761B694; Tue, 6 Oct 2020 13:49:34 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id B9A7D1B671 for ; Tue, 6 Oct 2020 13:49:29 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:27 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0R028553; Tue, 6 Oct 2020 14:49:25 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:48:50 +0800 Message-Id: <1601984948-313027-8-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: 
<1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 07/25] net/mlx5: support concurrent access for hash list From: Xueming Li To support multi-thread flow operations, this patch adds the following to support concurrent access to the hash list: 1. a list-level read/write lock; 2. an entry reference count; 3. entry create/match/remove callbacks; 4. removal of the insert/lookup/remove functions, which are not thread safe; 5. register/unregister functions to support entry reuse. For better performance, the lookup function uses a read lock to allow concurrent lookups from different threads, while all other hash list modification functions use a write lock, which blocks concurrent modification from other threads. The changes to the exact objects will be applied in the next patches. Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_os.c | 27 ++++--- drivers/net/mlx5/mlx5.c | 13 ++-- drivers/net/mlx5/mlx5_flow.c | 7 +- drivers/net/mlx5/mlx5_flow_dv.c | 6 +- drivers/net/mlx5/mlx5_utils.c | 154 ++++++++++++++++++++++++++++++++------- drivers/net/mlx5/mlx5_utils.h | 142 +++++++++++++++++++++++++++++------- 6 files changed, 272 insertions(+), 77 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 94c1e38..13b5a3f 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -237,14 +237,16 @@ return err; /* Create tags hash list table. */ snprintf(s, sizeof(s), "%s_tags", sh->ibdev_name); - sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE); + sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, 0, + false, NULL, NULL, NULL); if (!sh->tag_table) { DRV_LOG(ERR, "tags with hash creation failed."); err = ENOMEM; goto error; } snprintf(s, sizeof(s), "%s_hdr_modify", sh->ibdev_name); - sh->modify_cmds = mlx5_hlist_create(s, MLX5_FLOW_HDR_MODIFY_HTABLE_SZ); + sh->modify_cmds = mlx5_hlist_create(s, MLX5_FLOW_HDR_MODIFY_HTABLE_SZ, + 0, false, NULL, NULL, NULL); if (!sh->modify_cmds) { DRV_LOG(ERR, "hdr modify hash creation failed"); err = ENOMEM; @@ -252,7 +254,8 @@ } snprintf(s, sizeof(s), "%s_encaps_decaps", sh->ibdev_name); sh->encaps_decaps = mlx5_hlist_create(s, - MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ); + MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ, + 0, false, NULL, NULL, NULL); if (!sh->encaps_decaps) { DRV_LOG(ERR, "encap decap hash creation failed"); err = ENOMEM; @@ -332,16 +335,16 @@ sh->pop_vlan_action = NULL; } if (sh->encaps_decaps) { - mlx5_hlist_destroy(sh->encaps_decaps, NULL, NULL); + mlx5_hlist_destroy(sh->encaps_decaps); sh->encaps_decaps = NULL; } if (sh->modify_cmds) { - mlx5_hlist_destroy(sh->modify_cmds, NULL, NULL); + mlx5_hlist_destroy(sh->modify_cmds); sh->modify_cmds = NULL; } if (sh->tag_table) { /* tags should be destroyed with flow before.
*/ - mlx5_hlist_destroy(sh->tag_table, NULL, NULL); + mlx5_hlist_destroy(sh->tag_table); sh->tag_table = NULL; } mlx5_free_table_hash_list(priv); @@ -393,16 +396,16 @@ pthread_mutex_destroy(&sh->dv_mutex); #endif /* HAVE_MLX5DV_DR */ if (sh->encaps_decaps) { - mlx5_hlist_destroy(sh->encaps_decaps, NULL, NULL); + mlx5_hlist_destroy(sh->encaps_decaps); sh->encaps_decaps = NULL; } if (sh->modify_cmds) { - mlx5_hlist_destroy(sh->modify_cmds, NULL, NULL); + mlx5_hlist_destroy(sh->modify_cmds); sh->modify_cmds = NULL; } if (sh->tag_table) { /* tags should be destroyed with flow before. */ - mlx5_hlist_destroy(sh->tag_table, NULL, NULL); + mlx5_hlist_destroy(sh->tag_table); sh->tag_table = NULL; } mlx5_free_table_hash_list(priv); @@ -1343,7 +1346,9 @@ mlx5_flow_ext_mreg_supported(eth_dev) && priv->sh->dv_regc0_mask) { priv->mreg_cp_tbl = mlx5_hlist_create(MLX5_FLOW_MREG_HNAME, - MLX5_FLOW_MREG_HTABLE_SZ); + MLX5_FLOW_MREG_HTABLE_SZ, + 0, false, + NULL, NULL, NULL); if (!priv->mreg_cp_tbl) { err = ENOMEM; goto error; @@ -1353,7 +1358,7 @@ error: if (priv) { if (priv->mreg_cp_tbl) - mlx5_hlist_destroy(priv->mreg_cp_tbl, NULL, NULL); + mlx5_hlist_destroy(priv->mreg_cp_tbl); if (priv->sh) mlx5_os_free_shared_dr(priv); if (priv->nl_socket_route >= 0) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index b3d1638..ddf236a 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -998,7 +998,7 @@ struct mlx5_dev_ctx_shared * if (!sh->flow_tbls) return; - pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64); + pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL); if (pos) { tbl_data = container_of(pos, struct mlx5_flow_tbl_data_entry, entry); @@ -1007,7 +1007,7 @@ struct mlx5_dev_ctx_shared * mlx5_free(tbl_data); } table_key.direction = 1; - pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64); + pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL); if (pos) { tbl_data = container_of(pos, struct mlx5_flow_tbl_data_entry, entry); @@ -1017,7 +1017,7 @@ struct mlx5_dev_ctx_shared * } table_key.direction = 0; table_key.domain = 1; - pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64); + pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL); if (pos) { tbl_data = container_of(pos, struct mlx5_flow_tbl_data_entry, entry); @@ -1025,7 +1025,7 @@ struct mlx5_dev_ctx_shared * mlx5_hlist_remove(sh->flow_tbls, pos); mlx5_free(tbl_data); } - mlx5_hlist_destroy(sh->flow_tbls, NULL, NULL); + mlx5_hlist_destroy(sh->flow_tbls); } /** @@ -1047,7 +1047,8 @@ struct mlx5_dev_ctx_shared * MLX5_ASSERT(sh); snprintf(s, sizeof(s), "%s_flow_table", priv->sh->ibdev_name); - sh->flow_tbls = mlx5_hlist_create(s, MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE); + sh->flow_tbls = mlx5_hlist_create(s, MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE, + 0, false, NULL, NULL, NULL); if (!sh->flow_tbls) { DRV_LOG(ERR, "flow tables with hash creation failed."); err = ENOMEM; @@ -1275,7 +1276,7 @@ struct mlx5_dev_ctx_shared * } mlx5_proc_priv_uninit(dev); if (priv->mreg_cp_tbl) - mlx5_hlist_destroy(priv->mreg_cp_tbl, NULL, NULL); + mlx5_hlist_destroy(priv->mreg_cp_tbl); mlx5_mprq_free_mp(dev); mlx5_os_free_shared_dr(priv); if (priv->rss_conf.rss_key != NULL) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 2790c32..3a3b783 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -3018,7 +3018,7 @@ struct mlx5_flow_tunnel_info { cp_mreg.src = ret; /* Check if already registered. 
*/ MLX5_ASSERT(priv->mreg_cp_tbl); - mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, mark_id); + mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, mark_id, NULL); if (mcp_res) { /* For non-default rule. */ if (mark_id != MLX5_DEFAULT_COPY_ID) @@ -3095,8 +3095,7 @@ struct mlx5_flow_tunnel_info { goto error; mcp_res->refcnt++; mcp_res->hlist_ent.key = mark_id; - ret = mlx5_hlist_insert(priv->mreg_cp_tbl, - &mcp_res->hlist_ent); + ret = !mlx5_hlist_insert(priv->mreg_cp_tbl, &mcp_res->hlist_ent); MLX5_ASSERT(!ret); if (ret) goto error; @@ -3246,7 +3245,7 @@ struct mlx5_flow_tunnel_info { if (!priv->mreg_cp_tbl) return; mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, - MLX5_DEFAULT_COPY_ID); + MLX5_DEFAULT_COPY_ID, NULL); if (!mcp_res) return; MLX5_ASSERT(mcp_res->rix_flow); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index ede7bf8..fafe188 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -7632,7 +7632,7 @@ struct field_modify_info modify_tcp[] = { } }; struct mlx5_hlist_entry *pos = mlx5_hlist_lookup(sh->flow_tbls, - table_key.v64); + table_key.v64, NULL); struct mlx5_flow_tbl_data_entry *tbl_data; uint32_t idx = 0; int ret; @@ -7678,7 +7678,7 @@ struct field_modify_info modify_tcp[] = { /* Jump action reference count is initialized here. */ rte_atomic32_init(&tbl_data->jump.refcnt); pos->key = table_key.v64; - ret = mlx5_hlist_insert(sh->flow_tbls, pos); + ret = !mlx5_hlist_insert(sh->flow_tbls, pos); if (ret < 0) { rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -7858,7 +7858,7 @@ struct field_modify_info modify_tcp[] = { int ret; /* Lookup a matching resource from cache. */ - entry = mlx5_hlist_lookup(sh->tag_table, (uint64_t)tag_be24); + entry = mlx5_hlist_lookup(sh->tag_table, (uint64_t)tag_be24, NULL); if (entry) { cache_resource = container_of (entry, struct mlx5_flow_dv_tag_resource, entry); diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index 0a75fa6..4eb3db0 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -9,14 +9,39 @@ #include "mlx5_utils.h" +/********************* Hash List **********************/ + +static struct mlx5_hlist_entry * +mlx5_hlist_default_create_cb(struct mlx5_hlist *h, uint64_t key __rte_unused, + void *ctx __rte_unused) +{ + return mlx5_malloc(MLX5_MEM_ZERO, h->entry_sz, 0, SOCKET_ID_ANY); +} + +static void +mlx5_hlist_default_remove_cb(struct mlx5_hlist *h __rte_unused, + struct mlx5_hlist_entry *entry) +{ + mlx5_free(entry); +} + +static int +mlx5_hlist_default_match_cb(struct mlx5_hlist *h __rte_unused, + struct mlx5_hlist_entry *entry, void *ctx) +{ + return entry->key != *(uint64_t *)ctx; +} + struct mlx5_hlist * -mlx5_hlist_create(const char *name, uint32_t size) +mlx5_hlist_create(const char *name, uint32_t size, uint32_t entry_size, + bool write_most, mlx5_hlist_create_cb cb_create, + mlx5_hlist_match_cb cb_match, mlx5_hlist_remove_cb cb_remove) { struct mlx5_hlist *h; uint32_t act_size; uint32_t alloc_size; - if (!size) + if (!size || (!cb_create ^ !cb_remove)) return NULL; /* Align to the next power of 2, 32bits integer is enough now. */ if (!rte_is_power_of_2(size)) { @@ -40,13 +65,19 @@ struct mlx5_hlist * snprintf(h->name, MLX5_HLIST_NAMESIZE, "%s", name); h->table_sz = act_size; h->mask = act_size - 1; + h->entry_sz = entry_size; + h->write_most = write_most; + h->cb_create = cb_create ? cb_create : mlx5_hlist_default_create_cb; + h->cb_match = cb_match ? 
cb_match : mlx5_hlist_default_match_cb; + h->cb_remove = cb_remove ? cb_remove : mlx5_hlist_default_remove_cb; + rte_rwlock_init(&h->lock); DRV_LOG(DEBUG, "Hash list with %s size 0x%" PRIX32 " is created.", h->name, act_size); return h; } -struct mlx5_hlist_entry * -mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key) +static struct mlx5_hlist_entry * +__hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx, bool reuse) { uint32_t idx; struct mlx5_hlist_head *first; @@ -56,29 +87,81 @@ struct mlx5_hlist_entry * idx = rte_hash_crc_8byte(key, 0) & h->mask; first = &h->heads[idx]; LIST_FOREACH(node, first, next) { - if (node->key == key) - return node; + if (!__atomic_load_n(&node->ref_cnt, __ATOMIC_RELAXED)) + /* Ignore entry in middle of removal */ + continue; + if (!h->cb_match(h, node, ctx ? ctx : &key)) { + if (reuse) { + __atomic_add_fetch(&node->ref_cnt, 1, + __ATOMIC_RELAXED); + DRV_LOG(DEBUG, "hash list %s entry %p reuse: %u", + h->name, (void *)node, node->ref_cnt); + } + break; + } } - return NULL; + return node; } -int -mlx5_hlist_insert(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry) +static struct mlx5_hlist_entry * +hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx, bool reuse) +{ + struct mlx5_hlist_entry *node; + + MLX5_ASSERT(h); + rte_rwlock_read_lock(&h->lock); + node = __hlist_lookup(h, key, ctx, reuse); + rte_rwlock_read_unlock(&h->lock); + return node; +} + +struct mlx5_hlist_entry * +mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx) +{ + return __hlist_lookup(h, key, ctx, false); +} + +struct mlx5_hlist_entry* +mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx) { uint32_t idx; struct mlx5_hlist_head *first; - struct mlx5_hlist_entry *node; + struct mlx5_hlist_entry *entry; + uint32_t prev_gen_cnt = 0; MLX5_ASSERT(h && entry); - idx = rte_hash_crc_8byte(entry->key, 0) & h->mask; + /* Use write lock directly for write-most list */ + if (!h->write_most) { + prev_gen_cnt = __atomic_load_n(&h->gen_cnt, __ATOMIC_ACQUIRE); + entry = hlist_lookup(h, key, ctx, true); + if (entry) + return entry; + } + rte_rwlock_write_lock(&h->lock); + /* Check if the list changed by other threads. */ + if (h->write_most || + prev_gen_cnt != __atomic_load_n(&h->gen_cnt, __ATOMIC_ACQUIRE)) { + entry = __hlist_lookup(h, key, ctx, true); + if (entry) + goto done; + } + idx = rte_hash_crc_8byte(key, 0) & h->mask; first = &h->heads[idx]; - /* No need to reuse the lookup function. 
*/ - LIST_FOREACH(node, first, next) { - if (node->key == entry->key) - return -EEXIST; + entry = h->cb_create(h, key, ctx); + if (!entry) { + rte_errno = ENOMEM; + DRV_LOG(ERR, "Failed to allocate hash list %s entry", h->name); + goto done; } + entry->key = key; + entry->ref_cnt = 1; LIST_INSERT_HEAD(first, entry, next); - return 0; + __atomic_add_fetch(&h->gen_cnt, 1, __ATOMIC_ACQ_REL); + DRV_LOG(DEBUG, "hash list %s entry %p new: %u", + h->name, (void *)entry, entry->ref_cnt); +done: + rte_rwlock_write_unlock(&h->lock); + return entry; } struct mlx5_hlist_entry * @@ -119,19 +202,41 @@ struct mlx5_hlist_entry * return 0; } -void -mlx5_hlist_remove(struct mlx5_hlist *h __rte_unused, - struct mlx5_hlist_entry *entry) +int +mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry) { + uint32_t ref_cnt; + MLX5_ASSERT(entry && entry->next.le_prev); + MLX5_ASSERT(__atomic_fetch_n(&entry->ref_cnt, __ATOMIC_RELAXED)); + + ref_cnt = __atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQ_REL); + DRV_LOG(DEBUG, "hash list %s entry %p deref: %u", + h->name, (void *)entry, entry->ref_cnt); + if (ref_cnt) + return 1; + rte_rwlock_write_lock(&h->lock); + /* + * Check the ref_cnt again in the slowpath since there maybe new + * users come. + */ + if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED)) { + rte_rwlock_write_unlock(&h->lock); + return 1; + } LIST_REMOVE(entry, next); /* Set to NULL to get rid of removing action for more than once. */ entry->next.le_prev = NULL; + h->cb_remove(h, entry); + __atomic_add_fetch(&h->gen_cnt, 1, __ATOMIC_ACQ_REL); + rte_rwlock_write_unlock(&h->lock); + DRV_LOG(DEBUG, "hash list %s entry %p removed", + h->name, (void *)entry); + return 0; } void -mlx5_hlist_destroy(struct mlx5_hlist *h, - mlx5_hlist_destroy_callback_fn cb, void *ctx) +mlx5_hlist_destroy(struct mlx5_hlist *h) { uint32_t idx; struct mlx5_hlist_entry *entry; @@ -150,15 +255,14 @@ struct mlx5_hlist_entry * * the beginning). Or else the default free function * will be used. */ - if (cb) - cb(entry, ctx); - else - mlx5_free(entry); + h->cb_remove(h, entry); } } mlx5_free(h); } +/********************* Indexed pool **********************/ + static inline void mlx5_ipool_lock(struct mlx5_indexed_pool *pool) { diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index f078bdc..8719dee 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -13,6 +13,7 @@ #include #include +#include #include #include @@ -20,6 +21,11 @@ #include "mlx5_defs.h" +#define mlx5_hlist_remove(h, e) \ + mlx5_hlist_unregister(h, e) + +#define mlx5_hlist_insert(h, e) \ + mlx5_hlist_register(h, 0, e) /* Convert a bit number to the corresponding 64-bit mask */ #define MLX5_BITSHIFT(v) (UINT64_C(1) << (v)) @@ -245,6 +251,8 @@ struct mlx5_indexed_pool { /** Maximum size of string for naming the hlist table. */ #define MLX5_HLIST_NAMESIZE 32 +struct mlx5_hlist; + /** * Structure of the entry in the hash list, user should define its own struct * that contains this in order to store the data. The 'key' is 64-bits right @@ -253,6 +261,7 @@ struct mlx5_indexed_pool { struct mlx5_hlist_entry { LIST_ENTRY(mlx5_hlist_entry) next; /* entry pointers in the list. */ uint64_t key; /* user defined 'key', could be the hash signature. */ + uint32_t ref_cnt; /* reference count. */ }; /** Structure for hash head. 
*/ @@ -275,13 +284,73 @@ struct mlx5_hlist_entry { typedef int (*mlx5_hlist_match_callback_fn)(struct mlx5_hlist_entry *entry, void *ctx); -/** hash list table structure */ +/** + * Type of callback function for entry removal. + * + * @param list + * The hash list. + * @param entry + * The entry in the list. + */ +typedef void (*mlx5_hlist_remove_cb)(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry); + +/** + * Type of function for user defined matching. + * + * @param list + * The hash list. + * @param entry + * The entry in the list. + * @param ctx + * The pointer to new entry context. + * + * @return + * 0 if matching, non-zero number otherwise. + */ +typedef int (*mlx5_hlist_match_cb)(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry, void *ctx); + +/** + * Type of function for user defined hash list entry creation. + * + * @param list + * The hash list. + * @param key + * The key of the new entry. + * @param ctx + * The pointer to new entry context. + * + * @return + * Pointer to allocated entry on success, NULL otherwise. + */ +typedef struct mlx5_hlist_entry *(*mlx5_hlist_create_cb) + (struct mlx5_hlist *list, + uint64_t key, void *ctx); + +/** + * Hash list table structure + * + * Entry in hash list could be reused if entry already exists, reference + * count will increase and the existing entry returns. + * + * When destroy an entry from list, decrease reference count and only + * destroy when no further reference. + */ struct mlx5_hlist { char name[MLX5_HLIST_NAMESIZE]; /**< Name of the hash list. */ /**< number of heads, need to be power of 2. */ uint32_t table_sz; + uint32_t entry_sz; /**< Size of entry, used to allocate entry. */ /**< mask to get the index of the list heads. */ uint32_t mask; + rte_rwlock_t lock; + uint32_t gen_cnt; /* list modification will update generation count. */ + bool write_most; /* list mostly used for append new or destroy. */ + void *ctx; + mlx5_hlist_create_cb cb_create; /**< entry create callback. */ + mlx5_hlist_match_cb cb_match; /**< entry match callback. */ + mlx5_hlist_remove_cb cb_remove; /**< entry remove callback. */ struct mlx5_hlist_head heads[]; /**< list head arrays. */ }; @@ -297,40 +366,43 @@ struct mlx5_hlist { * Name of the hash list(optional). * @param size * Heads array size of the hash list. - * + * @param entry_size + * Entry size to allocate if cb_create not specified. + * @param write_most + * most operations to list is modification. + * @param cb_create + * Callback function for entry create. + * @param cb_match + * Callback function for entry match. + * @param cb_destroy + * Callback function for entry destroy. * @return * Pointer of the hash list table created, NULL on failure. */ -struct mlx5_hlist *mlx5_hlist_create(const char *name, uint32_t size); +struct mlx5_hlist *mlx5_hlist_create(const char *name, uint32_t size, + uint32_t entry_size, bool write_most, + mlx5_hlist_create_cb cb_create, + mlx5_hlist_match_cb cb_match, + mlx5_hlist_remove_cb cb_destroy); /** * Search an entry matching the key. * + * Result returned might be destroyed by other thread, must use + * this function only in main thread. + * * @param h * Pointer to the hast list table. * @param key * Key for the searching entry. + * @param ctx + * Common context parameter used by entry callback function. * * @return * Pointer of the hlist entry if found, NULL otherwise. 
*/ -struct mlx5_hlist_entry *mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key); - -/** - * Insert an entry to the hash list table, the entry is only part of whole data - * element and a 64B key is used for matching. User should construct the key or - * give a calculated hash signature and guarantee there is no collision. - * - * @param h - * Pointer to the hast list table. - * @param entry - * Entry to be inserted into the hash list table. - * - * @return - * - zero for success. - * - -EEXIST if the entry is already inserted. - */ -int mlx5_hlist_insert(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry); +struct mlx5_hlist_entry *mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, + void *ctx); /** * Extended routine to search an entry matching the context with @@ -376,6 +448,24 @@ int mlx5_hlist_insert_ex(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry, mlx5_hlist_match_callback_fn cb, void *ctx); /** + * Insert an entry to the hash list table, the entry is only part of whole data + * element and a 64B key is used for matching. User should construct the key or + * give a calculated hash signature and guarantee there is no collision. + * + * @param h + * Pointer to the hast list table. + * @param entry + * Entry to be inserted into the hash list table. + * @param ctx + * Common context parameter used by callback function. + * + * @return + * registered entry on success, NULL otherwise + */ +struct mlx5_hlist_entry *mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, + void *ctx); + +/** * Remove an entry from the hash list table. User should guarantee the validity * of the entry. * @@ -383,9 +473,10 @@ int mlx5_hlist_insert_ex(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry, * Pointer to the hast list table. (not used) * @param entry * Entry to be removed from the hash list table. + * @return + * 0 on entry removed, 1 on entry still referenced. */ -void mlx5_hlist_remove(struct mlx5_hlist *h __rte_unused, - struct mlx5_hlist_entry *entry); +int mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry); /** * Destroy the hash list table, all the entries already inserted into the lists @@ -394,13 +485,8 @@ void mlx5_hlist_remove(struct mlx5_hlist *h __rte_unused, * * @param h * Pointer to the hast list table. - * @param cb - * Callback function for each inserted entry when destroying the hash list. - * @param ctx - * Common context parameter used by callback function for each entry. */ -void mlx5_hlist_destroy(struct mlx5_hlist *h, - mlx5_hlist_destroy_callback_fn cb, void *ctx); +void mlx5_hlist_destroy(struct mlx5_hlist *h); /** * This function allocates non-initialized memory entry from pool. 
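Before moving on to the users of this API in the next patches, a minimal usage sketch of the reworked hash list may help. The example_entry type, callbacks and example_usage() below are hypothetical, written only to illustrate the create/register/unregister/destroy lifecycle introduced above; they are not part of the patch.

/* Hypothetical user entry embedding struct mlx5_hlist_entry. */
struct example_entry {
	struct mlx5_hlist_entry entry; /* First member, for container_of(). */
	void *obj; /* User payload. */
};

static struct mlx5_hlist_entry *
example_create_cb(struct mlx5_hlist *h __rte_unused, uint64_t key __rte_unused,
		  void *ctx __rte_unused)
{
	struct example_entry *e = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*e), 0,
					      SOCKET_ID_ANY);

	/* entry.key and entry.ref_cnt are filled in by mlx5_hlist_register(). */
	return e ? &e->entry : NULL;
}

static void
example_remove_cb(struct mlx5_hlist *h __rte_unused,
		  struct mlx5_hlist_entry *entry)
{
	mlx5_free(container_of(entry, struct example_entry, entry));
}

void
example_usage(void)
{
	/* NULL cb_match falls back to the default 64-bit key comparison;
	 * cb_create and cb_remove must be given together or not at all. */
	struct mlx5_hlist *h = mlx5_hlist_create("example", 64, 0, false,
						 example_create_cb, NULL,
						 example_remove_cb);
	struct mlx5_hlist_entry *e;

	if (!h)
		return;
	/* The first register creates the entry under the write lock; later
	 * registers with the same key only take a reference on it. */
	e = mlx5_hlist_register(h, 0x2a, NULL);
	if (e)
		/* Each unregister drops one reference; the last drop removes
		 * the entry and calls cb_remove. */
		mlx5_hlist_unregister(h, e);
	mlx5_hlist_destroy(h);
}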
From patchwork Tue Oct 6 11:48:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79762 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 946C2A04BB; Tue, 6 Oct 2020 13:52:15 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BEB5B1B737; Tue, 6 Oct 2020 13:49:36 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id C0E211B6A3 for ; Tue, 6 Oct 2020 13:49:34 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:28 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0S028553; Tue, 6 Oct 2020 14:49:27 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:48:51 +0800 Message-Id: <1601984948-313027-9-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 08/25] net/mlx5: make flow table cache thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Xueming Li To support multi-thread flow insertion/removal, this patch uses thread safe hash list API for flow table cache hash list. Signed-off-by: Xueming Li --- drivers/net/mlx5/mlx5.c | 102 ++++------------------------ drivers/net/mlx5/mlx5.h | 2 +- drivers/net/mlx5/mlx5_flow.h | 17 +++++ drivers/net/mlx5/mlx5_flow_dv.c | 147 ++++++++++++++++++++-------------------- 4 files changed, 105 insertions(+), 163 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index ddf236a..61e5e69 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -976,7 +976,7 @@ struct mlx5_dev_ctx_shared * } /** - * Destroy table hash list and all the root entries per domain. + * Destroy table hash list. * * @param[in] priv * Pointer to the private device data structure. 
@@ -985,46 +985,9 @@ struct mlx5_dev_ctx_shared * mlx5_free_table_hash_list(struct mlx5_priv *priv) { struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_flow_tbl_data_entry *tbl_data; - union mlx5_flow_tbl_key table_key = { - { - .table_id = 0, - .reserved = 0, - .domain = 0, - .direction = 0, - } - }; - struct mlx5_hlist_entry *pos; if (!sh->flow_tbls) return; - pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL); - if (pos) { - tbl_data = container_of(pos, struct mlx5_flow_tbl_data_entry, - entry); - MLX5_ASSERT(tbl_data); - mlx5_hlist_remove(sh->flow_tbls, pos); - mlx5_free(tbl_data); - } - table_key.direction = 1; - pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL); - if (pos) { - tbl_data = container_of(pos, struct mlx5_flow_tbl_data_entry, - entry); - MLX5_ASSERT(tbl_data); - mlx5_hlist_remove(sh->flow_tbls, pos); - mlx5_free(tbl_data); - } - table_key.direction = 0; - table_key.domain = 1; - pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL); - if (pos) { - tbl_data = container_of(pos, struct mlx5_flow_tbl_data_entry, - entry); - MLX5_ASSERT(tbl_data); - mlx5_hlist_remove(sh->flow_tbls, pos); - mlx5_free(tbl_data); - } mlx5_hlist_destroy(sh->flow_tbls); } @@ -1039,80 +1002,45 @@ struct mlx5_dev_ctx_shared * * Zero on success, positive error code otherwise. */ int -mlx5_alloc_table_hash_list(struct mlx5_priv *priv) +mlx5_alloc_table_hash_list(struct mlx5_priv *priv __rte_unused) { + int err = 0; + /* Tables are only used in DV and DR modes. */ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT struct mlx5_dev_ctx_shared *sh = priv->sh; char s[MLX5_HLIST_NAMESIZE]; - int err = 0; MLX5_ASSERT(sh); snprintf(s, sizeof(s), "%s_flow_table", priv->sh->ibdev_name); sh->flow_tbls = mlx5_hlist_create(s, MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE, - 0, false, NULL, NULL, NULL); + 0, false, flow_dv_tbl_create_cb, NULL, + flow_dv_tbl_remove_cb); if (!sh->flow_tbls) { DRV_LOG(ERR, "flow tables with hash creation failed."); err = ENOMEM; return err; } + sh->flow_tbls->ctx = sh; #ifndef HAVE_MLX5DV_DR + struct rte_flow_error error; + struct rte_eth_dev *dev = &rte_eth_devices[priv->dev_data->port_id]; + /* * In case we have not DR support, the zero tables should be created * because DV expect to see them even if they cannot be created by * RDMA-CORE. 
*/ - union mlx5_flow_tbl_key table_key = { - { - .table_id = 0, - .reserved = 0, - .domain = 0, - .direction = 0, - } - }; - struct mlx5_flow_tbl_data_entry *tbl_data = mlx5_malloc(MLX5_MEM_ZERO, - sizeof(*tbl_data), 0, - SOCKET_ID_ANY); - - if (!tbl_data) { - err = ENOMEM; - goto error; - } - tbl_data->entry.key = table_key.v64; - err = mlx5_hlist_insert(sh->flow_tbls, &tbl_data->entry); - if (err) - goto error; - rte_atomic32_init(&tbl_data->tbl.refcnt); - rte_atomic32_inc(&tbl_data->tbl.refcnt); - table_key.direction = 1; - tbl_data = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl_data), 0, - SOCKET_ID_ANY); - if (!tbl_data) { + if (!flow_dv_tbl_resource_get(dev, 0, 0, 0, 1, &error) || + !flow_dv_tbl_resource_get(dev, 0, 1, 0, 1, &error) || + !flow_dv_tbl_resource_get(dev, 0, 0, 1, 1, &error)) { err = ENOMEM; goto error; } - tbl_data->entry.key = table_key.v64; - err = mlx5_hlist_insert(sh->flow_tbls, &tbl_data->entry); - if (err) - goto error; - rte_atomic32_init(&tbl_data->tbl.refcnt); - rte_atomic32_inc(&tbl_data->tbl.refcnt); - table_key.direction = 0; - table_key.domain = 1; - tbl_data = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl_data), 0, - SOCKET_ID_ANY); - if (!tbl_data) { - err = ENOMEM; - goto error; - } - tbl_data->entry.key = table_key.v64; - err = mlx5_hlist_insert(sh->flow_tbls, &tbl_data->entry); - if (err) - goto error; - rte_atomic32_init(&tbl_data->tbl.refcnt); - rte_atomic32_inc(&tbl_data->tbl.refcnt); return err; error: mlx5_free_table_hash_list(priv); #endif /* HAVE_MLX5DV_DR */ +#endif return err; } diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 464d2cf..f11d783 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -490,7 +490,7 @@ struct mlx5_dev_shared_port { struct { /* Table ID should be at the lowest address. */ uint32_t table_id; /**< ID of the table. */ - uint16_t reserved; /**< must be zero for comparison. */ + uint16_t dummy; /**< Dummy table for DV API. */ uint8_t domain; /**< 1 - FDB, 0 - NIC TX/RX. */ uint8_t direction; /**< 1 - egress, 0 - ingress. 
*/ }; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 85f2528..f661d1e 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -368,6 +368,13 @@ enum mlx5_flow_fate_type { MLX5_FLOW_FATE_MAX, }; +/* Hash list callback context */ +struct mlx5_flow_cb_ctx { + struct rte_eth_dev *dev; + struct rte_flow_error *error; + void *data; +}; + /* Matcher PRM representation */ struct mlx5_flow_dv_match_params { size_t size; @@ -1074,4 +1081,14 @@ int mlx5_flow_destroy_policer_rules(struct rte_eth_dev *dev, const struct rte_flow_attr *attr); int mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error); + +/* hash list callbacks: */ +struct mlx5_hlist_entry *flow_dv_tbl_create_cb(struct mlx5_hlist *list, + uint64_t key, void *entry_ctx); +void flow_dv_tbl_remove_cb(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry); +struct mlx5_flow_tbl_resource *flow_dv_tbl_resource_get(struct rte_eth_dev *dev, + uint32_t table_id, uint8_t egress, uint8_t transfer, + uint8_t dummy, struct rte_flow_error *error); + #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index fafe188..fa19873 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -7597,6 +7597,48 @@ struct field_modify_info modify_tcp[] = { } +struct mlx5_hlist_entry * +flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *ctx) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_tbl_data_entry *tbl_data; + struct rte_flow_error *error = ctx; + union mlx5_flow_tbl_key key = { .v64 = key64 }; + struct mlx5_flow_tbl_resource *tbl; + void *domain; + uint32_t idx = 0; + int ret; + + tbl_data = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_JUMP], &idx); + if (!tbl_data) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot allocate flow table data entry"); + return NULL; + } + tbl_data->idx = idx; + tbl = &tbl_data->tbl; + if (key.dummy) + return &tbl_data->entry; + if (key.domain) + domain = sh->fdb_domain; + else if (key.direction) + domain = sh->tx_domain; + else + domain = sh->rx_domain; + ret = mlx5_flow_os_create_flow_tbl(domain, key.table_id, &tbl->obj); + if (ret) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "cannot create flow table object"); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], idx); + return NULL; + } + rte_atomic32_init(&tbl_data->jump.refcnt); + return &tbl_data->entry; +} + /** * Get a flow table. * @@ -7608,86 +7650,51 @@ struct field_modify_info modify_tcp[] = { * Direction of the table. * @param[in] transfer * E-Switch or NIC flow. + * @param[in] dummy + * Dummy entry for dv API. * @param[out] error * pointer to error structure. * * @return * Returns tables resource based on the index, NULL in case of failed. 
*/ -static struct mlx5_flow_tbl_resource * +struct mlx5_flow_tbl_resource * flow_dv_tbl_resource_get(struct rte_eth_dev *dev, uint32_t table_id, uint8_t egress, - uint8_t transfer, + uint8_t transfer, uint8_t dummy, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_flow_tbl_resource *tbl; union mlx5_flow_tbl_key table_key = { { .table_id = table_id, - .reserved = 0, + .dummy = dummy, .domain = !!transfer, .direction = !!egress, } }; - struct mlx5_hlist_entry *pos = mlx5_hlist_lookup(sh->flow_tbls, - table_key.v64, NULL); + struct mlx5_hlist_entry *entry; struct mlx5_flow_tbl_data_entry *tbl_data; - uint32_t idx = 0; - int ret; - void *domain; - if (pos) { - tbl_data = container_of(pos, struct mlx5_flow_tbl_data_entry, - entry); - tbl = &tbl_data->tbl; - rte_atomic32_inc(&tbl->refcnt); - return tbl; - } - tbl_data = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_JUMP], &idx); - if (!tbl_data) { - rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "cannot allocate flow table data entry"); + entry = mlx5_hlist_register(priv->sh->flow_tbls, table_key.v64, error); + if (!entry) return NULL; - } - tbl_data->idx = idx; - tbl = &tbl_data->tbl; - pos = &tbl_data->entry; - if (transfer) - domain = sh->fdb_domain; - else if (egress) - domain = sh->tx_domain; - else - domain = sh->rx_domain; - ret = mlx5_flow_os_create_flow_tbl(domain, table_id, &tbl->obj); - if (ret) { - rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot create flow table object"); - mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], idx); - return NULL; - } - /* - * No multi-threads now, but still better to initialize the reference - * count before insert it into the hash list. - */ - rte_atomic32_init(&tbl->refcnt); - /* Jump action reference count is initialized here. */ - rte_atomic32_init(&tbl_data->jump.refcnt); - pos->key = table_key.v64; - ret = !mlx5_hlist_insert(sh->flow_tbls, pos); - if (ret < 0) { - rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot insert flow table data entry"); - mlx5_flow_os_destroy_flow_tbl(tbl->obj); - mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], idx); - } - rte_atomic32_inc(&tbl->refcnt); - return tbl; + tbl_data = container_of(entry, struct mlx5_flow_tbl_data_entry, entry); + return &tbl_data->tbl; +} + +void +flow_dv_tbl_remove_cb(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_tbl_data_entry *tbl_data = + container_of(entry, struct mlx5_flow_tbl_data_entry, entry); + + MLX5_ASSERT(entry && sh); + if (tbl_data->tbl.obj) + mlx5_flow_os_destroy_flow_tbl(tbl_data->tbl.obj); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], tbl_data->idx); } /** @@ -7712,18 +7719,7 @@ struct field_modify_info modify_tcp[] = { if (!tbl) return 0; - if (rte_atomic32_dec_and_test(&tbl->refcnt)) { - struct mlx5_hlist_entry *pos = &tbl_data->entry; - - mlx5_flow_os_destroy_flow_tbl(tbl->obj); - tbl->obj = NULL; - /* remove the entry from the hash list and free memory. 
*/ - mlx5_hlist_remove(sh->flow_tbls, pos); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_JUMP], - tbl_data->idx); - return 0; - } - return 1; + return mlx5_hlist_unregister(sh->flow_tbls, &tbl_data->entry); } /** @@ -7762,7 +7758,7 @@ struct field_modify_info modify_tcp[] = { int ret; tbl = flow_dv_tbl_resource_get(dev, key->table_id, key->direction, - key->domain, error); + key->domain, 0, error); if (!tbl) return -rte_errno; /* No need to refill the error info */ tbl_data = container_of(tbl, struct mlx5_flow_tbl_data_entry, tbl); @@ -8492,7 +8488,7 @@ struct field_modify_info modify_tcp[] = { return ret; tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, - attr->transfer, error); + attr->transfer, 0, + error); if (!tbl) return rte_flow_error_set (error, errno, @@ -9685,7 +9682,7 @@ struct field_modify_info modify_tcp[] = { dtb = &mtb->ingress; /* Create the meter table with METER level. */ dtb->tbl = flow_dv_tbl_resource_get(dev, MLX5_FLOW_TABLE_LEVEL_METER, - egress, transfer, &error); + egress, transfer, 0, &error); if (!dtb->tbl) { DRV_LOG(ERR, "Failed to create meter policer table."); return -1; @@ -9693,7 +9690,7 @@ struct field_modify_info modify_tcp[] = { /* Create the meter suffix table with SUFFIX level. */ dtb->sfx_tbl = flow_dv_tbl_resource_get(dev, MLX5_FLOW_TABLE_LEVEL_SUFFIX, - egress, transfer, &error); + egress, transfer, 0, &error); if (!dtb->sfx_tbl) { DRV_LOG(ERR, "Failed to create meter suffix table."); return -1;
From patchwork Tue Oct 6 11:48:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79764 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 95B4EA04BB; Tue, 6 Oct 2020 13:52:58 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E157C1B85E; Tue, 6 Oct 2020 13:49:39 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id C19661B6AB for ; Tue, 6 Oct 2020 13:49:34 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:30 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0T028553; Tue, 6 Oct 2020 14:49:29 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, stable@dpdk.org Date: Tue, 6 Oct 2020 19:48:52 +0800 Message-Id: <1601984948-313027-10-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 09/25] net/mlx5: fix redundant Direct Verbs resources allocate X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"
Table, tag, header modify and header reformat resources are supported only in DV mode. For OFED versions that do not support these, creating the related DV resources is redundant and wastes memory. Wrap the code section in the HAVE_IBV_FLOW_DV_SUPPORT macro to avoid the redundant resource allocation.
Fixes: 2eb4d0107acc ("net/mlx5: refactor PCI probing on Linux") Cc: stable@dpdk.org
Signed-off-by: Suanming Mou --- drivers/net/mlx5/linux/mlx5_os.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 13b5a3f..d828035 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -234,7 +234,9 @@ DRV_LOG(DEBUG, "sh->flow_tbls[%p] already created, reuse\n", (void *)sh->flow_tbls); if (err) - return err; + goto error; + /* The resources below are only valid with DV support. */ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT /* Create tags hash list table. */ snprintf(s, sizeof(s), "%s_tags", sh->ibdev_name); sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, 0, @@ -261,6 +263,7 @@ err = ENOMEM; goto error; } +#endif #ifdef HAVE_MLX5DV_DR void *domain;
From patchwork Tue Oct 6 11:48:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79763 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 12716A04BB; Tue, 6 Oct 2020 13:52:37 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3F3001B7D8; Tue, 6 Oct 2020 13:49:38 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id D846F1B709 for ; Tue, 6 Oct 2020 13:49:34 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:32 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0U028553; Tue, 6 Oct 2020 14:49:31 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:48:53 +0800 Message-Id: <1601984948-313027-11-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 10/25] net/mlx5: make flow tag list thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Xueming Li
To support multi-thread flow insertion, this patch updates flow tag list to use thread safe hash list.
Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_os.c | 6 ++- drivers/net/mlx5/mlx5_flow.h | 5 +++ drivers/net/mlx5/mlx5_flow_dv.c | 97 +++++++++++++++++++--------------------- 3 files changed, 56 insertions(+), 52 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index d828035..39bd16b 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -225,7 +225,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) { struct mlx5_dev_ctx_shared *sh = priv->sh; - char s[MLX5_HLIST_NAMESIZE]; + char s[MLX5_HLIST_NAMESIZE] __rte_unused; int err = 0; if (!sh->flow_tbls) @@ -240,12 +240,14 @@ /* Create tags hash list table.
*/ snprintf(s, sizeof(s), "%s_tags", sh->ibdev_name); sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, 0, - false, NULL, NULL, NULL); + true, flow_dv_tag_create_cb, NULL, + flow_dv_tag_remove_cb); if (!sh->tag_table) { DRV_LOG(ERR, "tags with hash creation failed."); err = ENOMEM; goto error; } + sh->tag_table->ctx = sh; snprintf(s, sizeof(s), "%s_hdr_modify", sh->ibdev_name); sh->modify_cmds = mlx5_hlist_create(s, MLX5_FLOW_HDR_MODIFY_HTABLE_SZ, 0, false, NULL, NULL, NULL); diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index f661d1e..c92a40b 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1091,4 +1091,9 @@ struct mlx5_flow_tbl_resource *flow_dv_tbl_resource_get(struct rte_eth_dev *dev, uint32_t table_id, uint8_t egress, uint8_t transfer, uint8_t dummy, struct rte_flow_error *error); +struct mlx5_hlist_entry *flow_dv_tag_create_cb(struct mlx5_hlist *list, + uint64_t key, void *cb_ctx); +void flow_dv_tag_remove_cb(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry); + #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index fa19873..23ec983 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -7825,6 +7825,35 @@ struct mlx5_flow_tbl_resource * return 0; } +struct mlx5_hlist_entry * +flow_dv_tag_create_cb(struct mlx5_hlist *list, uint64_t key, void *ctx) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct rte_flow_error *error = ctx; + struct mlx5_flow_dv_tag_resource *entry; + uint32_t idx = 0; + int ret; + + entry = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_TAG], &idx); + if (!entry) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate resource memory"); + return NULL; + } + entry->idx = idx; + ret = mlx5_flow_os_create_flow_action_tag(key, + &entry->action); + if (ret) { + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TAG], idx); + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "cannot create action"); + return NULL; + } + return &entry->entry; +} + /** * Find existing tag resource or create and register a new one. * @@ -7848,54 +7877,32 @@ struct mlx5_flow_tbl_resource * struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; struct mlx5_flow_dv_tag_resource *cache_resource; struct mlx5_hlist_entry *entry; - int ret; - /* Lookup a matching resource from cache. */ - entry = mlx5_hlist_lookup(sh->tag_table, (uint64_t)tag_be24, NULL); + entry = mlx5_hlist_register(priv->sh->tag_table, tag_be24, error); if (entry) { cache_resource = container_of (entry, struct mlx5_flow_dv_tag_resource, entry); - rte_atomic32_inc(&cache_resource->refcnt); dev_flow->handle->dvh.rix_tag = cache_resource->idx; dev_flow->dv.tag_resource = cache_resource; - DRV_LOG(DEBUG, "cached tag resource %p: refcnt now %d++", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); return 0; } - /* Register new resource. 
*/ - cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_TAG], - &dev_flow->handle->dvh.rix_tag); - if (!cache_resource) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot allocate resource memory"); - cache_resource->entry.key = (uint64_t)tag_be24; - ret = mlx5_flow_os_create_flow_action_tag(tag_be24, - &cache_resource->action); - if (ret) { - mlx5_free(cache_resource); - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot create action"); - } - rte_atomic32_init(&cache_resource->refcnt); - rte_atomic32_inc(&cache_resource->refcnt); - if (mlx5_hlist_insert(sh->tag_table, &cache_resource->entry)) { - mlx5_flow_os_destroy_flow_action(cache_resource->action); - mlx5_free(cache_resource); - return rte_flow_error_set(error, EEXIST, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot insert tag"); - } - dev_flow->dv.tag_resource = cache_resource; - DRV_LOG(DEBUG, "new tag resource %p: refcnt now %d++", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); - return 0; + return -rte_errno; +} + +void +flow_dv_tag_remove_cb(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_dv_tag_resource *tag = + container_of(entry, struct mlx5_flow_dv_tag_resource, entry); + + MLX5_ASSERT(tag && sh && tag->action); + claim_zero(mlx5_flow_os_destroy_flow_action(tag->action)); + DRV_LOG(DEBUG, "tag %p: removed", (void *)tag); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TAG], tag->idx); } /** @@ -7914,24 +7921,14 @@ struct mlx5_flow_tbl_resource * uint32_t tag_idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; struct mlx5_flow_dv_tag_resource *tag; tag = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_TAG], tag_idx); if (!tag) return 0; DRV_LOG(DEBUG, "port %u tag %p: refcnt %d--", - dev->data->port_id, (void *)tag, - rte_atomic32_read(&tag->refcnt)); - if (rte_atomic32_dec_and_test(&tag->refcnt)) { - claim_zero(mlx5_flow_os_destroy_flow_action(tag->action)); - mlx5_hlist_remove(sh->tag_table, &tag->entry); - DRV_LOG(DEBUG, "port %u tag %p: removed", - dev->data->port_id, (void *)tag); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_TAG], tag_idx); - return 0; - } - return 1; + dev->data->port_id, (void *)tag, tag->entry.ref_cnt); + return mlx5_hlist_unregister(priv->sh->tag_table, &tag->entry); } /** From patchwork Tue Oct 6 11:48:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79765 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 680DBA04BB; Tue, 6 Oct 2020 13:53:19 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9D9241B872; Tue, 6 Oct 2020 13:49:41 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id DA3851B81B for ; Tue, 6 Oct 2020 13:49:39 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:34 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0V028553; Tue, 6 Oct 2020 14:49:32 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com 
Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:48:54 +0800 Message-Id: <1601984948-313027-12-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 11/25] net/mlx5: make flow modify action list thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Xueming Li To support multi-thread flow insertion, this patch updates flow modify action list to use thread safe hash list. Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_os.c | 5 +- drivers/net/mlx5/mlx5_flow.h | 13 ++- drivers/net/mlx5/mlx5_flow_dv.c | 191 +++++++++++++++++---------------------- 3 files changed, 96 insertions(+), 113 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 39bd16b..744b1dd 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -250,12 +250,15 @@ sh->tag_table->ctx = sh; snprintf(s, sizeof(s), "%s_hdr_modify", sh->ibdev_name); sh->modify_cmds = mlx5_hlist_create(s, MLX5_FLOW_HDR_MODIFY_HTABLE_SZ, - 0, false, NULL, NULL, NULL); + 0, true, flow_dv_modify_create_cb, + flow_dv_modify_match_cb, + flow_dv_modify_remove_cb); if (!sh->modify_cmds) { DRV_LOG(ERR, "hdr modify hash creation failed"); err = ENOMEM; goto error; } + sh->modify_cmds->ctx = sh; snprintf(s, sizeof(s), "%s_encaps_decaps", sh->ibdev_name); sh->encaps_decaps = mlx5_hlist_create(s, MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index c92a40b..7effacc 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -451,10 +451,8 @@ struct mlx5_flow_dv_tag_resource { /* Modify resource structure */ struct mlx5_flow_dv_modify_hdr_resource { struct mlx5_hlist_entry entry; - /* Pointer to next element. */ - rte_atomic32_t refcnt; /**< Reference counter. */ - void *action; - /**< Modify header action object. */ + void *action; /**< Modify header action object. */ + /* Key area for hash list matching: */ uint8_t ft_type; /**< Flow table type, Rx or Tx. */ uint32_t actions_num; /**< Number of modification actions. */ uint64_t flags; /**< Flags for RDMA API. */ @@ -1096,4 +1094,11 @@ struct mlx5_hlist_entry *flow_dv_tag_create_cb(struct mlx5_hlist *list, void flow_dv_tag_remove_cb(struct mlx5_hlist *list, struct mlx5_hlist_entry *entry); +int flow_dv_modify_match_cb(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry, void *cb_ctx); +struct mlx5_hlist_entry *flow_dv_modify_create_cb(struct mlx5_hlist *list, + uint64_t key, void *ctx); +void flow_dv_modify_remove_cb(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry); + #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 23ec983..d19f697 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -4010,35 +4010,72 @@ struct field_modify_info modify_tcp[] = { /** * Match modify-header resource. * + * @param list + * Pointer to the hash list. * @param entry * Pointer to exist resource entry object. * @param ctx * Pointer to new modify-header resource. * * @return - * 0 on matching, -1 otherwise. + * 0 on matching, non-zero otherwise. 
*/ -static int -flow_dv_modify_hdr_resource_match(struct mlx5_hlist_entry *entry, void *ctx) +int +flow_dv_modify_match_cb(struct mlx5_hlist *list __rte_unused, + struct mlx5_hlist_entry *entry, void *cb_ctx) { - struct mlx5_flow_dv_modify_hdr_resource *resource; - struct mlx5_flow_dv_modify_hdr_resource *cache_resource; - uint32_t actions_len; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_modify_hdr_resource *ref = ctx->data; + struct mlx5_flow_dv_modify_hdr_resource *resource = + container_of(entry, typeof(*resource), entry); + uint32_t key_len = sizeof(*ref) - offsetof(typeof(*ref), ft_type); - resource = (struct mlx5_flow_dv_modify_hdr_resource *)ctx; - cache_resource = container_of(entry, - struct mlx5_flow_dv_modify_hdr_resource, - entry); - actions_len = resource->actions_num * sizeof(resource->actions[0]); - if (resource->entry.key == cache_resource->entry.key && - resource->ft_type == cache_resource->ft_type && - resource->actions_num == cache_resource->actions_num && - resource->flags == cache_resource->flags && - !memcmp((const void *)resource->actions, - (const void *)cache_resource->actions, - actions_len)) - return 0; - return -1; + key_len += ref->actions_num * sizeof(ref->actions[0]); + return ref->actions_num != resource->actions_num || + memcmp(&ref->ft_type, &resource->ft_type, key_len); +} + +struct mlx5_hlist_entry * +flow_dv_modify_create_cb(struct mlx5_hlist *list, uint64_t key __rte_unused, + void *cb_ctx) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5dv_dr_domain *ns; + struct mlx5_flow_dv_modify_hdr_resource *entry; + struct mlx5_flow_dv_modify_hdr_resource *ref = ctx->data; + int ret; + uint32_t data_len = ref->actions_num * sizeof(ref->actions[0]); + uint32_t key_len = sizeof(*ref) - offsetof(typeof(*ref), ft_type); + + entry = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*entry) + data_len, 0, + SOCKET_ID_ANY); + if (!entry) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate resource memory"); + return NULL; + } + rte_memcpy(&entry->ft_type, + RTE_PTR_ADD(ref, offsetof(typeof(*ref), ft_type)), + key_len + data_len); + if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) + ns = sh->fdb_domain; + else if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX) + ns = sh->tx_domain; + else + ns = sh->rx_domain; + ret = mlx5_flow_os_create_flow_action_modify_header + (sh->ctx, ns, entry, + data_len, &entry->action); + if (ret) { + mlx5_free(entry); + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "cannot create modification action"); + return NULL; + } + return &entry->entry; } /** @@ -4065,19 +4102,14 @@ struct field_modify_info modify_tcp[] = { { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_flow_dv_modify_hdr_resource *cache_resource; - struct mlx5dv_dr_domain *ns; - uint32_t actions_len; + uint32_t key_len = sizeof(*resource) - + offsetof(typeof(*resource), ft_type) + + resource->actions_num * sizeof(resource->actions[0]); struct mlx5_hlist_entry *entry; - union mlx5_flow_modify_hdr_key hdr_mod_key = { - { - .ft_type = resource->ft_type, - .actions_num = resource->actions_num, - .group = dev_flow->dv.group, - .cksum = 0, - } + struct mlx5_flow_cb_ctx ctx = { + .error = error, + .data = resource, }; - int ret; resource->flags = dev_flow->dv.group ? 
0 : MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL; @@ -4086,66 +4118,12 @@ struct field_modify_info modify_tcp[] = { return rte_flow_error_set(error, EOVERFLOW, RTE_FLOW_ERROR_TYPE_ACTION, NULL, "too many modify header items"); - if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) - ns = sh->fdb_domain; - else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX) - ns = sh->tx_domain; - else - ns = sh->rx_domain; - /* Lookup a matching resource from cache. */ - actions_len = resource->actions_num * sizeof(resource->actions[0]); - hdr_mod_key.cksum = __rte_raw_cksum(resource->actions, actions_len, 0); - resource->entry.key = hdr_mod_key.v64; - entry = mlx5_hlist_lookup_ex(sh->modify_cmds, resource->entry.key, - flow_dv_modify_hdr_resource_match, - (void *)resource); - if (entry) { - cache_resource = container_of(entry, - struct mlx5_flow_dv_modify_hdr_resource, - entry); - DRV_LOG(DEBUG, "modify-header resource %p: refcnt %d++", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); - rte_atomic32_inc(&cache_resource->refcnt); - dev_flow->handle->dvh.modify_hdr = cache_resource; - return 0; - - } - /* Register new modify-header resource. */ - cache_resource = mlx5_malloc(MLX5_MEM_ZERO, - sizeof(*cache_resource) + actions_len, 0, - SOCKET_ID_ANY); - if (!cache_resource) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot allocate resource memory"); - *cache_resource = *resource; - rte_memcpy(cache_resource->actions, resource->actions, actions_len); - ret = mlx5_flow_os_create_flow_action_modify_header - (sh->ctx, ns, cache_resource, - actions_len, &cache_resource->action); - if (ret) { - mlx5_free(cache_resource); - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot create action"); - } - rte_atomic32_init(&cache_resource->refcnt); - rte_atomic32_inc(&cache_resource->refcnt); - if (mlx5_hlist_insert_ex(sh->modify_cmds, &cache_resource->entry, - flow_dv_modify_hdr_resource_match, - (void *)cache_resource)) { - claim_zero(mlx5_flow_os_destroy_flow_action - (cache_resource->action)); - mlx5_free(cache_resource); - return rte_flow_error_set(error, EEXIST, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "action exist"); - } - dev_flow->handle->dvh.modify_hdr = cache_resource; - DRV_LOG(DEBUG, "new modify-header resource %p: refcnt %d++", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); + resource->entry.key = __rte_raw_cksum(&resource->ft_type, key_len, 0); + entry = mlx5_hlist_register(sh->modify_cmds, resource->entry.key, &ctx); + if (!entry) + return -rte_errno; + resource = container_of(entry, typeof(*resource), entry); + dev_flow->handle->dvh.modify_hdr = resource; return 0; } @@ -9220,6 +9198,17 @@ struct mlx5_hlist_entry * return 1; } +void +flow_dv_modify_remove_cb(struct mlx5_hlist *list __rte_unused, + struct mlx5_hlist_entry *entry) +{ + struct mlx5_flow_dv_modify_hdr_resource *res = + container_of(entry, typeof(*res), entry); + + claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); + mlx5_free(entry); +} + /** * Release a modify-header resource. 
* @@ -9236,24 +9225,10 @@ struct mlx5_hlist_entry * struct mlx5_flow_handle *handle) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_dv_modify_hdr_resource *cache_resource = - handle->dvh.modify_hdr; + struct mlx5_flow_dv_modify_hdr_resource *entry = handle->dvh.modify_hdr; - MLX5_ASSERT(cache_resource->action); - DRV_LOG(DEBUG, "modify-header resource %p: refcnt %d--", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); - if (rte_atomic32_dec_and_test(&cache_resource->refcnt)) { - claim_zero(mlx5_flow_os_destroy_flow_action - (cache_resource->action)); - mlx5_hlist_remove(priv->sh->modify_cmds, - &cache_resource->entry); - mlx5_free(cache_resource); - DRV_LOG(DEBUG, "modify-header resource %p: removed", - (void *)cache_resource); - return 0; - } - return 1; + MLX5_ASSERT(entry->action); + return mlx5_hlist_unregister(priv->sh->modify_cmds, &entry->entry); } /** From patchwork Tue Oct 6 11:48:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79767 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8171CA04BB; Tue, 6 Oct 2020 13:54:02 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B40D51B9E6; Tue, 6 Oct 2020 13:49:44 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 089181B870 for ; Tue, 6 Oct 2020 13:49:39 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:35 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0W028553; Tue, 6 Oct 2020 14:49:34 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:48:55 +0800 Message-Id: <1601984948-313027-13-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 12/25] net/mlx5: make metadata copy flow list thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Xueming Li To support multi-thread flow insertion, this patch updates metadata copy flow list to use thread safe hash list. 
Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_os.c | 6 +- drivers/net/mlx5/mlx5_flow.c | 276 +++++++++++++++++++-------------------- drivers/net/mlx5/mlx5_flow.h | 6 +- 3 files changed, 139 insertions(+), 149 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 744b1dd..50d3d99 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1354,13 +1354,13 @@ mlx5_flow_ext_mreg_supported(eth_dev) && priv->sh->dv_regc0_mask) { priv->mreg_cp_tbl = mlx5_hlist_create(MLX5_FLOW_MREG_HNAME, - MLX5_FLOW_MREG_HTABLE_SZ, - 0, false, - NULL, NULL, NULL); + MLX5_FLOW_MREG_HTABLE_SZ, 0, false, + flow_dv_mreg_create_cb, NULL, flow_dv_mreg_remove_cb); if (!priv->mreg_cp_tbl) { err = ENOMEM; goto error; } + priv->mreg_cp_tbl->ctx = eth_dev; } return eth_dev; error: diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 3a3b783..3808e2b 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -2950,36 +2950,17 @@ struct mlx5_flow_tunnel_info { flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, uint32_t flow_idx); -/** - * Add a flow of copying flow metadata registers in RX_CP_TBL. - * - * As mark_id is unique, if there's already a registered flow for the mark_id, - * return by increasing the reference counter of the resource. Otherwise, create - * the resource (mcp_res) and flow. - * - * Flow looks like, - * - If ingress port is ANY and reg_c[1] is mark_id, - * flow_tag := mark_id, reg_b := reg_c[0] and jump to RX_ACT_TBL. - * - * For default flow (zero mark_id), flow is like, - * - If ingress port is ANY, - * reg_b := reg_c[0] and jump to RX_ACT_TBL. - * - * @param dev - * Pointer to Ethernet device. - * @param mark_id - * ID of MARK action, zero means default flow for META. - * @param[out] error - * Perform verbose error reporting if not NULL. - * - * @return - * Associated resource on success, NULL otherwise and rte_errno is set. - */ -static struct mlx5_flow_mreg_copy_resource * -flow_mreg_add_copy_action(struct rte_eth_dev *dev, uint32_t mark_id, - struct rte_flow_error *error) +struct mlx5_hlist_entry * +flow_dv_mreg_create_cb(struct mlx5_hlist *list, uint64_t key, + void *cb_ctx) { + struct rte_eth_dev *dev = list->ctx; struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_mreg_copy_resource *mcp_res; + uint32_t idx = 0; + int ret; + uint32_t mark_id = key; struct rte_flow_attr attr = { .group = MLX5_FLOW_MREG_CP_TABLE_GROUP, .ingress = 1, @@ -3003,30 +2984,22 @@ struct mlx5_flow_tunnel_info { struct rte_flow_action actions[] = { [3] = { .type = RTE_FLOW_ACTION_TYPE_END, }, }; - struct mlx5_flow_mreg_copy_resource *mcp_res; - uint32_t idx = 0; - int ret; /* Fill the register fileds in the flow. */ - ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error); + ret = mlx5_flow_get_reg_id(ctx->dev, MLX5_FLOW_MARK, 0, ctx->error); if (ret < 0) return NULL; tag_spec.id = ret; - ret = mlx5_flow_get_reg_id(dev, MLX5_METADATA_RX, 0, error); + ret = mlx5_flow_get_reg_id(ctx->dev, MLX5_METADATA_RX, 0, ctx->error); if (ret < 0) return NULL; cp_mreg.src = ret; - /* Check if already registered. */ - MLX5_ASSERT(priv->mreg_cp_tbl); - mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, mark_id, NULL); - if (mcp_res) { - /* For non-default rule. 
*/ - if (mark_id != MLX5_DEFAULT_COPY_ID) - mcp_res->refcnt++; - MLX5_ASSERT(mark_id != MLX5_DEFAULT_COPY_ID || - mcp_res->refcnt == 1); - return mcp_res; + mcp_res = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_MCP], &idx); + if (!mcp_res) { + rte_errno = ENOMEM; + return NULL; } + mcp_res->idx = idx; /* Provide the full width of FLAG specific value. */ if (mark_id == (priv->sh->dv_regc0_mask & MLX5_FLOW_MARK_DEFAULT)) tag_spec.data = MLX5_FLOW_MARK_DEFAULT; @@ -3076,39 +3049,68 @@ struct mlx5_flow_tunnel_info { .type = RTE_FLOW_ACTION_TYPE_END, }; } - /* Build a new entry. */ - mcp_res = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_MCP], &idx); - if (!mcp_res) { - rte_errno = ENOMEM; - return NULL; - } - mcp_res->idx = idx; /* * The copy Flows are not included in any list. There * ones are referenced from other Flows and can not - * be applied, removed, deleted in ardbitrary order + * be applied, removed, deleted in arbitrary order * by list traversing. */ - mcp_res->rix_flow = flow_list_create(dev, NULL, &attr, items, - actions, false, error); - if (!mcp_res->rix_flow) - goto error; - mcp_res->refcnt++; - mcp_res->hlist_ent.key = mark_id; - ret = !mlx5_hlist_insert(priv->mreg_cp_tbl, &mcp_res->hlist_ent); - MLX5_ASSERT(!ret); - if (ret) - goto error; - return mcp_res; -error: - if (mcp_res->rix_flow) - flow_list_destroy(dev, NULL, mcp_res->rix_flow); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], mcp_res->idx); - return NULL; + mcp_res->rix_flow = flow_list_create(ctx->dev, NULL, &attr, items, + actions, false, ctx->error); + if (!mcp_res->rix_flow) { + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], idx); + return NULL; + } + return &mcp_res->hlist_ent; } /** - * Release flow in RX_CP_TBL. + * Add a flow of copying flow metadata registers in RX_CP_TBL. + * + * As mark_id is unique, if there's already a registered flow for the mark_id, + * return by increasing the reference counter of the resource. Otherwise, create + * the resource (mcp_res) and flow. + * + * Flow looks like, + * - If ingress port is ANY and reg_c[1] is mark_id, + * flow_tag := mark_id, reg_b := reg_c[0] and jump to RX_ACT_TBL. + * + * For default flow (zero mark_id), flow is like, + * - If ingress port is ANY, + * reg_b := reg_c[0] and jump to RX_ACT_TBL. + * + * @param dev + * Pointer to Ethernet device. + * @param mark_id + * ID of MARK action, zero means default flow for META. + * @param[out] error + * Perform verbose error reporting if not NULL. + * + * @return + * Associated resource on success, NULL otherwise and rte_errno is set. + */ +static struct mlx5_flow_mreg_copy_resource * +flow_mreg_add_copy_action(struct rte_eth_dev *dev, uint32_t mark_id, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hlist_entry *entry; + struct mlx5_flow_cb_ctx ctx = { + .dev = dev, + .error = error, + }; + + /* Check if already registered. */ + MLX5_ASSERT(priv->mreg_cp_tbl); + entry = mlx5_hlist_register(priv->mreg_cp_tbl, mark_id, &ctx); + if (!entry) + return NULL; + return container_of(entry, struct mlx5_flow_mreg_copy_resource, + hlist_ent); +} + +/** + * Stop flow in RX_CP_TBL. * * @param dev * Pointer to Ethernet device. @@ -3116,117 +3118,102 @@ struct mlx5_flow_tunnel_info { * Parent flow for wich copying is provided. 
*/ static void -flow_mreg_del_copy_action(struct rte_eth_dev *dev, - struct rte_flow *flow) +flow_mreg_stop_copy_action(struct rte_eth_dev *dev, + struct rte_flow *flow) { struct mlx5_flow_mreg_copy_resource *mcp_res; struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow *mcp_flow; - if (!flow->rix_mreg_copy) + if (!flow->rix_mreg_copy || !flow->copy_applied) return; mcp_res = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_MCP], flow->rix_mreg_copy); - if (!mcp_res || !priv->mreg_cp_tbl) + if (!mcp_res) return; - if (flow->copy_applied) { - MLX5_ASSERT(mcp_res->appcnt); - flow->copy_applied = 0; - --mcp_res->appcnt; - if (!mcp_res->appcnt) { - struct rte_flow *mcp_flow = mlx5_ipool_get - (priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], - mcp_res->rix_flow); - - if (mcp_flow) - flow_drv_remove(dev, mcp_flow); - } - } - /* - * We do not check availability of metadata registers here, - * because copy resources are not allocated in this case. - */ - if (--mcp_res->refcnt) + flow->copy_applied = 0; + if (__atomic_sub_fetch(&mcp_res->appcnt, 1, __ATOMIC_ACQ_REL)) return; + mcp_flow = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], + mcp_res->rix_flow); + if (mcp_flow) + flow_drv_remove(dev, mcp_flow); +} + +void +flow_dv_mreg_remove_cb(struct mlx5_hlist *list, struct mlx5_hlist_entry *entry) +{ + struct mlx5_flow_mreg_copy_resource *mcp_res = + container_of(entry, typeof(*mcp_res), hlist_ent); + struct rte_eth_dev *dev = list->ctx; + struct mlx5_priv *priv = dev->data->dev_private; + MLX5_ASSERT(mcp_res->rix_flow); flow_list_destroy(dev, NULL, mcp_res->rix_flow); - mlx5_hlist_remove(priv->mreg_cp_tbl, &mcp_res->hlist_ent); mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], mcp_res->idx); - flow->rix_mreg_copy = 0; } /** - * Start flow in RX_CP_TBL. + * Release flow in RX_CP_TBL. * * @param dev * Pointer to Ethernet device. * @flow * Parent flow for wich copying is provided. - * - * @return - * 0 on success, a negative errno value otherwise and rte_errno is set. */ -static int -flow_mreg_start_copy_action(struct rte_eth_dev *dev, - struct rte_flow *flow) +static void +flow_mreg_del_copy_action(struct rte_eth_dev *dev, struct rte_flow *flow) { struct mlx5_flow_mreg_copy_resource *mcp_res; struct mlx5_priv *priv = dev->data->dev_private; - int ret; - if (!flow->rix_mreg_copy || flow->copy_applied) - return 0; + if (!flow->rix_mreg_copy) + return; mcp_res = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_MCP], flow->rix_mreg_copy); - if (!mcp_res) - return 0; - if (!mcp_res->appcnt) { - struct rte_flow *mcp_flow = mlx5_ipool_get - (priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], - mcp_res->rix_flow); - - if (mcp_flow) { - ret = flow_drv_apply(dev, mcp_flow, NULL); - if (ret) - return ret; - } - } - ++mcp_res->appcnt; - flow->copy_applied = 1; - return 0; + if (!mcp_res || !priv->mreg_cp_tbl) + return; + flow_mreg_stop_copy_action(dev, flow); + mlx5_hlist_unregister(priv->mreg_cp_tbl, &mcp_res->hlist_ent); } /** - * Stop flow in RX_CP_TBL. + * Start flow in RX_CP_TBL. * * @param dev * Pointer to Ethernet device. * @flow * Parent flow for wich copying is provided. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ -static void -flow_mreg_stop_copy_action(struct rte_eth_dev *dev, - struct rte_flow *flow) +static int +flow_mreg_start_copy_action(struct rte_eth_dev *dev, + struct rte_flow *flow) { struct mlx5_flow_mreg_copy_resource *mcp_res; struct mlx5_priv *priv = dev->data->dev_private; + int ret; - if (!flow->rix_mreg_copy || !flow->copy_applied) - return; + if (!flow->rix_mreg_copy || flow->copy_applied) + return 0; mcp_res = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_MCP], flow->rix_mreg_copy); if (!mcp_res) - return; - MLX5_ASSERT(mcp_res->appcnt); - --mcp_res->appcnt; - flow->copy_applied = 0; - if (!mcp_res->appcnt) { - struct rte_flow *mcp_flow = mlx5_ipool_get - (priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], - mcp_res->rix_flow); - - if (mcp_flow) - flow_drv_remove(dev, mcp_flow); + return 0; + if (__atomic_fetch_add(&mcp_res->appcnt, 1, __ATOMIC_ACQ_REL)) { + flow->copy_applied = 1; + return 0; + } + struct rte_flow *mcp_flow = mlx5_ipool_get + (priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], + mcp_res->rix_flow); + if (mcp_flow) { + ret = flow_drv_apply(dev, mcp_flow, NULL); + if (ret) + return ret; } + flow->copy_applied = 1; + return 0; } /** @@ -3238,20 +3225,17 @@ struct mlx5_flow_tunnel_info { static void flow_mreg_del_default_copy_action(struct rte_eth_dev *dev) { - struct mlx5_flow_mreg_copy_resource *mcp_res; + struct mlx5_hlist_entry *entry; struct mlx5_priv *priv = dev->data->dev_private; /* Check if default flow is registered. */ if (!priv->mreg_cp_tbl) return; - mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, - MLX5_DEFAULT_COPY_ID, NULL); - if (!mcp_res) + entry = mlx5_hlist_lookup(priv->mreg_cp_tbl, + MLX5_DEFAULT_COPY_ID, NULL); + if (!entry) return; - MLX5_ASSERT(mcp_res->rix_flow); - flow_list_destroy(dev, NULL, mcp_res->rix_flow); - mlx5_hlist_remove(priv->mreg_cp_tbl, &mcp_res->hlist_ent); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], mcp_res->idx); + mlx5_hlist_unregister(priv->mreg_cp_tbl, entry); } /** @@ -3345,7 +3329,8 @@ struct mlx5_flow_tunnel_info { return -rte_errno; flow->rix_mreg_copy = mcp_res->idx; if (dev->data->dev_started) { - mcp_res->appcnt++; + __atomic_add_fetch(&mcp_res->appcnt, 1, + __ATOMIC_RELAXED); flow->copy_applied = 1; } return 0; @@ -3358,7 +3343,8 @@ struct mlx5_flow_tunnel_info { return -rte_errno; flow->rix_mreg_copy = mcp_res->idx; if (dev->data->dev_started) { - mcp_res->appcnt++; + __atomic_add_fetch(&mcp_res->appcnt, 1, + __ATOMIC_RELAXED); flow->copy_applied = 1; } return 0; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 7effacc..41969c2 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -508,7 +508,6 @@ struct mlx5_flow_mreg_copy_resource { struct mlx5_hlist_entry hlist_ent; LIST_ENTRY(mlx5_flow_mreg_copy_resource) next; /* List entry for device flows. */ - uint32_t refcnt; /* Reference counter. */ uint32_t appcnt; /* Apply/Remove counter. */ uint32_t idx; uint32_t rix_flow; /* Built flow for copy. 
*/ @@ -1101,4 +1100,9 @@ struct mlx5_hlist_entry *flow_dv_modify_create_cb(struct mlx5_hlist *list, void flow_dv_modify_remove_cb(struct mlx5_hlist *list, struct mlx5_hlist_entry *entry); +struct mlx5_hlist_entry *flow_dv_mreg_create_cb(struct mlx5_hlist *list, + uint64_t key, void *ctx); +void flow_dv_mreg_remove_cb(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry); + #endif /* RTE_PMD_MLX5_FLOW_H_ */ From patchwork Tue Oct 6 11:48:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79766 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9BF9FA04BB; Tue, 6 Oct 2020 13:53:40 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 366A01B96B; Tue, 6 Oct 2020 13:49:43 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 090EB1B872 for ; Tue, 6 Oct 2020 13:49:39 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:37 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0X028553; Tue, 6 Oct 2020 14:49:36 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org Date: Tue, 6 Oct 2020 19:48:56 +0800 Message-Id: <1601984948-313027-14-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 13/25] net/mlx5: make header reformat action thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit applies the thread safe hash list to the header reformat action. That makes the header reformat action thread safe. 
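Since the 64-bit hash key for a reformat buffer is built from a checksum, two different buffers may produce the same key; the match callback therefore still compares the full resource. Below is a minimal sketch of the registration path (the helper name encap_decap_get() is hypothetical and the key construction is simplified; the diff below packs more fields into the key):

/* Sketch: checksum-based key plus exact match under the list lock. */
static struct mlx5_flow_dv_encap_decap_resource *
encap_decap_get(struct mlx5_dev_ctx_shared *sh,
		struct mlx5_flow_dv_encap_decap_resource *res,
		struct rte_flow_error *error)
{
	struct mlx5_flow_cb_ctx ctx = {
		.error = error,
		.data = res,
	};
	struct mlx5_hlist_entry *entry;

	/* Key is only a checksum: collisions are resolved by the
	 * match callback comparing the complete reformat buffers. */
	res->entry.key = __rte_raw_cksum(res->buf, res->size, 0);
	entry = mlx5_hlist_register(sh->encaps_decaps, res->entry.key, &ctx);
	if (!entry)
		return NULL;
	return container_of(entry, typeof(*res), entry);
}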
Signed-off-by: Suanming Mou --- drivers/net/mlx5/linux/mlx5_os.c | 6 +- drivers/net/mlx5/mlx5_flow.h | 6 ++ drivers/net/mlx5/mlx5_flow_dv.c | 181 +++++++++++++++++++++------------------ 3 files changed, 111 insertions(+), 82 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 50d3d99..24cf348 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -262,12 +262,16 @@ snprintf(s, sizeof(s), "%s_encaps_decaps", sh->ibdev_name); sh->encaps_decaps = mlx5_hlist_create(s, MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ, - 0, false, NULL, NULL, NULL); + 0, true, + flow_dv_encap_decap_create_cb, + flow_dv_encap_decap_match_cb, + flow_dv_encap_decap_remove_cb); if (!sh->encaps_decaps) { DRV_LOG(ERR, "encap decap hash creation failed"); err = ENOMEM; goto error; } + sh->encaps_decaps->ctx = sh; #endif #ifdef HAVE_MLX5DV_DR void *domain; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 41969c2..1fe0b30 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1105,4 +1105,10 @@ struct mlx5_hlist_entry *flow_dv_mreg_create_cb(struct mlx5_hlist *list, void flow_dv_mreg_remove_cb(struct mlx5_hlist *list, struct mlx5_hlist_entry *entry); +int flow_dv_encap_decap_match_cb(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry, void *cb_ctx); +struct mlx5_hlist_entry *flow_dv_encap_decap_create_cb(struct mlx5_hlist *list, + uint64_t key, void *cb_ctx); +void flow_dv_encap_decap_remove_cb(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index d19f697..b884d8c 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2548,21 +2548,24 @@ struct field_modify_info modify_tcp[] = { /** * Match encap_decap resource. * + * @param list + * Pointer to the hash list. * @param entry * Pointer to exist resource entry object. - * @param ctx + * @param cb_ctx * Pointer to new encap_decap resource. * * @return - * 0 on matching, -1 otherwise. + * 0 on matching, non-zero otherwise. */ -static int -flow_dv_encap_decap_resource_match(struct mlx5_hlist_entry *entry, void *ctx) +int +flow_dv_encap_decap_match_cb(struct mlx5_hlist *list __rte_unused, + struct mlx5_hlist_entry *entry, void *cb_ctx) { - struct mlx5_flow_dv_encap_decap_resource *resource; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_encap_decap_resource *resource = ctx->data; struct mlx5_flow_dv_encap_decap_resource *cache_resource; - resource = (struct mlx5_flow_dv_encap_decap_resource *)ctx; cache_resource = container_of(entry, struct mlx5_flow_dv_encap_decap_resource, entry); @@ -2579,6 +2582,63 @@ struct field_modify_info modify_tcp[] = { } /** + * Allocate encap_decap resource. + * + * @param list + * Pointer to the hash list. + * @param key + * Hash key of the new entry. + * @param cb_ctx + * Pointer to new encap_decap resource. + * + * @return + * Pointer to the created entry, NULL otherwise. 
+ */ +struct mlx5_hlist_entry * +flow_dv_encap_decap_create_cb(struct mlx5_hlist *list, + uint64_t key __rte_unused, + void *cb_ctx) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5dv_dr_domain *domain; + struct mlx5_flow_dv_encap_decap_resource *resource = ctx->data; + struct mlx5_flow_dv_encap_decap_resource *cache_resource; + uint32_t idx; + int ret; + + if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) + domain = sh->fdb_domain; + else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) + domain = sh->rx_domain; + else + domain = sh->tx_domain; + /* Register new encap/decap resource. */ + cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], + &idx); + if (!cache_resource) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate resource memory"); + return NULL; + } + *cache_resource = *resource; + cache_resource->idx = idx; + ret = mlx5_flow_os_create_flow_action_packet_reformat + (sh->ctx, domain, cache_resource, + &cache_resource->action); + if (ret) { + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], idx); + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "cannot create action"); + return NULL; + } + + return &cache_resource->entry; +} + +/** * Find existing encap/decap resource or create and register a new one. * * @param[in, out] dev @@ -2602,8 +2662,6 @@ struct field_modify_info modify_tcp[] = { { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_flow_dv_encap_decap_resource *cache_resource; - struct mlx5dv_dr_domain *domain; struct mlx5_hlist_entry *entry; union mlx5_flow_encap_decap_key encap_decap_key = { { @@ -2614,68 +2672,22 @@ struct field_modify_info modify_tcp[] = { .cksum = 0, } }; - int ret; + struct mlx5_flow_cb_ctx ctx = { + .error = error, + .data = resource, + }; resource->flags = dev_flow->dv.group ? 0 : 1; - if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) - domain = sh->fdb_domain; - else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) - domain = sh->rx_domain; - else - domain = sh->tx_domain; encap_decap_key.cksum = __rte_raw_cksum(resource->buf, resource->size, 0); resource->entry.key = encap_decap_key.v64; - /* Lookup a matching resource from cache. */ - entry = mlx5_hlist_lookup_ex(sh->encaps_decaps, resource->entry.key, - flow_dv_encap_decap_resource_match, - (void *)resource); - if (entry) { - cache_resource = container_of(entry, - struct mlx5_flow_dv_encap_decap_resource, entry); - DRV_LOG(DEBUG, "encap/decap resource %p: refcnt %d++", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); - rte_atomic32_inc(&cache_resource->refcnt); - dev_flow->handle->dvh.rix_encap_decap = cache_resource->idx; - dev_flow->dv.encap_decap = cache_resource; - return 0; - } - /* Register new encap/decap resource. 
*/ - cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], - &dev_flow->handle->dvh.rix_encap_decap); - if (!cache_resource) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot allocate resource memory"); - *cache_resource = *resource; - cache_resource->idx = dev_flow->handle->dvh.rix_encap_decap; - ret = mlx5_flow_os_create_flow_action_packet_reformat - (sh->ctx, domain, cache_resource, - &cache_resource->action); - if (ret) { - mlx5_free(cache_resource); - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot create action"); - } - rte_atomic32_init(&cache_resource->refcnt); - rte_atomic32_inc(&cache_resource->refcnt); - if (mlx5_hlist_insert_ex(sh->encaps_decaps, &cache_resource->entry, - flow_dv_encap_decap_resource_match, - (void *)cache_resource)) { - claim_zero(mlx5_flow_os_destroy_flow_action - (cache_resource->action)); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_DECAP_ENCAP], - cache_resource->idx); - return rte_flow_error_set(error, EEXIST, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "action exist"); - } - dev_flow->dv.encap_decap = cache_resource; - DRV_LOG(DEBUG, "new encap/decap resource %p: refcnt %d++", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); + entry = mlx5_hlist_register(sh->encaps_decaps, resource->entry.key, + &ctx); + if (!entry) + return -rte_errno; + resource = container_of(entry, typeof(*resource), entry); + dev_flow->dv.encap_decap = resource; + dev_flow->handle->dvh.rix_encap_decap = resource->idx; return 0; } @@ -9089,6 +9101,26 @@ struct mlx5_hlist_entry * } /** + * Release encap_decap resource. + * + * @param list + * Pointer to the hash list. + * @param entry + * Pointer to exist resource entry object. + */ +void +flow_dv_encap_decap_remove_cb(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_dv_encap_decap_resource *res = + container_of(entry, typeof(*res), entry); + + claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], res->idx); +} + +/** * Release an encap/decap resource. 
* * @param dev @@ -9104,28 +9136,15 @@ struct mlx5_hlist_entry * struct mlx5_flow_handle *handle) { struct mlx5_priv *priv = dev->data->dev_private; - uint32_t idx = handle->dvh.rix_encap_decap; struct mlx5_flow_dv_encap_decap_resource *cache_resource; cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_DECAP_ENCAP], - idx); + handle->dvh.rix_encap_decap); if (!cache_resource) return 0; MLX5_ASSERT(cache_resource->action); - DRV_LOG(DEBUG, "encap/decap resource %p: refcnt %d--", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); - if (rte_atomic32_dec_and_test(&cache_resource->refcnt)) { - claim_zero(mlx5_flow_os_destroy_flow_action - (cache_resource->action)); - mlx5_hlist_remove(priv->sh->encaps_decaps, - &cache_resource->entry); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_DECAP_ENCAP], idx); - DRV_LOG(DEBUG, "encap/decap resource %p: removed", - (void *)cache_resource); - return 0; - } - return 1; + return mlx5_hlist_unregister(priv->sh->encaps_decaps, + &cache_resource->entry); } /** From patchwork Tue Oct 6 11:48:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79768 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id E67B1A04BB; Tue, 6 Oct 2020 13:54:31 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7107C1BA68; Tue, 6 Oct 2020 13:49:46 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 555841B9F0 for ; Tue, 6 Oct 2020 13:49:45 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:39 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0Y028553; Tue, 6 Oct 2020 14:49:37 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org Date: Tue, 6 Oct 2020 19:48:57 +0800 Message-Id: <1601984948-313027-15-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 14/25] net/mlx5: remove unused hash list operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In previous commits the hash list objects have been converted to new thread safe hash list. The legacy hash list code can be removed now. 
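For context, the migration that made these helpers dead code looks roughly as below. The old-style caller is reconstructed from the removed API, it is not taken verbatim from the tree, and create_entry()/free_entry() are placeholders:

/* Before: caller-managed two-step lookup/insert with a match callback,
 * where losing the insertion race had to be handled by the caller. */
entry = mlx5_hlist_lookup_ex(h, key, match_cb, ctx);
if (!entry) {
	entry = create_entry(ctx);		/* caller allocates */
	if (entry && mlx5_hlist_insert_ex(h, entry, match_cb, ctx))
		free_entry(entry);		/* lost the insertion race */
}

/* After: one call under the list lock, with creation, matching and
 * removal owned by the registered callbacks. */
entry = mlx5_hlist_register(h, key, ctx);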
Signed-off-by: Suanming Mou --- drivers/net/mlx5/mlx5_utils.c | 38 ------------------------- drivers/net/mlx5/mlx5_utils.h | 66 ------------------------------------------- 2 files changed, 104 deletions(-) diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index 4eb3db0..387a988 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -164,44 +164,6 @@ struct mlx5_hlist_entry* return entry; } -struct mlx5_hlist_entry * -mlx5_hlist_lookup_ex(struct mlx5_hlist *h, uint64_t key, - mlx5_hlist_match_callback_fn cb, void *ctx) -{ - uint32_t idx; - struct mlx5_hlist_head *first; - struct mlx5_hlist_entry *node; - - MLX5_ASSERT(h && cb && ctx); - idx = rte_hash_crc_8byte(key, 0) & h->mask; - first = &h->heads[idx]; - LIST_FOREACH(node, first, next) { - if (!cb(node, ctx)) - return node; - } - return NULL; -} - -int -mlx5_hlist_insert_ex(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry, - mlx5_hlist_match_callback_fn cb, void *ctx) -{ - uint32_t idx; - struct mlx5_hlist_head *first; - struct mlx5_hlist_entry *node; - - MLX5_ASSERT(h && entry && cb && ctx); - idx = rte_hash_crc_8byte(entry->key, 0) & h->mask; - first = &h->heads[idx]; - /* No need to reuse the lookup function. */ - LIST_FOREACH(node, first, next) { - if (!cb(node, ctx)) - return -EEXIST; - } - LIST_INSERT_HEAD(first, entry, next); - return 0; -} - int mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry) { diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index 8719dee..479dd10 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -21,12 +21,6 @@ #include "mlx5_defs.h" -#define mlx5_hlist_remove(h, e) \ - mlx5_hlist_unregister(h, e) - -#define mlx5_hlist_insert(h, e) \ - mlx5_hlist_register(h, 0, e) - /* Convert a bit number to the corresponding 64-bit mask */ #define MLX5_BITSHIFT(v) (UINT64_C(1) << (v)) @@ -267,23 +261,6 @@ struct mlx5_hlist_entry { /** Structure for hash head. */ LIST_HEAD(mlx5_hlist_head, mlx5_hlist_entry); -/** Type of function that is used to handle the data before freeing. */ -typedef void (*mlx5_hlist_destroy_callback_fn)(void *p, void *ctx); - -/** - * Type of function for user defined matching. - * - * @param entry - * The entry in the list. - * @param ctx - * The pointer to new entry context. - * - * @return - * 0 if matching, -1 otherwise. - */ -typedef int (*mlx5_hlist_match_callback_fn)(struct mlx5_hlist_entry *entry, - void *ctx); - /** * Type of callback function for entry removal. * @@ -405,49 +382,6 @@ struct mlx5_hlist_entry *mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx); /** - * Extended routine to search an entry matching the context with - * user defined match function. - * - * @param h - * Pointer to the hast list table. - * @param key - * Key for the searching entry. - * @param cb - * Callback function to match the node with context. - * @param ctx - * Common context parameter used by callback function. - * - * @return - * Pointer of the hlist entry if found, NULL otherwise. - */ -struct mlx5_hlist_entry *mlx5_hlist_lookup_ex(struct mlx5_hlist *h, - uint64_t key, - mlx5_hlist_match_callback_fn cb, - void *ctx); - -/** - * Extended routine to insert an entry to the list with key collisions. - * - * For the list have key collision, the extra user defined match function - * allows node with same key will be inserted. - * - * @param h - * Pointer to the hast list table. - * @param entry - * Entry to be inserted into the hash list table. 
- * @param cb - * Callback function to match the node with context. - * @param ctx - * Common context parameter used by callback function. - * - * @return - * - zero for success. - * - -EEXIST if the entry is already inserted. - */ -int mlx5_hlist_insert_ex(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry, - mlx5_hlist_match_callback_fn cb, void *ctx); - -/** * Insert an entry to the hash list table, the entry is only part of whole data * element and a 64B key is used for matching. User should construct the key or * give a calculated hash signature and guarantee there is no collision. From patchwork Tue Oct 6 11:48:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79769 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0B5C5A04BB; Tue, 6 Oct 2020 13:54:53 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E84EB1BA96; Tue, 6 Oct 2020 13:49:47 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 25D041B9EB for ; Tue, 6 Oct 2020 13:49:45 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:40 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0Z028553; Tue, 6 Oct 2020 14:49:39 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:48:58 +0800 Message-Id: <1601984948-313027-16-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 15/25] net/mlx5: introduce thread safe linked list cache X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Xueming Li New API of linked list for cache: - optimized for a small number of cached entries - optimized for read-mostly lists - thread safe - since the number of entries is limited, entries are allocated by the API - for dynamic entry sizes, pass 0 as the entry size and the creation callback allocates the entry - since the number of entries is limited, no indexed pool is needed for the memory; the API removes an entry and frees it with mlx5_free - the search API is not supposed to be used in multi-thread environments A short usage sketch follows below. 
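A minimal usage sketch (the entry type and callback names here are illustrative only; the real consumers are converted in the following patches of this series):

struct my_entry {
	struct mlx5_cache_entry entry; /* must be first for container_of(). */
	uint32_t val;
};

static struct mlx5_cache_entry *
my_create_cb(struct mlx5_cache_list *list __rte_unused,
	     struct mlx5_cache_entry *e __rte_unused, void *ctx)
{
	struct my_entry *me = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*me), 0,
					  SOCKET_ID_ANY);

	if (!me)
		return NULL;
	me->val = *(uint32_t *)ctx;
	return &me->entry;
}

static int
my_match_cb(struct mlx5_cache_list *list __rte_unused,
	    struct mlx5_cache_entry *e, void *ctx)
{
	struct my_entry *me = container_of(e, struct my_entry, entry);

	return me->val != *(uint32_t *)ctx; /* 0 means match. */
}

static void
my_remove_cb(struct mlx5_cache_list *list __rte_unused,
	     struct mlx5_cache_entry *e)
{
	mlx5_free(container_of(e, struct my_entry, entry));
}

static int
cache_demo(void)
{
	struct mlx5_cache_list list;
	struct mlx5_cache_entry *e;
	uint32_t key = 42;

	/* Entry size 0: allocation is left to the create callback. */
	if (mlx5_cache_list_init(&list, "demo", 0, NULL,
				 my_create_cb, my_match_cb, my_remove_cb))
		return -1;
	e = mlx5_cache_register(&list, &key);	/* creates or takes a ref. */
	if (e)
		mlx5_cache_unregister(&list, e); /* destroys on last ref. */
	mlx5_cache_list_destroy(&list);
	return 0;
}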
Signed-off-by: Xueming Li --- drivers/net/mlx5/mlx5_utils.c | 170 +++++++++++++++++++++++++++++++++++++++++ drivers/net/mlx5/mlx5_utils.h | 172 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 342 insertions(+) diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index 387a988..c0b4ae5 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -223,6 +223,176 @@ struct mlx5_hlist_entry* mlx5_free(h); } +/********************* Cache list ************************/ + +static struct mlx5_cache_entry * +mlx5_clist_default_create_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry __rte_unused, + void *ctx __rte_unused) +{ + return mlx5_malloc(MLX5_MEM_ZERO, list->entry_sz, 0, SOCKET_ID_ANY); +} + +static void +mlx5_clist_default_remove_cb(struct mlx5_cache_list *list __rte_unused, + struct mlx5_cache_entry *entry) +{ + mlx5_free(entry); +} + +int +mlx5_cache_list_init(struct mlx5_cache_list *list, const char *name, + uint32_t entry_size, void *ctx, + mlx5_cache_create_cb cb_create, + mlx5_cache_match_cb cb_match, + mlx5_cache_remove_cb cb_remove) +{ + MLX5_ASSERT(list); + if (!cb_match || (!cb_create ^ !cb_remove)) + return -1; + if (name) + snprintf(list->name, sizeof(list->name), "%s", name); + list->entry_sz = entry_size; + list->ctx = ctx; + list->cb_create = cb_create ? cb_create : mlx5_clist_default_create_cb; + list->cb_match = cb_match; + list->cb_remove = cb_remove ? cb_remove : mlx5_clist_default_remove_cb; + rte_rwlock_init(&list->lock); + DRV_LOG(DEBUG, "Cache list %s initialized.", list->name); + LIST_INIT(&list->head); + return 0; +} + +static struct mlx5_cache_entry * +__cache_lookup(struct mlx5_cache_list *list, void *ctx, bool reuse) +{ + struct mlx5_cache_entry *entry; + + LIST_FOREACH(entry, &list->head, next) { + if (!__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED)) + /* Ignore entry in middle of removal */ + continue; + if (list->cb_match(list, entry, ctx)) + continue; + if (reuse) { + __atomic_add_fetch(&entry->ref_cnt, 1, + __ATOMIC_RELAXED); + DRV_LOG(DEBUG, "cache list %s entry %p ref++: %u", + list->name, (void *)entry, entry->ref_cnt); + } + break; + } + return entry; + } + +static struct mlx5_cache_entry * +cache_lookup(struct mlx5_cache_list *list, void *ctx, bool reuse) +{ + struct mlx5_cache_entry *entry; + + rte_rwlock_read_lock(&list->lock); + entry = __cache_lookup(list, ctx, reuse); + rte_rwlock_read_unlock(&list->lock); + return entry; +} + +struct mlx5_cache_entry * +mlx5_cache_lookup(struct mlx5_cache_list *list, void *ctx) +{ + return __cache_lookup(list, ctx, false); +} + +struct mlx5_cache_entry * +mlx5_cache_register(struct mlx5_cache_list *list, void *ctx) +{ + struct mlx5_cache_entry *entry; + uint32_t prev_gen_cnt = 0; + + MLX5_ASSERT(list); + prev_gen_cnt = __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE); + /* Lookup with read lock, reuse if found. */ + entry = cache_lookup(list, ctx, true); + if (entry) + return entry; + /* Not found, append with write lock - block read from other threads. */ + rte_rwlock_write_lock(&list->lock); + /* If list changed by other threads before lock, search again. 
*/ + if (prev_gen_cnt != __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE)) { + /* Lookup and reuse w/o read lock */ + entry = __cache_lookup(list, ctx, true); + if (entry) + goto done; + } + entry = list->cb_create(list, entry, ctx); + if (!entry) { + DRV_LOG(ERR, "Failed to init cache list %s entry %p", + list->name, (void *)entry); + goto done; + } + entry->ref_cnt = 1; + LIST_INSERT_HEAD(&list->head, entry, next); + __atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_ACQUIRE); + __atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE); + DRV_LOG(DEBUG, "cache list %s entry %p new: %u", + list->name, (void *)entry, entry->ref_cnt); +done: + rte_rwlock_write_unlock(&list->lock); + return entry; +} + +int +mlx5_cache_unregister(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry) +{ + uint32_t ref_cnt; + + MLX5_ASSERT(entry && entry->next.le_prev); + MLX5_ASSERT(__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED)); + + ref_cnt = __atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQ_REL); + DRV_LOG(DEBUG, "cache list %s entry %p ref--: %u", + list->name, (void *)entry, entry->ref_cnt); + if (ref_cnt) + return 1; + rte_rwlock_write_lock(&list->lock); + if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED)) { + rte_rwlock_write_unlock(&list->lock); + return 1; + } + __atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_ACQUIRE); + __atomic_sub_fetch(&list->count, 1, __ATOMIC_ACQUIRE); + LIST_REMOVE(entry, next); + list->cb_remove(list, entry); + rte_rwlock_write_unlock(&list->lock); + DRV_LOG(DEBUG, "cache list %s entry %p removed", + list->name, (void *)entry); + return 0; +} + +void +mlx5_cache_list_destroy(struct mlx5_cache_list *list) +{ + struct mlx5_cache_entry *entry; + + MLX5_ASSERT(list); + if (__atomic_load_n(&list->count, __ATOMIC_RELAXED)) { + /* no LIST_FOREACH_SAFE, using while instead */ + while (!LIST_EMPTY(&list->head)) { + entry = LIST_FIRST(&list->head); + LIST_REMOVE(entry, next); + list->cb_remove(list, entry); + DRV_LOG(DEBUG, "cache list %s entry %p destroyed", + list->name, (void *)entry); + } + } + memset(list, 0, sizeof(*list)); +} + /********************* Indexed pool **********************/ static inline void diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index 479dd10..5c39f98 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -422,6 +422,178 @@ struct mlx5_hlist_entry *mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, */ void mlx5_hlist_destroy(struct mlx5_hlist *h); +/************************ cache list *****************************/ + +/** Maximum size of string for naming. */ +#define MLX5_NAME_SIZE 32 + +struct mlx5_cache_list; + +/** + * Structure of the entry in the cache list, user should define its own struct + * that contains this in order to store the data. + */ +struct mlx5_cache_entry { + LIST_ENTRY(mlx5_cache_entry) next; /* entry pointers in the list. */ + uint32_t ref_cnt; /* reference count. */ +}; + +/** + * Type of callback function for entry removal. + * + * @param list + * The cache list. + * @param entry + * The entry in the list. + */ +typedef void (*mlx5_cache_remove_cb)(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry); + +/** + * Type of function for user defined matching. + * + * @param list + * The cache list. + * @param entry + * The entry in the list. + * @param ctx + * The pointer to new entry context. 
+ * + * @return + * 0 if matching, non-zero otherwise. + */ +typedef int (*mlx5_cache_match_cb)(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry, void *ctx); + +/** + * Type of function for user defined cache list entry creation. + * + * @param list + * The cache list. + * @param entry + * The newly allocated entry, or NULL if the list entry size is + * unspecified; in that case the callback has to allocate and return + * the new entry. + * @param ctx + * The pointer to new entry context. + * + * @return + * Pointer of entry on success, NULL otherwise. + */ +typedef struct mlx5_cache_entry *(*mlx5_cache_create_cb) + (struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry, + void *ctx); + +/** + * Linked cache list structure. + * + * An entry in the cache list can be reused if it already exists: its + * reference count increases and the existing entry is returned. + * + * When an entry is released, its reference count decreases and it is + * destroyed only when no references remain. + * + * The linked list cache is designed for a limited number of entries + * that are read mostly and rarely modified. + * + * For caching a huge number of entries, please consider the hash list + * cache instead. + * + */ +struct mlx5_cache_list { + char name[MLX5_NAME_SIZE]; /**< Name of the cache list. */ + uint32_t entry_sz; /**< Entry size, 0: use create callback. */ + rte_rwlock_t lock; /* read/write lock. */ + uint32_t gen_cnt; /* List modification will update generation count. */ + uint32_t count; /* number of entries in list. */ + void *ctx; /* user objects target to callback. */ + mlx5_cache_create_cb cb_create; /**< entry create callback. */ + mlx5_cache_match_cb cb_match; /**< entry match callback. */ + mlx5_cache_remove_cb cb_remove; /**< entry remove callback. */ + LIST_HEAD(mlx5_cache_head, mlx5_cache_entry) head; +}; + +/** + * Initialize a cache list. + * + * @param list + * Pointer to the cache list. + * @param name + * Name of the cache list. + * @param entry_size + * Entry size to allocate, 0 to allocate by creation callback. + * @param ctx + * Pointer to the list context data. + * @param cb_create + * Callback function for entry create. + * @param cb_match + * Callback function for entry match. + * @param cb_remove + * Callback function for entry remove. + * @return + * 0 on success, otherwise failure. + */ +int mlx5_cache_list_init(struct mlx5_cache_list *list, + const char *name, uint32_t entry_size, void *ctx, + mlx5_cache_create_cb cb_create, + mlx5_cache_match_cb cb_match, + mlx5_cache_remove_cb cb_remove); + +/** + * Search an entry matching the key. + * + * Result returned might be destroyed by other thread, must use + * this function only in main thread. + * + * @param list + * Pointer to the cache list. + * @param ctx + * Common context parameter used by entry callback function. + * + * @return + * Pointer of the cache entry if found, NULL otherwise. + */ +struct mlx5_cache_entry *mlx5_cache_lookup(struct mlx5_cache_list *list, + void *ctx); + +/** + * Reuse or create an entry to the cache list. + * + * @param list + * Pointer to the cache list. + * @param ctx + * Common context parameter used by callback function. + * + * @return + * Registered entry on success, NULL otherwise. + */ +struct mlx5_cache_entry *mlx5_cache_register(struct mlx5_cache_list *list, + void *ctx); + +/** + * Remove an entry from the cache list. + * + * User should guarantee the validity of the entry. + * + * @param list + * Pointer to the cache list. + * @param entry + * Entry to be removed from the cache list table. 
+ * @return + * 0 on entry removed, 1 on entry still referenced. + */ +int mlx5_cache_unregister(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry); + +/** + * Destroy the cache list. + * + * @param list + * Pointer to the cache list. + */ +void mlx5_cache_list_destroy(struct mlx5_cache_list *list); + +/********************************* indexed pool *************************/ + /** * This function allocates non-initialized memory entry from pool. * In NUMA systems, the memory entry allocated resides on the same From patchwork Tue Oct 6 11:48:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79770 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4DAA9A04BB; Tue, 6 Oct 2020 13:55:15 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 62CA11BAA8; Tue, 6 Oct 2020 13:49:49 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 261871B9EC for ; Tue, 6 Oct 2020 13:49:45 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:42 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0a028553; Tue, 6 Oct 2020 14:49:41 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org Date: Tue, 6 Oct 2020 19:48:59 +0800 Message-Id: <1601984948-313027-17-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 16/25] net/mlx5: make Rx queue thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit applies the cache linked list to Rx queue to make it thread safe. Signed-off-by: Suanming Mou --- drivers/net/mlx5/linux/mlx5_os.c | 5 + drivers/net/mlx5/mlx5.c | 1 + drivers/net/mlx5/mlx5.h | 24 +++- drivers/net/mlx5/mlx5_flow.h | 16 --- drivers/net/mlx5/mlx5_flow_dv.c | 20 +--- drivers/net/mlx5/mlx5_flow_verbs.c | 19 +-- drivers/net/mlx5/mlx5_rxq.c | 234 ++++++++++++++++++------------------- drivers/net/mlx5/mlx5_rxtx.h | 20 ++-- 8 files changed, 164 insertions(+), 175 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 24cf348..db7b0de 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1335,6 +1335,10 @@ err = ENOTSUP; goto error; } + mlx5_cache_list_init(&priv->hrxqs, "hrxq", 0, eth_dev, + mlx5_hrxq_create_cb, + mlx5_hrxq_match_cb, + mlx5_hrxq_remove_cb); /* Query availability of metadata reg_c's. 
*/ err = mlx5_flow_discover_mreg_c(eth_dev); if (err < 0) { @@ -1381,6 +1385,7 @@ mlx5_vlan_vmwa_exit(priv->vmwa_context); if (own_domain_id) claim_zero(rte_eth_switch_domain_free(priv->domain_id)); + mlx5_cache_list_destroy(&priv->hrxqs); mlx5_free(priv); if (eth_dev != NULL) eth_dev->data->dev_private = NULL; diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 61e5e69..fc9c5a9 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1219,6 +1219,7 @@ struct mlx5_dev_ctx_shared * close(priv->nl_socket_rdma); if (priv->vmwa_context) mlx5_vlan_vmwa_exit(priv->vmwa_context); + mlx5_cache_list_destroy(&priv->hrxqs); ret = mlx5_hrxq_verify(dev); if (ret) DRV_LOG(WARNING, "port %u some hash Rx queue still remain", diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index f11d783..97729a8 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -61,6 +61,13 @@ enum mlx5_reclaim_mem_mode { MLX5_RCM_AGGR, /* Reclaim PMD and rdma-core level. */ }; +/* Hash list callback context */ +struct mlx5_flow_cb_ctx { + struct rte_eth_dev *dev; + struct rte_flow_error *error; + void *data; +}; + /* Device attributes used in mlx5 PMD */ struct mlx5_dev_attr { uint64_t device_cap_flags_ex; @@ -664,6 +671,18 @@ struct mlx5_proc_priv { /* MTR list. */ TAILQ_HEAD(mlx5_flow_meters, mlx5_flow_meter); +/* RSS description. */ +struct mlx5_flow_rss_desc { + uint32_t level; + uint32_t queue_num; /**< Number of entries in @p queue. */ + uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */ + uint64_t hash_fields; /* Verbs Hash fields. */ + uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */ + uint32_t key_len; /**< RSS hash key len. */ + uint32_t tunnel; /**< Queue in tunnel. */ + uint16_t *queue; /**< Destination queues. */ +}; + #define MLX5_PROC_PRIV(port_id) \ ((struct mlx5_proc_priv *)rte_eth_devices[port_id].process_private) @@ -710,7 +729,7 @@ struct mlx5_ind_table_obj { /* Hash Rx queue. */ struct mlx5_hrxq { - ILIST_ENTRY(uint32_t)next; /* Index to the next element. */ + struct mlx5_cache_entry entry; /* Cache entry. */ rte_atomic32_t refcnt; /* Reference counter. */ struct mlx5_ind_table_obj *ind_table; /* Indirection table. */ RTE_STD_C11 @@ -723,6 +742,7 @@ struct mlx5_hrxq { #endif uint64_t hash_fields; /* Verbs Hash fields. */ uint32_t rss_key_len; /* Hash key length in bytes. */ + uint32_t idx; /* Hash Rx queue index*/ uint8_t rss_key[]; /* Hash key. */ }; @@ -788,7 +808,7 @@ struct mlx5_priv { struct mlx5_obj_ops obj_ops; /* HW objects operations. */ LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */ LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues. */ - uint32_t hrxqs; /* Verbs Hash Rx queues. */ + struct mlx5_cache_list hrxqs; /* Verbs Hash Rx queues. */ LIST_HEAD(txq, mlx5_txq_ctrl) txqsctrl; /* DPDK Tx queues. */ LIST_HEAD(txqobj, mlx5_txq_obj) txqsobj; /* Verbs/DevX Tx queues. */ /* Indirection tables. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 1fe0b30..6ec0e72 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -368,13 +368,6 @@ enum mlx5_flow_fate_type { MLX5_FLOW_FATE_MAX, }; -/* Hash list callback context */ -struct mlx5_flow_cb_ctx { - struct rte_eth_dev *dev; - struct rte_flow_error *error; - void *data; -}; - /* Matcher PRM representation */ struct mlx5_flow_dv_match_params { size_t size; @@ -532,15 +525,6 @@ struct ibv_spec_header { uint16_t size; }; -/* RSS description. 
*/ -struct mlx5_flow_rss_desc { - uint32_t level; - uint32_t queue_num; /**< Number of entries in @p queue. */ - uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */ - uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */ - uint16_t *queue; /**< Destination queues. */ -}; - /* PMD flow priority for tunnel */ #define MLX5_TUNNEL_PRIO_GET(rss_desc) \ ((rss_desc)->level >= 2 ? MLX5_PRIORITY_MAP_L2 : MLX5_PRIORITY_MAP_L4) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index b884d8c..5092130 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -8981,21 +8981,11 @@ struct mlx5_hlist_entry * [!!wks->flow_nested_idx]; MLX5_ASSERT(rss_desc->queue_num); - hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key, - MLX5_RSS_HASH_KEY_LEN, - dev_flow->hash_fields, - rss_desc->queue, - rss_desc->queue_num); - if (!hrxq_idx) { - hrxq_idx = mlx5_hrxq_new - (dev, rss_desc->key, - MLX5_RSS_HASH_KEY_LEN, - dev_flow->hash_fields, - rss_desc->queue, - rss_desc->queue_num, - !!(dh->layers & - MLX5_FLOW_LAYER_TUNNEL)); - } + rss_desc->key_len = MLX5_RSS_HASH_KEY_LEN; + rss_desc->hash_fields = dev_flow->hash_fields; + rss_desc->tunnel = !!(dh->layers & + MLX5_FLOW_LAYER_TUNNEL); + hrxq_idx = mlx5_hrxq_get(dev, rss_desc); hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx); if (!hrxq) { diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c index b649960..905da8a 100644 --- a/drivers/net/mlx5/mlx5_flow_verbs.c +++ b/drivers/net/mlx5/mlx5_flow_verbs.c @@ -1984,20 +1984,11 @@ &wks->rss_desc[!!wks->flow_nested_idx]; MLX5_ASSERT(rss_desc->queue_num); - hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key, - MLX5_RSS_HASH_KEY_LEN, - dev_flow->hash_fields, - rss_desc->queue, - rss_desc->queue_num); - if (!hrxq_idx) - hrxq_idx = mlx5_hrxq_new - (dev, rss_desc->key, - MLX5_RSS_HASH_KEY_LEN, - dev_flow->hash_fields, - rss_desc->queue, - rss_desc->queue_num, - !!(handle->layers & - MLX5_FLOW_LAYER_TUNNEL)); + rss_desc->key_len = MLX5_RSS_HASH_KEY_LEN; + rss_desc->hash_fields = dev_flow->hash_fields; + rss_desc->tunnel = !!(handle->layers & + MLX5_FLOW_LAYER_TUNNEL); + hrxq_idx = mlx5_hrxq_get(dev, rss_desc); hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx); if (!hrxq) { diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index c059e21..0c45ca0 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -1853,156 +1853,164 @@ struct mlx5_ind_table_obj * } /** - * Get an Rx Hash queue. + * Match an Rx Hash queue. * - * @param dev - * Pointer to Ethernet device. - * @param rss_conf - * RSS configuration for the Rx hash queue. - * @param queues - * Queues entering in hash queue. In case of empty hash_fields only the - * first queue index will be taken for the indirection table. - * @param queues_n - * Number of queues. + * @param list + * Cache list pointer. + * @param entry + * Hash queue entry pointer. + * @param cb_ctx + * Context of the callback function. * * @return - * An hash Rx queue index on success. + * 0 if match, non-zero if not match. */ -uint32_t -mlx5_hrxq_get(struct rte_eth_dev *dev, - const uint8_t *rss_key, uint32_t rss_key_len, - uint64_t hash_fields, - const uint16_t *queues, uint32_t queues_n) +int +mlx5_hrxq_match_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry, + void *cb_ctx) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_hrxq *hrxq; - uint32_t idx; - - queues_n = hash_fields ? 
queues_n : 1; - ILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_HRXQ], priv->hrxqs, idx, - hrxq, next) { - struct mlx5_ind_table_obj *ind_tbl; + struct rte_eth_dev *dev = list->ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_rss_desc *rss_desc = ctx->data; + struct mlx5_hrxq *hrxq = container_of(entry, typeof(*hrxq), entry); + struct mlx5_ind_table_obj *ind_tbl; + uint32_t queues_n; - if (hrxq->rss_key_len != rss_key_len) - continue; - if (memcmp(hrxq->rss_key, rss_key, rss_key_len)) - continue; - if (hrxq->hash_fields != hash_fields) - continue; - ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n); - if (!ind_tbl) - continue; - if (ind_tbl != hrxq->ind_table) { - mlx5_ind_table_obj_release(dev, ind_tbl); - continue; - } - rte_atomic32_inc(&hrxq->refcnt); - return idx; - } - return 0; + if (hrxq->rss_key_len != rss_desc->key_len || + memcmp(hrxq->rss_key, rss_desc->key, rss_desc->key_len) || + hrxq->hash_fields != rss_desc->hash_fields) + return 1; + queues_n = rss_desc->hash_fields ? rss_desc->queue_num : 1; + ind_tbl = mlx5_ind_table_obj_get(dev, rss_desc->queue, queues_n); + if (ind_tbl) + mlx5_ind_table_obj_release(dev, ind_tbl); + return ind_tbl != hrxq->ind_table; } /** - * Release the hash Rx queue. - * - * @param dev - * Pointer to Ethernet device. - * @param hrxq - * Index to Hash Rx queue to release. + * Remove the Rx Hash queue. * - * @return - * 1 while a reference on it exists, 0 when freed. + * @param list + * Cache list pointer. + * @param entry + * Hash queue entry pointer. */ -int -mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hrxq_idx) +void +mlx5_hrxq_remove_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry) { + struct rte_eth_dev *dev = list->ctx; struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_hrxq *hrxq; + struct mlx5_hrxq *hrxq = container_of(entry, typeof(*hrxq), entry); - hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx); - if (!hrxq) - return 0; - if (rte_atomic32_dec_and_test(&hrxq->refcnt)) { #ifdef HAVE_IBV_FLOW_DV_SUPPORT - mlx5_glue->destroy_flow_action(hrxq->action); + mlx5_glue->destroy_flow_action(hrxq->action); #endif - priv->obj_ops.hrxq_destroy(hrxq); - mlx5_ind_table_obj_release(dev, hrxq->ind_table); - ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_HRXQ], &priv->hrxqs, - hrxq_idx, hrxq, next); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx); - return 0; - } - claim_nonzero(mlx5_ind_table_obj_release(dev, hrxq->ind_table)); - return 1; + priv->obj_ops.hrxq_destroy(hrxq); + mlx5_ind_table_obj_release(dev, hrxq->ind_table); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq->idx); } /** * Create an Rx Hash queue. * - * @param dev - * Pointer to Ethernet device. - * @param rss_key - * RSS key for the Rx hash queue. - * @param rss_key_len - * RSS key length. - * @param hash_fields - * Verbs protocol hash field to make the RSS on. - * @param queues - * Queues entering in hash queue. In case of empty hash_fields only the - * first queue index will be taken for the indirection table. - * @param queues_n - * Number of queues. - * @param tunnel - * Tunnel type. + * @param list + * Cache list pointer. + * @param entry + * Hash queue entry pointer. + * @param cb_ctx + * Context of the callback function. * * @return - * The DevX object initialized index, 0 otherwise and rte_errno is set. + * queue entry on success, NULL otherwise. 
*/ -uint32_t -mlx5_hrxq_new(struct rte_eth_dev *dev, - const uint8_t *rss_key, uint32_t rss_key_len, - uint64_t hash_fields, - const uint16_t *queues, uint32_t queues_n, - int tunnel __rte_unused) +struct mlx5_cache_entry * +mlx5_hrxq_create_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry __rte_unused, + void *cb_ctx) { + struct rte_eth_dev *dev = list->ctx; struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_rss_desc *rss_desc = ctx->data; + const uint8_t *rss_key = rss_desc->key; + uint32_t rss_key_len = rss_desc->key_len; + const uint16_t *queues = rss_desc->queue; + uint32_t queues_n = rss_desc->queue_num; struct mlx5_hrxq *hrxq = NULL; uint32_t hrxq_idx = 0; struct mlx5_ind_table_obj *ind_tbl; int ret; - queues_n = hash_fields ? queues_n : 1; + queues_n = rss_desc->hash_fields ? queues_n : 1; ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n); if (!ind_tbl) ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n); - if (!ind_tbl) { - rte_errno = ENOMEM; - return 0; - } + if (!ind_tbl) + return NULL; hrxq = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_HRXQ], &hrxq_idx); if (!hrxq) goto error; + hrxq->idx = hrxq_idx; hrxq->ind_table = ind_tbl; hrxq->rss_key_len = rss_key_len; - hrxq->hash_fields = hash_fields; + hrxq->hash_fields = rss_desc->hash_fields; memcpy(hrxq->rss_key, rss_key, rss_key_len); - ret = priv->obj_ops.hrxq_new(dev, hrxq, tunnel); - if (ret < 0) { - rte_errno = errno; + ret = priv->obj_ops.hrxq_new(dev, hrxq, rss_desc->tunnel); + if (ret < 0) goto error; - } - rte_atomic32_inc(&hrxq->refcnt); - ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_HRXQ], &priv->hrxqs, hrxq_idx, - hrxq, next); - return hrxq_idx; + return &hrxq->entry; error: - ret = rte_errno; /* Save rte_errno before cleanup. */ mlx5_ind_table_obj_release(dev, ind_tbl); if (hrxq) mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx); - rte_errno = ret; /* Restore rte_errno. */ - return 0; + return NULL; +} + +/** + * Get an Rx Hash queue. + * + * @param dev + * Pointer to Ethernet device. + * @param rss_desc + * RSS configuration for the Rx hash queue. + * + * @return + * A hash Rx queue index on success. + */ +uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, + struct mlx5_flow_rss_desc *rss_desc) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hrxq *hrxq; + struct mlx5_cache_entry *entry; + struct mlx5_flow_cb_ctx ctx = { + .data = rss_desc, + }; + + entry = mlx5_cache_register(&priv->hrxqs, &ctx); + if (!entry) + return 0; + hrxq = container_of(entry, typeof(*hrxq), entry); + return hrxq->idx; +} + +/** + * Release the hash Rx queue. + * + * @param dev + * Pointer to Ethernet device. + * @param hrxq_idx + * Index to Hash Rx queue to release. + */ +void mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hrxq_idx) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hrxq *hrxq; + + hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx); + if (!hrxq) + return; + mlx5_cache_unregister(&priv->hrxqs, &hrxq->entry); } /** @@ -2087,21 +2095,9 @@ struct mlx5_hrxq * * The number of object not released. 
*/ int -mlx5_hrxq_verify(struct rte_eth_dev *dev) +mlx5_hrxq_verify(struct rte_eth_dev *dev __rte_unused) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_hrxq *hrxq; - uint32_t idx; - int ret = 0; - - ILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_HRXQ], priv->hrxqs, idx, - hrxq, next) { - DRV_LOG(DEBUG, - "port %u hash Rx queue %p still referenced", - dev->data->port_id, (void *)hrxq); - ++ret; - } - return ret; + return 0; } /** diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h index 9ffa028..6f603e2 100644 --- a/drivers/net/mlx5/mlx5_rxtx.h +++ b/drivers/net/mlx5/mlx5_rxtx.h @@ -370,17 +370,19 @@ struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev, uint32_t queues_n); int mlx5_ind_table_obj_release(struct rte_eth_dev *dev, struct mlx5_ind_table_obj *ind_tbl); -uint32_t mlx5_hrxq_new(struct rte_eth_dev *dev, - const uint8_t *rss_key, uint32_t rss_key_len, - uint64_t hash_fields, - const uint16_t *queues, uint32_t queues_n, - int tunnel __rte_unused); +struct mlx5_cache_entry *mlx5_hrxq_create_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry __rte_unused, void *cb_ctx); +int mlx5_hrxq_match_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry, + void *cb_ctx); +void mlx5_hrxq_remove_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry); uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, - const uint8_t *rss_key, uint32_t rss_key_len, - uint64_t hash_fields, - const uint16_t *queues, uint32_t queues_n); -int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx); + struct mlx5_flow_rss_desc *rss_desc); +void mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx); int mlx5_hrxq_verify(struct rte_eth_dev *dev); + + enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx); struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev); void mlx5_drop_action_destroy(struct rte_eth_dev *dev); From patchwork Tue Oct 6 11:49:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79773 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 555FDA04BB; Tue, 6 Oct 2020 13:56:33 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 457F31BB6F; Tue, 6 Oct 2020 13:49:54 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 4689F1BAC5 for ; Tue, 6 Oct 2020 13:49:50 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:44 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0b028553; Tue, 6 Oct 2020 14:49:42 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:49:00 +0800 Message-Id: <1601984948-313027-18-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 17/25] net/mlx5: make matcher list thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list 
List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Xueming Li To support multi-thread flow insertion, this path converts matcher list to use thread safe cache list API. Signed-off-by: Xueming Li --- drivers/net/mlx5/mlx5.h | 3 + drivers/net/mlx5/mlx5_flow.h | 15 ++- drivers/net/mlx5/mlx5_flow_dv.c | 209 +++++++++++++++++++++------------------- 3 files changed, 126 insertions(+), 101 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 97729a8..ee2211b 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -32,6 +32,9 @@ #include "mlx5_os.h" #include "mlx5_autoconf.h" + +#define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh) + enum mlx5_ipool_index { #ifdef HAVE_IBV_FLOW_DV_SUPPORT MLX5_IPOOL_DECAP_ENCAP = 0, /* Pool for encap/decap resource. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 6ec0e72..fe9b6a6 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -378,11 +378,9 @@ struct mlx5_flow_dv_match_params { /* Matcher structure. */ struct mlx5_flow_dv_matcher { - LIST_ENTRY(mlx5_flow_dv_matcher) next; - /**< Pointer to the next element. */ + struct mlx5_cache_entry entry; /**< Pointer to the next element. */ struct mlx5_flow_tbl_resource *tbl; /**< Pointer to the table(group) the matcher associated with. */ - rte_atomic32_t refcnt; /**< Reference counter. */ void *matcher_object; /**< Pointer to DV matcher */ uint16_t crc; /**< CRC of key. */ uint16_t priority; /**< Priority of matcher. */ @@ -512,11 +510,12 @@ struct mlx5_flow_tbl_data_entry { /**< hash list entry, 64-bits key inside. */ struct mlx5_flow_tbl_resource tbl; /**< flow table resource. */ - LIST_HEAD(matchers, mlx5_flow_dv_matcher) matchers; + struct mlx5_cache_list matchers; /**< matchers' header associated with the flow table. */ struct mlx5_flow_dv_jump_tbl_resource jump; /**< jump resource, at most one for each table created. */ uint32_t idx; /**< index for the indexed mempool. */ + uint8_t direction; /**< table direction, 0: ingress, 1: egress. */ }; /* Verbs specification header. */ @@ -1095,4 +1094,12 @@ struct mlx5_hlist_entry *flow_dv_encap_decap_create_cb(struct mlx5_hlist *list, uint64_t key, void *cb_ctx); void flow_dv_encap_decap_remove_cb(struct mlx5_hlist *list, struct mlx5_hlist_entry *entry); + +int flow_dv_matcher_match_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry, void *ctx); +struct mlx5_cache_entry *flow_dv_matcher_create_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry, void *ctx); +void flow_dv_matcher_remove_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry); + #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 5092130..3774e46 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -70,7 +70,7 @@ }; static int -flow_dv_tbl_resource_release(struct rte_eth_dev *dev, +flow_dv_tbl_resource_release(struct mlx5_dev_ctx_shared *sh, struct mlx5_flow_tbl_resource *tbl); static int @@ -2730,7 +2730,7 @@ struct mlx5_hlist_entry * (void *)&tbl_data->jump, cnt); } else { /* old jump should not make the table ref++. 
*/ - flow_dv_tbl_resource_release(dev, &tbl_data->tbl); + flow_dv_tbl_resource_release(MLX5_SH(dev), &tbl_data->tbl); MLX5_ASSERT(tbl_data->jump.action); DRV_LOG(DEBUG, "existed jump table resource %p: refcnt %d++", (void *)&tbl_data->jump, cnt); @@ -7608,6 +7608,7 @@ struct mlx5_hlist_entry * return NULL; } tbl_data->idx = idx; + tbl_data->direction = key.direction; tbl = &tbl_data->tbl; if (key.dummy) return &tbl_data->entry; @@ -7625,6 +7626,13 @@ struct mlx5_hlist_entry * mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], idx); return NULL; } + MKSTR(matcher_name, "%s_%s_%u_matcher_cache", + key.domain ? "FDB" : "NIC", key.direction ? "egress" : "ingress", + key.table_id); + mlx5_cache_list_init(&tbl_data->matchers, matcher_name, 0, sh, + flow_dv_matcher_create_cb, + flow_dv_matcher_match_cb, + flow_dv_matcher_remove_cb); rte_atomic32_init(&tbl_data->jump.refcnt); return &tbl_data->entry; } @@ -7684,14 +7692,15 @@ struct mlx5_flow_tbl_resource * MLX5_ASSERT(entry && sh); if (tbl_data->tbl.obj) mlx5_flow_os_destroy_flow_tbl(tbl_data->tbl.obj); + mlx5_cache_list_destroy(&tbl_data->matchers); mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], tbl_data->idx); } /** * Release a flow table. * - * @param[in] dev - * Pointer to rte_eth_dev structure. + * @param[in] sh + * Pointer to device shared structure. * @param[in] tbl * Table resource to be released. * @@ -7699,11 +7708,9 @@ struct mlx5_flow_tbl_resource * * Returns 0 if table was released, else return 1; */ static int -flow_dv_tbl_resource_release(struct rte_eth_dev *dev, +flow_dv_tbl_resource_release(struct mlx5_dev_ctx_shared *sh, struct mlx5_flow_tbl_resource *tbl) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; struct mlx5_flow_tbl_data_entry *tbl_data = container_of(tbl, struct mlx5_flow_tbl_data_entry, tbl); @@ -7712,6 +7719,63 @@ struct mlx5_flow_tbl_resource * return mlx5_hlist_unregister(sh->flow_tbls, &tbl_data->entry); } +int +flow_dv_matcher_match_cb(struct mlx5_cache_list *list __rte_unused, + struct mlx5_cache_entry *entry, void *cb_ctx) +{ + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_matcher *ref = ctx->data; + struct mlx5_flow_dv_matcher *cur = container_of(entry, typeof(*cur), + entry); + + return cur->crc != ref->crc || + cur->priority != ref->priority || + memcmp((const void *)cur->mask.buf, + (const void *)ref->mask.buf, ref->mask.size); +} + +struct mlx5_cache_entry * +flow_dv_matcher_create_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry __rte_unused, + void *cb_ctx) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_matcher *ref = ctx->data; + struct mlx5_flow_dv_matcher *cache; + struct mlx5dv_flow_matcher_attr dv_attr = { + .type = IBV_FLOW_ATTR_NORMAL, + .match_mask = (void *)&ref->mask, + }; + struct mlx5_flow_tbl_data_entry *tbl = container_of(ref->tbl, + typeof(*tbl), tbl); + int ret; + + cache = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*cache), 0, SOCKET_ID_ANY); + if (!cache) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot create matcher"); + return NULL; + } + *cache = *ref; + dv_attr.match_criteria_enable = + flow_dv_matcher_enable(cache->mask.buf); + dv_attr.priority = ref->priority; + if (tbl->direction) + dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS; + ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->tbl.obj, + &cache->matcher_object); + if (ret) { + mlx5_free(cache); + rte_flow_error_set(ctx->error, ENOMEM, + 
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot create matcher"); + return NULL; + } + return &cache->entry; +} + /** * Register the flow matcher. * @@ -7731,87 +7795,35 @@ struct mlx5_flow_tbl_resource * */ static int flow_dv_matcher_register(struct rte_eth_dev *dev, - struct mlx5_flow_dv_matcher *matcher, + struct mlx5_flow_dv_matcher *ref, union mlx5_flow_tbl_key *key, struct mlx5_flow *dev_flow, struct rte_flow_error *error) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_flow_dv_matcher *cache_matcher; - struct mlx5dv_flow_matcher_attr dv_attr = { - .type = IBV_FLOW_ATTR_NORMAL, - .match_mask = (void *)&matcher->mask, - }; + struct mlx5_cache_entry *entry; + struct mlx5_flow_dv_matcher *cache; struct mlx5_flow_tbl_resource *tbl; struct mlx5_flow_tbl_data_entry *tbl_data; - int ret; + struct mlx5_flow_cb_ctx ctx = { + .error = error, + .data = ref, + }; tbl = flow_dv_tbl_resource_get(dev, key->table_id, key->direction, key->domain, 0, error); if (!tbl) return -rte_errno; /* No need to refill the error info */ tbl_data = container_of(tbl, struct mlx5_flow_tbl_data_entry, tbl); - /* Lookup from cache. */ - LIST_FOREACH(cache_matcher, &tbl_data->matchers, next) { - if (matcher->crc == cache_matcher->crc && - matcher->priority == cache_matcher->priority && - !memcmp((const void *)matcher->mask.buf, - (const void *)cache_matcher->mask.buf, - cache_matcher->mask.size)) { - DRV_LOG(DEBUG, - "%s group %u priority %hd use %s " - "matcher %p: refcnt %d++", - key->domain ? "FDB" : "NIC", key->table_id, - cache_matcher->priority, - key->direction ? "tx" : "rx", - (void *)cache_matcher, - rte_atomic32_read(&cache_matcher->refcnt)); - rte_atomic32_inc(&cache_matcher->refcnt); - dev_flow->handle->dvh.matcher = cache_matcher; - /* old matcher should not make the table ref++. */ - flow_dv_tbl_resource_release(dev, tbl); - return 0; - } - } - /* Register new matcher. */ - cache_matcher = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*cache_matcher), 0, - SOCKET_ID_ANY); - if (!cache_matcher) { - flow_dv_tbl_resource_release(dev, tbl); + ref->tbl = tbl; + entry = mlx5_cache_register(&tbl_data->matchers, &ctx); + if (!entry) { + flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); return rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot allocate matcher memory"); + "cannot allocate ref memory"); } - *cache_matcher = *matcher; - dv_attr.match_criteria_enable = - flow_dv_matcher_enable(cache_matcher->mask.buf); - dv_attr.priority = matcher->priority; - if (key->direction) - dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS; - ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj, - &cache_matcher->matcher_object); - if (ret) { - mlx5_free(cache_matcher); -#ifdef HAVE_MLX5DV_DR - flow_dv_tbl_resource_release(dev, tbl); -#endif - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot create matcher"); - } - /* Save the table information */ - cache_matcher->tbl = tbl; - rte_atomic32_init(&cache_matcher->refcnt); - /* only matcher ref++, table ref++ already done above in get API. */ - rte_atomic32_inc(&cache_matcher->refcnt); - LIST_INSERT_HEAD(&tbl_data->matchers, cache_matcher, next); - dev_flow->handle->dvh.matcher = cache_matcher; - DRV_LOG(DEBUG, "%s group %u priority %hd new %s matcher %p: refcnt %d", - key->domain ? "FDB" : "NIC", key->table_id, - cache_matcher->priority, - key->direction ? 
"tx" : "rx", (void *)cache_matcher, - rte_atomic32_read(&cache_matcher->refcnt)); + cache = container_of(entry, typeof(*cache), entry); + dev_flow->handle->dvh.matcher = cache; return 0; } @@ -8485,7 +8497,7 @@ struct mlx5_hlist_entry * "cannot create jump action."); if (flow_dv_jump_tbl_resource_register (dev, tbl, dev_flow, error)) { - flow_dv_tbl_resource_release(dev, tbl); + flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); return rte_flow_error_set (error, errno, RTE_FLOW_ERROR_TYPE_ACTION, @@ -9055,6 +9067,19 @@ struct mlx5_hlist_entry * return -rte_errno; } +void +flow_dv_matcher_remove_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_dv_matcher *cache = container_of(entry, typeof(*cache), + entry); + + claim_zero(mlx5_flow_os_destroy_flow_matcher(cache->matcher_object)); + flow_dv_tbl_resource_release(sh, cache->tbl); + mlx5_free(cache); +} + /** * Release the flow matcher. * @@ -9067,27 +9092,15 @@ struct mlx5_hlist_entry * * 1 while a reference on it exists, 0 when freed. */ static int -flow_dv_matcher_release(struct rte_eth_dev *dev, +flow_dv_matcher_release(struct rte_eth_dev *dev __rte_unused, struct mlx5_flow_handle *handle) { struct mlx5_flow_dv_matcher *matcher = handle->dvh.matcher; + struct mlx5_flow_tbl_data_entry *tbl = container_of(matcher->tbl, + typeof(*tbl), tbl); MLX5_ASSERT(matcher->matcher_object); - DRV_LOG(DEBUG, "port %u matcher %p: refcnt %d--", - dev->data->port_id, (void *)matcher, - rte_atomic32_read(&matcher->refcnt)); - if (rte_atomic32_dec_and_test(&matcher->refcnt)) { - claim_zero(mlx5_flow_os_destroy_flow_matcher - (matcher->matcher_object)); - LIST_REMOVE(matcher, next); - /* table ref-- in release interface. */ - flow_dv_tbl_resource_release(dev, matcher->tbl); - mlx5_free(matcher); - DRV_LOG(DEBUG, "port %u matcher %p: removed", - dev->data->port_id, (void *)matcher); - return 0; - } - return 1; + return mlx5_cache_unregister(&tbl->matchers, &matcher->entry); } /** @@ -9169,7 +9182,7 @@ struct mlx5_hlist_entry * claim_zero(mlx5_flow_os_destroy_flow_action (cache_resource->action)); /* jump action memory free is inside the table release. 
*/ - flow_dv_tbl_resource_release(dev, &tbl_data->tbl); + flow_dv_tbl_resource_release(MLX5_SH(dev), &tbl_data->tbl); DRV_LOG(DEBUG, "jump table resource %p: removed", (void *)cache_resource); return 0; @@ -9580,9 +9593,9 @@ struct mlx5_hlist_entry * claim_zero(mlx5_flow_os_destroy_flow_matcher (mtd->egress.any_matcher)); if (mtd->egress.tbl) - flow_dv_tbl_resource_release(dev, mtd->egress.tbl); + flow_dv_tbl_resource_release(MLX5_SH(dev), mtd->egress.tbl); if (mtd->egress.sfx_tbl) - flow_dv_tbl_resource_release(dev, mtd->egress.sfx_tbl); + flow_dv_tbl_resource_release(MLX5_SH(dev), mtd->egress.sfx_tbl); if (mtd->ingress.color_matcher) claim_zero(mlx5_flow_os_destroy_flow_matcher (mtd->ingress.color_matcher)); @@ -9590,9 +9603,10 @@ struct mlx5_hlist_entry * claim_zero(mlx5_flow_os_destroy_flow_matcher (mtd->ingress.any_matcher)); if (mtd->ingress.tbl) - flow_dv_tbl_resource_release(dev, mtd->ingress.tbl); + flow_dv_tbl_resource_release(MLX5_SH(dev), mtd->ingress.tbl); if (mtd->ingress.sfx_tbl) - flow_dv_tbl_resource_release(dev, mtd->ingress.sfx_tbl); + flow_dv_tbl_resource_release(MLX5_SH(dev), + mtd->ingress.sfx_tbl); if (mtd->transfer.color_matcher) claim_zero(mlx5_flow_os_destroy_flow_matcher (mtd->transfer.color_matcher)); @@ -9600,9 +9614,10 @@ struct mlx5_hlist_entry * claim_zero(mlx5_flow_os_destroy_flow_matcher (mtd->transfer.any_matcher)); if (mtd->transfer.tbl) - flow_dv_tbl_resource_release(dev, mtd->transfer.tbl); + flow_dv_tbl_resource_release(MLX5_SH(dev), mtd->transfer.tbl); if (mtd->transfer.sfx_tbl) - flow_dv_tbl_resource_release(dev, mtd->transfer.sfx_tbl); + flow_dv_tbl_resource_release(MLX5_SH(dev), + mtd->transfer.sfx_tbl); if (mtd->drop_actn) claim_zero(mlx5_flow_os_destroy_flow_action(mtd->drop_actn)); mlx5_free(mtd); From patchwork Tue Oct 6 11:49:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79772 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id D0D15A04BB; Tue, 6 Oct 2020 13:56:10 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2FA0C1BB40; Tue, 6 Oct 2020 13:49:53 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 468DF1BAC7 for ; Tue, 6 Oct 2020 13:49:50 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:46 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0c028553; Tue, 6 Oct 2020 14:49:44 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:49:01 +0800 Message-Id: <1601984948-313027-19-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 18/25] net/mlx5: make port ID action cache thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Xueming Li To 
support multi-thread flow insertion, this patch convert port id action cache list to thread safe cache list. Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_os.c | 7 ++ drivers/net/mlx5/mlx5.h | 2 +- drivers/net/mlx5/mlx5_flow.h | 15 +++-- drivers/net/mlx5/mlx5_flow_dv.c | 140 +++++++++++++++++++++------------------ 4 files changed, 94 insertions(+), 70 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index db7b0de..3618e54 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -237,6 +237,12 @@ goto error; /* The resources below are only valid with DV support. */ #ifdef HAVE_IBV_FLOW_DV_SUPPORT + /* Init port id action cache list. */ + snprintf(s, sizeof(s), "%s_port_id_action_cache", sh->ibdev_name); + mlx5_cache_list_init(&sh->port_id_action_list, s, 0, sh, + flow_dv_port_id_create_cb, + flow_dv_port_id_match_cb, + flow_dv_port_id_remove_cb); /* Create tags hash list table. */ snprintf(s, sizeof(s), "%s_tags", sh->ibdev_name); sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, 0, @@ -420,6 +426,7 @@ mlx5_hlist_destroy(sh->tag_table); sh->tag_table = NULL; } + mlx5_cache_list_destroy(&sh->port_id_action_list); mlx5_free_table_hash_list(priv); } diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index ee2211b..ab44fa0 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -640,7 +640,7 @@ struct mlx5_dev_ctx_shared { struct mlx5_hlist *encaps_decaps; /* Encap/decap action hash list. */ struct mlx5_hlist *modify_cmds; struct mlx5_hlist *tag_table; - uint32_t port_id_action_list; /* List of port ID actions. */ + struct mlx5_cache_list port_id_action_list; /* Port ID action cache. */ uint32_t push_vlan_action_list; /* List of push VLAN actions. */ struct mlx5_flow_counter_mng cmng; /* Counters management structure. */ struct mlx5_flow_default_miss_resource default_miss; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index fe9b6a6..50aa7ea 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -471,12 +471,10 @@ struct mlx5_flow_dv_jump_tbl_resource { /* Port ID resource structure. */ struct mlx5_flow_dv_port_id_action_resource { - ILIST_ENTRY(uint32_t)next; - /* Pointer to next element. */ - rte_atomic32_t refcnt; /**< Reference counter. */ - void *action; - /**< Action object. */ + struct mlx5_cache_entry entry; + void *action; /**< Action object. */ uint32_t port_id; /**< Port ID value. */ + uint32_t idx; /**< Indexed pool memory index. 
*/ }; /* Push VLAN action resource structure */ @@ -1102,4 +1100,11 @@ struct mlx5_cache_entry *flow_dv_matcher_create_cb(struct mlx5_cache_list *list, void flow_dv_matcher_remove_cb(struct mlx5_cache_list *list, struct mlx5_cache_entry *entry); +int flow_dv_port_id_match_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry, void *cb_ctx); +struct mlx5_cache_entry *flow_dv_port_id_create_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry, void *cb_ctx); +void flow_dv_port_id_remove_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry); + #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 3774e46..7882bb4 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2741,6 +2741,52 @@ struct mlx5_hlist_entry * return 0; } +int +flow_dv_port_id_match_cb(struct mlx5_cache_list *list __rte_unused, + struct mlx5_cache_entry *entry, void *cb_ctx) +{ + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_port_id_action_resource *ref = ctx->data; + struct mlx5_flow_dv_port_id_action_resource *res = + container_of(entry, typeof(*res), entry); + + return ref->port_id != res->port_id; +} + +struct mlx5_cache_entry * +flow_dv_port_id_create_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry __rte_unused, + void *cb_ctx) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_port_id_action_resource *ref = ctx->data; + struct mlx5_flow_dv_port_id_action_resource *cache; + uint32_t idx; + int ret; + + /* Register new port id action resource. */ + cache = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_PORT_ID], &idx); + if (!cache) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate port_id action cache memory"); + return NULL; + } + *cache = *ref; + ret = mlx5_flow_os_create_flow_action_dest_port(sh->fdb_domain, + ref->port_id, + &cache->action); + if (ret) { + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PORT_ID], idx); + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot create action"); + return NULL; + } + return &cache->entry; +} + /** * Find existing default miss resource or create and register a new one. * @@ -2800,51 +2846,19 @@ struct mlx5_hlist_entry * struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_flow_dv_port_id_action_resource *cache_resource; - uint32_t idx = 0; - int ret; + struct mlx5_cache_entry *entry; + struct mlx5_flow_dv_port_id_action_resource *cache; + struct mlx5_flow_cb_ctx ctx = { + .error = error, + .data = resource, + }; - /* Lookup a matching resource from cache. */ - ILIST_FOREACH(sh->ipool[MLX5_IPOOL_PORT_ID], sh->port_id_action_list, - idx, cache_resource, next) { - if (resource->port_id == cache_resource->port_id) { - DRV_LOG(DEBUG, "port id action resource resource %p: " - "refcnt %d++", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); - rte_atomic32_inc(&cache_resource->refcnt); - dev_flow->handle->rix_port_id_action = idx; - dev_flow->dv.port_id_action = cache_resource; - return 0; - } - } - /* Register new port id action resource. 
*/ - cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_PORT_ID], - &dev_flow->handle->rix_port_id_action); - if (!cache_resource) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot allocate resource memory"); - *cache_resource = *resource; - ret = mlx5_flow_os_create_flow_action_dest_port - (priv->sh->fdb_domain, resource->port_id, - &cache_resource->action); - if (ret) { - mlx5_free(cache_resource); - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot create action"); - } - rte_atomic32_init(&cache_resource->refcnt); - rte_atomic32_inc(&cache_resource->refcnt); - ILIST_INSERT(sh->ipool[MLX5_IPOOL_PORT_ID], &sh->port_id_action_list, - dev_flow->handle->rix_port_id_action, cache_resource, - next); - dev_flow->dv.port_id_action = cache_resource; - DRV_LOG(DEBUG, "new port id action resource %p: refcnt %d++", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); + entry = mlx5_cache_register(&priv->sh->port_id_action_list, &ctx); + if (!entry) + return -rte_errno; + cache = container_of(entry, typeof(*cache), entry); + dev_flow->dv.port_id_action = cache; + dev_flow->handle->rix_port_id_action = cache->idx; return 0; } @@ -9253,6 +9267,18 @@ struct mlx5_hlist_entry * return mlx5_hlist_unregister(priv->sh->modify_cmds, &entry->entry); } +void +flow_dv_port_id_remove_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_dv_port_id_action_resource *cache = + container_of(entry, typeof(*cache), entry); + + claim_zero(mlx5_flow_os_destroy_flow_action(cache->action)); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PORT_ID], cache->idx); +} + /** * Release port ID action resource. 
* @@ -9269,29 +9295,15 @@ struct mlx5_hlist_entry * struct mlx5_flow_handle *handle) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_dv_port_id_action_resource *cache_resource; + struct mlx5_flow_dv_port_id_action_resource *cache; uint32_t idx = handle->rix_port_id_action; - cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PORT_ID], - idx); - if (!cache_resource) + cache = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PORT_ID], idx); + if (!cache) return 0; - MLX5_ASSERT(cache_resource->action); - DRV_LOG(DEBUG, "port ID action resource %p: refcnt %d--", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); - if (rte_atomic32_dec_and_test(&cache_resource->refcnt)) { - claim_zero(mlx5_flow_os_destroy_flow_action - (cache_resource->action)); - ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_PORT_ID], - &priv->sh->port_id_action_list, idx, - cache_resource, next); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_PORT_ID], idx); - DRV_LOG(DEBUG, "port id action resource %p: removed", - (void *)cache_resource); - return 0; - } - return 1; + MLX5_ASSERT(cache->action); + return mlx5_cache_unregister(&priv->sh->port_id_action_list, + &cache->entry); } /** From patchwork Tue Oct 6 11:49:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79771 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 026B4A04BB; Tue, 6 Oct 2020 13:55:46 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C7B871BAFD; Tue, 6 Oct 2020 13:49:51 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 3B0701BABD for ; Tue, 6 Oct 2020 13:49:50 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:47 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0d028553; Tue, 6 Oct 2020 14:49:46 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:49:02 +0800 Message-Id: <1601984948-313027-20-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 19/25] net/mlx5: make push VLAN action cache thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Xueming Li To support multi-thread flow insertion, this patch converts push VLAN action cache list to thread safe cache list. 
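The cache list centralizes the locking: it keeps one reference-counted entry per distinct resource and drives creation, comparison and destruction through the per-list create/match/remove callbacks registered at init time, so the port ID conversion above and the push VLAN conversion below no longer roll their own ILIST walk and refcount. A minimal sketch of that register/unregister discipline follows; it is illustrative only, built on a generic pthread rwlock and the BSD list macros rather than the actual mlx5_cache_list implementation in mlx5_utils.c, and like the PMD's match callbacks, match() returns 0 on a hit.

/*
 * Illustrative thread-safe cache list: read-lock fast path for lookups,
 * write-lock slow path for creation, atomic per-entry reference counts.
 */
#include <pthread.h>
#include <stddef.h>
#include <sys/queue.h>

struct cache_entry {
	LIST_ENTRY(cache_entry) next;
	unsigned int ref_cnt;
};

struct cache_list {
	LIST_HEAD(, cache_entry) head;
	pthread_rwlock_t lock;
	int (*match)(struct cache_entry *e, void *ctx); /* 0 == match */
	struct cache_entry *(*create)(void *ctx);
	void (*remove)(struct cache_entry *e);
};

/* Caller holds the list lock (read or write). */
static struct cache_entry *
cache_lookup(struct cache_list *l, void *ctx)
{
	struct cache_entry *e;

	LIST_FOREACH(e, &l->head, next)
		if (!l->match(e, ctx)) {
			__atomic_add_fetch(&e->ref_cnt, 1, __ATOMIC_RELAXED);
			return e;
		}
	return NULL;
}

struct cache_entry *
cache_register(struct cache_list *l, void *ctx)
{
	struct cache_entry *e;

	/* Fast path: most registrations hit an existing entry. */
	pthread_rwlock_rdlock(&l->lock);
	e = cache_lookup(l, ctx);
	pthread_rwlock_unlock(&l->lock);
	if (e)
		return e;
	/* Slow path: re-check under the write lock, then create. */
	pthread_rwlock_wrlock(&l->lock);
	e = cache_lookup(l, ctx);
	if (!e) {
		e = l->create(ctx);
		if (e) {
			e->ref_cnt = 1;
			LIST_INSERT_HEAD(&l->head, e, next);
		}
	}
	pthread_rwlock_unlock(&l->lock);
	return e;
}

/* Returns 0 once the entry is freed, 1 while references remain. */
int
cache_unregister(struct cache_list *l, struct cache_entry *e)
{
	int ret = 1;

	pthread_rwlock_wrlock(&l->lock);
	if (__atomic_sub_fetch(&e->ref_cnt, 1, __ATOMIC_RELAXED) == 0) {
		LIST_REMOVE(e, next);
		l->remove(e);
		ret = 0;
	}
	pthread_rwlock_unlock(&l->lock);
	return ret;
}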
Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_os.c | 7 ++ drivers/net/mlx5/mlx5.h | 2 +- drivers/net/mlx5/mlx5_flow.h | 13 +++- drivers/net/mlx5/mlx5_flow_dv.c | 157 +++++++++++++++++++++------------------ 4 files changed, 102 insertions(+), 77 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 3618e54..63ea55b 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -243,6 +243,12 @@ flow_dv_port_id_create_cb, flow_dv_port_id_match_cb, flow_dv_port_id_remove_cb); + /* Init push vlan action cache list. */ + snprintf(s, sizeof(s), "%s_push_vlan_action_cache", sh->ibdev_name); + mlx5_cache_list_init(&sh->push_vlan_action_list, s, 0, sh, + flow_dv_push_vlan_create_cb, + flow_dv_push_vlan_match_cb, + flow_dv_push_vlan_remove_cb); /* Create tags hash list table. */ snprintf(s, sizeof(s), "%s_tags", sh->ibdev_name); sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, 0, @@ -427,6 +433,7 @@ sh->tag_table = NULL; } mlx5_cache_list_destroy(&sh->port_id_action_list); + mlx5_cache_list_destroy(&sh->push_vlan_action_list); mlx5_free_table_hash_list(priv); } diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index ab44fa0..b2312cf 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -641,7 +641,7 @@ struct mlx5_dev_ctx_shared { struct mlx5_hlist *modify_cmds; struct mlx5_hlist *tag_table; struct mlx5_cache_list port_id_action_list; /* Port ID action cache. */ - uint32_t push_vlan_action_list; /* List of push VLAN actions. */ + struct mlx5_cache_list push_vlan_action_list; /* Push VLAN actions. */ struct mlx5_flow_counter_mng cmng; /* Counters management structure. */ struct mlx5_flow_default_miss_resource default_miss; /* Default miss action resource structure. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 50aa7ea..2e060e6 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -479,12 +479,11 @@ struct mlx5_flow_dv_port_id_action_resource { /* Push VLAN action resource structure */ struct mlx5_flow_dv_push_vlan_action_resource { - ILIST_ENTRY(uint32_t)next; - /* Pointer to next element. */ - rte_atomic32_t refcnt; /**< Reference counter. */ + struct mlx5_cache_entry entry; /* Cache entry. */ void *action; /**< Action object. */ uint8_t ft_type; /**< Flow table type, Rx, Tx or FDB. */ rte_be32_t vlan_tag; /**< VLAN tag value. */ + uint32_t idx; /**< Indexed pool memory index. */ }; /* Metadata register copy table entry. 
*/ @@ -1107,4 +1106,12 @@ struct mlx5_cache_entry *flow_dv_port_id_create_cb(struct mlx5_cache_list *list, void flow_dv_port_id_remove_cb(struct mlx5_cache_list *list, struct mlx5_cache_entry *entry); +int flow_dv_push_vlan_match_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry, void *cb_ctx); +struct mlx5_cache_entry *flow_dv_push_vlan_create_cb + (struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry, void *cb_ctx); +void flow_dv_push_vlan_remove_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry); + #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 7882bb4..7c9b9190 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2862,6 +2862,58 @@ struct mlx5_cache_entry * return 0; } +int +flow_dv_push_vlan_match_cb(struct mlx5_cache_list *list __rte_unused, + struct mlx5_cache_entry *entry, void *cb_ctx) +{ + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_push_vlan_action_resource *ref = ctx->data; + struct mlx5_flow_dv_push_vlan_action_resource *res = + container_of(entry, typeof(*res), entry); + + return ref->vlan_tag != res->vlan_tag || ref->ft_type != res->ft_type; +} + +struct mlx5_cache_entry * +flow_dv_push_vlan_create_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry __rte_unused, + void *cb_ctx) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_push_vlan_action_resource *ref = ctx->data; + struct mlx5_flow_dv_push_vlan_action_resource *cache; + struct mlx5dv_dr_domain *domain; + uint32_t idx; + int ret; + + /* Register new port id action resource. */ + cache = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_PUSH_VLAN], &idx); + if (!cache) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate push_vlan action cache memory"); + return NULL; + } + *cache = *ref; + if (ref->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) + domain = sh->fdb_domain; + else if (ref->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) + domain = sh->rx_domain; + else + domain = sh->tx_domain; + ret = mlx5_flow_os_create_flow_action_push_vlan(domain, ref->vlan_tag, + &cache->action); + if (ret) { + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PUSH_VLAN], idx); + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot create push vlan action"); + return NULL; + } + return &cache->entry; +} + /** * Find existing push vlan resource or create and register a new one. * @@ -2885,62 +2937,23 @@ struct mlx5_cache_entry * struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_flow_dv_push_vlan_action_resource *cache_resource; - struct mlx5dv_dr_domain *domain; - uint32_t idx = 0; - int ret; + struct mlx5_flow_dv_push_vlan_action_resource *cache; + struct mlx5_cache_entry *entry; + struct mlx5_flow_cb_ctx ctx = { + .error = error, + .data = resource, + }; - /* Lookup a matching resource from cache. 
*/ - ILIST_FOREACH(sh->ipool[MLX5_IPOOL_PUSH_VLAN], - sh->push_vlan_action_list, idx, cache_resource, next) { - if (resource->vlan_tag == cache_resource->vlan_tag && - resource->ft_type == cache_resource->ft_type) { - DRV_LOG(DEBUG, "push-VLAN action resource resource %p: " - "refcnt %d++", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); - rte_atomic32_inc(&cache_resource->refcnt); - dev_flow->handle->dvh.rix_push_vlan = idx; - dev_flow->dv.push_vlan_res = cache_resource; - return 0; - } - } - /* Register new push_vlan action resource. */ - cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_PUSH_VLAN], - &dev_flow->handle->dvh.rix_push_vlan); - if (!cache_resource) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot allocate resource memory"); - *cache_resource = *resource; - if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) - domain = sh->fdb_domain; - else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) - domain = sh->rx_domain; - else - domain = sh->tx_domain; - ret = mlx5_flow_os_create_flow_action_push_vlan - (domain, resource->vlan_tag, - &cache_resource->action); - if (ret) { - mlx5_free(cache_resource); - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot create action"); - } - rte_atomic32_init(&cache_resource->refcnt); - rte_atomic32_inc(&cache_resource->refcnt); - ILIST_INSERT(sh->ipool[MLX5_IPOOL_PUSH_VLAN], - &sh->push_vlan_action_list, - dev_flow->handle->dvh.rix_push_vlan, - cache_resource, next); - dev_flow->dv.push_vlan_res = cache_resource; - DRV_LOG(DEBUG, "new push vlan action resource %p: refcnt %d++", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); + entry = mlx5_cache_register(&priv->sh->push_vlan_action_list, &ctx); + if (!entry) + return -rte_errno; + cache = container_of(entry, typeof(*cache), entry); + + dev_flow->handle->dvh.rix_push_vlan = cache->idx; + dev_flow->dv.push_vlan_res = cache; return 0; } + /** * Get the size of specific rte_flow_item_type hdr size * @@ -9306,6 +9319,18 @@ struct mlx5_hlist_entry * &cache->entry); } +void +flow_dv_push_vlan_remove_cb(struct mlx5_cache_list *list, + struct mlx5_cache_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_dv_push_vlan_action_resource *cache = + container_of(entry, typeof(*cache), entry); + + claim_zero(mlx5_flow_os_destroy_flow_action(cache->action)); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PUSH_VLAN], cache->idx); +} + /** * Release push vlan action resource. 
* @@ -9322,29 +9347,15 @@ struct mlx5_hlist_entry * struct mlx5_flow_handle *handle) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_dv_push_vlan_action_resource *cache; uint32_t idx = handle->dvh.rix_push_vlan; - struct mlx5_flow_dv_push_vlan_action_resource *cache_resource; - cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PUSH_VLAN], - idx); - if (!cache_resource) - return 0; - MLX5_ASSERT(cache_resource->action); - DRV_LOG(DEBUG, "push VLAN action resource %p: refcnt %d--", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); - if (rte_atomic32_dec_and_test(&cache_resource->refcnt)) { - claim_zero(mlx5_flow_os_destroy_flow_action - (cache_resource->action)); - ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_PUSH_VLAN], - &priv->sh->push_vlan_action_list, idx, - cache_resource, next); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_PUSH_VLAN], idx); - DRV_LOG(DEBUG, "push vlan action resource %p: removed", - (void *)cache_resource); + cache = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PUSH_VLAN], idx); + if (!cache) return 0; - } - return 1; + MLX5_ASSERT(cache->action); + return mlx5_cache_unregister(&priv->sh->push_vlan_action_list, + &cache->entry); } /** From patchwork Tue Oct 6 11:49:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79775 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2BA63A04BB; Tue, 6 Oct 2020 13:57:20 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9BAF61BC13; Tue, 6 Oct 2020 13:49:58 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 3B6541BBD1 for ; Tue, 6 Oct 2020 13:49:55 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:49 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0e028553; Tue, 6 Oct 2020 14:49:48 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org Date: Tue, 6 Oct 2020 19:49:03 +0800 Message-Id: <1601984948-313027-21-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 20/25] net/mlx5: create global jump action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit changes the jump action in table to be created with table creation in advanced. In this case, the jump action is safe to be used in multiple thread. The jump action will be destroyed when table is not used anymore and released. 
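Because the action now comes into existence together with the table and goes away only in the table remove callback, flow-insertion threads never observe a half-built jump action: they read a pointer that is immutable for the table's whole lifetime, and the table's own reference counting covers the action. A minimal lifetime-tying sketch is below; the hw_* helpers are hypothetical stand-ins for the mlx5_flow_os_* calls.

#include <stdlib.h>

/* Hypothetical stubs for the HW object calls. */
static void *hw_create_table(void) { return malloc(1); }
static void hw_destroy_table(void *obj) { free(obj); }
static void *hw_create_jump_action(void *tbl_obj) { (void)tbl_obj; return malloc(1); }
static void hw_destroy_jump_action(void *act) { free(act); }

struct flow_table {
	void *obj;         /* HW table object */
	void *jump_action; /* created once with the table, then immutable */
};

static struct flow_table *
table_create(void)
{
	struct flow_table *tbl = calloc(1, sizeof(*tbl));

	if (!tbl)
		return NULL;
	tbl->obj = hw_create_table();
	if (!tbl->obj)
		goto error;
	/*
	 * Eagerly create the dependent action; once the table is
	 * published, concurrent users only read this pointer.
	 */
	tbl->jump_action = hw_create_jump_action(tbl->obj);
	if (!tbl->jump_action)
		goto error;
	return tbl;
error:
	if (tbl->obj)
		hw_destroy_table(tbl->obj);
	free(tbl);
	return NULL;
}

static void
table_release(struct flow_table *tbl)
{
	/* Tear down in reverse creation order. */
	hw_destroy_jump_action(tbl->jump_action);
	hw_destroy_table(tbl->obj);
	free(tbl);
}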
Signed-off-by: Suanming Mou --- drivers/net/mlx5/mlx5_flow_dv.c | 53 +++++++++++++---------------------------- 1 file changed, 17 insertions(+), 36 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 7c9b9190..a2cb9ed 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2711,31 +2711,13 @@ struct mlx5_hlist_entry * (struct rte_eth_dev *dev __rte_unused, struct mlx5_flow_tbl_resource *tbl, struct mlx5_flow *dev_flow, - struct rte_flow_error *error) + struct rte_flow_error *error __rte_unused) { struct mlx5_flow_tbl_data_entry *tbl_data = container_of(tbl, struct mlx5_flow_tbl_data_entry, tbl); - int cnt, ret; MLX5_ASSERT(tbl); - cnt = rte_atomic32_read(&tbl_data->jump.refcnt); - if (!cnt) { - ret = mlx5_flow_os_create_flow_action_dest_flow_tbl - (tbl->obj, &tbl_data->jump.action); - if (ret) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot create jump action"); - DRV_LOG(DEBUG, "new jump table resource %p: refcnt %d++", - (void *)&tbl_data->jump, cnt); - } else { - /* old jump should not make the table ref++. */ - flow_dv_tbl_resource_release(MLX5_SH(dev), &tbl_data->tbl); - MLX5_ASSERT(tbl_data->jump.action); - DRV_LOG(DEBUG, "existed jump table resource %p: refcnt %d++", - (void *)&tbl_data->jump, cnt); - } - rte_atomic32_inc(&tbl_data->jump.refcnt); + MLX5_ASSERT(tbl_data->jump.action); dev_flow->handle->rix_jump = tbl_data->idx; dev_flow->dv.jump = &tbl_data->jump; return 0; @@ -7653,6 +7635,18 @@ struct mlx5_hlist_entry * mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], idx); return NULL; } + if (key.table_id) + ret = mlx5_flow_os_create_flow_action_dest_flow_tbl + (tbl->obj, &tbl_data->jump.action); + if (ret) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "cannot create table jump object"); + mlx5_flow_os_destroy_flow_tbl(tbl->obj); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], idx); + return NULL; + } + MKSTR(matcher_name, "%s_%s_%u_matcher_cache", key.domain ? "FDB" : "NIC", key.direction ? "egress" : "ingress", key.table_id); @@ -7717,6 +7711,8 @@ struct mlx5_flow_tbl_resource * container_of(entry, struct mlx5_flow_tbl_data_entry, entry); MLX5_ASSERT(entry && sh); + if (tbl_data->jump.action) + mlx5_flow_os_destroy_flow_action(tbl_data->jump.action); if (tbl_data->tbl.obj) mlx5_flow_os_destroy_flow_tbl(tbl_data->tbl.obj); mlx5_cache_list_destroy(&tbl_data->matchers); @@ -9193,28 +9189,13 @@ struct mlx5_hlist_entry * struct mlx5_flow_handle *handle) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_dv_jump_tbl_resource *cache_resource; struct mlx5_flow_tbl_data_entry *tbl_data; tbl_data = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_JUMP], handle->rix_jump); if (!tbl_data) return 0; - cache_resource = &tbl_data->jump; - MLX5_ASSERT(cache_resource->action); - DRV_LOG(DEBUG, "jump table resource %p: refcnt %d--", - (void *)cache_resource, - rte_atomic32_read(&cache_resource->refcnt)); - if (rte_atomic32_dec_and_test(&cache_resource->refcnt)) { - claim_zero(mlx5_flow_os_destroy_flow_action - (cache_resource->action)); - /* jump action memory free is inside the table release. 
*/ - flow_dv_tbl_resource_release(MLX5_SH(dev), &tbl_data->tbl); - DRV_LOG(DEBUG, "jump table resource %p: removed", - (void *)cache_resource); - return 0; - } - return 1; + return flow_dv_tbl_resource_release(MLX5_SH(dev), &tbl_data->tbl); } /** From patchwork Tue Oct 6 11:49:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79774 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 478FAA04BB; Tue, 6 Oct 2020 13:56:59 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 351E01BBD6; Tue, 6 Oct 2020 13:49:57 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 3BC061BBD2 for ; Tue, 6 Oct 2020 13:49:55 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:50 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0f028553; Tue, 6 Oct 2020 14:49:49 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org Date: Tue, 6 Oct 2020 19:49:04 +0800 Message-Id: <1601984948-313027-22-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 21/25] net/mlx5: create global default miss action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit creates the global default miss action instead of maintain it in flow insertion time. This makes the action to be thread safe. 
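Creating the action once at probe time, before any application thread can insert flows, removes the lazy-init/refcount pair deleted below: the data path only tests that the pointer exists, and when the action is unsupported it fails the individual flow rather than the device. A compact sketch of this probe-once pattern, with hypothetical helpers in place of the glue calls:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stubs for the glue-layer calls. */
static void *hw_create_default_miss_action(void) { return malloc(1); }
static void hw_destroy_flow_action(void *act) { free(act); }

struct device_priv {
	void *default_miss_action; /* written at probe, read-only after */
};

static void
device_probe(struct device_priv *priv)
{
	/* Single-threaded probe context: no lock needed here. */
	priv->default_miss_action = hw_create_default_miss_action();
	if (!priv->default_miss_action)
		fprintf(stderr, "default miss action not supported\n");
}

static int
flow_apply_default_miss(struct device_priv *priv)
{
	if (!priv->default_miss_action)
		return -ENOTSUP; /* fail this flow, not the device */
	/* ... put priv->default_miss_action into the flow's action list ... */
	return 0;
}

static void
device_close(struct device_priv *priv)
{
	if (priv->default_miss_action) {
		hw_destroy_flow_action(priv->default_miss_action);
		priv->default_miss_action = NULL;
	}
}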
Signed-off-by: Suanming Mou --- drivers/net/mlx5/linux/mlx5_os.c | 7 ++++ drivers/net/mlx5/mlx5.c | 2 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow_dv.c | 86 ++-------------------------------------- 4 files changed, 14 insertions(+), 82 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 63ea55b..f0470a2 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1317,6 +1317,10 @@ err = mlx5_alloc_shared_dr(priv); if (err) goto error; + priv->default_miss_action = + mlx5_glue->dr_create_flow_action_default_miss(); + if (!priv->default_miss_action) + DRV_LOG(WARNING, "Default miss action not supported."); } if (config->devx && config->dv_flow_en && config->dest_tir) { priv->obj_ops = devx_obj_ops; @@ -1397,6 +1401,9 @@ close(priv->nl_socket_rdma); if (priv->vmwa_context) mlx5_vlan_vmwa_exit(priv->vmwa_context); + if (priv->default_miss_action) + mlx5_glue->destroy_flow_action + (priv->default_miss_action); if (own_domain_id) claim_zero(rte_eth_switch_domain_free(priv->domain_id)); mlx5_cache_list_destroy(&priv->hrxqs); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index fc9c5a9..1d57d16 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1203,6 +1203,8 @@ struct mlx5_dev_ctx_shared * priv->txqs = NULL; } mlx5_proc_priv_uninit(dev); + if (priv->default_miss_action) + mlx5_glue->destroy_flow_action(priv->default_miss_action); if (priv->mreg_cp_tbl) mlx5_hlist_destroy(priv->mreg_cp_tbl); mlx5_mprq_free_mp(dev); diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index b2312cf..944db8d 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -841,6 +841,7 @@ struct mlx5_priv { uint8_t skip_default_rss_reta; /* Skip configuration of default reta. */ uint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured. */ struct mlx5_mp_id mp_id; /* ID of a multi-process process */ + void *default_miss_action; LIST_HEAD(fdir, mlx5_fdir_flow) fdir_flows; /* fdir flows. */ }; diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index a2cb9ed..80df066 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -73,9 +73,6 @@ flow_dv_tbl_resource_release(struct mlx5_dev_ctx_shared *sh, struct mlx5_flow_tbl_resource *tbl); -static int -flow_dv_default_miss_resource_release(struct rte_eth_dev *dev); - /** * Initialize flow attributes structure according to flow items' types. * @@ -2770,42 +2767,6 @@ struct mlx5_cache_entry * } /** - * Find existing default miss resource or create and register a new one. - * - * @param[in, out] dev - * Pointer to rte_eth_dev structure. - * @param[out] error - * pointer to error structure. - * - * @return - * 0 on success otherwise -errno and errno is set. 
- */ -static int -flow_dv_default_miss_resource_register(struct rte_eth_dev *dev, - struct rte_flow_error *error) -{ - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_flow_default_miss_resource *cache_resource = - &sh->default_miss; - int cnt = rte_atomic32_read(&cache_resource->refcnt); - - if (!cnt) { - MLX5_ASSERT(cache_resource->action); - cache_resource->action = - mlx5_glue->dr_create_flow_action_default_miss(); - if (!cache_resource->action) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot create default miss action"); - DRV_LOG(DEBUG, "new default miss resource %p: refcnt %d++", - (void *)cache_resource->action, cnt); - } - rte_atomic32_inc(&cache_resource->refcnt); - return 0; -} - -/** * Find existing table port ID resource or create and register a new one. * * @param[in, out] dev @@ -9033,15 +8994,13 @@ struct mlx5_hlist_entry * dh->rix_hrxq = hrxq_idx; dv->actions[n++] = hrxq->action; } else if (dh->fate_action == MLX5_FLOW_FATE_DEFAULT_MISS) { - if (flow_dv_default_miss_resource_register - (dev, error)) { + if (!priv->sh->default_miss.action) { rte_flow_error_set (error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot create default miss resource"); - goto error_default_miss; + "default miss action not be created."); + goto error; } - dh->rix_default_fate = MLX5_FLOW_FATE_DEFAULT_MISS; dv->actions[n++] = priv->sh->default_miss.action; } err = mlx5_flow_os_create_flow(dv_h->matcher->matcher_object, @@ -9067,9 +9026,6 @@ struct mlx5_hlist_entry * } return 0; error: - if (dh->fate_action == MLX5_FLOW_FATE_DEFAULT_MISS) - flow_dv_default_miss_resource_release(dev); -error_default_miss: err = rte_errno; /* Save rte_errno before cleanup. */ SILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], flow->dev_handles, handle_idx, dh, next) { @@ -9198,36 +9154,6 @@ struct mlx5_hlist_entry * return flow_dv_tbl_resource_release(MLX5_SH(dev), &tbl_data->tbl); } -/** - * Release a default miss resource. - * - * @param dev - * Pointer to Ethernet device. - * @return - * 1 while a reference on it exists, 0 when freed. 
- */ -static int -flow_dv_default_miss_resource_release(struct rte_eth_dev *dev) -{ - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_flow_default_miss_resource *cache_resource = - &sh->default_miss; - - MLX5_ASSERT(cache_resource->action); - DRV_LOG(DEBUG, "default miss resource %p: refcnt %d--", - (void *)cache_resource->action, - rte_atomic32_read(&cache_resource->refcnt)); - if (rte_atomic32_dec_and_test(&cache_resource->refcnt)) { - claim_zero(mlx5_glue->destroy_flow_action - (cache_resource->action)); - DRV_LOG(DEBUG, "default miss resource %p: removed", - (void *)cache_resource->action); - return 0; - } - return 1; -} - void flow_dv_modify_remove_cb(struct mlx5_hlist *list __rte_unused, struct mlx5_hlist_entry *entry) @@ -9366,9 +9292,6 @@ struct mlx5_hlist_entry * case MLX5_FLOW_FATE_PORT_ID: flow_dv_port_id_action_resource_release(dev, handle); break; - case MLX5_FLOW_FATE_DEFAULT_MISS: - flow_dv_default_miss_resource_release(dev); - break; default: DRV_LOG(DEBUG, "Incorrect fate action:%d", handle->fate_action); break; @@ -9405,8 +9328,7 @@ struct mlx5_hlist_entry * dh->drv_flow = NULL; } if (dh->fate_action == MLX5_FLOW_FATE_DROP || - dh->fate_action == MLX5_FLOW_FATE_QUEUE || - dh->fate_action == MLX5_FLOW_FATE_DEFAULT_MISS) + dh->fate_action == MLX5_FLOW_FATE_QUEUE) flow_dv_fate_resource_release(dev, dh); if (dh->vf_vlan.tag && dh->vf_vlan.created) mlx5_vlan_vmwa_release(dev, &dh->vf_vlan); From patchwork Tue Oct 6 11:49:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79776 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 289A2A04BB; Tue, 6 Oct 2020 13:57:41 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DDA161BC22; Tue, 6 Oct 2020 13:49:59 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 8CC6A1BBD6 for ; Tue, 6 Oct 2020 13:49:55 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:52 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0g028553; Tue, 6 Oct 2020 14:49:51 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org Date: Tue, 6 Oct 2020 19:49:05 +0800 Message-Id: <1601984948-313027-23-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 22/25] net/mlx5: create global drop action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit creates the global drop action for flows instread of maintain it in flow insertion time. The uniqueu global drop action makes it thread safe. 
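Moving creation to probe time also obliges every later probe failure to release the shared action, which is why the error path in mlx5_os.c below guards each destroy with a pointer check before unwinding. A small sketch of that guarded, reverse-order unwind, with stub helpers standing in for the real calls:

#include <stdlib.h>
#include <string.h>

/* Hypothetical stubs for the resource calls. */
static void *drop_action_create(void) { return malloc(1); }
static void drop_action_destroy(void *h) { free(h); }
static void *miss_action_create(void) { return malloc(1); }
static void miss_action_destroy(void *a) { free(a); }
static int rest_of_probe(void) { return 0; }

struct priv {
	void *default_miss_action;
	void *drop_hrxq;
};

static int
device_probe(struct priv *p)
{
	memset(p, 0, sizeof(*p)); /* so the unwind can test pointers */
	p->default_miss_action = miss_action_create();
	/* A missing miss action is tolerated, so no failure branch. */
	p->drop_hrxq = drop_action_create();
	if (!p->drop_hrxq)
		goto error;
	if (rest_of_probe())
		goto error;
	return 0;
error:
	/* Guarded, reverse-order teardown shared by all failure points. */
	if (p->drop_hrxq)
		drop_action_destroy(p->drop_hrxq);
	if (p->default_miss_action)
		miss_action_destroy(p->default_miss_action);
	return -1;
}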
Signed-off-by: Suanming Mou --- drivers/net/mlx5/linux/mlx5_os.c | 5 +++++ drivers/net/mlx5/mlx5.c | 2 ++ drivers/net/mlx5/mlx5_flow_dv.c | 31 +++++++------------------------ drivers/net/mlx5/mlx5_flow_verbs.c | 35 ++++++++++++----------------------- 4 files changed, 26 insertions(+), 47 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index f0470a2..c3dda27 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1331,6 +1331,9 @@ } else { priv->obj_ops = ibv_obj_ops; } + priv->drop_queue.hrxq = mlx5_drop_action_create(eth_dev); + if (!priv->drop_queue.hrxq) + goto error; /* Supported Verbs flow priority number detection. */ err = mlx5_flow_discover_priorities(eth_dev); if (err < 0) { @@ -1401,6 +1404,8 @@ close(priv->nl_socket_rdma); if (priv->vmwa_context) mlx5_vlan_vmwa_exit(priv->vmwa_context); + if (eth_dev && priv->drop_queue.hrxq) + mlx5_drop_action_destroy(eth_dev); if (priv->default_miss_action) mlx5_glue->destroy_flow_action (priv->default_miss_action); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 1d57d16..d2b3cf1 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1203,6 +1203,8 @@ struct mlx5_dev_ctx_shared * priv->txqs = NULL; } mlx5_proc_priv_uninit(dev); + if (priv->drop_queue.hrxq) + mlx5_drop_action_destroy(dev); if (priv->default_miss_action) mlx5_glue->destroy_flow_action(priv->default_miss_action); if (priv->mreg_cp_tbl) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 80df066..ff91c8b 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -8951,9 +8951,7 @@ struct mlx5_hlist_entry * if (dv->transfer) { dv->actions[n++] = priv->sh->esw_drop_action; } else { - struct mlx5_hrxq *drop_hrxq; - drop_hrxq = mlx5_drop_action_create(dev); - if (!drop_hrxq) { + if (!priv->drop_queue.hrxq) { rte_flow_error_set (error, errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -8961,14 +8959,8 @@ struct mlx5_hlist_entry * "cannot get drop hash queue"); goto error; } - /* - * Drop queues will be released by the specify - * mlx5_drop_action_destroy() function. Assign - * the special index to hrxq to mark the queue - * has been allocated. - */ - dh->rix_hrxq = UINT32_MAX; - dv->actions[n++] = drop_hrxq->action; + dv->actions[n++] = + priv->drop_queue.hrxq->action; } } else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE) { struct mlx5_hrxq *hrxq; @@ -9030,14 +9022,9 @@ struct mlx5_hlist_entry * SILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], flow->dev_handles, handle_idx, dh, next) { /* hrxq is union, don't clear it if the flag is not set. 
*/ - if (dh->rix_hrxq) { - if (dh->fate_action == MLX5_FLOW_FATE_DROP) { - mlx5_drop_action_destroy(dev); - dh->rix_hrxq = 0; - } else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE) { - mlx5_hrxq_release(dev, dh->rix_hrxq); - dh->rix_hrxq = 0; - } + if (dh->fate_action == MLX5_FLOW_FATE_QUEUE && dh->rix_hrxq) { + mlx5_hrxq_release(dev, dh->rix_hrxq); + dh->rix_hrxq = 0; } if (dh->vf_vlan.tag && dh->vf_vlan.created) mlx5_vlan_vmwa_release(dev, &dh->vf_vlan); @@ -9280,9 +9267,6 @@ struct mlx5_hlist_entry * if (!handle->rix_fate) return; switch (handle->fate_action) { - case MLX5_FLOW_FATE_DROP: - mlx5_drop_action_destroy(dev); - break; case MLX5_FLOW_FATE_QUEUE: mlx5_hrxq_release(dev, handle->rix_hrxq); break; @@ -9327,8 +9311,7 @@ struct mlx5_hlist_entry * claim_zero(mlx5_flow_os_destroy_flow(dh->drv_flow)); dh->drv_flow = NULL; } - if (dh->fate_action == MLX5_FLOW_FATE_DROP || - dh->fate_action == MLX5_FLOW_FATE_QUEUE) + if (dh->fate_action == MLX5_FLOW_FATE_QUEUE) flow_dv_fate_resource_release(dev, dh); if (dh->vf_vlan.tag && dh->vf_vlan.created) mlx5_vlan_vmwa_release(dev, &dh->vf_vlan); diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c index 905da8a..e8a704b 100644 --- a/drivers/net/mlx5/mlx5_flow_verbs.c +++ b/drivers/net/mlx5/mlx5_flow_verbs.c @@ -72,12 +72,12 @@ }, }; struct ibv_flow *flow; - struct mlx5_hrxq *drop = mlx5_drop_action_create(dev); + struct mlx5_hrxq *drop = priv->drop_queue.hrxq; uint16_t vprio[] = { 8, 16 }; int i; int priority = 0; - if (!drop) { + if (!drop->qp) { rte_errno = ENOTSUP; return -rte_errno; } @@ -89,7 +89,6 @@ claim_zero(mlx5_glue->destroy_flow(flow)); priority = vprio[i]; } - mlx5_drop_action_destroy(dev); switch (priority) { case 8: priority = RTE_DIM(priority_map_3); @@ -1890,15 +1889,10 @@ handle->drv_flow = NULL; } /* hrxq is union, don't touch it only the flag is set. */ - if (handle->rix_hrxq) { - if (handle->fate_action == MLX5_FLOW_FATE_DROP) { - mlx5_drop_action_destroy(dev); - handle->rix_hrxq = 0; - } else if (handle->fate_action == - MLX5_FLOW_FATE_QUEUE) { - mlx5_hrxq_release(dev, handle->rix_hrxq); - handle->rix_hrxq = 0; - } + if (handle->rix_hrxq && + handle->fate_action == MLX5_FLOW_FATE_QUEUE) { + mlx5_hrxq_release(dev, handle->rix_hrxq); + handle->rix_hrxq = 0; } if (handle->vf_vlan.tag && handle->vf_vlan.created) mlx5_vlan_vmwa_release(dev, &handle->vf_vlan); @@ -1970,8 +1964,8 @@ dev_flow = &wks->flows[idx]; handle = dev_flow->handle; if (handle->fate_action == MLX5_FLOW_FATE_DROP) { - hrxq = mlx5_drop_action_create(dev); - if (!hrxq) { + hrxq = priv->drop_queue.hrxq; + if (hrxq->qp) { rte_flow_error_set (error, errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -2027,15 +2021,10 @@ SILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], flow->dev_handles, dev_handles, handle, next) { /* hrxq is union, don't touch it only the flag is set. 
*/ - if (handle->rix_hrxq) { - if (handle->fate_action == MLX5_FLOW_FATE_DROP) { - mlx5_drop_action_destroy(dev); - handle->rix_hrxq = 0; - } else if (handle->fate_action == - MLX5_FLOW_FATE_QUEUE) { - mlx5_hrxq_release(dev, handle->rix_hrxq); - handle->rix_hrxq = 0; - } + if (handle->rix_hrxq && + handle->fate_action == MLX5_FLOW_FATE_QUEUE) { + mlx5_hrxq_release(dev, handle->rix_hrxq); + handle->rix_hrxq = 0; } if (handle->vf_vlan.tag && handle->vf_vlan.created) mlx5_vlan_vmwa_release(dev, &handle->vf_vlan); From patchwork Tue Oct 6 11:49:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79777 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6B7DDA04BB; Tue, 6 Oct 2020 13:58:04 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3B3BB1BC29; Tue, 6 Oct 2020 13:50:02 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 662261BC29 for ; Tue, 6 Oct 2020 13:50:00 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:54 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0h028553; Tue, 6 Oct 2020 14:49:52 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org Date: Tue, 6 Oct 2020 19:49:06 +0800 Message-Id: <1601984948-313027-24-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 23/25] net/mlx5: make meter action thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit adds a spinlock for the meter action to make it thread safe. An atomic reference count is not enough, because the meter action must be created synchronously with the reference count increase. With only an atomic reference count, another thread may observe the count already increased while the action has not yet been created. Signed-off-by: Suanming Mou --- drivers/net/mlx5/mlx5_flow.h | 2 ++ drivers/net/mlx5/mlx5_flow_meter.c | 72 ++++++++++++++++++++------------------ 2 files changed, 39 insertions(+), 35 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2e060e6..e6890a4 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -748,6 +748,8 @@ struct mlx5_flow_meter { struct mlx5_flow_meter_profile *profile; /**< Meter profile parameters. */ + rte_spinlock_t sl; /**< Meter action spinlock. */ + /** Policer actions (per meter output color).
*/ enum rte_mtr_policer_action action[RTE_COLORS]; diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index b36bc7b..cba8389 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -679,6 +679,7 @@ fm->shared = !!shared; fm->policer_stats.stats_mask = params->stats_mask; fm->profile->ref_cnt++; + rte_spinlock_init(&fm->sl); return 0; error: mlx5_flow_destroy_policer_rules(dev, fm, &attr); @@ -1167,49 +1168,49 @@ struct mlx5_flow_meter * struct rte_flow_error *error) { struct mlx5_flow_meter *fm; + int ret = 0; fm = mlx5_flow_meter_find(priv, meter_id); if (fm == NULL) { rte_flow_error_set(error, ENOENT, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "Meter object id not valid"); - goto error; - } - if (!fm->shared && fm->ref_cnt) { - DRV_LOG(ERR, "Cannot share a non-shared meter."); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "Meter can't be shared"); - goto error; + return fm; } - if (!fm->ref_cnt++) { - MLX5_ASSERT(!fm->mfts->meter_action); + rte_spinlock_lock(&fm->sl); + if (fm->mfts->meter_action) { + if (fm->shared && + attr->transfer == fm->transfer && + attr->ingress == fm->ingress && + attr->egress == fm->egress) + fm->ref_cnt++; + else + ret = -1; + } else { fm->ingress = attr->ingress; fm->egress = attr->egress; fm->transfer = attr->transfer; + fm->ref_cnt = 1; /* This also creates the meter object. */ fm->mfts->meter_action = mlx5_flow_meter_action_create(priv, fm); - if (!fm->mfts->meter_action) - goto error_detach; - } else { - MLX5_ASSERT(fm->mfts->meter_action); - if (attr->transfer != fm->transfer || - attr->ingress != fm->ingress || - attr->egress != fm->egress) { - DRV_LOG(ERR, "meter I/O attributes do not " - "match flow I/O attributes."); - goto error_detach; + if (!fm->mfts->meter_action) { + fm->ref_cnt = 0; + fm->ingress = 0; + fm->egress = 0; + fm->transfer = 0; + ret = -1; + DRV_LOG(ERR, "meter action create failed."); } } - return fm; -error_detach: - mlx5_flow_meter_detach(fm); - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - fm->mfts->meter_action ? "Meter attr not match" : - "Meter action create failed"); -error: - return NULL; + rte_spinlock_unlock(&fm->sl); + if (ret) + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + fm->mfts->meter_action ? + "Meter attr not match" : + "Meter action create failed"); + return ret ? 
NULL : fm; } /** @@ -1222,15 +1223,16 @@ struct mlx5_flow_meter * mlx5_flow_meter_detach(struct mlx5_flow_meter *fm) { #ifdef HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER + rte_spinlock_lock(&fm->sl); MLX5_ASSERT(fm->ref_cnt); - if (--fm->ref_cnt) - return; - if (fm->mfts->meter_action) + if (--fm->ref_cnt == 0) { mlx5_glue->destroy_flow_action(fm->mfts->meter_action); - fm->mfts->meter_action = NULL; - fm->ingress = 0; - fm->egress = 0; - fm->transfer = 0; + fm->mfts->meter_action = NULL; + fm->ingress = 0; + fm->egress = 0; + fm->transfer = 0; + } + rte_spinlock_unlock(&fm->sl); #else (void)fm; #endif From patchwork Tue Oct 6 11:49:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79779 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id E075FA04BB; Tue, 6 Oct 2020 13:58:45 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 04B3E1BC86; Tue, 6 Oct 2020 13:50:05 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 665E71BC6F for ; Tue, 6 Oct 2020 13:50:00 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:55 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0i028553; Tue, 6 Oct 2020 14:49:54 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org Date: Tue, 6 Oct 2020 19:49:07 +0800 Message-Id: <1601984948-313027-25-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 24/25] net/mlx5: make VLAN network interface thread safe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit uses atomic operations on the VLAN VM workaround object reference count to make the object thread safe, so that it cannot be created or destroyed in parallel.
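The pattern relied on here can be shown standalone: the thread that raises the reference count from zero to one creates the interface, and the thread that drops the count back to zero destroys it. Below is a minimal compilable sketch using the same GCC __atomic builtins; struct vlan_entry, create_if() and delete_if() are hypothetical stand-ins for the vlan_dev[] slot and the mlx5_nl_vlan_vmwa_create()/mlx5_nl_vlan_vmwa_delete() calls, and a plain busy-wait stands in for rte_wait_until_equal_32():

#include <stdint.h>
#include <stdio.h>

struct vlan_entry {
	uint32_t refcnt;   /* protected only by atomics, no lock */
	uint32_t ifindex;  /* 0 while the interface does not exist */
};

/* Hypothetical stand-ins for the netlink create/delete calls. */
static uint32_t create_if(void) { return 42; }
static void delete_if(uint32_t ifindex) { (void)ifindex; }

/* The caller that raises refcnt from 0 to 1 creates the interface. */
static int
vlan_acquire(struct vlan_entry *e)
{
	if (__atomic_add_fetch(&e->refcnt, 1, __ATOMIC_RELAXED) == 1) {
		/* Wait until a concurrent releaser finished cleanup. */
		while (__atomic_load_n(&e->ifindex, __ATOMIC_RELAXED) != 0)
			;
		uint32_t idx = create_if();
		if (idx == 0) {
			/* Creation failed: give up the reference. */
			__atomic_store_n(&e->refcnt, 0, __ATOMIC_RELAXED);
			return -1;
		}
		__atomic_store_n(&e->ifindex, idx, __ATOMIC_RELAXED);
	}
	return 0;
}

/* The caller that drops refcnt back to 0 destroys the interface. */
static void
vlan_release(struct vlan_entry *e)
{
	if (__atomic_sub_fetch(&e->refcnt, 1, __ATOMIC_RELAXED) == 0) {
		delete_if(e->ifindex);
		/* Clear ifindex last, so acquirers can wait on it. */
		__atomic_store_n(&e->ifindex, 0, __ATOMIC_RELAXED);
	}
}

int
main(void)
{
	struct vlan_entry e = { 0, 0 };

	vlan_acquire(&e);
	vlan_acquire(&e);   /* second user only bumps the count */
	vlan_release(&e);
	vlan_release(&e);   /* interface destroyed here */
	printf("refcnt=%u ifindex=%u\n",
	       (unsigned)e.refcnt, (unsigned)e.ifindex);
	return 0;
}

The wait on ifindex covers the window in which another thread has already dropped the reference count to zero but has not yet finished deleting the interface, which is why the release path clears ifindex only after the delete call completes.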
Signed-off-by: Suanming Mou --- drivers/net/mlx5/linux/mlx5_vlan_os.c | 24 +++++++++++++++--------- 1 file changed, 15 insertions(+), 9 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_vlan_os.c b/drivers/net/mlx5/linux/mlx5_vlan_os.c index 92fc17d..00e160f 100644 --- a/drivers/net/mlx5/linux/mlx5_vlan_os.c +++ b/drivers/net/mlx5/linux/mlx5_vlan_os.c @@ -45,10 +45,11 @@ return; vlan->created = 0; MLX5_ASSERT(vlan_dev[vlan->tag].refcnt); - if (--vlan_dev[vlan->tag].refcnt == 0 && - vlan_dev[vlan->tag].ifindex) { + if (!__atomic_sub_fetch(&vlan_dev[vlan->tag].refcnt, + 1, __ATOMIC_RELAXED)) { mlx5_nl_vlan_vmwa_delete(vmwa, vlan_dev[vlan->tag].ifindex); - vlan_dev[vlan->tag].ifindex = 0; + __atomic_store_n(&vlan_dev[vlan->tag].ifindex, + 0, __ATOMIC_RELAXED); } } @@ -72,16 +73,21 @@ MLX5_ASSERT(priv->vmwa_context); if (vlan->created || !vmwa) return; - if (vlan_dev[vlan->tag].refcnt == 0) { - MLX5_ASSERT(!vlan_dev[vlan->tag].ifindex); + if (__atomic_add_fetch + (&vlan_dev[vlan->tag].refcnt, 1, __ATOMIC_RELAXED) == 1) { + /* Make sure ifindex is destroyed. */ + rte_wait_until_equal_32(&vlan_dev[vlan->tag].ifindex, + 0, __ATOMIC_RELAXED); vlan_dev[vlan->tag].ifindex = mlx5_nl_vlan_vmwa_create(vmwa, vmwa->vf_ifindex, vlan->tag); + if (!vlan_dev[vlan->tag].ifindex) { + __atomic_store_n(&vlan_dev[vlan->tag].refcnt, + 0, __ATOMIC_RELAXED); + return; + } } - if (vlan_dev[vlan->tag].ifindex) { - vlan_dev[vlan->tag].refcnt++; - vlan->created = 1; - } + vlan->created = 1; } /* From patchwork Tue Oct 6 11:49:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 79778 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 22EC8A04BB; Tue, 6 Oct 2020 13:58:25 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B16361BC76; Tue, 6 Oct 2020 13:50:03 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 679691BC71 for ; Tue, 6 Oct 2020 13:50:00 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:57 +0300 Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0j028553; Tue, 6 Oct 2020 14:49:55 +0300 From: Suanming Mou To: viacheslavo@nvidia.com, matan@nvidia.com Cc: rasland@nvidia.com, dev@dpdk.org, Xueming Li Date: Tue, 6 Oct 2020 19:49:08 +0800 Message-Id: <1601984948-313027-26-git-send-email-suanmingm@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> Subject: [dpdk-dev] [PATCH 25/25] net/mlx5: remove shared context lock X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Xueming Li To support multi-threaded flow insertion, this patch removes the shared data lock, since all resources now provide their own concurrent protection.
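The resulting diff is largely mechanical: each mutex-protected thunk is deleted and the lock-free, double-underscore-prefixed worker is renamed to take over its name. A minimal sketch of that before/after shape, using a hypothetical resource_get() operation rather than any PMD code:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical resource with its own fine-grained protection; after
 * this series, such per-resource locking (or an atomic/lock-free
 * structure) is the only synchronization a flow operation needs. */
struct resource {
	pthread_mutex_t lock;
	int refs;
};

/* The lock-free worker (the "__" prefix mirrors the PMD naming). */
static void
__resource_get(struct resource *r)
{
	pthread_mutex_lock(&r->lock);
	r->refs++;
	pthread_mutex_unlock(&r->lock);
}

/* The "before" shape: a coarse mutex-protected thunk wrapped every
 * worker. The patch deletes thunks like this one, drops the global
 * mutex, and renames __resource_get() to resource_get(). */
static pthread_mutex_t dv_mutex = PTHREAD_MUTEX_INITIALIZER;

static void
resource_get(struct resource *r)
{
	pthread_mutex_lock(&dv_mutex);
	__resource_get(r);
	pthread_mutex_unlock(&dv_mutex);
}

int
main(void)
{
	struct resource r = { PTHREAD_MUTEX_INITIALIZER, 0 };

	resource_get(&r);     /* old path: global lock, then worker */
	__resource_get(&r);   /* new path: worker only */
	printf("refs=%d\n", r.refs);
	return 0;
}

Dropping the coarse lock is only safe because the earlier patches in this series gave each shared resource (the meter and VLAN workaround objects among them) its own concurrent protection.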
Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_os.c | 2 - drivers/net/mlx5/mlx5.h | 1 - drivers/net/mlx5/mlx5_flow_dv.c | 140 ++++----------------------------------- 3 files changed, 14 insertions(+), 129 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index c3dda27..776b6a3 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -310,7 +310,6 @@ err = errno; goto error; } - pthread_mutex_init(&sh->dv_mutex, NULL); sh->tx_domain = domain; #ifdef HAVE_MLX5DV_DR_ESWITCH if (priv->config.dv_esw_en) { @@ -417,7 +416,6 @@ mlx5_glue->destroy_flow_action(sh->pop_vlan_action); sh->pop_vlan_action = NULL; } - pthread_mutex_destroy(&sh->dv_mutex); #endif /* HAVE_MLX5DV_DR */ if (sh->encaps_decaps) { mlx5_hlist_destroy(sh->encaps_decaps); diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 944db8d..29ff194 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -620,7 +620,6 @@ struct mlx5_dev_ctx_shared { /* Packet pacing related structure. */ struct mlx5_dev_txpp txpp; /* Shared DV/DR flow data section. */ - pthread_mutex_t dv_mutex; /* DV context mutex. */ uint32_t dv_meta_mask; /* flow META metadata supported mask. */ uint32_t dv_mark_mask; /* flow MARK metadata supported mask. */ uint32_t dv_regc0_mask; /* available bits of metatada reg_c[0]. */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index ff91c8b..25d43ca 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -267,45 +267,6 @@ struct field_modify_info modify_tcp[] = { } } -/** - * Acquire the synchronizing object to protect multithreaded access - * to shared dv context. Lock occurs only if context is actually - * shared, i.e. we have multiport IB device and representors are - * created. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - */ -static void -flow_dv_shared_lock(struct rte_eth_dev *dev) -{ - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; - - if (sh->dv_refcnt > 1) { - int ret; - - ret = pthread_mutex_lock(&sh->dv_mutex); - MLX5_ASSERT(!ret); - (void)ret; - } -} - -static void -flow_dv_shared_unlock(struct rte_eth_dev *dev) -{ - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; - - if (sh->dv_refcnt > 1) { - int ret; - - ret = pthread_mutex_unlock(&sh->dv_mutex); - MLX5_ASSERT(!ret); - (void)ret; - } -} - /* Update VLAN's VID/PCP based on input rte_flow_action. * * @param[in] action @@ -4880,7 +4841,7 @@ struct mlx5_hlist_entry * * Index to the counter handler. */ static void -flow_dv_counter_release(struct rte_eth_dev *dev, uint32_t counter) +flow_dv_counter_free(struct rte_eth_dev *dev, uint32_t counter) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_counter_pool *pool = NULL; @@ -8151,12 +8112,12 @@ struct mlx5_hlist_entry * * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -__flow_dv_translate(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) +flow_dv_translate(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_config *dev_conf = &priv->config; @@ -8926,8 +8887,8 @@ struct mlx5_hlist_entry * * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -__flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow, - struct rte_flow_error *error) +flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow, + struct rte_flow_error *error) { struct mlx5_flow_dv_workspace *dv; struct mlx5_flow_handle *dh; @@ -9293,7 +9254,7 @@ struct mlx5_hlist_entry * * Pointer to flow structure. */ static void -__flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow) +flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow) { struct mlx5_flow_handle *dh; uint32_t handle_idx; @@ -9329,16 +9290,16 @@ struct mlx5_hlist_entry * * Pointer to flow structure. */ static void -__flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) +flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) { struct mlx5_flow_handle *dev_handle; struct mlx5_priv *priv = dev->data->dev_private; if (!flow) return; - __flow_dv_remove(dev, flow); + flow_dv_remove(dev, flow); if (flow->counter) { - flow_dv_counter_release(dev, flow->counter); + flow_dv_counter_free(dev, flow->counter); flow->counter = 0; } if (flow->meter) { @@ -9975,85 +9936,12 @@ struct mlx5_hlist_entry * } /* - * Mutex-protected thunk to lock-free __flow_dv_translate(). - */ -static int -flow_dv_translate(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) -{ - int ret; - - flow_dv_shared_lock(dev); - ret = __flow_dv_translate(dev, dev_flow, attr, items, actions, error); - flow_dv_shared_unlock(dev); - return ret; -} - -/* - * Mutex-protected thunk to lock-free __flow_dv_apply(). - */ -static int -flow_dv_apply(struct rte_eth_dev *dev, - struct rte_flow *flow, - struct rte_flow_error *error) -{ - int ret; - - flow_dv_shared_lock(dev); - ret = __flow_dv_apply(dev, flow, error); - flow_dv_shared_unlock(dev); - return ret; -} - -/* - * Mutex-protected thunk to lock-free __flow_dv_remove(). - */ -static void -flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow) -{ - flow_dv_shared_lock(dev); - __flow_dv_remove(dev, flow); - flow_dv_shared_unlock(dev); -} - -/* - * Mutex-protected thunk to lock-free __flow_dv_destroy(). - */ -static void -flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) -{ - flow_dv_shared_lock(dev); - __flow_dv_destroy(dev, flow); - flow_dv_shared_unlock(dev); -} - -/* * Mutex-protected thunk to lock-free flow_dv_counter_alloc(). */ static uint32_t flow_dv_counter_allocate(struct rte_eth_dev *dev) { - uint32_t cnt; - - flow_dv_shared_lock(dev); - cnt = flow_dv_counter_alloc(dev, 0, 0, 1, 0); - flow_dv_shared_unlock(dev); - return cnt; -} - -/* - * Mutex-protected thunk to lock-free flow_dv_counter_release(). 
- */ -static void -flow_dv_counter_free(struct rte_eth_dev *dev, uint32_t cnt) -{ - flow_dv_shared_lock(dev); - flow_dv_counter_release(dev, cnt); - flow_dv_shared_unlock(dev); + return flow_dv_counter_alloc(dev, 0, 0, 1, 0); } const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {