From patchwork Wed Nov 11 07:14:14 2020
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 83982
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Shahaf Shuler, Viacheslav Ovsiienko, Xueming Li
Date: Wed, 11 Nov 2020 09:14:14 +0200
Message-ID: <20201111071417.21177-2-getelson@nvidia.com>
In-Reply-To: <20201111071417.21177-1-getelson@nvidia.com>
References: <20201111071417.21177-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH 1/4] net/mlx5: fix offloaded tunnel allocation

The original patch allocated tunnel offload objects with invalid
indexes. As a result, PMD tunnel object allocation failed. In this
patch, the indexed pool provides both the index and the memory for a
new tunnel offload object. The tunnel offload ipool is also moved to
DV-enabled code only.
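For illustration only, a minimal sketch of the allocation pattern this fix
switches to (the helper name is invented for the example; the types and
calls are the ones used in the diff below, with error reporting trimmed):

/* One indexed-pool call returns both the zeroed tunnel object and its
 * index, so the id always refers to the memory that was just allocated.
 */
static struct mlx5_flow_tunnel *
tunnel_alloc_sketch(struct mlx5_priv *priv)
{
    struct mlx5_indexed_pool *ipool =
        priv->sh->ipool[MLX5_IPOOL_TUNNEL_OFFLOAD];
    struct mlx5_flow_tunnel *tunnel;
    uint32_t id;

    tunnel = mlx5_ipool_zmalloc(ipool, &id); /* memory + index together */
    if (!tunnel)
        return NULL;
    if (id >= MLX5_MAX_TUNNELS) {
        /* keep the id inside the range the tunnel registers can carry */
        mlx5_ipool_free(ipool, id);
        return NULL;
    }
    return tunnel;
}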
Fixes: f2e8093 ("net/mlx5: use indexed pool as id generator") Signed-off-by: Gregory Etelson Acked-by: Viacheslav Ovsiienko --- drivers/net/mlx5/mlx5.c | 50 ++++++++++++++++++------------------ drivers/net/mlx5/mlx5.h | 4 +-- drivers/net/mlx5/mlx5_flow.c | 41 ++++++++++------------------- 3 files changed, 40 insertions(+), 55 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 43344391df..e1faa819a3 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -186,7 +186,7 @@ static pthread_mutex_t mlx5_dev_ctx_list_mutex = PTHREAD_MUTEX_INITIALIZER; static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { #ifdef HAVE_IBV_FLOW_DV_SUPPORT - { + [MLX5_IPOOL_DECAP_ENCAP] = { .size = sizeof(struct mlx5_flow_dv_encap_decap_resource), .trunk_size = 64, .grow_trunk = 3, @@ -197,7 +197,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_encap_decap_ipool", }, - { + [MLX5_IPOOL_PUSH_VLAN] = { .size = sizeof(struct mlx5_flow_dv_push_vlan_action_resource), .trunk_size = 64, .grow_trunk = 3, @@ -208,7 +208,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_push_vlan_ipool", }, - { + [MLX5_IPOOL_TAG] = { .size = sizeof(struct mlx5_flow_dv_tag_resource), .trunk_size = 64, .grow_trunk = 3, @@ -219,7 +219,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_tag_ipool", }, - { + [MLX5_IPOOL_PORT_ID] = { .size = sizeof(struct mlx5_flow_dv_port_id_action_resource), .trunk_size = 64, .grow_trunk = 3, @@ -230,7 +230,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_port_id_ipool", }, - { + [MLX5_IPOOL_JUMP] = { .size = sizeof(struct mlx5_flow_tbl_data_entry), .trunk_size = 64, .grow_trunk = 3, @@ -241,7 +241,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_jump_ipool", }, - { + [MLX5_IPOOL_SAMPLE] = { .size = sizeof(struct mlx5_flow_dv_sample_resource), .trunk_size = 64, .grow_trunk = 3, @@ -252,7 +252,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_sample_ipool", }, - { + [MLX5_IPOOL_DEST_ARRAY] = { .size = sizeof(struct mlx5_flow_dv_dest_array_resource), .trunk_size = 64, .grow_trunk = 3, @@ -263,8 +263,19 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_dest_array_ipool", }, + [MLX5_IPOOL_TUNNEL_OFFLOAD] = { + .size = sizeof(struct mlx5_flow_tunnel), + .need_lock = 1, + .release_mem_en = 1, + .type = "mlx5_tunnel_offload", + }, + [MLX5_IPOOL_TUNNEL_FLOW_TBL_ID] = { + .size = 0, + .need_lock = 1, + .type = "mlx5_flow_tnl_tbl_ipool", + }, #endif - { + [MLX5_IPOOL_MTR] = { .size = sizeof(struct mlx5_flow_meter), .trunk_size = 64, .grow_trunk = 3, @@ -275,7 +286,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_meter_ipool", }, - { + [MLX5_IPOOL_MCP] = { .size = sizeof(struct mlx5_flow_mreg_copy_resource), .trunk_size = 64, .grow_trunk = 3, @@ -286,7 +297,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_mcp_ipool", }, - { + [MLX5_IPOOL_HRXQ] = { .size = (sizeof(struct mlx5_hrxq) + MLX5_RSS_HASH_KEY_LEN), .trunk_size = 64, .grow_trunk = 3, @@ -297,7 +308,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_hrxq_ipool", }, - { + 
[MLX5_IPOOL_MLX5_FLOW] = { /* * MLX5_IPOOL_MLX5_FLOW size varies for DV and VERBS flows. * It set in run time according to PCI function configuration. @@ -312,7 +323,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_flow_handle_ipool", }, - { + [MLX5_IPOOL_RTE_FLOW] = { .size = sizeof(struct rte_flow), .trunk_size = 4096, .need_lock = 1, @@ -321,22 +332,12 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "rte_flow_ipool", }, - { + [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID] = { .size = 0, .need_lock = 1, .type = "mlx5_flow_rss_id_ipool", }, - { - .size = 0, - .need_lock = 1, - .type = "mlx5_flow_tnl_flow_ipool", - }, - { - .size = 0, - .need_lock = 1, - .type = "mlx5_flow_tnl_tbl_ipool", - }, - { + [MLX5_IPOOL_RSS_SHARED_ACTIONS] = { .size = sizeof(struct mlx5_shared_action_rss), .trunk_size = 64, .grow_trunk = 3, @@ -347,7 +348,6 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "mlx5_shared_action_rss", }, - }; diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 7ee63a7a14..af097d6a7e 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -44,6 +44,8 @@ enum mlx5_ipool_index { MLX5_IPOOL_JUMP, /* Pool for jump resource. */ MLX5_IPOOL_SAMPLE, /* Pool for sample resource. */ MLX5_IPOOL_DEST_ARRAY, /* Pool for destination array resource. */ + MLX5_IPOOL_TUNNEL_OFFLOAD, /* Pool for tunnel offload context */ + MLX5_IPOOL_TUNNEL_FLOW_TBL_ID, /* Pool for tunnel table ID. */ #endif MLX5_IPOOL_MTR, /* Pool for meter resource. */ MLX5_IPOOL_MCP, /* Pool for metadata resource. */ @@ -51,8 +53,6 @@ enum mlx5_ipool_index { MLX5_IPOOL_MLX5_FLOW, /* Pool for mlx5 flow handle. */ MLX5_IPOOL_RTE_FLOW, /* Pool for rte_flow. */ MLX5_IPOOL_RSS_EXPANTION_FLOW_ID, /* Pool for Queue/RSS flow ID. */ - MLX5_IPOOL_TUNNEL_ID, /* Pool for flow tunnel ID. */ - MLX5_IPOOL_TNL_TBL_ID, /* Pool for tunnel table ID. */ MLX5_IPOOL_RSS_SHARED_ACTIONS, /* Pool for RSS shared actions. 
*/ MLX5_IPOOL_MAX, }; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 92adfcacca..31c9d82b4a 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -6934,7 +6934,7 @@ mlx5_flow_tunnel_grp2tbl_remove_cb(struct mlx5_hlist *list, struct mlx5_dev_ctx_shared *sh = list->ctx; struct tunnel_tbl_entry *tte = container_of(entry, typeof(*tte), hash); - mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TNL_TBL_ID], + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TUNNEL_FLOW_TBL_ID], tunnel_flow_tbl_to_id(tte->flow_table)); mlx5_free(tte); } @@ -6952,12 +6952,12 @@ mlx5_flow_tunnel_grp2tbl_create_cb(struct mlx5_hlist *list, SOCKET_ID_ANY); if (!tte) goto err; - mlx5_ipool_malloc(sh->ipool[MLX5_IPOOL_TNL_TBL_ID], + mlx5_ipool_malloc(sh->ipool[MLX5_IPOOL_TUNNEL_FLOW_TBL_ID], &tte->flow_table); if (tte->flow_table >= MLX5_MAX_TABLES) { DRV_LOG(ERR, "Tunnel TBL ID %d exceed max limit.", tte->flow_table); - mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TNL_TBL_ID], + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TUNNEL_FLOW_TBL_ID], tte->flow_table); goto err; } else if (!tte->flow_table) { @@ -7465,14 +7465,13 @@ mlx5_flow_tunnel_free(struct rte_eth_dev *dev, struct mlx5_flow_tunnel *tunnel) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_indexed_pool *ipool; DRV_LOG(DEBUG, "port %u release pmd tunnel id=0x%x", dev->data->port_id, tunnel->tunnel_id); - RTE_VERIFY(!__atomic_load_n(&tunnel->refctn, __ATOMIC_RELAXED)); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_TUNNEL_ID], - tunnel->tunnel_id); mlx5_hlist_destroy(tunnel->groups); - mlx5_free(tunnel); + ipool = priv->sh->ipool[MLX5_IPOOL_TUNNEL_OFFLOAD]; + mlx5_ipool_free(ipool, tunnel->tunnel_id); } static struct mlx5_flow_tunnel * @@ -7494,39 +7493,25 @@ mlx5_flow_tunnel_allocate(struct rte_eth_dev *dev, const struct rte_flow_tunnel *app_tunnel) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_indexed_pool *ipool; struct mlx5_flow_tunnel *tunnel; uint32_t id; - mlx5_ipool_malloc(priv->sh->ipool[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], - &id); + ipool = priv->sh->ipool[MLX5_IPOOL_TUNNEL_OFFLOAD]; + tunnel = mlx5_ipool_zmalloc(ipool, &id); + if (!tunnel) + return NULL; if (id >= MLX5_MAX_TUNNELS) { - mlx5_ipool_free(priv->sh->ipool - [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], id); + mlx5_ipool_free(ipool, id); DRV_LOG(ERR, "Tunnel ID %d exceed max limit.", id); return NULL; - } else if (!id) { - return NULL; - } - /** - * mlx5 flow tunnel is an auxlilary data structure - * It's not part of IO. 
No need to allocate it from - * huge pages pools dedicated for IO - */ - tunnel = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, sizeof(*tunnel), - 0, SOCKET_ID_ANY); - if (!tunnel) { - mlx5_ipool_free(priv->sh->ipool - [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], id); - return NULL; } tunnel->groups = mlx5_hlist_create("tunnel groups", 1024, 0, 0, mlx5_flow_tunnel_grp2tbl_create_cb, NULL, mlx5_flow_tunnel_grp2tbl_remove_cb); if (!tunnel->groups) { - mlx5_ipool_free(priv->sh->ipool - [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], id); - mlx5_free(tunnel); + mlx5_ipool_free(ipool, id); return NULL; } tunnel->groups->ctx = priv->sh;

From patchwork Wed Nov 11 07:14:15 2020
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 83983
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Shahaf Shuler, Viacheslav Ovsiienko, Suanming Mou
Date: Wed, 11 Nov 2020 09:14:15 +0200
Message-ID: <20201111071417.21177-3-getelson@nvidia.com>
In-Reply-To: <20201111071417.21177-1-getelson@nvidia.com>
References: <20201111071417.21177-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH 2/4] net/mlx5: fix tunnel offload hub multi-thread protection

The original patch removed active tunnel offload objects from the
tunnels db list. That action led to a PMD crash. The current patch
isolates the tunnels db list behind a separate API that manages MT
protection of the tunnel offload db.
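A condensed usage sketch of the new accessor (the helper names here are
illustrative; the signature of mlx5_access_tunnel_offload_db() and the ctx
pattern follow the diff below): match() picks an entry while the hub
spinlock is held, hit() or miss() consumes the result, and lock_op selects
whether they run with the lock still held.

struct find_tunnel_id_sketch_ctx {
    uint32_t tunnel_id;
    struct mlx5_flow_tunnel *tunnel;
};

static bool
sketch_match(struct rte_eth_dev *dev,
             struct mlx5_flow_tunnel *tunnel, const void *x)
{
    const struct find_tunnel_id_sketch_ctx *ctx = x;

    RTE_SET_USED(dev);
    return tunnel->tunnel_id == ctx->tunnel_id; /* runs under thub->sl */
}

static void
sketch_hit(struct rte_eth_dev *dev, struct mlx5_flow_tunnel *tunnel, void *x)
{
    struct find_tunnel_id_sketch_ctx *ctx = x;

    RTE_SET_USED(dev);
    ctx->tunnel = tunnel; /* hand the matched tunnel back to the caller */
}

static struct mlx5_flow_tunnel *
find_tunnel_by_id_sketch(struct rte_eth_dev *dev, uint32_t id)
{
    struct find_tunnel_id_sketch_ctx ctx = { .tunnel_id = id };

    /* lock_op == true: hit()/miss() run while the spinlock is held. */
    mlx5_access_tunnel_offload_db(dev, sketch_match, sketch_hit,
                                  NULL, &ctx, true);
    return ctx.tunnel;
}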
Fixes: e4f5880 ("net/mlx5: make tunnel hub list thread safe") Signed-off-by: Gregory Etelson Acked-by: Viacheslav Ovsiienko --- drivers/net/mlx5/mlx5_flow.c | 256 +++++++++++++++++++++++++---------- drivers/net/mlx5/mlx5_flow.h | 6 +- 2 files changed, 192 insertions(+), 70 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 31c9d82b4a..2f01e34033 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -33,6 +33,14 @@ #include "mlx5_common_os.h" #include "rte_pmd_mlx5.h" +static bool +mlx5_access_tunnel_offload_db + (struct rte_eth_dev *dev, + bool (*match)(struct rte_eth_dev *, + struct mlx5_flow_tunnel *, const void *), + void (*hit)(struct rte_eth_dev *, struct mlx5_flow_tunnel *, void *), + void (*miss)(struct rte_eth_dev *, void *), + void *ctx, bool lock_op); static struct mlx5_flow_tunnel * mlx5_find_tunnel_id(struct rte_eth_dev *dev, uint32_t id); static void @@ -661,29 +669,68 @@ mlx5_flow_tunnel_match(struct rte_eth_dev *dev, return 0; } +struct tunnel_db_element_release_ctx { + struct rte_flow_item *items; + struct rte_flow_action *actions; + uint32_t num_elements; + struct rte_flow_error *error; + int ret; +}; + +static bool +tunnel_element_release_match(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, const void *x) +{ + const struct tunnel_db_element_release_ctx *ctx = x; + + RTE_SET_USED(dev); + if (ctx->num_elements != 1) + return false; + else if (ctx->items) + return ctx->items == &tunnel->item; + else if (ctx->actions) + return ctx->actions == &tunnel->action; + + return false; +} + +static void +tunnel_element_release_hit(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, void *x) +{ + struct tunnel_db_element_release_ctx *ctx = x; + ctx->ret = 0; + if (!__atomic_sub_fetch(&tunnel->refctn, 1, __ATOMIC_RELAXED)) + mlx5_flow_tunnel_free(dev, tunnel); +} + +static void +tunnel_element_release_miss(struct rte_eth_dev *dev, void *x) +{ + struct tunnel_db_element_release_ctx *ctx = x; + RTE_SET_USED(dev); + ctx->ret = rte_flow_error_set(ctx->error, EINVAL, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "invalid argument"); +} + static int mlx5_flow_item_release(struct rte_eth_dev *dev, struct rte_flow_item *pmd_items, uint32_t num_items, struct rte_flow_error *err) { - struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); - struct mlx5_flow_tunnel *tun; + struct tunnel_db_element_release_ctx ctx = { + .items = pmd_items, + .actions = NULL, + .num_elements = num_items, + .error = err, + }; - rte_spinlock_lock(&thub->sl); - LIST_FOREACH(tun, &thub->tunnels, chain) { - if (&tun->item == pmd_items) { - LIST_REMOVE(tun, chain); - break; - } - } - rte_spinlock_unlock(&thub->sl); - if (!tun || num_items != 1) - return rte_flow_error_set(err, EINVAL, - RTE_FLOW_ERROR_TYPE_HANDLE, NULL, - "invalid argument"); - if (!__atomic_sub_fetch(&tun->refctn, 1, __ATOMIC_RELAXED)) - mlx5_flow_tunnel_free(dev, tun); - return 0; + mlx5_access_tunnel_offload_db(dev, tunnel_element_release_match, + tunnel_element_release_hit, + tunnel_element_release_miss, &ctx, false); + + return ctx.ret; } static int @@ -691,25 +738,18 @@ mlx5_flow_action_release(struct rte_eth_dev *dev, struct rte_flow_action *pmd_actions, uint32_t num_actions, struct rte_flow_error *err) { - struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); - struct mlx5_flow_tunnel *tun; + struct tunnel_db_element_release_ctx ctx = { + .items = NULL, + .actions = pmd_actions, + .num_elements = num_actions, + .error = err, + }; - rte_spinlock_lock(&thub->sl); - 
LIST_FOREACH(tun, &thub->tunnels, chain) { - if (&tun->action == pmd_actions) { - LIST_REMOVE(tun, chain); - break; - } - } - rte_spinlock_unlock(&thub->sl); - if (!tun || num_actions != 1) - return rte_flow_error_set(err, EINVAL, - RTE_FLOW_ERROR_TYPE_HANDLE, NULL, - "invalid argument"); - if (!__atomic_sub_fetch(&tun->refctn, 1, __ATOMIC_RELAXED)) - mlx5_flow_tunnel_free(dev, tun); + mlx5_access_tunnel_offload_db(dev, tunnel_element_release_match, + tunnel_element_release_hit, + tunnel_element_release_miss, &ctx, false); - return 0; + return ctx.ret; } static int @@ -5889,11 +5929,8 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, if (flow->tunnel) { struct mlx5_flow_tunnel *tunnel; - rte_spinlock_lock(&mlx5_tunnel_hub(dev)->sl); tunnel = mlx5_find_tunnel_id(dev, flow->tunnel_id); RTE_VERIFY(tunnel); - LIST_REMOVE(tunnel, chain); - rte_spinlock_unlock(&mlx5_tunnel_hub(dev)->sl); if (!__atomic_sub_fetch(&tunnel->refctn, 1, __ATOMIC_RELAXED)) mlx5_flow_tunnel_free(dev, tunnel); } @@ -7464,28 +7501,87 @@ static void mlx5_flow_tunnel_free(struct rte_eth_dev *dev, struct mlx5_flow_tunnel *tunnel) { + /* no tunnel hub spinlock protection */ struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); struct mlx5_indexed_pool *ipool; DRV_LOG(DEBUG, "port %u release pmd tunnel id=0x%x", dev->data->port_id, tunnel->tunnel_id); + rte_spinlock_lock(&thub->sl); + LIST_REMOVE(tunnel, chain); + rte_spinlock_unlock(&thub->sl); mlx5_hlist_destroy(tunnel->groups); ipool = priv->sh->ipool[MLX5_IPOOL_TUNNEL_OFFLOAD]; mlx5_ipool_free(ipool, tunnel->tunnel_id); } -static struct mlx5_flow_tunnel * -mlx5_find_tunnel_id(struct rte_eth_dev *dev, uint32_t id) +static bool +mlx5_access_tunnel_offload_db + (struct rte_eth_dev *dev, + bool (*match)(struct rte_eth_dev *, + struct mlx5_flow_tunnel *, const void *), + void (*hit)(struct rte_eth_dev *, struct mlx5_flow_tunnel *, void *), + void (*miss)(struct rte_eth_dev *, void *), + void *ctx, bool lock_op) { + bool verdict = false; struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); - struct mlx5_flow_tunnel *tun; + struct mlx5_flow_tunnel *tunnel; - LIST_FOREACH(tun, &thub->tunnels, chain) { - if (tun->tunnel_id == id) + rte_spinlock_lock(&thub->sl); + LIST_FOREACH(tunnel, &thub->tunnels, chain) { + verdict = match(dev, tunnel, (const void *)ctx); + if (verdict) break; } + if (!lock_op) + rte_spinlock_unlock(&thub->sl); + if (verdict && hit) + hit(dev, tunnel, ctx); + if (!verdict && miss) + miss(dev, ctx); + if (lock_op) + rte_spinlock_unlock(&thub->sl); - return tun; + return verdict; +} + +struct tunnel_db_find_tunnel_id_ctx { + uint32_t tunnel_id; + struct mlx5_flow_tunnel *tunnel; +}; + +static bool +find_tunnel_id_match(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, const void *x) +{ + const struct tunnel_db_find_tunnel_id_ctx *ctx = x; + + RTE_SET_USED(dev); + return tunnel->tunnel_id == ctx->tunnel_id; +} + +static void +find_tunnel_id_hit(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, void *x) +{ + struct tunnel_db_find_tunnel_id_ctx *ctx = x; + RTE_SET_USED(dev); + ctx->tunnel = tunnel; +} + +static struct mlx5_flow_tunnel * +mlx5_find_tunnel_id(struct rte_eth_dev *dev, uint32_t id) +{ + struct tunnel_db_find_tunnel_id_ctx ctx = { + .tunnel_id = id, + }; + + mlx5_access_tunnel_offload_db(dev, find_tunnel_id_match, + find_tunnel_id_hit, NULL, &ctx, true); + + return ctx.tunnel; } static struct mlx5_flow_tunnel * @@ -7533,38 +7629,60 @@ 
mlx5_flow_tunnel_allocate(struct rte_eth_dev *dev, return tunnel; } +struct tunnel_db_get_tunnel_ctx { + const struct rte_flow_tunnel *app_tunnel; + struct mlx5_flow_tunnel *tunnel; +}; + +static bool get_tunnel_match(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, const void *x) +{ + const struct tunnel_db_get_tunnel_ctx *ctx = x; + + RTE_SET_USED(dev); + return !memcmp(ctx->app_tunnel, &tunnel->app_tunnel, + sizeof(*ctx->app_tunnel)); +} + +static void get_tunnel_hit(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, void *x) +{ + /* called under tunnel spinlock protection */ + struct tunnel_db_get_tunnel_ctx *ctx = x; + + RTE_SET_USED(dev); + tunnel->refctn++; + ctx->tunnel = tunnel; +} + +static void get_tunnel_miss(struct rte_eth_dev *dev, void *x) +{ + /* called under tunnel spinlock protection */ + struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); + struct tunnel_db_get_tunnel_ctx *ctx = x; + + rte_spinlock_unlock(&thub->sl); + ctx->tunnel = mlx5_flow_tunnel_allocate(dev, ctx->app_tunnel); + ctx->tunnel->refctn = 1; + rte_spinlock_lock(&thub->sl); + if (ctx->tunnel) + LIST_INSERT_HEAD(&thub->tunnels, ctx->tunnel, chain); +} + + static int mlx5_get_flow_tunnel(struct rte_eth_dev *dev, const struct rte_flow_tunnel *app_tunnel, struct mlx5_flow_tunnel **tunnel) { - int ret; - struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); - struct mlx5_flow_tunnel *tun; - - rte_spinlock_lock(&thub->sl); - LIST_FOREACH(tun, &thub->tunnels, chain) { - if (!memcmp(app_tunnel, &tun->app_tunnel, - sizeof(*app_tunnel))) { - *tunnel = tun; - ret = 0; - break; - } - } - if (!tun) { - tun = mlx5_flow_tunnel_allocate(dev, app_tunnel); - if (tun) { - LIST_INSERT_HEAD(&thub->tunnels, tun, chain); - *tunnel = tun; - } else { - ret = -ENOMEM; - } - } - rte_spinlock_unlock(&thub->sl); - if (tun) - __atomic_add_fetch(&tun->refctn, 1, __ATOMIC_RELAXED); + struct tunnel_db_get_tunnel_ctx ctx = { + .app_tunnel = app_tunnel, + }; - return ret; + mlx5_access_tunnel_offload_db(dev, get_tunnel_match, get_tunnel_hit, + get_tunnel_miss, &ctx, true); + *tunnel = ctx.tunnel; + return ctx.tunnel ? 0 : -ENOMEM; } void mlx5_release_tunnel_hub(struct mlx5_dev_ctx_shared *sh, uint16_t port_id) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index e3a5030785..bdf2c50090 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -950,8 +950,12 @@ struct mlx5_flow_tunnel { /** PMD tunnel related context */ struct mlx5_flow_tunnel_hub { + /* Tunnels list + * Access to the list MUST be MT protected + */ LIST_HEAD(, mlx5_flow_tunnel) tunnels; - rte_spinlock_t sl; /* Tunnel list spinlock. 
*/ + /* protect access to the tunnels list */ + rte_spinlock_t sl; struct mlx5_hlist *groups; /** non tunnel groups */ };

From patchwork Wed Nov 11 07:14:16 2020
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 83984
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Shahaf Shuler, Viacheslav Ovsiienko, Xueming Li
Date: Wed, 11 Nov 2020 09:14:16 +0200
Message-ID: <20201111071417.21177-4-getelson@nvidia.com>
In-Reply-To: <20201111071417.21177-1-getelson@nvidia.com>
References: <20201111071417.21177-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH 3/4] net/mlx5: fix PMD crash after tunnel offload match rule destruction

The new flow table resource management API triggered a PMD crash in
tunnel offload mode when a tunnel match flow rule was inserted before
the tunnel set rule. The reason for the crash was double flow table
registration: the table was registered first by the tunnel offload
code and once more by the PMD code as part of general table
processing. The table counter was decremented only once during rule
destruction, which caused a resource leak that triggered the crash.
This patch updates the PMD registration with the tunnel offload
parameters and removes the table registration from the tunnel-related
code.
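The shape of the fix, reduced to two fragments taken from the
flow_dv_matcher_register() and tunnel_flow_group_to_flow_table() hunks
below (not complete functions; shown only to make the single-registration
point explicit):

/* Matcher registration is now the one place that takes the table
 * reference, and it carries the tunnel context; the extra tunnel-side
 * flow_dv_tbl_resource_get() call in flow_dv_translate() is removed.
 */
tbl = flow_dv_tbl_resource_get(dev, key->table_id, key->direction,
                               key->domain, dev_flow->external,
                               tunnel, group_id, 0, error);
if (!tbl)
    return -rte_errno;

/* The tunnel group -> table id translation registers the hash-list entry
 * only on first use; later lookups reuse it without taking an extra
 * reference, so the single unregister on rule destruction balances the
 * counter.
 */
he = mlx5_hlist_lookup(group_hash, key.val, NULL);
if (!he)
    he = mlx5_hlist_register(group_hash, key.val, NULL);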
Fixes: 663ad57dabb2 ("net/mlx5: make flow table cache thread safe") Signed-off-by: Gregory Etelson Acked-by: Viacheslav Ovsiienko --- drivers/net/mlx5/mlx5_flow.c | 16 ++++++++++---- drivers/net/mlx5/mlx5_flow_dv.c | 39 +++++++++++++++++---------------- 2 files changed, 32 insertions(+), 23 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 2f01e34033..185b4ba51a 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -7024,7 +7024,15 @@ tunnel_flow_group_to_flow_table(struct rte_eth_dev *dev, struct mlx5_hlist *group_hash; group_hash = tunnel ? tunnel->groups : thub->groups; - he = mlx5_hlist_register(group_hash, key.val, NULL); + he = mlx5_hlist_lookup(group_hash, key.val, NULL); + if (!he) { + DRV_LOG(DEBUG, "port %u tunnel %u group=%u - generate table id", + dev->data->port_id, key.tunnel_id, group); + he = mlx5_hlist_register(group_hash, key.val, NULL); + } else { + DRV_LOG(DEBUG, "port %u tunnel %u group=%u - skip table id", + dev->data->port_id, key.tunnel_id, group); + } if (!he) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR_GROUP, @@ -7032,8 +7040,8 @@ tunnel_flow_group_to_flow_table(struct rte_eth_dev *dev, "tunnel group index not supported"); tte = container_of(he, typeof(*tte), hash); *table = tte->flow_table; - DRV_LOG(DEBUG, "port %u tunnel %u group=%#x table=%#x", - dev->data->port_id, key.tunnel_id, group, *table); + DRV_LOG(DEBUG, "port %u tunnel %u group=%u table=%u", + dev->data->port_id, key.tunnel_id, group, *table); return 0; } @@ -7114,7 +7122,7 @@ mlx5_flow_group_to_table(struct rte_eth_dev *dev, standard_translation = true; } DRV_LOG(DEBUG, - "port %u group=%#x transfer=%d external=%d fdb_def_rule=%d translate=%s", + "port %u group=%u transfer=%d external=%d fdb_def_rule=%d translate=%s", dev->data->port_id, group, grp_info.transfer, grp_info.external, grp_info.fdb_def_rule, standard_translation ? "STANDARD" : "TUNNEL"); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 78c710fef9..95165980f4 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -8042,6 +8042,8 @@ flow_dv_tbl_resource_get(struct rte_eth_dev *dev, "cannot get table"); return NULL; } + DRV_LOG(DEBUG, "Table_id %u tunnel %u group %u registered.", + table_id, tunnel ? tunnel->tunnel_id : 0, group_id); tbl_data = container_of(entry, struct mlx5_flow_tbl_data_entry, entry); return &tbl_data->tbl; } @@ -8080,7 +8082,7 @@ flow_dv_tbl_remove_cb(struct mlx5_hlist *list, if (he) mlx5_hlist_unregister(tunnel_grp_hash, he); DRV_LOG(DEBUG, - "Table_id %#x tunnel %u group %u released.", + "Table_id %u tunnel %u group %u released.", table_id, tbl_data->tunnel ? tbl_data->tunnel->tunnel_id : 0, @@ -8192,6 +8194,8 @@ flow_dv_matcher_register(struct rte_eth_dev *dev, struct mlx5_flow_dv_matcher *ref, union mlx5_flow_tbl_key *key, struct mlx5_flow *dev_flow, + const struct mlx5_flow_tunnel *tunnel, + uint32_t group_id, struct rte_flow_error *error) { struct mlx5_cache_entry *entry; @@ -8203,8 +8207,14 @@ flow_dv_matcher_register(struct rte_eth_dev *dev, .data = ref, }; - tbl = flow_dv_tbl_resource_get(dev, key->table_id, key->direction, - key->domain, false, NULL, 0, 0, error); + /** + * tunnel offload API requires this registration for cases when + * tunnel match rule was inserted before tunnel set rule. 
+ */ + tbl = flow_dv_tbl_resource_get(dev, key->table_id, + key->direction, key->domain, + dev_flow->external, tunnel, + group_id, 0, error); if (!tbl) return -rte_errno; /* No need to refill the error info */ tbl_data = container_of(tbl, struct mlx5_flow_tbl_data_entry, tbl); @@ -9605,10 +9615,14 @@ flow_dv_translate(struct rte_eth_dev *dev, /* * do not add decap action if match rule drops packet * HW rejects rules with decap & drop + * + * if tunnel match rule was inserted before matching tunnel set + * rule flow table used in the match rule must be registered. + * current implementation handles that in the + * flow_dv_match_register() at the function end. */ bool add_decap = true; const struct rte_flow_action *ptr = actions; - struct mlx5_flow_tbl_resource *tbl; for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { @@ -9625,20 +9639,6 @@ flow_dv_translate(struct rte_eth_dev *dev, dev_flow->dv.encap_decap->action; action_flags |= MLX5_FLOW_ACTION_DECAP; } - /* - * bind table_id with for tunnel match rule. - * Tunnel set rule establishes that bind in JUMP action handler. - * Required for scenario when application creates tunnel match - * rule before tunnel set rule. - */ - tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, - attr->transfer, - !!dev_flow->external, tunnel, - attr->group, 0, error); - if (!tbl) - return rte_flow_error_set - (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, "cannot register tunnel group"); } for (; !actions_end ; actions++) { const struct rte_flow_action_queue *queue; @@ -10468,7 +10468,8 @@ flow_dv_translate(struct rte_eth_dev *dev, tbl_key.domain = attr->transfer; tbl_key.direction = attr->egress; tbl_key.table_id = dev_flow->dv.group; - if (flow_dv_matcher_register(dev, &matcher, &tbl_key, dev_flow, error)) + if (flow_dv_matcher_register(dev, &matcher, &tbl_key, dev_flow, + tunnel, attr->group, error)) return -rte_errno; return 0; }

From patchwork Wed Nov 11 07:14:17 2020
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 83985
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Shahaf Shuler, Viacheslav Ovsiienko
Date: Wed, 11 Nov 2020 09:14:17 +0200
Message-ID: <20201111071417.21177-5-getelson@nvidia.com>
In-Reply-To: <20201111071417.21177-1-getelson@nvidia.com>
References: <20201111071417.21177-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH 4/4] net/mlx5: fix tunnel offload callback names

Rename the mlx5_flow_action_release and mlx5_flow_item_release
callbacks to mlx5_flow_tunnel_action_release and
mlx5_flow_tunnel_item_release to match the tunnel offload naming
pattern.

Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 185b4ba51a..358a5f4e72 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -715,9 +715,9 @@ tunnel_element_release_miss(struct rte_eth_dev *dev, void *x) } static int -mlx5_flow_item_release(struct rte_eth_dev *dev, - struct rte_flow_item *pmd_items, - uint32_t num_items, struct rte_flow_error *err) +mlx5_flow_tunnel_item_release(struct rte_eth_dev *dev, + struct rte_flow_item *pmd_items, + uint32_t num_items, struct rte_flow_error *err) { struct tunnel_db_element_release_ctx ctx = { .items = pmd_items, @@ -734,9 +734,10 @@ mlx5_flow_item_release(struct rte_eth_dev *dev, } static int -mlx5_flow_action_release(struct rte_eth_dev *dev, - struct rte_flow_action *pmd_actions, - uint32_t num_actions, struct rte_flow_error *err) +mlx5_flow_tunnel_action_release(struct rte_eth_dev *dev, + struct rte_flow_action *pmd_actions, + uint32_t num_actions, + struct rte_flow_error *err) { struct tunnel_db_element_release_ctx ctx = { .items = NULL, @@ -800,8 +801,8 @@ static const struct rte_flow_ops mlx5_flow_ops = { .shared_action_query = mlx5_shared_action_query, .tunnel_decap_set = mlx5_flow_tunnel_decap_set, .tunnel_match = mlx5_flow_tunnel_match, - .tunnel_action_decap_release = mlx5_flow_action_release, - .tunnel_item_release = mlx5_flow_item_release, + .tunnel_action_decap_release = mlx5_flow_tunnel_action_release, + .tunnel_item_release = mlx5_flow_tunnel_item_release, .get_restore_info = mlx5_flow_tunnel_get_restore_info, };