From patchwork Mon Nov 16 14:02:18 2020
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 84237
From: Gregory Etelson
CC: Viacheslav Ovsiienko, Shahaf Shuler
Date: Mon, 16 Nov 2020 16:02:18 +0200
Message-ID: <20201116140224.8464-2-getelson@nvidia.com>
In-Reply-To: <20201116140224.8464-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH v5 1/6] net/mlx5: fix tunnel offload callback names

Fix the mlx5_flow_tunnel_action_release and mlx5_flow_tunnel_item_release
callback names to match the tunnel offload naming pattern.
Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 324349ed19..98559ece2b 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -662,9 +662,9 @@ mlx5_flow_tunnel_match(struct rte_eth_dev *dev,
 }
 
 static int
-mlx5_flow_item_release(struct rte_eth_dev *dev,
-		       struct rte_flow_item *pmd_items,
-		       uint32_t num_items, struct rte_flow_error *err)
+mlx5_flow_tunnel_item_release(struct rte_eth_dev *dev,
+			      struct rte_flow_item *pmd_items,
+			      uint32_t num_items, struct rte_flow_error *err)
 {
 	struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev);
 	struct mlx5_flow_tunnel *tun;
@@ -687,9 +687,10 @@ mlx5_flow_item_release(struct rte_eth_dev *dev,
 }
 
 static int
-mlx5_flow_action_release(struct rte_eth_dev *dev,
-			 struct rte_flow_action *pmd_actions,
-			 uint32_t num_actions, struct rte_flow_error *err)
+mlx5_flow_tunnel_action_release(struct rte_eth_dev *dev,
+				struct rte_flow_action *pmd_actions,
+				uint32_t num_actions,
+				struct rte_flow_error *err)
 {
 	struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev);
 	struct mlx5_flow_tunnel *tun;
@@ -760,8 +761,8 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.shared_action_query = mlx5_shared_action_query,
 	.tunnel_decap_set = mlx5_flow_tunnel_decap_set,
 	.tunnel_match = mlx5_flow_tunnel_match,
-	.tunnel_action_decap_release = mlx5_flow_action_release,
-	.tunnel_item_release = mlx5_flow_item_release,
+	.tunnel_action_decap_release = mlx5_flow_tunnel_action_release,
+	.tunnel_item_release = mlx5_flow_tunnel_item_release,
 	.get_restore_info = mlx5_flow_tunnel_get_restore_info,
 };

From patchwork Mon Nov 16 14:02:19 2020
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 84238
From: Gregory Etelson
CC: Viacheslav Ovsiienko, Shahaf Shuler
Date: Mon, 16 Nov 2020 16:02:19 +0200
Message-ID: <20201116140224.8464-3-getelson@nvidia.com>
In-Reply-To: <20201116140224.8464-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH v5 2/6] net/mlx5: fix build with Direct Verbs disabled

The tunnel offload API is implemented for the Direct Verbs environment
only. This patch rearranges the tunnel-related functions so that they
also compile in non-Direct-Verbs setups, preventing build failures.
The patch does not introduce new functions.
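The resulting arrangement can be reduced to a minimal sketch (hypothetical
stand-in types and a single callback shown for illustration; the real patch
groups all tunnel offload callbacks this way, so non-DV builds still see
every symbol the rte_flow_ops table references):

#include <errno.h>

struct eth_dev;		/* stand-ins for the real driver types */
struct flow_error;

#ifdef HAVE_IBV_FLOW_DV_SUPPORT
static int
tunnel_decap_set(struct eth_dev *dev, struct flow_error *error)
{
	(void)dev;
	(void)error;
	/* the real Direct Verbs implementation lives here */
	return 0;
}
#else
/* the stub keeps the symbol defined, so the ops table still links */
static int
tunnel_decap_set(struct eth_dev *dev, struct flow_error *error)
{
	(void)dev;
	(void)error;
	return -ENOTSUP;
}
#endif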
Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.c | 1048 +++++++++++++++++++---------
 drivers/net/mlx5/mlx5_flow.h |    5 +
 2 files changed, 594 insertions(+), 459 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 98559ece2b..c8152cb8b1 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -33,16 +33,35 @@
 #include "mlx5_common_os.h"
 #include "rte_pmd_mlx5.h"
 
+struct tunnel_default_miss_ctx {
+	uint16_t *queue;
+	__extension__
+	union {
+		struct rte_flow_action_rss action_rss;
+		struct rte_flow_action_queue miss_queue;
+		struct rte_flow_action_jump miss_jump;
+		uint8_t raw[0];
+	};
+};
+
+static int
+flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
+			     struct rte_flow *flow,
+			     const struct rte_flow_attr *attr,
+			     const struct rte_flow_action *app_actions,
+			     uint32_t flow_idx,
+			     struct tunnel_default_miss_ctx *ctx,
+			     struct rte_flow_error *error);
 static struct mlx5_flow_tunnel *
 mlx5_find_tunnel_id(struct rte_eth_dev *dev, uint32_t id);
 static void
 mlx5_flow_tunnel_free(struct rte_eth_dev *dev, struct mlx5_flow_tunnel *tunnel);
-static const struct mlx5_flow_tbl_data_entry *
-tunnel_mark_decode(struct rte_eth_dev *dev, uint32_t mark);
-static int
-mlx5_get_flow_tunnel(struct rte_eth_dev *dev,
-		     const struct rte_flow_tunnel *app_tunnel,
-		     struct mlx5_flow_tunnel **tunnel);
+static uint32_t
+tunnel_flow_group_to_flow_table(struct rte_eth_dev *dev,
+				const struct mlx5_flow_tunnel *tunnel,
+				uint32_t group, uint32_t *table,
+				struct rte_flow_error *error);
+
 static struct mlx5_flow_workspace *mlx5_flow_push_thread_workspace(void);
 static void mlx5_flow_pop_thread_workspace(void);
 
@@ -580,171 +599,32 @@ static int mlx5_shared_action_query
 			(const struct rte_flow_shared_action *action,
 			 void *data,
 			 struct rte_flow_error *error);
-static inline bool
-mlx5_flow_tunnel_validate(struct rte_eth_dev *dev,
-			  struct rte_flow_tunnel *tunnel,
-			  const char *err_msg)
-{
-	err_msg = NULL;
-	if (!is_tunnel_offload_active(dev)) {
-		err_msg = "tunnel offload was not activated";
-		goto out;
-	} else if (!tunnel) {
-		err_msg = "no application tunnel";
-		goto out;
-	}
-
-	switch (tunnel->type) {
-	default:
-		err_msg = "unsupported tunnel type";
-		goto out;
-	case RTE_FLOW_ITEM_TYPE_VXLAN:
-		break;
-	}
-
-out:
-	return !err_msg;
-}
-
-
 static int
 mlx5_flow_tunnel_decap_set(struct rte_eth_dev *dev,
 			   struct rte_flow_tunnel *app_tunnel,
 			   struct rte_flow_action **actions,
 			   uint32_t *num_of_actions,
-			   struct rte_flow_error *error)
-{
-	int ret;
-	struct mlx5_flow_tunnel *tunnel;
-	const char *err_msg = NULL;
-	bool verdict = mlx5_flow_tunnel_validate(dev, app_tunnel, err_msg);
-
-	if (!verdict)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
-					  err_msg);
-	ret = mlx5_get_flow_tunnel(dev, app_tunnel, &tunnel);
-	if (ret < 0) {
-		return rte_flow_error_set(error, ret,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
-					  "failed to initialize pmd tunnel");
-	}
-	*actions = &tunnel->action;
-	*num_of_actions = 1;
-	return 0;
-}
-
+			   struct rte_flow_error *error);
 static int
 mlx5_flow_tunnel_match(struct rte_eth_dev *dev,
 		       struct rte_flow_tunnel *app_tunnel,
 		       struct rte_flow_item **items,
 		       uint32_t *num_of_items,
-		       struct rte_flow_error *error)
-{
-	int ret;
-	struct mlx5_flow_tunnel *tunnel;
-	const char *err_msg = NULL;
-	bool verdict = mlx5_flow_tunnel_validate(dev, app_tunnel, err_msg);
-
-	if (!verdict)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-					  err_msg);
-	ret = mlx5_get_flow_tunnel(dev, app_tunnel, &tunnel);
-	if (ret < 0) {
-		return rte_flow_error_set(error, ret,
-					  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-					  "failed to initialize pmd tunnel");
-	}
-	*items = &tunnel->item;
-	*num_of_items = 1;
-	return 0;
-}
-
+		       struct rte_flow_error *error);
 static int
 mlx5_flow_tunnel_item_release(struct rte_eth_dev *dev,
 			      struct rte_flow_item *pmd_items,
-			      uint32_t num_items, struct rte_flow_error *err)
-{
-	struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev);
-	struct mlx5_flow_tunnel *tun;
-
-	rte_spinlock_lock(&thub->sl);
-	LIST_FOREACH(tun, &thub->tunnels, chain) {
-		if (&tun->item == pmd_items) {
-			LIST_REMOVE(tun, chain);
-			break;
-		}
-	}
-	rte_spinlock_unlock(&thub->sl);
-	if (!tun || num_items != 1)
-		return rte_flow_error_set(err, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-					  "invalid argument");
-	if (!__atomic_sub_fetch(&tun->refctn, 1, __ATOMIC_RELAXED))
-		mlx5_flow_tunnel_free(dev, tun);
-	return 0;
-}
-
+			      uint32_t num_items, struct rte_flow_error *err);
 static int
 mlx5_flow_tunnel_action_release(struct rte_eth_dev *dev,
 				struct rte_flow_action *pmd_actions,
 				uint32_t num_actions,
-				struct rte_flow_error *err)
-{
-	struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev);
-	struct mlx5_flow_tunnel *tun;
-
-	rte_spinlock_lock(&thub->sl);
-	LIST_FOREACH(tun, &thub->tunnels, chain) {
-		if (&tun->action == pmd_actions) {
-			LIST_REMOVE(tun, chain);
-			break;
-		}
-	}
-	rte_spinlock_unlock(&thub->sl);
-	if (!tun || num_actions != 1)
-		return rte_flow_error_set(err, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-					  "invalid argument");
-	if (!__atomic_sub_fetch(&tun->refctn, 1, __ATOMIC_RELAXED))
-		mlx5_flow_tunnel_free(dev, tun);
-
-	return 0;
-}
-
+				struct rte_flow_error *err);
 static int
 mlx5_flow_tunnel_get_restore_info(struct rte_eth_dev *dev,
 				  struct rte_mbuf *m,
 				  struct rte_flow_restore_info *info,
-				  struct rte_flow_error *err)
-{
-	uint64_t ol_flags = m->ol_flags;
-	const struct mlx5_flow_tbl_data_entry *tble;
-	const uint64_t mask = PKT_RX_FDIR | PKT_RX_FDIR_ID;
-
-	if ((ol_flags & mask) != mask)
-		goto err;
-	tble = tunnel_mark_decode(dev, m->hash.fdir.hi);
-	if (!tble) {
-		DRV_LOG(DEBUG, "port %u invalid miss tunnel mark %#x",
-			dev->data->port_id, m->hash.fdir.hi);
-		goto err;
-	}
-	MLX5_ASSERT(tble->tunnel);
-	memcpy(&info->tunnel, &tble->tunnel->app_tunnel, sizeof(info->tunnel));
-	info->group_id = tble->group_id;
-	info->flags = RTE_FLOW_RESTORE_INFO_TUNNEL |
-		      RTE_FLOW_RESTORE_INFO_GROUP_ID |
-		      RTE_FLOW_RESTORE_INFO_ENCAPSULATED;
-
-	return 0;
-
-err:
-	return rte_flow_error_set(err, EINVAL,
-				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				  "failed to get restore info");
-}
+				  struct rte_flow_error *err);
 
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
@@ -4160,174 +4040,38 @@ flow_hairpin_split(struct rte_eth_dev *dev,
 	return 0;
 }
 
-__extension__
-union tunnel_offload_mark {
-	uint32_t val;
-	struct {
-		uint32_t app_reserve:8;
-		uint32_t table_id:15;
-		uint32_t transfer:1;
-		uint32_t _unused_:8;
-	};
-};
-
-struct tunnel_default_miss_ctx {
-	uint16_t *queue;
-	__extension__
-	union {
-		struct rte_flow_action_rss action_rss;
-		struct rte_flow_action_queue miss_queue;
-		struct rte_flow_action_jump miss_jump;
-		uint8_t raw[0];
-	};
-};
-
+/**
+ * The last stage of splitting chain, just creates the subflow
+ * without any modification.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[in] flow
+ *   Parent flow structure pointer.
+ * @param[in, out] sub_flow
+ *   Pointer to return the created subflow, may be NULL.
+ * @param[in] attr
+ *   Flow rule attributes.
+ * @param[in] items
+ *   Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ * @param[in] flow_split_info
+ *   Pointer to flow split info structure.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ * @return
+ *   0 on success, negative value otherwise
+ */
 static int
-flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
-			     struct rte_flow *flow,
-			     const struct rte_flow_attr *attr,
-			     const struct rte_flow_action *app_actions,
-			     uint32_t flow_idx,
-			     struct tunnel_default_miss_ctx *ctx,
-			     struct rte_flow_error *error)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_flow *dev_flow;
-	struct rte_flow_attr miss_attr = *attr;
-	const struct mlx5_flow_tunnel *tunnel = app_actions[0].conf;
-	const struct rte_flow_item miss_items[2] = {
-		{
-			.type = RTE_FLOW_ITEM_TYPE_ETH,
-			.spec = NULL,
-			.last = NULL,
-			.mask = NULL
-		},
-		{
-			.type = RTE_FLOW_ITEM_TYPE_END,
-			.spec = NULL,
-			.last = NULL,
-			.mask = NULL
-		}
-	};
-	union tunnel_offload_mark mark_id;
-	struct rte_flow_action_mark miss_mark;
-	struct rte_flow_action miss_actions[3] = {
-		[0] = { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &miss_mark },
-		[2] = { .type = RTE_FLOW_ACTION_TYPE_END, .conf = NULL }
-	};
-	const struct rte_flow_action_jump *jump_data;
-	uint32_t i, flow_table = 0; /* prevent compilation warning */
-	struct flow_grp_info grp_info = {
-		.external = 1,
-		.transfer = attr->transfer,
-		.fdb_def_rule = !!priv->fdb_def_rule,
-		.std_tbl_fix = 0,
-	};
-	int ret;
-
-	if (!attr->transfer) {
-		uint32_t q_size;
-
-		miss_actions[1].type = RTE_FLOW_ACTION_TYPE_RSS;
-		q_size = priv->reta_idx_n * sizeof(ctx->queue[0]);
-		ctx->queue = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, q_size,
-					 0, SOCKET_ID_ANY);
-		if (!ctx->queue)
-			return rte_flow_error_set
-				(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-				NULL, "invalid default miss RSS");
-		ctx->action_rss.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
-		ctx->action_rss.level = 0,
-		ctx->action_rss.types = priv->rss_conf.rss_hf,
-		ctx->action_rss.key_len = priv->rss_conf.rss_key_len,
-		ctx->action_rss.queue_num = priv->reta_idx_n,
-		ctx->action_rss.key = priv->rss_conf.rss_key,
-		ctx->action_rss.queue = ctx->queue;
-		if (!priv->reta_idx_n || !priv->rxqs_n)
-			return rte_flow_error_set
-				(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-				NULL, "invalid port configuration");
-		if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
-			ctx->action_rss.types = 0;
-		for (i = 0; i != priv->reta_idx_n; ++i)
-			ctx->queue[i] = (*priv->reta_idx)[i];
-	} else {
-		miss_actions[1].type = RTE_FLOW_ACTION_TYPE_JUMP;
-		ctx->miss_jump.group = MLX5_TNL_MISS_FDB_JUMP_GRP;
-	}
-	miss_actions[1].conf = (typeof(miss_actions[1].conf))ctx->raw;
-	for (; app_actions->type != RTE_FLOW_ACTION_TYPE_JUMP; app_actions++);
-	jump_data = app_actions->conf;
-	miss_attr.priority = MLX5_TNL_MISS_RULE_PRIORITY;
-	miss_attr.group = jump_data->group;
-	ret = mlx5_flow_group_to_table(dev, tunnel, jump_data->group,
-				       &flow_table, grp_info, error);
-	if (ret)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-					  NULL, "invalid tunnel id");
-	mark_id.app_reserve = 0;
-	mark_id.table_id = tunnel_flow_tbl_to_id(flow_table);
-	mark_id.transfer = !!attr->transfer;
-	mark_id._unused_ = 0;
-	miss_mark.id = mark_id.val;
-	dev_flow = flow_drv_prepare(dev, flow, &miss_attr,
-				    miss_items, miss_actions, flow_idx, error);
-	if (!dev_flow)
-		return -rte_errno;
-	dev_flow->flow = flow;
-	dev_flow->external = true;
-	dev_flow->tunnel = tunnel;
-	/* Subflow object was created, we must include one in the list. */
-	SILIST_INSERT(&flow->dev_handles, dev_flow->handle_idx,
-		      dev_flow->handle, next);
-	DRV_LOG(DEBUG,
-		"port %u tunnel type=%d id=%u miss rule priority=%u group=%u",
-		dev->data->port_id, tunnel->app_tunnel.type,
-		tunnel->tunnel_id, miss_attr.priority, miss_attr.group);
-	ret = flow_drv_translate(dev, dev_flow, &miss_attr, miss_items,
-				 miss_actions, error);
-	if (!ret)
-		ret = flow_mreg_update_copy_table(dev, flow, miss_actions,
-						  error);
-
-	return ret;
-}
-
-/**
- * The last stage of splitting chain, just creates the subflow
- * without any modification.
- *
- * @param[in] dev
- *   Pointer to Ethernet device.
- * @param[in] flow
- *   Parent flow structure pointer.
- * @param[in, out] sub_flow
- *   Pointer to return the created subflow, may be NULL.
- * @param[in] attr
- *   Flow rule attributes.
- * @param[in] items
- *   Pattern specification (list terminated by the END pattern item).
- * @param[in] actions
- *   Associated actions (list terminated by the END action).
- * @param[in] flow_split_info
- *   Pointer to flow split info structure.
- * @param[out] error
- *   Perform verbose error reporting if not NULL.
- * @return
- *   0 on success, negative value otherwise
- */
-static int
-flow_create_split_inner(struct rte_eth_dev *dev,
-			struct rte_flow *flow,
-			struct mlx5_flow **sub_flow,
-			const struct rte_flow_attr *attr,
-			const struct rte_flow_item items[],
-			const struct rte_flow_action actions[],
-			struct mlx5_flow_split_info *flow_split_info,
-			struct rte_flow_error *error)
+flow_create_split_inner(struct rte_eth_dev *dev,
+			struct rte_flow *flow,
+			struct mlx5_flow **sub_flow,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item items[],
+			const struct rte_flow_action actions[],
+			struct mlx5_flow_split_info *flow_split_info,
+			struct rte_flow_error *error)
 {
 	struct mlx5_flow *dev_flow;
 
@@ -6953,99 +6697,6 @@ mlx5_flow_async_pool_query_handle(struct mlx5_dev_ctx_shared *sh,
 	sh->cmng.pending_queries--;
 }
 
-static const struct mlx5_flow_tbl_data_entry *
-tunnel_mark_decode(struct rte_eth_dev *dev, uint32_t mark)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_dev_ctx_shared *sh = priv->sh;
-	struct mlx5_hlist_entry *he;
-	union tunnel_offload_mark mbits = { .val = mark };
-	union mlx5_flow_tbl_key table_key = {
-		{
-			.table_id = tunnel_id_to_flow_tbl(mbits.table_id),
-			.dummy = 0,
-			.domain = !!mbits.transfer,
-			.direction = 0,
-		}
-	};
-	he = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL);
-	return he ?
-	       container_of(he, struct mlx5_flow_tbl_data_entry, entry) : NULL;
-}
-
-static void
-mlx5_flow_tunnel_grp2tbl_remove_cb(struct mlx5_hlist *list,
-				   struct mlx5_hlist_entry *entry)
-{
-	struct mlx5_dev_ctx_shared *sh = list->ctx;
-	struct tunnel_tbl_entry *tte = container_of(entry, typeof(*tte), hash);
-
-	mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TNL_TBL_ID],
-			tunnel_flow_tbl_to_id(tte->flow_table));
-	mlx5_free(tte);
-}
-
-static struct mlx5_hlist_entry *
-mlx5_flow_tunnel_grp2tbl_create_cb(struct mlx5_hlist *list,
-				   uint64_t key __rte_unused,
-				   void *ctx __rte_unused)
-{
-	struct mlx5_dev_ctx_shared *sh = list->ctx;
-	struct tunnel_tbl_entry *tte;
-
-	tte = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
-			  sizeof(*tte), 0,
-			  SOCKET_ID_ANY);
-	if (!tte)
-		goto err;
-	mlx5_ipool_malloc(sh->ipool[MLX5_IPOOL_TNL_TBL_ID],
-			  &tte->flow_table);
-	if (tte->flow_table >= MLX5_MAX_TABLES) {
-		DRV_LOG(ERR, "Tunnel TBL ID %d exceed max limit.",
-			tte->flow_table);
-		mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TNL_TBL_ID],
-				tte->flow_table);
-		goto err;
-	} else if (!tte->flow_table) {
-		goto err;
-	}
-	tte->flow_table = tunnel_id_to_flow_tbl(tte->flow_table);
-	return &tte->hash;
-err:
-	if (tte)
-		mlx5_free(tte);
-	return NULL;
-}
-
-static uint32_t
-tunnel_flow_group_to_flow_table(struct rte_eth_dev *dev,
-				const struct mlx5_flow_tunnel *tunnel,
-				uint32_t group, uint32_t *table,
-				struct rte_flow_error *error)
-{
-	struct mlx5_hlist_entry *he;
-	struct tunnel_tbl_entry *tte;
-	union tunnel_tbl_key key = {
-		.tunnel_id = tunnel ? tunnel->tunnel_id : 0,
-		.group = group
-	};
-	struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev);
-	struct mlx5_hlist *group_hash;
-
-	group_hash = tunnel ? tunnel->groups : thub->groups;
-	he = mlx5_hlist_register(group_hash, key.val, NULL);
-	if (!he)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
-					  NULL,
-					  "tunnel group index not supported");
-	tte = container_of(he, typeof(*tte), hash);
-	*table = tte->flow_table;
-	DRV_LOG(DEBUG, "port %u tunnel %u group=%#x table=%#x",
-		dev->data->port_id, key.tunnel_id, group, *table);
-	return 0;
-}
-
 static int
 flow_group_to_table(uint32_t port_id, uint32_t group, uint32_t *table,
 		    struct flow_grp_info grp_info, struct rte_flow_error *error)
@@ -7505,45 +7156,288 @@ mlx5_shared_action_flush(struct rte_eth_dev *dev)
 	return ret;
 }
 
-static void
-mlx5_flow_tunnel_free(struct rte_eth_dev *dev,
-		      struct mlx5_flow_tunnel *tunnel)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-
-	DRV_LOG(DEBUG, "port %u release pmd tunnel id=0x%x",
-		dev->data->port_id, tunnel->tunnel_id);
-	RTE_VERIFY(!__atomic_load_n(&tunnel->refctn, __ATOMIC_RELAXED));
-	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_TUNNEL_ID],
-			tunnel->tunnel_id);
-	mlx5_hlist_destroy(tunnel->groups);
-	mlx5_free(tunnel);
-}
+#ifndef HAVE_MLX5DV_DR
+#define MLX5_DOMAIN_SYNC_FLOW ((1 << 0) | (1 << 1))
+#else
+#define MLX5_DOMAIN_SYNC_FLOW \
+	(MLX5DV_DR_DOMAIN_SYNC_FLAGS_SW | MLX5DV_DR_DOMAIN_SYNC_FLAGS_HW)
+#endif
 
-static struct mlx5_flow_tunnel *
-mlx5_find_tunnel_id(struct rte_eth_dev *dev, uint32_t id)
+int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
 {
-	struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev);
-	struct mlx5_flow_tunnel *tun;
-
-	LIST_FOREACH(tun, &thub->tunnels, chain) {
-		if (tun->tunnel_id == id)
-			break;
-	}
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct mlx5_flow_driver_ops *fops;
+	int ret;
+	struct rte_flow_attr attr = { .transfer = 0 };
 
-	return tun;
+	fops = flow_get_drv_ops(flow_get_drv_type(dev, &attr));
+	ret = fops->sync_domain(dev, domains, MLX5_DOMAIN_SYNC_FLOW);
+	if (ret > 0)
+		ret = -ret;
+	return ret;
 }
 
-static struct mlx5_flow_tunnel *
-mlx5_flow_tunnel_allocate(struct rte_eth_dev *dev,
-			  const struct rte_flow_tunnel *app_tunnel)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_flow_tunnel *tunnel;
-	uint32_t id;
-
-	mlx5_ipool_malloc(priv->sh->ipool[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID],
-			  &id);
+/**
+ * tunnel offload functionality is defined for DV environment only
+ */
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+__extension__
+union tunnel_offload_mark {
+	uint32_t val;
+	struct {
+		uint32_t app_reserve:8;
+		uint32_t table_id:15;
+		uint32_t transfer:1;
+		uint32_t _unused_:8;
+	};
+};
+
+static int
+flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
+			     struct rte_flow *flow,
+			     const struct rte_flow_attr *attr,
+			     const struct rte_flow_action *app_actions,
+			     uint32_t flow_idx,
+			     struct tunnel_default_miss_ctx *ctx,
+			     struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow *dev_flow;
+	struct rte_flow_attr miss_attr = *attr;
+	const struct mlx5_flow_tunnel *tunnel = app_actions[0].conf;
+	const struct rte_flow_item miss_items[2] = {
+		{
+			.type = RTE_FLOW_ITEM_TYPE_ETH,
+			.spec = NULL,
+			.last = NULL,
+			.mask = NULL
+		},
+		{
+			.type = RTE_FLOW_ITEM_TYPE_END,
+			.spec = NULL,
+			.last = NULL,
+			.mask = NULL
+		}
+	};
+	union tunnel_offload_mark mark_id;
+	struct rte_flow_action_mark miss_mark;
+	struct rte_flow_action miss_actions[3] = {
+		[0] = { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &miss_mark },
+		[2] = { .type = RTE_FLOW_ACTION_TYPE_END, .conf = NULL }
+	};
+	const struct rte_flow_action_jump *jump_data;
+	uint32_t i, flow_table = 0; /* prevent compilation warning */
+	struct flow_grp_info grp_info = {
+		.external = 1,
+		.transfer = attr->transfer,
+		.fdb_def_rule = !!priv->fdb_def_rule,
+		.std_tbl_fix = 0,
+	};
+	int ret;
+
+	if (!attr->transfer) {
+		uint32_t q_size;
+
+		miss_actions[1].type = RTE_FLOW_ACTION_TYPE_RSS;
+		q_size = priv->reta_idx_n * sizeof(ctx->queue[0]);
+		ctx->queue = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, q_size,
+					 0, SOCKET_ID_ANY);
+		if (!ctx->queue)
+			return rte_flow_error_set
+				(error, ENOMEM,
+				RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+				NULL, "invalid default miss RSS");
+		ctx->action_rss.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
+		ctx->action_rss.level = 0,
+		ctx->action_rss.types = priv->rss_conf.rss_hf,
+		ctx->action_rss.key_len = priv->rss_conf.rss_key_len,
+		ctx->action_rss.queue_num = priv->reta_idx_n,
+		ctx->action_rss.key = priv->rss_conf.rss_key,
+		ctx->action_rss.queue = ctx->queue;
+		if (!priv->reta_idx_n || !priv->rxqs_n)
+			return rte_flow_error_set
+				(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+				NULL, "invalid port configuration");
+		if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+			ctx->action_rss.types = 0;
+		for (i = 0; i != priv->reta_idx_n; ++i)
+			ctx->queue[i] = (*priv->reta_idx)[i];
+	} else {
+		miss_actions[1].type = RTE_FLOW_ACTION_TYPE_JUMP;
+		ctx->miss_jump.group = MLX5_TNL_MISS_FDB_JUMP_GRP;
+	}
+	miss_actions[1].conf = (typeof(miss_actions[1].conf))ctx->raw;
+	for (; app_actions->type != RTE_FLOW_ACTION_TYPE_JUMP; app_actions++);
+	jump_data = app_actions->conf;
+	miss_attr.priority = MLX5_TNL_MISS_RULE_PRIORITY;
+	miss_attr.group = jump_data->group;
+	ret = mlx5_flow_group_to_table(dev, tunnel, jump_data->group,
+				       &flow_table, grp_info, error);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  NULL, "invalid tunnel id");
+	mark_id.app_reserve = 0;
+	mark_id.table_id = tunnel_flow_tbl_to_id(flow_table);
+	mark_id.transfer = !!attr->transfer;
+	mark_id._unused_ = 0;
+	miss_mark.id = mark_id.val;
+	dev_flow = flow_drv_prepare(dev, flow, &miss_attr,
+				    miss_items, miss_actions, flow_idx, error);
+	if (!dev_flow)
+		return -rte_errno;
+	dev_flow->flow = flow;
+	dev_flow->external = true;
+	dev_flow->tunnel = tunnel;
+	/* Subflow object was created, we must include one in the list. */
+	SILIST_INSERT(&flow->dev_handles, dev_flow->handle_idx,
+		      dev_flow->handle, next);
+	DRV_LOG(DEBUG,
+		"port %u tunnel type=%d id=%u miss rule priority=%u group=%u",
+		dev->data->port_id, tunnel->app_tunnel.type,
+		tunnel->tunnel_id, miss_attr.priority, miss_attr.group);
+	ret = flow_drv_translate(dev, dev_flow, &miss_attr, miss_items,
+				 miss_actions, error);
+	if (!ret)
+		ret = flow_mreg_update_copy_table(dev, flow, miss_actions,
+						  error);
+
+	return ret;
+}
+
+static const struct mlx5_flow_tbl_data_entry *
+tunnel_mark_decode(struct rte_eth_dev *dev, uint32_t mark)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
+	struct mlx5_hlist_entry *he;
+	union tunnel_offload_mark mbits = { .val = mark };
+	union mlx5_flow_tbl_key table_key = {
+		{
+			.table_id = tunnel_id_to_flow_tbl(mbits.table_id),
+			.dummy = 0,
+			.domain = !!mbits.transfer,
+			.direction = 0,
+		}
+	};
+	he = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL);
+	return he ?
+	       container_of(he, struct mlx5_flow_tbl_data_entry, entry) : NULL;
+}
+
+static void
+mlx5_flow_tunnel_grp2tbl_remove_cb(struct mlx5_hlist *list,
+				   struct mlx5_hlist_entry *entry)
+{
+	struct mlx5_dev_ctx_shared *sh = list->ctx;
+	struct tunnel_tbl_entry *tte = container_of(entry, typeof(*tte), hash);
+
+	mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TNL_TBL_ID],
+			tunnel_flow_tbl_to_id(tte->flow_table));
+	mlx5_free(tte);
+}
+
+static struct mlx5_hlist_entry *
+mlx5_flow_tunnel_grp2tbl_create_cb(struct mlx5_hlist *list,
+				   uint64_t key __rte_unused,
+				   void *ctx __rte_unused)
+{
+	struct mlx5_dev_ctx_shared *sh = list->ctx;
+	struct tunnel_tbl_entry *tte;
+
+	tte = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
+			  sizeof(*tte), 0,
+			  SOCKET_ID_ANY);
+	if (!tte)
+		goto err;
+	mlx5_ipool_malloc(sh->ipool[MLX5_IPOOL_TNL_TBL_ID],
+			  &tte->flow_table);
+	if (tte->flow_table >= MLX5_MAX_TABLES) {
+		DRV_LOG(ERR, "Tunnel TBL ID %d exceed max limit.",
+			tte->flow_table);
+		mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TNL_TBL_ID],
+				tte->flow_table);
+		goto err;
+	} else if (!tte->flow_table) {
+		goto err;
+	}
+	tte->flow_table = tunnel_id_to_flow_tbl(tte->flow_table);
+	return &tte->hash;
+err:
+	if (tte)
+		mlx5_free(tte);
+	return NULL;
+}
+
+static uint32_t
+tunnel_flow_group_to_flow_table(struct rte_eth_dev *dev,
+				const struct mlx5_flow_tunnel *tunnel,
+				uint32_t group, uint32_t *table,
+				struct rte_flow_error *error)
+{
+	struct mlx5_hlist_entry *he;
+	struct tunnel_tbl_entry *tte;
+	union tunnel_tbl_key key = {
+		.tunnel_id = tunnel ? tunnel->tunnel_id : 0,
+		.group = group
+	};
+	struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev);
+	struct mlx5_hlist *group_hash;
+
+	group_hash = tunnel ? tunnel->groups : thub->groups;
+	he = mlx5_hlist_register(group_hash, key.val, NULL);
+	if (!he)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+					  NULL,
+					  "tunnel group index not supported");
+	tte = container_of(he, typeof(*tte), hash);
+	*table = tte->flow_table;
+	DRV_LOG(DEBUG, "port %u tunnel %u group=%#x table=%#x",
+		dev->data->port_id, key.tunnel_id, group, *table);
+	return 0;
+}
+
+static void
+mlx5_flow_tunnel_free(struct rte_eth_dev *dev,
+		      struct mlx5_flow_tunnel *tunnel)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	DRV_LOG(DEBUG, "port %u release pmd tunnel id=0x%x",
+		dev->data->port_id, tunnel->tunnel_id);
+	RTE_VERIFY(!__atomic_load_n(&tunnel->refctn, __ATOMIC_RELAXED));
+	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_TUNNEL_ID],
+			tunnel->tunnel_id);
+	mlx5_hlist_destroy(tunnel->groups);
+	mlx5_free(tunnel);
+}
+
+static struct mlx5_flow_tunnel *
+mlx5_find_tunnel_id(struct rte_eth_dev *dev, uint32_t id)
+{
+	struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev);
+	struct mlx5_flow_tunnel *tun;
+
+	LIST_FOREACH(tun, &thub->tunnels, chain) {
+		if (tun->tunnel_id == id)
+			break;
+	}
+
+	return tun;
+}
+
+static struct mlx5_flow_tunnel *
+mlx5_flow_tunnel_allocate(struct rte_eth_dev *dev,
+			  const struct rte_flow_tunnel *app_tunnel)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_tunnel *tunnel;
+	uint32_t id;
+
+	mlx5_ipool_malloc(priv->sh->ipool[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID],
+			  &id);
 	if (id >= MLX5_MAX_TUNNELS) {
 		mlx5_ipool_free(priv->sh->ipool
 				[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], id);
@@ -7671,23 +7565,259 @@ int mlx5_alloc_tunnel_hub(struct mlx5_dev_ctx_shared *sh)
 	return err;
 }
 
-#ifndef HAVE_MLX5DV_DR
-#define MLX5_DOMAIN_SYNC_FLOW ((1 << 0) | (1 << 1))
-#else
-#define MLX5_DOMAIN_SYNC_FLOW \
-	(MLX5DV_DR_DOMAIN_SYNC_FLAGS_SW | MLX5DV_DR_DOMAIN_SYNC_FLAGS_HW)
-#endif
+static inline bool
+mlx5_flow_tunnel_validate(struct rte_eth_dev *dev,
+			  struct rte_flow_tunnel *tunnel,
+			  const char *err_msg)
+{
+	err_msg = NULL;
+	if (!is_tunnel_offload_active(dev)) {
+		err_msg = "tunnel offload was not activated";
+		goto out;
+	} else if (!tunnel) {
+		err_msg = "no application tunnel";
+		goto out;
+	}
 
-int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
+	switch (tunnel->type) {
+	default:
+		err_msg = "unsupported tunnel type";
+		goto out;
+	case RTE_FLOW_ITEM_TYPE_VXLAN:
+		break;
+	}
+
+out:
+	return !err_msg;
+}
+
+static int
+mlx5_flow_tunnel_decap_set(struct rte_eth_dev *dev,
+			   struct rte_flow_tunnel *app_tunnel,
+			   struct rte_flow_action **actions,
+			   uint32_t *num_of_actions,
+			   struct rte_flow_error *error)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct mlx5_flow_driver_ops *fops;
 	int ret;
-	struct rte_flow_attr attr = { .transfer = 0 };
+	struct mlx5_flow_tunnel *tunnel;
+	const char *err_msg = NULL;
+	bool verdict = mlx5_flow_tunnel_validate(dev, app_tunnel, err_msg);
 
-	fops = flow_get_drv_ops(flow_get_drv_type(dev, &attr));
-	ret = fops->sync_domain(dev, domains, MLX5_DOMAIN_SYNC_FLOW);
-	if (ret > 0)
-		ret = -ret;
-	return ret;
+	if (!verdict)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
+					  err_msg);
+	ret = mlx5_get_flow_tunnel(dev, app_tunnel, &tunnel);
+	if (ret < 0) {
+		return rte_flow_error_set(error, ret,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
+					  "failed to initialize pmd tunnel");
+	}
+	*actions = &tunnel->action;
+	*num_of_actions = 1;
+	return 0;
+}
+
+static int
+mlx5_flow_tunnel_match(struct rte_eth_dev *dev,
+		       struct rte_flow_tunnel *app_tunnel,
+		       struct rte_flow_item **items,
+		       uint32_t *num_of_items,
+		       struct rte_flow_error *error)
+{
+	int ret;
+	struct mlx5_flow_tunnel *tunnel;
+	const char *err_msg = NULL;
+	bool verdict = mlx5_flow_tunnel_validate(dev, app_tunnel, err_msg);
+
+	if (!verdict)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					  err_msg);
+	ret = mlx5_get_flow_tunnel(dev, app_tunnel, &tunnel);
+	if (ret < 0) {
+		return rte_flow_error_set(error, ret,
+					  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					  "failed to initialize pmd tunnel");
+	}
+	*items = &tunnel->item;
+	*num_of_items = 1;
+	return 0;
 }
 
+static int
+mlx5_flow_tunnel_item_release(struct rte_eth_dev *dev,
+			      struct rte_flow_item *pmd_items,
+			      uint32_t num_items, struct rte_flow_error *err)
+{
+	struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev);
+	struct mlx5_flow_tunnel *tun;
+
+	rte_spinlock_lock(&thub->sl);
+	LIST_FOREACH(tun, &thub->tunnels, chain) {
+		if (&tun->item == pmd_items) {
+			LIST_REMOVE(tun, chain);
+			break;
+		}
+	}
+	rte_spinlock_unlock(&thub->sl);
+	if (!tun || num_items != 1)
+		return rte_flow_error_set(err, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					  "invalid argument");
+	if (!__atomic_sub_fetch(&tun->refctn, 1, __ATOMIC_RELAXED))
+		mlx5_flow_tunnel_free(dev, tun);
+	return 0;
+}
+
+static int
+mlx5_flow_tunnel_action_release(struct rte_eth_dev *dev,
+				struct rte_flow_action *pmd_actions,
+				uint32_t num_actions,
+				struct rte_flow_error *err)
+{
+	struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev);
+	struct mlx5_flow_tunnel *tun;
+
+	rte_spinlock_lock(&thub->sl);
+	LIST_FOREACH(tun, &thub->tunnels, chain) {
+		if (&tun->action == pmd_actions) {
+			LIST_REMOVE(tun, chain);
+			break;
+		}
+	}
+	rte_spinlock_unlock(&thub->sl);
+	if (!tun || num_actions != 1)
+		return rte_flow_error_set(err, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					  "invalid argument");
+	if (!__atomic_sub_fetch(&tun->refctn, 1, __ATOMIC_RELAXED))
+		mlx5_flow_tunnel_free(dev, tun);
+
+	return 0;
+}
+
+static int
+mlx5_flow_tunnel_get_restore_info(struct rte_eth_dev *dev,
+				  struct rte_mbuf *m,
+				  struct rte_flow_restore_info *info,
+				  struct rte_flow_error *err)
+{
+	uint64_t ol_flags = m->ol_flags;
+	const struct mlx5_flow_tbl_data_entry *tble;
+	const uint64_t mask = PKT_RX_FDIR | PKT_RX_FDIR_ID;
+
+	if ((ol_flags & mask) != mask)
+		goto err;
+	tble = tunnel_mark_decode(dev, m->hash.fdir.hi);
+	if (!tble) {
+		DRV_LOG(DEBUG, "port %u invalid miss tunnel mark %#x",
+			dev->data->port_id, m->hash.fdir.hi);
+		goto err;
+	}
+	MLX5_ASSERT(tble->tunnel);
+	memcpy(&info->tunnel, &tble->tunnel->app_tunnel, sizeof(info->tunnel));
+	info->group_id = tble->group_id;
+	info->flags = RTE_FLOW_RESTORE_INFO_TUNNEL |
+		      RTE_FLOW_RESTORE_INFO_GROUP_ID |
+		      RTE_FLOW_RESTORE_INFO_ENCAPSULATED;
+
+	return 0;
+
+err:
+	return rte_flow_error_set(err, EINVAL,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "failed to get restore info");
+}
+
+#else /* HAVE_IBV_FLOW_DV_SUPPORT */
+static int
+mlx5_flow_tunnel_decap_set(__rte_unused struct rte_eth_dev *dev,
+			   __rte_unused struct rte_flow_tunnel *app_tunnel,
+			   __rte_unused struct rte_flow_action **actions,
+			   __rte_unused uint32_t *num_of_actions,
+			   __rte_unused struct rte_flow_error *error)
+{
+	return -ENOTSUP;
+}
+
+static int
+mlx5_flow_tunnel_match(__rte_unused struct rte_eth_dev *dev,
+		       __rte_unused struct rte_flow_tunnel *app_tunnel,
+		       __rte_unused struct rte_flow_item **items,
+		       __rte_unused uint32_t *num_of_items,
+		       __rte_unused struct rte_flow_error *error)
+{
+	return -ENOTSUP;
+}
+
+static int
+mlx5_flow_tunnel_item_release(__rte_unused struct rte_eth_dev *dev,
+			      __rte_unused struct rte_flow_item *pmd_items,
+			      __rte_unused uint32_t num_items,
+			      __rte_unused struct rte_flow_error *err)
+{
+	return -ENOTSUP;
+}
+
+static int
+mlx5_flow_tunnel_action_release(__rte_unused struct rte_eth_dev *dev,
+				__rte_unused struct rte_flow_action *pmd_action,
+				__rte_unused uint32_t num_actions,
+				__rte_unused struct rte_flow_error *err)
+{
+	return -ENOTSUP;
+}
+
+static int
+mlx5_flow_tunnel_get_restore_info(__rte_unused struct rte_eth_dev *dev,
+				  __rte_unused struct rte_mbuf *m,
+				  __rte_unused struct rte_flow_restore_info *i,
+				  __rte_unused struct rte_flow_error *err)
+{
+	return -ENOTSUP;
+}
+
+static int
+flow_tunnel_add_default_miss(__rte_unused struct rte_eth_dev *dev,
+			     __rte_unused struct rte_flow *flow,
+			     __rte_unused const struct rte_flow_attr *attr,
+			     __rte_unused const struct rte_flow_action *actions,
+			     __rte_unused uint32_t flow_idx,
+			     __rte_unused struct tunnel_default_miss_ctx *ctx,
+			     __rte_unused struct rte_flow_error *error)
+{
+	return -ENOTSUP;
+}
+
+static struct mlx5_flow_tunnel *
+mlx5_find_tunnel_id(__rte_unused struct rte_eth_dev *dev,
+		    __rte_unused uint32_t id)
+{
+	return NULL;
+}
+
+static void
+mlx5_flow_tunnel_free(__rte_unused struct rte_eth_dev *dev,
+		      __rte_unused struct mlx5_flow_tunnel *tunnel)
+{
+}
+
+static uint32_t
+tunnel_flow_group_to_flow_table(__rte_unused struct rte_eth_dev *dev,
+				__rte_unused const struct mlx5_flow_tunnel *t,
+				__rte_unused uint32_t group,
+				__rte_unused uint32_t *table,
+				struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "tunnel offload requires DV support");
+}
+
+void mlx5_release_tunnel_hub(__rte_unused struct mlx5_dev_ctx_shared *sh,
+			     __rte_unused uint16_t port_id)
+{
+	return;
+}
+#endif /* HAVE_IBV_FLOW_DV_SUPPORT */
+
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 5fac8672fc..fbc6173fcb 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -991,8 +991,13 @@ mlx5_tunnel_hub(struct rte_eth_dev *dev)
 static inline bool
 is_tunnel_offload_active(struct rte_eth_dev *dev)
 {
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	struct mlx5_priv *priv = dev->data->dev_private;
 	return !!priv->config.dv_miss_info;
+#else
+	RTE_SET_USED(dev);
+	return false;
+#endif
 }
 
 static inline bool

From patchwork Mon Nov 16 14:02:20 2020
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 84239
From: Gregory Etelson
CC: Viacheslav Ovsiienko, Shahaf Shuler
Date: Mon, 16 Nov 2020 16:02:20 +0200
Message-ID: <20201116140224.8464-4-getelson@nvidia.com>
In-Reply-To: <20201116140224.8464-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH v5 3/6] net/mlx5: fix structure passing method in function call

The tunnel offload implementation introduced the 64-bit bit-field
structure flow_grp_info. Because the structure is only 64 bits wide,
the code passed it by value in function calls. This patch changes the
passing method to a reference, as sketched below.
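The mechanics of the change, reduced to a self-contained sketch (the
structure below is a hypothetical stand-in, not the real flow_grp_info
definition):

#include <stdbool.h>
#include <stdint.h>

/* hypothetical 64-bit bit-field structure */
struct grp_info {
	uint64_t external:1;
	uint64_t transfer:1;
	uint64_t fdb_def_rule:1;
	uint64_t std_tbl_fix:1;
};

/* before: every call copies the 64-bit structure */
static bool
use_fdb_by_value(struct grp_info info)
{
	return info.transfer && info.external && info.fdb_def_rule;
}

/* after: the callee reads the caller's structure through a pointer */
static bool
use_fdb_by_reference(const struct grp_info *info)
{
	return info->transfer && info->external && info->fdb_def_rule;
}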
Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.c    | 20 +++++++++++---------
 drivers/net/mlx5/mlx5_flow.h    |  4 ++--
 drivers/net/mlx5/mlx5_flow_dv.c | 10 +++++-----
 3 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c8152cb8b1..bc93aa6377 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -6699,9 +6699,11 @@ mlx5_flow_async_pool_query_handle(struct mlx5_dev_ctx_shared *sh,
 
 static int
 flow_group_to_table(uint32_t port_id, uint32_t group, uint32_t *table,
-		    struct flow_grp_info grp_info, struct rte_flow_error *error)
+		    const struct flow_grp_info *grp_info,
+		    struct rte_flow_error *error)
 {
-	if (grp_info.transfer && grp_info.external && grp_info.fdb_def_rule) {
+	if (grp_info->transfer && grp_info->external &&
+	    grp_info->fdb_def_rule) {
 		if (group == UINT32_MAX)
 			return rte_flow_error_set
 						(error, EINVAL,
@@ -6758,25 +6760,25 @@ int
 mlx5_flow_group_to_table(struct rte_eth_dev *dev,
 			 const struct mlx5_flow_tunnel *tunnel,
 			 uint32_t group, uint32_t *table,
-			 struct flow_grp_info grp_info,
+			 const struct flow_grp_info *grp_info,
 			 struct rte_flow_error *error)
 {
 	int ret;
 	bool standard_translation;
 
-	if (!grp_info.skip_scale && grp_info.external &&
+	if (!grp_info->skip_scale && grp_info->external &&
 	    group < MLX5_MAX_TABLES_EXTERNAL)
 		group *= MLX5_FLOW_TABLE_FACTOR;
 	if (is_tunnel_offload_active(dev)) {
-		standard_translation = !grp_info.external ||
-					grp_info.std_tbl_fix;
+		standard_translation = !grp_info->external ||
+					grp_info->std_tbl_fix;
 	} else {
 		standard_translation = true;
 	}
 	DRV_LOG(DEBUG,
 		"port %u group=%#x transfer=%d external=%d fdb_def_rule=%d translate=%s",
-		dev->data->port_id, group, grp_info.transfer,
-		grp_info.external, grp_info.fdb_def_rule,
+		dev->data->port_id, group, grp_info->transfer,
+		grp_info->external, grp_info->fdb_def_rule,
 		standard_translation ? "STANDARD" : "TUNNEL");
 	if (standard_translation)
 		ret = flow_group_to_table(dev->data->port_id, group, table,
@@ -7273,7 +7275,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 	miss_attr.priority = MLX5_TNL_MISS_RULE_PRIORITY;
 	miss_attr.group = jump_data->group;
 	ret = mlx5_flow_group_to_table(dev, tunnel, jump_data->group,
-				       &flow_table, grp_info, error);
+				       &flow_table, &grp_info, error);
 	if (ret)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index fbc6173fcb..c33c0fee7c 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1273,8 +1273,8 @@ tunnel_use_standard_attr_group_translate
 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
 			     const struct mlx5_flow_tunnel *tunnel,
 			     uint32_t group, uint32_t *table,
-			     struct flow_grp_info flags,
-			     struct rte_flow_error *error);
+			     const struct flow_grp_info *flags,
+			     struct rte_flow_error *error);
 uint64_t mlx5_flow_hashfields_adjust(struct mlx5_flow_rss_desc *rss_desc,
 				     int tunnel, uint64_t layer_types,
 				     uint64_t hash_fields);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 62d9ca9ffb..25ab9adee6 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -3935,7 +3935,7 @@ flow_dv_validate_action_jump(struct rte_eth_dev *dev,
 	target_group =
 		((const struct rte_flow_action_jump *)action->conf)->group;
 	ret = mlx5_flow_group_to_table(dev, tunnel, target_group, &table,
-				       grp_info, error);
+				       &grp_info, error);
 	if (ret)
 		return ret;
 	if (attributes->group == target_group &&
@@ -5103,7 +5103,7 @@ static int
 flow_dv_validate_attributes(struct rte_eth_dev *dev,
 			    const struct mlx5_flow_tunnel *tunnel,
 			    const struct rte_flow_attr *attributes,
-			    struct flow_grp_info grp_info,
+			    const struct flow_grp_info *grp_info,
 			    struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -5258,7 +5258,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	}
 	grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
 				(dev, tunnel, attr, items, actions);
-	ret = flow_dv_validate_attributes(dev, tunnel, attr, grp_info, error);
+	ret = flow_dv_validate_attributes(dev, tunnel, attr, &grp_info, error);
 	if (ret < 0)
 		return ret;
 	is_root = (uint64_t)ret;
@@ -9597,7 +9597,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 	grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
 				(dev, tunnel, attr, items, actions);
 	ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table,
-				       grp_info, error);
+				       &grp_info, error);
 	if (ret)
 		return ret;
 	dev_flow->dv.group = table;
@@ -9944,7 +9944,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			ret = mlx5_flow_group_to_table(dev, tunnel,
 						       jump_group, &table,
-						       grp_info, error);
+						       &grp_info, error);
 			if (ret)
 				return ret;
 			tbl = flow_dv_tbl_resource_get(dev, table, attr->egress,

From patchwork Mon Nov 16 14:02:21 2020
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 84240
From: Gregory Etelson
CC: Viacheslav Ovsiienko, Shahaf Shuler, Xueming Li
Date: Mon, 16 Nov 2020 16:02:21 +0200
Message-ID: <20201116140224.8464-5-getelson@nvidia.com>
In-Reply-To: <20201116140224.8464-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH v5 4/6] net/mlx5: fix tunnel offload object allocation

The original patch allocated tunnel offload objects with invalid
indexes. As a result, PMD tunnel object allocation failed. With this
patch, the indexed pool provides both the index and the memory for a
new tunnel offload object. The tunnel offload ipools are also moved to
DV-enabled code only.
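The index hazard the patch removes can be illustrated with a minimal
sketch (hypothetical names, not the mlx5 definitions): with positional
initializers, an array entry compiled out by #ifdef shifts every later
pool configuration away from its enum index, while designated
initializers keep each configuration bound to its index:

enum pool_index {
#ifdef HAVE_DV			/* hypothetical build guard */
	POOL_TUNNEL_ID,
#endif
	POOL_METER,
	POOL_MAX,
};

struct pool_cfg {
	unsigned int size;
	const char *name;
};

static const struct pool_cfg cfgs[] = {
#ifdef HAVE_DV
	[POOL_TUNNEL_ID] = { .size = 64, .name = "tunnel_offload" },
#endif
	[POOL_METER] = { .size = 32, .name = "meter" },
};

In the same spirit, the patch allocates a tunnel with a single
mlx5_ipool_zmalloc() call that returns the object memory together with
its pool index, so an object can no longer receive an index taken from
an unrelated pool.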
Fixes: f2e8093 ("net/mlx5: use indexed pool as id generator")

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5.c      | 50 ++++++++++++++++++------------------
 drivers/net/mlx5/mlx5.h      |  4 +--
 drivers/net/mlx5/mlx5_flow.c | 35 ++++++++-----------------
 3 files changed, 37 insertions(+), 52 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 43344391df..31011c3a72 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -186,7 +186,7 @@ static pthread_mutex_t mlx5_dev_ctx_list_mutex = PTHREAD_MUTEX_INITIALIZER;
 
 static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	{
+	[MLX5_IPOOL_DECAP_ENCAP] = {
 		.size = sizeof(struct mlx5_flow_dv_encap_decap_resource),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -197,7 +197,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_encap_decap_ipool",
 	},
-	{
+	[MLX5_IPOOL_PUSH_VLAN] = {
 		.size = sizeof(struct mlx5_flow_dv_push_vlan_action_resource),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -208,7 +208,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_push_vlan_ipool",
 	},
-	{
+	[MLX5_IPOOL_TAG] = {
 		.size = sizeof(struct mlx5_flow_dv_tag_resource),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -219,7 +219,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_tag_ipool",
 	},
-	{
+	[MLX5_IPOOL_PORT_ID] = {
 		.size = sizeof(struct mlx5_flow_dv_port_id_action_resource),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -230,7 +230,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_port_id_ipool",
 	},
-	{
+	[MLX5_IPOOL_JUMP] = {
 		.size = sizeof(struct mlx5_flow_tbl_data_entry),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -241,7 +241,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_jump_ipool",
 	},
-	{
+	[MLX5_IPOOL_SAMPLE] = {
 		.size = sizeof(struct mlx5_flow_dv_sample_resource),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -252,7 +252,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_sample_ipool",
 	},
-	{
+	[MLX5_IPOOL_DEST_ARRAY] = {
 		.size = sizeof(struct mlx5_flow_dv_dest_array_resource),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -263,8 +263,19 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_dest_array_ipool",
 	},
+	[MLX5_IPOOL_TUNNEL_ID] = {
+		.size = sizeof(struct mlx5_flow_tunnel),
+		.need_lock = 1,
+		.release_mem_en = 1,
+		.type = "mlx5_tunnel_offload",
+	},
+	[MLX5_IPOOL_TNL_TBL_ID] = {
+		.size = 0,
+		.need_lock = 1,
+		.type = "mlx5_flow_tnl_tbl_ipool",
+	},
 #endif
-	{
+	[MLX5_IPOOL_MTR] = {
 		.size = sizeof(struct mlx5_flow_meter),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -275,7 +286,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_meter_ipool",
 	},
-	{
+	[MLX5_IPOOL_MCP] = {
 		.size = sizeof(struct mlx5_flow_mreg_copy_resource),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -286,7 +297,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_mcp_ipool",
 	},
-	{
+	[MLX5_IPOOL_HRXQ] = {
 		.size = (sizeof(struct mlx5_hrxq) + MLX5_RSS_HASH_KEY_LEN),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -297,7 +308,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_hrxq_ipool",
 	},
-	{
+	[MLX5_IPOOL_MLX5_FLOW] = {
 		/*
 		 * MLX5_IPOOL_MLX5_FLOW size varies for DV and VERBS flows.
 		 * It set in run time according to PCI function configuration.
@@ -312,7 +323,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_flow_handle_ipool",
 	},
-	{
+	[MLX5_IPOOL_RTE_FLOW] = {
 		.size = sizeof(struct rte_flow),
 		.trunk_size = 4096,
 		.need_lock = 1,
@@ -321,22 +332,12 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "rte_flow_ipool",
 	},
-	{
+	[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID] = {
 		.size = 0,
 		.need_lock = 1,
 		.type = "mlx5_flow_rss_id_ipool",
 	},
-	{
-		.size = 0,
-		.need_lock = 1,
-		.type = "mlx5_flow_tnl_flow_ipool",
-	},
-	{
-		.size = 0,
-		.need_lock = 1,
-		.type = "mlx5_flow_tnl_tbl_ipool",
-	},
-	{
+	[MLX5_IPOOL_RSS_SHARED_ACTIONS] = {
 		.size = sizeof(struct mlx5_shared_action_rss),
 		.trunk_size = 64,
 		.grow_trunk = 3,
@@ -347,7 +348,6 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.free = mlx5_free,
 		.type = "mlx5_shared_action_rss",
 	},
-
 };
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2ad927bc7d..0155f5ca23 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -44,6 +44,8 @@ enum mlx5_ipool_index {
 	MLX5_IPOOL_JUMP, /* Pool for jump resource. */
 	MLX5_IPOOL_SAMPLE, /* Pool for sample resource. */
 	MLX5_IPOOL_DEST_ARRAY, /* Pool for destination array resource. */
+	MLX5_IPOOL_TUNNEL_ID, /* Pool for tunnel offload context */
+	MLX5_IPOOL_TNL_TBL_ID, /* Pool for tunnel table ID. */
 #endif
 	MLX5_IPOOL_MTR, /* Pool for meter resource. */
 	MLX5_IPOOL_MCP, /* Pool for metadata resource. */
@@ -51,8 +53,6 @@ enum mlx5_ipool_index {
 	MLX5_IPOOL_MLX5_FLOW, /* Pool for mlx5 flow handle. */
 	MLX5_IPOOL_RTE_FLOW, /* Pool for rte_flow. */
 	MLX5_IPOOL_RSS_EXPANTION_FLOW_ID, /* Pool for Queue/RSS flow ID. */
-	MLX5_IPOOL_TUNNEL_ID, /* Pool for flow tunnel ID. */
-	MLX5_IPOOL_TNL_TBL_ID, /* Pool for tunnel table ID. */
 	MLX5_IPOOL_RSS_SHARED_ACTIONS, /* Pool for RSS shared actions. */
 	MLX5_IPOOL_MAX,
 };
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index bc93aa6377..06c19c6db3 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -7406,14 +7406,13 @@ mlx5_flow_tunnel_free(struct rte_eth_dev *dev,
 		      struct mlx5_flow_tunnel *tunnel)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_indexed_pool *ipool;
 
 	DRV_LOG(DEBUG, "port %u release pmd tunnel id=0x%x",
 		dev->data->port_id, tunnel->tunnel_id);
-	RTE_VERIFY(!__atomic_load_n(&tunnel->refctn, __ATOMIC_RELAXED));
-	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_TUNNEL_ID],
-			tunnel->tunnel_id);
 	mlx5_hlist_destroy(tunnel->groups);
-	mlx5_free(tunnel);
+	ipool = priv->sh->ipool[MLX5_IPOOL_TUNNEL_ID];
+	mlx5_ipool_free(ipool, tunnel->tunnel_id);
 }
 
 static struct mlx5_flow_tunnel *
@@ -7435,39 +7434,25 @@ mlx5_flow_tunnel_allocate(struct rte_eth_dev *dev,
 			  const struct rte_flow_tunnel *app_tunnel)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_indexed_pool *ipool;
 	struct mlx5_flow_tunnel *tunnel;
 	uint32_t id;
 
-	mlx5_ipool_malloc(priv->sh->ipool[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID],
-			  &id);
+	ipool = priv->sh->ipool[MLX5_IPOOL_TUNNEL_ID];
+	tunnel = mlx5_ipool_zmalloc(ipool, &id);
+	if (!tunnel)
+		return NULL;
 	if (id >= MLX5_MAX_TUNNELS) {
-		mlx5_ipool_free(priv->sh->ipool
-				[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], id);
+		mlx5_ipool_free(ipool, id);
 		DRV_LOG(ERR, "Tunnel ID %d exceed max limit.", id);
 		return NULL;
-	} else if (!id) {
-		return NULL;
-	}
-	/**
-	 * mlx5 flow tunnel is an auxlilary data structure
-	 * It's not part of IO. No need to allocate it from
-	 * huge pages pools dedicated for IO
-	 */
-	tunnel = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, sizeof(*tunnel),
-			     0, SOCKET_ID_ANY);
-	if (!tunnel) {
-		mlx5_ipool_free(priv->sh->ipool
-				[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], id);
-		return NULL;
 	}
 	tunnel->groups = mlx5_hlist_create("tunnel groups", 1024, 0, 0,
 					   mlx5_flow_tunnel_grp2tbl_create_cb,
 					   NULL,
 					   mlx5_flow_tunnel_grp2tbl_remove_cb);
 	if (!tunnel->groups) {
-		mlx5_ipool_free(priv->sh->ipool
-				[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], id);
-		mlx5_free(tunnel);
+		mlx5_ipool_free(ipool, id);
 		return NULL;
 	}
 	tunnel->groups->ctx = priv->sh;

From patchwork Mon Nov 16 14:02:22 2020
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 84241
From: Gregory Etelson
CC: Viacheslav Ovsiienko, Shahaf Shuler, Suanming Mou
Date: Mon, 16 Nov 2020 16:02:22 +0200
Message-ID: <20201116140224.8464-6-getelson@nvidia.com>
<20201116140224.8464-1-getelson@nvidia.com> References: <20201111071417.21177-1-getelson@nvidia.com> <20201116140224.8464-1-getelson@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.124.1.5] X-ClientProxiedBy: HQMAIL101.nvidia.com (172.20.187.10) To HQMAIL107.nvidia.com (172.20.187.13) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1605535409; bh=TdJQ7fZONc8fMXXrVTqgZUisBAX4jlh9/XcsdGkU11I=; h=From:To:CC:Subject:Date:Message-ID:X-Mailer:In-Reply-To: References:MIME-Version:Content-Transfer-Encoding:Content-Type: X-Originating-IP:X-ClientProxiedBy; b=OwyscUk3O3TEhWA9UL17B96ecd/XgDU2Rz/fgNtmxNsnwXylkBL2CQ/mqvRf/cH/p y/ehPuimJIGLmp9TvQudVtEITO/TAPL5U7AioQp6pzOBm3M5wboW4Lm4nIh8VO74kv kIwcbESdqdrxA0ch+R8TkSr0uMRU3x9EtJBwCwi4c59SvLcMsSZ0bvxUicma+e60Ab s9KvM6YZNDS+rm9Gr8fIJcmSLYZhh2WXe9yOzmqWL6yAP8rnFaeLHALb8XJvbMBDF/ v0OpZUyTrOIPZR5khHBcxK/7f2wWZX8ThwhZ7InMrICSwxVqk4yBKkGAuqsb0zIsN+ GNDSzKX3yWubw== Subject: [dpdk-dev] [PATCH v5 5/6] net/mlx5: fix tunnel offload hub multi-thread protection X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The original patch removed active tunnel offload objects from the tunnels db list without checking their reference counters, which led to a PMD crash. The current patch isolates the tunnels db list behind a separate API that manages MT protection of the tunnel offload db. Fixes: e4f5880 ("net/mlx5: make tunnel hub list thread safe") Signed-off-by: Gregory Etelson Acked-by: Viacheslav Ovsiienko --- drivers/net/mlx5/mlx5_flow.c | 266 +++++++++++++++++++++++++---------- drivers/net/mlx5/mlx5_flow.h | 6 +- 2 files changed, 195 insertions(+), 77 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 06c19c6db3..2fe8648341 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -5613,11 +5613,8 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, if (flow->tunnel) { struct mlx5_flow_tunnel *tunnel; - rte_spinlock_lock(&mlx5_tunnel_hub(dev)->sl); tunnel = mlx5_find_tunnel_id(dev, flow->tunnel_id); RTE_VERIFY(tunnel); - LIST_REMOVE(tunnel, chain); - rte_spinlock_unlock(&mlx5_tunnel_hub(dev)->sl); if (!__atomic_sub_fetch(&tunnel->refctn, 1, __ATOMIC_RELAXED)) mlx5_flow_tunnel_free(dev, tunnel); } @@ -7194,6 +7191,15 @@ union tunnel_offload_mark { }; }; +static bool +mlx5_access_tunnel_offload_db + (struct rte_eth_dev *dev, + bool (*match)(struct rte_eth_dev *, + struct mlx5_flow_tunnel *, const void *), + void (*hit)(struct rte_eth_dev *, struct mlx5_flow_tunnel *, void *), + void (*miss)(struct rte_eth_dev *, void *), + void *ctx, bool lock_op); + static int flow_tunnel_add_default_miss(struct rte_eth_dev *dev, struct rte_flow *flow, @@ -7415,18 +7421,72 @@ mlx5_flow_tunnel_free(struct rte_eth_dev *dev, mlx5_ipool_free(ipool, tunnel->tunnel_id); } -static struct mlx5_flow_tunnel * -mlx5_find_tunnel_id(struct rte_eth_dev *dev, uint32_t id) +static bool +mlx5_access_tunnel_offload_db + (struct rte_eth_dev *dev, + bool (*match)(struct rte_eth_dev *, + struct mlx5_flow_tunnel *, const void *), + void (*hit)(struct rte_eth_dev *, struct mlx5_flow_tunnel *, void *), + void (*miss)(struct rte_eth_dev *, void *), + void *ctx, bool lock_op) { + bool verdict = false; struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); - struct mlx5_flow_tunnel *tun; + struct mlx5_flow_tunnel *tunnel; -
LIST_FOREACH(tun, &thub->tunnels, chain) { - if (tun->tunnel_id == id) + rte_spinlock_lock(&thub->sl); + LIST_FOREACH(tunnel, &thub->tunnels, chain) { + verdict = match(dev, tunnel, (const void *)ctx); + if (verdict) break; } + if (!lock_op) + rte_spinlock_unlock(&thub->sl); + if (verdict && hit) + hit(dev, tunnel, ctx); + if (!verdict && miss) + miss(dev, ctx); + if (lock_op) + rte_spinlock_unlock(&thub->sl); - return tun; + return verdict; +} + +struct tunnel_db_find_tunnel_id_ctx { + uint32_t tunnel_id; + struct mlx5_flow_tunnel *tunnel; +}; + +static bool +find_tunnel_id_match(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, const void *x) +{ + const struct tunnel_db_find_tunnel_id_ctx *ctx = x; + + RTE_SET_USED(dev); + return tunnel->tunnel_id == ctx->tunnel_id; +} + +static void +find_tunnel_id_hit(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, void *x) +{ + struct tunnel_db_find_tunnel_id_ctx *ctx = x; + RTE_SET_USED(dev); + ctx->tunnel = tunnel; +} + +static struct mlx5_flow_tunnel * +mlx5_find_tunnel_id(struct rte_eth_dev *dev, uint32_t id) +{ + struct tunnel_db_find_tunnel_id_ctx ctx = { + .tunnel_id = id, + }; + + mlx5_access_tunnel_offload_db(dev, find_tunnel_id_match, + find_tunnel_id_hit, NULL, &ctx, true); + + return ctx.tunnel; } static struct mlx5_flow_tunnel * @@ -7474,38 +7534,60 @@ mlx5_flow_tunnel_allocate(struct rte_eth_dev *dev, return tunnel; } +struct tunnel_db_get_tunnel_ctx { + const struct rte_flow_tunnel *app_tunnel; + struct mlx5_flow_tunnel *tunnel; +}; + +static bool get_tunnel_match(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, const void *x) +{ + const struct tunnel_db_get_tunnel_ctx *ctx = x; + + RTE_SET_USED(dev); + return !memcmp(ctx->app_tunnel, &tunnel->app_tunnel, + sizeof(*ctx->app_tunnel)); +} + +static void get_tunnel_hit(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, void *x) +{ + /* called under tunnel spinlock protection */ + struct tunnel_db_get_tunnel_ctx *ctx = x; + + RTE_SET_USED(dev); + tunnel->refctn++; + ctx->tunnel = tunnel; +} + +static void get_tunnel_miss(struct rte_eth_dev *dev, void *x) +{ + /* called under tunnel spinlock protection */ + struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); + struct tunnel_db_get_tunnel_ctx *ctx = x; + + rte_spinlock_unlock(&thub->sl); + ctx->tunnel = mlx5_flow_tunnel_allocate(dev, ctx->app_tunnel); + ctx->tunnel->refctn = 1; + rte_spinlock_lock(&thub->sl); + if (ctx->tunnel) + LIST_INSERT_HEAD(&thub->tunnels, ctx->tunnel, chain); +} + + static int mlx5_get_flow_tunnel(struct rte_eth_dev *dev, const struct rte_flow_tunnel *app_tunnel, struct mlx5_flow_tunnel **tunnel) { - int ret; - struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); - struct mlx5_flow_tunnel *tun; - - rte_spinlock_lock(&thub->sl); - LIST_FOREACH(tun, &thub->tunnels, chain) { - if (!memcmp(app_tunnel, &tun->app_tunnel, - sizeof(*app_tunnel))) { - *tunnel = tun; - ret = 0; - break; - } - } - if (!tun) { - tun = mlx5_flow_tunnel_allocate(dev, app_tunnel); - if (tun) { - LIST_INSERT_HEAD(&thub->tunnels, tun, chain); - *tunnel = tun; - } else { - ret = -ENOMEM; - } - } - rte_spinlock_unlock(&thub->sl); - if (tun) - __atomic_add_fetch(&tun->refctn, 1, __ATOMIC_RELAXED); + struct tunnel_db_get_tunnel_ctx ctx = { + .app_tunnel = app_tunnel, + }; - return ret; + mlx5_access_tunnel_offload_db(dev, get_tunnel_match, get_tunnel_hit, + get_tunnel_miss, &ctx, true); + *tunnel = ctx.tunnel; + return ctx.tunnel ? 
0 : -ENOMEM; } void mlx5_release_tunnel_hub(struct mlx5_dev_ctx_shared *sh, uint16_t port_id) @@ -7631,56 +7713,88 @@ mlx5_flow_tunnel_match(struct rte_eth_dev *dev, *num_of_items = 1; return 0; } + +struct tunnel_db_element_release_ctx { + struct rte_flow_item *items; + struct rte_flow_action *actions; + uint32_t num_elements; + struct rte_flow_error *error; + int ret; +}; + +static bool +tunnel_element_release_match(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, const void *x) +{ + const struct tunnel_db_element_release_ctx *ctx = x; + + RTE_SET_USED(dev); + if (ctx->num_elements != 1) + return false; + else if (ctx->items) + return ctx->items == &tunnel->item; + else if (ctx->actions) + return ctx->actions == &tunnel->action; + + return false; +} + +static void +tunnel_element_release_hit(struct rte_eth_dev *dev, + struct mlx5_flow_tunnel *tunnel, void *x) +{ + struct tunnel_db_element_release_ctx *ctx = x; + ctx->ret = 0; + if (!__atomic_sub_fetch(&tunnel->refctn, 1, __ATOMIC_RELAXED)) + mlx5_flow_tunnel_free(dev, tunnel); +} + +static void +tunnel_element_release_miss(struct rte_eth_dev *dev, void *x) +{ + struct tunnel_db_element_release_ctx *ctx = x; + RTE_SET_USED(dev); + ctx->ret = rte_flow_error_set(ctx->error, EINVAL, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "invalid argument"); +} + static int mlx5_flow_tunnel_item_release(struct rte_eth_dev *dev, - struct rte_flow_item *pmd_items, - uint32_t num_items, struct rte_flow_error *err) -{ - struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); - struct mlx5_flow_tunnel *tun; + struct rte_flow_item *pmd_items, + uint32_t num_items, struct rte_flow_error *err) +{ + struct tunnel_db_element_release_ctx ctx = { + .items = pmd_items, + .actions = NULL, + .num_elements = num_items, + .error = err, + }; - rte_spinlock_lock(&thub->sl); - LIST_FOREACH(tun, &thub->tunnels, chain) { - if (&tun->item == pmd_items) { - LIST_REMOVE(tun, chain); - break; - } - } - rte_spinlock_unlock(&thub->sl); - if (!tun || num_items != 1) - return rte_flow_error_set(err, EINVAL, - RTE_FLOW_ERROR_TYPE_HANDLE, NULL, - "invalid argument"); - if (!__atomic_sub_fetch(&tun->refctn, 1, __ATOMIC_RELAXED)) - mlx5_flow_tunnel_free(dev, tun); - return 0; + mlx5_access_tunnel_offload_db(dev, tunnel_element_release_match, + tunnel_element_release_hit, + tunnel_element_release_miss, &ctx, false); + + return ctx.ret; } static int mlx5_flow_tunnel_action_release(struct rte_eth_dev *dev, - struct rte_flow_action *pmd_actions, - uint32_t num_actions, - struct rte_flow_error *err) -{ - struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); - struct mlx5_flow_tunnel *tun; + struct rte_flow_action *pmd_actions, + uint32_t num_actions, struct rte_flow_error *err) +{ + struct tunnel_db_element_release_ctx ctx = { + .items = NULL, + .actions = pmd_actions, + .num_elements = num_actions, + .error = err, + }; - rte_spinlock_lock(&thub->sl); - LIST_FOREACH(tun, &thub->tunnels, chain) { - if (&tun->action == pmd_actions) { - LIST_REMOVE(tun, chain); - break; - } - } - rte_spinlock_unlock(&thub->sl); - if (!tun || num_actions != 1) - return rte_flow_error_set(err, EINVAL, - RTE_FLOW_ERROR_TYPE_HANDLE, NULL, - "invalid argument"); - if (!__atomic_sub_fetch(&tun->refctn, 1, __ATOMIC_RELAXED)) - mlx5_flow_tunnel_free(dev, tun); + mlx5_access_tunnel_offload_db(dev, tunnel_element_release_match, + tunnel_element_release_hit, + tunnel_element_release_miss, &ctx, false); - return 0; + return ctx.ret; } static int diff --git a/drivers/net/mlx5/mlx5_flow.h 
b/drivers/net/mlx5/mlx5_flow.h index c33c0fee7c..f64384217f 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -950,8 +950,12 @@ struct mlx5_flow_tunnel { /** PMD tunnel related context */ struct mlx5_flow_tunnel_hub { + /* Tunnels list * Access to the list MUST be MT protected */ LIST_HEAD(, mlx5_flow_tunnel) tunnels; - rte_spinlock_t sl; /* Tunnel list spinlock. */ + /* protect access to the tunnels list */ + rte_spinlock_t sl; struct mlx5_hlist *groups; /** non tunnel groups */ }; From patchwork Mon Nov 16 14:02:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Etelson X-Patchwork-Id: 84242 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5DA24A04B5; Mon, 16 Nov 2020 15:05:12 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0E188C930; Mon, 16 Nov 2020 15:03:26 +0100 (CET) Received: from hqnvemgate24.nvidia.com (hqnvemgate24.nvidia.com [216.228.121.143]) by dpdk.org (Postfix) with ESMTP id 14008C924 for ; Mon, 16 Nov 2020 15:03:24 +0100 (CET) Received: from hqmail.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate24.nvidia.com (using TLS: TLSv1.2, AES256-SHA) id ; Mon, 16 Nov 2020 06:03:33 -0800 Received: from nvidia.com (10.124.1.5) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 16 Nov 2020 14:03:20 +0000 From: Gregory Etelson To: CC: , , , Viacheslav Ovsiienko , Shahaf Shuler , Xueming Li Date: Mon, 16 Nov 2020 16:02:23 +0200 Message-ID: <20201116140224.8464-7-getelson@nvidia.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201116140224.8464-1-getelson@nvidia.com> References: <20201111071417.21177-1-getelson@nvidia.com> <20201116140224.8464-1-getelson@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.124.1.5] X-ClientProxiedBy: HQMAIL101.nvidia.com (172.20.187.10) To HQMAIL107.nvidia.com (172.20.187.13) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1605535413; bh=Bvioau6wNrYgYkDjo/tL4CBi2P/1WQIcyI7EZCAiqWU=; h=From:To:CC:Subject:Date:Message-ID:X-Mailer:In-Reply-To: References:MIME-Version:Content-Transfer-Encoding:Content-Type: X-Originating-IP:X-ClientProxiedBy; b=C84LA83TFKo4fY9SNd3koFwh7wzxqzj81r7GKlk1sNhGsZsUWXlujmBw72iZvY24i ZAouhlXnRvoS0He2SB72jJUaaMuM+lHLg2wmCs26wsv91L4G4orbCeIv0AVCFfDsn1 u6QgmqHxG3Zrz3HfnUOpNGoduWZTxOyqDV/Q2R5TyqafQ6yHMbjNF/c85cOMnOOOgk nIYGgGVtT9Wh52nkNfVEcIBr4AkamJhzhkmqPsry0sD+xfqs0qqV94ipYDINZuz6jW lTJCij/Rj8Gk7D+T7/px9Leaz0xsP8TKJKZY5krrBqoRwxyjSbpp056e6KvVeycGjv 9Jd79LwHJnK7A== Subject: [dpdk-dev] [PATCH v5 6/6] net/mlx5: fix crash in tunnel offload setup X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The new flow table resource management API triggered a PMD crash in tunnel offload mode when a tunnel match flow rule was inserted before the tunnel set rule. The reason for the crash was a double flow table registration: the table was registered first by the tunnel offload code and once more by the PMD code as part of general table processing.
The table reference counter was decremented only once during rule destruction, which caused the resource leak that triggered the crash. The patch updates the PMD matcher registration with tunnel offload parameters and removes the duplicate table registration from the tunnel-related code. Fixes: 663ad57dabb2 ("net/mlx5: make flow table cache thread safe") Signed-off-by: Gregory Etelson Acked-by: Viacheslav Ovsiienko --- drivers/net/mlx5/mlx5_flow.c | 2 +- drivers/net/mlx5/mlx5_flow_dv.c | 39 +++++++++++++++++---------------- 2 files changed, 21 insertions(+), 20 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 2fe8648341..b9e1c30713 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -6773,7 +6773,7 @@ mlx5_flow_group_to_table(struct rte_eth_dev *dev, standard_translation = true; } DRV_LOG(DEBUG, - "port %u group=%#x transfer=%d external=%d fdb_def_rule=%d translate=%s", + "port %u group=%u transfer=%d external=%d fdb_def_rule=%d translate=%s", dev->data->port_id, group, grp_info->transfer, grp_info->external, grp_info->fdb_def_rule, standard_translation ? "STANDARD" : "TUNNEL"); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 25ab9adee6..5e230a3c25 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -8042,6 +8042,8 @@ flow_dv_tbl_resource_get(struct rte_eth_dev *dev, "cannot get table"); return NULL; } + DRV_LOG(DEBUG, "Table_id %u tunnel %u group %u registered.", + table_id, tunnel ? tunnel->tunnel_id : 0, group_id); tbl_data = container_of(entry, struct mlx5_flow_tbl_data_entry, entry); return &tbl_data->tbl; } @@ -8080,7 +8082,7 @@ flow_dv_tbl_remove_cb(struct mlx5_hlist *list, if (he) mlx5_hlist_unregister(tunnel_grp_hash, he); DRV_LOG(DEBUG, - "Table_id %#x tunnel %u group %u released.", + "Table_id %u tunnel %u group %u released.", table_id, tbl_data->tunnel ? tbl_data->tunnel->tunnel_id : 0, @@ -8192,6 +8194,8 @@ flow_dv_matcher_register(struct rte_eth_dev *dev, struct mlx5_flow_dv_matcher *ref, union mlx5_flow_tbl_key *key, struct mlx5_flow *dev_flow, + const struct mlx5_flow_tunnel *tunnel, + uint32_t group_id, struct rte_flow_error *error) { struct mlx5_cache_entry *entry; @@ -8203,8 +8207,14 @@ flow_dv_matcher_register(struct rte_eth_dev *dev, .data = ref, }; - tbl = flow_dv_tbl_resource_get(dev, key->table_id, key->direction, - key->domain, false, NULL, 0, 0, error); + /** + * tunnel offload API requires this registration for cases when + * tunnel match rule was inserted before tunnel set rule. + */ + tbl = flow_dv_tbl_resource_get(dev, key->table_id, + key->direction, key->domain, + dev_flow->external, tunnel, + group_id, 0, error); if (!tbl) return -rte_errno; /* No need to refill the error info */ tbl_data = container_of(tbl, struct mlx5_flow_tbl_data_entry, tbl); @@ -9611,10 +9621,14 @@ flow_dv_translate(struct rte_eth_dev *dev, /* * do not add decap action if match rule drops packet * HW rejects rules with decap & drop + * + * if tunnel match rule was inserted before matching tunnel set + * rule flow table used in the match rule must be registered. + * current implementation handles that in the + * flow_dv_match_register() at the function end.
*/ bool add_decap = true; const struct rte_flow_action *ptr = actions; - struct mlx5_flow_tbl_resource *tbl; for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { @@ -9631,20 +9645,6 @@ flow_dv_translate(struct rte_eth_dev *dev, dev_flow->dv.encap_decap->action; action_flags |= MLX5_FLOW_ACTION_DECAP; } - /* - * bind table_id with for tunnel match rule. - * Tunnel set rule establishes that bind in JUMP action handler. - * Required for scenario when application creates tunnel match - * rule before tunnel set rule. - */ - tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, - attr->transfer, - !!dev_flow->external, tunnel, - attr->group, 0, error); - if (!tbl) - return rte_flow_error_set - (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, "cannot register tunnel group"); } for (; !actions_end ; actions++) { const struct rte_flow_action_queue *queue; @@ -10474,7 +10474,8 @@ flow_dv_translate(struct rte_eth_dev *dev, tbl_key.domain = attr->transfer; tbl_key.direction = attr->egress; tbl_key.table_id = dev_flow->dv.group; - if (flow_dv_matcher_register(dev, &matcher, &tbl_key, dev_flow, error)) + if (flow_dv_matcher_register(dev, &matcher, &tbl_key, dev_flow, + tunnel, attr->group, error)) return -rte_errno; return 0; }
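
[Editorial sketch, not part of the series] The match/hit/miss access routine that patch 5/6 builds around mlx5_access_tunnel_offload_db can be seen in isolation in the following minimal standalone sketch. It is an assumption-laden toy, not the driver code: it substitutes a pthread mutex for rte_spinlock, drops the rte_eth_dev argument, and uses invented names (access_tunnel_db, find_ctx, hold_tunnel). It only illustrates how the lock_op flag decides whether the hit/miss callbacks run with the db lock still held.

/* Minimal sketch of the tunnel db match/hit/miss pattern.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/queue.h>

struct tunnel {
	LIST_ENTRY(tunnel) chain;
	unsigned int id;
	unsigned int refcnt;
};

static LIST_HEAD(, tunnel) tunnels = LIST_HEAD_INITIALIZER(tunnels);
static pthread_mutex_t db_lock = PTHREAD_MUTEX_INITIALIZER;

/* Walk the list under the lock and run the hit/miss callback depending
 * on the match result. With lock_op == true the callbacks run while the
 * lock is still held, mirroring the lock_op flag in the patch. */
static bool
access_tunnel_db(bool (*match)(struct tunnel *, const void *),
		 void (*hit)(struct tunnel *, void *),
		 void (*miss)(void *),
		 void *ctx, bool lock_op)
{
	bool verdict = false;
	struct tunnel *tun;

	pthread_mutex_lock(&db_lock);
	LIST_FOREACH(tun, &tunnels, chain) {
		verdict = match(tun, ctx);
		if (verdict)
			break;
	}
	if (!lock_op)
		pthread_mutex_unlock(&db_lock);
	if (verdict && hit)
		hit(tun, ctx);
	if (!verdict && miss)
		miss(ctx);
	if (lock_op)
		pthread_mutex_unlock(&db_lock);
	return verdict;
}

struct find_ctx {
	unsigned int id;
	struct tunnel *out;
};

static bool
match_id(struct tunnel *tun, const void *x)
{
	return tun->id == ((const struct find_ctx *)x)->id;
}

static void
hold_tunnel(struct tunnel *tun, void *x)
{
	/* Runs with db_lock held (lock_op == true): the reference is
	 * taken before any other thread can unlink the object. */
	tun->refcnt++;
	((struct find_ctx *)x)->out = tun;
}

int
main(void)
{
	struct tunnel t = { .id = 1, .refcnt = 1 };
	struct find_ctx ctx = { .id = 1, .out = NULL };

	LIST_INSERT_HEAD(&tunnels, &t, chain);
	if (access_tunnel_db(match_id, hold_tunnel, NULL, &ctx, true))
		printf("tunnel %u refcnt %u\n", ctx.out->id, ctx.out->refcnt);
	return 0;
}

The lock_op distinction is the point of the design: in the patch, mlx5_get_flow_tunnel passes lock_op == true so the reference counter is bumped before the spinlock is released, while the element-release path passes false and relies on an atomic decrement instead.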
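
[Editorial sketch, not part of the series] The double-registration leak described in the 6/6 commit message reduces to an unbalanced get/release pair on a table reference counter. The following toy model uses invented names (tbl_get, tbl_release) and has no relation to the real mlx5_hlist code; it only shows why the table count could never return to zero before the fix.

#include <assert.h>
#include <stdio.h>

struct tbl { unsigned int refcnt; };

static void tbl_get(struct tbl *t)     { t->refcnt++; }
static void tbl_release(struct tbl *t) { assert(t->refcnt > 0); t->refcnt--; }

int
main(void)
{
	struct tbl t = { 0 };

	/* Broken sequence: the table is registered twice -- once by the
	 * tunnel offload code and once by the generic PMD table code --
	 * but rule destruction releases it only once. */
	tbl_get(&t);		/* tunnel offload registration */
	tbl_get(&t);		/* generic PMD registration */
	tbl_release(&t);	/* single release on destroy */
	printf("refcnt after destroy: %u (table leaked)\n", t.refcnt);

	/* Fixed sequence: a single registration, performed by the matcher
	 * registration with tunnel parameters, balanced by one release. */
	t.refcnt = 0;
	tbl_get(&t);
	tbl_release(&t);
	printf("refcnt after destroy: %u (table freed)\n", t.refcnt);
	return 0;
}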