From patchwork Sun Apr 25 15:57:21 2021
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 92134
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gregory Etelson
CC: Viacheslav Ovsiienko, Shahaf Shuler
Date: Sun, 25 Apr 2021 18:57:21 +0300
Message-ID: <20210425155722.32477-1-getelson@nvidia.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210419130204.24348-1-getelson@nvidia.com>
References: <20210419130204.24348-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
List-Id: DPDK patches and discussions

The tunnel offload API requires the application to query the PMD for
specific flow items and actions. The application uses these
PMD-specific elements to build flow rules according to the tunnel
offload model. The model does not restrict where the private elements
may appear in a flow rule, but the current MLX5 PMD implementation
expects a tunnel offload rule to begin with the PMD-specific elements.
The patch removes that placement limitation in the MLX5 PMD.
Cc: stable@dpdk.org
Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.c    | 48 ++++++++++++++++++---------
 drivers/net/mlx5/mlx5_flow.h    | 44 ++++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_dv.c | 58 +++++++++++++++++++--------------
 3 files changed, 90 insertions(+), 60 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 84463074a5..fcc82ce9d4 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -51,6 +51,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 			     const struct rte_flow_attr *attr,
 			     const struct rte_flow_action *app_actions,
 			     uint32_t flow_idx,
+			     const struct mlx5_flow_tunnel *tunnel,
 			     struct tunnel_default_miss_ctx *ctx,
 			     struct rte_flow_error *error);
 static struct mlx5_flow_tunnel *
@@ -5463,22 +5464,14 @@ flow_create_split_outer(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static struct mlx5_flow_tunnel *
-flow_tunnel_from_rule(struct rte_eth_dev *dev,
-		      const struct rte_flow_attr *attr,
-		      const struct rte_flow_item items[],
-		      const struct rte_flow_action actions[])
+static inline struct mlx5_flow_tunnel *
+flow_tunnel_from_rule(const struct mlx5_flow *flow)
 {
 	struct mlx5_flow_tunnel *tunnel;
 
 #pragma GCC diagnostic push
 #pragma GCC diagnostic ignored "-Wcast-qual"
-	if (is_flow_tunnel_match_rule(dev, attr, items, actions))
-		tunnel = (struct mlx5_flow_tunnel *)items[0].spec;
-	else if (is_flow_tunnel_steer_rule(dev, attr, items, actions))
-		tunnel = (struct mlx5_flow_tunnel *)actions[0].conf;
-	else
-		tunnel = NULL;
+	tunnel = (typeof(tunnel))flow->tunnel;
 #pragma GCC diagnostic pop
 	return tunnel;
 }
@@ -5672,12 +5665,11 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
 					      error);
 		if (ret < 0)
 			goto error;
-		if (is_flow_tunnel_steer_rule(dev, attr,
-					      buf->entry[i].pattern,
-					      p_actions_rx)) {
+		if (is_flow_tunnel_steer_rule(wks->flows[0].tof_type)) {
 			ret = flow_tunnel_add_default_miss(dev, flow, attr,
 							   p_actions_rx, idx,
+							   wks->flows[0].tunnel,
 							   &default_miss_ctx,
 							   error);
 			if (ret < 0) {
@@ -5741,7 +5733,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
 	}
 	flow_rxq_flags_set(dev, flow);
 	rte_free(translated_actions);
-	tunnel = flow_tunnel_from_rule(dev, attr, items, actions);
+	tunnel = flow_tunnel_from_rule(wks->flows);
 	if (tunnel) {
 		flow->tunnel = 1;
 		flow->tunnel_id = tunnel->tunnel_id;
@@ -7459,6 +7451,28 @@ int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
 	return ret;
 }
 
+const struct mlx5_flow_tunnel *
+mlx5_get_tof(const struct rte_flow_item *item,
+	     const struct rte_flow_action *action,
+	     enum mlx5_tof_rule_type *rule_type)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		if (item->type == (typeof(item->type))
+				  MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL) {
+			*rule_type = MLX5_TUNNEL_OFFLOAD_MATCH_RULE;
+			return flow_items_to_tunnel(item);
+		}
+	}
+	for (; action->type != RTE_FLOW_ACTION_TYPE_END; action++) {
+		if (action->type == (typeof(action->type))
+				    MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET) {
+			*rule_type = MLX5_TUNNEL_OFFLOAD_SET_RULE;
+			return flow_actions_to_tunnel(action);
+		}
+	}
+	return NULL;
+}
+
 /**
  * tunnel offload functionality is defined for DV environment only
  */
@@ -7489,13 +7503,13 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 			     const struct rte_flow_attr *attr,
 			     const struct rte_flow_action *app_actions,
 			     uint32_t flow_idx,
+			     const struct mlx5_flow_tunnel *tunnel,
 			     struct tunnel_default_miss_ctx *ctx,
 			     struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_flow *dev_flow;
 	struct rte_flow_attr miss_attr = *attr;
-	const struct mlx5_flow_tunnel *tunnel = app_actions[0].conf;
 	const struct rte_flow_item miss_items[2] = {
 		{
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
@@ -7581,6 +7595,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 	dev_flow->flow = flow;
 	dev_flow->external = true;
 	dev_flow->tunnel = tunnel;
+	dev_flow->tof_type = MLX5_TUNNEL_OFFLOAD_MISS_RULE;
 	/* Subflow object was created, we must include one in the list. */
 	SILIST_INSERT(&flow->dev_handles, dev_flow->handle_idx,
 		      dev_flow->handle, next);
@@ -8192,6 +8207,7 @@ flow_tunnel_add_default_miss(__rte_unused struct rte_eth_dev *dev,
 			     __rte_unused const struct rte_flow_attr *attr,
 			     __rte_unused const struct rte_flow_action *actions,
 			     __rte_unused uint32_t flow_idx,
+			     __rte_unused const struct mlx5_flow_tunnel *tunnel,
 			     __rte_unused struct tunnel_default_miss_ctx *ctx,
 			     __rte_unused struct rte_flow_error *error)
 {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index ec673c29ab..61f40adc25 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -783,6 +783,16 @@ struct mlx5_flow_verbs_workspace {
 /** Maximal number of device sub-flows supported. */
 #define MLX5_NUM_MAX_DEV_FLOWS 32
 
+/**
+ * tunnel offload rules type
+ */
+enum mlx5_tof_rule_type {
+	MLX5_TUNNEL_OFFLOAD_NONE = 0,
+	MLX5_TUNNEL_OFFLOAD_SET_RULE,
+	MLX5_TUNNEL_OFFLOAD_MATCH_RULE,
+	MLX5_TUNNEL_OFFLOAD_MISS_RULE,
+};
+
 /** Device flow structure. */
 __extension__
 struct mlx5_flow {
@@ -818,6 +828,7 @@ struct mlx5_flow {
 	struct mlx5_flow_handle *handle;
 	uint32_t handle_idx; /* Index of the mlx5 flow handle memory. */
 	const struct mlx5_flow_tunnel *tunnel;
+	enum mlx5_tof_rule_type tof_type;
 };
 
 /* Flow meter state. */
@@ -1029,10 +1040,10 @@ mlx5_tunnel_hub(struct rte_eth_dev *dev)
 }
 
 static inline bool
-is_tunnel_offload_active(struct rte_eth_dev *dev)
+is_tunnel_offload_active(const struct rte_eth_dev *dev)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct mlx5_priv *priv = dev->data->dev_private;
 	return !!priv->config.dv_miss_info;
 #else
 	RTE_SET_USED(dev);
@@ -1041,23 +1052,15 @@ is_tunnel_offload_active(struct rte_eth_dev *dev)
 }
 
 static inline bool
-is_flow_tunnel_match_rule(__rte_unused struct rte_eth_dev *dev,
-			  __rte_unused const struct rte_flow_attr *attr,
-			  __rte_unused const struct rte_flow_item items[],
-			  __rte_unused const struct rte_flow_action actions[])
+is_flow_tunnel_match_rule(enum mlx5_tof_rule_type tof_rule_type)
 {
-	return (items[0].type == (typeof(items[0].type))
-				 MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL);
+	return tof_rule_type == MLX5_TUNNEL_OFFLOAD_MATCH_RULE;
 }
 
 static inline bool
-is_flow_tunnel_steer_rule(__rte_unused struct rte_eth_dev *dev,
-			  __rte_unused const struct rte_flow_attr *attr,
-			  __rte_unused const struct rte_flow_item items[],
-			  __rte_unused const struct rte_flow_action actions[])
+is_flow_tunnel_steer_rule(enum mlx5_tof_rule_type tof_rule_type)
 {
-	return (actions[0].type == (typeof(actions[0].type))
-				   MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET);
+	return tof_rule_type == MLX5_TUNNEL_OFFLOAD_SET_RULE;
 }
 
 static inline const struct mlx5_flow_tunnel *
@@ -1299,11 +1302,10 @@ struct flow_grp_info {
 
 static inline bool
 tunnel_use_standard_attr_group_translate
-		    (struct rte_eth_dev *dev,
-		     const struct mlx5_flow_tunnel *tunnel,
+		    (const struct rte_eth_dev *dev,
 		     const struct rte_flow_attr *attr,
-		     const struct rte_flow_item items[],
-		     const struct rte_flow_action actions[])
+		     const struct mlx5_flow_tunnel *tunnel,
+		     enum mlx5_tof_rule_type tof_rule_type)
 {
 	bool verdict;
 
@@ -1319,7 +1321,7 @@ tunnel_use_standard_attr_group_translate
 		 * method
 		 */
 		verdict = !attr->group &&
-			  is_flow_tunnel_steer_rule(dev, attr, items, actions);
+			  is_flow_tunnel_steer_rule(tof_rule_type);
 	} else {
 		/*
 		 * non-tunnel group translation uses standard method for
@@ -1580,6 +1582,10 @@ int mlx5_flow_os_init_workspace_once(void);
 void *mlx5_flow_os_get_specific_workspace(void);
 int mlx5_flow_os_set_specific_workspace(struct mlx5_flow_workspace *data);
 void mlx5_flow_os_release_workspace(void);
+const struct mlx5_flow_tunnel *
+mlx5_get_tof(const struct rte_flow_item *items,
+	     const struct rte_flow_action *actions,
+	     enum mlx5_tof_rule_type *rule_type);
 
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index e65cc13bd6..3b16f75743 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6100,32 +6100,33 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	uint32_t rw_act_num = 0;
 	uint64_t is_root;
 	const struct mlx5_flow_tunnel *tunnel;
+	enum mlx5_tof_rule_type tof_rule_type;
 	struct flow_grp_info grp_info = {
 		.external = !!external,
 		.transfer = !!attr->transfer,
 		.fdb_def_rule = !!priv->fdb_def_rule,
+		.std_tbl_fix = true,
 	};
 	const struct rte_eth_hairpin_conf *conf;
 
 	if (items == NULL)
 		return -1;
-	if (is_flow_tunnel_match_rule(dev, attr, items, actions)) {
-		tunnel = flow_items_to_tunnel(items);
-		action_flags |= MLX5_FLOW_ACTION_TUNNEL_MATCH |
-				MLX5_FLOW_ACTION_DECAP;
-	} else if (is_flow_tunnel_steer_rule(dev, attr, items, actions)) {
-		tunnel = flow_actions_to_tunnel(actions);
-		action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
-	} else {
-		tunnel = NULL;
+	tunnel = is_tunnel_offload_active(dev) ?
+		 mlx5_get_tof(items, actions, &tof_rule_type) : NULL;
+	if (tunnel) {
+		if (priv->representor)
+			return rte_flow_error_set
+				(error, ENOTSUP,
+				 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				 NULL, "decap not supported for VF representor");
+		if (tof_rule_type == MLX5_TUNNEL_OFFLOAD_SET_RULE)
+			action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
+		else if (tof_rule_type == MLX5_TUNNEL_OFFLOAD_MATCH_RULE)
+			action_flags |= MLX5_FLOW_ACTION_TUNNEL_MATCH |
+					MLX5_FLOW_ACTION_DECAP;
+		grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
+					(dev, attr, tunnel, tof_rule_type);
 	}
-	if (tunnel && priv->representor)
-		return rte_flow_error_set(error, ENOTSUP,
-					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-					  "decap not supported "
-					  "for VF representor");
-	grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
-				(dev, tunnel, attr, items, actions);
 	ret = flow_dv_validate_attributes(dev, tunnel, attr, &grp_info, error);
 	if (ret < 0)
 		return ret;
@@ -10909,13 +10910,14 @@ flow_dv_translate(struct rte_eth_dev *dev,
 	int tmp_actions_n = 0;
 	uint32_t table;
 	int ret = 0;
-	const struct mlx5_flow_tunnel *tunnel;
+	const struct mlx5_flow_tunnel *tunnel = NULL;
 	struct flow_grp_info grp_info = {
 		.external = !!dev_flow->external,
 		.transfer = !!attr->transfer,
 		.fdb_def_rule = !!priv->fdb_def_rule,
 		.skip_scale = dev_flow->skip_scale &
			(1 << MLX5_SCALE_FLOW_GROUP_BIT),
+		.std_tbl_fix = true,
 	};
 
 	if (!wks)
@@ -10930,15 +10932,21 @@ flow_dv_translate(struct rte_eth_dev *dev,
 					MLX5DV_FLOW_TABLE_TYPE_NIC_RX;
 	/* update normal path action resource into last index of array */
 	sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1];
-	tunnel = is_flow_tunnel_match_rule(dev, attr, items, actions) ?
-		 flow_items_to_tunnel(items) :
-		 is_flow_tunnel_steer_rule(dev, attr, items, actions) ?
-		 flow_actions_to_tunnel(actions) :
-		 dev_flow->tunnel ? dev_flow->tunnel : NULL;
+	if (is_tunnel_offload_active(dev)) {
+		if (dev_flow->tunnel) {
+			RTE_VERIFY(dev_flow->tof_type ==
+				   MLX5_TUNNEL_OFFLOAD_MISS_RULE);
+			tunnel = dev_flow->tunnel;
+		} else {
+			tunnel = mlx5_get_tof(items, actions,
+					      &dev_flow->tof_type);
+			dev_flow->tunnel = tunnel;
+		}
+		grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
+					(dev, attr, tunnel, dev_flow->tof_type);
+	}
 	mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX :
 					   MLX5DV_FLOW_TABLE_TYPE_NIC_RX;
-	grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
-				(dev, tunnel, attr, items, actions);
 	ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table,
 				       &grp_info, error);
 	if (ret)
@@ -10948,7 +10956,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 		mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
 	/* number of actions must be set to 0 in case of dirty stack. */
 	mhdr_res->actions_num = 0;
-	if (is_flow_tunnel_match_rule(dev, attr, items, actions)) {
+	if (is_flow_tunnel_match_rule(dev_flow->tof_type)) {
 		/*
 		 * do not add decap action if match rule drops packet
 		 * HW rejects rules with decap & drop

From patchwork Sun Apr 25 15:57:22 2021
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 92135
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gregory Etelson
CC: Viacheslav Ovsiienko, Xiaoyun Li
Date: Sun, 25 Apr 2021 18:57:22 +0300
Message-ID: <20210425155722.32477-2-getelson@nvidia.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210425155722.32477-1-getelson@nvidia.com>
References: <20210419130204.24348-1-getelson@nvidia.com> <20210425155722.32477-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 2/2] app/testpmd: fix tunnel offload private items location
List-Id: DPDK patches and discussions

The tunnel offload API requires the application to query the PMD for
specific flow items and actions. The application uses these
PMD-specific elements to build flow rules according to the tunnel
offload model. The model does not restrict where the private elements
may appear in a flow rule, but the current MLX5 PMD implementation
expected a tunnel offload rule to begin with the PMD-specific
elements. The patch places the tunnel offload private PMD flow
elements between general RTE flow elements in a rule.

Cc: stable@dpdk.org
Fixes: 1b9f274623b8 ("app/testpmd: add commands for tunnel offload")
Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 app/test-pmd/config.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 40b2b29725..1520b8193f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1664,7 +1664,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
 		     aptr->type != RTE_FLOW_ACTION_TYPE_END;
 		     aptr++, num_actions++);
 		pft->actions = malloc(
-				(num_actions + pft->num_pmd_actions) *
+				(num_actions + pft->num_pmd_actions + 1) *
 				sizeof(actions[0]));
 		if (!pft->actions) {
 			rte_flow_tunnel_action_decap_release(
@@ -1672,9 +1672,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
 					pft->num_pmd_actions, &error);
 			return NULL;
 		}
-		rte_memcpy(pft->actions, pft->pmd_actions,
+		pft->actions[0].type = RTE_FLOW_ACTION_TYPE_VOID;
+		rte_memcpy(pft->actions + 1, pft->pmd_actions,
 			   pft->num_pmd_actions * sizeof(actions[0]));
-		rte_memcpy(pft->actions + pft->num_pmd_actions, actions,
+		rte_memcpy(pft->actions + pft->num_pmd_actions + 1, actions,
 			   num_actions * sizeof(actions[0]));
 	}
 	if (tunnel_ops->items) {
@@ -1692,7 +1693,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
 		for (iptr = pattern, num_items = 1;
 		     iptr->type != RTE_FLOW_ITEM_TYPE_END;
 		     iptr++, num_items++);
-		pft->items = malloc((num_items + pft->num_pmd_items) *
+		pft->items = malloc((num_items + pft->num_pmd_items + 1) *
 				    sizeof(pattern[0]));
 		if (!pft->items) {
 			rte_flow_tunnel_item_release(
@@ -1700,9 +1701,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
 					pft->num_pmd_items, &error);
 			return NULL;
 		}
-		rte_memcpy(pft->items, pft->pmd_items,
+		pft->items[0].type = RTE_FLOW_ITEM_TYPE_VOID;
+		rte_memcpy(pft->items + 1, pft->pmd_items,
 			   pft->num_pmd_items * sizeof(pattern[0]));
-		rte_memcpy(pft->items + pft->num_pmd_items, pattern,
+		rte_memcpy(pft->items + pft->num_pmd_items + 1, pattern,
 			   num_items * sizeof(pattern[0]));
 	}