From patchwork Thu Nov 9 08:55:46 2023
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 134011
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko, Ori Kam
CC: Dariusz Sosnowski
Subject: [PATCH 1/2] net/mlx5: fix missing flow rules for external SQ
Date: Thu, 9 Nov 2023 16:55:46 +0800
Message-ID: <20231109085547.1313003-2-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231109085547.1313003-1-suanmingm@nvidia.com>
References: <20231109085547.1313003-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions

From: Dariusz Sosnowski

The mlx5 PMD exposes a capability to register externally created SQs as if they were SQs of a given representor port. Registration causes the creation of control flow rules in the FDB domain, used to forward traffic between the SQ and the destination represented port.

Before this patch, if representor matching was enabled (device argument repr_matching_en equal to 1, the default configuration), then during registration of external SQs the mlx5 PMD did not create control flow rules in the NIC Tx domain. This caused an issue with packet metadata: if a packet sent on an external SQ had metadata attached, the metadata was lost when the packet passed from the NIC Tx domain to the FDB domain. With representor matching disabled everything works correctly, because in that mode there is a single global flow rule for preserving packet metadata, which matches all traffic in the NIC Tx domain. With representor matching enabled, NIC Tx flow rules are created per SQ.

This patch fixes that behavior. If representor matching is enabled, NIC Tx flow rules are now created for each external SQ registered through rte_pmd_mlx5_external_sq_enable(). This patch also adds the ability to destroy SQ miss flow rules for a given port and SQ number, which is required for the error rollback path in rte_pmd_mlx5_external_sq_enable().
Fixes: 26e1eaf2dac4 ("net/mlx5: support device control for E-Switch default rule")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5.h         |  40 ++++++++++++
 drivers/net/mlx5/mlx5_flow.h    |   2 +
 drivers/net/mlx5/mlx5_flow_hw.c | 107 +++++++++++++++++++++++++++++---
 drivers/net/mlx5/mlx5_txq.c     |  12 +++-
 4 files changed, 149 insertions(+), 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f5eacb2c67..45ad0701f1 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1705,10 +1705,50 @@ struct mlx5_obj_ops {
 
 #define MLX5_RSS_HASH_FIELDS_LEN RTE_DIM(mlx5_rss_hash_fields)
 
+enum mlx5_hw_ctrl_flow_type {
+	MLX5_HW_CTRL_FLOW_TYPE_GENERAL,
+	MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS_ROOT,
+	MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS,
+	MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_JUMP,
+	MLX5_HW_CTRL_FLOW_TYPE_TX_META_COPY,
+	MLX5_HW_CTRL_FLOW_TYPE_TX_REPR_MATCH,
+	MLX5_HW_CTRL_FLOW_TYPE_LACP_RX,
+	MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS,
+};
+
+/** Additional info about control flow rule. */
+struct mlx5_hw_ctrl_flow_info {
+	/** Determines the kind of control flow rule. */
+	enum mlx5_hw_ctrl_flow_type type;
+	union {
+		/**
+		 * If control flow is a SQ miss flow (root or not),
+		 * then fields contains matching SQ number.
+		 */
+		uint32_t esw_mgr_sq;
+		/**
+		 * If control flow is a Tx representor matching,
+		 * then fields contains matching SQ number.
+		 */
+		uint32_t tx_repr_sq;
+	};
+};
+
+/** Entry for tracking control flow rules in HWS. */
 struct mlx5_hw_ctrl_flow {
 	LIST_ENTRY(mlx5_hw_ctrl_flow) next;
+	/**
+	 * Owner device is a port on behalf of which flow rule was created.
+	 *
+	 * It's different from the port which really created the flow rule
+	 * if and only if flow rule is created on transfer proxy port
+	 * on behalf of representor port.
+	 */
 	struct rte_eth_dev *owner_dev;
+	/** Pointer to flow rule handle. */
 	struct rte_flow *flow;
+	/** Additional information about the control flow rule. */
+	struct mlx5_hw_ctrl_flow_info info;
 };
 
 /*
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 094be12715..d57b3b5465 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2875,6 +2875,8 @@
 int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev);
 int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev,
					 uint32_t sqn);
+int mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev,
+					  uint32_t sqn);
 int mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn);
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f57126e2ff..d512889682 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -11341,6 +11341,8 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
  *   Pointer to flow rule actions.
  * @param action_template_idx
  *   Index of an action template associated with @p table.
+ * @param info
+ *   Additional info about control flow rule.
  *
  * @return
  *   0 on success, negative errno value otherwise and rte_errno set.
@@ -11352,7 +11354,8 @@
 flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
			 struct rte_flow_item items[],
			 uint8_t item_template_idx,
			 struct rte_flow_action actions[],
-			 uint8_t action_template_idx)
+			 uint8_t action_template_idx,
+			 struct mlx5_hw_ctrl_flow_info *info)
 {
	struct mlx5_priv *priv = proxy_dev->data->dev_private;
	uint32_t queue = CTRL_QUEUE_ID(priv);
@@ -11399,6 +11402,10 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
	}
	entry->owner_dev = owner_dev;
	entry->flow = flow;
+	if (info)
+		entry->info = *info;
+	else
+		entry->info.type = MLX5_HW_CTRL_FLOW_TYPE_GENERAL;
	LIST_INSERT_HEAD(&priv->hw_ctrl_flows, entry, next);
	rte_spinlock_unlock(&priv->hw_ctrl_lock);
	return 0;
@@ -11602,6 +11609,10 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
	};
	struct rte_flow_item items[3] = { { 0 } };
	struct rte_flow_action actions[3] = { { 0 } };
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS_ROOT,
+		.esw_mgr_sq = sqn,
+	};
	struct rte_eth_dev *proxy_dev;
	struct mlx5_priv *proxy_priv;
	uint16_t proxy_port_id = dev->data->port_id;
@@ -11657,7 +11668,7 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
		.type = RTE_FLOW_ACTION_TYPE_END,
	};
	ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl,
-				       items, 0, actions, 0);
+				       items, 0, actions, 0, &flow_info);
	if (ret) {
		DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d",
			port_id, sqn, ret);
@@ -11686,8 +11697,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
	actions[1] = (struct rte_flow_action){
		.type = RTE_FLOW_ACTION_TYPE_END,
	};
+	flow_info.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS;
	ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl,
-				       items, 0, actions, 0);
+				       items, 0, actions, 0, &flow_info);
	if (ret) {
		DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d",
			port_id, sqn, ret);
@@ -11696,6 +11708,58 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
	return 0;
 }
 
+static bool
+flow_hw_is_matching_sq_miss_flow(struct mlx5_hw_ctrl_flow *cf,
+				 struct rte_eth_dev *dev,
+				 uint32_t sqn)
+{
+	if (cf->owner_dev != dev)
+		return false;
+	if (cf->info.type == MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS_ROOT && cf->info.esw_mgr_sq == sqn)
+		return true;
+	if (cf->info.type == MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS && cf->info.esw_mgr_sq == sqn)
+		return true;
+	return false;
+}
+
+int
+mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+{
+	uint16_t port_id = dev->data->port_id;
+	uint16_t proxy_port_id = dev->data->port_id;
+	struct rte_eth_dev *proxy_dev;
+	struct mlx5_priv *proxy_priv;
+	struct mlx5_hw_ctrl_flow *cf;
+	struct mlx5_hw_ctrl_flow *cf_next;
+	int ret;
+
+	ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL);
+	if (ret) {
+		DRV_LOG(ERR, "Unable to pick transfer proxy port for port %u. Transfer proxy "
+			     "port must be present for default SQ miss flow rules to exist.",
+			port_id);
+		return ret;
+	}
+	proxy_dev = &rte_eth_devices[proxy_port_id];
+	proxy_priv = proxy_dev->data->dev_private;
+	if (!proxy_priv->dr_ctx)
+		return 0;
+	if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
+	    !proxy_priv->hw_esw_sq_miss_tbl)
+		return 0;
+	cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows);
+	while (cf != NULL) {
+		cf_next = LIST_NEXT(cf, next);
+		if (flow_hw_is_matching_sq_miss_flow(cf, dev, sqn)) {
+			claim_zero(flow_hw_destroy_ctrl_flow(proxy_dev, cf->flow));
+			LIST_REMOVE(cf, next);
+			mlx5_free(cf);
+		}
+		cf = cf_next;
+	}
+	return 0;
+}
+
 int
 mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
 {
@@ -11724,6 +11788,9 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
			.type = RTE_FLOW_ACTION_TYPE_END,
		}
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_JUMP,
+	};
	struct rte_eth_dev *proxy_dev;
	struct mlx5_priv *proxy_priv;
	uint16_t proxy_port_id = dev->data->port_id;
@@ -11754,7 +11821,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
	}
	return flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_zero_tbl,
-					items, 0, actions, 0);
+					items, 0, actions, 0, &flow_info);
 }
 
 int
@@ -11800,13 +11867,16 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
			.type = RTE_FLOW_ACTION_TYPE_END,
		},
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_TX_META_COPY,
+	};
 
	MLX5_ASSERT(priv->master);
	if (!priv->dr_ctx || !priv->hw_tx_meta_cpy_tbl)
		return 0;
	return flow_hw_create_ctrl_flow(dev, dev, priv->hw_tx_meta_cpy_tbl,
-					eth_all, 0, copy_reg_action, 0);
+					eth_all, 0, copy_reg_action, 0, &flow_info);
 }
 
 int
@@ -11835,6 +11905,10 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
		{ .type = RTE_FLOW_ACTION_TYPE_END },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_TX_REPR_MATCH,
+		.tx_repr_sq = sqn,
+	};
 
	/* It is assumed that caller checked for representor matching. */
	MLX5_ASSERT(priv->sh->config.repr_matching);
@@ -11860,7 +11934,7 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
		actions[2].type = RTE_FLOW_ACTION_TYPE_JUMP;
	}
	return flow_hw_create_ctrl_flow(dev, dev, priv->hw_tx_repr_tagging_tbl,
-					items, 0, actions, 0);
+					items, 0, actions, 0, &flow_info);
 }
 
 static uint32_t
@@ -11975,6 +12049,9 @@ __flow_hw_ctrl_flows_single(struct rte_eth_dev *dev,
		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS,
+	};
 
	if (!eth_spec)
		return -EINVAL;
@@ -11988,7 +12065,7 @@ __flow_hw_ctrl_flows_single(struct rte_eth_dev *dev,
	items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type);
	items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END };
	/* Without VLAN filtering, only a single flow rule must be created. */
-	return flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0);
+	return flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info);
 }
 
 static int
@@ -12004,6 +12081,9 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS,
+	};
	unsigned int i;
 
	if (!eth_spec)
@@ -12026,7 +12106,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
		};
 
		items[1].spec = &vlan_spec;
-		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
+		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info))
			return -rte_errno;
	}
	return 0;
@@ -12044,6 +12124,9 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS,
+	};
	const struct rte_ether_addr cmp = {
		.addr_bytes = "\x00\x00\x00\x00\x00\x00",
	};
@@ -12067,7 +12150,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
		if (!memcmp(mac, &cmp, sizeof(*mac)))
			continue;
		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
-		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
+		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info))
			return -rte_errno;
	}
	return 0;
@@ -12086,6 +12169,9 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS,
+	};
	const struct rte_ether_addr cmp = {
		.addr_bytes = "\x00\x00\x00\x00\x00\x00",
	};
@@ -12117,7 +12203,8 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
			};
 
			items[1].spec = &vlan_spec;
-			if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0,
-						     actions, 0))
+			if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0,
+						     &flow_info))
				return -rte_errno;
		}
	}
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index b584055fa8..ccdf2ffb14 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1310,8 +1310,16 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
		return -rte_errno;
	}
 #ifdef HAVE_MLX5_HWS_SUPPORT
-	if (priv->sh->config.dv_flow_en == 2)
-		return mlx5_flow_hw_esw_create_sq_miss_flow(dev, sq_num);
+	if (priv->sh->config.dv_flow_en == 2) {
+		if (mlx5_flow_hw_esw_create_sq_miss_flow(dev, sq_num))
+			return -rte_errno;
+		if (priv->sh->config.repr_matching &&
+		    mlx5_flow_hw_tx_repr_matching_flow(dev, sq_num)) {
+			mlx5_flow_hw_esw_destroy_sq_miss_flow(dev, sq_num);
+			return -rte_errno;
+		}
+		return 0;
+	}
 #endif
	flow = mlx5_flow_create_devx_sq_miss_flow(dev, sq_num);
	if (flow > 0)

From patchwork Thu Nov 9 08:55:47 2023
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 134012
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko, Ori Kam
Subject: [PATCH 2/2] net/mlx5: fix destroying external representor matched flows
Date: Thu, 9 Nov 2023 16:55:47 +0800
Message-ID: <20231109085547.1313003-3-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231109085547.1313003-1-suanmingm@nvidia.com>
References: <20231109085547.1313003-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions

The external representor matched SQ flows are managed by the external SQ, so PMD traffic enable/disable should not touch these flows. This commit adds an extra list for the external representor matched SQ flows.

Fixes: 26e1eaf2dac4 ("net/mlx5: support device control for E-Switch default rule")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5.h         |  1 +
 drivers/net/mlx5/mlx5_flow.h    |  4 +--
 drivers/net/mlx5/mlx5_flow_hw.c | 45 +++++++++++++++++++++++----------
 drivers/net/mlx5/mlx5_trigger.c |  4 +--
 drivers/net/mlx5/mlx5_txq.c     |  4 +--
 5 files changed, 39 insertions(+), 19 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 45ad0701f1..795748eddc 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1855,6 +1855,7 @@ struct mlx5_priv {
	void *root_drop_action; /* Pointer to root drop action. */
	rte_spinlock_t hw_ctrl_lock;
	LIST_HEAD(hw_ctrl_flow, mlx5_hw_ctrl_flow) hw_ctrl_flows;
+	LIST_HEAD(hw_ext_ctrl_flow, mlx5_hw_ctrl_flow) hw_ext_ctrl_flows;
	struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
	struct rte_flow_template_table *hw_esw_sq_miss_tbl;
	struct rte_flow_template_table *hw_esw_zero_tbl;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index d57b3b5465..8c0b9a4b60 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2874,12 +2874,12 @@ int flow_null_counter_query(struct rte_eth_dev *dev,
 int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev);
 int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev,
-					 uint32_t sqn);
+					 uint32_t sqn, bool external);
 int mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev,
					  uint32_t sqn);
 int mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev);
-int mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn);
+int mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn, bool external);
 int mlx5_flow_actions_validate(struct rte_eth_dev *dev,
		const struct rte_flow_actions_template_attr *attr,
		const struct rte_flow_action actions[],
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index d512889682..8a23c7c281 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -9189,6 +9189,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
	priv->nb_queue = nb_q_updated;
	rte_spinlock_init(&priv->hw_ctrl_lock);
	LIST_INIT(&priv->hw_ctrl_flows);
+	LIST_INIT(&priv->hw_ext_ctrl_flows);
	ret = flow_hw_create_ctrl_rx_tables(dev);
	if (ret) {
		rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
@@ -11343,6 +11344,8 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
  *   Index of an action template associated with @p table.
  * @param info
  *   Additional info about control flow rule.
+ * @param external
+ *   External ctrl flow.
  *
  * @return
  *   0 on success, negative errno value otherwise and rte_errno set.
@@ -11355,7 +11358,8 @@
 flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
			 uint8_t item_template_idx,
			 struct rte_flow_action actions[],
			 uint8_t action_template_idx,
-			 struct mlx5_hw_ctrl_flow_info *info)
+			 struct mlx5_hw_ctrl_flow_info *info,
+			 bool external)
 {
	struct mlx5_priv *priv = proxy_dev->data->dev_private;
	uint32_t queue = CTRL_QUEUE_ID(priv);
@@ -11406,7 +11410,10 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
		entry->info = *info;
	else
		entry->info.type = MLX5_HW_CTRL_FLOW_TYPE_GENERAL;
-	LIST_INSERT_HEAD(&priv->hw_ctrl_flows, entry, next);
+	if (external)
+		LIST_INSERT_HEAD(&priv->hw_ext_ctrl_flows, entry, next);
+	else
+		LIST_INSERT_HEAD(&priv->hw_ctrl_flows, entry, next);
	rte_spinlock_unlock(&priv->hw_ctrl_lock);
	return 0;
 error:
@@ -11580,11 +11587,23 @@ flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev)
		mlx5_free(cf);
		cf = cf_next;
	}
+	cf = LIST_FIRST(&priv->hw_ext_ctrl_flows);
+	while (cf != NULL) {
+		cf_next = LIST_NEXT(cf, next);
+		ret = flow_hw_destroy_ctrl_flow(dev, cf->flow);
+		if (ret) {
+			rte_errno = ret;
+			return -ret;
+		}
+		LIST_REMOVE(cf, next);
+		mlx5_free(cf);
+		cf = cf_next;
+	}
	return 0;
 }
 
 int
-mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool external)
 {
	uint16_t port_id = dev->data->port_id;
	struct rte_flow_item_ethdev esw_mgr_spec = {
@@ -11668,7 +11687,7 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
		.type = RTE_FLOW_ACTION_TYPE_END,
	};
	ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl,
-				       items, 0, actions, 0, &flow_info);
+				       items, 0, actions, 0, &flow_info, external);
	if (ret) {
		DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d",
			port_id, sqn, ret);
@@ -11699,7 +11718,7 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
	};
	flow_info.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS;
	ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl,
-				       items, 0, actions, 0, &flow_info);
+				       items, 0, actions, 0, &flow_info, external);
	if (ret) {
		DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d",
			port_id, sqn, ret);
@@ -11821,7 +11840,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
	}
	return flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_zero_tbl,
-					items, 0, actions, 0, &flow_info);
+					items, 0, actions, 0, &flow_info, false);
 }
 
 int
@@ -11876,11 +11895,11 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
		return 0;
	return flow_hw_create_ctrl_flow(dev, dev, priv->hw_tx_meta_cpy_tbl,
-					eth_all, 0, copy_reg_action, 0, &flow_info);
+					eth_all, 0, copy_reg_action, 0, &flow_info, false);
 }
 
 int
-mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
+mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn, bool external)
 {
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_rte_flow_item_sq sq_spec = {
@@ -11934,7 +11953,7 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
		actions[2].type = RTE_FLOW_ACTION_TYPE_JUMP;
	}
	return flow_hw_create_ctrl_flow(dev, dev, priv->hw_tx_repr_tagging_tbl,
-					items, 0, actions, 0, &flow_info);
+					items, 0, actions, 0, &flow_info, external);
 }
 
 static uint32_t
@@ -12065,7 +12084,7 @@ __flow_hw_ctrl_flows_single(struct rte_eth_dev *dev,
	items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type);
	items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END };
	/* Without VLAN filtering, only a single flow rule must be created. */
-	return flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info);
+	return flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info, false);
 }
 
 static int
@@ -12106,7 +12125,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
	};
 
		items[1].spec = &vlan_spec;
-		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info))
+		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info, false))
			return -rte_errno;
	}
	return 0;
@@ -12150,7 +12169,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
		if (!memcmp(mac, &cmp, sizeof(*mac)))
			continue;
		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
-		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info))
+		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info, false))
			return -rte_errno;
	}
	return 0;
@@ -12204,7 +12223,7 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
			items[1].spec = &vlan_spec;
			if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0,
-						     &flow_info))
+						     &flow_info, false))
				return -rte_errno;
		}
	}
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 7bdb897612..d7ecb149fa 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1494,13 +1494,13 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
			continue;
		queue = mlx5_txq_get_sqn(txq);
		if ((priv->representor || priv->master) && config->dv_esw_en) {
-			if (mlx5_flow_hw_esw_create_sq_miss_flow(dev, queue)) {
+			if (mlx5_flow_hw_esw_create_sq_miss_flow(dev, queue, false)) {
				mlx5_txq_release(dev, i);
				goto error;
			}
		}
		if (config->dv_esw_en && config->repr_matching) {
-			if (mlx5_flow_hw_tx_repr_matching_flow(dev, queue)) {
+			if (mlx5_flow_hw_tx_repr_matching_flow(dev, queue, false)) {
				mlx5_txq_release(dev, i);
				goto error;
			}
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index ccdf2ffb14..1ac43548b2 100644
---
a/drivers/net/mlx5/mlx5_txq.c +++ b/drivers/net/mlx5/mlx5_txq.c @@ -1311,10 +1311,10 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num) } #ifdef HAVE_MLX5_HWS_SUPPORT if (priv->sh->config.dv_flow_en == 2) { - if (mlx5_flow_hw_esw_create_sq_miss_flow(dev, sq_num)) + if (mlx5_flow_hw_esw_create_sq_miss_flow(dev, sq_num, true)) return -rte_errno; if (priv->sh->config.repr_matching && - mlx5_flow_hw_tx_repr_matching_flow(dev, sq_num)) { + mlx5_flow_hw_tx_repr_matching_flow(dev, sq_num, true)) { mlx5_flow_hw_esw_destroy_sq_miss_flow(dev, sq_num); return -rte_errno; }
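
For context, the bookkeeping introduced above can be sketched outside the driver: rules created for externally managed SQs (via rte_pmd_mlx5_external_sq_enable()) go on a second list, and the flush path must drain both. This is a simplified, hypothetical model using the same <sys/queue.h> LIST macros the patch relies on; the names create_ctrl_flow and flush_all are illustrative stand-ins, not driver API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <sys/queue.h>

/* Simplified stand-in for struct mlx5_hw_ctrl_flow. */
struct ctrl_flow {
	LIST_ENTRY(ctrl_flow) next;
	int id;
};

/* Two lists, mirroring priv->hw_ctrl_flows and priv->hw_ext_ctrl_flows. */
static LIST_HEAD(, ctrl_flow) ctrl_flows = LIST_HEAD_INITIALIZER(ctrl_flows);
static LIST_HEAD(, ctrl_flow) ext_ctrl_flows = LIST_HEAD_INITIALIZER(ext_ctrl_flows);

/* The new "external" flag selects which list tracks the rule. */
static void
create_ctrl_flow(int id, bool external)
{
	struct ctrl_flow *e = malloc(sizeof(*e));

	assert(e != NULL);
	e->id = id;
	if (external)
		LIST_INSERT_HEAD(&ext_ctrl_flows, e, next);
	else
		LIST_INSERT_HEAD(&ctrl_flows, e, next);
}

/* Drain both lists, as flow_hw_flush_all_ctrl_flows() now must;
 * return how many rules were flushed. */
static int
flush_all(void)
{
	struct ctrl_flow *e, *e_next;
	int n = 0;

	e = LIST_FIRST(&ctrl_flows);
	while (e != NULL) {
		e_next = LIST_NEXT(e, next);
		LIST_REMOVE(e, next);
		free(e);
		n++;
		e = e_next;
	}
	e = LIST_FIRST(&ext_ctrl_flows);
	while (e != NULL) {
		e_next = LIST_NEXT(e, next);
		LIST_REMOVE(e, next);
		free(e);
		n++;
		e = e_next;
	}
	return n;
}
```

Both loops use the LIST_FIRST/LIST_NEXT walk from the patch: the next pointer is saved before LIST_REMOVE invalidates the current node's links.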