From patchwork Thu Nov 16 08:08:33 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 134410
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
To: dev@dpdk.org
CC: Ori Kam, Matan Azrad, Viacheslav Ovsiienko, Suanming Mou
Subject: [PATCH 2/2] net/mlx5: fix indirect list actions completions processing
Date: Thu, 16 Nov 2023 10:08:33 +0200
Message-ID: <20231116080833.336377-3-getelson@nvidia.com>
In-Reply-To: <20231116080833.336377-1-getelson@nvidia.com>
References: <20231116080833.336377-1-getelson@nvidia.com>
List-Id: DPDK patches and discussions
The MLX5 PMD separates completion of async HWS jobs into 2 categories:
- HWS flow rule completion;
- HWS indirect action completion.

When processing the latter, the current PMD could not differentiate
between completions of legacy indirect actions and indirect list
actions.

The patch marks the async job object with the indirect action type and
processes job completion according to that type.

The current PMD supports 2 indirect action list types - MIRROR and
REFORMAT. These indirect list types do not post a WQE to create the
action. Therefore, the patch does not process
`MLX5_HW_INDIRECT_TYPE_LIST` jobs.

The new `indirect_type` member does not increase the size of
`struct mlx5_hw_q_job`.

Fixes: 3564e928c759 ("net/mlx5: support HWS flow mirror action")

Signed-off-by: Gregory Etelson
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5.h         |   6 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 107 ++++++++++++++++++--------------
 2 files changed, 68 insertions(+), 45 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f0d63a0ba5..76bf7d0f4f 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -382,11 +382,17 @@ enum mlx5_hw_job_type {
 	MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */
 };
 
+enum mlx5_hw_indirect_type {
+	MLX5_HW_INDIRECT_TYPE_LEGACY,
+	MLX5_HW_INDIRECT_TYPE_LIST
+};
+
 #define MLX5_HW_MAX_ITEMS (16)
 
 /* HW steering flow management job descriptor. */
 struct mlx5_hw_q_job {
 	uint32_t type; /* Job type. */
+	uint32_t indirect_type;
 	union {
 		struct rte_flow_hw *flow; /* Flow attached to the job. */
 		const void *action; /* Indirect action attached to the job. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index fb2e6bf67b..da873ae2e2 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3740,6 +3740,56 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 	}
 }
 
+static __rte_always_inline void
+flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job *job,
+				  uint32_t queue)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_action *aso_ct;
+	struct mlx5_aso_mtr *aso_mtr;
+	uint32_t type, idx;
+
+	if (MLX5_INDIRECT_ACTION_TYPE_GET(job->action) ==
+	    MLX5_INDIRECT_ACTION_TYPE_QUOTA) {
+		mlx5_quota_async_completion(dev, queue, job);
+	} else if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
+		type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
+		if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) {
+			idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
+			mlx5_ipool_free(priv->hws_mpool->idx_pool, idx);
+		}
+	} else if (job->type == MLX5_HW_Q_JOB_TYPE_CREATE) {
+		type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
+		if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) {
+			idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
+			aso_mtr = mlx5_ipool_get(priv->hws_mpool->idx_pool, idx);
+			aso_mtr->state = ASO_METER_READY;
+		} else if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
+			idx = MLX5_ACTION_CTX_CT_GET_IDX
+			      ((uint32_t)(uintptr_t)job->action);
+			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
+			aso_ct->state = ASO_CONNTRACK_READY;
+		}
+	} else if (job->type == MLX5_HW_Q_JOB_TYPE_QUERY) {
+		type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
+		if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
+			idx = MLX5_ACTION_CTX_CT_GET_IDX
+			      ((uint32_t)(uintptr_t)job->action);
+			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
+			mlx5_aso_ct_obj_analyze(job->query.user,
+						job->query.hw);
+			aso_ct->state = ASO_CONNTRACK_READY;
+		}
+	} else {
+		/*
+		 * rte_flow_op_result::user data can point to
+		 * struct mlx5_aso_mtr object as well
+		 */
+		if (queue != CTRL_QUEUE_ID(priv))
+			MLX5_ASSERT(false);
+	}
+}
+
 static inline int
 __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 				 uint32_t queue,
@@ -3749,11 +3799,7 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_ring *r = priv->hw_q[queue].indir_cq;
-	struct mlx5_hw_q_job *job = NULL;
 	void *user_data = NULL;
-	uint32_t type, idx;
-	struct mlx5_aso_mtr *aso_mtr;
-	struct mlx5_aso_ct_action *aso_ct;
 	int ret_comp, i;
 
 	ret_comp = (int)rte_ring_count(r);
@@ -3775,49 +3821,18 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 					       &res[ret_comp],
 					       n_res - ret_comp);
 	for (i = 0; i < ret_comp; i++) {
-		job = (struct mlx5_hw_q_job *)res[i].user_data;
+		struct mlx5_hw_q_job *job = (struct mlx5_hw_q_job *)res[i].user_data;
+
 		/* Restore user data. */
 		res[i].user_data = job->user_data;
-		if (MLX5_INDIRECT_ACTION_TYPE_GET(job->action) ==
-		    MLX5_INDIRECT_ACTION_TYPE_QUOTA) {
-			mlx5_quota_async_completion(dev, queue, job);
-		} else if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
-			type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
-			if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) {
-				idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
-				mlx5_ipool_free(priv->hws_mpool->idx_pool, idx);
-			}
-		} else if (job->type == MLX5_HW_Q_JOB_TYPE_CREATE) {
-			type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
-			if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) {
-				idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
-				aso_mtr = mlx5_ipool_get(priv->hws_mpool->idx_pool, idx);
-				aso_mtr->state = ASO_METER_READY;
-			} else if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
-				idx = MLX5_ACTION_CTX_CT_GET_IDX
-				      ((uint32_t)(uintptr_t)job->action);
-				aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
-				aso_ct->state = ASO_CONNTRACK_READY;
-			}
-		} else if (job->type == MLX5_HW_Q_JOB_TYPE_QUERY) {
-			type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
-			if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
-				idx = MLX5_ACTION_CTX_CT_GET_IDX
-				      ((uint32_t)(uintptr_t)job->action);
-				aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
-				mlx5_aso_ct_obj_analyze(job->query.user,
-							job->query.hw);
-				aso_ct->state = ASO_CONNTRACK_READY;
-			}
-		} else {
-			/*
-			 * rte_flow_op_result::user data can point to
-			 * struct mlx5_aso_mtr object as well
-			 */
-			if (queue == CTRL_QUEUE_ID(priv))
-				continue;
-			MLX5_ASSERT(false);
-		}
+		if (job->indirect_type == MLX5_HW_INDIRECT_TYPE_LEGACY)
+			flow_hw_pull_legacy_indirect_comp(dev, job, queue);
+		/*
+		 * Current PMD supports 2 indirect action list types - MIRROR
+		 * and REFORMAT. These indirect list types do not post WQE to
+		 * create action. Future indirect list types that do post WQE
+		 * will add completion handlers here.
+		 */
 		flow_hw_job_put(priv, job, queue);
 	}
 	return ret_comp;
@@ -10109,6 +10124,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	}
 	if (job) {
 		job->action = handle;
+		job->indirect_type = MLX5_HW_INDIRECT_TYPE_LEGACY;
 		flow_hw_action_finalize(dev, queue, job, push, aso,
 					handle != NULL);
 	}
@@ -11341,6 +11357,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	}
 	if (job) {
 		job->action = handle;
+		job->indirect_type = MLX5_HW_INDIRECT_TYPE_LIST;
 		flow_hw_action_finalize(dev, queue, job, push, false,
 					handle != NULL);
 	}
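For readers outside the mlx5 code base, the dispatch pattern this patch introduces - tag each queued job with its indirect-action category at creation time, then branch on that tag in the completion loop - can be sketched in isolation. All names below (`struct job`, `process_completions`, the counters) are hypothetical simplifications for illustration, not mlx5 or DPDK API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the PMD's indirect-action categories. */
enum indirect_type { INDIRECT_TYPE_LEGACY, INDIRECT_TYPE_LIST };

struct job {
	uint32_t type;          /* create/destroy/query; unused in this sketch */
	uint32_t indirect_type; /* tag written when the job is created */
};

static int legacy_completions; /* counts jobs routed to the legacy handler */

static void handle_legacy_completion(struct job *job)
{
	(void)job; /* a real handler would inspect job->type and job->action */
	legacy_completions++;
}

/*
 * Completion loop: only LEGACY jobs get a handler. LIST-type actions
 * (mirror/reformat in the patch) post no WQE on creation, so their
 * completions need no extra processing and fall through to job release.
 */
static int process_completions(struct job *jobs, int n)
{
	for (int i = 0; i < n; i++) {
		if (jobs[i].indirect_type == INDIRECT_TYPE_LEGACY)
			handle_legacy_completion(&jobs[i]);
		/* all jobs, legacy or list, are returned to the pool here */
	}
	return n;
}
```

Because the tag is a plain `uint32_t` written before the job is enqueued, the completion path needs no extra lookups to decide which handler applies, which mirrors why the patch notes the new member adds no size or processing overhead.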