From patchwork Thu Nov 16 08:08:32 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 134409
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
Cc: Ori Kam, Matan Azrad, Viacheslav Ovsiienko, Suanming Mou, Alexander Kozyrev
Subject: [PATCH 1/2] net/mlx5: fix sync queue completion processing
Date: Thu, 16 Nov 2023 10:08:32 +0200
Message-ID: <20231116080833.336377-2-getelson@nvidia.com>
In-Reply-To: <20231116080833.336377-1-getelson@nvidia.com>
References: <20231116080833.336377-1-getelson@nvidia.com>
List-Id: DPDK patches and discussions
Indirect **SYNC** METER_MARK and CT update actions do not remove the
completion after the WQE post. That implementation speeds up the update
time by avoiding an HW timeout. The completion is removed before the
following WQE post. However, HWS queue updates do not reflect that
behaviour. Therefore, during port destruction the sync queue may have
pending completions although the queue reports an empty status.

The patch validates that the number of pushed WQEs does not exceed the
queue capacity. As a result, it allows processing more completions than
expected.

Fixes: 48fbb0e93d06 ("net/mlx5: support flow meter mark indirect action with HWS")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow_hw.c | 267 +++++++++++++++++---------------
 1 file changed, 142 insertions(+), 125 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index d72f0a66fb..fb2e6bf67b 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -273,6 +273,22 @@ static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = {
 	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
 	.hdr.ether_type = 0,
 };
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_job_get(struct mlx5_priv *priv, uint32_t queue)
+{
+	MLX5_ASSERT(priv->hw_q[queue].job_idx <= priv->hw_q[queue].size);
+	return priv->hw_q[queue].job_idx ?
+	       priv->hw_q[queue].job[--priv->hw_q[queue].job_idx] : NULL;
+}
+
+static __rte_always_inline void
+flow_hw_job_put(struct mlx5_priv *priv, struct mlx5_hw_q_job *job, uint32_t queue)
+{
+	MLX5_ASSERT(priv->hw_q[queue].job_idx < priv->hw_q[queue].size);
+	priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job;
+}
+
 static inline enum mlx5dr_matcher_insert_mode
 flow_hw_matcher_insert_mode_get(enum rte_flow_table_insertion_type insert_type)
 {
@@ -3297,10 +3313,10 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 		.burst = attr->postpone,
 	};
 	struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS];
-	struct rte_flow_hw *flow;
-	struct mlx5_hw_q_job *job;
+	struct rte_flow_hw *flow = NULL;
+	struct mlx5_hw_q_job *job = NULL;
 	const struct rte_flow_item *rule_items;
-	uint32_t flow_idx;
+	uint32_t flow_idx = 0;
 	uint32_t res_idx = 0;
 	int ret;
 
@@ -3308,7 +3324,8 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 		rte_errno = EINVAL;
 		goto error;
 	}
-	if (unlikely(!priv->hw_q[queue].job_idx)) {
+	job = flow_hw_job_get(priv, queue);
+	if (!job) {
 		rte_errno = ENOMEM;
 		goto error;
 	}
@@ -3317,16 +3334,15 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 		goto error;
 	mlx5_ipool_malloc(table->resource, &res_idx);
 	if (!res_idx)
-		goto flow_free;
+		goto error;
 	/*
 	 * Set the table here in order to know the destination table
-	 * when free the flow afterwards.
+	 * when free the flow afterward.
*/ flow->table = table; flow->mt_idx = pattern_template_index; flow->idx = flow_idx; flow->res_idx = res_idx; - job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx]; /* * Set the job type here in order to know if the flow memory * should be freed or not when get the result from dequeue. @@ -3354,25 +3370,25 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, pattern_template_index, actions, rule_acts, queue, error)) { rte_errno = EINVAL; - goto free; + goto error; } rule_items = flow_hw_get_rule_items(dev, table, items, pattern_template_index, job); if (!rule_items) - goto free; + goto error; ret = mlx5dr_rule_create(table->matcher, pattern_template_index, rule_items, action_template_index, rule_acts, &rule_attr, (struct mlx5dr_rule *)flow->rule); if (likely(!ret)) return (struct rte_flow *)flow; -free: - /* Flow created fail, return the descriptor and flow memory. */ - priv->hw_q[queue].job_idx++; - mlx5_ipool_free(table->resource, res_idx); -flow_free: - mlx5_ipool_free(table->flow, flow_idx); error: + if (job) + flow_hw_job_put(priv, job, queue); + if (flow_idx) + mlx5_ipool_free(table->flow, flow_idx); + if (res_idx) + mlx5_ipool_free(table->resource, res_idx); rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "fail to create rte flow"); @@ -3425,9 +3441,9 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, .burst = attr->postpone, }; struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS]; - struct rte_flow_hw *flow; - struct mlx5_hw_q_job *job; - uint32_t flow_idx; + struct rte_flow_hw *flow = NULL; + struct mlx5_hw_q_job *job = NULL; + uint32_t flow_idx = 0; uint32_t res_idx = 0; int ret; @@ -3435,7 +3451,8 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, rte_errno = EINVAL; goto error; } - if (unlikely(!priv->hw_q[queue].job_idx)) { + job = flow_hw_job_get(priv, queue); + if (!job) { rte_errno = ENOMEM; goto error; } @@ -3444,7 +3461,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, goto error; mlx5_ipool_malloc(table->resource, &res_idx); if (!res_idx) - goto flow_free; + goto error; /* * Set the table here in order to know the destination table * when free the flow afterwards. @@ -3453,7 +3470,6 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, flow->mt_idx = 0; flow->idx = flow_idx; flow->res_idx = res_idx; - job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx]; /* * Set the job type here in order to know if the flow memory * should be freed or not when get the result from dequeue. @@ -3478,20 +3494,20 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, &table->ats[action_template_index], 0, actions, rule_acts, queue, error)) { rte_errno = EINVAL; - goto free; + goto error; } ret = mlx5dr_rule_create(table->matcher, 0, items, action_template_index, rule_acts, &rule_attr, (struct mlx5dr_rule *)flow->rule); if (likely(!ret)) return (struct rte_flow *)flow; -free: - /* Flow created fail, return the descriptor and flow memory. 
*/ - priv->hw_q[queue].job_idx++; - mlx5_ipool_free(table->resource, res_idx); -flow_free: - mlx5_ipool_free(table->flow, flow_idx); error: + if (job) + flow_hw_job_put(priv, job, queue); + if (res_idx) + mlx5_ipool_free(table->resource, res_idx); + if (flow_idx) + mlx5_ipool_free(table->flow, flow_idx); rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "fail to create rte flow"); @@ -3545,18 +3561,18 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev, struct rte_flow_hw *of = (struct rte_flow_hw *)flow; struct rte_flow_hw *nf; struct rte_flow_template_table *table = of->table; - struct mlx5_hw_q_job *job; + struct mlx5_hw_q_job *job = NULL; uint32_t res_idx = 0; int ret; - if (unlikely(!priv->hw_q[queue].job_idx)) { + job = flow_hw_job_get(priv, queue); + if (!job) { rte_errno = ENOMEM; goto error; } mlx5_ipool_malloc(table->resource, &res_idx); if (!res_idx) goto error; - job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx]; nf = job->upd_flow; memset(nf, 0, sizeof(struct rte_flow_hw)); /* @@ -3594,7 +3610,7 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev, nf->mt_idx, actions, rule_acts, queue, error)) { rte_errno = EINVAL; - goto free; + goto error; } /* * Switch the old flow and the new flow. @@ -3605,11 +3621,12 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev, action_template_index, rule_acts, &rule_attr); if (likely(!ret)) return 0; -free: - /* Flow created fail, return the descriptor and flow memory. */ - priv->hw_q[queue].job_idx++; - mlx5_ipool_free(table->resource, res_idx); error: + /* Flow created fail, return the descriptor and flow memory. */ + if (job) + flow_hw_job_put(priv, job, queue); + if (res_idx) + mlx5_ipool_free(table->resource, res_idx); return rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "fail to update rte flow"); @@ -3656,24 +3673,24 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev, struct mlx5_hw_q_job *job; int ret; - if (unlikely(!priv->hw_q[queue].job_idx)) { - rte_errno = ENOMEM; - goto error; - } - job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx]; + job = flow_hw_job_get(priv, queue); + if (!job) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "fail to destroy rte flow: flow queue full"); job->type = MLX5_HW_Q_JOB_TYPE_DESTROY; job->user_data = user_data; job->flow = fh; rule_attr.user_data = job; rule_attr.rule_idx = fh->rule_idx; ret = mlx5dr_rule_destroy((struct mlx5dr_rule *)fh->rule, &rule_attr); - if (likely(!ret)) - return 0; - priv->hw_q[queue].job_idx++; -error: - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "fail to destroy rte flow"); + if (ret) { + flow_hw_job_put(priv, job, queue); + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "fail to destroy rte flow"); + } + return 0; } /** @@ -3732,7 +3749,7 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; struct rte_ring *r = priv->hw_q[queue].indir_cq; - struct mlx5_hw_q_job *job; + struct mlx5_hw_q_job *job = NULL; void *user_data = NULL; uint32_t type, idx; struct mlx5_aso_mtr *aso_mtr; @@ -3792,8 +3809,16 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, job->query.hw); aso_ct->state = ASO_CONNTRACK_READY; } + } else { + /* + * rte_flow_op_result::user data can point to + * struct mlx5_aso_mtr object as well + */ + if (queue == CTRL_QUEUE_ID(priv)) + continue; + MLX5_ASSERT(false); } - 
priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job; + flow_hw_job_put(priv, job, queue); } return ret_comp; } @@ -3865,7 +3890,7 @@ flow_hw_pull(struct rte_eth_dev *dev, mlx5_ipool_free(job->flow->table->resource, res_idx); } } - priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job; + flow_hw_job_put(priv, job, queue); } /* 2. Pull indirect action comp. */ if (ret < n_res) @@ -3874,7 +3899,7 @@ flow_hw_pull(struct rte_eth_dev *dev, return ret; } -static inline void +static inline uint32_t __flow_hw_push_action(struct rte_eth_dev *dev, uint32_t queue) { @@ -3889,10 +3914,35 @@ __flow_hw_push_action(struct rte_eth_dev *dev, rte_ring_dequeue(iq, &job); rte_ring_enqueue(cq, job); } - if (priv->hws_ctpool) - mlx5_aso_push_wqe(priv->sh, &priv->ct_mng->aso_sqs[queue]); - if (priv->hws_mpool) - mlx5_aso_push_wqe(priv->sh, &priv->hws_mpool->sq[queue]); + if (!priv->shared_host) { + if (priv->hws_ctpool) + mlx5_aso_push_wqe(priv->sh, + &priv->ct_mng->aso_sqs[queue]); + if (priv->hws_mpool) + mlx5_aso_push_wqe(priv->sh, + &priv->hws_mpool->sq[queue]); + } + return priv->hw_q[queue].size - priv->hw_q[queue].job_idx; +} + +static int +__flow_hw_push(struct rte_eth_dev *dev, + uint32_t queue, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + int ret, num; + + num = __flow_hw_push_action(dev, queue); + ret = mlx5dr_send_queue_action(priv->dr_ctx, queue, + MLX5DR_SEND_QUEUE_ACTION_DRAIN_ASYNC); + if (ret) { + rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "fail to push flows"); + return ret; + } + return num; } /** @@ -3912,22 +3962,11 @@ __flow_hw_push_action(struct rte_eth_dev *dev, */ static int flow_hw_push(struct rte_eth_dev *dev, - uint32_t queue, - struct rte_flow_error *error) + uint32_t queue, struct rte_flow_error *error) { - struct mlx5_priv *priv = dev->data->dev_private; - int ret; + int ret = __flow_hw_push(dev, queue, error); - __flow_hw_push_action(dev, queue); - ret = mlx5dr_send_queue_action(priv->dr_ctx, queue, - MLX5DR_SEND_QUEUE_ACTION_DRAIN_ASYNC); - if (ret) { - rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "fail to push flows"); - return ret; - } - return 0; + return ret >= 0 ? 0 : ret; } /** @@ -3937,8 +3976,6 @@ flow_hw_push(struct rte_eth_dev *dev, * Pointer to the rte_eth_dev structure. * @param[in] queue * The queue to pull the flow. - * @param[in] pending_rules - * The pending flow number. * @param[out] error * Pointer to error structure. 
* @@ -3947,24 +3984,24 @@ flow_hw_push(struct rte_eth_dev *dev, */ static int __flow_hw_pull_comp(struct rte_eth_dev *dev, - uint32_t queue, - uint32_t pending_rules, - struct rte_flow_error *error) + uint32_t queue, struct rte_flow_error *error) { struct rte_flow_op_result comp[BURST_THR]; int ret, i, empty_loop = 0; + uint32_t pending_rules; - ret = flow_hw_push(dev, queue, error); + ret = __flow_hw_push(dev, queue, error); if (ret < 0) return ret; + pending_rules = ret; while (pending_rules) { ret = flow_hw_pull(dev, queue, comp, BURST_THR, error); if (ret < 0) return -1; if (!ret) { - rte_delay_us_sleep(20000); + rte_delay_us_sleep(MLX5_ASO_WQE_CQE_RESPONSE_DELAY); if (++empty_loop > 5) { - DRV_LOG(WARNING, "No available dequeue, quit."); + DRV_LOG(WARNING, "No available dequeue %u, quit.", pending_rules); break; } continue; @@ -3973,13 +4010,16 @@ __flow_hw_pull_comp(struct rte_eth_dev *dev, if (comp[i].status == RTE_FLOW_OP_ERROR) DRV_LOG(WARNING, "Flow flush get error CQE."); } - if ((uint32_t)ret > pending_rules) { - DRV_LOG(WARNING, "Flow flush get extra CQE."); - return rte_flow_error_set(error, ERANGE, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "get extra CQE"); - } - pending_rules -= ret; + /* + * Indirect **SYNC** METER_MARK and CT actions do not + * remove completion after WQE post. + * That implementation avoids HW timeout. + * The completion is removed before the following WQE post. + * However, HWS queue updates do not reflect that behaviour. + * Therefore, during port destruction sync queue may have + * pending completions. + */ + pending_rules -= RTE_MIN(pending_rules, (uint32_t)ret); empty_loop = 0; } return 0; @@ -4001,7 +4041,7 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_hw_q *hw_q; + struct mlx5_hw_q *hw_q = &priv->hw_q[MLX5_DEFAULT_FLUSH_QUEUE]; struct rte_flow_template_table *tbl; struct rte_flow_hw *flow; struct rte_flow_op_attr attr = { @@ -4020,13 +4060,10 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev, * be minus value. */ for (queue = 0; queue < priv->nb_queue; queue++) { - hw_q = &priv->hw_q[queue]; - if (__flow_hw_pull_comp(dev, queue, hw_q->size - hw_q->job_idx, - error)) + if (__flow_hw_pull_comp(dev, queue, error)) return -1; } /* Flush flow per-table from MLX5_DEFAULT_FLUSH_QUEUE. */ - hw_q = &priv->hw_q[MLX5_DEFAULT_FLUSH_QUEUE]; LIST_FOREACH(tbl, &priv->flow_hw_tbl, next) { if (!tbl->cfg.external) continue; @@ -4042,8 +4079,8 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev, /* Drain completion with queue size. */ if (pending_rules >= hw_q->size) { if (__flow_hw_pull_comp(dev, - MLX5_DEFAULT_FLUSH_QUEUE, - pending_rules, error)) + MLX5_DEFAULT_FLUSH_QUEUE, + error)) return -1; pending_rules = 0; } @@ -4051,8 +4088,7 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev, } /* Drain left completion. */ if (pending_rules && - __flow_hw_pull_comp(dev, MLX5_DEFAULT_FLUSH_QUEUE, pending_rules, - error)) + __flow_hw_pull_comp(dev, MLX5_DEFAULT_FLUSH_QUEUE, error)) return -1; return 0; } @@ -9911,18 +9947,6 @@ flow_hw_action_push(const struct rte_flow_op_attr *attr) return attr ? 
!attr->postpone : true; } -static __rte_always_inline struct mlx5_hw_q_job * -flow_hw_job_get(struct mlx5_priv *priv, uint32_t queue) -{ - return priv->hw_q[queue].job[--priv->hw_q[queue].job_idx]; -} - -static __rte_always_inline void -flow_hw_job_put(struct mlx5_priv *priv, uint32_t queue) -{ - priv->hw_q[queue].job_idx++; -} - static __rte_always_inline struct mlx5_hw_q_job * flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue, const struct rte_flow_action_handle *handle, @@ -9933,13 +9957,13 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue, struct mlx5_hw_q_job *job; MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE); - if (unlikely(!priv->hw_q[queue].job_idx)) { + job = flow_hw_job_get(priv, queue); + if (!job) { rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, "Action destroy failed due to queue full."); return NULL; } - job = flow_hw_job_get(priv, queue); job->type = type; job->action = handle; job->user_data = user_data; @@ -9953,16 +9977,21 @@ flow_hw_action_finalize(struct rte_eth_dev *dev, uint32_t queue, bool push, bool aso, bool status) { struct mlx5_priv *priv = dev->data->dev_private; + + if (queue == MLX5_HW_INV_QUEUE) + queue = CTRL_QUEUE_ID(priv); if (likely(status)) { - if (push) - __flow_hw_push_action(dev, queue); + /* 1. add new job to a queue */ if (!aso) rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq : priv->hw_q[queue].indir_iq, job); + /* 2. send pending jobs */ + if (push) + __flow_hw_push_action(dev, queue); } else { - flow_hw_job_put(priv, queue); + flow_hw_job_put(priv, job, queue); } } @@ -11584,13 +11613,7 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev, ret = -rte_errno; goto error; } - ret = flow_hw_push(proxy_dev, queue, NULL); - if (ret) { - DRV_LOG(ERR, "port %u failed to drain control flow queue", - proxy_dev->data->port_id); - goto error; - } - ret = __flow_hw_pull_comp(proxy_dev, queue, 1, NULL); + ret = __flow_hw_pull_comp(proxy_dev, queue, NULL); if (ret) { DRV_LOG(ERR, "port %u failed to insert control flow", proxy_dev->data->port_id); @@ -11651,13 +11674,7 @@ flow_hw_destroy_ctrl_flow(struct rte_eth_dev *dev, struct rte_flow *flow) " flow operation", dev->data->port_id); goto exit; } - ret = flow_hw_push(dev, queue, NULL); - if (ret) { - DRV_LOG(ERR, "port %u failed to drain control flow queue", - dev->data->port_id); - goto exit; - } - ret = __flow_hw_pull_comp(dev, queue, 1, NULL); + ret = __flow_hw_pull_comp(dev, queue, NULL); if (ret) { DRV_LOG(ERR, "port %u failed to destroy control flow", dev->data->port_id); From patchwork Thu Nov 16 08:08:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Etelson X-Patchwork-Id: 134410 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 18AE243341; Thu, 16 Nov 2023 09:09:20 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5DBCA40649; Thu, 16 Nov 2023 09:09:12 +0100 (CET) Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2070.outbound.protection.outlook.com [40.107.244.70]) by mails.dpdk.org (Postfix) with ESMTP id 73BCE40150 for ; Thu, 16 Nov 2023 09:09:10 +0100 (CET) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From: Gregory Etelson
Cc: Ori Kam, Matan Azrad, Viacheslav Ovsiienko, Suanming Mou
Subject: [PATCH 2/2] net/mlx5: fix indirect list actions completions processing
Date: Thu, 16 Nov 2023 10:08:33 +0200
Message-ID: <20231116080833.336377-3-getelson@nvidia.com>
In-Reply-To: <20231116080833.336377-1-getelson@nvidia.com>
References: <20231116080833.336377-1-getelson@nvidia.com>
List-Id: DPDK patches and discussions

The MLX5 PMD separates async HWS job completions into 2 categories:
- HWS flow rule completions;
- HWS indirect action completions.
When processing the latter, the current PMD could not differentiate
between completions of legacy indirect actions and indirect list
actions.
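[Editor's illustration] A minimal, self-contained sketch of the
dispatch-by-type idea described above and in the next paragraph. All
demo_* names are assumptions made for this example; the driver itself
uses struct mlx5_hw_q_job and the MLX5_HW_INDIRECT_TYPE_* values added
by this patch.

    /*
     * Tag each async job with its indirect action category so the
     * completion path can pick the right handler. Hypothetical names;
     * not the mlx5 definitions.
     */
    #include <stdio.h>

    enum demo_indirect_type {
    	DEMO_INDIRECT_TYPE_LEGACY, /* METER_MARK, CT, quota, ... */
    	DEMO_INDIRECT_TYPE_LIST,   /* MIRROR, REFORMAT: no WQE posted */
    };

    struct demo_job {
    	enum demo_indirect_type indirect_type;
    	void *user_data;
    };

    static void
    demo_process_legacy_comp(struct demo_job *job)
    {
    	/* Placeholder for the legacy handler (meter/CT state update). */
    	printf("legacy completion, user_data=%p\n", job->user_data);
    }

    static void
    demo_process_comp(struct demo_job *job)
    {
    	if (job->indirect_type == DEMO_INDIRECT_TYPE_LEGACY)
    		demo_process_legacy_comp(job);
    	/* LIST jobs (mirror/reformat) post no WQE, so nothing to do. */
    }

    int
    main(void)
    {
    	struct demo_job legacy = { DEMO_INDIRECT_TYPE_LEGACY, NULL };
    	struct demo_job list = { DEMO_INDIRECT_TYPE_LIST, NULL };

    	demo_process_comp(&legacy);
    	demo_process_comp(&list);
    	return 0;
    }

The point of the split is that only legacy indirect actions ever have a
per-object state to update on completion; list actions are released or
finalized elsewhere, so their completions need no extra handling.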
The patch marks async job object with indirect action type and processes job completion according to that type. Current PMD supports 2 indirect action list types - MIRROR and REFORMAT. These indirect list types do not post WQE to create action. Therefore, the patch does not process `MLX5_HW_INDIRECT_TYPE_LIST` jobs. The new `indirect_type` member does not increase size of the `struct mlx5_hw_q_job`. Fixes: 3564e928c759 ("net/mlx5: support HWS flow mirror action") Signed-off-by: Gregory Etelson Acked-by: Ori Kam --- drivers/net/mlx5/mlx5.h | 6 ++ drivers/net/mlx5/mlx5_flow_hw.c | 107 ++++++++++++++++++-------------- 2 files changed, 68 insertions(+), 45 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index f0d63a0ba5..76bf7d0f4f 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -382,11 +382,17 @@ enum mlx5_hw_job_type { MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */ }; +enum mlx5_hw_indirect_type { + MLX5_HW_INDIRECT_TYPE_LEGACY, + MLX5_HW_INDIRECT_TYPE_LIST +}; + #define MLX5_HW_MAX_ITEMS (16) /* HW steering flow management job descriptor. */ struct mlx5_hw_q_job { uint32_t type; /* Job type. */ + uint32_t indirect_type; union { struct rte_flow_hw *flow; /* Flow attached to the job. */ const void *action; /* Indirect action attached to the job. */ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index fb2e6bf67b..da873ae2e2 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -3740,6 +3740,56 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue, } } +static __rte_always_inline void +flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job *job, + uint32_t queue) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_ct_action *aso_ct; + struct mlx5_aso_mtr *aso_mtr; + uint32_t type, idx; + + if (MLX5_INDIRECT_ACTION_TYPE_GET(job->action) == + MLX5_INDIRECT_ACTION_TYPE_QUOTA) { + mlx5_quota_async_completion(dev, queue, job); + } else if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) { + type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action); + if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) { + idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action); + mlx5_ipool_free(priv->hws_mpool->idx_pool, idx); + } + } else if (job->type == MLX5_HW_Q_JOB_TYPE_CREATE) { + type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action); + if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) { + idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action); + aso_mtr = mlx5_ipool_get(priv->hws_mpool->idx_pool, idx); + aso_mtr->state = ASO_METER_READY; + } else if (type == MLX5_INDIRECT_ACTION_TYPE_CT) { + idx = MLX5_ACTION_CTX_CT_GET_IDX + ((uint32_t)(uintptr_t)job->action); + aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx); + aso_ct->state = ASO_CONNTRACK_READY; + } + } else if (job->type == MLX5_HW_Q_JOB_TYPE_QUERY) { + type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action); + if (type == MLX5_INDIRECT_ACTION_TYPE_CT) { + idx = MLX5_ACTION_CTX_CT_GET_IDX + ((uint32_t)(uintptr_t)job->action); + aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx); + mlx5_aso_ct_obj_analyze(job->query.user, + job->query.hw); + aso_ct->state = ASO_CONNTRACK_READY; + } + } else { + /* + * rte_flow_op_result::user data can point to + * struct mlx5_aso_mtr object as well + */ + if (queue != CTRL_QUEUE_ID(priv)) + MLX5_ASSERT(false); + } +} + static inline int __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, uint32_t queue, @@ -3749,11 +3799,7 @@ 
__flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; struct rte_ring *r = priv->hw_q[queue].indir_cq; - struct mlx5_hw_q_job *job = NULL; void *user_data = NULL; - uint32_t type, idx; - struct mlx5_aso_mtr *aso_mtr; - struct mlx5_aso_ct_action *aso_ct; int ret_comp, i; ret_comp = (int)rte_ring_count(r); @@ -3775,49 +3821,18 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, &res[ret_comp], n_res - ret_comp); for (i = 0; i < ret_comp; i++) { - job = (struct mlx5_hw_q_job *)res[i].user_data; + struct mlx5_hw_q_job *job = (struct mlx5_hw_q_job *)res[i].user_data; + /* Restore user data. */ res[i].user_data = job->user_data; - if (MLX5_INDIRECT_ACTION_TYPE_GET(job->action) == - MLX5_INDIRECT_ACTION_TYPE_QUOTA) { - mlx5_quota_async_completion(dev, queue, job); - } else if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) { - type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action); - if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) { - idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action); - mlx5_ipool_free(priv->hws_mpool->idx_pool, idx); - } - } else if (job->type == MLX5_HW_Q_JOB_TYPE_CREATE) { - type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action); - if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) { - idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action); - aso_mtr = mlx5_ipool_get(priv->hws_mpool->idx_pool, idx); - aso_mtr->state = ASO_METER_READY; - } else if (type == MLX5_INDIRECT_ACTION_TYPE_CT) { - idx = MLX5_ACTION_CTX_CT_GET_IDX - ((uint32_t)(uintptr_t)job->action); - aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx); - aso_ct->state = ASO_CONNTRACK_READY; - } - } else if (job->type == MLX5_HW_Q_JOB_TYPE_QUERY) { - type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action); - if (type == MLX5_INDIRECT_ACTION_TYPE_CT) { - idx = MLX5_ACTION_CTX_CT_GET_IDX - ((uint32_t)(uintptr_t)job->action); - aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx); - mlx5_aso_ct_obj_analyze(job->query.user, - job->query.hw); - aso_ct->state = ASO_CONNTRACK_READY; - } - } else { - /* - * rte_flow_op_result::user data can point to - * struct mlx5_aso_mtr object as well - */ - if (queue == CTRL_QUEUE_ID(priv)) - continue; - MLX5_ASSERT(false); - } + if (job->indirect_type == MLX5_HW_INDIRECT_TYPE_LEGACY) + flow_hw_pull_legacy_indirect_comp(dev, job, queue); + /* + * Current PMD supports 2 indirect action list types - MIRROR and REFORMAT. + * These indirect list types do not post WQE to create action. + * Future indirect list types that do post WQE will add + * completion handlers here. + */ flow_hw_job_put(priv, job, queue); } return ret_comp; @@ -10109,6 +10124,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, } if (job) { job->action = handle; + job->indirect_type = MLX5_HW_INDIRECT_TYPE_LEGACY; flow_hw_action_finalize(dev, queue, job, push, aso, handle != NULL); } @@ -11341,6 +11357,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue, } if (job) { job->action = handle; + job->indirect_type = MLX5_HW_INDIRECT_TYPE_LIST; flow_hw_action_finalize(dev, queue, job, push, false, handle != NULL); }
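[Editor's illustration] Relating back to patch 1/2 of this series: the
fix replaces open-coded job_idx manipulation with bounded
flow_hw_job_get()/flow_hw_job_put() helpers and lets the drain loop
accept more completions than it expected (the driver clamps with
RTE_MIN in __flow_hw_pull_comp()). The sketch below shows that pattern
in isolation; demo_* names, the queue size, and the printf output are
assumptions for this example, not driver code.

    /*
     * Bounded LIFO job pool plus a drain step that clamps the pending
     * counter instead of underflowing when "extra" completions arrive,
     * as sync METER_MARK/CT updates may leave behind.
     */
    #include <stddef.h>
    #include <stdio.h>

    #define DEMO_QUEUE_SIZE 4

    struct demo_job { int id; };

    struct demo_queue {
    	struct demo_job *job[DEMO_QUEUE_SIZE]; /* free-job stack */
    	unsigned int job_idx;                  /* number of free jobs */
    	unsigned int size;
    };

    static struct demo_job *
    demo_job_get(struct demo_queue *q)
    {
    	/* Return NULL when the pool is exhausted instead of underflowing. */
    	return q->job_idx ? q->job[--q->job_idx] : NULL;
    }

    static void
    demo_job_put(struct demo_queue *q, struct demo_job *job)
    {
    	if (q->job_idx < q->size)
    		q->job[q->job_idx++] = job;
    }

    static void
    demo_drain(unsigned int pending, unsigned int completed)
    {
    	/* Clamp: never let extra completions underflow the counter. */
    	unsigned int consumed = completed < pending ? completed : pending;

    	pending -= consumed;
    	printf("completed=%u, still pending=%u\n", completed, pending);
    }

    int
    main(void)
    {
    	static struct demo_job jobs[DEMO_QUEUE_SIZE];
    	struct demo_queue q = { .job_idx = 0, .size = DEMO_QUEUE_SIZE };
    	unsigned int i;

    	for (i = 0; i < DEMO_QUEUE_SIZE; i++)
    		demo_job_put(&q, &jobs[i]);
    	while (demo_job_get(&q) != NULL)
    		; /* drain the free-job stack */
    	demo_drain(2, 3); /* 3 completions against 2 expected */
    	return 0;
    }

Making both the get and the put bounded is what allows the pull path to
return jobs unconditionally even when the hardware reports more
completions than the software accounted for.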