From patchwork Thu Feb 29 11:51:50 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dariusz Sosnowski <dsosnowski@nvidia.com>
X-Patchwork-Id: 137473
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC: dev@dpdk.org, Raslan Darawsheh, Bing Zhao
Subject: [PATCH v2 05/11] net/mlx5: remove action params from job
Date: Thu, 29 Feb 2024 12:51:50 +0100
Message-ID: <20240229115157.201671-6-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240229115157.201671-1-dsosnowski@nvidia.com>
References: <20240228170046.176600-1-dsosnowski@nvidia.com>
 <20240229115157.201671-1-dsosnowski@nvidia.com>
MIME-Version: 1.0
The mlx5_hw_q_job struct held references to buffers which contained:

- the modify header commands array,
- the encap/decap data buffer,
- the IPv6 routing data buffer.

These buffers were passed as parameters to the HWS layer during rule
creation. They were needed only for the duration of the call which
enqueues the flow operation (i.e. mlx5dr_rule_create()). Once the
operation is enqueued, the data stored there can be safely discarded,
so there is no need to keep it for the whole lifetime of a job.

This patch removes the references to these buffers from mlx5_hw_q_job
and removes the related allocations, reducing the job memory footprint.
The buffers previously stored per job are replaced with stack-allocated
ones, contained in the mlx5_flow_hw_action_params struct.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5.h         |   3 -
 drivers/net/mlx5/mlx5_flow.h    |  10 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 120 ++++++++++++++------------------
 3 files changed, 63 insertions(+), 70 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f11a0181b8..42dc312a87 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -401,9 +401,6 @@ struct mlx5_hw_q_job {
 		const void *action; /* Indirect action attached to the job. */
 	};
 	void *user_data; /* Job user data. */
-	uint8_t *encap_data; /* Encap data. */
-	uint8_t *push_data; /* IPv6 routing push data. */
-	struct mlx5_modification_cmd *mhdr_cmd;
 	struct rte_flow_item *items;
 	union {
 		struct {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 02af0a08fa..9ed356e1c2 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1306,6 +1306,16 @@ typedef int
 
 #define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
 
+/** Container for flow action data constructed during flow rule creation. */
+struct mlx5_flow_hw_action_params {
+	/** Array of constructed modify header commands. */
+	struct mlx5_modification_cmd mhdr_cmd[MLX5_MHDR_MAX_CMD];
+	/** Constructed encap/decap data buffer. */
+	uint8_t encap_data[MLX5_ENCAP_MAX_LEN];
+	/** Constructed IPv6 routing data buffer. */
+	uint8_t ipv6_push_data[MLX5_PUSH_MAX_LEN];
+};
+
 /* rte flow action translate to DR action struct. */
 struct mlx5_action_construct_data {
 	LIST_ENTRY(mlx5_action_construct_data) next;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 1fe8f42618..a87fe4d07a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -158,7 +158,7 @@ static int flow_hw_translate_group(struct rte_eth_dev *dev,
 				   struct rte_flow_error *error);
 static __rte_always_inline int
 flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
-			       struct mlx5_hw_q_job *job,
+			       struct mlx5_modification_cmd *mhdr_cmd,
 			       struct mlx5_action_construct_data *act_data,
 			       const struct mlx5_hw_actions *hw_acts,
 			       const struct rte_flow_action *action);
@@ -2812,7 +2812,7 @@ flow_hw_mhdr_cmd_is_nop(const struct mlx5_modification_cmd *cmd)
  *   0 on success, negative value otherwise and rte_errno is set.
  */
 static __rte_always_inline int
-flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
+flow_hw_modify_field_construct(struct mlx5_modification_cmd *mhdr_cmd,
 			       struct mlx5_action_construct_data *act_data,
 			       const struct mlx5_hw_actions *hw_acts,
 			       const struct rte_flow_action *action)
@@ -2871,7 +2871,7 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
 		if (i >= act_data->modify_header.mhdr_cmds_end)
 			return -1;
-		if (flow_hw_mhdr_cmd_is_nop(&job->mhdr_cmd[i])) {
+		if (flow_hw_mhdr_cmd_is_nop(&mhdr_cmd[i])) {
 			++i;
 			continue;
 		}
@@ -2891,7 +2891,7 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
 		    mhdr_action->dst.field == RTE_FLOW_FIELD_IPV6_DSCP)
 			data <<= MLX5_IPV6_HDR_DSCP_SHIFT;
 		data = (data & mask) >> off_b;
-		job->mhdr_cmd[i++].data1 = rte_cpu_to_be_32(data);
+		mhdr_cmd[i++].data1 = rte_cpu_to_be_32(data);
 		++field;
 	} while (field->size);
 	return 0;
@@ -2905,8 +2905,10 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
  *
  * @param[in] dev
  *   Pointer to the rte_eth_dev structure.
- * @param[in] job
- *   Pointer to job descriptor.
+ * @param[in] flow
+ *   Pointer to flow structure.
+ * @param[in] ap
+ *   Pointer to container for temporarily constructed actions' parameters.
  * @param[in] hw_acts
 *   Pointer to translated actions from template.
 * @param[in] it_idx
@@ -2923,7 +2925,8 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
  */
 static __rte_always_inline int
 flow_hw_actions_construct(struct rte_eth_dev *dev,
-			  struct mlx5_hw_q_job *job,
+			  struct rte_flow_hw *flow,
+			  struct mlx5_flow_hw_action_params *ap,
 			  const struct mlx5_hw_action_template *hw_at,
 			  const uint8_t it_idx,
 			  const struct rte_flow_action actions[],
@@ -2933,7 +2936,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
-	struct rte_flow_template_table *table = job->flow->table;
+	struct rte_flow_template_table *table = flow->table;
 	struct mlx5_action_construct_data *act_data;
 	const struct rte_flow_actions_template *at = hw_at->action_template;
 	const struct mlx5_hw_actions *hw_acts = &hw_at->acts;
@@ -2945,8 +2948,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	const struct rte_flow_action_meter *meter = NULL;
 	const struct rte_flow_action_age *age = NULL;
 	const struct rte_flow_action_nat64 *nat64_c = NULL;
-	uint8_t *buf = job->encap_data;
-	uint8_t *push_buf = job->push_data;
 	struct rte_flow_attr attr = {
 		.ingress = 1,
 	};
@@ -2971,17 +2972,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0 && !hw_acts->mhdr->shared) {
 		uint16_t pos = hw_acts->mhdr->pos;
 
-		mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx);
+		mp_segment = mlx5_multi_pattern_segment_find(table, flow->res_idx);
 		if (!mp_segment || !mp_segment->mhdr_action)
 			return -1;
 		rule_acts[pos].action = mp_segment->mhdr_action;
 		/* offset is relative to DR action */
 		rule_acts[pos].modify_header.offset =
-					job->flow->res_idx - mp_segment->head_index;
+					flow->res_idx - mp_segment->head_index;
 		rule_acts[pos].modify_header.data =
-					(uint8_t *)job->mhdr_cmd;
-		rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
-			   sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
+					(uint8_t *)ap->mhdr_cmd;
+		rte_memcpy(ap->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
+			   sizeof(*ap->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
 	}
 	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
 		uint32_t jump_group;
@@ -3014,7 +3015,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			if (flow_hw_shared_action_construct
 					(dev, queue, action, table, it_idx,
-					 at->action_flags, job->flow,
+					 at->action_flags, flow,
 					 &rule_acts[act_data->action_dst]))
 				return -1;
 			break;
@@ -3039,8 +3040,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				return -1;
 			rule_acts[act_data->action_dst].action =
 			(!!attr.group) ? jump->hws_action : jump->root_action;
-			job->flow->jump = jump;
-			job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
+			flow->jump = jump;
+			flow->fate_type = MLX5_FLOW_FATE_JUMP;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
@@ -3050,8 +3051,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			if (!hrxq)
 				return -1;
 			rule_acts[act_data->action_dst].action = hrxq->action;
-			job->flow->hrxq = hrxq;
-			job->flow->fate_type = MLX5_FLOW_FATE_QUEUE;
+			flow->hrxq = hrxq;
+			flow->fate_type = MLX5_FLOW_FATE_QUEUE;
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
 			item_flags = table->its[it_idx]->item_flags;
@@ -3063,38 +3064,37 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
 			enc_item = ((const struct rte_flow_action_vxlan_encap *)
 				   action->conf)->definition;
-			if (flow_dv_convert_encap_data(enc_item, buf, &encap_len, NULL))
+			if (flow_dv_convert_encap_data(enc_item, ap->encap_data, &encap_len, NULL))
 				return -1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
 			enc_item = ((const struct rte_flow_action_nvgre_encap *)
 				   action->conf)->definition;
-			if (flow_dv_convert_encap_data(enc_item, buf, &encap_len, NULL))
+			if (flow_dv_convert_encap_data(enc_item, ap->encap_data, &encap_len, NULL))
 				return -1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
 			raw_encap_data =
 				(const struct rte_flow_action_raw_encap *)
 				 action->conf;
-			rte_memcpy((void *)buf, raw_encap_data->data, act_data->encap.len);
-			MLX5_ASSERT(raw_encap_data->size ==
-				    act_data->encap.len);
+			rte_memcpy(ap->encap_data, raw_encap_data->data, act_data->encap.len);
+			MLX5_ASSERT(raw_encap_data->size == act_data->encap.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
 			ipv6_push =
 				(const struct rte_flow_action_ipv6_ext_push *)action->conf;
-			rte_memcpy((void *)push_buf, ipv6_push->data,
+			rte_memcpy(ap->ipv6_push_data, ipv6_push->data,
 				   act_data->ipv6_ext.len);
 			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
-				ret = flow_hw_set_vlan_vid_construct(dev, job,
+				ret = flow_hw_set_vlan_vid_construct(dev, ap->mhdr_cmd,
 								     act_data,
 								     hw_acts,
 								     action);
 			else
-				ret = flow_hw_modify_field_construct(job,
+				ret = flow_hw_modify_field_construct(ap->mhdr_cmd,
 								     act_data,
 								     hw_acts,
 								     action);
@@ -3130,8 +3130,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			rule_acts[act_data->action_dst + 1].action = (!!attr.group) ?
 					jump->hws_action : jump->root_action;
-			job->flow->jump = jump;
-			job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
+			flow->jump = jump;
+			flow->fate_type = MLX5_FLOW_FATE_JUMP;
 			if (mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr))
 				return -1;
 			break;
@@ -3145,11 +3145,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			 */
 			age_idx = mlx5_hws_age_action_create(priv, queue, 0,
 							     age,
-							     job->flow->res_idx,
+							     flow->res_idx,
 							     error);
 			if (age_idx == 0)
 				return -rte_errno;
-			job->flow->age_idx = age_idx;
+			flow->age_idx = age_idx;
 			if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT)
 				/*
 				 * When AGE uses indirect counter, no need to
@@ -3172,7 +3172,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 					);
 			if (ret != 0)
 				return ret;
-			job->flow->cnt_id = cnt_id;
+			flow->cnt_id = cnt_id;
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
 			ret = mlx5_hws_cnt_pool_get_action_offset
@@ -3183,7 +3183,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 					);
 			if (ret != 0)
 				return ret;
-			job->flow->cnt_id = act_data->shared_counter.id;
+			flow->cnt_id = act_data->shared_counter.id;
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 			ct_idx = MLX5_INDIRECT_ACTION_IDX_GET(action->conf);
@@ -3210,8 +3210,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			 */
 			ret = flow_hw_meter_mark_compile(dev,
 				act_data->action_dst, action,
-				rule_acts, &job->flow->mtr_id,
-				MLX5_HW_INV_QUEUE, error);
+				rule_acts, &flow->mtr_id, MLX5_HW_INV_QUEUE, error);
 			if (ret != 0)
 				return ret;
 			break;
@@ -3226,9 +3225,9 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	}
 	if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT) {
 		if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_AGE) {
-			age_idx = job->flow->age_idx & MLX5_HWS_AGE_IDX_MASK;
+			age_idx = flow->age_idx & MLX5_HWS_AGE_IDX_MASK;
 			if (mlx5_hws_cnt_age_get(priv->hws_cpool,
-						 job->flow->cnt_id) != age_idx)
+						 flow->cnt_id) != age_idx)
 				/*
 				 * This is first use of this indirect counter
 				 * for this indirect AGE, need to increase the
@@ -3240,7 +3239,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			 * Update this indirect counter the indirect/direct AGE in which
 			 * using it.
 			 */
-			mlx5_hws_cnt_age_set(priv->hws_cpool, job->flow->cnt_id,
+			mlx5_hws_cnt_age_set(priv->hws_cpool, flow->cnt_id,
 					     age_idx);
 	}
 	if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
@@ -3250,21 +3249,21 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		if (ix < 0)
 			return -1;
 		if (!mp_segment)
-			mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx);
+			mp_segment = mlx5_multi_pattern_segment_find(table, flow->res_idx);
 		if (!mp_segment || !mp_segment->reformat_action[ix])
 			return -1;
 		ra->action = mp_segment->reformat_action[ix];
 		/* reformat offset is relative to selected DR action */
-		ra->reformat.offset = job->flow->res_idx - mp_segment->head_index;
-		ra->reformat.data = buf;
+		ra->reformat.offset = flow->res_idx - mp_segment->head_index;
+		ra->reformat.data = ap->encap_data;
 	}
 	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
 		rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
-				job->flow->res_idx - 1;
-		rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf;
+				flow->res_idx - 1;
+		rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = ap->ipv6_push_data;
 	}
 	if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
-		job->flow->cnt_id = hw_acts->cnt_id;
+		flow->cnt_id = hw_acts->cnt_id;
 	return 0;
 }
@@ -3364,6 +3363,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 		.burst = attr->postpone,
 	};
 	struct mlx5dr_rule_action *rule_acts;
+	struct mlx5_flow_hw_action_params ap;
 	struct rte_flow_hw *flow = NULL;
 	struct mlx5_hw_q_job *job = NULL;
 	const struct rte_flow_item *rule_items;
@@ -3420,7 +3420,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	 * No need to copy and contrust a new "actions" list based on the
 	 * user's input, in order to save the cost.
 	 */
-	if (flow_hw_actions_construct(dev, job,
+	if (flow_hw_actions_construct(dev, flow, &ap,
 				      &table->ats[action_template_index],
 				      pattern_template_index, actions,
 				      rule_acts, queue, error)) {
@@ -3512,6 +3512,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 		.burst = attr->postpone,
 	};
 	struct mlx5dr_rule_action *rule_acts;
+	struct mlx5_flow_hw_action_params ap;
 	struct rte_flow_hw *flow = NULL;
 	struct mlx5_hw_q_job *job = NULL;
 	uint32_t flow_idx = 0;
@@ -3564,7 +3565,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	 * No need to copy and contrust a new "actions" list based on the
 	 * user's input, in order to save the cost.
 	 */
-	if (flow_hw_actions_construct(dev, job,
+	if (flow_hw_actions_construct(dev, flow, &ap,
 				      &table->ats[action_template_index], 0,
 				      actions, rule_acts, queue, error)) {
 		rte_errno = EINVAL;
@@ -3646,6 +3647,7 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 		.burst = attr->postpone,
 	};
 	struct mlx5dr_rule_action *rule_acts;
+	struct mlx5_flow_hw_action_params ap;
 	struct rte_flow_hw *of = (struct rte_flow_hw *)flow;
 	struct rte_flow_hw *nf;
 	struct rte_flow_template_table *table = of->table;
@@ -3698,7 +3700,7 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	 * No need to copy and contrust a new "actions" list based on the
 	 * user's input, in order to save the cost.
 	 */
-	if (flow_hw_actions_construct(dev, job,
+	if (flow_hw_actions_construct(dev, nf, &ap,
 				      &table->ats[action_template_index],
 				      nf->mt_idx, actions,
 				      rule_acts, queue, error)) {
@@ -6682,7 +6684,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
 
 static __rte_always_inline int
 flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
-			       struct mlx5_hw_q_job *job,
+			       struct mlx5_modification_cmd *mhdr_cmd,
 			       struct mlx5_action_construct_data *act_data,
 			       const struct mlx5_hw_actions *hw_acts,
 			       const struct rte_flow_action *action)
@@ -6710,8 +6712,7 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
 		.conf = &conf
 	};
 
-	return flow_hw_modify_field_construct(job, act_data, hw_acts,
-					      &modify_action);
+	return flow_hw_modify_field_construct(mhdr_cmd, act_data, hw_acts, &modify_action);
 }
 
 static int
@@ -10121,10 +10122,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	}
 	mem_size += (sizeof(struct mlx5_hw_q_job *) +
 		    sizeof(struct mlx5_hw_q_job) +
-		    sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
-		    sizeof(uint8_t) * MLX5_PUSH_MAX_LEN +
-		    sizeof(struct mlx5_modification_cmd) *
-		    MLX5_MHDR_MAX_CMD +
 		    sizeof(struct rte_flow_item) *
 		    MLX5_HW_MAX_ITEMS +
 		    sizeof(struct rte_flow_hw)) *
@@ -10137,8 +10134,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		goto err;
 	}
 	for (i = 0; i < nb_q_updated; i++) {
-		uint8_t *encap = NULL, *push = NULL;
-		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
 		struct rte_flow_hw *upd_flow = NULL;
 
@@ -10152,20 +10147,11 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			    &job[_queue_attr[i - 1]->size - 1].upd_flow[1];
 		job = (struct mlx5_hw_q_job *)
 		      &priv->hw_q[i].job[_queue_attr[i]->size];
-		mhdr_cmd = (struct mlx5_modification_cmd *)
-			   &job[_queue_attr[i]->size];
-		encap = (uint8_t *)
-			&mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD];
-		push = (uint8_t *)
-		       &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN];
 		items = (struct rte_flow_item *)
-			&push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN];
+			&job[_queue_attr[i]->size];
 		upd_flow = (struct rte_flow_hw *)
 			   &items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS];
 		for (j = 0; j < _queue_attr[i]->size; j++) {
-			job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
-			job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
-			job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN];
 			job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
 			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];
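The lifetime argument the commit message makes can be summarized outside of
the driver. The sketch below is a minimal, self-contained C illustration of
the same pattern, not mlx5 code: only the member names mhdr_cmd, encap_data,
and ipv6_push_data mirror mlx5_flow_hw_action_params, while the sizes, the
other struct names, and the enqueue_rule() stub are hypothetical stand-ins
for MLX5_MHDR_MAX_CMD/MLX5_ENCAP_MAX_LEN/MLX5_PUSH_MAX_LEN and
mlx5dr_rule_create().

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MHDR_MAX_CMD  16  /* placeholder for MLX5_MHDR_MAX_CMD */
#define ENCAP_MAX_LEN 132 /* placeholder for MLX5_ENCAP_MAX_LEN */
#define PUSH_MAX_LEN  64  /* placeholder for MLX5_PUSH_MAX_LEN */

struct modification_cmd { uint32_t data0, data1; };

/* Mirrors mlx5_flow_hw_action_params: one stack-allocated container
 * holding everything that used to hang off the job as heap pointers. */
struct action_params {
	struct modification_cmd mhdr_cmd[MHDR_MAX_CMD];
	uint8_t encap_data[ENCAP_MAX_LEN];
	uint8_t ipv6_push_data[PUSH_MAX_LEN];
};

/* Hypothetical stand-in for mlx5dr_rule_create(): the HWS layer consumes
 * the buffers during this call, so they need not outlive it. */
static int enqueue_rule(const struct action_params *ap)
{
	(void)ap;
	return 0;
}

/* Per-queue job descriptor after the patch: no action buffer pointers. */
struct q_job {
	void *user_data;
};

int create_flow(struct q_job *job, const uint8_t *encap, size_t encap_len)
{
	struct action_params ap; /* stack allocation, valid for this call only */

	(void)job;
	if (encap_len > sizeof(ap.encap_data))
		return -1;
	memset(&ap, 0, sizeof(ap));
	memcpy(ap.encap_data, encap, encap_len); /* construct action data */
	return enqueue_rule(&ap); /* ap may be discarded after it returns */
}

The design point is that enqueue_rule(), like mlx5dr_rule_create() in the
patch, consumes the buffers before returning, so a stack container sized by
compile-time maxima suffices and the per-queue buffer carving removed from
flow_hw_configure() in the last hunks is no longer needed.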