From patchwork Thu Feb 10 16:29:22 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107294
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 09/13] net/mlx5: add flow jump action
Date: Thu, 10 Feb 2022 18:29:22 +0200
Message-ID: <20220210162926.20436-10-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

The jump action connects different levels of flow tables to form a
complete data flow.

A new action construct data struct is also added in this commit to help
handle the dynamic actions.

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5.h         |   1 +
 drivers/net/mlx5/mlx5_flow.h    |  25 ++-
 drivers/net/mlx5/mlx5_flow_hw.c | 270 +++++++++++++++++++++++++++++---
 3 files changed, 275 insertions(+), 21 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index ec4eb7ee94..0bc9897101 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1525,6 +1525,7 @@ struct mlx5_priv {
 	/* HW steering global drop action. */
 	struct mlx5dr_action *hw_drop[MLX5_HW_ACTION_FLAG_MAX]
 				     [MLX5DR_TABLE_TYPE_MAX];
+	struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. */
 };
 
 #define PORT_ID(priv) ((priv)->dev_data->port_id)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 40eb8d79aa..a1ab9173d9 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1018,10 +1018,25 @@ struct rte_flow {
 /* HWS flow struct. */
 struct rte_flow_hw {
 	uint32_t idx; /* Flow index from indexed pool. */
+	uint32_t fate_type; /* Fate action type. */
+	union {
+		/* Jump action. */
+		struct mlx5_hw_jump_action *jump;
+	};
 	struct rte_flow_template_table *table; /* The table flow allcated from. */
 	struct mlx5dr_rule rule; /* HWS layer data struct. */
 } __rte_packed;
 
+/* rte flow action translate to DR action struct. */
+struct mlx5_action_construct_data {
+	LIST_ENTRY(mlx5_action_construct_data) next;
+	/* Ensure the action types are matched. */
+	int type;
+	uint32_t idx;  /* Data index. */
+	uint16_t action_src; /* rte_flow_action src offset. */
+	uint16_t action_dst; /* mlx5dr_rule_action dst offset. */
+};
+
 /* Flow item template struct. */
 struct rte_flow_pattern_template {
 	LIST_ENTRY(rte_flow_pattern_template) next;
@@ -1054,9 +1069,17 @@ struct mlx5_hw_jump_action {
 	struct mlx5dr_action *hws_action;
 };
 
+/* The maximum number of actions supported in the flow. */
+#define MLX5_HW_MAX_ACTS 16
+
 /* DR action set struct. */
 struct mlx5_hw_actions {
-	struct mlx5dr_action *drop; /* Drop action. */
+	/* Dynamic action list. */
+	LIST_HEAD(act_list, mlx5_action_construct_data) act_list;
+	struct mlx5_hw_jump_action *jump; /* Jump action. */
+	uint32_t acts_num:4; /* Total action number. */
+	/* Translated DR action array from action template. */
+	struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS];
 };
 
 /* mlx5 action template struct. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index dcf72ab89f..a825766245 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -30,18 +30,158 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
 	},
 };
 
+/**
+ * Register destination table DR jump action.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the flow attributes.
+ * @param[in] dest_group
+ *   The destination group ID.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Jump action on success, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_hw_jump_action *
+flow_hw_jump_action_register(struct rte_eth_dev *dev,
+			     const struct rte_flow_attr *attr,
+			     uint32_t dest_group,
+			     struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_attr jattr = *attr;
+	struct mlx5_flow_group *grp;
+	struct mlx5_flow_cb_ctx ctx = {
+		.dev = dev,
+		.error = error,
+		.data = &jattr,
+	};
+	struct mlx5_list_entry *ge;
+
+	jattr.group = dest_group;
+	ge = mlx5_hlist_register(priv->sh->flow_tbls, dest_group, &ctx);
+	if (!ge)
+		return NULL;
+	grp = container_of(ge, struct mlx5_flow_group, entry);
+	return &grp->jump;
+}
+
+/**
+ * Release jump action.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] jump
+ *   Pointer to the jump action.
+ */
+static void
+flow_hw_jump_release(struct rte_eth_dev *dev, struct mlx5_hw_jump_action *jump)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_group *grp;
+
+	grp = container_of
+	      (jump, struct mlx5_flow_group, jump);
+	mlx5_hlist_unregister(priv->sh->flow_tbls, &grp->entry);
+}
+
 /**
  * Destroy DR actions created by action template.
  *
  * For DR actions created during table creation's action translate.
  * Need to destroy the DR action when destroying the table.
  *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
  * @param[in] acts
  *   Pointer to the template HW steering DR actions.
  */
 static void
-__flow_hw_action_template_destroy(struct mlx5_hw_actions *acts __rte_unused)
+__flow_hw_action_template_destroy(struct rte_eth_dev *dev,
+				  struct mlx5_hw_actions *acts)
 {
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (acts->jump) {
+		struct mlx5_flow_group *grp;
+
+		grp = container_of
+		      (acts->jump, struct mlx5_flow_group, jump);
+		mlx5_hlist_unregister(priv->sh->flow_tbls, &grp->entry);
+		acts->jump = NULL;
+	}
+}
+
+/**
+ * Allocate the dynamic action construct data.
+ *
+ * @param[in] priv
+ *   Pointer to the port private data structure.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ *
+ * @return
+ *   Action construct data on success, NULL otherwise and rte_errno is set.
+ */
+static __rte_always_inline struct mlx5_action_construct_data *
+__flow_hw_act_data_alloc(struct mlx5_priv *priv,
+			 enum rte_flow_action_type type,
+			 uint16_t action_src,
+			 uint16_t action_dst)
+{
+	struct mlx5_action_construct_data *act_data;
+	uint32_t idx = 0;
+
+	act_data = mlx5_ipool_zmalloc(priv->acts_ipool, &idx);
+	if (!act_data)
+		return NULL;
+	act_data->idx = idx;
+	act_data->type = type;
+	act_data->action_src = action_src;
+	act_data->action_dst = action_dst;
+	return act_data;
+}
+
+/**
+ * Append dynamic action to the dynamic action list.
+ *
+ * @param[in] priv
+ *   Pointer to the port private data structure.
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static __rte_always_inline int
+__flow_hw_act_data_general_append(struct mlx5_priv *priv,
+				  struct mlx5_hw_actions *acts,
+				  enum rte_flow_action_type type,
+				  uint16_t action_src,
+				  uint16_t action_dst)
+{
+	struct mlx5_action_construct_data *act_data;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return -1;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return 0;
 }
 
 /**
@@ -74,14 +214,16 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 			  const struct rte_flow_template_table_attr *table_attr,
 			  struct mlx5_hw_actions *acts,
 			  struct rte_flow_actions_template *at,
-			  struct rte_flow_error *error __rte_unused)
+			  struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	struct rte_flow_action *actions = at->actions;
+	struct rte_flow_action *action_start = actions;
 	struct rte_flow_action *masks = at->masks;
 	bool actions_end = false;
-	uint32_t type;
+	uint32_t type, i;
+	int err;
 
 	if (attr->transfer)
 		type = MLX5DR_TABLE_TYPE_FDB;
@@ -89,14 +231,34 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 		type = MLX5DR_TABLE_TYPE_NIC_TX;
 	else
 		type = MLX5DR_TABLE_TYPE_NIC_RX;
-	for (; !actions_end; actions++, masks++) {
+	for (i = 0; !actions_end; actions++, masks++) {
 		switch (actions->type) {
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			break;
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
 		case RTE_FLOW_ACTION_TYPE_DROP:
-			acts->drop = priv->hw_drop[!!attr->group][type];
+			acts->rule_acts[i++].action =
+				priv->hw_drop[!!attr->group][type];
+			break;
+		case RTE_FLOW_ACTION_TYPE_JUMP:
+			if (masks->conf) {
+				uint32_t jump_group =
+					((const struct rte_flow_action_jump *)
+					actions->conf)->group;
+				acts->jump = flow_hw_jump_action_register
+						(dev, attr, jump_group, error);
+				if (!acts->jump)
+					goto err;
+				acts->rule_acts[i].action = (!!attr->group) ?
+						acts->jump->hws_action :
+						acts->jump->root_action;
+			} else if (__flow_hw_act_data_general_append
+					(priv, acts, actions->type,
+					 actions - action_start, i)) {
+				goto err;
+			}
+			i++;
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
@@ -105,7 +267,14 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 			break;
 		}
 	}
+	acts->acts_num = i;
 	return 0;
+err:
+	err = rte_errno;
+	__flow_hw_action_template_destroy(dev, acts);
+	return rte_flow_error_set(error, err,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "fail to create rte table");
 }
 
 /**
@@ -114,6 +283,10 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
  * For action template contains dynamic actions, these actions need to
  * be updated according to the rte_flow action during flow creation.
  *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] job
+ *   Pointer to job descriptor.
  * @param[in] hw_acts
  *   Pointer to translated actions from template.
  * @param[in] actions
@@ -127,31 +300,63 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
  *   0 on success, negative value otherwise and rte_errno is set.
  */
 static __rte_always_inline int
-flow_hw_actions_construct(struct mlx5_hw_actions *hw_acts,
+flow_hw_actions_construct(struct rte_eth_dev *dev,
+			  struct mlx5_hw_q_job *job,
+			  struct mlx5_hw_actions *hw_acts,
 			  const struct rte_flow_action actions[],
 			  struct mlx5dr_rule_action *rule_acts,
 			  uint32_t *acts_num)
 {
-	bool actions_end = false;
-	uint32_t i;
+	struct rte_flow_template_table *table = job->flow->table;
+	struct mlx5_action_construct_data *act_data;
+	const struct rte_flow_action *action;
+	struct rte_flow_attr attr = {
+		.ingress = 1,
+	};
 
-	for (i = 0; !actions_end || (i >= MLX5_HW_MAX_ACTS); actions++) {
-		switch (actions->type) {
+	memcpy(rule_acts, hw_acts->rule_acts,
+	       sizeof(*rule_acts) * hw_acts->acts_num);
+	*acts_num = hw_acts->acts_num;
+	if (LIST_EMPTY(&hw_acts->act_list))
+		return 0;
+	attr.group = table->grp->group_id;
+	if (table->type == MLX5DR_TABLE_TYPE_FDB) {
+		attr.transfer = 1;
+		attr.ingress = 1;
+	} else if (table->type == MLX5DR_TABLE_TYPE_NIC_TX) {
+		attr.egress = 1;
+		attr.ingress = 0;
+	} else {
+		attr.ingress = 1;
+	}
+	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
+		uint32_t jump_group;
+		struct mlx5_hw_jump_action *jump;
+
+		action = &actions[act_data->action_src];
+		MLX5_ASSERT(action->type == RTE_FLOW_ACTION_TYPE_INDIRECT ||
+			    (int)action->type == act_data->type);
+		switch (action->type) {
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			break;
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
-		case RTE_FLOW_ACTION_TYPE_DROP:
-			rule_acts[i++].action = hw_acts->drop;
-			break;
-		case RTE_FLOW_ACTION_TYPE_END:
-			actions_end = true;
+		case RTE_FLOW_ACTION_TYPE_JUMP:
+			jump_group = ((const struct rte_flow_action_jump *)
+				      action->conf)->group;
+			jump = flow_hw_jump_action_register
+				(dev, &attr, jump_group, NULL);
+			if (!jump)
+				return -1;
+			rule_acts[act_data->action_dst].action =
+				(!!attr.group) ?
+				jump->hws_action : jump->root_action;
+			job->flow->jump = jump;
+			job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
 			break;
 		default:
 			break;
 		}
 	}
-	*acts_num = i;
 	return 0;
 }
 
@@ -230,7 +435,8 @@ flow_hw_q_flow_create(struct rte_eth_dev *dev,
 	rule_attr.user_data = job;
 	hw_acts = &table->ats[action_template_index].acts;
 	/* Construct the flow action array based on the input actions.*/
-	flow_hw_actions_construct(hw_acts, actions, rule_acts, &acts_num);
+	flow_hw_actions_construct(dev, job, hw_acts, actions,
+				  rule_acts, &acts_num);
 	ret = mlx5dr_rule_create(table->matcher,
 				 pattern_template_index, items,
 				 rule_acts, acts_num,
@@ -344,8 +550,11 @@ flow_hw_q_pull(struct rte_eth_dev *dev,
 		job = (struct mlx5_hw_q_job *)res[i].user_data;
 		/* Restore user data. */
 		res[i].user_data = job->user_data;
-		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY)
+		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
+			if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP)
+				flow_hw_jump_release(dev, job->flow->jump);
 			mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
+		}
 		priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job;
 	}
 	return ret;
@@ -616,6 +825,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 			rte_errno = EINVAL;
 			goto at_error;
 		}
+		LIST_INIT(&tbl->ats[i].acts.act_list);
 		err = flow_hw_actions_translate(dev, attr,
 						&tbl->ats[i].acts,
 						action_templates[i], error);
@@ -631,7 +841,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	return tbl;
 at_error:
 	while (i--) {
-		__flow_hw_action_template_destroy(&tbl->ats[i].acts);
+		__flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
 		__atomic_sub_fetch(&action_templates[i]->refcnt,
 				   1, __ATOMIC_RELAXED);
 	}
@@ -687,7 +897,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 		__atomic_sub_fetch(&table->its[i]->refcnt,
 				   1, __ATOMIC_RELAXED);
 	for (i = 0; i < table->nb_action_templates; i++) {
-		__flow_hw_action_template_destroy(&table->ats[i].acts);
+		__flow_hw_action_template_destroy(dev, &table->ats[i].acts);
 		__atomic_sub_fetch(&table->ats[i].action_template->refcnt,
 				   1, __ATOMIC_RELAXED);
 	}
@@ -1106,6 +1316,15 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	struct mlx5_hw_q *hw_q;
 	struct mlx5_hw_q_job *job = NULL;
 	uint32_t mem_size, i, j;
+	struct mlx5_indexed_pool_config cfg = {
+		.size = sizeof(struct rte_flow_hw),
+		.trunk_size = 4096,
+		.need_lock = 1,
+		.release_mem_en = !!priv->config.reclaim_mode,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
+		.type = "mlx5_hw_action_construct_data",
+	};
 
 	if (!port_attr || !nb_queue || !queue_attr) {
 		rte_errno = EINVAL;
@@ -1124,6 +1343,9 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		}
 		flow_hw_resource_release(dev);
 	}
+	priv->acts_ipool = mlx5_ipool_create(&cfg);
+	if (!priv->acts_ipool)
+		goto err;
 	/* Allocate the queue job descriptor LIFO. */
 	mem_size = sizeof(priv->hw_q[0]) * nb_queue;
 	for (i = 0; i < nb_queue; i++) {
@@ -1193,6 +1415,10 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		mlx5_free(priv->hw_q);
 		priv->hw_q = NULL;
 	}
+	if (priv->acts_ipool) {
+		mlx5_ipool_destroy(priv->acts_ipool);
+		priv->acts_ipool = NULL;
+	}
 	return rte_flow_error_set(error, rte_errno,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 				  "fail to configure port");
@@ -1234,6 +1460,10 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 				mlx5dr_action_destroy(priv->hw_drop[i][j]);
 		}
 	}
+	if (priv->acts_ipool) {
+		mlx5_ipool_destroy(priv->acts_ipool);
+		priv->acts_ipool = NULL;
+	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
 	claim_zero(mlx5dr_context_close(priv->dr_ctx));