From patchwork Wed Feb 28 10:25:25 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 137423
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson <getelson@nvidia.com>
CC: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v2 3/4] net/mlx5: move multi-pattern actions management to table level
Date: Wed, 28 Feb 2024 12:25:25 +0200
Message-ID: <20240228102526.434717-4-getelson@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240228102526.434717-1-getelson@nvidia.com>
References: <20240202115611.288892-2-getelson@nvidia.com>
 <20240228102526.434717-1-getelson@nvidia.com>
List-Id: DPDK patches and discussions

Move the multi-pattern actions related structures and management code to the table level.
This refactoring is required by the upcoming table resize feature.

Signed-off-by: Gregory Etelson
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow.h    |  73 ++++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 225 +++++++++++++++-----------
 2 files changed, 173 insertions(+), 125 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a4d0ff7b13..9cc237c542 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1410,7 +1410,6 @@ struct mlx5_hw_encap_decap_action {
 	/* Is header_reformat action shared across flows in table. */
 	uint32_t shared:1;
 	uint32_t multi_pattern:1;
-	volatile uint32_t *multi_pattern_refcnt;
 	size_t data_size; /* Action metadata size. */
 	uint8_t data[]; /* Action data. */
 };
@@ -1433,7 +1432,6 @@ struct mlx5_hw_modify_header_action {
 	/* Is MODIFY_HEADER action shared across flows in table. */
 	uint32_t shared:1;
 	uint32_t multi_pattern:1;
-	volatile uint32_t *multi_pattern_refcnt;
 	/* Amount of modification commands stored in the precompiled buffer. */
 	uint32_t mhdr_cmds_num;
 	/* Precompiled modification commands. */
@@ -1487,6 +1485,76 @@ struct mlx5_flow_group {
 #define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2
 #define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32
 
+#define MLX5_MULTIPATTERN_ENCAP_NUM 5
+#define MLX5_MAX_TABLE_RESIZE_NUM 64
+
+struct mlx5_multi_pattern_segment {
+	uint32_t capacity;
+	uint32_t head_index;
+	struct mlx5dr_action *mhdr_action;
+	struct mlx5dr_action *reformat_action[MLX5_MULTIPATTERN_ENCAP_NUM];
+};
+
+struct mlx5_tbl_multi_pattern_ctx {
+	struct {
+		uint32_t elements_num;
+		struct mlx5dr_action_reformat_header reformat_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		/**
+		 * insert_header structure is larger than reformat_header.
+		 * Enclosing these structures with a union would cause a gap
+		 * between reformat_hdr array elements.
+		 * mlx5dr_action_create_reformat() expects adjacent array elements.
+		 */
+		struct mlx5dr_action_insert_header insert_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
+
+	struct {
+		uint32_t elements_num;
+		struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} mh;
+	struct mlx5_multi_pattern_segment segments[MLX5_MAX_TABLE_RESIZE_NUM];
+};
+
+static __rte_always_inline void
+mlx5_multi_pattern_activate(struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	mpctx->segments[0].head_index = 1;
+}
+
+static __rte_always_inline bool
+mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	return mpctx->segments[0].head_index == 1;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	int i;
+
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		if (!mpctx->segments[i].capacity)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx,
+				uint32_t flow_resource_ix)
+{
+	int i;
+
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		uint32_t limit = mpctx->segments[i].head_index +
+				 mpctx->segments[i].capacity;
+
+		if (flow_resource_ix < limit)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
 struct mlx5_flow_template_table_cfg {
 	struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */
 	bool external; /* True if created by flow API, false if table is internal to PMD. */
@@ -1507,6 +1575,7 @@ struct rte_flow_template_table {
 	uint8_t nb_item_templates; /* Item template number. */
 	uint8_t nb_action_templates; /* Action template number. */
 	uint32_t refcnt; /* Table reference counter. */
+	struct mlx5_tbl_multi_pattern_ctx mpctx;
 };
 
 #endif
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 5938d8b90c..38aed03970 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -78,41 +78,14 @@ struct mlx5_indlst_legacy {
 #define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
 	(((const struct encap_type *)(ptr))->definition)
 
-struct mlx5_multi_pattern_ctx {
-	union {
-		struct mlx5dr_action_reformat_header reformat_hdr;
-		struct mlx5dr_action_mh_pattern mh_pattern;
-	};
-	union {
-		/* action template auxiliary structures for object destruction */
-		struct mlx5_hw_encap_decap_action *encap;
-		struct mlx5_hw_modify_header_action *mhdr;
-	};
-	/* multi pattern action */
-	struct mlx5dr_rule_action *rule_action;
-};
-
-#define MLX5_MULTIPATTERN_ENCAP_NUM 4
-
-struct mlx5_tbl_multi_pattern_ctx {
-	struct {
-		uint32_t elements_num;
-		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
-
-	struct {
-		uint32_t elements_num;
-		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-	} mh;
-};
-
-#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},}
-
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
 			       struct rte_flow_template_table *tbl,
-			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct mlx5_multi_pattern_segment *segment,
+			       uint32_t bulk_size,
 			       struct rte_flow_error *error);
+static void
+mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment);
 
 static __rte_always_inline int
 mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type)
@@ -577,28 +550,14 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 static void
 flow_hw_template_destroy_reformat_action(struct mlx5_hw_encap_decap_action *encap_decap)
 {
-	if (encap_decap->multi_pattern) {
-		uint32_t refcnt = __atomic_sub_fetch(encap_decap->multi_pattern_refcnt,
-						     1, __ATOMIC_RELAXED);
-		if (refcnt)
-			return;
-		mlx5_free((void *)(uintptr_t)encap_decap->multi_pattern_refcnt);
-	}
-	if (encap_decap->action)
+	if (encap_decap->action && !encap_decap->multi_pattern)
 		mlx5dr_action_destroy(encap_decap->action);
 }
 
 static void
 flow_hw_template_destroy_mhdr_action(struct mlx5_hw_modify_header_action *mhdr)
 {
-	if (mhdr->multi_pattern) {
-		uint32_t refcnt = __atomic_sub_fetch(mhdr->multi_pattern_refcnt,
-						     1, __ATOMIC_RELAXED);
-		if (refcnt)
-			return;
-		mlx5_free((void *)(uintptr_t)mhdr->multi_pattern_refcnt);
-	}
-	if (mhdr->action)
+	if (mhdr->action && !mhdr->multi_pattern)
 		mlx5dr_action_destroy(mhdr->action);
 }
 
@@ -1924,21 +1883,22 @@ mlx5_tbl_translate_reformat(struct mlx5_priv *priv,
 		acts->encap_decap->shared = true;
 	} else {
 		uint32_t ix;
-		typeof(mp_ctx->reformat[0]) *reformat_ctx = mp_ctx->reformat +
-							    mp_reformat_ix;
+		typeof(mp_ctx->reformat[0]) *reformat = mp_ctx->reformat +
+							mp_reformat_ix;
 
-		ix = reformat_ctx->elements_num++;
-		reformat_ctx->ctx[ix].reformat_hdr = hdr;
-		reformat_ctx->ctx[ix].rule_action = &acts->rule_acts[at->reformat_off];
-		reformat_ctx->ctx[ix].encap = acts->encap_decap;
+		ix = reformat->elements_num++;
+		reformat->reformat_hdr[ix] = hdr;
 		acts->rule_acts[at->reformat_off].reformat.hdr_idx = ix;
 		acts->encap_decap_pos = at->reformat_off;
+		acts->encap_decap->multi_pattern = 1;
 		acts->encap_decap->data_size = data_size;
+		acts->encap_decap->action_type = refmt_type;
 		ret = __flow_hw_act_data_encap_append
 			(priv, acts, (at->actions + reformat_src)->type,
 			 reformat_src, at->reformat_off, data_size);
 		if (ret)
 			return -rte_errno;
+		mlx5_multi_pattern_activate(mp_ctx);
 	}
 	return 0;
 }
@@ -1987,12 +1947,11 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
 	} else {
 		typeof(mp_ctx->mh) *mh = &mp_ctx->mh;
 		uint32_t idx = mh->elements_num;
-		struct mlx5_multi_pattern_ctx *mh_ctx = mh->ctx + mh->elements_num++;
 
-		mh_ctx->mh_pattern = pattern;
-		mh_ctx->mhdr = acts->mhdr;
-		mh_ctx->rule_action = &acts->rule_acts[mhdr_ix];
+		mh->pattern[mh->elements_num++] = pattern;
+		acts->mhdr->multi_pattern = 1;
 		acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx;
+		mlx5_multi_pattern_activate(mp_ctx);
 	}
 	return 0;
 }
@@ -2552,16 +2511,17 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint32_t i;
-	struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
 
 	for (i = 0; i < tbl->nb_action_templates; i++) {
 		if (__flow_hw_actions_translate(dev, &tbl->cfg,
 						&tbl->ats[i].acts,
 						tbl->ats[i].action_template,
-						&mpat, error))
+						&tbl->mpctx, error))
 			goto err;
 	}
-	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
+	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &tbl->mpctx.segments[0],
+					     rte_log2_u32(tbl->cfg.attr.nb_flows),
+					     error);
 	if (ret)
 		goto err;
 	return 0;
@@ -2944,6 +2904,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t age_idx = 0;
 	struct mlx5_aso_mtr *aso_mtr;
+	struct mlx5_multi_pattern_segment *mp_segment;
 
 	rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num);
 	attr.group = table->grp->group_id;
@@ -3074,6 +3035,10 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
+			if (!mp_segment || !mp_segment->mhdr_action)
+				return -1;
+			rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action;
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
 				ret = flow_hw_set_vlan_vid_construct(dev, job,
 								     act_data,
@@ -3225,9 +3190,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 						   age_idx);
 	}
 	if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
-		rule_acts[hw_acts->encap_decap_pos].reformat.offset =
-				job->flow->res_idx - 1;
-		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
+		int ix = mlx5_multi_pattern_reformat_to_index(hw_acts->encap_decap->action_type);
+		struct mlx5dr_rule_action *ra = &rule_acts[hw_acts->encap_decap_pos];
+
+		if (ix < 0)
+			return -1;
+		mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
+		if (!mp_segment || !mp_segment->reformat_action[ix])
+			return -1;
+		ra->action = mp_segment->reformat_action[ix];
+		ra->reformat.offset = job->flow->res_idx - 1;
+		ra->reformat.data = buf;
 	}
 	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
 		rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
@@ -4133,86 +4106,65 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev,
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
 			       struct rte_flow_template_table *tbl,
-			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct mlx5_multi_pattern_segment *segment,
+			       uint32_t bulk_size,
 			       struct rte_flow_error *error)
 {
+	int ret = 0;
 	uint32_t i;
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_tbl_multi_pattern_ctx *mpctx = &tbl->mpctx;
 	const struct rte_flow_template_table_attr *table_attr = &tbl->cfg.attr;
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
 	uint32_t flags = mlx5_hw_act_flag[!!attr->group][type];
-	struct mlx5dr_action *dr_action;
-	uint32_t bulk_size = rte_log2_u32(table_attr->nb_flows);
+	struct mlx5dr_action *dr_action = NULL;
 
 	for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
-		uint32_t j;
-		uint32_t *reformat_refcnt;
-		typeof(mpat->reformat[0]) *reformat = mpat->reformat + i;
-		struct mlx5dr_action_reformat_header hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		typeof(mpctx->reformat[0]) *reformat = mpctx->reformat + i;
 		enum mlx5dr_action_type reformat_type =
 			mlx5_multi_pattern_reformat_index_to_type(i);
 
 		if (!reformat->elements_num)
 			continue;
-		for (j = 0; j < reformat->elements_num; j++)
-			hdr[j] = reformat->ctx[j].reformat_hdr;
-		reformat_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), 0,
-					      rte_socket_id());
-		if (!reformat_refcnt)
-			return rte_flow_error_set(error, ENOMEM,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL, "failed to allocate multi-pattern encap counter");
-		*reformat_refcnt = reformat->elements_num;
-		dr_action = mlx5dr_action_create_reformat
-			(priv->dr_ctx, reformat_type, reformat->elements_num, hdr,
-			 bulk_size, flags);
+		dr_action = reformat_type == MLX5DR_ACTION_TYP_INSERT_HEADER ?
+			mlx5dr_action_create_insert_header
+				(priv->dr_ctx, reformat->elements_num,
+				 reformat->insert_hdr, bulk_size, flags) :
+			mlx5dr_action_create_reformat
+				(priv->dr_ctx, reformat_type, reformat->elements_num,
+				 reformat->reformat_hdr, bulk_size, flags);
 		if (!dr_action) {
-			mlx5_free(reformat_refcnt);
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL,
-						  "failed to create multi-pattern encap action");
-		}
-		for (j = 0; j < reformat->elements_num; j++) {
-			reformat->ctx[j].rule_action->action = dr_action;
-			reformat->ctx[j].encap->action = dr_action;
-			reformat->ctx[j].encap->multi_pattern = 1;
-			reformat->ctx[j].encap->multi_pattern_refcnt = reformat_refcnt;
+			ret = rte_flow_error_set(error, rte_errno,
+						 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						 NULL,
+						 "failed to create multi-pattern encap action");
+			goto error;
 		}
+		segment->reformat_action[i] = dr_action;
 	}
-	if (mpat->mh.elements_num) {
-		typeof(mpat->mh) *mh = &mpat->mh;
-		struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-		uint32_t *mh_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t),
-						  0, rte_socket_id());
-
-		if (!mh_refcnt)
-			return rte_flow_error_set(error, ENOMEM,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL, "failed to allocate modify header counter");
-		*mh_refcnt = mpat->mh.elements_num;
-		for (i = 0; i < mpat->mh.elements_num; i++)
-			pattern[i] = mh->ctx[i].mh_pattern;
+	if (mpctx->mh.elements_num) {
+		typeof(mpctx->mh) *mh = &mpctx->mh;
 		dr_action = mlx5dr_action_create_modify_header
-			(priv->dr_ctx, mpat->mh.elements_num, pattern,
+			(priv->dr_ctx, mpctx->mh.elements_num, mh->pattern,
 			 bulk_size, flags);
 		if (!dr_action) {
-			mlx5_free(mh_refcnt);
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL,
-						  "failed to create multi-pattern header modify action");
-		}
-		for (i = 0; i < mpat->mh.elements_num; i++) {
-			mh->ctx[i].rule_action->action = dr_action;
-			mh->ctx[i].mhdr->action = dr_action;
-			mh->ctx[i].mhdr->multi_pattern = 1;
-			mh->ctx[i].mhdr->multi_pattern_refcnt = mh_refcnt;
+			ret = rte_flow_error_set(error, rte_errno,
+						 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						 NULL, "failed to create multi-pattern header modify action");
+			goto error;
 		}
+		segment->mhdr_action = dr_action;
+	}
+	if (dr_action) {
+		segment->capacity = RTE_BIT32(bulk_size);
+		if (segment != &mpctx->segments[MLX5_MAX_TABLE_RESIZE_NUM - 1])
+			segment[1].head_index = segment->head_index +
+						segment->capacity;
 	}
 	return 0;
+error:
+	mlx5_destroy_multi_pattern_segment(segment);
+	return ret;
 }
@@ -4225,7 +4177,6 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint8_t i;
-	struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
 
 	for (i = 0; i < nb_action_templates; i++) {
 		uint32_t refcnt = __atomic_add_fetch(&action_templates[i]->refcnt, 1,
@@ -4246,16 +4197,21 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev,
 		ret = __flow_hw_actions_translate(dev, &tbl->cfg,
 						  &tbl->ats[i].acts,
 						  action_templates[i],
-						  &mpat, error);
+						  &tbl->mpctx, error);
 		if (ret) {
 			i++;
 			goto at_error;
 		}
 	}
 	tbl->nb_action_templates = nb_action_templates;
-	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
-	if (ret)
-		goto at_error;
+	if (mlx5_is_multi_pattern_active(&tbl->mpctx)) {
+		ret = mlx5_tbl_multi_pattern_process(dev, tbl,
+						     &tbl->mpctx.segments[0],
+						     rte_log2_u32(tbl->cfg.attr.nb_flows),
+						     error);
+		if (ret)
+			goto at_error;
+	}
 	return 0;
 at_error:
@@ -4624,6 +4580,28 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 					action_templates, nb_action_templates,
 					error);
 }
 
+static void
+mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment)
+{
+	int i;
+
+	if (segment->mhdr_action)
+		mlx5dr_action_destroy(segment->mhdr_action);
+	for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
+		if (segment->reformat_action[i])
+			mlx5dr_action_destroy(segment->reformat_action[i]);
+	}
+	segment->capacity = 0;
+}
+
+static void
+flow_hw_destroy_table_multi_pattern_ctx(struct rte_flow_template_table *table)
+{
+	int sx;
+
+	for (sx = 0; sx < MLX5_MAX_TABLE_RESIZE_NUM; sx++)
+		mlx5_destroy_multi_pattern_segment(table->mpctx.segments + sx);
+}
+
 /**
  * Destroy flow table.
  *
@@ -4669,6 +4647,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 		__atomic_fetch_sub(&table->ats[i].action_template->refcnt,
 				   1, __ATOMIC_RELAXED);
 	}
+	flow_hw_destroy_table_multi_pattern_ctx(table);
 	mlx5dr_matcher_destroy(table->matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
 	mlx5_ipool_destroy(table->resource);