From patchwork Sun Sep 24 09:41:24 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 131859
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson <getelson@nvidia.com>
To: dev@dpdk.org
Cc: Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
Subject: [PATCH v3] net/mlx5: reuse reformat and modify header actions in a table
Date: Sun, 24 Sep 2023 12:41:24 +0300
Message-ID: <20230924094124.752639-1-getelson@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230919050027.752483-1-getelson@nvidia.com>
References: <20230919050027.752483-1-getelson@nvidia.com>
MIME-Version: 1.0
If an application defines several action templates with non-shared
reformat or modify header actions and uses those templates to create a
table, HWS can share the reformat or modify header resources instead of
creating a separate resource for each action template.

The patch activates the HWS code path that shares reformat and modify
header resources within a table.

The patch also updates validation of the modify field and raw encap
template actions:
- modify field no longer accepts an empty action template mask.
- raw encap now validates the action template mask.

Signed-off-by: Gregory Etelson
Acked-by: Ori Kam
---
Depends-on: series-28881 ("net/mlx5/hws: add support for multi pattern")
---
v2: remove Depends-on: patch
v3: add Depends-on: series
---
 drivers/net/mlx5/mlx5_flow.h    |   8 +-
 drivers/net/mlx5/mlx5_flow_dv.c |   3 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 568 +++++++++++++++++++++++++-------
 3 files changed, 452 insertions(+), 127 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 3a97975d69..68fa6cf46d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1318,7 +1318,9 @@ struct mlx5_hw_jump_action {
 struct mlx5_hw_encap_decap_action {
     struct mlx5dr_action *action; /* Action object. */
     /* Is header_reformat action shared across flows in table. */
-    bool shared;
+    uint32_t shared:1;
+    uint32_t multi_pattern:1;
+    volatile uint32_t *multi_pattern_refcnt;
     size_t data_size; /* Action metadata size. */
     uint8_t data[]; /* Action data. */
 };
@@ -1332,7 +1334,9 @@ struct mlx5_hw_modify_header_action {
     /* Modify header action position in action rule table. */
     uint16_t pos;
     /* Is MODIFY_HEADER action shared across flows in table. */
-    bool shared;
+    uint32_t shared:1;
+    uint32_t multi_pattern:1;
+    volatile uint32_t *multi_pattern_refcnt;
     /* Amount of modification commands stored in the precompiled buffer. */
     uint32_t mhdr_cmds_num;
     /* Precompiled modification commands. */
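Both structures now point at one heap-allocated counter instead of each
owning its DR action outright. The counter is set to the number of
sharing templates when the table is built; each destroy path decrements
it and releases the DR action only on the last reference. A minimal
sketch of this lifetime pattern (hypothetical shared_action type,
GCC/Clang __atomic builtins assumed; not the driver code itself):

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for the driver structures above. */
    struct shared_action {
            volatile uint32_t *refcnt; /* one counter for all sharers */
            void *dr_action;           /* the single shared DR action */
    };

    /* Called once after all sharers are collected at table build. */
    static int
    share_among(struct shared_action *sharers[], uint32_t n, void *dr_action)
    {
            uint32_t i;
            uint32_t *refcnt = calloc(1, sizeof(*refcnt));

            if (!refcnt)
                    return -1;
            *refcnt = n; /* counter starts at the number of sharers */
            for (i = 0; i < n; i++) {
                    sharers[i]->refcnt = refcnt;
                    sharers[i]->dr_action = dr_action;
            }
            return 0;
    }

    /* Called per sharer on destroy; only the last one frees the action. */
    static void
    release(struct shared_action *s, void (*destroy)(void *))
    {
            if (__atomic_sub_fetch(s->refcnt, 1, __ATOMIC_RELAXED))
                    return; /* other sharers still hold the action */
            free((void *)(uintptr_t)s->refcnt);
            destroy(s->dr_action);
    }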
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 3f4325c5c8..d3e002ec41 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4573,7 +4573,8 @@ flow_dv_convert_encap_data(const struct rte_flow_item *items, uint8_t *buf,
                           (void *)items->type,
                           "items total size is too big"
                           " for encap action");
-        rte_memcpy((void *)&buf[temp_size], items->spec, len);
+        if (items->spec)
+            rte_memcpy(&buf[temp_size], items->spec, len);
         switch (items->type) {
         case RTE_FLOW_ITEM_TYPE_ETH:
             eth = (struct rte_ether_hdr *)&buf[temp_size];
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 83910c097d..04f1095c9c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -58,6 +58,95 @@
 #define MLX5_HW_VLAN_PUSH_VID_IDX 1
 #define MLX5_HW_VLAN_PUSH_PCP_IDX 2
 
+#define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
+(((const struct encap_type *)(ptr))->definition)
+
+struct mlx5_multi_pattern_ctx {
+    union {
+        struct mlx5dr_action_reformat_header reformat_hdr;
+        struct mlx5dr_action_mh_pattern mh_pattern;
+    };
+    union {
+        /* action template auxiliary structures for object destruction */
+        struct mlx5_hw_encap_decap_action *encap;
+        struct mlx5_hw_modify_header_action *mhdr;
+    };
+    /* multi pattern action */
+    struct mlx5dr_rule_action *rule_action;
+};
+
+#define MLX5_MULTIPATTERN_ENCAP_NUM 4
+
+struct mlx5_tbl_multi_pattern_ctx {
+    struct {
+        uint32_t elements_num;
+        struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+    } reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
+
+    struct {
+        uint32_t elements_num;
+        struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+    } mh;
+};
+
+#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},}
+
+static int
+mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
+                               struct rte_flow_template_table *tbl,
+                               struct mlx5_tbl_multi_pattern_ctx *mpat,
+                               struct rte_flow_error *error);
+
+static __rte_always_inline int
+mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type)
+{
+    switch (type) {
+    case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2:
+        return 0;
+    case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
+        return 1;
+    case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2:
+        return 2;
+    case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
+        return 3;
+    default:
+        break;
+    }
+    return -1;
+}
+
+static __rte_always_inline enum mlx5dr_action_type
+mlx5_multi_pattern_reformat_index_to_type(uint32_t ix)
+{
+    switch (ix) {
+    case 0:
+        return MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+    case 1:
+        return MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
+    case 2:
+        return MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2;
+    case 3:
+        return MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3;
+    default:
+        break;
+    }
+    return MLX5DR_ACTION_TYP_MAX;
+}
+
+static inline enum mlx5dr_table_type
+get_mlx5dr_table_type(const struct rte_flow_attr *attr)
+{
+    enum mlx5dr_table_type type;
+
+    if (attr->transfer)
+        type = MLX5DR_TABLE_TYPE_FDB;
+    else if (attr->egress)
+        type = MLX5DR_TABLE_TYPE_NIC_TX;
+    else
+        type = MLX5DR_TABLE_TYPE_NIC_RX;
+    return type;
+}
+
 static int flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev);
 static int flow_hw_translate_group(struct rte_eth_dev *dev,
                                    const struct mlx5_flow_template_table_cfg *cfg,
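The two reformat helpers above form a bijection between the four
tunnel reformat types and the bucket indexes of
mlx5_tbl_multi_pattern_ctx; any reformat type outside this set maps to
-1 and never enters a bucket. A self-contained sketch of the
round-trip property the code relies on (local stand-in enum, not the
mlx5dr one):

    #include <assert.h>

    /* Stand-ins for the four bucketed mlx5dr reformat types. */
    enum rfmt {
            TNL_L2_TO_L2,
            L2_TO_TNL_L2,
            TNL_L3_TO_L2,
            L2_TO_TNL_L3,
            RFMT_NUM
    };

    static int rfmt_to_index(enum rfmt t) { return t < RFMT_NUM ? (int)t : -1; }
    static enum rfmt index_to_rfmt(int ix) { return (enum rfmt)ix; }

    int main(void)
    {
            /* Index -> type -> index must round-trip so each bucket can
             * recreate the DR action with its original reformat type. */
            for (int ix = 0; ix < RFMT_NUM; ix++)
                    assert(rfmt_to_index(index_to_rfmt(ix)) == ix);
            return 0;
    }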
@@ -437,6 +526,34 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
     return 0;
 }
 
+static void
+flow_hw_template_destroy_reformat_action(struct mlx5_hw_encap_decap_action *encap_decap)
+{
+    if (encap_decap->multi_pattern) {
+        uint32_t refcnt = __atomic_sub_fetch(encap_decap->multi_pattern_refcnt,
+                                             1, __ATOMIC_RELAXED);
+
+        if (refcnt)
+            return;
+        mlx5_free((void *)(uintptr_t)encap_decap->multi_pattern_refcnt);
+    }
+    if (encap_decap->action)
+        mlx5dr_action_destroy(encap_decap->action);
+}
+
+static void
+flow_hw_template_destroy_mhdr_action(struct mlx5_hw_modify_header_action *mhdr)
+{
+    if (mhdr->multi_pattern) {
+        uint32_t refcnt = __atomic_sub_fetch(mhdr->multi_pattern_refcnt,
+                                             1, __ATOMIC_RELAXED);
+
+        if (refcnt)
+            return;
+        mlx5_free((void *)(uintptr_t)mhdr->multi_pattern_refcnt);
+    }
+    if (mhdr->action)
+        mlx5dr_action_destroy(mhdr->action);
+}
+
 /**
  * Destroy DR actions created by action template.
  *
@@ -478,14 +595,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev,
         acts->tir = NULL;
     }
     if (acts->encap_decap) {
-        if (acts->encap_decap->action)
-            mlx5dr_action_destroy(acts->encap_decap->action);
+        flow_hw_template_destroy_reformat_action(acts->encap_decap);
         mlx5_free(acts->encap_decap);
         acts->encap_decap = NULL;
     }
     if (acts->mhdr) {
-        if (acts->mhdr->action)
-            mlx5dr_action_destroy(acts->mhdr->action);
+        flow_hw_template_destroy_mhdr_action(acts->mhdr);
         mlx5_free(acts->mhdr);
         acts->mhdr = NULL;
     }
@@ -840,8 +955,6 @@ flow_hw_action_modify_field_is_shared(const struct rte_flow_action *action,
     if (v->src.field == RTE_FLOW_FIELD_VALUE) {
         uint32_t j;
 
-        if (m == NULL)
-            return false;
         for (j = 0; j < RTE_DIM(m->src.value); ++j) {
             /*
              * Immediate value is considered to be masked
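Dropping the m == NULL early return is safe only because a missing
modify_field mask is now rejected up front in
flow_hw_validate_action_modify_field() (see the validation hunk
further below). The sharing test itself then just inspects mask bytes;
a sketch of the underlying idea, assuming full per-byte masking is
what marks an immediate value as template-constant:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /*
     * Sketch: an immediate source value can live in a shared,
     * template-time action only if the mask fixes every byte; any
     * unmasked byte implies per-flow data, hence a non-shared action.
     */
    static bool
    imm_value_is_shared(const uint8_t *mask, size_t len)
    {
            size_t j;

            for (j = 0; j < len; j++)
                    if (mask[j] != UINT8_MAX)
                            return false;
            return true;
    }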
true; + } else { + uint32_t ix; + typeof(mp_ctx->reformat[0]) *reformat_ctx = mp_ctx->reformat + + mp_reformat_ix; + + ix = reformat_ctx->elements_num++; + reformat_ctx->ctx[ix].reformat_hdr = hdr; + reformat_ctx->ctx[ix].rule_action = &acts->rule_acts[at->reformat_off]; + reformat_ctx->ctx[ix].encap = acts->encap_decap; + acts->rule_acts[at->reformat_off].reformat.hdr_idx = ix; + acts->encap_decap_pos = at->reformat_off; + acts->encap_decap->data_size = data_size; + ret = __flow_hw_act_data_encap_append + (priv, acts, (at->actions + reformat_src)->type, + reformat_src, at->reformat_off, data_size); + if (ret) + return -rte_errno; + } + return 0; +} + +static int +mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev, + const struct mlx5_flow_template_table_cfg *cfg, + struct mlx5_hw_actions *acts, + struct mlx5_tbl_multi_pattern_ctx *mp_ctx, + struct mlx5_hw_modify_header_action *mhdr, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + const struct rte_flow_attr *attr = &table_attr->flow_attr; + enum mlx5dr_table_type tbl_type = get_mlx5dr_table_type(attr); + uint16_t mhdr_ix = mhdr->pos; + struct mlx5dr_action_mh_pattern pattern = { + .sz = sizeof(struct mlx5_modification_cmd) * mhdr->mhdr_cmds_num + }; + + if (flow_hw_validate_compiled_modify_field(dev, cfg, mhdr, error)) { + __flow_hw_action_template_destroy(dev, acts); + return -rte_errno; + } + acts->mhdr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*acts->mhdr), + 0, SOCKET_ID_ANY); + if (!acts->mhdr) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "translate modify_header: no memory for modify header context"); + rte_memcpy(acts->mhdr, mhdr, sizeof(*mhdr)); + pattern.data = (__be64 *)acts->mhdr->mhdr_cmds; + if (mhdr->shared) { + uint32_t flags = mlx5_hw_act_flag[!!attr->group][tbl_type] | + MLX5DR_ACTION_FLAG_SHARED; + + acts->mhdr->action = mlx5dr_action_create_modify_header + (priv->dr_ctx, 1, &pattern, 0, + flags); + if (!acts->mhdr->action) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "translate modify_header: failed to create DR action"); + acts->rule_acts[mhdr_ix].action = acts->mhdr->action; + } else { + typeof(mp_ctx->mh) *mh = &mp_ctx->mh; + uint32_t idx = mh->elements_num; + struct mlx5_multi_pattern_ctx *mh_ctx = mh->ctx + mh->elements_num++; + + mh_ctx->mh_pattern = pattern; + mh_ctx->mhdr = acts->mhdr; + mh_ctx->rule_action = &acts->rule_acts[mhdr_ix]; + acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx; + } + return 0; +} + /** * Translate rte_flow actions to DR action. 
@@ -1415,6 +1659,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
                             const struct mlx5_flow_template_table_cfg *cfg,
                             struct mlx5_hw_actions *acts,
                             struct rte_flow_actions_template *at,
+                            struct mlx5_tbl_multi_pattern_ctx *mp_ctx,
                             struct rte_flow_error *error)
 {
     struct mlx5_priv *priv = dev->data->dev_private;
@@ -1437,7 +1682,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
     uint16_t action_pos;
     uint16_t jump_pos;
     uint32_t ct_idx;
-    int err;
+    int ret, err;
     uint32_t target_grp = 0;
 
     flow_hw_modify_field_init(&mhdr, at);
@@ -1571,32 +1816,26 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
             break;
         case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
             MLX5_ASSERT(!reformat_used);
-            enc_item = ((const struct rte_flow_action_vxlan_encap *)
-                       actions->conf)->definition;
+            enc_item = MLX5_CONST_ENCAP_ITEM(rte_flow_action_vxlan_encap,
+                                             actions->conf);
             if (masks->conf)
-                enc_item_m = ((const struct rte_flow_action_vxlan_encap *)
-                             masks->conf)->definition;
+                enc_item_m = MLX5_CONST_ENCAP_ITEM(rte_flow_action_vxlan_encap,
+                                                   masks->conf);
             reformat_used = true;
             reformat_src = actions - action_start;
             refmt_type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
             break;
         case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
             MLX5_ASSERT(!reformat_used);
-            enc_item = ((const struct rte_flow_action_nvgre_encap *)
-                       actions->conf)->definition;
+            enc_item = MLX5_CONST_ENCAP_ITEM(rte_flow_action_nvgre_encap,
+                                             actions->conf);
             if (masks->conf)
-                enc_item_m = ((const struct rte_flow_action_nvgre_encap *)
-                             masks->conf)->definition;
+                enc_item_m = MLX5_CONST_ENCAP_ITEM(rte_flow_action_nvgre_encap,
+                                                   masks->conf);
             reformat_used = true;
             reformat_src = actions - action_start;
             refmt_type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
             break;
-        case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
-        case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
-            MLX5_ASSERT(!reformat_used);
-            reformat_used = true;
-            refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
-            break;
         case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
             raw_encap_data =
                 (const struct rte_flow_action_raw_encap *)
@@ -1620,6 +1859,12 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
             }
             reformat_src = actions - action_start;
             break;
+        case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+        case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
+            MLX5_ASSERT(!reformat_used);
+            reformat_used = true;
+            refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+            break;
         case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
             reformat_used = true;
             refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
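MLX5_CONST_ENCAP_ITEM() is purely syntactic: it casts the opaque conf
pointer to the named encap struct and returns its definition item
array. For one concrete type the expansion is equivalent to this
helper (sketch only):

    #include <rte_flow.h>

    /* What MLX5_CONST_ENCAP_ITEM(rte_flow_action_vxlan_encap, conf)
     * expands to, written out as a function. */
    static const struct rte_flow_item *
    vxlan_encap_definition(const void *conf)
    {
            return ((const struct rte_flow_action_vxlan_encap *)conf)->definition;
    }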
@@ -1770,83 +2015,20 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
         }
     }
     if (mhdr.pos != UINT16_MAX) {
-        struct mlx5dr_action_mh_pattern pattern;
-        uint32_t flags;
-        uint32_t bulk_size;
-        size_t mhdr_len;
-
-        if (flow_hw_validate_compiled_modify_field(dev, cfg, &mhdr, error)) {
-            __flow_hw_action_template_destroy(dev, acts);
-            return -rte_errno;
-        }
-        acts->mhdr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*acts->mhdr),
-                                 0, SOCKET_ID_ANY);
-        if (!acts->mhdr)
-            goto err;
-        rte_memcpy(acts->mhdr, &mhdr, sizeof(*acts->mhdr));
-        mhdr_len = sizeof(struct mlx5_modification_cmd) * acts->mhdr->mhdr_cmds_num;
-        flags = mlx5_hw_act_flag[!!attr->group][type];
-        if (acts->mhdr->shared) {
-            flags |= MLX5DR_ACTION_FLAG_SHARED;
-            bulk_size = 0;
-        } else {
-            bulk_size = rte_log2_u32(table_attr->nb_flows);
-        }
-        pattern.data = (__be64 *)acts->mhdr->mhdr_cmds;
-        pattern.sz = mhdr_len;
-        acts->mhdr->action = mlx5dr_action_create_modify_header
-                (priv->dr_ctx, 1, &pattern,
-                 bulk_size, flags);
-        if (!acts->mhdr->action)
+        ret = mlx5_tbl_translate_modify_header(dev, cfg, acts, mp_ctx,
+                                               &mhdr, error);
+        if (ret)
             goto err;
-        acts->rule_acts[acts->mhdr->pos].action = acts->mhdr->action;
     }
     if (reformat_used) {
-        struct mlx5dr_action_reformat_header hdr;
-        uint8_t buf[MLX5_ENCAP_MAX_LEN];
-        bool shared_rfmt = true;
-
-        MLX5_ASSERT(at->reformat_off != UINT16_MAX);
-        if (enc_item) {
-            MLX5_ASSERT(!encap_data);
-            if (flow_dv_convert_encap_data(enc_item, buf, &data_size, error))
-                goto err;
-            encap_data = buf;
-            if (!enc_item_m)
-                shared_rfmt = false;
-        } else if (encap_data && !encap_data_m) {
-            shared_rfmt = false;
-        }
-        acts->encap_decap = mlx5_malloc(MLX5_MEM_ZERO,
-                                        sizeof(*acts->encap_decap) + data_size,
-                                        0, SOCKET_ID_ANY);
-        if (!acts->encap_decap)
-            goto err;
-        if (data_size) {
-            acts->encap_decap->data_size = data_size;
-            memcpy(acts->encap_decap->data, encap_data, data_size);
-        }
-
-        hdr.sz = data_size;
-        hdr.data = encap_data;
-        acts->encap_decap->action = mlx5dr_action_create_reformat
-                (priv->dr_ctx, refmt_type,
-                 1, &hdr,
-                 shared_rfmt ? 0 : rte_log2_u32(table_attr->nb_flows),
-                 mlx5_hw_act_flag[!!attr->group][type] |
-                 (shared_rfmt ? MLX5DR_ACTION_FLAG_SHARED : 0));
-        if (!acts->encap_decap->action)
-            goto err;
-        acts->rule_acts[at->reformat_off].action = acts->encap_decap->action;
-        acts->rule_acts[at->reformat_off].reformat.data = acts->encap_decap->data;
-        if (shared_rfmt)
-            acts->rule_acts[at->reformat_off].reformat.offset = 0;
-        else if (__flow_hw_act_data_encap_append(priv, acts,
-                 (action_start + reformat_src)->type,
-                 reformat_src, at->reformat_off, data_size))
+        ret = mlx5_tbl_translate_reformat(priv, table_attr, acts, at,
+                                          enc_item, enc_item_m,
+                                          encap_data, encap_data_m,
+                                          mp_ctx, data_size,
+                                          reformat_src,
+                                          refmt_type, error);
+        if (ret)
             goto err;
-        acts->encap_decap->shared = shared_rfmt;
-        acts->encap_decap_pos = at->reformat_off;
     }
     return 0;
 err:
@@ -1875,15 +2057,20 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
                           struct rte_flow_template_table *tbl,
                           struct rte_flow_error *error)
 {
+    int ret;
     uint32_t i;
+    struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
 
     for (i = 0; i < tbl->nb_action_templates; i++) {
         if (__flow_hw_actions_translate(dev, &tbl->cfg,
                                         &tbl->ats[i].acts,
                                         tbl->ats[i].action_template,
-                                        error))
+                                        &mpat, error))
             goto err;
     }
+    ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
+    if (ret)
+        goto err;
     return 0;
 err:
     while (i--)
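Translation is now two-phase: the per-template pass fills mpat, and
mlx5_tbl_multi_pattern_process() afterwards converts each non-empty
bucket into a single DR action shared by every template in the table.
The control flow reduces to this shape (stub types and helpers, sketch
only):

    #include <stdint.h>

    struct rte_flow_error;                         /* opaque in this sketch */
    struct accumulator { int buckets; };           /* ~ mlx5_tbl_multi_pattern_ctx */
    struct table { uint32_t nb_action_templates; };

    /* Stubs standing in for the per-template and table-level steps. */
    static int translate_template(struct table *t, uint32_t i,
                                  struct accumulator *a, struct rte_flow_error *e)
    { (void)t; (void)i; (void)a; (void)e; return 0; }
    static int process_buckets(struct table *t, struct accumulator *a,
                               struct rte_flow_error *e)
    { (void)t; (void)a; (void)e; return 0; }

    static int
    translate_table(struct table *tbl, struct rte_flow_error *error)
    {
            struct accumulator mpat = {0}; /* shared across all templates */
            uint32_t i;

            for (i = 0; i < tbl->nb_action_templates; i++)
                    if (translate_template(tbl, i, &mpat, error))
                            return -1;
            /* one bulk DR action per non-empty bucket */
            return process_buckets(tbl, &mpat, error);
    }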
@@ -3393,6 +3580,143 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev,
     return 0;
 }
 
+static int
+mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
+                               struct rte_flow_template_table *tbl,
+                               struct mlx5_tbl_multi_pattern_ctx *mpat,
+                               struct rte_flow_error *error)
+{
+    uint32_t i;
+    struct mlx5_priv *priv = dev->data->dev_private;
+    const struct rte_flow_template_table_attr *table_attr = &tbl->cfg.attr;
+    const struct rte_flow_attr *attr = &table_attr->flow_attr;
+    enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
+    uint32_t flags = mlx5_hw_act_flag[!!attr->group][type];
+    struct mlx5dr_action *dr_action;
+    uint32_t bulk_size = rte_log2_u32(table_attr->nb_flows);
+
+    for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
+        uint32_t j;
+        uint32_t *reformat_refcnt;
+        typeof(mpat->reformat[0]) *reformat = mpat->reformat + i;
+        struct mlx5dr_action_reformat_header hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+        enum mlx5dr_action_type reformat_type =
+            mlx5_multi_pattern_reformat_index_to_type(i);
+
+        if (!reformat->elements_num)
+            continue;
+        for (j = 0; j < reformat->elements_num; j++)
+            hdr[j] = reformat->ctx[j].reformat_hdr;
+        reformat_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), 0,
+                                      rte_socket_id());
+        if (!reformat_refcnt)
+            return rte_flow_error_set(error, ENOMEM,
+                                      RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                      NULL, "failed to allocate multi-pattern encap counter");
+        *reformat_refcnt = reformat->elements_num;
+        dr_action = mlx5dr_action_create_reformat
+            (priv->dr_ctx, reformat_type, reformat->elements_num, hdr,
+             bulk_size, flags);
+        if (!dr_action) {
+            mlx5_free(reformat_refcnt);
+            return rte_flow_error_set(error, rte_errno,
+                                      RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                      NULL,
+                                      "failed to create multi-pattern encap action");
+        }
+        for (j = 0; j < reformat->elements_num; j++) {
+            reformat->ctx[j].rule_action->action = dr_action;
+            reformat->ctx[j].encap->action = dr_action;
+            reformat->ctx[j].encap->multi_pattern = 1;
+            reformat->ctx[j].encap->multi_pattern_refcnt = reformat_refcnt;
+        }
+    }
+    if (mpat->mh.elements_num) {
+        typeof(mpat->mh) *mh = &mpat->mh;
+        struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+        uint32_t *mh_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t),
+                                          0, rte_socket_id());
+
+        if (!mh_refcnt)
+            return rte_flow_error_set(error, ENOMEM,
+                                      RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                      NULL, "failed to allocate modify header counter");
+        *mh_refcnt = mpat->mh.elements_num;
+        for (i = 0; i < mpat->mh.elements_num; i++)
+            pattern[i] = mh->ctx[i].mh_pattern;
+        dr_action = mlx5dr_action_create_modify_header
+            (priv->dr_ctx, mpat->mh.elements_num, pattern,
+             bulk_size, flags);
+        if (!dr_action) {
+            mlx5_free(mh_refcnt);
+            return rte_flow_error_set(error, rte_errno,
+                                      RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                      NULL,
+                                      "failed to create multi-pattern header modify action");
+        }
+        for (i = 0; i < mpat->mh.elements_num; i++) {
+            mh->ctx[i].rule_action->action = dr_action;
+            mh->ctx[i].mhdr->action = dr_action;
+            mh->ctx[i].mhdr->multi_pattern = 1;
+            mh->ctx[i].mhdr->multi_pattern_refcnt = mh_refcnt;
+        }
+    }
+
+    return 0;
+}
+
+static int
+mlx5_hw_build_template_table(struct rte_eth_dev *dev,
+                             uint8_t nb_action_templates,
+                             struct rte_flow_actions_template *action_templates[],
+                             struct mlx5dr_action_template *at[],
+                             struct rte_flow_template_table *tbl,
+                             struct rte_flow_error *error)
+{
+    int ret;
+    uint8_t i;
+    struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
+
+    for (i = 0; i < nb_action_templates; i++) {
+        uint32_t refcnt = __atomic_add_fetch(&action_templates[i]->refcnt, 1,
+                                             __ATOMIC_RELAXED);
+
+        if (refcnt <= 1) {
+            rte_flow_error_set(error, EINVAL,
+                               RTE_FLOW_ERROR_TYPE_ACTION,
+                               &action_templates[i], "invalid AT refcount");
+            goto at_error;
+        }
+        at[i] = action_templates[i]->tmpl;
+        tbl->ats[i].action_template = action_templates[i];
+        LIST_INIT(&tbl->ats[i].acts.act_list);
+        /* do NOT translate table action if `dev` was not started */
+        if (!dev->data->dev_started)
+            continue;
+        ret = __flow_hw_actions_translate(dev, &tbl->cfg,
+                                          &tbl->ats[i].acts,
+                                          action_templates[i],
+                                          &mpat, error);
+        if (ret) {
+            i++;
+            goto at_error;
+        }
+    }
+    tbl->nb_action_templates = nb_action_templates;
+    ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
+    if (ret)
+        goto at_error;
+    return 0;
+
+at_error:
+    while (i--) {
+        __flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
+        __atomic_sub_fetch(&action_templates[i]->refcnt,
+                           1, __ATOMIC_RELAXED);
+    }
+    return rte_errno;
+}
+
 /**
  * Create flow table.
  *
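Non-shared bulk actions are sized with
rte_log2_u32(table_attr->nb_flows): mlx5dr takes the bulk size as a
log2 exponent, so the table reserves the next power of two of per-flow
data slots. A worked example of the sizing math (the ceil-log2
rounding of rte_log2_u32() is assumed here):

    #include <stdint.h>
    #include <stdio.h>

    /* ceil(log2(v)) for v >= 1 — the rounding assumed of rte_log2_u32(). */
    static uint32_t
    log2_ceil(uint32_t v)
    {
            uint32_t n = 0;

            while ((1u << n) < v)
                    n++;
            return n;
    }

    int main(void)
    {
            /* A table with nb_flows = 1000 requests bulk_size = 10,
             * i.e. 2^10 = 1024 slots: one data entry per flow rule. */
            printf("bulk_size=%u slots=%u\n",
                   log2_ceil(1000), 1u << log2_ceil(1000));
            return 0;
    }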
@@ -3545,29 +3869,12 @@ flow_hw_table_create(struct rte_eth_dev *dev,
     }
     tbl->nb_item_templates = nb_item_templates;
     /* Build the action template. */
-    for (i = 0; i < nb_action_templates; i++) {
-        uint32_t ret;
-
-        ret = __atomic_fetch_add(&action_templates[i]->refcnt, 1,
-                                 __ATOMIC_RELAXED) + 1;
-        if (ret <= 1) {
-            rte_errno = EINVAL;
-            goto at_error;
-        }
-        at[i] = action_templates[i]->tmpl;
-        tbl->ats[i].action_template = action_templates[i];
-        LIST_INIT(&tbl->ats[i].acts.act_list);
-        if (!port_started)
-            continue;
-        err = __flow_hw_actions_translate(dev, &tbl->cfg,
-                                          &tbl->ats[i].acts,
-                                          action_templates[i], &sub_error);
-        if (err) {
-            i++;
-            goto at_error;
-        }
+    err = mlx5_hw_build_template_table(dev, nb_action_templates,
+                                       action_templates, at, tbl, &sub_error);
+    if (err) {
+        i = nb_item_templates;
+        goto it_error;
     }
-    tbl->nb_action_templates = nb_action_templates;
     tbl->matcher = mlx5dr_matcher_create
         (tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr);
     if (!tbl->matcher)
@@ -3581,7 +3888,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
     LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
     return tbl;
 at_error:
-    while (i--) {
+    for (i = 0; i < nb_action_templates; i++) {
         __flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
         __atomic_fetch_sub(&action_templates[i]->refcnt,
                            1, __ATOMIC_RELAXED);
@@ -3823,6 +4130,10 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
     const struct rte_flow_action_modify_field *mask_conf = mask->conf;
     int ret;
 
+    if (!mask_conf)
+        return rte_flow_error_set(error, EINVAL,
+                                  RTE_FLOW_ERROR_TYPE_ACTION, action,
+                                  "modify_field mask conf is missing");
     if (action_conf->operation != mask_conf->operation)
         return rte_flow_error_set(error, EINVAL,
                                   RTE_FLOW_ERROR_TYPE_ACTION, action,
@@ -4183,16 +4494,25 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-flow_hw_validate_action_raw_encap(struct rte_eth_dev *dev __rte_unused,
-                                  const struct rte_flow_action *action,
+flow_hw_validate_action_raw_encap(const struct rte_flow_action *action,
+                                  const struct rte_flow_action *mask,
                                   struct rte_flow_error *error)
 {
-    const struct rte_flow_action_raw_encap *raw_encap_data = action->conf;
+    const struct rte_flow_action_raw_encap *mask_conf = mask->conf;
+    const struct rte_flow_action_raw_encap *action_conf = action->conf;
 
-    if (!raw_encap_data || !raw_encap_data->size || !raw_encap_data->data)
+    if (!mask_conf || !mask_conf->size)
+        return rte_flow_error_set(error, EINVAL,
+                                  RTE_FLOW_ERROR_TYPE_ACTION, mask,
+                                  "raw_encap: size must be masked");
+    if (!action_conf || !action_conf->size)
+        return rte_flow_error_set(error, EINVAL,
+                                  RTE_FLOW_ERROR_TYPE_ACTION, action,
+                                  "raw_encap: invalid action configuration");
+    if (mask_conf->data && !action_conf->data)
         return rte_flow_error_set(error, EINVAL,
                                   RTE_FLOW_ERROR_TYPE_ACTION, action,
-                                  "invalid raw_encap_data");
+                                  "raw_encap: masked data is missing");
     return 0;
 }
 
@@ -4430,7 +4750,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
             action_flags |= MLX5_FLOW_ACTION_DECAP;
             break;
         case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
-            ret = flow_hw_validate_action_raw_encap(dev, action, error);
+            ret = flow_hw_validate_action_raw_encap(action, mask, error);
             if (ret < 0)
                 return ret;
             action_flags |= MLX5_FLOW_ACTION_ENCAP;
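With the stricter validation, an application must mask at least the
raw_encap size in the actions template; masking data as well fixes the
encap header at template time, which is what lets the PMD create one
shared reformat action. A usage sketch of such a template (rte_flow
template API; header contents illustrative only):

    #include <rte_flow.h>

    /* Illustrative encap header bytes; real contents depend on the tunnel. */
    static uint8_t encap_hdr[50];
    static uint8_t encap_hdr_mask[50] = { [0 ... 49] = 0xff };

    static const struct rte_flow_action_raw_encap encap = {
            .data = encap_hdr,
            .size = sizeof(encap_hdr),
    };
    /* Fully masked conf: header is template-constant, so it is shareable. */
    static const struct rte_flow_action_raw_encap encap_mask = {
            .data = encap_hdr_mask,
            .size = sizeof(encap_hdr_mask),
    };

    static const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    static const struct rte_flow_action masks[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap_mask },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };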