From patchwork Thu Feb 10 16:29:26 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107297
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: ,
CC: , ,
Subject: [PATCH 13/13] net/mlx5: add header reformat action
Date: Thu, 10 Feb 2022 18:29:26 +0200
Message-ID: <20220210162926.20436-14-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions <dev.dpdk.org>

The HW steering header reformat action can work in bulk mode. In this case,
when the template table is created, a bulk of header reformat actions is
allocated at the low level. Afterwards, when a flow is created, it is enough
to specify the index of the action within the bulk and the encapsulation
data for that action.

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5.h         |   1 +
 drivers/net/mlx5/mlx5_flow.h    |  21 +++
 drivers/net/mlx5/mlx5_flow_dv.c |   4 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 228 +++++++++++++++++++++++++++++++-
 4 files changed, 251 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c78dc3c431..e10f55bf8c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -344,6 +344,7 @@ struct mlx5_hw_q_job {
 	uint32_t type; /* Job type. */
 	struct rte_flow_hw *flow; /* Flow attached to the job. */
 	void *user_data; /* Job user data. */
+	uint8_t *encap_data; /* Encap data. */
 };
 
 /* HW steering job descriptor LIFO header . */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 097e5bf587..16fb6e643b 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1038,6 +1038,14 @@ struct mlx5_action_construct_data {
 	uint16_t action_src; /* rte_flow_action src offset. */
 	uint16_t action_dst; /* mlx5dr_rule_action dst offset. */
 	union {
+		struct {
+			/* encap src(item) offset. */
+			uint16_t src;
+			/* encap dst data offset. */
+			uint16_t dst;
+			/* encap data len. */
+			uint16_t len;
+		} encap;
 		struct {
 			uint64_t types; /* RSS hash types. */
 			uint32_t level; /* RSS level. */
@@ -1079,6 +1087,13 @@ struct mlx5_hw_jump_action {
 	struct mlx5dr_action *hws_action;
 };
 
+/* Encap decap action struct. */
+struct mlx5_hw_encap_decap_action {
+	struct mlx5dr_action *action; /* Action object. */
+	size_t data_size; /* Action metadata size. */
+	uint8_t data[]; /* Action data. */
+};
+
 /* The maximum actions support in the flow. */
 #define MLX5_HW_MAX_ACTS 16
 
@@ -1088,6 +1103,9 @@ struct mlx5_hw_actions {
 	LIST_HEAD(act_list, mlx5_action_construct_data) act_list;
 	struct mlx5_hw_jump_action *jump; /* Jump action. */
 	struct mlx5_hrxq *tir; /* TIR action. */
+	/* Encap/Decap action. */
+	struct mlx5_hw_encap_decap_action *encap_decap;
+	uint16_t encap_decap_pos; /* Encap/Decap action position. */
 	uint32_t acts_num:4; /* Total action number. */
 	uint32_t mark:1; /* Indicate the mark action. */
 	/* Translated DR action array from action template. */
@@ -2021,4 +2039,7 @@ int flow_dv_action_query(struct rte_eth_dev *dev,
 			 const struct rte_flow_action_handle *handle,
 			 void *data,
 			 struct rte_flow_error *error);
+size_t flow_dv_get_item_hdr_len(const enum rte_flow_item_type item_type);
+int flow_dv_convert_encap_data(const struct rte_flow_item *items, uint8_t *buf,
+			       size_t *size, struct rte_flow_error *error);
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ca8ae4214b..377ed6c1db 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4024,7 +4024,7 @@ flow_dv_push_vlan_action_resource_register
  * @return
  *   sizeof struct item_type, 0 if void or irrelevant.
  */
-static size_t
+size_t
 flow_dv_get_item_hdr_len(const enum rte_flow_item_type item_type)
 {
 	size_t retval;
@@ -4090,7 +4090,7 @@ flow_dv_get_item_hdr_len(const enum rte_flow_item_type item_type)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static int
+int
 flow_dv_convert_encap_data(const struct rte_flow_item *items, uint8_t *buf,
 			   size_t *size, struct rte_flow_error *error)
 {
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 9fc6f24542..5a652ac8e6 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -345,6 +345,50 @@ __flow_hw_act_data_general_append(struct mlx5_priv *priv,
 	return 0;
 }
 
+/**
+ * Append dynamic encap action to the dynamic action list.
+ *
+ * @param[in] priv
+ *   Pointer to the port private data structure.
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ * @param[in] encap_src
+ *   Offset of source encap raw data.
+ * @param[in] encap_dst
+ *   Offset of destination encap raw data.
+ * @param[in] len
+ *   Length of the data to be updated.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static __rte_always_inline int
+__flow_hw_act_data_encap_append(struct mlx5_priv *priv,
+				struct mlx5_hw_actions *acts,
+				enum rte_flow_action_type type,
+				uint16_t action_src,
+				uint16_t action_dst,
+				uint16_t encap_src,
+				uint16_t encap_dst,
+				uint16_t len)
+{
+	struct mlx5_action_construct_data *act_data;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return -1;
+	act_data->encap.src = encap_src;
+	act_data->encap.dst = encap_dst;
+	act_data->encap.len = len;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return 0;
+}
+
 /**
  * Append shared RSS action to the dynamic action list.
  *
@@ -435,6 +479,53 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Translate encap items to encapsulation list.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev data structure.
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ * @param[in] items
+ *   Encap item pattern.
+ * @param[in] items_m
+ *   Encap item mask indicates which part are constant and dynamic.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static __rte_always_inline int
+flow_hw_encap_item_translate(struct rte_eth_dev *dev,
+			     struct mlx5_hw_actions *acts,
+			     enum rte_flow_action_type type,
+			     uint16_t action_src,
+			     uint16_t action_dst,
+			     const struct rte_flow_item *items,
+			     const struct rte_flow_item *items_m)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	size_t len, total_len = 0;
+	uint32_t i = 0;
+
+	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++, items_m++, i++) {
+		len = flow_dv_get_item_hdr_len(items->type);
+		if ((!items_m->spec ||
+		     memcmp(items_m->spec, items->spec, len)) &&
+		    __flow_hw_act_data_encap_append(priv, acts, type,
+						    action_src, action_dst, i,
+						    total_len, len))
+			return -1;
+		total_len += len;
+	}
+	return 0;
+}
+
 /**
  * Translate rte_flow actions to DR action.
  *
@@ -472,6 +563,12 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 	struct rte_flow_action *actions = at->actions;
 	struct rte_flow_action *action_start = actions;
 	struct rte_flow_action *masks = at->masks;
+	enum mlx5dr_action_reformat_type refmt_type = 0;
+	const struct rte_flow_action_raw_encap *raw_encap_data;
+	const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL;
+	uint16_t reformat_pos = MLX5_HW_MAX_ACTS, reformat_src = 0;
+	uint8_t *encap_data = NULL;
+	size_t data_size = 0;
 	bool actions_end = false;
 	uint32_t type, i;
 	int err;
@@ -573,6 +670,56 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 			}
 			i++;
 			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+			MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS);
+			enc_item = ((const struct rte_flow_action_vxlan_encap *)
+				    actions->conf)->definition;
+			enc_item_m =
+				((const struct rte_flow_action_vxlan_encap *)
+				 masks->conf)->definition;
+			reformat_pos = i++;
+			reformat_src = actions - action_start;
+			refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2;
+			break;
+		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
+			MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS);
+			enc_item = ((const struct rte_flow_action_nvgre_encap *)
+				    actions->conf)->definition;
+			enc_item_m =
+				((const struct rte_flow_action_nvgre_encap *)
+				 masks->conf)->definition;
+			reformat_pos = i++;
+			reformat_src = actions - action_start;
+			refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2;
+			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+		case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
+			MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS);
+			reformat_pos = i++;
+			refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+			raw_encap_data =
+				(const struct rte_flow_action_raw_encap *)
+				actions->conf;
+			encap_data = raw_encap_data->data;
+			data_size = raw_encap_data->size;
+			if (reformat_pos != MLX5_HW_MAX_ACTS) {
+				refmt_type = data_size <
+				MLX5_ENCAPSULATION_DECISION_SIZE ?
+				MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2 :
+				MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3;
+			} else {
+				reformat_pos = i++;
+				refmt_type =
+				MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2;
+			}
+			reformat_src = actions - action_start;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+			reformat_pos = i++;
+			refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2;
+			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
 			break;
@@ -580,6 +727,45 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 			break;
 		}
 	}
+	if (reformat_pos != MLX5_HW_MAX_ACTS) {
+		uint8_t buf[MLX5_ENCAP_MAX_LEN];
+
+		if (enc_item) {
+			MLX5_ASSERT(!encap_data);
+			if (flow_dv_convert_encap_data
+				(enc_item, buf, &data_size, error) ||
+			    flow_hw_encap_item_translate
+				(dev, acts, (action_start + reformat_src)->type,
+				 reformat_src, reformat_pos,
+				 enc_item, enc_item_m))
+				goto err;
+			encap_data = buf;
+		} else if (encap_data && __flow_hw_act_data_encap_append
+				(priv, acts,
+				 (action_start + reformat_src)->type,
+				 reformat_src, reformat_pos, 0, 0, data_size)) {
+			goto err;
+		}
+		acts->encap_decap = mlx5_malloc(MLX5_MEM_ZERO,
+				sizeof(*acts->encap_decap) + data_size,
+				0, SOCKET_ID_ANY);
+		if (!acts->encap_decap)
+			goto err;
+		if (data_size) {
+			acts->encap_decap->data_size = data_size;
+			memcpy(acts->encap_decap->data, encap_data, data_size);
+		}
+		acts->encap_decap->action = mlx5dr_action_create_reformat
+				(priv->dr_ctx, refmt_type,
+				 data_size, encap_data,
+				 rte_log2_u32(table_attr->nb_flows),
+				 mlx5_hw_act_flag[!!attr->group][type]);
+		if (!acts->encap_decap->action)
+			goto err;
+		acts->rule_acts[reformat_pos].action =
+				acts->encap_decap->action;
+		acts->encap_decap_pos = reformat_pos;
+	}
 	acts->acts_num = i;
 	return 0;
 err:
@@ -735,6 +921,9 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	struct rte_flow_template_table *table = job->flow->table;
 	struct mlx5_action_construct_data *act_data;
 	const struct rte_flow_action *action;
+	const struct rte_flow_action_raw_encap *raw_encap_data;
+	const struct rte_flow_item *enc_item = NULL;
+	uint8_t *buf = job->encap_data;
 	struct rte_flow_attr attr = {
 		.ingress = 1,
 	};
@@ -756,6 +945,9 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	} else {
 		attr.ingress = 1;
 	}
+	if (hw_acts->encap_decap && hw_acts->encap_decap->data_size)
+		memcpy(buf, hw_acts->encap_decap->data,
+		       hw_acts->encap_decap->data_size);
 	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
 		uint32_t jump_group;
 		uint32_t tag;
@@ -811,10 +1003,38 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 					&rule_acts[act_data->action_dst]))
 				return -1;
 			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+			enc_item = ((const struct rte_flow_action_vxlan_encap *)
+				    action->conf)->definition;
+			rte_memcpy((void *)&buf[act_data->encap.dst],
+				   enc_item[act_data->encap.src].spec,
+				   act_data->encap.len);
+			break;
+		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
+			enc_item = ((const struct rte_flow_action_nvgre_encap *)
+				    action->conf)->definition;
+			rte_memcpy((void *)&buf[act_data->encap.dst],
+				   enc_item[act_data->encap.src].spec,
+				   act_data->encap.len);
+			break;
+		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+			raw_encap_data =
+				(const struct rte_flow_action_raw_encap *)
+				action->conf;
+			rte_memcpy((void *)&buf[act_data->encap.dst],
+				   raw_encap_data->data, act_data->encap.len);
+			MLX5_ASSERT(raw_encap_data->size ==
+				    act_data->encap.len);
+			break;
 		default:
 			break;
 		}
 	}
+	if (hw_acts->encap_decap) {
+		rule_acts[hw_acts->encap_decap_pos].reformat.offset =
+				job->flow->idx - 1;
+		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
+	}
 	return 0;
 }
@@ -1821,6 +2041,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		goto err;
 	}
 	mem_size += (sizeof(struct mlx5_hw_q_job *) +
+		    sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
 		    sizeof(struct mlx5_hw_q_job)) *
 		    queue_attr[0]->size;
 	}
@@ -1831,6 +2052,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		goto err;
 	}
 	for (i = 0; i < nb_queue; i++) {
+		uint8_t *encap = NULL;
+
 		priv->hw_q[i].job_idx = queue_attr[i]->size;
 		priv->hw_q[i].size = queue_attr[i]->size;
 		if (i == 0)
@@ -1841,8 +2064,11 @@ flow_hw_configure(struct rte_eth_dev *dev,
 				&job[queue_attr[i - 1]->size];
 		job = (struct mlx5_hw_q_job *)
 		      &priv->hw_q[i].job[queue_attr[i]->size];
-		for (j = 0; j < queue_attr[i]->size; j++)
+		encap = (uint8_t *)&job[queue_attr[i]->size];
+		for (j = 0; j < queue_attr[i]->size; j++) {
+			job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
 			priv->hw_q[i].job[j] = &job[j];
+		}
 	}
 	dr_ctx_attr.pd = priv->sh->cdev->pd;
 	dr_ctx_attr.queues = nb_queue;
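For reference, the per-queue memory layout that the `flow_hw_configure()` hunk above sets up (an array of job pointers, then the job descriptors, then one `MLX5_ENCAP_MAX_LEN` encap scratch buffer per job, all carved from a single allocation) can be sketched in standalone C. This is an illustration only, not driver code: `ENCAP_MAX_LEN` and the simplified `struct job` are stand-ins for the mlx5 definitions, and `check_layout()` is a hypothetical helper that just verifies the carving arithmetic.

```c
#include <stdint.h>
#include <stdlib.h>

#define ENCAP_MAX_LEN 132             /* stand-in for MLX5_ENCAP_MAX_LEN */

struct job {                          /* simplified mlx5_hw_q_job */
	uint32_t type;
	void *user_data;
	uint8_t *encap_data;          /* slice of the per-queue encap area */
};

/* Carve one allocation into job pointers, jobs and per-job encap slices,
 * mirroring the mem_size computation in the patch; return 0 when every
 * job gets a disjoint ENCAP_MAX_LEN buffer ending exactly at the end of
 * the allocation. */
int check_layout(size_t queue_size)
{
	size_t mem_size = (sizeof(struct job *) + sizeof(struct job) +
			   ENCAP_MAX_LEN) * queue_size;
	uint8_t *base = calloc(1, mem_size);
	struct job **job_ptrs = (struct job **)base;
	struct job *jobs = (struct job *)&job_ptrs[queue_size];
	uint8_t *encap = (uint8_t *)&jobs[queue_size];
	int ok = base != NULL;

	for (size_t j = 0; ok && j < queue_size; j++) {
		jobs[j].encap_data = &encap[j * ENCAP_MAX_LEN];
		job_ptrs[j] = &jobs[j];
		/* each slice starts right after the previous one */
		ok = jobs[j].encap_data == encap + j * ENCAP_MAX_LEN;
	}
	/* the last slice must end exactly at base + mem_size */
	if (ok)
		ok = encap + queue_size * ENCAP_MAX_LEN == base + mem_size;
	free(base);
	return ok ? 0 : -1;
}
```

Because the encap area is sized per job, `flow_hw_actions_construct()` can build the rule's encapsulation data in `job->encap_data` without any extra allocation on the fast path.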