From patchwork Sun May 7 09:50:46 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 126750
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gregory Etelson
CC: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [RFC PATCH v3] ethdev: add indirect list flow action
Date: Sun, 7 May 2023 12:50:46 +0300
Message-ID: <20230507095046.5456-1-getelson@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230418172144.24365-1-getelson@nvidia.com>
References: <20230418172144.24365-1-getelson@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

The indirect action API creates a shared flow action with a unique
action handle. Flow rules can access the shared flow action and the
resources related to that action through the indirect action handle.
In addition, the API allows updating an existing shared flow action
configuration. After the update completes, the new action configuration
is available to all flows that reference that shared action.

The indirect actions list expands the indirect action API:

• An indirect action list creates a handle for one or several flow
  actions, while a legacy indirect action handle references a single
  action only. Input flow actions are arranged in an END-terminated
  list.
• A flow rule can provide rule-specific configuration parameters to an
  existing shared handle. Updates of flow rule specific configuration
  do not change the base action configuration, which was set during
  action creation.

An indirect action list handle defines 2 types of resources:

• A mutable handle resource can be changed during the handle lifespan.
• An immutable handle resource value is set during handle creation and
  cannot be changed.

There are 2 types of mutable indirect handle contexts:

• The action mutable context is always shared between all flows that
  reference the indirect actions list handle. The action mutable
  context can be changed by an explicit invocation of the indirect
  handle update function.
• The flow mutable context is private to a flow. The flow mutable
  context can be updated by the indirect list handle flow rule
  configuration.
 flow 1:
 / indirect handle H conf C1 /
        |            |
        |            |
 flow 2:|            |
 / indirect handle H conf C2 /
        |      |     |      |
 =========================================================  ^
        |      |     |      |                               |
        V      |     V      |                               |
 ~~~~~~~~~~~~~~     ~~~~~~~~~~~~~~~                         |
  flow mutable       flow mutable                           |
   context 1          context 2                             |
 ~~~~~~~~~~~~~~     ~~~~~~~~~~~~~~~                indirect |
        |               |                            action |
        V               V                           context |
 -----------------------------------------------------      |
 |              action mutable context               |      |
 -----------------------------------------------------      |
                                                            v
               action immutable context
 =========================================================

The indirect action member types - immutable and action / flow
mutable - are mutually exclusive and depend on the action definition.
For example:

• For the indirect METER_MARK action, policy is an immutable action
  member and profile is an action mutable member.
• The indirect METER_MARK flow action defines init_color as a flow
  mutable member.
• The indirect QUOTA flow action does not define flow mutable members.

Template API:

Action template format:

  template .. indirect_list handle Htmpl conf Ctmpl ..
  mask     .. indirect_list handle Hmask conf Cmask ..

1 If Htmpl was masked (Hmask != 0), the PMD compiles the base action
  configuration during the action template, table template or flow
  rule phase, depending on the PMD action implementation. Otherwise,
  the action is compiled from scratch during flow rule processing.
2 If both Htmpl and Ctmpl were masked (Hmask != 0 and Cmask != 0),
  table template processing overwrites the base action configuration
  with the Ctmpl parameters.

Flow rule format:

  actions .. indirect_list handle Hflow conf Cflow ..

3 If Htmpl was masked, Hflow can reference a different action of the
  same type as Htmpl.
4 If Cflow was specified, it overwrites the action configuration.

Signed-off-by: Gregory Etelson
---
v3: do not deprecate indirect flow action.
---
 lib/ethdev/rte_flow.h        | 266 +++++++++++++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h |  41 ++++++
 2 files changed, 307 insertions(+)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..ac1f51e564 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2912,6 +2912,13 @@ enum rte_flow_action_type {
 	 * applied to the given ethdev Rx queue.
 	 */
 	RTE_FLOW_ACTION_TYPE_SKIP_CMAN,
+
+	/**
+	 * Action handle to reference a flow actions list.
+	 *
+	 * @see struct rte_flow_action_indirect_list
+	 */
+	RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
 };
 
 /**
@@ -6118,6 +6125,265 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
 					  void *user_data,
 					  struct rte_flow_error *error);
 
+struct rte_flow_action_list_handle;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the INDIRECT_LIST flow action.
+ *
+ * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST
+ */
+struct rte_flow_action_indirect_list {
+	struct rte_flow_action_list_handle *handle;
+	/**< Indirect action list handle */
+	/**
+	 * Flow mutable configuration array.
+	 * NULL if the handle has no flow mutable configuration update.
+	 * Otherwise, if the handle was created with the list A1 / A2 .. An / END,
+	 * the size of conf is n.
+	 * conf[i] points to the flow mutable update of Ai in the handle
+	 * actions list, or is NULL if Ai has no update.
+	 */
+	const void **conf;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create an indirect flow action object from a flow actions list.
+ * The object is identified by a unique handle.
+ * The handle has a single state and configuration
+ * across all the flow rules using it.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] conf
+ *   Action configuration for the indirect action list creation.
+ * @param[in] actions
+ *   Specific configuration of the indirect action list.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-EINVAL) if *actions* list invalid.
+ *   - (-ENOTSUP) if *actions* list element valid but unsupported.
+ *   - (-E2BIG) too many elements in *actions*.
+ */
+__rte_experimental
+struct rte_flow_action_list_handle *
+rte_flow_action_list_handle_create(uint16_t port_id,
+				   const struct rte_flow_indir_action_conf *conf,
+				   const struct rte_flow_action *actions,
+				   struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Async function call to create an indirect flow action object
+ * from a flow actions list.
+ * The object is identified by a unique handle.
+ * The handle has a single state and configuration
+ * across all the flow rules using it.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the rule.
+ * @param[in] attr
+ *   Indirect action update operation attributes.
+ * @param[in] conf
+ *   Action configuration for the indirect action list creation.
+ * @param[in] actions
+ *   Specific configuration of the indirect action list.
+ * @param[in] user_data
+ *   The user data that will be returned on the async completion event.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-EINVAL) if *actions* list invalid.
+ *   - (-ENOTSUP) if *actions* list element valid but unsupported.
+ *   - (-E2BIG) too many elements in *actions*.
+ */
+__rte_experimental
+struct rte_flow_action_list_handle *
+rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id,
+					 const struct rte_flow_op_attr *attr,
+					 const struct rte_flow_indir_action_conf *conf,
+					 const struct rte_flow_action *actions,
+					 void *user_data,
+					 struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy an indirect actions list by handle.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] handle
+ *   Handle for the indirect actions list to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if the actions list pointed by *handle* was not found.
+ *   - (-EBUSY) if the actions list pointed by *handle* is still in use.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_action_list_handle_destroy(uint16_t port_id,
+				    struct rte_flow_action_list_handle *handle,
+				    struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue an indirect action list destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the rule.
+ * @param[in] op_attr
+ *   Indirect action destruction operation attributes.
+ * @param[in] handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_action_list_handle_destroy
+		(uint16_t port_id, uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_list_handle *handle,
+		 void *user_data, struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Query and/or update an indirect flow actions list.
+ * If both *query* and *update* are not NULL, the function atomically
+ * queries and updates the indirect action. Query and update are carried
+ * out in the order specified in the *mode* parameter.
+ * If either *query* or *update* is NULL, the function performs the
+ * other operation only.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param handle
+ *   Handle for the indirect actions list object to be updated.
+ * @param update
+ *   If not NULL, update profile specification used to modify the action
+ *   pointed by *handle*.
+ *   @see struct rte_flow_action_indirect_list
+ * @param query
+ *   If not NULL, pointer to storage for the associated query data type.
+ *   @see struct rte_flow_action_indirect_list
+ * @param mode
+ *   Operational mode.
+ * @param error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOTSUP) if underlying device does not support this functionality.
+ *   - (-EINVAL) if *handle* or *mode* invalid, or
+ *     both *query* and *update* are NULL.
+ */
+__rte_experimental
+int
+rte_flow_action_list_handle_query_update(uint16_t port_id,
+					 const struct rte_flow_action_list_handle *handle,
+					 const void **update, void **query,
+					 enum rte_flow_query_update_mode mode,
+					 struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue an async indirect flow actions list query and/or update.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to update the rule.
+ * @param attr
+ *   Indirect action update operation attributes.
+ * @param handle
+ *   Handle for the indirect actions list object to be updated.
+ * @param update
+ *   If not NULL, update profile specification used to modify the action
+ *   pointed by *handle*.
+ *   @see struct rte_flow_action_indirect_list
+ * @param query
+ *   If not NULL, pointer to storage for the associated query data type.
+ *   Query result returned on the async completion event.
+ *   @see struct rte_flow_action_indirect_list
+ * @param mode
+ *   Operational mode.
+ * @param user_data
+ *   The user data that will be returned on the async completion event.
+ * @param error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOTSUP) if underlying device does not support this functionality.
+ *   - (-EINVAL) if *handle* or *mode* invalid, or
+ *     both *update* and *query* are NULL.
+ */
+__rte_experimental
+int
+rte_flow_async_action_list_handle_query_update(uint16_t port_id,
+					 uint32_t queue_id,
+					 const struct rte_flow_op_attr *attr,
+					 const struct rte_flow_action_list_handle *handle,
+					 const void **update, void **query,
+					 enum rte_flow_query_update_mode mode,
+					 void *user_data,
+					 struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index a129a4605d..8dc803023c 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -121,6 +121,17 @@ struct rte_flow_ops {
 		 const void *update, void *query,
 		 enum rte_flow_query_update_mode qu_mode,
 		 struct rte_flow_error *error);
+	/** @see rte_flow_action_list_handle_create() */
+	struct rte_flow_action_list_handle *(*action_list_handle_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_indir_action_conf *conf,
+		 const struct rte_flow_action actions[],
+		 struct rte_flow_error *error);
+	/** @see rte_flow_action_list_handle_destroy() */
+	int (*action_list_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_action_list_handle *handle,
+		 struct rte_flow_error *error);
 	/** See rte_flow_tunnel_decap_set() */
 	int (*tunnel_decap_set)
 		(struct rte_eth_dev *dev,
@@ -302,6 +313,36 @@ struct rte_flow_ops {
 		 const void *update, void *query,
 		 enum rte_flow_query_update_mode qu_mode,
 		 void *user_data, struct rte_flow_error *error);
+	/** @see rte_flow_async_action_list_handle_create() */
+	struct rte_flow_action_list_handle *
+	(*async_action_list_handle_create)
+		(struct rte_eth_dev *dev, uint32_t queue_id,
+		 const struct rte_flow_op_attr *attr,
+		 const struct rte_flow_indir_action_conf *conf,
+		 const struct rte_flow_action *actions,
+		 void *user_data, struct rte_flow_error *error);
+	/** @see rte_flow_async_action_list_handle_destroy() */
+	int (*async_action_list_handle_destroy)
+		(struct rte_eth_dev *dev, uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_list_handle *action_handle,
+		 void *user_data, struct rte_flow_error *error);
+	/** @see rte_flow_action_list_handle_query_update() */
+	int (*action_list_handle_query_update)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_action_list_handle *handle,
+		 const void **update, void **query,
+		 enum rte_flow_query_update_mode mode,
+		 struct rte_flow_error *error);
+	/** @see rte_flow_async_action_list_handle_query_update() */
+	int (*async_action_list_handle_query_update)
+		(struct rte_eth_dev *dev, uint32_t queue_id,
+		 const struct rte_flow_op_attr *attr,
+		 const struct rte_flow_action_list_handle *handle,
+		 const void **update, void **query,
+		 enum rte_flow_query_update_mode mode,
+		 void *user_data, struct rte_flow_error *error);
 };
 
 /**
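The conf array contract of struct rte_flow_action_indirect_list
(conf[i] pairs with the i-th action of the END-terminated creation
list; a NULL entry means "no flow mutable update") can be illustrated
with a minimal self-contained sketch. The toy_* types below are
invented stand-ins, not DPDK code:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins; the real types live in rte_flow.h. */
enum toy_action_type { TOY_QUOTA, TOY_METER_MARK, TOY_END };

struct toy_action {
	enum toy_action_type type;
};

/*
 * Walk an END-terminated action list and count how many per-flow
 * updates the conf array carries: conf[i] pairs with actions[i],
 * and a NULL entry means no flow mutable update for that action.
 */
static int
toy_count_updates(const struct toy_action *actions, const void **conf)
{
	int n = 0;

	for (int i = 0; actions[i].type != TOY_END; i++)
		if (conf != NULL && conf[i] != NULL)
			n++;
	return n;
}
```

A PMD implementing INDIRECT_LIST would perform a similar walk when
applying per-rule updates on top of the base action configuration.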