From patchwork Mon Feb 21 23:02:30 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 107913
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v9 01/11] ethdev: introduce flow engine configuration
Date: Tue, 22 Feb 2022 01:02:30 +0200
Message-ID: <20220221230240.2409665-2-akozyrev@nvidia.com>
In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com>
References: <20220220034409.2226860-1-akozyrev@nvidia.com>
 <20220221230240.2409665-1-akozyrev@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions <dev@dpdk.org>

Creating and destroying flow rules at a large scale incurs a performance
penalty and may negatively impact packet processing when done as part of
the datapath logic. This is mainly because software/hardware resources are
allocated and prepared during flow rule creation.

To optimize the insertion rate, a PMD may use hints provided by the
application at the initialization phase. The rte_flow_configure() function
allows the application to pre-allocate all the needed resources beforehand;
these resources can then be used at a later stage without costly
allocations. Every PMD may use only a subset of the hints, ignore unused
ones, or fail if the requested configuration is not supported.

The rte_flow_info_get() function is available to retrieve information about
the supported pre-configurable resources. Both of these functions must be
called before any other use of the flow API engine.

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 doc/guides/prog_guide/rte_flow.rst     |  36 ++++++++
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 lib/ethdev/ethdev_driver.h             |   7 +-
 lib/ethdev/rte_flow.c                  |  68 +++++++++++++++
 lib/ethdev/rte_flow.h                  | 111 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  10 +++
 lib/ethdev/version.map                 |   2 +
 7 files changed, 239 insertions(+), 1 deletion(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 0e475019a6..c89161faef 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3606,6 +3606,42 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some parameters at the initialization phase about
+rules engine configuration and/or expected flow rules characteristics.
+These parameters may be used by the PMD to preallocate resources and
+configure the NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API engine configuration and allocates
+requested resources beforehand to avoid costly allocations later.
+The expected number of resources in an application allows the PMD to prepare
+and optimize the NIC hardware configuration and memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                      const struct rte_flow_port_attr *port_attr,
+                      struct rte_flow_error *error);
+
+Information about the number of available resources can be retrieved via
+the ``rte_flow_info_get()`` API.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_info *port_info,
+                     struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode

diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 41923f50e6..68b41f2062 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -99,6 +99,12 @@ New Features
   The information of these properties is important for debug.
   As the information is private, a dump function is introduced.
 
+* **Added functions to configure Flow API engine.**
+
+  * ethdev: Added ``rte_flow_configure`` API to configure the flow management
+    engine, allowing to pre-allocate some resources for better performance.
+    Added ``rte_flow_info_get`` API to retrieve available resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
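The documented call sequence — query the PMD limits with ``rte_flow_info_get()``, then pre-allocate with ``rte_flow_configure()`` before creating any rule — can be sketched as below. This is a minimal, illustrative sketch against the API introduced by this patch, not part of it; the resource counts are arbitrary example figures and error handling is abbreviated.

```c
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: configure the flow engine of a port before any rule creation.
 * Assumes the Ethernet device is already configured but not yet started,
 * as required by rte_flow_configure(). */
static int
flow_engine_setup(uint16_t port_id)
{
	struct rte_flow_port_info info;
	struct rte_flow_port_attr attr = {0};
	struct rte_flow_error error;
	int ret;

	/* Ask the PMD how many pre-configurable resources it supports.
	 * A zero field means the resource is not supported at all. */
	ret = rte_flow_info_get(port_id, &info, &error);
	if (ret < 0)
		return ret;

	/* Request a subset of the advertised maximums; the attributes
	 * must not exceed the numbers reported by rte_flow_info_get(). */
	attr.nb_counters = RTE_MIN((uint32_t)1024, info.max_nb_counters);
	attr.nb_aging_objects =
		RTE_MIN((uint32_t)128, info.max_nb_aging_objects);

	/* Must precede any other flow API usage on this port. */
	return rte_flow_configure(port_id, &attr, &error);
}
```

The sketch leaves ``nb_meters`` at zero, which per the API means meters are allocated on demand only.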
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 6d697a879a..42f0a3981e 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -138,7 +138,12 @@ struct rte_eth_dev_data {
 	 * Indicates whether the device is configured:
 	 * CONFIGURED(1) / NOT CONFIGURED(0)
 	 */
-	dev_configured : 1;
+	dev_configured : 1,
+	/**
+	 * Indicates whether the flow engine is configured:
+	 * CONFIGURED(1) / NOT CONFIGURED(0)
+	 */
+	flow_configured : 1;
 
 	/** Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0) */
 	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7f93900bc8..7ec7a95a6b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1392,3 +1392,71 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (dev->data->dev_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (port_info == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
+		return -EINVAL;
+	}
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_info, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (dev->data->dev_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (dev->data->dev_started != 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" already started.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (port_attr == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
+		return -EINVAL;
+	}
+	if (likely(!!ops->configure)) {
+		ret = ops->configure(dev, port_attr, error);
+		if (ret == 0)
+			dev->data->flow_configured = 1;
+		return flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 765beb3e52..7e6f5eba46 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -43,6 +43,9 @@
 extern "C" {
 #endif
 
+#define RTE_FLOW_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__)
+
 /**
  * Flow rule attributes.
  *
@@ -4872,6 +4875,114 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine resources.
+ * The zero value means a resource is not supported.
+ */
+struct rte_flow_port_info {
+	/**
+	 * Maximum number of counters.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t max_nb_counters;
+	/**
+	 * Maximum number of aging objects.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t max_nb_aging_objects;
+	/**
+	 * Maximum number of traffic meters.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t max_nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get information about flow engine resources.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[out] port_info
+ *   A pointer to a structure of type *rte_flow_port_info*
+ *   to be filled with the resources information of the port.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine resources settings.
+ * The zero value means on demand resource allocations only.
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Number of counters to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging objects to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_objects;
+	/**
+	 * Number of traffic meters to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the settings.
+ * The port, however, may reject changes and keep the old config.
+ *
+ * Parameters in configuration attributes must not exceed
+ * numbers of resources returned by the rte_flow_info_get API.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif

diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..7c29930d0f 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**

diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index d5cc56a560..0d849c153f 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -264,6 +264,8 @@ EXPERIMENTAL {
 	rte_eth_ip_reassembly_capability_get;
 	rte_eth_ip_reassembly_conf_get;
 	rte_eth_ip_reassembly_conf_set;
+	rte_flow_info_get;
+	rte_flow_configure;
 };
 
 INTERNAL {

From patchwork Mon Feb 21 23:02:31 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 107914
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v9 02/11] ethdev: add flow item/action templates
Date: Tue, 22 Feb 2022 01:02:31 +0200
Message-ID: <20220221230240.2409665-3-akozyrev@nvidia.com>
In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com>
References: <20220220034409.2226860-1-akozyrev@nvidia.com>
 <20220221230240.2409665-1-akozyrev@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions <dev@dpdk.org>

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rule insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list), so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during rule creation.
A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rule creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at table creation time.

The flow rule creation is done by selecting a table, a pattern template and
an actions template (which are bound to the table), and setting unique
values for the items and actions.

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 doc/guides/prog_guide/rte_flow.rst     | 135 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 252 ++++++++++++++++++++++
 lib/ethdev/rte_flow.h                  | 280 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 718 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index c89161faef..6cdfea09be 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3642,6 +3642,141 @@ Information about the number of available resources can be retrieved via
                      struct rte_flow_port_info *port_info,
                      struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value is used to select a field to match on, spec/last are ignored.
+The pattern template may be used by multiple tables and must not be destroyed +until all these tables are destroyed first. + +.. code-block:: c + + struct rte_flow_pattern_template * + rte_flow_pattern_template_create(uint16_t port_id, + const struct rte_flow_pattern_template_attr *template_attr, + const struct rte_flow_item pattern[], + struct rte_flow_error *error); + +For example, to create a pattern template to match on the destination MAC: + +.. code-block:: c + + const struct rte_flow_pattern_template_attr attr = {.ingress = 1}; + struct rte_flow_item_eth eth_m = { + .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff"; + }; + struct rte_flow_item pattern[] = { + [0] = {.type = RTE_FLOW_ITEM_TYPE_ETH, + .mask = ð_m}, + [1] = {.type = RTE_FLOW_ITEM_TYPE_END,}, + }; + struct rte_flow_error err; + + struct rte_flow_pattern_template *pattern_template = + rte_flow_pattern_template_create(port, &attr, &pattern, &err); + +The concrete value to match on will be provided at the rule creation. + +Actions templates +^^^^^^^^^^^^^^^^^ + +The actions template holds a list of action types to be used in flow rules. +The mask parameter allows specifying a shared constant value for every rule. +The actions template may be used by multiple tables and must not be destroyed +until all these tables are destroyed first. + +.. code-block:: c + + struct rte_flow_actions_template * + rte_flow_actions_template_create(uint16_t port_id, + const struct rte_flow_actions_template_attr *template_attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *error); + +For example, to create an actions template with the same Mark ID +but different Queue Index for every rule: + +.. 
code-block:: c + + rte_flow_actions_template_attr attr = {.ingress = 1}; + struct rte_flow_action act[] = { + /* Mark ID is 4 for every rule, Queue Index is unique */ + [0] = {.type = RTE_FLOW_ACTION_TYPE_MARK, + .conf = &(struct rte_flow_action_mark){.id = 4}}, + [1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE}, + [2] = {.type = RTE_FLOW_ACTION_TYPE_END,}, + }; + struct rte_flow_action msk[] = { + /* Assign to MARK mask any non-zero value to make it constant */ + [0] = {.type = RTE_FLOW_ACTION_TYPE_MARK, + .conf = &(struct rte_flow_action_mark){.id = 1}}, + [1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE}, + [2] = {.type = RTE_FLOW_ACTION_TYPE_END,}, + }; + struct rte_flow_error err; + + struct rte_flow_actions_template *actions_template = + rte_flow_actions_template_create(port, &attr, &act, &msk, &err); + +The concrete value for Queue Index will be provided at the rule creation. + +Template table +^^^^^^^^^^^^^^ + +A template table combines a number of pattern and actions templates along with +shared flow rule attributes (group ID, priority and traffic direction). +This way a PMD/HW can prepare all the resources needed for efficient flow rules +creation in the datapath. To avoid any hiccups due to memory reallocation, +the maximum number of flow rules is defined at table creation time. +Any flow rule creation beyond the maximum table size is rejected. +Application may create another table to accommodate more rules in this case. + +.. code-block:: c + + struct rte_flow_template_table * + rte_flow_template_table_create(uint16_t port_id, + const struct rte_flow_template_table_attr *table_attr, + struct rte_flow_pattern_template *pattern_templates[], + uint8_t nb_pattern_templates, + struct rte_flow_actions_template *actions_templates[], + uint8_t nb_actions_templates, + struct rte_flow_error *error); + +A table can be created only after the Flow Rules management is configured +and pattern and actions templates are created. + +.. 
code-block:: c
+
+   struct rte_flow_template_table_attr table_attr = {
+           .flow_attr.ingress = 1,
+           .nb_flows = 10000,
+   };
+   uint8_t nb_pattern_templ = 1;
+   struct rte_flow_pattern_template *pattern_templates[nb_pattern_templ];
+   pattern_templates[0] = pattern_template;
+   uint8_t nb_actions_templ = 1;
+   struct rte_flow_actions_template *actions_templates[nb_actions_templ];
+   actions_templates[0] = actions_template;
+   struct rte_flow_error error;
+
+   struct rte_flow_template_table *table =
+           rte_flow_template_table_create(port, &table_attr,
+                   pattern_templates, nb_pattern_templ,
+                   actions_templates, nb_actions_templ,
+                   &error);
+
 .. _flow_isolated_mode:

 Flow isolated mode

diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 68b41f2062..8211f5c22c 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -105,6 +105,14 @@ New Features
   engine, allowing to pre-allocate some resources for better performance.
   Added ``rte_flow_info_get`` API to retrieve available resources.

+  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
+    with the same flow attributes and common matching patterns and actions
+    defined by ``rte_flow_pattern_template_create`` and
+    ``rte_flow_actions_template_create`` respectively.
+    Corresponding functions to destroy these entities are:
+    ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
+    and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**

   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 7ec7a95a6b..1f634637aa 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -1460,3 +1460,255 @@ rte_flow_configure(uint16_t port_id, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, rte_strerror(ENOTSUP)); } + +struct rte_flow_pattern_template * +rte_flow_pattern_template_create(uint16_t port_id, + const struct rte_flow_pattern_template_attr *template_attr, + const struct rte_flow_item pattern[], + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow_pattern_template *template; + + if (unlikely(!ops)) + return NULL; + if (dev->data->flow_configured == 0) { + RTE_FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.\n", + port_id); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_STATE, + NULL, rte_strerror(EINVAL)); + return NULL; + } + if (template_attr == NULL) { + RTE_FLOW_LOG(ERR, + "Port %"PRIu16" template attr is NULL.\n", + port_id); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, rte_strerror(EINVAL)); + return NULL; + } + if (pattern == NULL) { + RTE_FLOW_LOG(ERR, + "Port %"PRIu16" pattern is NULL.\n", + port_id); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, rte_strerror(EINVAL)); + return NULL; + } + if (likely(!!ops->pattern_template_create)) { + template = ops->pattern_template_create(dev, template_attr, + pattern, error); + if (template == NULL) + flow_err(port_id, -rte_errno, error); + return template; + } + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); + return NULL; +} + +int +rte_flow_pattern_template_destroy(uint16_t port_id, + struct rte_flow_pattern_template *pattern_template, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, 
error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(pattern_template == NULL))
+		return 0;
+	if (likely(!!ops->pattern_template_destroy)) {
+		return flow_err(port_id,
+				ops->pattern_template_destroy(dev,
+							      pattern_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+			const struct rte_flow_actions_template_attr *template_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_actions_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_STATE,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (template_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" template attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (actions == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" actions is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (masks == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" masks is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (likely(!!ops->actions_template_create)) {
+		template = ops->actions_template_create(dev, template_attr,
+							actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); + return NULL; +} + +int +rte_flow_actions_template_destroy(uint16_t port_id, + struct rte_flow_actions_template *actions_template, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (unlikely(actions_template == NULL)) + return 0; + if (likely(!!ops->actions_template_destroy)) { + return flow_err(port_id, + ops->actions_template_destroy(dev, + actions_template, + error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + +struct rte_flow_template_table * +rte_flow_template_table_create(uint16_t port_id, + const struct rte_flow_template_table_attr *table_attr, + struct rte_flow_pattern_template *pattern_templates[], + uint8_t nb_pattern_templates, + struct rte_flow_actions_template *actions_templates[], + uint8_t nb_actions_templates, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow_template_table *table; + + if (unlikely(!ops)) + return NULL; + if (dev->data->flow_configured == 0) { + RTE_FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.\n", + port_id); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_STATE, + NULL, rte_strerror(EINVAL)); + return NULL; + } + if (table_attr == NULL) { + RTE_FLOW_LOG(ERR, + "Port %"PRIu16" table attr is NULL.\n", + port_id); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, rte_strerror(EINVAL)); + return NULL; + } + if (pattern_templates == NULL) { + RTE_FLOW_LOG(ERR, + "Port %"PRIu16" pattern templates is NULL.\n", + port_id); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, rte_strerror(EINVAL)); + return NULL; + } + if 
(actions_templates == NULL) { + RTE_FLOW_LOG(ERR, + "Port %"PRIu16" actions templates is NULL.\n", + port_id); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, rte_strerror(EINVAL)); + return NULL; + } + if (likely(!!ops->template_table_create)) { + table = ops->template_table_create(dev, table_attr, + pattern_templates, nb_pattern_templates, + actions_templates, nb_actions_templates, + error); + if (table == NULL) + flow_err(port_id, -rte_errno, error); + return table; + } + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); + return NULL; +} + +int +rte_flow_template_table_destroy(uint16_t port_id, + struct rte_flow_template_table *template_table, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (unlikely(template_table == NULL)) + return 0; + if (likely(!!ops->template_table_destroy)) { + return flow_err(port_id, + ops->template_table_destroy(dev, + template_table, + error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 7e6f5eba46..ffc38fcc3b 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4983,6 +4983,286 @@ rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *port_attr, struct rte_flow_error *error); +/** + * Opaque type returned after successful creation of pattern template. + * This handle can be used to manage the created pattern template. + */ +struct rte_flow_pattern_template; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Flow pattern template attributes. + */ +__extension__ +struct rte_flow_pattern_template_attr { + /** + * Relaxed matching policy. 
+ * - If 1, matching is performed only on items with the mask member set + * and matching on protocol layers specified without any masks is skipped. + * - If 0, matching on protocol layers specified without any masks is done + * as well. This is the standard behaviour of Flow API now. + */ + uint32_t relaxed_matching:1; + /** + * Flow direction for the pattern template. + * At least one direction must be specified. + */ + /** Pattern valid for rules applied to ingress traffic. */ + uint32_t ingress:1; + /** Pattern valid for rules applied to egress traffic. */ + uint32_t egress:1; + /** Pattern valid for rules applied to transfer traffic. */ + uint32_t transfer:1; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create flow pattern template. + * + * The pattern template defines common matching fields without values. + * For example, matching on 5 tuple TCP flow, the template will be + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port), + * while values for each rule will be set during the flow rule creation. + * The number and order of items in the template must be the same + * at the rule creation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] template_attr + * Pattern template attributes. + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * The spec member of an item is not used unless the end member is used. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct rte_flow_pattern_template * +rte_flow_pattern_template_create(uint16_t port_id, + const struct rte_flow_pattern_template_attr *template_attr, + const struct rte_flow_item pattern[], + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. 
+ * + * Destroy flow pattern template. + * + * This function may be called only when + * there are no more tables referencing this template. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] pattern_template + * Handle of the template to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_pattern_template_destroy(uint16_t port_id, + struct rte_flow_pattern_template *pattern_template, + struct rte_flow_error *error); + +/** + * Opaque type returned after successful creation of actions template. + * This handle can be used to manage the created actions template. + */ +struct rte_flow_actions_template; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Flow actions template attributes. + */ +__extension__ +struct rte_flow_actions_template_attr { + /** + * Flow direction for the actions template. + * At least one direction must be specified. + */ + /** Action valid for rules applied to ingress traffic. */ + uint32_t ingress:1; + /** Action valid for rules applied to egress traffic. */ + uint32_t egress:1; + /** Action valid for rules applied to transfer traffic. */ + uint32_t transfer:1; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create flow actions template. + * + * The actions template holds a list of action types without values. + * For example, the template to change TCP ports is TCP(s_port + d_port), + * while values for each rule will be set during the flow rule creation. + * The number and order of actions in the template must be the same + * at the rule creation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] template_attr + * Template attributes. 
+ * @param[in] actions + * Associated actions (list terminated by the END action). + * The spec member is only used if @p masks spec is non-zero. + * @param[in] masks + * List of actions that marks which of the action's member is constant. + * A mask has the same format as the corresponding action. + * If the action field in @p masks is not 0, + * the corresponding value in an action from @p actions will be the part + * of the template and used in all flow rules. + * The order of actions in @p masks is the same as in @p actions. + * In case of indirect actions present in @p actions, + * the actual action type should be present in @p mask. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct rte_flow_actions_template * +rte_flow_actions_template_create(uint16_t port_id, + const struct rte_flow_actions_template_attr *template_attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy flow actions template. + * + * This function may be called only when + * there are no more tables referencing this template. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] actions_template + * Handle to the template to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_actions_template_destroy(uint16_t port_id, + struct rte_flow_actions_template *actions_template, + struct rte_flow_error *error); + +/** + * Opaque type returned after successful creation of a template table. 
+ * This handle can be used to manage the created template table. + */ +struct rte_flow_template_table; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Table attributes. + */ +struct rte_flow_template_table_attr { + /** + * Flow attributes to be used in each rule generated from this table. + */ + struct rte_flow_attr flow_attr; + /** + * Maximum number of flow rules that this table holds. + */ + uint32_t nb_flows; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create flow template table. + * + * A template table consists of multiple pattern templates and actions + * templates associated with a single set of rule attributes (group ID, + * priority and traffic direction). + * + * Each rule is free to use any combination of pattern and actions templates + * and specify particular values for items and actions it would like to change. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] table_attr + * Template table attributes. + * @param[in] pattern_templates + * Array of pattern templates to be used in this table. + * @param[in] nb_pattern_templates + * The number of pattern templates in the pattern_templates array. + * @param[in] actions_templates + * Array of actions templates to be used in this table. + * @param[in] nb_actions_templates + * The number of actions templates in the actions_templates array. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. 
+ */ +__rte_experimental +struct rte_flow_template_table * +rte_flow_template_table_create(uint16_t port_id, + const struct rte_flow_template_table_attr *table_attr, + struct rte_flow_pattern_template *pattern_templates[], + uint8_t nb_pattern_templates, + struct rte_flow_actions_template *actions_templates[], + uint8_t nb_actions_templates, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy flow template table. + * + * This function may be called only when + * there are no more flow rules referencing this table. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] template_table + * Handle to the table to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_template_table_destroy(uint16_t port_id, + struct rte_flow_template_table *template_table, + struct rte_flow_error *error); + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 7c29930d0f..2d96db1dc7 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -162,6 +162,43 @@ struct rte_flow_ops { (struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr, struct rte_flow_error *err); + /** See rte_flow_pattern_template_create() */ + struct rte_flow_pattern_template *(*pattern_template_create) + (struct rte_eth_dev *dev, + const struct rte_flow_pattern_template_attr *template_attr, + const struct rte_flow_item pattern[], + struct rte_flow_error *err); + /** See rte_flow_pattern_template_destroy() */ + int (*pattern_template_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_pattern_template *pattern_template, + struct rte_flow_error *err); + /** See rte_flow_actions_template_create() */ + struct 
rte_flow_actions_template *(*actions_template_create) + (struct rte_eth_dev *dev, + const struct rte_flow_actions_template_attr *template_attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *err); + /** See rte_flow_actions_template_destroy() */ + int (*actions_template_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_actions_template *actions_template, + struct rte_flow_error *err); + /** See rte_flow_template_table_create() */ + struct rte_flow_template_table *(*template_table_create) + (struct rte_eth_dev *dev, + const struct rte_flow_template_table_attr *table_attr, + struct rte_flow_pattern_template *pattern_templates[], + uint8_t nb_pattern_templates, + struct rte_flow_actions_template *actions_templates[], + uint8_t nb_actions_templates, + struct rte_flow_error *err); + /** See rte_flow_template_table_destroy() */ + int (*template_table_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_template_table *template_table, + struct rte_flow_error *err); }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 0d849c153f..62ff791261 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -266,6 +266,12 @@ EXPERIMENTAL { rte_eth_ip_reassembly_conf_set; rte_flow_info_get; rte_flow_configure; + rte_flow_pattern_template_create; + rte_flow_pattern_template_destroy; + rte_flow_actions_template_create; + rte_flow_actions_template_destroy; + rte_flow_template_table_create; + rte_flow_template_table_destroy; }; INTERNAL { From patchwork Mon Feb 21 23:02:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 107915 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BCF2FA034F; Tue, 22 Feb 
From: Alexander Kozyrev
Subject: [PATCH v9 03/11] ethdev: bring in async queue-based flow rules operations
Date: Tue, 22 Feb 2022 01:02:32 +0200
Message-ID: <20220221230240.2409665-4-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com>
References: <20220220034409.2226860-1-akozyrev@nvidia.com> <20220221230240.2409665-1-akozyrev@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

A new, faster, queue-based flow rules management mechanism is needed
for applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of flow rule creation/destruction
on the datapath.

Note that queues are not thread-safe, and a queue should be accessed
from the same thread for all of its operations. It is the application's
responsibility to synchronize access in case of multi-threaded use
of the same queue.

The rte_flow_async_create() function enqueues a flow creation operation
on the requested queue. It benefits from already configured resources and
sets unique values on top of the item and action templates. The flow rule
is offloaded to the hardware asynchronously, and the function returns
immediately to spare the CPU for further packet processing. The application
must invoke the rte_flow_pull() function to complete the flow rule
operation offloading, to clear the queue, and to receive the operation
status.

The rte_flow_async_destroy() function enqueues a flow destruction operation
on the requested queue.
Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 .../prog_guide/img/rte_flow_async_init.svg  | 205 ++++++++++
 .../prog_guide/img/rte_flow_async_usage.svg | 354 ++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst          | 124 ++++++
 doc/guides/rel_notes/release_22_03.rst      |   7 +
 lib/ethdev/rte_flow.c                       |  83 +++-
 lib/ethdev/rte_flow.h                       | 241 ++++++++++++
 lib/ethdev/rte_flow_driver.h                |  35 ++
 lib/ethdev/version.map                      |   4 +
 8 files changed, 1051 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_async_init.svg b/doc/guides/prog_guide/img/rte_flow_async_init.svg
new file mode 100644
index 0000000000..f66e9c73d7
[rte_flow_async_init.svg: initialization flow diagram:
 rte_eal_init() -> rte_eth_dev_configure() -> rte_flow_configure() ->
 rte_flow_pattern_template_create() / rte_flow_actions_template_create() ->
 rte_flow_template_table_create() -> rte_eth_dev_start()]

diff --git a/doc/guides/prog_guide/img/rte_flow_async_usage.svg b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
new file mode 100644
index 0000000000..bb978bca1e
[rte_flow_async_usage.svg: datapath main-loop diagram:
 rte_eth_rx_burst() -> analyze packet -> add new rule? ->
 rte_flow_async_create() -> more packets? -> destroy the rule? ->
[rte_flow_async_usage.svg, continued: rte_flow_async_destroy(),
 rte_flow_push() and rte_flow_pull() yes/no decision branches]

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 6cdfea09be..c6f6f0afba 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3624,12 +3624,16 @@ Expected number of resources in an application allows PMD to prepare
 and optimize NIC hardware configuration and memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rules operations via
+queue-based API, see the `Asynchronous operations`_ section.

 .. code-block:: c

    int
    rte_flow_configure(uint16_t port_id,
                       const struct rte_flow_port_attr *port_attr,
+                      uint16_t nb_queue,
+                      const struct rte_flow_queue_attr *queue_attr[],
                       struct rte_flow_error *error);

 Information about the number of available resources can be retrieved via
@@ -3640,6 +3644,7 @@ Information about the number of available resources can be retrieved via

    int
    rte_flow_info_get(uint16_t port_id,
                      struct rte_flow_port_info *port_info,
+                     struct rte_flow_queue_info *queue_info,
                      struct rte_flow_error *error);

 Flow templates
@@ -3777,6 +3782,125 @@ and pattern and actions templates are created.
                &actions_templates, nb_actions_templ,
                &error);

+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues:
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked by the app's datapath;
+  packet processing can continue while queue operations are processed by NIC.
+
+- The number of flow queues is configured at initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+ +- Operations can be postponed and pushed to NIC in batches. + +- Results pulling must be done on time to avoid queue overflows. + +- User data is returned as part of the result to identify an operation. + +- Flow handle is valid once the creation operation is enqueued and must be + destroyed even if the operation is not successful and the rule is not inserted. + +- Application must wait for the creation operation result before enqueueing + the deletion operation to make sure the creation is processed by NIC. + +The asynchronous flow rule insertion logic can be broken into two phases. + +1. Initialization stage as shown here: + +.. _figure_rte_flow_async_init: + +.. figure:: img/rte_flow_async_init.* + +2. Main loop as presented on a datapath application example: + +.. _figure_rte_flow_async_usage: + +.. figure:: img/rte_flow_async_usage.* + +Enqueue creation operation +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Enqueueing a flow rule creation operation is similar to simple creation. + +.. code-block:: c + + struct rte_flow * + rte_flow_async_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_template_table *template_table, + const struct rte_flow_item pattern[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *error); + +A valid handle in case of success is returned. It must be destroyed later +by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW. + +Enqueue destruction operation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Enqueueing a flow rule destruction operation is similar to simple destruction. + +.. 
code-block:: c + + int + rte_flow_async_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + void *user_data, + struct rte_flow_error *error); + +Push enqueued operations +~~~~~~~~~~~~~~~~~~~~~~~~ + +Pushing all internally stored rules from a queue to the NIC. + +.. code-block:: c + + int + rte_flow_push(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_error *error); + +There is the postpone attribute in the queue operation attributes. +When it is set, multiple operations can be bulked together and not sent to HW +right away to save SW/HW interactions and prioritize throughput over latency. +The application must invoke this function to actually push all outstanding +operations to HW in this case. + +Pull enqueued operations +~~~~~~~~~~~~~~~~~~~~~~~~ + +Pulling asynchronous operations results. + +The application must invoke this function in order to complete asynchronous +flow rule operations and to receive flow rule operations statuses. + +.. code-block:: c + + int + rte_flow_pull(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_op_result res[], + uint16_t n_res, + struct rte_flow_error *error); + +Multiple outstanding operation results can be pulled simultaneously. +User data may be provided during a flow creation/destruction in order +to distinguish between multiple operations. User data is returned as part +of the result to provide a method to detect which operation is completed. + .. _flow_isolated_mode: Flow isolated mode diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst index 8211f5c22c..2477f53ca6 100644 --- a/doc/guides/rel_notes/release_22_03.rst +++ b/doc/guides/rel_notes/release_22_03.rst @@ -113,6 +113,13 @@ New Features ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy`` and ``rte_flow_actions_template_destroy``. 
+* **Added functions for asynchronous flow rules creation/destruction**
+
+  * ethdev: Added ``rte_flow_async_create`` and ``rte_flow_async_destroy`` API
+    to enqueue flow creation/destruction operations asynchronously as well as
+    ``rte_flow_pull`` to poll and retrieve results of these operations and
+    ``rte_flow_push`` to push all the in-flight operations to the NIC.
+
 * **Updated AF_XDP PMD**

   * Added support for libxdp >=v1.2.2.

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 1f634637aa..c314129870 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1396,6 +1396,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1415,7 +1416,7 @@ rte_flow_info_get(uint16_t port_id,
 	}
 	if (likely(!!ops->info_get)) {
 		return flow_err(port_id,
-				ops->info_get(dev, port_info, error),
+				ops->info_get(dev, port_info, queue_info, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1426,6 +1427,8 @@
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1450,8 +1453,12 @@ rte_flow_configure(uint16_t port_id,
 		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
 		return -EINVAL;
 	}
+	if (queue_attr == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.\n", port_id);
+		return -EINVAL;
+	}
 	if (likely(!!ops->configure)) {
-		ret = ops->configure(dev, port_attr, error);
+		ret = ops->configure(dev, port_attr, nb_queue, queue_attr, error);
 		if (ret == 0)
 			dev->data->flow_configured = 1;
 		return flow_err(port_id, ret, error);
@@ -1712,3 +1719,75 @@ rte_flow_template_table_destroy(uint16_t port_id,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, rte_strerror(ENOTSUP)); } + +struct rte_flow * +rte_flow_async_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_template_table *template_table, + const struct rte_flow_item pattern[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow *flow; + + flow = ops->async_create(dev, queue_id, + op_attr, template_table, + pattern, pattern_template_index, + actions, actions_template_index, + user_data, error); + if (flow == NULL) + flow_err(port_id, -rte_errno, error); + return flow; +} + +int +rte_flow_async_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + void *user_data, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + return flow_err(port_id, + ops->async_destroy(dev, queue_id, + op_attr, flow, + user_data, error), + error); +} + +int +rte_flow_push(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + return flow_err(port_id, + ops->push(dev, queue_id, error), + error); +} + +int +rte_flow_pull(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_op_result res[], + uint16_t n_res, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + ret = ops->pull(dev, queue_id, res, n_res, error); + return ret ? 
ret : flow_err(port_id, ret, error); +} diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index ffc38fcc3b..3fb7cb03ae 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4884,6 +4884,10 @@ rte_flow_flex_item_release(uint16_t port_id, * */ struct rte_flow_port_info { + /** + * Maximum number of queues for asynchronous operations. + */ + uint32_t max_nb_queues; /** * Maximum number of counters. * @see RTE_FLOW_ACTION_TYPE_COUNT @@ -4901,6 +4905,21 @@ struct rte_flow_port_info { uint32_t max_nb_meters; }; +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Information about flow engine asynchronous queues. + * The value only valid if @p port_attr.max_nb_queues is not zero. + * + */ +struct rte_flow_queue_info { + /** + * Maximum number of operations a queue can hold. + */ + uint32_t max_size; +}; + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice. @@ -4912,6 +4931,9 @@ struct rte_flow_port_info { * @param[out] port_info * A pointer to a structure of type *rte_flow_port_info* * to be filled with the resources information of the port. + * @param[out] queue_info + * A pointer to a structure of type *rte_flow_queue_info* + * to be filled with the asynchronous queues information. * @param[out] error * Perform verbose error reporting if not NULL. * PMDs initialize this structure in case of error only. @@ -4923,6 +4945,7 @@ __rte_experimental int rte_flow_info_get(uint16_t port_id, struct rte_flow_port_info *port_info, + struct rte_flow_queue_info *queue_info, struct rte_flow_error *error); /** @@ -4951,6 +4974,21 @@ struct rte_flow_port_attr { uint32_t nb_meters; }; +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Flow engine asynchronous queues settings. + * The value means default value picked by PMD. + * + */ +struct rte_flow_queue_attr { + /** + * Number of flow rule operations a queue can hold. 
+ */ + uint32_t size; +}; + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice. @@ -4970,6 +5008,11 @@ struct rte_flow_port_attr { * Port identifier of Ethernet device. * @param[in] port_attr * Port configuration attributes. + * @param[in] nb_queue + * Number of flow queues to be configured. + * @param[in] queue_attr + * Array that holds attributes for each flow queue. + * Number of elements is set in @p port_attr.nb_queues. * @param[out] error * Perform verbose error reporting if not NULL. * PMDs initialize this structure in case of error only. @@ -4981,6 +5024,8 @@ __rte_experimental int rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error); /** @@ -5263,6 +5308,202 @@ rte_flow_template_table_destroy(uint16_t port_id, struct rte_flow_template_table *template_table, struct rte_flow_error *error); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Asynchronous operation attributes. + */ +__extension__ +struct rte_flow_op_attr { + /** + * When set, the requested action will not be sent to the HW immediately. + * The application must call the rte_flow_queue_push to actually send it. + */ + uint32_t postpone:1; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue rule creation operation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue used to insert the rule. + * @param[in] op_attr + * Rule creation operation attributes. + * @param[in] template_table + * Template table to select templates from. + * @param[in] pattern + * List of pattern items to be used. + * The list order should match the order in the pattern template. + * The spec is the only relevant member of the item that is being used. + * @param[in] pattern_template_index + * Pattern template index in the table. 
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   The rule handle does not mean that the rule has been populated in HW.
+ *   Only the completion result indicates whether the operation succeeded
+ *   or failed.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_op_attr *op_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * The application should assume that the rule handle is no longer valid
+ * after calling this function.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] op_attr
+ *   Rule destruction operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_async_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + void *user_data, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Push all internally stored rules to the HW. + * Postponed rules are rules that were inserted with the postpone flag set. + * Can be used to notify the HW about batch of rules prepared by the SW to + * reduce the number of communications between the HW and SW. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue to be pushed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_push(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Asynchronous operation status. + */ +enum rte_flow_op_status { + /** + * The operation was completed successfully. + */ + RTE_FLOW_OP_SUCCESS, + /** + * The operation was not completed successfully. + */ + RTE_FLOW_OP_ERROR, +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Asynchronous operation result. + */ +__extension__ +struct rte_flow_op_result { + /** + * Returns the status of the operation that this completion signals. + */ + enum rte_flow_op_status status; + /** + * The user data that will be returned on the completion events. + */ + void *user_data; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Pull a rte flow operation. 
+ * The application must invoke this function in order to complete + * the flow rule offloading and to retrieve the flow rule operation status. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue which is used to pull the operation. + * @param[out] res + * Array of results that will be set. + * @param[in] n_res + * Maximum number of results that can be returned. + * This value is equal to the size of the res array. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Number of results that were pulled, + * a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_pull(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_op_result res[], + uint16_t n_res, + struct rte_flow_error *error); + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 2d96db1dc7..5907dd63c3 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -156,11 +156,14 @@ struct rte_flow_ops { int (*info_get) (struct rte_eth_dev *dev, struct rte_flow_port_info *port_info, + struct rte_flow_queue_info *queue_info, struct rte_flow_error *err); /** See rte_flow_configure() */ int (*configure) (struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *err); /** See rte_flow_pattern_template_create() */ struct rte_flow_pattern_template *(*pattern_template_create) @@ -199,6 +202,38 @@ struct rte_flow_ops { (struct rte_eth_dev *dev, struct rte_flow_template_table *template_table, struct rte_flow_error *err); + /** See rte_flow_async_create() */ + struct rte_flow *(*async_create) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_template_table *template_table, + const struct rte_flow_item 
pattern[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *err); + /** See rte_flow_async_destroy() */ + int (*async_destroy) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + void *user_data, + struct rte_flow_error *err); + /** See rte_flow_push() */ + int (*push) + (struct rte_eth_dev *dev, + uint32_t queue_id, + struct rte_flow_error *err); + /** See rte_flow_pull() */ + int (*pull) + (struct rte_eth_dev *dev, + uint32_t queue_id, + struct rte_flow_op_result res[], + uint16_t n_res, + struct rte_flow_error *error); }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 62ff791261..13c1a22118 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -272,6 +272,10 @@ EXPERIMENTAL { rte_flow_actions_template_destroy; rte_flow_template_table_create; rte_flow_template_table_destroy; + rte_flow_async_create; + rte_flow_async_destroy; + rte_flow_push; + rte_flow_pull; }; INTERNAL { From patchwork Mon Feb 21 23:02:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 107916 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9BD52A034F; Tue, 22 Feb 2022 00:03:36 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0741D4115F; Tue, 22 Feb 2022 00:03:17 +0100 (CET) Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2048.outbound.protection.outlook.com [40.107.244.48]) by mails.dpdk.org (Postfix) with ESMTP id E705F41154 for ; Tue, 22 Feb 2022 00:03:14 +0100 (CET) ARC-Seal: i=1; a=rsa-sha256; 
From: Alexander Kozyrev
Subject: [PATCH v9 04/11] ethdev: bring in async indirect actions operations
Date: Tue, 22 Feb 2022 01:02:33 +0200
Message-ID: <20220221230240.2409665-5-akozyrev@nvidia.com>
In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com>

Queue-based flow rules management mechanism is suitable not only for flow
rules creation/destruction, but also for speeding up other types of Flow
API management. Indirect action object operations may be executed
asynchronously as well. Provide async versions for all indirect action
operations, namely: rte_flow_async_action_handle_create,
rte_flow_async_action_handle_destroy and
rte_flow_async_action_handle_update.
Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- doc/guides/prog_guide/rte_flow.rst | 50 ++++++++++++ doc/guides/rel_notes/release_22_03.rst | 5 ++ lib/ethdev/rte_flow.c | 61 ++++++++++++++ lib/ethdev/rte_flow.h | 109 +++++++++++++++++++++++++ lib/ethdev/rte_flow_driver.h | 26 ++++++ lib/ethdev/version.map | 3 + 6 files changed, 254 insertions(+) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index c6f6f0afba..8148531073 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3861,6 +3861,56 @@ Enqueueing a flow rule destruction operation is similar to simple destruction. void *user_data, struct rte_flow_error *error); +Enqueue indirect action creation operation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Asynchronous version of indirect action creation API. + +.. code-block:: c + + struct rte_flow_action_handle * + rte_flow_async_action_handle_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + void *user_data, + struct rte_flow_error *error); + +A valid handle in case of success is returned. It must be destroyed later by +``rte_flow_async_action_handle_destroy()`` even if the rule was rejected. + +Enqueue indirect action destruction operation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Asynchronous version of indirect action destruction API. + +.. code-block:: c + + int + rte_flow_async_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + void *user_data, + struct rte_flow_error *error); + +Enqueue indirect action update operation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Asynchronous version of indirect action update API. + +.. 
code-block:: c + + int + rte_flow_async_action_handle_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + void *user_data, + struct rte_flow_error *error); + Push enqueued operations ~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst index 2477f53ca6..da186315a5 100644 --- a/doc/guides/rel_notes/release_22_03.rst +++ b/doc/guides/rel_notes/release_22_03.rst @@ -120,6 +120,11 @@ New Features ``rte_flow_pull`` to poll and retrieve results of these operations and ``rte_flow_push`` to push all the in-flight operations to the NIC. + * ethdev: Added asynchronous API for indirect actions management: + ``rte_flow_async_action_handle_create``, + ``rte_flow_async_action_handle_destroy`` and + ``rte_flow_async_action_handle_update``. + * **Updated AF_XDP PMD** * Added support for libxdp >=v1.2.2. diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index c314129870..9a902da660 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -1791,3 +1791,64 @@ rte_flow_pull(uint16_t port_id, ret = ops->pull(dev, queue_id, res, n_res, error); return ret ? 
ret : flow_err(port_id, ret, error); } + +struct rte_flow_action_handle * +rte_flow_async_action_handle_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + void *user_data, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow_action_handle *handle; + + handle = ops->async_action_handle_create(dev, queue_id, op_attr, + indir_action_conf, action, user_data, error); + if (handle == NULL) + flow_err(port_id, -rte_errno, error); + return handle; +} + +int +rte_flow_async_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_handle *action_handle, + void *user_data, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + ret = ops->async_action_handle_destroy(dev, queue_id, op_attr, + action_handle, user_data, error); + return flow_err(port_id, ret, error); +} + +int +rte_flow_async_action_handle_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + void *user_data, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (unlikely(!ops->async_action_handle_update)) + return rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOSYS)); + ret = ops->async_action_handle_update(dev, queue_id, op_attr, + action_handle, update, user_data, error); + return flow_err(port_id, ret, error); +} diff --git a/lib/ethdev/rte_flow.h 
b/lib/ethdev/rte_flow.h index 3fb7cb03ae..d8827dd184 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -5504,6 +5504,115 @@ rte_flow_pull(uint16_t port_id, uint16_t n_res, struct rte_flow_error *error); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action creation operation. + * @see rte_flow_action_handle_create + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to create the rule. + * @param[in] op_attr + * Indirect action creation operation attributes. + * @param[in] indir_action_conf + * Action configuration for the indirect action object creation. + * @param[in] action + * Specific configuration of the indirect action object. + * @param[in] user_data + * The user data that will be returned on the completion events. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * A valid handle in case of success, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct rte_flow_action_handle * +rte_flow_async_action_handle_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + void *user_data, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action destruction operation. + * The destroy queue must be the same + * as the queue on which the action was created. + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to destroy the rule. + * @param[in] op_attr + * Indirect action destruction operation attributes. + * @param[in] action_handle + * Handle for the indirect action object to be destroyed. 
+ * @param[in] user_data + * The user data that will be returned on the completion events. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_async_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_handle *action_handle, + void *user_data, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action update operation. + * @see rte_flow_action_handle_create + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to update the indirect action object. + * @param[in] op_attr + * Indirect action update operation attributes. + * @param[in] action_handle + * Handle for the indirect action object to be updated. + * @param[in] update + * Update profile specification used to modify the action pointed by handle. + * *update* can be of the same type as the immediate action configuration + * that corresponds to the *handle* argument at creation time, or a wrapper + * structure that includes the action configuration to be updated and bit + * fields indicating which members of the action to update. + * @param[in] user_data + * The user data that will be returned on the completion events. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */ +__rte_experimental +int +rte_flow_async_action_handle_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + void *user_data, + struct rte_flow_error *error); #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 5907dd63c3..2bff732d6a 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -234,6 +234,32 @@ struct rte_flow_ops { struct rte_flow_op_result res[], uint16_t n_res, struct rte_flow_error *error); + /** See rte_flow_async_action_handle_create() */ + struct rte_flow_action_handle *(*async_action_handle_create) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + void *user_data, + struct rte_flow_error *err); + /** See rte_flow_async_action_handle_destroy() */ + int (*async_action_handle_destroy) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_handle *action_handle, + void *user_data, + struct rte_flow_error *error); + /** See rte_flow_async_action_handle_update() */ + int (*async_action_handle_update) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + void *user_data, + struct rte_flow_error *error); }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 13c1a22118..20391ab29e 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -276,6 +276,9 @@ EXPERIMENTAL { rte_flow_async_destroy; rte_flow_push; rte_flow_pull; + rte_flow_async_action_handle_create; + rte_flow_async_action_handle_destroy; + rte_flow_async_action_handle_update; }; INTERNAL { From patchwork Mon Feb 21 23:02:34 2022 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 107917 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev Subject: [PATCH v9 05/11] app/testpmd: add flow engine configuration Date: Tue, 22 Feb 2022 01:02:34 +0200 Message-ID: <20220221230240.2409665-6-akozyrev@nvidia.com> In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com> References: <20220220034409.2226860-1-akozyrev@nvidia.com> <20220221230240.2409665-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_configure API. Provide the command line interface for flow management. Usage example: flow configure 0 queues_number 8 queues_size 256 Implement rte_flow_info_get API to get available resources: Usage example: flow info 0 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 126 +++++++++++++++++++- app/test-pmd/config.c | 61 ++++++++++ app/test-pmd/testpmd.h | 7 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 61 +++++++++- 4 files changed, 252 insertions(+), 3 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index c0644d678c..0533a33ca2 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -72,6 +72,8 @@ enum index { /* Top-level command. */ FLOW, /* Sub-level commands. */ + INFO, + CONFIGURE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -122,6 +124,13 @@ enum index { DUMP_ALL, DUMP_ONE, + /* Configure arguments */ + CONFIG_QUEUES_NUMBER, + CONFIG_QUEUES_SIZE, + CONFIG_COUNTERS_NUMBER, + CONFIG_AGING_OBJECTS_NUMBER, + CONFIG_METERS_NUMBER, + /* Indirect action arguments */ INDIRECT_ACTION_CREATE, INDIRECT_ACTION_UPDATE, @@ -868,6 +877,11 @@ struct buffer { enum index command; /**< Flow command. */ portid_t port; /**< Affected port ID. */ union { + struct { + struct rte_flow_port_attr port_attr; + uint32_t nb_queue; + struct rte_flow_queue_attr queue_attr; + } configure; /**< Configuration arguments.
*/ struct { uint32_t *action_id; uint32_t action_id_n; @@ -949,6 +963,16 @@ static const enum index next_flex_item[] = { ZERO, }; +static const enum index next_config_attr[] = { + CONFIG_QUEUES_NUMBER, + CONFIG_QUEUES_SIZE, + CONFIG_COUNTERS_NUMBER, + CONFIG_AGING_OBJECTS_NUMBER, + CONFIG_METERS_NUMBER, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2045,6 +2069,9 @@ static int parse_aged(struct context *, const struct token *, static int parse_isolate(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_configure(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2270,7 +2297,9 @@ static const struct token token_list[] = { .type = "{command} {port_id} [{arg} [...]]", .help = "manage ingress/egress flow rules", .next = NEXT(NEXT_ENTRY - (INDIRECT_ACTION, + (INFO, + CONFIGURE, + INDIRECT_ACTION, VALIDATE, CREATE, DESTROY, @@ -2285,6 +2314,65 @@ static const struct token token_list[] = { .call = parse_init, }, /* Top-level command. */ + [INFO] = { + .name = "info", + .help = "get information about flow engine", + .next = NEXT(NEXT_ENTRY(END), + NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_configure, + }, + /* Top-level command. */ + [CONFIGURE] = { + .name = "configure", + .help = "configure flow engine", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_configure, + }, + /* Configure arguments. 
*/ + [CONFIG_QUEUES_NUMBER] = { + .name = "queues_number", + .help = "number of queues", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.nb_queue)), + }, + [CONFIG_QUEUES_SIZE] = { + .name = "queues_size", + .help = "number of elements in queues", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.queue_attr.size)), + }, + [CONFIG_COUNTERS_NUMBER] = { + .name = "counters_number", + .help = "number of counters", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_counters)), + }, + [CONFIG_AGING_OBJECTS_NUMBER] = { + .name = "aging_counters_number", + .help = "number of aging objects", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_aging_objects)), + }, + [CONFIG_METERS_NUMBER] = { + .name = "meters_number", + .help = "number of meters", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_meters)), + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -7736,6 +7824,33 @@ parse_isolate(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for info/configure command. */ +static int +parse_configure(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != INFO && ctx->curr != CONFIGURE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + } + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8964,6 +9079,15 @@ static void cmd_flow_parsed(const struct buffer *in) { switch (in->command) { + case INFO: + port_flow_get_info(in->port); + break; + case CONFIGURE: + port_flow_configure(in->port, + &in->args.configure.port_attr, + in->args.configure.nb_queue, + &in->args.configure.queue_attr); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index de1ec14bc7..33a85cd7ca 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1610,6 +1610,67 @@ action_alloc(portid_t port_id, uint32_t id, return 0; } +/** Get info about flow management resources. */ +int +port_flow_get_info(portid_t port_id) +{ + struct rte_flow_port_info port_info; + struct rte_flow_queue_info queue_info; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x99, sizeof(error)); + memset(&port_info, 0, sizeof(port_info)); + memset(&queue_info, 0, sizeof(queue_info)); + if (rte_flow_info_get(port_id, &port_info, &queue_info, &error)) + return port_flow_complain(&error); + printf("Flow engine resources on port %u:\n" + "Number of queues: %d\n" + "Size of queues: %d\n" + "Number of counters: %d\n" + "Number of aging objects: %d\n" + "Number of meter actions: %d\n", + port_id, port_info.max_nb_queues, + queue_info.max_size, + port_info.max_nb_counters, + port_info.max_nb_aging_objects, + port_info.max_nb_meters); + return 0; +} + +/** Configure flow management resources. */ +int +port_flow_configure(portid_t port_id, + const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr) +{ + struct rte_port *port; + struct rte_flow_error error; + const struct rte_flow_queue_attr *attr_list[nb_queue]; + int std_queue; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + port->queue_nb = nb_queue; + port->queue_sz = queue_attr->size; + for (std_queue = 0; std_queue < nb_queue; std_queue++) + attr_list[std_queue] = queue_attr; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x66, sizeof(error)); + if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error)) + return port_flow_complain(&error); + printf("Configure flows on port %u: " + "number of queues %d with %d elements\n", + port_id, nb_queue, queue_attr->size); + return 0; +} + /** Create indirect action */ int port_action_handle_create(portid_t port_id, uint32_t id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 9967825044..096b6825eb 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -243,6 +243,8 @@ struct rte_port { struct rte_eth_txconf tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */ struct rte_ether_addr *mc_addr_pool; /**< pool of multicast addrs */ uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */ + queueid_t queue_nb; /**< nb. of queues for flow rules */ + uint32_t queue_sz; /**< size of a queue for flow rules */ uint8_t slave_flag; /**< bonding slave port */ struct port_flow *flow_list; /**< Associated flows. 
*/ struct port_indirect_action *actions_list; @@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id, uint32_t id); int port_action_handle_update(portid_t port_id, uint32_t id, const struct rte_flow_action *action); +int port_flow_get_info(portid_t port_id); +int port_flow_configure(portid_t port_id, + const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 9cc248084f..c8f048aeef 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3308,8 +3308,8 @@ Flow rules management --------------------- Control of the generic flow API (*rte_flow*) is fully exposed through the -``flow`` command (validation, creation, destruction, queries and operation -modes). +``flow`` command (configuration, validation, creation, destruction, queries +and operation modes). Considering *rte_flow* overlaps with all `Filter Functions`_, using both features simultaneously may cause undefined side-effects and is therefore @@ -3332,6 +3332,18 @@ The first parameter stands for the operation mode. Possible operations and their general syntax are described below. They are covered in detail in the following sections. +- Get info about flow engine:: + + flow info {port_id} + +- Configure flow engine:: + + flow configure {port_id} + [queues_number {number}] [queues_size {size}] + [counters_number {number}] + [aging_counters_number {number}] + [meters_number {number}] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3391,6 +3403,51 @@ following sections. 
flow tunnel list {port_id} +Retrieving info about flow management engine +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow info`` retrieves info on pre-configurable resources in the underlying +device to give a hint of possible values for flow engine configuration. + +``rte_flow_info_get()``:: + + flow info {port_id} + +If successful, it will show:: + + Flow engine resources on port #[...]: + Number of queues: #[...] + Size of queues: #[...] + Number of counters: #[...] + Number of aging objects: #[...] + Number of meter actions: #[...] + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +Configuring flow management engine +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow configure`` pre-allocates all the needed resources in the underlying +device to be used later during flow creation. Flow queues are allocated as well +for asynchronous flow creation/destruction operations. It is bound to +``rte_flow_configure()``:: + + flow configure {port_id} + [queues_number {number}] [queues_size {size}] + [counters_number {number}] + [aging_counters_number {number}] + [meters_number {number}] + +If successful, it will show:: + + Configure flows on port #[...]: number of queues #[...] with #[...] elements + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...]
+ Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From patchwork Mon Feb 21 23:02:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 107918 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev Subject: [PATCH v9 06/11] app/testpmd: add flow template management Date: Tue, 22 Feb 2022 01:02:35 +0200 Message-ID: <20220221230240.2409665-7-akozyrev@nvidia.com> In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com> References: <20220220034409.2226860-1-akozyrev@nvidia.com> <20220221230240.2409665-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_pattern_template and rte_flow_actions_template APIs. Provide the command line interface for the template creation/destruction. Usage example: testpmd> flow pattern_template 0 create pattern_template_id 2 template eth dst is 00:16:3e:31:15:c3 / end testpmd> flow actions_template 0 create actions_template_id 4 template drop / end mask drop / end testpmd> flow actions_template 0 destroy actions_template 4 testpmd> flow pattern_template 0 destroy pattern_template 2 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 456 +++++++++++++++++++- app/test-pmd/config.c | 203 +++++++++ app/test-pmd/testpmd.h | 24 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 101 +++++ 4 files changed, 782 insertions(+), 2 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 0533a33ca2..1aa32ea217 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -56,6 +56,8 @@ enum index { COMMON_POLICY_ID, COMMON_FLEX_HANDLE, COMMON_FLEX_TOKEN, + COMMON_PATTERN_TEMPLATE_ID, + COMMON_ACTIONS_TEMPLATE_ID, /* TOP-level command. */ ADD, @@ -74,6 +76,8 @@ enum index { /* Sub-level commands. */ INFO, CONFIGURE, + PATTERN_TEMPLATE, + ACTIONS_TEMPLATE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -92,6 +96,28 @@ enum index { FLEX_ITEM_CREATE, FLEX_ITEM_DESTROY, + /* Pattern template arguments. */ + PATTERN_TEMPLATE_CREATE, + PATTERN_TEMPLATE_DESTROY, + PATTERN_TEMPLATE_CREATE_ID, + PATTERN_TEMPLATE_DESTROY_ID, + PATTERN_TEMPLATE_RELAXED_MATCHING, + PATTERN_TEMPLATE_INGRESS, + PATTERN_TEMPLATE_EGRESS, + PATTERN_TEMPLATE_TRANSFER, + PATTERN_TEMPLATE_SPEC, + + /* Actions template arguments.
*/ + ACTIONS_TEMPLATE_CREATE, + ACTIONS_TEMPLATE_DESTROY, + ACTIONS_TEMPLATE_CREATE_ID, + ACTIONS_TEMPLATE_DESTROY_ID, + ACTIONS_TEMPLATE_INGRESS, + ACTIONS_TEMPLATE_EGRESS, + ACTIONS_TEMPLATE_TRANSFER, + ACTIONS_TEMPLATE_SPEC, + ACTIONS_TEMPLATE_MASK, + /* Tunnel arguments. */ TUNNEL_CREATE, TUNNEL_CREATE_TYPE, @@ -882,6 +908,10 @@ struct buffer { uint32_t nb_queue; struct rte_flow_queue_attr queue_attr; } configure; /**< Configuration arguments. */ + struct { + uint32_t *template_id; + uint32_t template_id_n; + } templ_destroy; /**< Template destroy arguments. */ struct { uint32_t *action_id; uint32_t action_id_n; @@ -890,10 +920,13 @@ struct buffer { uint32_t action_id; } ia; /* Indirect action query arguments */ struct { + uint32_t pat_templ_id; + uint32_t act_templ_id; struct rte_flow_attr attr; struct tunnel_ops tunnel_ops; struct rte_flow_item *pattern; struct rte_flow_action *actions; + struct rte_flow_action *masks; uint32_t pattern_n; uint32_t actions_n; uint8_t *data; @@ -973,6 +1006,49 @@ static const enum index next_config_attr[] = { ZERO, }; +static const enum index next_pt_subcmd[] = { + PATTERN_TEMPLATE_CREATE, + PATTERN_TEMPLATE_DESTROY, + ZERO, +}; + +static const enum index next_pt_attr[] = { + PATTERN_TEMPLATE_CREATE_ID, + PATTERN_TEMPLATE_RELAXED_MATCHING, + PATTERN_TEMPLATE_INGRESS, + PATTERN_TEMPLATE_EGRESS, + PATTERN_TEMPLATE_TRANSFER, + PATTERN_TEMPLATE_SPEC, + ZERO, +}; + +static const enum index next_pt_destroy_attr[] = { + PATTERN_TEMPLATE_DESTROY_ID, + END, + ZERO, +}; + +static const enum index next_at_subcmd[] = { + ACTIONS_TEMPLATE_CREATE, + ACTIONS_TEMPLATE_DESTROY, + ZERO, +}; + +static const enum index next_at_attr[] = { + ACTIONS_TEMPLATE_CREATE_ID, + ACTIONS_TEMPLATE_INGRESS, + ACTIONS_TEMPLATE_EGRESS, + ACTIONS_TEMPLATE_TRANSFER, + ACTIONS_TEMPLATE_SPEC, + ZERO, +}; + +static const enum index next_at_destroy_attr[] = { + ACTIONS_TEMPLATE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { 
INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2072,6 +2148,12 @@ static int parse_isolate(struct context *, const struct token *, static int parse_configure(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_template(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_template_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2141,6 +2223,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_modify_field_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_pattern_template_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); +static int comp_actions_template_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. */ static const struct token token_list[] = { @@ -2291,6 +2377,20 @@ static const struct token token_list[] = { .call = parse_flex_handle, .comp = comp_none, }, + [COMMON_PATTERN_TEMPLATE_ID] = { + .name = "{pattern_template_id}", + .type = "PATTERN_TEMPLATE_ID", + .help = "pattern template id", + .call = parse_int, + .comp = comp_pattern_template_id, + }, + [COMMON_ACTIONS_TEMPLATE_ID] = { + .name = "{actions_template_id}", + .type = "ACTIONS_TEMPLATE_ID", + .help = "actions template id", + .call = parse_int, + .comp = comp_actions_template_id, + }, /* Top-level command. 
*/ [FLOW] = { .name = "flow", @@ -2299,6 +2399,8 @@ static const struct token token_list[] = { .next = NEXT(NEXT_ENTRY (INFO, CONFIGURE, + PATTERN_TEMPLATE, + ACTIONS_TEMPLATE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -2373,6 +2475,148 @@ static const struct token token_list[] = { args.configure.port_attr.nb_meters)), }, /* Top-level command. */ + [PATTERN_TEMPLATE] = { + .name = "pattern_template", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage pattern templates", + .next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template, + }, + /* Sub-level commands. */ + [PATTERN_TEMPLATE_CREATE] = { + .name = "create", + .help = "create pattern template", + .next = NEXT(next_pt_attr), + .call = parse_template, + }, + [PATTERN_TEMPLATE_DESTROY] = { + .name = "destroy", + .help = "destroy pattern template", + .next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template_destroy, + }, + /* Pattern template arguments. 
*/ + [PATTERN_TEMPLATE_CREATE_ID] = { + .name = "pattern_template_id", + .help = "specify a pattern template id to create", + .next = NEXT(next_pt_attr, + NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)), + }, + [PATTERN_TEMPLATE_DESTROY_ID] = { + .name = "pattern_template", + .help = "specify a pattern template id to destroy", + .next = NEXT(next_pt_destroy_attr, + NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.templ_destroy.template_id)), + .call = parse_template_destroy, + }, + [PATTERN_TEMPLATE_RELAXED_MATCHING] = { + .name = "relaxed", + .help = "is matching relaxed", + .next = NEXT(next_pt_attr, + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY_BF(struct buffer, + args.vc.attr.reserved, 1)), + }, + [PATTERN_TEMPLATE_INGRESS] = { + .name = "ingress", + .help = "attribute pattern to ingress", + .next = NEXT(next_pt_attr), + .call = parse_template, + }, + [PATTERN_TEMPLATE_EGRESS] = { + .name = "egress", + .help = "attribute pattern to egress", + .next = NEXT(next_pt_attr), + .call = parse_template, + }, + [PATTERN_TEMPLATE_TRANSFER] = { + .name = "transfer", + .help = "attribute pattern to transfer", + .next = NEXT(next_pt_attr), + .call = parse_template, + }, + [PATTERN_TEMPLATE_SPEC] = { + .name = "template", + .help = "specify item to create pattern template", + .next = NEXT(next_item), + }, + /* Top-level command. */ + [ACTIONS_TEMPLATE] = { + .name = "actions_template", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage actions templates", + .next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template, + }, + /* Sub-level commands. 
*/ + [ACTIONS_TEMPLATE_CREATE] = { + .name = "create", + .help = "create actions template", + .next = NEXT(next_at_attr), + .call = parse_template, + }, + [ACTIONS_TEMPLATE_DESTROY] = { + .name = "destroy", + .help = "destroy actions template", + .next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template_destroy, + }, + /* Actions template arguments. */ + [ACTIONS_TEMPLATE_CREATE_ID] = { + .name = "actions_template_id", + .help = "specify an actions template id to create", + .next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK), + NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC), + NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)), + }, + [ACTIONS_TEMPLATE_DESTROY_ID] = { + .name = "actions_template", + .help = "specify an actions template id to destroy", + .next = NEXT(next_at_destroy_attr, + NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.templ_destroy.template_id)), + .call = parse_template_destroy, + }, + [ACTIONS_TEMPLATE_INGRESS] = { + .name = "ingress", + .help = "attribute actions to ingress", + .next = NEXT(next_at_attr), + .call = parse_template, + }, + [ACTIONS_TEMPLATE_EGRESS] = { + .name = "egress", + .help = "attribute actions to egress", + .next = NEXT(next_at_attr), + .call = parse_template, + }, + [ACTIONS_TEMPLATE_TRANSFER] = { + .name = "transfer", + .help = "attribute actions to transfer", + .next = NEXT(next_at_attr), + .call = parse_template, + }, + [ACTIONS_TEMPLATE_SPEC] = { + .name = "template", + .help = "specify action to create actions template", + .next = NEXT(next_action), + .call = parse_template, + }, + [ACTIONS_TEMPLATE_MASK] = { + .name = "mask", + .help = "specify action mask to create actions template", + .next = NEXT(next_action), + .call = parse_template, + }, + /* Top-level command. 
*/ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -2695,7 +2939,7 @@ static const struct token token_list[] = { .name = "end", .help = "end list of pattern items", .priv = PRIV_ITEM(END, 0), - .next = NEXT(NEXT_ENTRY(ACTIONS)), + .next = NEXT(NEXT_ENTRY(ACTIONS, END)), .call = parse_vc, }, [ITEM_VOID] = { @@ -5975,7 +6219,9 @@ parse_vc(struct context *ctx, const struct token *token, if (!out) return len; if (!out->command) { - if (ctx->curr != VALIDATE && ctx->curr != CREATE) + if (ctx->curr != VALIDATE && ctx->curr != CREATE && + ctx->curr != PATTERN_TEMPLATE_CREATE && + ctx->curr != ACTIONS_TEMPLATE_CREATE) return -1; if (sizeof(*out) > size) return -1; @@ -7851,6 +8097,132 @@ parse_configure(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for template create command. */ +static int +parse_template(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != PATTERN_TEMPLATE && + ctx->curr != ACTIONS_TEMPLATE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case PATTERN_TEMPLATE_CREATE: + out->args.vc.pattern = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + out->args.vc.pat_templ_id = UINT32_MAX; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case PATTERN_TEMPLATE_EGRESS: + out->args.vc.attr.egress = 1; + return len; + case PATTERN_TEMPLATE_INGRESS: + out->args.vc.attr.ingress = 1; + return len; + case PATTERN_TEMPLATE_TRANSFER: + out->args.vc.attr.transfer = 1; + return len; + case ACTIONS_TEMPLATE_CREATE: + out->args.vc.act_templ_id = UINT32_MAX; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case ACTIONS_TEMPLATE_SPEC: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + ctx->object = out->args.vc.actions; + ctx->objmask = NULL; + return len; + case ACTIONS_TEMPLATE_MASK: + out->args.vc.masks = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.vc.actions + + out->args.vc.actions_n), + sizeof(double)); + ctx->object = out->args.vc.masks; + ctx->objmask = NULL; + return len; + case ACTIONS_TEMPLATE_EGRESS: + out->args.vc.attr.egress = 1; + return len; + case ACTIONS_TEMPLATE_INGRESS: + out->args.vc.attr.ingress = 1; + return len; + case ACTIONS_TEMPLATE_TRANSFER: + out->args.vc.attr.transfer = 1; + return len; + default: + return -1; + } +} + +/** Parse tokens for template destroy command. 
*/ +static int +parse_template_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *template_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command || + out->command == PATTERN_TEMPLATE || + out->command == ACTIONS_TEMPLATE) { + if (ctx->curr != PATTERN_TEMPLATE_DESTROY && + ctx->curr != ACTIONS_TEMPLATE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.templ_destroy.template_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + template_id = out->args.templ_destroy.template_id + + out->args.templ_destroy.template_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8820,6 +9192,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token, return -1; } +/** Complete available pattern template IDs. */ +static int +comp_pattern_template_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_template *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + +/** Complete available actions template IDs. 
*/ +static int +comp_actions_template_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_template *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + /** Internal context. */ static struct context cmd_flow_context; @@ -9088,6 +9508,38 @@ cmd_flow_parsed(const struct buffer *in) in->args.configure.nb_queue, &in->args.configure.queue_attr); break; + case PATTERN_TEMPLATE_CREATE: + port_flow_pattern_template_create(in->port, + in->args.vc.pat_templ_id, + &((const struct rte_flow_pattern_template_attr) { + .relaxed_matching = in->args.vc.attr.reserved, + .ingress = in->args.vc.attr.ingress, + .egress = in->args.vc.attr.egress, + .transfer = in->args.vc.attr.transfer, + }), + in->args.vc.pattern); + break; + case PATTERN_TEMPLATE_DESTROY: + port_flow_pattern_template_destroy(in->port, + in->args.templ_destroy.template_id_n, + in->args.templ_destroy.template_id); + break; + case ACTIONS_TEMPLATE_CREATE: + port_flow_actions_template_create(in->port, + in->args.vc.act_templ_id, + &((const struct rte_flow_actions_template_attr) { + .ingress = in->args.vc.attr.ingress, + .egress = in->args.vc.attr.egress, + .transfer = in->args.vc.attr.transfer, + }), + in->args.vc.actions, + in->args.vc.masks); + break; + case ACTIONS_TEMPLATE_DESTROY: + port_flow_actions_template_destroy(in->port, + in->args.templ_destroy.template_id_n, + in->args.templ_destroy.template_id); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 33a85cd7ca..ecaf4ca03c 100644 --- a/app/test-pmd/config.c 
+++ b/app/test-pmd/config.c @@ -1610,6 +1610,49 @@ action_alloc(portid_t port_id, uint32_t id, return 0; } +static int +template_alloc(uint32_t id, struct port_template **template, + struct port_template **list) +{ + struct port_template *lst = *list; + struct port_template **ppt; + struct port_template *pt = NULL; + + *template = NULL; + if (id == UINT32_MAX) { + /* taking first available ID */ + if (lst) { + if (lst->id == UINT32_MAX - 1) { + printf("Highest template ID is already" + " assigned, delete it first\n"); + return -ENOMEM; + } + id = lst->id + 1; + } else { + id = 0; + } + } + pt = calloc(1, sizeof(*pt)); + if (!pt) { + printf("Allocation of port template failed\n"); + return -ENOMEM; + } + ppt = list; + while (*ppt && (*ppt)->id > id) + ppt = &(*ppt)->next; + if (*ppt && (*ppt)->id == id) { + printf("Template #%u is already assigned," + " delete it first\n", id); + free(pt); + return -EINVAL; + } + pt->next = *ppt; + pt->id = id; + *ppt = pt; + *template = pt; + return 0; +} + /** Get info about flow management resources. */ int port_flow_get_info(portid_t port_id) @@ -2086,6 +2129,166 @@ age_action_get(const struct rte_flow_action *actions) return NULL; } +/** Create pattern template */ +int +port_flow_pattern_template_create(portid_t port_id, uint32_t id, + const struct rte_flow_pattern_template_attr *attr, + const struct rte_flow_item *pattern) +{ + struct rte_port *port; + struct port_template *pit; + int ret; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + ret = template_alloc(id, &pit, &port->pattern_templ_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x22, sizeof(error)); + pit->template.pattern_template = rte_flow_pattern_template_create(port_id, + attr, pattern, &error); + if (!pit->template.pattern_template) { + uint32_t destroy_id = pit->id; + port_flow_pattern_template_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + printf("Pattern template #%u created\n", pit->id); + return 0; +} + +/** Destroy pattern template */ +int +port_flow_pattern_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template) +{ + struct rte_port *port; + struct port_template **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->pattern_templ_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_template *pit = *tmp; + + if (template[i] != pit->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. 
+ */ + memset(&error, 0x33, sizeof(error)); + + if (pit->template.pattern_template && + rte_flow_pattern_template_destroy(port_id, + pit->template.pattern_template, + &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pit->next; + printf("Pattern template #%u destroyed\n", pit->id); + free(pit); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + +/** Create actions template */ +int +port_flow_actions_template_create(portid_t port_id, uint32_t id, + const struct rte_flow_actions_template_attr *attr, + const struct rte_flow_action *actions, + const struct rte_flow_action *masks) +{ + struct rte_port *port; + struct port_template *pat; + int ret; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + ret = template_alloc(id, &pat, &port->actions_templ_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x22, sizeof(error)); + pat->template.actions_template = rte_flow_actions_template_create(port_id, + attr, actions, masks, &error); + if (!pat->template.actions_template) { + uint32_t destroy_id = pat->id; + port_flow_actions_template_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + printf("Actions template #%u created\n", pat->id); + return 0; +} + +/** Destroy actions template */ +int +port_flow_actions_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template) +{ + struct rte_port *port; + struct port_template **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->actions_templ_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_template *pat = *tmp; + + if (template[i] != pat->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x33, sizeof(error)); + + if (pat->template.actions_template && + rte_flow_actions_template_destroy(port_id, + pat->template.actions_template, &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pat->next; + printf("Actions template #%u destroyed\n", pat->id); + free(pat); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 096b6825eb..ce46d754a1 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -166,6 +166,17 @@ enum age_action_context_type { ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION, }; +/** Descriptor for a template. */ +struct port_template { + struct port_template *next; /**< Next template in list. */ + struct port_template *tmp; /**< Temporary linking. */ + uint32_t id; /**< Template ID. 
*/ + union { + struct rte_flow_pattern_template *pattern_template; + struct rte_flow_actions_template *actions_template; + } template; /**< PMD opaque template object */ +}; + /** Descriptor for a single flow. */ struct port_flow { struct port_flow *next; /**< Next flow in list. */ @@ -246,6 +257,8 @@ struct rte_port { queueid_t queue_nb; /**< nb. of queues for flow rules */ uint32_t queue_sz; /**< size of a queue for flow rules */ uint8_t slave_flag; /**< bonding slave port */ + struct port_template *pattern_templ_list; /**< Pattern templates. */ + struct port_template *actions_templ_list; /**< Actions templates. */ struct port_flow *flow_list; /**< Associated flows. */ struct port_indirect_action *actions_list; /**< Associated indirect actions. */ @@ -892,6 +905,17 @@ int port_flow_configure(portid_t port_id, const struct rte_flow_port_attr *port_attr, uint16_t nb_queue, const struct rte_flow_queue_attr *queue_attr); +int port_flow_pattern_template_create(portid_t port_id, uint32_t id, + const struct rte_flow_pattern_template_attr *attr, + const struct rte_flow_item *pattern); +int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template); +int port_flow_actions_template_create(portid_t port_id, uint32_t id, + const struct rte_flow_actions_template_attr *attr, + const struct rte_flow_action *actions, + const struct rte_flow_action *masks); +int port_flow_actions_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index c8f048aeef..2e6a23b12a 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3344,6 +3344,26 @@ following sections. 
[aging_counters_number {number}] [meters_number {number}] +- Create a pattern template:: + + flow pattern_template {port_id} create [pattern_template_id {id}] + [relaxed {boolean}] [ingress] [egress] [transfer] + template {item} [/ {item} [...]] / end + +- Destroy a pattern template:: + + flow pattern_template {port_id} destroy pattern_template {id} [...] + +- Create an actions template:: + + flow actions_template {port_id} create [actions_template_id {id}] + [ingress] [egress] [transfer] + template {action} [/ {action} [...]] / end + mask {action} [/ {action} [...]] / end + +- Destroy an actions template:: + + flow actions_template {port_id} destroy actions_template {id} [...] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3448,6 +3468,87 @@ Otherwise it will show an error message of the form:: Caught error type [...] ([...]): [...] +Creating pattern templates +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow pattern_template create`` creates the specified pattern template. +It is bound to ``rte_flow_pattern_template_create()``:: + + flow pattern_template {port_id} create [pattern_template_id {id}] + [relaxed {boolean}] [ingress] [egress] [transfer] + template {item} [/ {item} [...]] / end + +If successful, it will show:: + + Pattern template #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same pattern items as ``flow create``, +their format is described in `Creating flow rules`_. + +Destroying pattern templates +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow pattern_template destroy`` destroys one or more pattern templates +from their template ID (as returned by ``flow pattern_template create``), +this command calls ``rte_flow_pattern_template_destroy()`` as many +times as necessary:: + + flow pattern_template {port_id} destroy pattern_template {id} [...] + +If successful, it will show:: + + Pattern template #[...]
destroyed + +It does not report anything for pattern template IDs that do not exist. +The usual error message is shown when a pattern template cannot be destroyed:: + + Caught error type [...] ([...]): [...] + +Creating actions templates +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow actions_template create`` creates the specified actions template. +It is bound to ``rte_flow_actions_template_create()``:: + + flow actions_template {port_id} create [actions_template_id {id}] + [ingress] [egress] [transfer] + template {action} [/ {action} [...]] / end + mask {action} [/ {action} [...]] / end + +If successful, it will show:: + + Actions template #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same actions as ``flow create``, +their format is described in `Creating flow rules`_. + +Destroying actions templates +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow actions_template destroy`` destroys one or more actions templates +from their template ID (as returned by ``flow actions_template create``), +this command calls ``rte_flow_actions_template_destroy()`` as many +times as necessary:: + + flow actions_template {port_id} destroy actions_template {id} [...] + +If successful, it will show:: + + Actions template #[...] destroyed + +It does not report anything for actions template IDs that do not exist. +The usual error message is shown when an actions template cannot be destroyed:: + + Caught error type [...] ([...]): [...] 
+ Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From patchwork Mon Feb 21 23:02:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 107919 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D1758A034F; Tue, 22 Feb 2022 00:03:59 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6957A41181; Tue, 22 Feb 2022 00:03:26 +0100 (CET) Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2077.outbound.protection.outlook.com [40.107.220.77]) by mails.dpdk.org (Postfix) with ESMTP id 165CF41168 for ; Tue, 22 Feb 2022 00:03:25 +0100 (CET) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=X1Y5IKmI0mx2h82aeUAtrElnCgGRmqAAF7RujtDhW1MxC2NCj1W+eEt3dLzYclJMFI7AWJT34wlj5dkx7/S7BTcEiHUtIx2UvuUu7U6/Z76VmSIQsbIUaQ+OHvIZFcIr8x/f3QXfvn3Rp2Qa0YpLDVUplVLfhatNyLTpXf8tjm85H14DLi6SFXRR/V/X6IFGk7SSnA4i5DKlxCctoLvKDMug30/LFRujzqLGTfpJArp2TYjkcS7/M/I944f7H3dpvgX4gTiZr8xSn30ZzmZbLzMJ/Q+mP5JqYBydC0iOvvsH3MpvyOvvBYL+rGWA/1IjszIS2HWc3bL2hkwSDN8yww== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=zW3wex6mxImQf5L4fQm+N3DnWBnNS1kocEVO8gZ1Weo=; 
b=VWmofRZlnYs8FDsjN4bTuYQIvgLOw5ulrMQW2yfThgey7a+CjOkINOkigUbpukd2ps3RVLHgjeKvEDUyx8H1vdIjBx49tqWTM/x+znlliIdNNK7wvc+ghjYsgWqjRs8c/IalW+ZHyPt6finjBNdEJ6vqVzbJEmkjj5oThSceWjNlJU/iu9mdU87StRfTkvT0XjC5fWShSsNSK95gZlzficHdo1cs+6KlQJgp0yWI0nmlhZ9NymrVqPElgfOe64dW8YsNe3X0/ZKIdkXBmAuQhXCIlQl8p4r/bDQDQ1ODUMZ6Beht6GgUPvZRuJ/qst2m42LMc/Rm8TqYusXNMcYAbg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 12.22.5.235) smtp.rcpttodomain=intel.com smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=zW3wex6mxImQf5L4fQm+N3DnWBnNS1kocEVO8gZ1Weo=; b=FhQjzZoifSEc78rfqelG+3xipUIKfmeh14FKS3gNYOHkRtc8rEfMpoUORh5slxLu4R3RdvpgwCjSlGYRdwPetZ5NGINQoZ6UtQfAOvf3YlI6lv9wXLaQa1SnBGTeBNcfKnUbCP38M+CbhYFW5c7T0XmEPQ3KqbN1a/m+OIkyLmIRj2aCPiPrvFOqCbGPhDpPgIlMTolmk9Ya1HyitLATq5aWEprJv1XsQ1PZY8wO+S3wrTVEWgJpzTbhyCHXc4KMDcra2M6r4dgUdbX9QDoUHQ71eKzdmgk64R+EPfTRJ5RCuJiMOmmvWnrvGrKMQJkb6zO8CUbQSg7n5V6YynqqvA== Received: from DM6PR12MB4957.namprd12.prod.outlook.com (2603:10b6:5:20d::14) by BN6PR1201MB2483.namprd12.prod.outlook.com (2603:10b6:404:a6::16) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4995.14; Mon, 21 Feb 2022 23:03:21 +0000 Received: from BN0PR04CA0055.namprd04.prod.outlook.com (2603:10b6:408:e8::30) by DM6PR12MB4957.namprd12.prod.outlook.com (2603:10b6:5:20d::14) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4995.17; Mon, 21 Feb 2022 23:03:20 +0000 Received: from BN8NAM11FT039.eop-nam11.prod.protection.outlook.com (2603:10b6:408:e8:cafe::fa) by BN0PR04CA0055.outlook.office365.com (2603:10b6:408:e8::30) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 
15.20.4995.14 via Frontend Transport; Mon, 21 Feb 2022 23:03:20 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 12.22.5.235) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 12.22.5.235 as permitted sender) receiver=protection.outlook.com; client-ip=12.22.5.235; helo=mail.nvidia.com; Received: from mail.nvidia.com (12.22.5.235) by BN8NAM11FT039.mail.protection.outlook.com (10.13.177.169) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4995.15 via Frontend Transport; Mon, 21 Feb 2022 23:03:20 +0000 Received: from rnnvmail201.nvidia.com (10.129.68.8) by DRHQMAIL107.nvidia.com (10.27.9.16) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Mon, 21 Feb 2022 23:03:19 +0000 Received: from pegasus01.mtr.labs.mlnx (10.126.230.35) by rnnvmail201.nvidia.com (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.9; Mon, 21 Feb 2022 15:03:16 -0800 From: Alexander Kozyrev To: CC: , , , , , , , , , Subject: [PATCH v9 07/11] app/testpmd: add flow table management Date: Tue, 22 Feb 2022 01:02:36 +0200 Message-ID: <20220221230240.2409665-8-akozyrev@nvidia.com> X-Mailer: git-send-email 2.18.2 In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com> References: <20220220034409.2226860-1-akozyrev@nvidia.com> <20220221230240.2409665-1-akozyrev@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.126.230.35] X-ClientProxiedBy: rnnvmail203.nvidia.com (10.129.68.9) To rnnvmail201.nvidia.com (10.129.68.8) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: 0051edac-e27d-49b8-bf1e-08d9f58e5aad X-MS-TrafficTypeDiagnostic: DM6PR12MB4957:EE_|BN6PR1201MB2483:EE_ X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr,ExtAddr X-Microsoft-Antispam-PRVS: 
Add testpmd support for the rte_flow_table API. Provide the command line interface for the flow table creation/destruction. Usage example: testpmd> flow template_table 0 create table_id 6 group 9 priority 4 ingress mode 1 rules_number 64 pattern_template 2 actions_template 4 testpmd> flow template_table 0 destroy table 6 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 315 ++++++++++++++++++++ app/test-pmd/config.c | 171 +++++++++++ app/test-pmd/testpmd.h | 17 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 53 ++++ 4 files changed, 556 insertions(+) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 1aa32ea217..5715899c95 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -58,6 +58,7 @@ enum index { COMMON_FLEX_TOKEN, COMMON_PATTERN_TEMPLATE_ID, COMMON_ACTIONS_TEMPLATE_ID, + COMMON_TABLE_ID, /* TOP-level command. */ ADD, @@ -78,6 +79,7 @@ enum index { CONFIGURE, PATTERN_TEMPLATE, ACTIONS_TEMPLATE, + TABLE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -118,6 +120,20 @@ enum index { ACTIONS_TEMPLATE_SPEC, ACTIONS_TEMPLATE_MASK, + /* Table arguments. */ + TABLE_CREATE, + TABLE_DESTROY, + TABLE_CREATE_ID, + TABLE_DESTROY_ID, + TABLE_GROUP, + TABLE_PRIORITY, + TABLE_INGRESS, + TABLE_EGRESS, + TABLE_TRANSFER, + TABLE_RULES_NUMBER, + TABLE_PATTERN_TEMPLATE, + TABLE_ACTIONS_TEMPLATE, + /* Tunnel arguments. */ TUNNEL_CREATE, TUNNEL_CREATE_TYPE, @@ -912,6 +928,18 @@ struct buffer { uint32_t *template_id; uint32_t template_id_n; } templ_destroy; /**< Template destroy arguments. 
*/ + struct { + uint32_t id; + struct rte_flow_template_table_attr attr; + uint32_t *pat_templ_id; + uint32_t pat_templ_id_n; + uint32_t *act_templ_id; + uint32_t act_templ_id_n; + } table; /**< Table arguments. */ + struct { + uint32_t *table_id; + uint32_t table_id_n; + } table_destroy; /**< Table destroy arguments. */ struct { uint32_t *action_id; uint32_t action_id_n; @@ -1049,6 +1077,32 @@ static const enum index next_at_destroy_attr[] = { ZERO, }; +static const enum index next_table_subcmd[] = { + TABLE_CREATE, + TABLE_DESTROY, + ZERO, +}; + +static const enum index next_table_attr[] = { + TABLE_CREATE_ID, + TABLE_GROUP, + TABLE_PRIORITY, + TABLE_INGRESS, + TABLE_EGRESS, + TABLE_TRANSFER, + TABLE_RULES_NUMBER, + TABLE_PATTERN_TEMPLATE, + TABLE_ACTIONS_TEMPLATE, + END, + ZERO, +}; + +static const enum index next_table_destroy_attr[] = { + TABLE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2154,6 +2208,11 @@ static int parse_template(struct context *, const struct token *, static int parse_template_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_table(struct context *, const struct token *, + const char *, unsigned int, void *, unsigned int); +static int parse_table_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2227,6 +2286,8 @@ static int comp_pattern_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_actions_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_table_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. 
*/ static const struct token token_list[] = { @@ -2391,6 +2452,13 @@ static const struct token token_list[] = { .call = parse_int, .comp = comp_actions_template_id, }, + [COMMON_TABLE_ID] = { + .name = "{table_id}", + .type = "TABLE_ID", + .help = "table id", + .call = parse_int, + .comp = comp_table_id, + }, /* Top-level command. */ [FLOW] = { .name = "flow", @@ -2401,6 +2469,7 @@ static const struct token token_list[] = { CONFIGURE, PATTERN_TEMPLATE, ACTIONS_TEMPLATE, + TABLE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -2617,6 +2686,104 @@ static const struct token token_list[] = { .call = parse_template, }, /* Top-level command. */ + [TABLE] = { + .name = "template_table", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage template tables", + .next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_table, + }, + /* Sub-level commands. */ + [TABLE_CREATE] = { + .name = "create", + .help = "create template table", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_DESTROY] = { + .name = "destroy", + .help = "destroy template table", + .next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_table_destroy, + }, + /* Table arguments. 
*/ + [TABLE_CREATE_ID] = { + .name = "table_id", + .help = "specify table id to create", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)), + }, + [TABLE_DESTROY_ID] = { + .name = "table", + .help = "specify table id to destroy", + .next = NEXT(next_table_destroy_attr, + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table_destroy.table_id)), + .call = parse_table_destroy, + }, + [TABLE_GROUP] = { + .name = "group", + .help = "specify a group", + .next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.flow_attr.group)), + }, + [TABLE_PRIORITY] = { + .name = "priority", + .help = "specify a priority level", + .next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.flow_attr.priority)), + }, + [TABLE_EGRESS] = { + .name = "egress", + .help = "affect rule to egress", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_INGRESS] = { + .name = "ingress", + .help = "affect rule to ingress", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_TRANSFER] = { + .name = "transfer", + .help = "affect rule to transfer", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_RULES_NUMBER] = { + .name = "rules_number", + .help = "number of rules in table", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.nb_flows)), + }, + [TABLE_PATTERN_TEMPLATE] = { + .name = "pattern_template", + .help = "specify pattern template id", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table.pat_templ_id)), + .call = parse_table, + }, + [TABLE_ACTIONS_TEMPLATE] = { + .name = "actions_template", + .help = "specify actions template id", + .next = 
NEXT(next_table_attr, + NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table.act_templ_id)), + .call = parse_table, + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8223,6 +8390,119 @@ parse_template_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for table create command. */ +static int +parse_table(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *template_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != TABLE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + } + switch (ctx->curr) { + case TABLE_CREATE: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.table.id = UINT32_MAX; + return len; + case TABLE_PATTERN_TEMPLATE: + out->args.table.pat_templ_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + template_id = out->args.table.pat_templ_id + + out->args.table.pat_templ_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + return len; + case TABLE_ACTIONS_TEMPLATE: + out->args.table.act_templ_id = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.table.pat_templ_id + + out->args.table.pat_templ_id_n), + sizeof(double)); + template_id = out->args.table.act_templ_id + + out->args.table.act_templ_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + 
return len; + case TABLE_INGRESS: + out->args.table.attr.flow_attr.ingress = 1; + return len; + case TABLE_EGRESS: + out->args.table.attr.flow_attr.egress = 1; + return len; + case TABLE_TRANSFER: + out->args.table.attr.flow_attr.transfer = 1; + return len; + default: + return -1; + } +} + +/** Parse tokens for table destroy command. */ +static int +parse_table_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *table_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command || out->command == TABLE) { + if (ctx->curr != TABLE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.table_destroy.table_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + table_id = out->args.table_destroy.table_id + + out->args.table_destroy.table_id_n++; + if ((uint8_t *)table_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = table_id; + ctx->objmask = NULL; + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9240,6 +9520,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token, return i; } +/** Complete available table IDs. 
*/ +static int +comp_table_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_table *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->table_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + /** Internal context. */ static struct context cmd_flow_context; @@ -9540,6 +9844,17 @@ cmd_flow_parsed(const struct buffer *in) in->args.templ_destroy.template_id_n, in->args.templ_destroy.template_id); break; + case TABLE_CREATE: + port_flow_template_table_create(in->port, in->args.table.id, + &in->args.table.attr, in->args.table.pat_templ_id_n, + in->args.table.pat_templ_id, in->args.table.act_templ_id_n, + in->args.table.act_templ_id); + break; + case TABLE_DESTROY: + port_flow_template_table_destroy(in->port, + in->args.table_destroy.table_id_n, + in->args.table_destroy.table_id); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index ecaf4ca03c..cefbc64c0c 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1653,6 +1653,49 @@ template_alloc(uint32_t id, struct port_template **template, return 0; } +static int +table_alloc(uint32_t id, struct port_table **table, + struct port_table **list) +{ + struct port_table *lst = *list; + struct port_table **ppt; + struct port_table *pt = NULL; + + *table = NULL; + if (id == UINT32_MAX) { + /* taking first available ID */ + if (lst) { + if (lst->id == UINT32_MAX - 1) { + printf("Highest table ID is already" + " assigned, delete it first\n"); + return -ENOMEM; + } + id = lst->id + 1; + } else { + id = 0; + } + } + pt = calloc(1, sizeof(*pt)); + if (!pt) { + printf("Allocation of 
table failed\n"); + return -ENOMEM; + } + ppt = list; + while (*ppt && (*ppt)->id > id) + ppt = &(*ppt)->next; + if (*ppt && (*ppt)->id == id) { + printf("Table #%u is already assigned," + " delete it first\n", id); + free(pt); + return -EINVAL; + } + pt->next = *ppt; + pt->id = id; + *ppt = pt; + *table = pt; + return 0; +} + /** Get info about flow management resources. */ int port_flow_get_info(portid_t port_id) @@ -2289,6 +2332,134 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n, return ret; } +/** Create table */ +int +port_flow_template_table_create(portid_t port_id, uint32_t id, + const struct rte_flow_template_table_attr *table_attr, + uint32_t nb_pattern_templates, uint32_t *pattern_templates, + uint32_t nb_actions_templates, uint32_t *actions_templates) +{ + struct rte_port *port; + struct port_table *pt; + struct port_template *temp = NULL; + int ret; + uint32_t i; + struct rte_flow_error error; + struct rte_flow_pattern_template + *flow_pattern_templates[nb_pattern_templates]; + struct rte_flow_actions_template + *flow_actions_templates[nb_actions_templates]; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + for (i = 0; i < nb_pattern_templates; ++i) { + bool found = false; + temp = port->pattern_templ_list; + while (temp) { + if (pattern_templates[i] == temp->id) { + flow_pattern_templates[i] = + temp->template.pattern_template; + found = true; + break; + } + temp = temp->next; + } + if (!found) { + printf("Pattern template #%u is invalid\n", + pattern_templates[i]); + return -EINVAL; + } + } + for (i = 0; i < nb_actions_templates; ++i) { + bool found = false; + temp = port->actions_templ_list; + while (temp) { + if (actions_templates[i] == temp->id) { + flow_actions_templates[i] = + temp->template.actions_template; + found = true; + break; + } + temp = temp->next; + } + if (!found) { + printf("Actions template #%u is invalid\n", + 
actions_templates[i]); + return -EINVAL; + } + } + ret = table_alloc(id, &pt, &port->table_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x22, sizeof(error)); + pt->table = rte_flow_template_table_create(port_id, table_attr, + flow_pattern_templates, nb_pattern_templates, + flow_actions_templates, nb_actions_templates, + &error); + + if (!pt->table) { + uint32_t destroy_id = pt->id; + port_flow_template_table_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + pt->nb_pattern_templates = nb_pattern_templates; + pt->nb_actions_templates = nb_actions_templates; + printf("Template table #%u created\n", pt->id); + return 0; +} + +/** Destroy table */ +int +port_flow_template_table_destroy(portid_t port_id, + uint32_t n, const uint32_t *table) +{ + struct rte_port *port; + struct port_table **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->table_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_table *pt = *tmp; + + if (table[i] != pt->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x33, sizeof(error)); + + if (pt->table && + rte_flow_template_table_destroy(port_id, + pt->table, + &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pt->next; + printf("Template table #%u destroyed\n", pt->id); + free(pt); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. 
*/ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index ce46d754a1..fd02498faf 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -177,6 +177,16 @@ struct port_template { } template; /**< PMD opaque template object */ }; +/** Descriptor for a flow table. */ +struct port_table { + struct port_table *next; /**< Next table in list. */ + struct port_table *tmp; /**< Temporary linking. */ + uint32_t id; /**< Table ID. */ + uint32_t nb_pattern_templates; /**< Number of pattern templates. */ + uint32_t nb_actions_templates; /**< Number of actions templates. */ + struct rte_flow_template_table *table; /**< PMD opaque template object */ +}; + /** Descriptor for a single flow. */ struct port_flow { struct port_flow *next; /**< Next flow in list. */ @@ -259,6 +269,7 @@ struct rte_port { uint8_t slave_flag; /**< bonding slave port */ struct port_template *pattern_templ_list; /**< Pattern templates. */ struct port_template *actions_templ_list; /**< Actions templates. */ + struct port_table *table_list; /**< Flow tables. */ struct port_flow *flow_list; /**< Associated flows. */ struct port_indirect_action *actions_list; /**< Associated indirect actions. 
*/ @@ -916,6 +927,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id, const struct rte_flow_action *masks); int port_flow_actions_template_destroy(portid_t port_id, uint32_t n, const uint32_t *template); +int port_flow_template_table_create(portid_t port_id, uint32_t id, + const struct rte_flow_template_table_attr *table_attr, + uint32_t nb_pattern_templates, uint32_t *pattern_templates, + uint32_t nb_actions_templates, uint32_t *actions_templates); +int port_flow_template_table_destroy(portid_t port_id, + uint32_t n, const uint32_t *table); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 2e6a23b12a..f63eb76a3a 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3364,6 +3364,19 @@ following sections. flow actions_template {port_id} destroy actions_template {id} [...] +- Create a table:: + + flow table {port_id} create + [table_id {id}] + [group {group_id}] [priority {level}] [ingress] [egress] [transfer] + rules_number {number} + pattern_template {pattern_template_id} + actions_template {actions_template_id} + +- Destroy a table:: + + flow table {port_id} destroy table {id} [...] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3549,6 +3562,46 @@ The usual error message is shown when an actions template cannot be destroyed:: Caught error type [...] ([...]): [...] +Creating template table +~~~~~~~~~~~~~~~~~~~~~~~ + +``flow template_table create`` creates the specified template table. 
+It is bound to ``rte_flow_template_table_create()``:: + + flow template_table {port_id} create + [table_id {id}] [group {group_id}] + [priority {level}] [ingress] [egress] [transfer] + rules_number {number} + pattern_template {pattern_template_id} + actions_template {actions_template_id} + +If successful, it will show:: + + Template table #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +Destroying flow table +~~~~~~~~~~~~~~~~~~~~~ + +``flow template_table destroy`` destroys one or more template tables +from their table ID (as returned by ``flow template_table create``), +this command calls ``rte_flow_template_table_destroy()`` as many +times as necessary:: + + flow template_table {port_id} destroy table {id} [...] + +If successful, it will show:: + + Template table #[...] destroyed + +It does not report anything for table IDs that do not exist. +The usual error message is shown when a table cannot be destroyed:: + + Caught error type [...] ([...]): [...] 
+ Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From patchwork Mon Feb 21 23:02:37 2022
From: Alexander Kozyrev
Subject: [PATCH v9 08/11] app/testpmd: add async flow create/destroy operations
Date: Tue, 22 Feb 2022 01:02:37 +0200
Message-ID: <20220221230240.2409665-9-akozyrev@nvidia.com>
In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API. Provide the command line interface for enqueueing flow creation/destruction operations. Usage example: testpmd> flow queue 0 create 0 postpone no template_table 6 pattern_template 0 actions_template 0 pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end testpmd> flow queue 0 destroy 0 postpone yes rule 0 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 267 +++++++++++++++++++- app/test-pmd/config.c | 166 ++++++++++++ app/test-pmd/testpmd.h | 7 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 57 +++++ 4 files changed, 496 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 5715899c95..d359127df9 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -59,6 +59,7 @@ enum index { COMMON_PATTERN_TEMPLATE_ID, COMMON_ACTIONS_TEMPLATE_ID, COMMON_TABLE_ID, + COMMON_QUEUE_ID, /* TOP-level command. */ ADD, @@ -92,6 +93,7 @@ enum index { ISOLATE, TUNNEL, FLEX, + QUEUE, /* Flex arguments */ FLEX_ITEM_INIT, @@ -120,6 +122,22 @@ enum index { ACTIONS_TEMPLATE_SPEC, ACTIONS_TEMPLATE_MASK, + /* Queue arguments. */ + QUEUE_CREATE, + QUEUE_DESTROY, + + /* Queue create arguments. */ + QUEUE_CREATE_ID, + QUEUE_CREATE_POSTPONE, + QUEUE_TEMPLATE_TABLE, + QUEUE_PATTERN_TEMPLATE, + QUEUE_ACTIONS_TEMPLATE, + QUEUE_SPEC, + + /* Queue destroy arguments. */ + QUEUE_DESTROY_ID, + QUEUE_DESTROY_POSTPONE, + /* Table arguments. */ TABLE_CREATE, TABLE_DESTROY, @@ -918,6 +936,8 @@ struct token { struct buffer { enum index command; /**< Flow command. */ portid_t port; /**< Affected port ID. */ + queueid_t queue; /** Async queue ID. 
*/ + bool postpone; /** Postpone async operation */ union { struct { struct rte_flow_port_attr port_attr; @@ -948,6 +968,7 @@ struct buffer { uint32_t action_id; } ia; /* Indirect action query arguments */ struct { + uint32_t table_id; uint32_t pat_templ_id; uint32_t act_templ_id; struct rte_flow_attr attr; @@ -1103,6 +1124,18 @@ static const enum index next_table_destroy_attr[] = { ZERO, }; +static const enum index next_queue_subcmd[] = { + QUEUE_CREATE, + QUEUE_DESTROY, + ZERO, +}; + +static const enum index next_queue_destroy_attr[] = { + QUEUE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2213,6 +2246,12 @@ static int parse_table(struct context *, const struct token *, static int parse_table_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_qo(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_qo_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2288,6 +2327,8 @@ static int comp_actions_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_table_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_queue_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. */ static const struct token token_list[] = { @@ -2459,6 +2500,13 @@ static const struct token token_list[] = { .call = parse_int, .comp = comp_table_id, }, + [COMMON_QUEUE_ID] = { + .name = "{queue_id}", + .type = "QUEUE_ID", + .help = "queue id", + .call = parse_int, + .comp = comp_queue_id, + }, /* Top-level command. 
*/ [FLOW] = { .name = "flow", @@ -2481,7 +2529,8 @@ static const struct token token_list[] = { QUERY, ISOLATE, TUNNEL, - FLEX)), + FLEX, + QUEUE)), .call = parse_init, }, /* Top-level command. */ @@ -2784,6 +2833,84 @@ static const struct token token_list[] = { .call = parse_table, }, /* Top-level command. */ + [QUEUE] = { + .name = "queue", + .help = "queue a flow rule operation", + .next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_qo, + }, + /* Sub-level commands. */ + [QUEUE_CREATE] = { + .name = "create", + .help = "create a flow rule", + .next = NEXT(NEXT_ENTRY(QUEUE_TEMPLATE_TABLE), + NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qo, + }, + [QUEUE_DESTROY] = { + .name = "destroy", + .help = "destroy a flow rule", + .next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID), + NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qo_destroy, + }, + /* Queue arguments. 
*/ + [QUEUE_TEMPLATE_TABLE] = { + .name = "template_table", + .help = "specify table id", + .next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE), + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.table_id)), + .call = parse_qo, + }, + [QUEUE_PATTERN_TEMPLATE] = { + .name = "pattern_template", + .help = "specify pattern template index", + .next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE), + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.pat_templ_id)), + .call = parse_qo, + }, + [QUEUE_ACTIONS_TEMPLATE] = { + .name = "actions_template", + .help = "specify actions template index", + .next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE), + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.act_templ_id)), + .call = parse_qo, + }, + [QUEUE_CREATE_POSTPONE] = { + .name = "postpone", + .help = "postpone create operation", + .next = NEXT(NEXT_ENTRY(ITEM_PATTERN), + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, postpone)), + .call = parse_qo, + }, + [QUEUE_DESTROY_POSTPONE] = { + .name = "postpone", + .help = "postpone destroy operation", + .next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID), + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, postpone)), + .call = parse_qo_destroy, + }, + [QUEUE_DESTROY_ID] = { + .name = "rule", + .help = "specify rule id to destroy", + .next = NEXT(next_queue_destroy_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.destroy.rule)), + .call = parse_qo_destroy, + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8503,6 +8630,111 @@ parse_table_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for queue create commands.
*/ +static int +parse_qo(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != QUEUE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case QUEUE_CREATE: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case QUEUE_TEMPLATE_TABLE: + case QUEUE_PATTERN_TEMPLATE: + case QUEUE_ACTIONS_TEMPLATE: + case QUEUE_CREATE_POSTPONE: + return len; + case ITEM_PATTERN: + out->args.vc.pattern = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + ctx->object = out->args.vc.pattern; + ctx->objmask = NULL; + return len; + case ACTIONS: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.vc.pattern + + out->args.vc.pattern_n), + sizeof(double)); + ctx->object = out->args.vc.actions; + ctx->objmask = NULL; + return len; + default: + return -1; + } +} + +/** Parse tokens for queue destroy command. */ +static int +parse_qo_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *flow_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command || out->command == QUEUE) { + if (ctx->curr != QUEUE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.destroy.rule = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + switch (ctx->curr) { + case QUEUE_DESTROY_ID: + flow_id = out->args.destroy.rule + + out->args.destroy.rule_n++; + if ((uint8_t *)flow_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = flow_id; + ctx->objmask = NULL; + return len; + case QUEUE_DESTROY_POSTPONE: + return len; + default: + return -1; + } +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9544,6 +9776,28 @@ comp_table_id(struct context *ctx, const struct token *token, return i; } +/** Complete available queue IDs. */ +static int +comp_queue_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (i = 0; i < port->queue_nb; i++) { + if (buf && i == ent) + return snprintf(buf, size, "%u", i); + } + if (buf) + return -1; + return i; +} + /** Internal context. 
*/ static struct context cmd_flow_context; @@ -9855,6 +10109,17 @@ cmd_flow_parsed(const struct buffer *in) in->args.table_destroy.table_id_n, in->args.table_destroy.table_id); break; + case QUEUE_CREATE: + port_queue_flow_create(in->port, in->queue, in->postpone, + in->args.vc.table_id, in->args.vc.pat_templ_id, + in->args.vc.act_templ_id, in->args.vc.pattern, + in->args.vc.actions); + break; + case QUEUE_DESTROY: + port_queue_flow_destroy(in->port, in->queue, in->postpone, + in->args.destroy.rule_n, + in->args.destroy.rule); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index cefbc64c0c..d7ab57b124 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2460,6 +2460,172 @@ port_flow_template_table_destroy(portid_t port_id, return ret; } +/** Enqueue create flow rule operation. */ +int +port_queue_flow_create(portid_t port_id, queueid_t queue_id, + bool postpone, uint32_t table_id, + uint32_t pattern_idx, uint32_t actions_idx, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions) +{ + struct rte_flow_op_attr op_attr = { .postpone = postpone }; + struct rte_flow_op_result comp = { 0 }; + struct rte_flow *flow; + struct rte_port *port; + struct port_flow *pf; + struct port_table *pt; + uint32_t id = 0; + bool found; + int ret = 0; + struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL }; + struct rte_flow_action_age *age = age_action_get(actions); + + port = &ports[port_id]; + if (port->flow_list) { + if (port->flow_list->id == UINT32_MAX) { + printf("Highest rule ID is already assigned," + " delete it first"); + return -ENOMEM; + } + id = port->flow_list->id + 1; + } + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + found = false; + pt = port->table_list; + while (pt) { + if (table_id == pt->id) { + found = true; + break; + } + pt = pt->next; + 
} + if (!found) { + printf("Table #%u is invalid\n", table_id); + return -EINVAL; + } + + if (pattern_idx >= pt->nb_pattern_templates) { + printf("Pattern template index #%u is invalid," + " %u templates present in the table\n", + pattern_idx, pt->nb_pattern_templates); + return -EINVAL; + } + if (actions_idx >= pt->nb_actions_templates) { + printf("Actions template index #%u is invalid," + " %u templates present in the table\n", + actions_idx, pt->nb_actions_templates); + return -EINVAL; + } + + pf = port_flow_new(NULL, pattern, actions, &error); + if (!pf) + return port_flow_complain(&error); + if (age) { + pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW; + age->context = &pf->age_type; + } + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x11, sizeof(error)); + flow = rte_flow_async_create(port_id, queue_id, &op_attr, pt->table, + pattern, pattern_idx, actions, actions_idx, NULL, &error); + if (!flow) { + uint32_t flow_id = pf->id; + port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id); + return port_flow_complain(&error); + } + + while (ret == 0) { + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x22, sizeof(error)); + ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error); + if (ret < 0) { + printf("Failed to pull queue\n"); + return -EINVAL; + } + } + + pf->next = port->flow_list; + pf->id = id; + pf->flow = flow; + port->flow_list = pf; + printf("Flow rule #%u creation enqueued\n", pf->id); + return 0; +} + +/** Enqueue number of destroy flow rules operations. 
*/ +int +port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, + bool postpone, uint32_t n, const uint32_t *rule) +{ + struct rte_flow_op_attr op_attr = { .postpone = postpone }; + struct rte_flow_op_result comp = { 0 }; + struct rte_port *port; + struct port_flow **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + tmp = &port->flow_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_flow *pf = *tmp; + + if (rule[i] != pf->id) + continue; + /* + * Poisoning to make sure PMD + * update it in case of error. + */ + memset(&error, 0x33, sizeof(error)); + if (rte_flow_async_destroy(port_id, queue_id, &op_attr, + pf->flow, NULL, &error)) { + ret = port_flow_complain(&error); + continue; + } + + while (ret == 0) { + /* + * Poisoning to make sure PMD + * update it in case of error. + */ + memset(&error, 0x44, sizeof(error)); + ret = rte_flow_pull(port_id, queue_id, + &comp, 1, &error); + if (ret < 0) { + printf("Failed to pull queue\n"); + return -EINVAL; + } + } + + printf("Flow rule #%u destruction enqueued\n", pf->id); + *tmp = pf->next; + free(pf); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. 
*/ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index fd02498faf..62e874eaaf 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -933,6 +933,13 @@ int port_flow_template_table_create(portid_t port_id, uint32_t id, uint32_t nb_actions_templates, uint32_t *actions_templates); int port_flow_template_table_destroy(portid_t port_id, uint32_t n, const uint32_t *table); +int port_queue_flow_create(portid_t port_id, queueid_t queue_id, + bool postpone, uint32_t table_id, + uint32_t pattern_idx, uint32_t actions_idx, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions); +int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, + bool postpone, uint32_t n, const uint32_t *rule); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index f63eb76a3a..194b350932 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3384,6 +3384,20 @@ following sections. pattern {item} [/ {item} [...]] / end actions {action} [/ {action} [...]] / end +- Enqueue creation of a flow rule:: + + flow queue {port_id} create {queue_id} + [postpone {boolean}] template_table {table_id} + pattern_template {pattern_template_index} + actions_template {actions_template_index} + pattern {item} [/ {item} [...]] / end + actions {action} [/ {action} [...]] / end + +- Enqueue destruction of specific flow rules:: + + flow queue {port_id} destroy {queue_id} + [postpone {boolean}] rule {rule_id} [...] + - Create a flow rule:: flow create {port_id} @@ -3708,6 +3722,30 @@ one. **All unspecified object values are automatically initialized to 0.** +Enqueueing creation of flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue create`` adds creation operation of a flow rule to a queue. 
+It is bound to ``rte_flow_async_create()``:: + + flow queue {port_id} create {queue_id} + [postpone {boolean}] template_table {table_id} + pattern_template {pattern_template_index} + actions_template {actions_template_index} + pattern {item} [/ {item} [...]] / end + actions {action} [/ {action} [...]] / end + +If successful, it will return a flow rule ID usable with other commands:: + + Flow rule #[...] creation enqueued + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same pattern items and actions as ``flow create``; +their format is described in `Creating flow rules`_. + Attributes ^^^^^^^^^^ @@ -4430,6 +4468,25 @@ Non-existent rule IDs are ignored:: Flow rule #0 destroyed testpmd> +Enqueueing destruction of flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue destroy`` enqueues destruction operations for one or more rules, +identified by their rule IDs (as returned by ``flow queue create``); +it calls ``rte_flow_async_destroy()`` as many times as necessary:: + + flow queue {port_id} destroy {queue_id} + [postpone {boolean}] rule {rule_id} [...] + +If successful, it will show:: + + Flow rule #[...] destruction enqueued + +It does not report anything for rule IDs that do not exist. The usual error +message is shown when a rule cannot be destroyed:: + + Caught error type [...] ([...]): [...]
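The enqueue semantics described in these sections (operations are queued per port queue, optionally postponed, and completion results are fetched separately) can be illustrated with a small model. This is an editor's sketch of the queue behavior only, written in Python for brevity; it is not DPDK or testpmd code, and all names in it are hypothetical:

```python
class FlowOpQueue:
    """Toy model of one flow operation queue: operations are enqueued,
    postponed ones wait for an explicit push, results are fetched with pull."""

    def __init__(self):
        self.pending = []     # postponed operations, not yet sent to the "NIC"
        self.completed = []   # completion results waiting to be pulled
        self.next_rule_id = 0

    def create(self, postpone=False):
        """Enqueue a rule-creation operation; return its rule ID."""
        rule_id = self.next_rule_id
        self.next_rule_id += 1
        op = ("create", rule_id)
        if postpone:
            self.pending.append(op)    # waits for an explicit push()
        else:
            self.completed.append(op)  # dispatched immediately
        return rule_id

    def destroy(self, rule_id, postpone=False):
        """Enqueue a rule-destruction operation."""
        (self.pending if postpone else self.completed).append(("destroy", rule_id))

    def push(self):
        """Flush all postponed operations to the device; return how many."""
        n = len(self.pending)
        self.completed.extend(self.pending)
        self.pending.clear()
        return n

    def pull(self):
        """Drain and return completion results (the pull step)."""
        results, self.completed = self.completed, []
        return results
```

A postponed operation stays in the queue until an explicit push, which is what the ``postpone {boolean}`` argument in the commands above controls.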
+ Querying flow rules ~~~~~~~~~~~~~~~~~~~ From patchwork Mon Feb 21 23:02:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 107921 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Alexander Kozyrev Subject: [PATCH v9 09/11] app/testpmd: add flow queue push operation Date: Tue, 22 Feb 2022 01:02:38 +0200 Message-ID: <20220221230240.2409665-10-akozyrev@nvidia.com> In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com> References: <20220220034409.2226860-1-akozyrev@nvidia.com> <20220221230240.2409665-1-akozyrev@nvidia.com> Add testpmd support for the rte_flow_push API. Provide the command line interface for pushing operations. Usage example: flow push 0 queue 0 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 56 ++++++++++++++++++++- app/test-pmd/config.c | 28 +++++++++++ app/test-pmd/testpmd.h | 1 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++ 4 files changed, 105 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index d359127df9..af36975cdf 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -94,6 +94,7 @@ enum index { TUNNEL, FLEX, QUEUE, + PUSH, /* Flex arguments */ FLEX_ITEM_INIT, @@ -138,6 +139,9 @@ enum index { QUEUE_DESTROY_ID, QUEUE_DESTROY_POSTPONE, + /* Push arguments. */ + PUSH_QUEUE, + /* Table arguments. */ TABLE_CREATE, TABLE_DESTROY, @@ -2252,6 +2256,9 @@ static int parse_qo(struct context *, const struct token *, static int parse_qo_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_push(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2530,7 +2537,8 @@ static const struct token token_list[] = { ISOLATE, TUNNEL, FLEX, - QUEUE)), + QUEUE, + PUSH)), .call = parse_init, }, /* Top-level command. */ @@ -2911,6 +2919,21 @@ static const struct token token_list[] = { .call = parse_qo_destroy, }, /* Top-level command. */ + [PUSH] = { + .name = "push", + .help = "push enqueued operations", + .next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_push, + }, + /* Sub-level commands.
*/ + [PUSH_QUEUE] = { + .name = "queue", + .help = "specify queue id", + .next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8735,6 +8758,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token, } } +/** Parse tokens for push queue command. */ +static int +parse_push(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != PUSH) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + } + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -10120,6 +10171,9 @@ cmd_flow_parsed(const struct buffer *in) in->args.destroy.rule_n, in->args.destroy.rule); break; + case PUSH: + port_queue_flow_push(in->port, in->queue); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index d7ab57b124..9ffb7d88dc 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2626,6 +2626,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, return ret; } +/** Push all the queue operations in the queue to the NIC. 
*/ +int +port_queue_flow_push(portid_t port_id, queueid_t queue_id) +{ + struct rte_port *port; + struct rte_flow_error error; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + memset(&error, 0x55, sizeof(error)); + ret = rte_flow_push(port_id, queue_id, &error); + if (ret < 0) { + printf("Failed to push operations in the queue\n"); + return -EINVAL; + } + printf("Queue #%u operations pushed\n", queue_id); + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 62e874eaaf..24a43fd82c 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions); int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, bool postpone, uint32_t n, const uint32_t *rule); +int port_queue_flow_push(portid_t port_id, queueid_t queue_id); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 194b350932..4f1f908d4a 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3398,6 +3398,10 @@ following sections. flow queue {port_id} destroy {queue_id} [postpone {boolean}] rule {rule_id} [...] +- Push enqueued operations:: + + flow push {port_id} queue {queue_id} + - Create a flow rule:: flow create {port_id} @@ -3616,6 +3620,23 @@ The usual error message is shown when a table cannot be destroyed:: Caught error type [...] ([...]): [...] 
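After enqueueing an operation, testpmd polls the queue until the completion result arrives (the ``while (ret == 0)`` loops around ``rte_flow_pull()`` in ``config.c`` above). Below is an editor's sketch of that polling pattern in Python, with a stub callable standing in for the PMD queue; the function name and return convention are illustrative, not part of the DPDK API:

```python
def pull_until_complete(pull, max_polls=1000):
    """Poll `pull()` (a stand-in for rte_flow_pull) until a result arrives.

    The stub returns 0 when nothing has completed yet, a positive count when
    results arrived, and a negative value on error -- mirroring the loop
    testpmd runs after enqueueing a create or destroy operation.
    """
    for _ in range(max_polls):
        ret = pull()
        if ret < 0:
            # testpmd prints "Failed to pull queue" and bails out here
            raise RuntimeError("Failed to pull queue")
        if ret > 0:
            return ret
    raise TimeoutError("operation did not complete")
```

Returning 0 means "nothing completed yet", so the loop keeps polling; a negative value is reported as a pull failure, matching the testpmd error path.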
+Pushing enqueued operations +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow push`` pushes all the outstanding enqueued operations +to the underlying device immediately. +It is bound to ``rte_flow_push()``:: + + flow push {port_id} queue {queue_id} + +If successful, it will show:: + + Queue #[...] operations pushed + +The usual error message is shown when operations cannot be pushed:: + + Caught error type [...] ([...]): [...] + Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From patchwork Mon Feb 21 23:02:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 107922 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5F15CA034F; Tue, 22 Feb 2022 00:04:19 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5475740E64; Tue, 22 Feb 2022 00:03:34 +0100 (CET) Received: from NAM12-BN8-obe.outbound.protection.outlook.com (mail-bn8nam12on2078.outbound.protection.outlook.com [40.107.237.78]) by mails.dpdk.org (Postfix) with ESMTP id C35AC411B2 for ; Tue, 22 Feb 2022 00:03:31 +0100 (CET) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=S4AM7cPxBZVLFljsGsYTUf5SKpIdShaVwKCOKTIn8CB5pTzxjMvUg1kz6if49fmE/0EFxrG/lVnwCn7gIGxcGrJwoxOUkJZfWah7xz62jmktHilY3nNSsn1MdS8V6rPxPw7VpbvXEVshjumdlljoaaSjVUx6XAOt9BqJbs4+zHmuve+iLg3xnMvmrbg3S3uAy3Lb8emy+seDjf/LuI6cs1ob9nFNHi04WFST5AnRpuvYZaTbs5Jy8De4BX/DOtydkvC7SqMyuk3EZ6ny2fTogGmfiBepXVoLJMA9SlyxALBxaNECRzXT7rYk766EbrInNmkMnmvXFnpscAbqoQ8slQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; 
h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=qtsVJTt46DiE1L7aJlShc0x3CLUto2WyGT60cJf4858=; b=MoBMMdZFEJ6F0toLJWdkOQqAq2ukBaaXwkoKGUCIr/ssTQQyllOydPrecGvj0ZFQ+qJXlJLnv/zX5Z8Ff9dQXlJZzrtwe/9PM05S7sBsqgo27lCUxup4WVqz3cQIx1KkX8JSg1Pv5rkrJV5fQlNXMOfkKOrtlglPsqTfZ1/UhrF6FZ/LcejDtU0l+Fh1T+kHRhaFBLcGJH4QTadnBwOJRio+zlJ772naMNAbLHeqcJkHINbu9R3so4nM5F/HUrZbVWz8MEWWZuaBEY4ppPYYY1mbUR52CeNtLc0KT+kHV4TLByL8l09lBX3+huOWu8Pi+9d5t3sDAg2RChWJR99Chg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 12.22.5.236) smtp.rcpttodomain=intel.com smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=qtsVJTt46DiE1L7aJlShc0x3CLUto2WyGT60cJf4858=; b=oTP+QDEl1iBcbFwhd9NhfDQIM2E8CNTgWh7KoMD7VV/IjQ1lTHQnnJMxJlQm8f+wmMXJ0fAKHVkcHAa2grIrcjJPKUJdYRgr9rn3ZY+pehc8ZnUTc1Que9XYtlxM26iY/lIiJkXhs4VpQQ/kek+sCIwexYhXhEW33HxL5DZXaN2AM8oHMlbCurCc3E04bk8CEuelfk7Q9roHMrUalif88zFXlwh6G5n44jRCuM41PZgKwcT13H+DF+3uw/GP3cGjYjnf7KqYCWCVZdMJlu6vopjAZGhtG4jd5ERxT7WhIVyLuFZs+XJv3QvAZfWME0ABkqN4x08MRcE1Jg3CUpv7gA== Received: from LV2PR12MB5992.namprd12.prod.outlook.com (2603:10b6:408:14e::17) by BN7PR12MB2803.namprd12.prod.outlook.com (2603:10b6:408:32::22) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4995.17; Mon, 21 Feb 2022 23:03:30 +0000 Received: from MW4P222CA0012.NAMP222.PROD.OUTLOOK.COM (2603:10b6:303:114::17) by LV2PR12MB5992.namprd12.prod.outlook.com (2603:10b6:408:14e::17) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4995.14; Mon, 21 Feb 2022 23:03:29 +0000 Received: from 
CO1NAM11FT018.eop-nam11.prod.protection.outlook.com (2603:10b6:303:114:cafe::54) by MW4P222CA0012.outlook.office365.com (2603:10b6:303:114::17) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4995.16 via Frontend Transport; Mon, 21 Feb 2022 23:03:29 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 12.22.5.236) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 12.22.5.236 as permitted sender) receiver=protection.outlook.com; client-ip=12.22.5.236; helo=mail.nvidia.com; Received: from mail.nvidia.com (12.22.5.236) by CO1NAM11FT018.mail.protection.outlook.com (10.13.175.16) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4995.15 via Frontend Transport; Mon, 21 Feb 2022 23:03:28 +0000 Received: from rnnvmail201.nvidia.com (10.129.68.8) by DRHQMAIL109.nvidia.com (10.27.9.19) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Mon, 21 Feb 2022 23:03:28 +0000 Received: from pegasus01.mtr.labs.mlnx (10.126.230.35) by rnnvmail201.nvidia.com (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.9; Mon, 21 Feb 2022 15:03:25 -0800 From: Alexander Kozyrev To: CC: , , , , , , , , , Subject: [PATCH v9 10/11] app/testpmd: add flow queue pull operation Date: Tue, 22 Feb 2022 01:02:39 +0200 Message-ID: <20220221230240.2409665-11-akozyrev@nvidia.com> X-Mailer: git-send-email 2.18.2 In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com> References: <20220220034409.2226860-1-akozyrev@nvidia.com> <20220221230240.2409665-1-akozyrev@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.126.230.35] X-ClientProxiedBy: rnnvmail203.nvidia.com (10.129.68.9) To rnnvmail201.nvidia.com (10.129.68.8) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email 
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling operations results.
Usage example: flow pull 0 queue 0

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 56 +++++++++++++++-
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 127 insertions(+), 29 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index af36975cdf..d4b72724e6 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -95,6 +95,7 @@ enum index {
 	FLEX,
 	QUEUE,
 	PUSH,
+	PULL,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -142,6 +143,9 @@ enum index {
 	/* Push arguments. */
 	PUSH_QUEUE,
 
+	/* Pull arguments. */
+	PULL_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2259,6 +2263,9 @@ static int parse_qo_destroy(struct context *, const struct token *,
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
+static int parse_pull(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2538,7 +2545,8 @@ static const struct token token_list[] = {
 			      TUNNEL,
 			      FLEX,
 			      QUEUE,
-			      PUSH)),
+			      PUSH,
+			      PULL)),
 		.call = parse_init,
 	},
 	/* Top-level command.
 */
@@ -2934,6 +2942,21 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 	},
 	/* Top-level command. */
+	[PULL] = {
+		.name = "pull",
+		.help = "pull flow operations results",
+		.next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_pull,
+	},
+	/* Sub-level commands. */
+	[PULL_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8786,6 +8809,34 @@ parse_push(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for pull command. */
+static int
+parse_pull(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer.
 */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PULL)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	   const char *str, unsigned int len,
@@ -10174,6 +10225,9 @@ cmd_flow_parsed(const struct buffer *in)
 	case PUSH:
 		port_queue_flow_push(in->port, in->queue);
 		break;
+	case PULL:
+		port_queue_flow_pull(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9ffb7d88dc..158d1b38a8 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2469,14 +2469,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			const struct rte_flow_action *actions)
 {
 	struct rte_flow_op_attr op_attr = { .postpone = postpone };
-	struct rte_flow_op_result comp = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
 	struct port_table *pt;
 	uint32_t id = 0;
 	bool found;
-	int ret = 0;
 	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
 	struct rte_flow_action_age *age = age_action_get(actions);
 
@@ -2539,16 +2537,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return port_flow_complain(&error);
 	}
 
-	while (ret == 0) {
-		/* Poisoning to make sure PMDs update it in case of error.
 */
-		memset(&error, 0x22, sizeof(error));
-		ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error);
-		if (ret < 0) {
-			printf("Failed to pull queue\n");
-			return -EINVAL;
-		}
-	}
-
 	pf->next = port->flow_list;
 	pf->id = id;
 	pf->flow = flow;
@@ -2563,7 +2551,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			bool postpone, uint32_t n, const uint32_t *rule)
 {
 	struct rte_flow_op_attr op_attr = { .postpone = postpone };
-	struct rte_flow_op_result comp = { 0 };
 	struct rte_port *port;
 	struct port_flow **tmp;
 	uint32_t c = 0;
@@ -2599,21 +2586,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			ret = port_flow_complain(&error);
 			continue;
 		}
-
-		while (ret == 0) {
-			/*
-			 * Poisoning to make sure PMD
-			 * update it in case of error.
-			 */
-			memset(&error, 0x44, sizeof(error));
-			ret = rte_flow_pull(port_id, queue_id,
-					    &comp, 1, &error);
-			if (ret < 0) {
-				printf("Failed to pull queue\n");
-				return -EINVAL;
-			}
-		}
-
 		printf("Flow rule #%u destruction enqueued\n", pf->id);
 		*tmp = pf->next;
 		free(pf);
@@ -2654,6 +2626,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Pull queue operation results from the queue.
 */
+int
+port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_op_result *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_op_result));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_pull(port_id, queue_id, res,
+			    port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to pull operation results\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 24a43fd82c..5ea2408a0b 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -941,6 +941,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 4f1f908d4a..5080ddb256 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3402,6 +3402,10 @@ following sections.
 
   flow push {port_id} queue {queue_id}
 
+- Pull all operations results from a queue::
+
+  flow pull {port_id} queue {queue_id}
+
 - Create a flow rule::
 
   flow create {port_id}
@@ -3637,6 +3641,23 @@ The usual error message is shown when operations cannot be pushed::
 
   Caught error type [...] ([...]): [...]
 
+Pulling flow operations results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pull`` asks the underlying device about flow queue operations
+results and returns all the processed (successfully or not) operations.
+It is bound to ``rte_flow_pull()``::
+
+  flow pull {port_id} queue {queue_id}
+
+If successful, it will show::
+
+  Queue #[...] pulled #[...] operations (#[...] failed, #[...] succeeded)
+
+The usual error message is shown when operations results cannot be pulled::
+
+  Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -3767,6 +3788,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
@@ -4508,6 +4531,8 @@ message is shown when a rule cannot be destroyed::
 
   Caught error type [...] ([...]): [...]
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~

From patchwork Mon Feb 21 23:02:40 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 107923
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v9 11/11] app/testpmd: add async indirect actions operations
Date: Tue, 22 Feb 2022 01:02:40 +0200
Message-ID: <20220221230240.2409665-12-akozyrev@nvidia.com>
In-Reply-To: <20220221230240.2409665-1-akozyrev@nvidia.com>
References: <20220220034409.2226860-1-akozyrev@nvidia.com> <20220221230240.2409665-1-akozyrev@nvidia.com>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for operations dequeue.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d4b72724e6..b5f1191e55 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -127,6 +127,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments.
 */
 	QUEUE_CREATE_ID,
@@ -140,6 +141,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+
 	/* Push arguments. */
 	PUSH_QUEUE,
@@ -1135,6 +1156,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
@@ -1144,6 +1166,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2260,6 +2312,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *,
 unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
@@ -2873,6 +2931,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue arguments. */
 	[QUEUE_TEMPLATE_TABLE] = {
 		.name = "template table",
@@ -2926,6 +2991,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments.
 */
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command.
 */
 	[PUSH] = {
 		.name = "push",
@@ -6501,6 +6650,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer.
 */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+			    + out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -10228,6 +10481,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->postpone,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					in->queue, in->postpone,
+					in->args.ia_destroy.action_id_n,
+					in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+					in->queue, in->postpone,
+					in->args.vc.attr.group,
+					in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c
 b/app/test-pmd/config.c
index 158d1b38a8..cc8e7aa138 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2598,6 +2598,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool postpone, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_op_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_async_action_handle_create(port_id, queue_id,
+					&attr, conf, action, NULL, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 postpone, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation.
 */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool postpone,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_op_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_async_action_handle_destroy(port_id,
+				queue_id, &attr, pia->handle, NULL, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation.
+ */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool postpone, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_op_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (rte_flow_async_action_handle_update(port_id, queue_id, &attr,
+					action_handle, action, NULL, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Push all the queue operations in the queue to the NIC. */
 int
 port_queue_flow_push(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5ea2408a0b..31f766c965 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_indir_action_conf *conf,
+				    const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool postpone,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 5080ddb256..1083c6d538 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4792,6 +4792,31 @@ port 0::
 
    testpmd> flow indirect_action 0 create action_id \
       ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` adds a creation operation for an
+indirect action to a queue. It is bound to
+``rte_flow_async_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+      [postpone {boolean}] [action_id {indirect_action_id}]
+      ingress [egress] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -4821,6 +4846,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` adds an update operation for an
+indirect action to a queue. It is bound to
+``rte_flow_async_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -4844,6 +4888,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` adds an operation to a queue that
+destroys one or more indirect actions, selected by the indirect action IDs
+returned by ``flow queue {port_id} indirect_action {queue_id} create``.
+It is bound to ``rte_flow_async_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~
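Taken together, the commands documented above allow a fully queued indirect
action life cycle. A possible session is sketched below; the port, queue and
action IDs are illustrative, the ``rss`` parameters mirror the existing
synchronous examples, and the output lines correspond to the ``printf``
strings added in config.c::

   testpmd> flow queue 0 indirect_action 0 create action_id 100 \
      ingress action rss queues 0 1 end / end
   Indirect action #100 creation queued
   testpmd> flow queue 0 indirect_action 0 update 100 \
      action rss queues 0 3 end / end
   Indirect action #100 update queued
   testpmd> flow queue 0 indirect_action 0 destroy action_id 100
   Indirect action #100 destruction queued

Each enqueued operation only takes effect in the PMD once the queue is
pushed, and its completion status must be retrieved with the pull command
introduced earlier in this series.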
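A side note on the list handling in ``port_queue_action_handle_destroy()``:
it walks ``port->actions_list`` through a pointer-to-pointer
(``tmp = &port->actions_list; ... tmp = &(*tmp)->next``), which lets it
unlink a matching entry by rewriting the incoming link, with no separate
"previous node" bookkeeping. A minimal standalone sketch of that idiom
(the ``node``/``push``/``remove_by_id`` names are invented for illustration,
not DPDK API):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for testpmd's struct port_indirect_action. */
struct node {
	unsigned int id;
	struct node *next;
};

/* Prepend a new node to the list and return the new head. */
static struct node *
push(struct node *head, unsigned int id)
{
	struct node *n = malloc(sizeof(*n));

	n->id = id;
	n->next = head;
	return n;
}

/* Unlink and free the node carrying "id"; return 1 if found, 0 otherwise.
 * Because "tmp" points at the link slot rather than the node, head and
 * interior nodes are removed by the same "*tmp = victim->next" assignment. */
static int
remove_by_id(struct node **head, unsigned int id)
{
	struct node **tmp = head;

	while (*tmp) {
		if ((*tmp)->id == id) {
			struct node *victim = *tmp;

			*tmp = victim->next; /* rewrite the incoming link */
			free(victim);
			return 1;
		}
		tmp = &(*tmp)->next;
	}
	return 0;
}
```

The testpmd code combines this walk with an inner loop over the requested
IDs so several actions can be unlinked in one pass.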