From patchwork Wed Feb 9 21:38:00 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 107184
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v4 01/10] ethdev: introduce flow pre-configuration hints
Date: Wed, 9 Feb 2022 23:38:00 +0200
Message-ID: <20220209213809.1208269-2-akozyrev@nvidia.com>
In-Reply-To: <20220209213809.1208269-1-akozyrev@nvidia.com>
References: <20220206032526.816079-1-akozyrev@nvidia.com>
 <20220209213809.1208269-1-akozyrev@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

Flow rule creation/destruction at a large scale incurs a performance
penalty and may negatively impact packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during flow rule creation.

To optimize the insertion rate, a PMD may use hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows the application to pre-allocate all the needed resources
beforehand. These resources can then be used at a later stage without
costly allocations. Every PMD may use only a subset of the hints and
ignore the unused ones, or fail in case the requested configuration
is not supported.

The rte_flow_info_get() function is available to retrieve information
about the supported pre-configurable resources. Both of these functions
must be called before any other usage of the flow API engine.

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 doc/guides/prog_guide/rte_flow.rst     |  37 +++++++++
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 lib/ethdev/rte_flow.c                  |  40 +++++++++
 lib/ethdev/rte_flow.h                  | 108 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  10 +++
 lib/ethdev/version.map                 |   2 +
 6 files changed, 203 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b4aa9c47c2..72fb1132ac 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3589,6 +3589,43 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some parameters at the initialization phase about
+the rules engine configuration and/or expected flow rules characteristics.
+These parameters may be used by the PMD to preallocate resources and
+configure the NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API management configuration and
+pre-allocates needed resources beforehand to avoid costly allocations later.
+The expected number of counters or meters in an application, for example,
+allows the PMD to prepare and optimize the NIC memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                      const struct rte_flow_port_attr *port_attr,
+                      struct rte_flow_error *error);
+
+Information about resources that can benefit from pre-allocation can be
+retrieved via the ``rte_flow_info_get()`` API. It returns the maximum number
+of pre-configurable resources for a given port on a system.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_info *port_info,
+                     struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index f03183ee86..2a47a37f0a 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -69,6 +69,12 @@ New Features
   New APIs, ``rte_eth_dev_priority_flow_ctrl_queue_info_get()`` and
   ``rte_eth_dev_priority_flow_ctrl_queue_configure()``, was added.
 
+* **Added functions to configure Flow API engine**
+
+  * ethdev: Added ``rte_flow_configure`` API to configure Flow Management
+    engine, allowing to pre-allocate some resources for better performance.
+    Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
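[Editor's note] The two-step contract above (query limits with ``rte_flow_info_get()``, then pre-allocate within them with ``rte_flow_configure()``) can be sketched as a minimal stand-alone mock. This is NOT DPDK code: the ``flow_port_info``/``flow_port_attr`` structs and the ``mock_*`` functions below are hypothetical stand-ins for the types this patch adds, kept self-contained so the control flow is visible without a NIC.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for rte_flow_port_info / rte_flow_port_attr. */
struct flow_port_info { uint32_t nb_counters, nb_aging_flows, nb_meters; };
struct flow_port_attr { uint32_t nb_counters, nb_aging_flows, nb_meters; };

/* Mocked rte_flow_info_get(): report the port's pre-configurable maximums. */
static int mock_info_get(struct flow_port_info *info)
{
    info->nb_counters = 1024;
    info->nb_aging_flows = 256;
    info->nb_meters = 128;
    return 0;
}

/* Mocked rte_flow_configure(): reject requests above the reported limits,
 * mirroring "must not exceed numbers returned by rte_flow_info_get". */
static int mock_configure(const struct flow_port_attr *attr,
                          const struct flow_port_info *info)
{
    if (attr->nb_counters > info->nb_counters ||
        attr->nb_aging_flows > info->nb_aging_flows ||
        attr->nb_meters > info->nb_meters)
        return -1; /* real API sets rte_errno via rte_flow_error_set() */
    return 0;     /* resources pre-allocated; rules may now be created */
}
```

Typical usage would query first, clamp the application's request to the reported limits, then configure once before any flow rule is created.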
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index a93f68abbc..66614ae29b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_info, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->configure)) {
+		return flow_err(port_id,
+				ops->configure(dev, port_attr, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1031fb246b..92be2a9a89 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about available pre-configurable resources.
+ * The zero value means a resource cannot be pre-allocated.
+ */
+struct rte_flow_port_info {
+	/**
+	 * Number of pre-configurable counter actions.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of pre-configurable aging flows actions.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_flows;
+	/**
+	 * Number of pre-configurable traffic metering actions.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Retrieve configuration attributes supported by the port.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[out] port_info
+ *   A pointer to a structure of type *rte_flow_port_info*
+ *   to be filled with the contextual information of the port.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Resource pre-allocation and pre-configuration settings.
+ * The zero value means on-demand resource allocations only.
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Number of counter actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging flows actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_flows;
+	/**
+	 * Number of traffic metering actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the
+ * settings.
+ * The port, however, may reject the changes.
+ *
+ * Parameters in configuration attributes must not exceed
+ * the numbers of resources returned by the rte_flow_info_get API.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..7c29930d0f 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index cd0c4c428d..f1235aa913 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -260,6 +260,8 @@ EXPERIMENTAL {
 	# added in 22.03
 	rte_eth_dev_priority_flow_ctrl_queue_configure;
 	rte_eth_dev_priority_flow_ctrl_queue_info_get;
+	rte_flow_info_get;
+	rte_flow_configure;
 };
 
 INTERNAL {

From patchwork Wed Feb 9 21:38:01 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 107185
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v4 02/10] ethdev: add flow item/action templates
Date: Wed, 9 Feb 2022 23:38:01 +0200
Message-ID: <20220209213809.1208269-3-akozyrev@nvidia.com>
In-Reply-To: <20220209213809.1208269-1-akozyrev@nvidia.com>
References: <20220206032526.816079-1-akozyrev@nvidia.com>
 <20220209213809.1208269-1-akozyrev@nvidia.com>
MIME-Version: 1.0
Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 doc/guides/prog_guide/rte_flow.rst     | 124 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 147 ++++++++++++++
 lib/ethdev/rte_flow.h                  | 260 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 582 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 72fb1132ac..5391648833 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3626,6 +3626,130 @@ of pre-configurable resources for a given port on a system.
                      struct rte_flow_port_info *port_info,
                      struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value is used to select a field to match on; spec/last are ignored.
+The pattern template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+   struct rte_flow_pattern_template *
+   rte_flow_pattern_template_create(uint16_t port_id,
+       const struct rte_flow_pattern_template_attr *template_attr,
+       const struct rte_flow_item pattern[],
+       struct rte_flow_error *error);
+
+For example, to create a pattern template to match on the destination MAC:
+
+.. code-block:: c
+
+   struct rte_flow_item pattern[2] = {{0}};
+   struct rte_flow_item_eth eth_m = {0};
+   pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+   eth_m.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff";
+   pattern[0].mask = &eth_m;
+   pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+   struct rte_flow_pattern_template *pattern_template =
+       rte_flow_pattern_template_create(port, &itr, &pattern, &error);
+
+The concrete value to match on will be provided at the rule creation.
+
+Actions templates
+^^^^^^^^^^^^^^^^^
+
+The actions template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The actions template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+   struct rte_flow_actions_template *
+   rte_flow_actions_template_create(uint16_t port_id,
+       const struct rte_flow_actions_template_attr *template_attr,
+       const struct rte_flow_action actions[],
+       const struct rte_flow_action masks[],
+       struct rte_flow_error *error);
+
+For example, to create an actions template with the same Mark ID
+but a different Queue Index for every rule:
+
+.. code-block:: c
+
+   struct rte_flow_action actions[] = {
+       /* Mark ID is constant (4) for every rule, Queue Index is unique */
+       [0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+              .conf = &(struct rte_flow_action_mark){.id = 4}},
+       [1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+       [2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+   };
+   struct rte_flow_action masks[] = {
+       /* Assign to MARK mask any non-zero value to make it constant */
+       [0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+              .conf = &(struct rte_flow_action_mark){.id = 1}},
+       [1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+       [2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+   };
+
+   struct rte_flow_actions_template *at =
+       rte_flow_actions_template_create(port, &atr, &actions, &masks, &error);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Template table
+^^^^^^^^^^^^^^
+
+A template table combines a number of pattern and actions templates along with
+shared flow rule attributes (group ID, priority and traffic direction).
+This way a PMD/HW can prepare all the resources needed for efficient flow rules
+creation in the datapath. To avoid any hiccups due to memory reallocation,
+the maximum number of flow rules is defined at table creation time.
+Any flow rule creation beyond the maximum table size is rejected.
+The application may create another table to accommodate more rules in this case.
+
+.. code-block:: c
+
+   struct rte_flow_template_table *
+   rte_flow_template_table_create(uint16_t port_id,
+       const struct rte_flow_template_table_attr *table_attr,
+       struct rte_flow_pattern_template *pattern_templates[],
+       uint8_t nb_pattern_templates,
+       struct rte_flow_actions_template *actions_templates[],
+       uint8_t nb_actions_templates,
+       struct rte_flow_error *error);
+
+A table can be created only after the flow rules management is configured
+and the pattern and actions templates are created.
+
+.. code-block:: c
+
+   rte_flow_configure(port, *port_attr, *error);
+
+   struct rte_flow_pattern_template *pattern_templates[0] =
+       rte_flow_pattern_template_create(port, &itr, &pattern, &error);
+   struct rte_flow_actions_template *actions_templates[0] =
+       rte_flow_actions_template_create(port, &atr, &actions, &masks, &error);
+
+   struct rte_flow_template_table *table =
+       rte_flow_template_table_create(port, *table_attr,
+           *pattern_templates, nb_pattern_templates,
+           *actions_templates, nb_actions_templates,
+           *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 2a47a37f0a..6656b35295 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -75,6 +75,14 @@ New Features
     engine, allowing to pre-allocate some resources for better performance.
     Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
 
+  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
+    with the same flow attributes and common matching patterns and actions
+    defined by ``rte_flow_pattern_template_create`` and
+    ``rte_flow_actions_template_create`` respectively.
+    Corresponding functions to destroy these entities are
+    ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
+    and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
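[Editor's note] The template idea above (a pattern template stores only the mask; concrete values arrive at rule creation) can be illustrated with a minimal stand-alone mock. This is NOT DPDK code: the ``mock_*`` names and 6-byte-MAC matching below are hypothetical simplifications of what a PMD/HW would do with real templates.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical mock of a pattern template: it holds only the mask that
 * selects which bits of a 6-byte MAC address a rule will match on. */
struct mock_pattern_template { uint8_t mask[6]; };

/* A "rule" binds a template (shared mask) to per-rule concrete values. */
struct mock_rule {
    const struct mock_pattern_template *tmpl;
    uint8_t spec[6];
};

/* Rule creation: only the unique values (spec) are supplied here,
 * mirroring "setting unique values for the items and actions". */
static void mock_rule_create(struct mock_rule *r,
                             const struct mock_pattern_template *tmpl,
                             const uint8_t spec[6])
{
    r->tmpl = tmpl;
    memcpy(r->spec, spec, 6);
}

/* Matching: only bits selected by the template's mask are compared;
 * everything outside the mask is ignored, like spec/last in the doc. */
static int mock_rule_match(const struct mock_rule *r, const uint8_t mac[6])
{
    for (int i = 0; i < 6; i++)
        if ((mac[i] & r->tmpl->mask[i]) != (r->spec[i] & r->tmpl->mask[i]))
            return 0;
    return 1;
}
```

With a full 0xff mask every byte must match; a mask covering only the first three bytes matches any MAC with the same OUI, showing how one template serves many rules.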
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 66614ae29b..b53f8c9b89 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -1431,3 +1431,150 @@ rte_flow_configure(uint16_t port_id, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, rte_strerror(ENOTSUP)); } + +struct rte_flow_pattern_template * +rte_flow_pattern_template_create(uint16_t port_id, + const struct rte_flow_pattern_template_attr *template_attr, + const struct rte_flow_item pattern[], + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow_pattern_template *template; + + if (unlikely(!ops)) + return NULL; + if (likely(!!ops->pattern_template_create)) { + template = ops->pattern_template_create(dev, template_attr, + pattern, error); + if (template == NULL) + flow_err(port_id, -rte_errno, error); + return template; + } + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); + return NULL; +} + +int +rte_flow_pattern_template_destroy(uint16_t port_id, + struct rte_flow_pattern_template *pattern_template, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->pattern_template_destroy)) { + return flow_err(port_id, + ops->pattern_template_destroy(dev, + pattern_template, + error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + +struct rte_flow_actions_template * +rte_flow_actions_template_create(uint16_t port_id, + const struct rte_flow_actions_template_attr *template_attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = 
rte_flow_ops_get(port_id, error); + struct rte_flow_actions_template *template; + + if (unlikely(!ops)) + return NULL; + if (likely(!!ops->actions_template_create)) { + template = ops->actions_template_create(dev, template_attr, + actions, masks, error); + if (template == NULL) + flow_err(port_id, -rte_errno, error); + return template; + } + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); + return NULL; +} + +int +rte_flow_actions_template_destroy(uint16_t port_id, + struct rte_flow_actions_template *actions_template, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->actions_template_destroy)) { + return flow_err(port_id, + ops->actions_template_destroy(dev, + actions_template, + error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + +struct rte_flow_template_table * +rte_flow_template_table_create(uint16_t port_id, + const struct rte_flow_template_table_attr *table_attr, + struct rte_flow_pattern_template *pattern_templates[], + uint8_t nb_pattern_templates, + struct rte_flow_actions_template *actions_templates[], + uint8_t nb_actions_templates, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow_template_table *table; + + if (unlikely(!ops)) + return NULL; + if (likely(!!ops->template_table_create)) { + table = ops->template_table_create(dev, table_attr, + pattern_templates, nb_pattern_templates, + actions_templates, nb_actions_templates, + error); + if (table == NULL) + flow_err(port_id, -rte_errno, error); + return table; + } + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); + 
return NULL; +} + +int +rte_flow_template_table_destroy(uint16_t port_id, + struct rte_flow_template_table *template_table, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->template_table_destroy)) { + return flow_err(port_id, + ops->template_table_destroy(dev, + template_table, + error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 92be2a9a89..e87db5a540 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4961,6 +4961,266 @@ rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *port_attr, struct rte_flow_error *error); +/** + * Opaque type returned after successful creation of pattern template. + * This handle can be used to manage the created pattern template. + */ +struct rte_flow_pattern_template; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Flow pattern template attributes. + */ +__extension__ +struct rte_flow_pattern_template_attr { + /** + * Relaxed matching policy. + * - PMD may match only on items with mask member set and skip + * matching on protocol layers specified without any masks. + * - If not set, PMD will match on protocol layers + * specified without any masks as well. + * - Packet data must be stacked in the same order as the + * protocol layers to match inside packets, starting from the lowest. + */ + uint32_t relaxed_matching:1; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create pattern template. + * + * The pattern template defines common matching fields without values. 
+ * For example, matching on 5 tuple TCP flow, the template will be + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port), + * while values for each rule will be set during the flow rule creation. + * The number and order of items in the template must be the same + * at the rule creation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] template_attr + * Pattern template attributes. + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * The spec member of an item is not used unless the end member is used. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct rte_flow_pattern_template * +rte_flow_pattern_template_create(uint16_t port_id, + const struct rte_flow_pattern_template_attr *template_attr, + const struct rte_flow_item pattern[], + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy pattern template. + * + * This function may be called only when + * there are no more tables referencing this template. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] pattern_template + * Handle of the template to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_pattern_template_destroy(uint16_t port_id, + struct rte_flow_pattern_template *pattern_template, + struct rte_flow_error *error); + +/** + * Opaque type returned after successful creation of actions template. + * This handle can be used to manage the created actions template. 
+ */ +struct rte_flow_actions_template; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Flow actions template attributes. + */ +struct rte_flow_actions_template_attr; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create actions template. + * + * The actions template holds a list of action types without values. + * For example, the template to change TCP ports is TCP(s_port + d_port), + * while values for each rule will be set during the flow rule creation. + * The number and order of actions in the template must be the same + * at the rule creation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] template_attr + * Template attributes. + * @param[in] actions + * Associated actions (list terminated by the END action). + * The spec member is only used if @p masks spec is non-zero. + * @param[in] masks + * List of actions that marks which of the action's member is constant. + * A mask has the same format as the corresponding action. + * If the action field in @p masks is not 0, + * the corresponding value in an action from @p actions will be the part + * of the template and used in all flow rules. + * The order of actions in @p masks is the same as in @p actions. + * In case of indirect actions present in @p actions, + * the actual action type should be present in @p mask. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct rte_flow_actions_template * +rte_flow_actions_template_create(uint16_t port_id, + const struct rte_flow_actions_template_attr *template_attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. 
+ * + * Destroy actions template. + * + * This function may be called only when + * there are no more tables referencing this template. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] actions_template + * Handle to the template to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_actions_template_destroy(uint16_t port_id, + struct rte_flow_actions_template *actions_template, + struct rte_flow_error *error); + +/** + * Opaque type returned after successful creation of a template table. + * This handle can be used to manage the created template table. + */ +struct rte_flow_template_table; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Table attributes. + */ +struct rte_flow_template_table_attr { + /** + * Flow attributes to be used in each rule generated from this table. + */ + struct rte_flow_attr flow_attr; + /** + * Maximum number of flow rules that this table holds. + */ + uint32_t nb_flows; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create template table. + * + * A template table consists of multiple pattern templates and actions + * templates associated with a single set of rule attributes (group ID, + * priority and traffic direction). + * + * Each rule is free to use any combination of pattern and actions templates + * and specify particular values for items and actions it would like to change. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] table_attr + * Template table attributes. + * @param[in] pattern_templates + * Array of pattern templates to be used in this table. + * @param[in] nb_pattern_templates + * The number of pattern templates in the pattern_templates array. 
+ * @param[in] actions_templates + * Array of actions templates to be used in this table. + * @param[in] nb_actions_templates + * The number of actions templates in the actions_templates array. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct rte_flow_template_table * +rte_flow_template_table_create(uint16_t port_id, + const struct rte_flow_template_table_attr *table_attr, + struct rte_flow_pattern_template *pattern_templates[], + uint8_t nb_pattern_templates, + struct rte_flow_actions_template *actions_templates[], + uint8_t nb_actions_templates, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy template table. + * + * This function may be called only when + * there are no more flow rules referencing this table. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] template_table + * Handle to the table to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +__rte_experimental +int +rte_flow_template_table_destroy(uint16_t port_id, + struct rte_flow_template_table *template_table, + struct rte_flow_error *error); + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 7c29930d0f..2d96db1dc7 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -162,6 +162,43 @@ struct rte_flow_ops { (struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr, struct rte_flow_error *err); + /** See rte_flow_pattern_template_create() */ + struct rte_flow_pattern_template *(*pattern_template_create) + (struct rte_eth_dev *dev, + const struct rte_flow_pattern_template_attr *template_attr, + const struct rte_flow_item pattern[], + struct rte_flow_error *err); + /** See rte_flow_pattern_template_destroy() */ + int (*pattern_template_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_pattern_template *pattern_template, + struct rte_flow_error *err); + /** See rte_flow_actions_template_create() */ + struct rte_flow_actions_template *(*actions_template_create) + (struct rte_eth_dev *dev, + const struct rte_flow_actions_template_attr *template_attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *err); + /** See rte_flow_actions_template_destroy() */ + int (*actions_template_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_actions_template *actions_template, + struct rte_flow_error *err); + /** See rte_flow_template_table_create() */ + struct rte_flow_template_table *(*template_table_create) + (struct rte_eth_dev *dev, + const struct rte_flow_template_table_attr *table_attr, + struct rte_flow_pattern_template *pattern_templates[], + uint8_t nb_pattern_templates, + struct rte_flow_actions_template *actions_templates[], + uint8_t nb_actions_templates, + struct rte_flow_error *err); + /** See rte_flow_template_table_destroy() */ + int (*template_table_destroy) + (struct rte_eth_dev *dev, 
+		 struct rte_flow_template_table *template_table,
+		 struct rte_flow_error *err);
 };
 
 /**

diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index f1235aa913..5fd2108895 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -262,6 +262,12 @@ EXPERIMENTAL {
 	rte_eth_dev_priority_flow_ctrl_queue_info_get;
 	rte_flow_info_get;
 	rte_flow_configure;
+	rte_flow_pattern_template_create;
+	rte_flow_pattern_template_destroy;
+	rte_flow_actions_template_create;
+	rte_flow_actions_template_destroy;
+	rte_flow_template_table_create;
+	rte_flow_template_table_destroy;
 };
 
 INTERNAL {

From patchwork Wed Feb 9 21:38:02 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 107186
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v4 03/10] ethdev: bring in async queue-based flow rules operations
Date: Wed, 9 Feb 2022 23:38:02 +0200
Message-ID: <20220209213809.1208269-4-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20220209213809.1208269-1-akozyrev@nvidia.com>
References: <20220206032526.816079-1-akozyrev@nvidia.com> <20220209213809.1208269-1-akozyrev@nvidia.com>

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe and the queue
should be accessed from the same thread for all queue operations.
It is the responsibility of the app to sync the queue functions in case
of multi-threaded access to the same queue.

The rte_flow_q_flow_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_q_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_q_flow_destroy() function
enqueues a flow destruction to the requested queue.
Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 doc/guides/prog_guide/img/rte_flow_q_init.svg | 205 ++++++++++
 .../prog_guide/img/rte_flow_q_usage.svg       | 351 ++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst            | 167 ++++++++-
 doc/guides/rel_notes/release_22_03.rst        |   8 +
 lib/ethdev/rte_flow.c                         | 174 ++++++++-
 lib/ethdev/rte_flow.h                         | 334 +++++++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  55 +++
 lib/ethdev/version.map                        |   7 +
 8 files changed, 1299 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
new file mode 100644
index 0000000000..96160bde42
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_init.svg
@@ -0,0 +1,205 @@
[SVG markup not preserved in this archive; the figure shows the
initialization sequence: rte_eal_init() -> rte_eth_dev_configure() ->
rte_flow_configure() -> rte_flow_pattern_template_create() /
rte_flow_actions_template_create() -> rte_flow_template_table_create() ->
rte_eth_dev_start()]

diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
new file mode 100644
index 0000000000..a1f6c0a0a8
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
@@ -0,0 +1,351 @@
[SVG markup not preserved in this archive; the figure shows the datapath
loop: rte_eth_rx_burst() -> analyze packet -> "add new rule?" ->
rte_flow_q_flow_create(); "destroy the rule?" -> rte_flow_q_flow_destroy();
then rte_flow_q_push() and rte_flow_q_pull(), repeating while more packets
remain]

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 5391648833..964c104ed3 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3607,12 +3607,16 @@ Expected number of counters or meters in an application,
 for example, allow PMD to prepare and optimize NIC memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via
+the queue-based API; see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                       const struct rte_flow_port_attr *port_attr,
+                      uint16_t nb_queue,
+                      const struct rte_flow_queue_attr *queue_attr[],
                       struct rte_flow_error *error);
 
 Information about resources that can benefit from pre-allocation can be
@@ -3737,7 +3741,7 @@ and pattern and actions templates are created.
 
 .. code-block:: c
 
-       rte_flow_configure(port, *port_attr, *error);
+       rte_flow_configure(port, *port_attr, nb_queue, *queue_attr, *error);
 
        struct rte_flow_pattern_template *pattern_templates[0] =
                rte_flow_pattern_template_create(port, &itr, &pattern, &error);
@@ -3750,6 +3754,167 @@ and pattern and actions templates are created.
                                *actions_templates, nb_actions_templates,
                                *error);
 
+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues.
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked by the application's datapath;
+  packet processing can continue while queue operations are processed by the NIC.
+
+- The number of queues is configured at the initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+
+- Operations can be postponed and pushed to the NIC in batches.
+
+- Results must be pulled in time to avoid queue overflows.
+
+- User data is returned as part of the result to identify an operation.
+
+- A flow handle is valid once the creation operation is enqueued and must be
+  destroyed even if the operation is not successful and the rule is not inserted.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_q_init:
+
+.. figure:: img/rte_flow_q_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_q_usage:
+
+.. figure:: img/rte_flow_q_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+   struct rte_flow *
+   rte_flow_q_flow_create(uint16_t port_id,
+               uint32_t queue_id,
+               const struct rte_flow_q_ops_attr *q_ops_attr,
+               struct rte_flow_template_table *template_table,
+               const struct rte_flow_item pattern[],
+               uint8_t pattern_template_index,
+               const struct rte_flow_action actions[],
+               uint8_t actions_template_index,
+               struct rte_flow_error *error);
+
+A valid handle is returned on success. It must be destroyed later
+by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW.
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_flow_destroy(uint16_t port_id,
+               uint32_t queue_id,
+               const struct rte_flow_q_ops_attr *q_ops_attr,
+               struct rte_flow *flow,
+               struct rte_flow_error *error);
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Push all internally stored rules from a queue to the NIC.
+
+.. 
code-block:: c
+
+   int
+   rte_flow_q_push(uint16_t port_id,
+                   uint32_t queue_id,
+                   struct rte_flow_error *error);
+
+The queue operation attributes include a postpone flag.
+When it is set, multiple operations can be bulked together and not sent to HW
+right away, to save SW/HW interactions and prioritize throughput over latency.
+In this case, the application must invoke this function to actually push all
+outstanding operations to HW.
+
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pull the results of asynchronous operations.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operation statuses.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_pull(uint16_t port_id,
+                   uint32_t queue_id,
+                   struct rte_flow_q_op_res res[],
+                   uint16_t n_res,
+                   struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
+
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of the indirect action creation API.
+
+.. code-block:: c
+
+   struct rte_flow_action_handle *
+   rte_flow_q_action_handle_create(uint16_t port_id,
+           uint32_t queue_id,
+           const struct rte_flow_q_ops_attr *q_ops_attr,
+           const struct rte_flow_indir_action_conf *indir_action_conf,
+           const struct rte_flow_action *action,
+           struct rte_flow_error *error);
+
+A valid handle is returned on success. It must be destroyed later by
+calling ``rte_flow_q_action_handle_destroy()`` even if the rule is rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of the indirect action destruction API.
+
+.. 
code-block:: c + + int + rte_flow_q_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + struct rte_flow_error *error); + +Enqueue indirect action update operation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Asynchronous version of the indirect action update API. + +.. code-block:: c + + int + rte_flow_q_action_handle_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error); + .. _flow_isolated_mode: Flow isolated mode diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst index 6656b35295..b4e18836ea 100644 --- a/doc/guides/rel_notes/release_22_03.rst +++ b/doc/guides/rel_notes/release_22_03.rst @@ -83,6 +83,14 @@ New Features ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy`` and ``rte_flow_actions_template_destroy``. + * ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy`` + API to enqueue flow creation/destruction operations asynchronously, + ``rte_flow_q_pull`` to poll and retrieve results of these operations, + and ``rte_flow_q_push`` to push all the in-flight operations to the NIC. + Also introduced asynchronous API for indirect action management: + ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy`` + and ``rte_flow_q_action_handle_update``. + * **Updated AF_XDP PMD** * Added support for libxdp >=v1.2.2. 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index b53f8c9b89..bf1d3d2062 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -1415,6 +1415,8 @@ rte_flow_info_get(uint16_t port_id, int rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; @@ -1424,7 +1426,7 @@ rte_flow_configure(uint16_t port_id, return -rte_errno; if (likely(!!ops->configure)) { return flow_err(port_id, - ops->configure(dev, port_attr, error), + ops->configure(dev, port_attr, nb_queue, queue_attr, error), error); } return rte_flow_error_set(error, ENOTSUP, @@ -1578,3 +1580,173 @@ rte_flow_template_table_destroy(uint16_t port_id, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, rte_strerror(ENOTSUP)); } + +struct rte_flow * +rte_flow_q_flow_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_template_table *template_table, + const struct rte_flow_item pattern[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow *flow; + + if (unlikely(!ops)) + return NULL; + if (likely(!!ops->q_flow_create)) { + flow = ops->q_flow_create(dev, queue_id, + q_ops_attr, template_table, + pattern, pattern_template_index, + actions, actions_template_index, + error); + if (flow == NULL) + flow_err(port_id, -rte_errno, error); + return flow; + } + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); + return NULL; +} + +int +rte_flow_q_flow_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error 
*error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->q_flow_destroy)) { + return flow_err(port_id, + ops->q_flow_destroy(dev, queue_id, + q_ops_attr, flow, error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + +struct rte_flow_action_handle * +rte_flow_q_action_handle_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow_action_handle *handle; + + if (unlikely(!ops)) + return NULL; + if (unlikely(!ops->q_action_handle_create)) { + rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); + return NULL; + } + handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr, + indir_action_conf, action, error); + if (handle == NULL) + flow_err(port_id, -rte_errno, error); + return handle; +} + +int +rte_flow_q_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (unlikely(!ops->q_action_handle_destroy)) + return rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOSYS)); + ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr, + action_handle, error); + return flow_err(port_id, ret, error); +} + +int 
+rte_flow_q_action_handle_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (unlikely(!ops->q_action_handle_update)) + return rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOSYS)); + ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr, + action_handle, update, error); + return flow_err(port_id, ret, error); +} + +int +rte_flow_q_push(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->q_push)) { + return flow_err(port_id, + ops->q_push(dev, queue_id, error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + +int +rte_flow_q_pull(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->q_pull)) { + ret = ops->q_pull(dev, queue_id, res, n_res, error); + return ret ? 
ret : flow_err(port_id, ret, error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index e87db5a540..b0d4f33bfd 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4862,6 +4862,10 @@ rte_flow_flex_item_release(uint16_t port_id, * */ struct rte_flow_port_info { + /** + * Number of queues for asynchronous operations. + */ + uint32_t nb_queues; /** * Number of pre-configurable counter actions. * @see RTE_FLOW_ACTION_TYPE_COUNT @@ -4879,6 +4883,17 @@ struct rte_flow_port_info { uint32_t nb_meters; }; +/** + * Flow engine queue configuration. + */ +__extension__ +struct rte_flow_queue_attr { + /** + * Number of flow rule operations a queue can hold. + */ + uint32_t size; +}; + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice. @@ -4948,6 +4963,11 @@ struct rte_flow_port_attr { * Port identifier of Ethernet device. * @param[in] port_attr * Port configuration attributes. + * @param[in] nb_queue + * Number of flow queues to be configured. + * @param[in] queue_attr + * Array that holds attributes for each flow queue. + * Number of elements is set by @p nb_queue. * @param[out] error * Perform verbose error reporting if not NULL. * PMDs initialize this structure in case of error only. @@ -4959,6 +4979,8 @@ __rte_experimental int rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error); /** @@ -5221,6 +5243,318 @@ rte_flow_template_table_destroy(uint16_t port_id, struct rte_flow_template_table *template_table, struct rte_flow_error *error); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Queue operation attributes. + */ +struct rte_flow_q_ops_attr { + /** + * The user data that will be returned on the completion events. 
+ */ + void *user_data; + /** + * When set, the requested action will not be sent to the HW immediately. + * The application must call rte_flow_q_push() to actually send it. + */ + uint32_t postpone:1; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue rule creation operation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue used to insert the rule. + * @param[in] q_ops_attr + * Rule creation operation attributes. + * @param[in] template_table + * Template table to select templates from. + * @param[in] pattern + * List of pattern items to be used. + * The list order should match the order in the pattern template. + * The spec is the only relevant member of the item that is being used. + * @param[in] pattern_template_index + * Pattern template index in the table. + * @param[in] actions + * List of actions to be used. + * The list order should match the order in the actions template. + * @param[in] actions_template_index + * Actions template index in the table. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + * A returned rule handle does not mean that the rule was offloaded. + * Only the completion result indicates that the rule was offloaded. + */ +__rte_experimental +struct rte_flow * +rte_flow_q_flow_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_template_table *template_table, + const struct rte_flow_item pattern[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue rule destruction operation. + * + * This function enqueues a destruction operation on the queue. 
+ * The application should assume that after calling this function + * the rule handle is no longer valid. + * Completion indicates the full removal of the rule from the HW. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue which is used to destroy the rule. + * This must match the queue on which the rule was created. + * @param[in] q_ops_attr + * Rule destroy operation attributes. + * @param[in] flow + * Flow handle to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_q_flow_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action creation operation. + * @see rte_flow_action_handle_create + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to create the action. + * @param[in] q_ops_attr + * Queue operation attributes. + * @param[in] indir_action_conf + * Action configuration for the indirect action object creation. + * @param[in] action + * Specific configuration of the indirect action object. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * - A valid handle in case of success; NULL otherwise. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if action pointed by *action* handle was not found. + * - (-EBUSY) if action pointed by *action* handle is still in use by some rules; + * rte_errno is also set. 
+ */ +__rte_experimental +struct rte_flow_action_handle * +rte_flow_q_action_handle_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action destruction operation. + * The destroy queue must be the same + * as the queue on which the action was created. + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to destroy the action. + * @param[in] q_ops_attr + * Queue operation attributes. + * @param[in] action_handle + * Handle for the indirect action object to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if action pointed by *action* handle was not found. + * - (-EBUSY) if action pointed by *action* handle is still in use by some rules; + * rte_errno is also set. + */ +__rte_experimental +int +rte_flow_q_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action update operation. + * @see rte_flow_action_handle_create + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to update the action. + * @param[in] q_ops_attr + * Queue operation attributes. 
+ * @param[in] action_handle + * Handle for the indirect action object to be updated. + * @param[in] update + * Update profile specification used to modify the action pointed to by + * *handle*. *update* can be of the same type as the immediate action used + * when creating the *handle*, or a wrapper structure that includes the + * action configuration to be updated and bit fields indicating which + * fields of the action to update. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if action pointed by *action* handle was not found. + * - (-EBUSY) if action pointed by *action* handle is still in use by some rules; + * rte_errno is also set. + */ +__rte_experimental +int +rte_flow_q_action_handle_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Push all internally stored rules to the HW. + * Postponed rules are rules that were inserted with the postpone flag set. + * Can be used to notify the HW about a batch of rules prepared by the SW to + * reduce the number of communications between the HW and the SW. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue to be pushed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +__rte_experimental +int +rte_flow_q_push(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Queue operation status. + */ +enum rte_flow_q_op_status { + /** + * The operation was completed successfully. + */ + RTE_FLOW_Q_OP_SUCCESS, + /** + * The operation was not completed successfully. + */ + RTE_FLOW_Q_OP_ERROR, +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Queue operation results. + */ +__extension__ +struct rte_flow_q_op_res { + /** + * Returns the status of the operation that this completion signals. + */ + enum rte_flow_q_op_status status; + /** + * The user data that will be returned on the completion events. + */ + void *user_data; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Pull rte flow operation results. + * The application must invoke this function in order to complete + * the flow rule offloading and to retrieve the flow rule operation status. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue which is used to pull the operation. + * @param[out] res + * Array of results that will be set. + * @param[in] n_res + * Maximum number of results that can be returned. + * This value is equal to the size of the res array. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * The number of results that were pulled, + * a negative errno value otherwise and rte_errno is set. 
+ */ +__rte_experimental +int +rte_flow_q_pull(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error); + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 2d96db1dc7..33dc57a15e 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -161,6 +161,8 @@ struct rte_flow_ops { int (*configure) (struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *err); /** See rte_flow_pattern_template_create() */ struct rte_flow_pattern_template *(*pattern_template_create) @@ -199,6 +201,59 @@ struct rte_flow_ops { (struct rte_eth_dev *dev, struct rte_flow_template_table *template_table, struct rte_flow_error *err); + /** See rte_flow_q_flow_create() */ + struct rte_flow *(*q_flow_create) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_template_table *template_table, + const struct rte_flow_item pattern[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + struct rte_flow_error *err); + /** See rte_flow_q_flow_destroy() */ + int (*q_flow_destroy) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error *err); + /** See rte_flow_q_action_handle_create() */ + struct rte_flow_action_handle *(*q_action_handle_create) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + struct rte_flow_error *err); + /** See rte_flow_q_action_handle_destroy() */ + int (*q_action_handle_destroy) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + 
struct rte_flow_action_handle *action_handle, + struct rte_flow_error *error); + /** See rte_flow_q_action_handle_update() */ + int (*q_action_handle_update) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error); + /** See rte_flow_q_push() */ + int (*q_push) + (struct rte_eth_dev *dev, + uint32_t queue_id, + struct rte_flow_error *err); + /** See rte_flow_q_pull() */ + int (*q_pull) + (struct rte_eth_dev *dev, + uint32_t queue_id, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error); }; /** diff --git a/lib/ethdev/version.map index 5fd2108895..46a4151053 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -268,6 +268,13 @@ EXPERIMENTAL { rte_flow_actions_template_destroy; rte_flow_template_table_create; rte_flow_template_table_destroy; + rte_flow_q_flow_create; + rte_flow_q_flow_destroy; + rte_flow_q_action_handle_create; + rte_flow_q_action_handle_destroy; + rte_flow_q_action_handle_update; + rte_flow_q_push; + rte_flow_q_pull; }; INTERNAL { From patchwork Wed Feb 9 21:38:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 107187 X-Patchwork-Delegate: ferruh.yigit@amd.com 
From: Alexander Kozyrev Subject: [PATCH v4 04/10] 
app/testpmd: implement rte flow configuration Date: Wed, 9 Feb 2022 23:38:03 +0200 Message-ID: <20220209213809.1208269-5-akozyrev@nvidia.com> X-Mailer: git-send-email 2.18.2 In-Reply-To: <20220209213809.1208269-1-akozyrev@nvidia.com> References: <20220206032526.816079-1-akozyrev@nvidia.com> <20220209213809.1208269-1-akozyrev@nvidia.com> MIME-Version: 1.0 
Add testpmd support for the rte_flow_configure API. Provide the command line interface for the Flow management. 
Usage example: flow configure 0 queues_number 8 queues_size 256 Implement rte_flow_info_get API to get available resources: Usage example: flow info 0 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 126 +++++++++++++++++++- app/test-pmd/config.c | 54 +++++++++ app/test-pmd/testpmd.h | 7 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 60 +++++++++- 4 files changed, 244 insertions(+), 3 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 7b56b1b0ff..cc3003e6eb 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -72,6 +72,8 @@ enum index { /* Top-level command. */ FLOW, /* Sub-level commands. */ + INFO, + CONFIGURE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -122,6 +124,13 @@ enum index { DUMP_ALL, DUMP_ONE, + /* Configure arguments */ + CONFIG_QUEUES_NUMBER, + CONFIG_QUEUES_SIZE, + CONFIG_COUNTERS_NUMBER, + CONFIG_AGING_COUNTERS_NUMBER, + CONFIG_METERS_NUMBER, + /* Indirect action arguments */ INDIRECT_ACTION_CREATE, INDIRECT_ACTION_UPDATE, @@ -847,6 +856,11 @@ struct buffer { enum index command; /**< Flow command. */ portid_t port; /**< Affected port ID. */ union { + struct { + struct rte_flow_port_attr port_attr; + uint32_t nb_queue; + struct rte_flow_queue_attr queue_attr; + } configure; /**< Configuration arguments. 
*/ struct { uint32_t *action_id; uint32_t action_id_n; @@ -928,6 +942,16 @@ static const enum index next_flex_item[] = { ZERO, }; +static const enum index next_config_attr[] = { + CONFIG_QUEUES_NUMBER, + CONFIG_QUEUES_SIZE, + CONFIG_COUNTERS_NUMBER, + CONFIG_AGING_COUNTERS_NUMBER, + CONFIG_METERS_NUMBER, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -1964,6 +1988,9 @@ static int parse_aged(struct context *, const struct token *, static int parse_isolate(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_configure(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2189,7 +2216,9 @@ static const struct token token_list[] = { .type = "{command} {port_id} [{arg} [...]]", .help = "manage ingress/egress flow rules", .next = NEXT(NEXT_ENTRY - (INDIRECT_ACTION, + (INFO, + CONFIGURE, + INDIRECT_ACTION, VALIDATE, CREATE, DESTROY, @@ -2204,6 +2233,65 @@ static const struct token token_list[] = { .call = parse_init, }, /* Top-level command. */ + [INFO] = { + .name = "info", + .help = "get information about flow engine", + .next = NEXT(NEXT_ENTRY(END), + NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_configure, + }, + /* Top-level command. */ + [CONFIGURE] = { + .name = "configure", + .help = "configure flow engine", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_configure, + }, + /* Configure arguments. 
*/ + [CONFIG_QUEUES_NUMBER] = { + .name = "queues_number", + .help = "number of queues", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.nb_queue)), + }, + [CONFIG_QUEUES_SIZE] = { + .name = "queues_size", + .help = "number of elements in queues", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.queue_attr.size)), + }, + [CONFIG_COUNTERS_NUMBER] = { + .name = "counters_number", + .help = "number of counters", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_counters)), + }, + [CONFIG_AGING_COUNTERS_NUMBER] = { + .name = "aging_counters_number", + .help = "number of aging flows", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_aging_flows)), + }, + [CONFIG_METERS_NUMBER] = { + .name = "meters_number", + .help = "number of meters", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_meters)), + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -7480,6 +7568,33 @@ parse_isolate(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for info/configure command. */ +static int +parse_configure(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != INFO && ctx->curr != CONFIGURE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + } + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8708,6 +8823,15 @@ static void cmd_flow_parsed(const struct buffer *in) { switch (in->command) { + case INFO: + port_flow_get_info(in->port); + break; + case CONFIGURE: + port_flow_configure(in->port, + &in->args.configure.port_attr, + in->args.configure.nb_queue, + &in->args.configure.queue_attr); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index e812f57151..df83f8dbdd 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1609,6 +1609,60 @@ action_alloc(portid_t port_id, uint32_t id, return 0; } +/** Get info about flow management resources. */ +int +port_flow_get_info(portid_t port_id) +{ + struct rte_flow_port_info port_info; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x99, sizeof(error)); + if (rte_flow_info_get(port_id, &port_info, &error)) + return port_flow_complain(&error); + printf("Pre-configurable resources on port %u:\n" + "Number of queues: %d\n" + "Number of counters: %d\n" + "Number of aging flows: %d\n" + "Number of meters: %d\n", + port_id, port_info.nb_queues, port_info.nb_counters, + port_info.nb_aging_flows, port_info.nb_meters); + return 0; +} + +/** Configure flow management resources. 
*/ +int +port_flow_configure(portid_t port_id, + const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr) +{ + struct rte_port *port; + struct rte_flow_error error; + const struct rte_flow_queue_attr *attr_list[nb_queue]; + int std_queue; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + port->queue_nb = nb_queue; + port->queue_sz = queue_attr->size; + for (std_queue = 0; std_queue < nb_queue; std_queue++) + attr_list[std_queue] = queue_attr; + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x66, sizeof(error)); + if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error)) + return port_flow_complain(&error); + printf("Configure flows on port %u: " + "number of queues %d with %d elements\n", + port_id, nb_queue, queue_attr->size); + return 0; +} + /** Create indirect action */ int port_action_handle_create(portid_t port_id, uint32_t id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 9967825044..096b6825eb 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -243,6 +243,8 @@ struct rte_port { struct rte_eth_txconf tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */ struct rte_ether_addr *mc_addr_pool; /**< pool of multicast addrs */ uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */ + queueid_t queue_nb; /**< nb. of queues for flow rules */ + uint32_t queue_sz; /**< size of a queue for flow rules */ uint8_t slave_flag; /**< bonding slave port */ struct port_flow *flow_list; /**< Associated flows. 
*/ struct port_indirect_action *actions_list; @@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id, uint32_t id); int port_action_handle_update(portid_t port_id, uint32_t id, const struct rte_flow_action *action); +int port_flow_get_info(portid_t port_id); +int port_flow_configure(portid_t port_id, + const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index b2e98df6e1..cfdda5005c 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3308,8 +3308,8 @@ Flow rules management --------------------- Control of the generic flow API (*rte_flow*) is fully exposed through the -``flow`` command (validation, creation, destruction, queries and operation -modes). +``flow`` command (configuration, validation, creation, destruction, queries +and operation modes). Considering *rte_flow* overlaps with all `Filter Functions`_, using both features simultaneously may cause undefined side-effects and is therefore @@ -3332,6 +3332,18 @@ The first parameter stands for the operation mode. Possible operations and their general syntax are described below. They are covered in detail in the following sections. +- Get info about flow engine:: + + flow info {port_id} + +- Configure flow engine:: + + flow configure {port_id} + [queues_number {number}] [queues_size {size}] + [counters_number {number}] + [aging_counters_number {number}] + [meters_number {number}] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3391,6 +3403,50 @@ following sections. 
flow tunnel list {port_id} +Retrieving info about flow management engine +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow info`` retrieves info on pre-configurable resources in the underlying +device to give a hint of possible values for flow engine configuration. + +``rte_flow_info_get()``:: + + flow info {port_id} + +If successful, it will show:: + + Pre-configurable resources on port #[...]: + Number of queues: #[...] + Number of counters: #[...] + Number of aging flows: #[...] + Number of meters: #[...] + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +Configuring flow management engine +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow configure`` pre-allocates all the needed resources in the underlying +device to be used later at the flow creation. Flow queues are allocated as well +for asynchronous flow creation/destruction operations. It is bound to +``rte_flow_configure()``:: + + flow configure {port_id} + [queues_number {number}] [queues_size {size}] + [counters_number {number}] + [aging_counters_number {number}] + [meters_number {number}] + +If successful, it will show:: + + Configure flows on port #[...]: number of queues #[...] with #[...] elements + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] 
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Wed Feb 9 21:38:04 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 107188
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v4 05/10] app/testpmd: implement rte flow template management
Date: Wed, 9 Feb 2022 23:38:04 +0200
Message-ID: <20220209213809.1208269-6-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20220209213809.1208269-1-akozyrev@nvidia.com>
References: <20220206032526.816079-1-akozyrev@nvidia.com> <20220209213809.1208269-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction.

Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 376 +++++++++++++++++++-
 app/test-pmd/config.c                       | 203 +++++++++++
 app/test-pmd/testpmd.h                      |  23 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  97 +++++
 4 files changed, 697 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index cc3003e6eb..34bc73eea3 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -56,6 +56,8 @@ enum index {
 	COMMON_POLICY_ID,
 	COMMON_FLEX_HANDLE,
 	COMMON_FLEX_TOKEN,
+	COMMON_PATTERN_TEMPLATE_ID,
+	COMMON_ACTIONS_TEMPLATE_ID,

 	/* TOP-level command. */
 	ADD,
@@ -74,6 +76,8 @@ enum index {
 	/* Sub-level commands. */
 	INFO,
 	CONFIGURE,
+	PATTERN_TEMPLATE,
+	ACTIONS_TEMPLATE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -92,6 +96,22 @@ enum index {
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,

+	/* Pattern template arguments. */
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_DESTROY_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_SPEC,
+
+	/* Actions template arguments.
*/ + ACTIONS_TEMPLATE_CREATE, + ACTIONS_TEMPLATE_DESTROY, + ACTIONS_TEMPLATE_CREATE_ID, + ACTIONS_TEMPLATE_DESTROY_ID, + ACTIONS_TEMPLATE_SPEC, + ACTIONS_TEMPLATE_MASK, + /* Tunnel arguments. */ TUNNEL_CREATE, TUNNEL_CREATE_TYPE, @@ -861,6 +881,10 @@ struct buffer { uint32_t nb_queue; struct rte_flow_queue_attr queue_attr; } configure; /**< Configuration arguments. */ + struct { + uint32_t *template_id; + uint32_t template_id_n; + } templ_destroy; /**< Template destroy arguments. */ struct { uint32_t *action_id; uint32_t action_id_n; @@ -869,10 +893,13 @@ struct buffer { uint32_t action_id; } ia; /* Indirect action query arguments */ struct { + uint32_t pat_templ_id; + uint32_t act_templ_id; struct rte_flow_attr attr; struct tunnel_ops tunnel_ops; struct rte_flow_item *pattern; struct rte_flow_action *actions; + struct rte_flow_action *masks; uint32_t pattern_n; uint32_t actions_n; uint8_t *data; @@ -952,6 +979,43 @@ static const enum index next_config_attr[] = { ZERO, }; +static const enum index next_pt_subcmd[] = { + PATTERN_TEMPLATE_CREATE, + PATTERN_TEMPLATE_DESTROY, + ZERO, +}; + +static const enum index next_pt_attr[] = { + PATTERN_TEMPLATE_CREATE_ID, + PATTERN_TEMPLATE_RELAXED_MATCHING, + PATTERN_TEMPLATE_SPEC, + ZERO, +}; + +static const enum index next_pt_destroy_attr[] = { + PATTERN_TEMPLATE_DESTROY_ID, + END, + ZERO, +}; + +static const enum index next_at_subcmd[] = { + ACTIONS_TEMPLATE_CREATE, + ACTIONS_TEMPLATE_DESTROY, + ZERO, +}; + +static const enum index next_at_attr[] = { + ACTIONS_TEMPLATE_CREATE_ID, + ACTIONS_TEMPLATE_SPEC, + ZERO, +}; + +static const enum index next_at_destroy_attr[] = { + ACTIONS_TEMPLATE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -1991,6 +2055,12 @@ static int parse_isolate(struct context *, const struct token *, static int parse_configure(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); 
+static int parse_template(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_template_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2060,6 +2130,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_modify_field_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_pattern_template_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); +static int comp_actions_template_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. */ static const struct token token_list[] = { @@ -2210,6 +2284,20 @@ static const struct token token_list[] = { .call = parse_flex_handle, .comp = comp_none, }, + [COMMON_PATTERN_TEMPLATE_ID] = { + .name = "{pattern_template_id}", + .type = "PATTERN_TEMPLATE_ID", + .help = "pattern template id", + .call = parse_int, + .comp = comp_pattern_template_id, + }, + [COMMON_ACTIONS_TEMPLATE_ID] = { + .name = "{actions_template_id}", + .type = "ACTIONS_TEMPLATE_ID", + .help = "actions template id", + .call = parse_int, + .comp = comp_actions_template_id, + }, /* Top-level command. */ [FLOW] = { .name = "flow", @@ -2218,6 +2306,8 @@ static const struct token token_list[] = { .next = NEXT(NEXT_ENTRY (INFO, CONFIGURE, + PATTERN_TEMPLATE, + ACTIONS_TEMPLATE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -2292,6 +2382,112 @@ static const struct token token_list[] = { args.configure.port_attr.nb_meters)), }, /* Top-level command. 
*/ + [PATTERN_TEMPLATE] = { + .name = "pattern_template", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage pattern templates", + .next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template, + }, + /* Sub-level commands. */ + [PATTERN_TEMPLATE_CREATE] = { + .name = "create", + .help = "create pattern template", + .next = NEXT(next_pt_attr), + .call = parse_template, + }, + [PATTERN_TEMPLATE_DESTROY] = { + .name = "destroy", + .help = "destroy pattern template", + .next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template_destroy, + }, + /* Pattern template arguments. */ + [PATTERN_TEMPLATE_CREATE_ID] = { + .name = "pattern_template_id", + .help = "specify a pattern template id to create", + .next = NEXT(next_pt_attr, + NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)), + }, + [PATTERN_TEMPLATE_DESTROY_ID] = { + .name = "pattern_template", + .help = "specify a pattern template id to destroy", + .next = NEXT(next_pt_destroy_attr, + NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.templ_destroy.template_id)), + .call = parse_template_destroy, + }, + [PATTERN_TEMPLATE_RELAXED_MATCHING] = { + .name = "relaxed", + .help = "is matching relaxed", + .next = NEXT(next_pt_attr, + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY_BF(struct buffer, + args.vc.attr.reserved, 1)), + }, + [PATTERN_TEMPLATE_SPEC] = { + .name = "template", + .help = "specify item to create pattern template", + .next = NEXT(next_item), + }, + /* Top-level command. 
*/ + [ACTIONS_TEMPLATE] = { + .name = "actions_template", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage actions templates", + .next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template, + }, + /* Sub-level commands. */ + [ACTIONS_TEMPLATE_CREATE] = { + .name = "create", + .help = "create actions template", + .next = NEXT(next_at_attr), + .call = parse_template, + }, + [ACTIONS_TEMPLATE_DESTROY] = { + .name = "destroy", + .help = "destroy actions template", + .next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template_destroy, + }, + /* Actions template arguments. */ + [ACTIONS_TEMPLATE_CREATE_ID] = { + .name = "actions_template_id", + .help = "specify an actions template id to create", + .next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK), + NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC), + NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)), + }, + [ACTIONS_TEMPLATE_DESTROY_ID] = { + .name = "actions_template", + .help = "specify an actions template id to destroy", + .next = NEXT(next_at_destroy_attr, + NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.templ_destroy.template_id)), + .call = parse_template_destroy, + }, + [ACTIONS_TEMPLATE_SPEC] = { + .name = "template", + .help = "specify action to create actions template", + .next = NEXT(next_action), + .call = parse_template, + }, + [ACTIONS_TEMPLATE_MASK] = { + .name = "mask", + .help = "specify action mask to create actions template", + .next = NEXT(next_action), + .call = parse_template, + }, + /* Top-level command. 
*/ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -2614,7 +2810,7 @@ static const struct token token_list[] = { .name = "end", .help = "end list of pattern items", .priv = PRIV_ITEM(END, 0), - .next = NEXT(NEXT_ENTRY(ACTIONS)), + .next = NEXT(NEXT_ENTRY(ACTIONS, END)), .call = parse_vc, }, [ITEM_VOID] = { @@ -5731,7 +5927,9 @@ parse_vc(struct context *ctx, const struct token *token, if (!out) return len; if (!out->command) { - if (ctx->curr != VALIDATE && ctx->curr != CREATE) + if (ctx->curr != VALIDATE && ctx->curr != CREATE && + ctx->curr != PATTERN_TEMPLATE_CREATE && + ctx->curr != ACTIONS_TEMPLATE_CREATE) return -1; if (sizeof(*out) > size) return -1; @@ -7595,6 +7793,114 @@ parse_configure(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for template create command. */ +static int +parse_template(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != PATTERN_TEMPLATE && + ctx->curr != ACTIONS_TEMPLATE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case PATTERN_TEMPLATE_CREATE: + out->args.vc.pattern = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + out->args.vc.pat_templ_id = UINT32_MAX; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case ACTIONS_TEMPLATE_CREATE: + out->args.vc.act_templ_id = UINT32_MAX; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case ACTIONS_TEMPLATE_SPEC: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + ctx->object = out->args.vc.actions; + ctx->objmask = NULL; + return len; + case ACTIONS_TEMPLATE_MASK: + out->args.vc.masks = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.vc.actions + + out->args.vc.actions_n), + sizeof(double)); + ctx->object = out->args.vc.masks; + ctx->objmask = NULL; + return len; + default: + return -1; + } +} + +/** Parse tokens for template destroy command. */ +static int +parse_template_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *template_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command || + out->command == PATTERN_TEMPLATE || + out->command == ACTIONS_TEMPLATE) { + if (ctx->curr != PATTERN_TEMPLATE_DESTROY && + ctx->curr != ACTIONS_TEMPLATE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.templ_destroy.template_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + template_id = out->args.templ_destroy.template_id + + out->args.templ_destroy.template_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8564,6 +8870,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token, return -1; } +/** Complete available pattern template IDs. */ +static int +comp_pattern_template_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_template *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + +/** Complete available actions template IDs. 
*/ +static int +comp_actions_template_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_template *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + /** Internal context. */ static struct context cmd_flow_context; @@ -8832,6 +9186,24 @@ cmd_flow_parsed(const struct buffer *in) in->args.configure.nb_queue, &in->args.configure.queue_attr); break; + case PATTERN_TEMPLATE_CREATE: + port_flow_pattern_template_create(in->port, in->args.vc.pat_templ_id, + in->args.vc.attr.reserved, in->args.vc.pattern); + break; + case PATTERN_TEMPLATE_DESTROY: + port_flow_pattern_template_destroy(in->port, + in->args.templ_destroy.template_id_n, + in->args.templ_destroy.template_id); + break; + case ACTIONS_TEMPLATE_CREATE: + port_flow_actions_template_create(in->port, in->args.vc.act_templ_id, + in->args.vc.actions, in->args.vc.masks); + break; + case ACTIONS_TEMPLATE_DESTROY: + port_flow_actions_template_destroy(in->port, + in->args.templ_destroy.template_id_n, + in->args.templ_destroy.template_id); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index df83f8dbdd..2ef7c3e07a 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1609,6 +1609,49 @@ action_alloc(portid_t port_id, uint32_t id, return 0; } +static int +template_alloc(uint32_t id, struct port_template **template, + struct port_template **list) +{ + struct port_template *lst = *list; + struct port_template **ppt; + struct port_template *pt = NULL; + + *template = NULL; + if (id == UINT32_MAX) { + /* taking 
first available ID */ + if (lst) { + if (lst->id == UINT32_MAX - 1) { + printf("Highest template ID is already" + " assigned, delete it first\n"); + return -ENOMEM; + } + id = lst->id + 1; + } else { + id = 0; + } + } + pt = calloc(1, sizeof(*pt)); + if (!pt) { + printf("Allocation of port template failed\n"); + return -ENOMEM; + } + ppt = list; + while (*ppt && (*ppt)->id > id) + ppt = &(*ppt)->next; + if (*ppt && (*ppt)->id == id) { + printf("Template #%u is already assigned," + " delete it first\n", id); + free(pt); + return -EINVAL; + } + pt->next = *ppt; + pt->id = id; + *ppt = pt; + *template = pt; + return 0; +} + /** Get info about flow management resources. */ int port_flow_get_info(portid_t port_id) @@ -2078,6 +2121,166 @@ age_action_get(const struct rte_flow_action *actions) return NULL; } +/** Create pattern template */ +int +port_flow_pattern_template_create(portid_t port_id, uint32_t id, bool relaxed, + const struct rte_flow_item *pattern) +{ + struct rte_port *port; + struct port_template *pit; + int ret; + struct rte_flow_pattern_template_attr attr = { + .relaxed_matching = relaxed }; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + ret = template_alloc(id, &pit, &port->pattern_templ_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x22, sizeof(error)); + pit->template.pattern_template = rte_flow_pattern_template_create(port_id, + &attr, pattern, &error); + if (!pit->template.pattern_template) { + uint32_t destroy_id = pit->id; + port_flow_pattern_template_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + printf("Pattern template #%u created\n", pit->id); + return 0; +} + +/** Destroy pattern template */ +int +port_flow_pattern_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template) +{ + struct rte_port *port; + struct port_template **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->pattern_templ_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_template *pit = *tmp; + + if (template[i] != pit->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x33, sizeof(error)); + + if (pit->template.pattern_template && + rte_flow_pattern_template_destroy(port_id, + pit->template.pattern_template, + &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pit->next; + printf("Pattern template #%u destroyed\n", pit->id); + free(pit); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + +/** Create actions template */ +int +port_flow_actions_template_create(portid_t port_id, uint32_t id, + const struct rte_flow_action *actions, + const struct rte_flow_action *masks) +{ + struct rte_port *port; + struct port_template *pat; + int ret; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + ret = template_alloc(id, &pat, &port->actions_templ_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x22, sizeof(error)); + pat->template.actions_template = rte_flow_actions_template_create(port_id, + NULL, actions, masks, &error); + if (!pat->template.actions_template) { + uint32_t destroy_id = pat->id; + port_flow_actions_template_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + printf("Actions template #%u created\n", pat->id); + return 0; +} + +/** Destroy actions template */ +int +port_flow_actions_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template) +{ + struct rte_port *port; + struct port_template **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->actions_templ_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_template *pat = *tmp; + + if (template[i] != pat->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x33, sizeof(error)); + + if (pat->template.actions_template && + rte_flow_actions_template_destroy(port_id, + pat->template.actions_template, &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pat->next; + printf("Actions template #%u destroyed\n", pat->id); + free(pat); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 096b6825eb..c70b1fa4e8 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -166,6 +166,17 @@ enum age_action_context_type { ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION, }; +/** Descriptor for a template. */ +struct port_template { + struct port_template *next; /**< Next template in list. */ + struct port_template *tmp; /**< Temporary linking. */ + uint32_t id; /**< Template ID. 
*/ + union { + struct rte_flow_pattern_template *pattern_template; + struct rte_flow_actions_template *actions_template; + } template; /**< PMD opaque template object */ +}; + /** Descriptor for a single flow. */ struct port_flow { struct port_flow *next; /**< Next flow in list. */ @@ -246,6 +257,8 @@ struct rte_port { queueid_t queue_nb; /**< nb. of queues for flow rules */ uint32_t queue_sz; /**< size of a queue for flow rules */ uint8_t slave_flag; /**< bonding slave port */ + struct port_template *pattern_templ_list; /**< Pattern templates. */ + struct port_template *actions_templ_list; /**< Actions templates. */ struct port_flow *flow_list; /**< Associated flows. */ struct port_indirect_action *actions_list; /**< Associated indirect actions. */ @@ -892,6 +905,16 @@ int port_flow_configure(portid_t port_id, const struct rte_flow_port_attr *port_attr, uint16_t nb_queue, const struct rte_flow_queue_attr *queue_attr); +int port_flow_pattern_template_create(portid_t port_id, uint32_t id, + bool relaxed, + const struct rte_flow_item *pattern); +int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template); +int port_flow_actions_template_create(portid_t port_id, uint32_t id, + const struct rte_flow_action *actions, + const struct rte_flow_action *masks); +int port_flow_actions_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index cfdda5005c..acb763bdf0 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3344,6 +3344,24 @@ following sections. 
[aging_counters_number {number}] [meters_number {number}] +- Create a pattern template:: + + flow pattern_template {port_id} create [pattern_template_id {id}] + [relaxed {boolean}] template {item} [/ {item} [...]] / end + +- Destroy a pattern template:: + + flow pattern_template {port_id} destroy pattern_template {id} [...] + +- Create an actions template:: + + flow actions_template {port_id} create [actions_template_id {id}] + template {action} [/ {action} [...]] / end + mask {action} [/ {action} [...]] / end + +- Destroy an actions template:: + + flow actions_template {port_id} destroy actions_template {id} [...] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3447,6 +3465,85 @@ Otherwise it will show an error message of the form:: Caught error type [...] ([...]): [...] +Creating pattern templates +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow pattern_template create`` creates the specified pattern template. +It is bound to ``rte_flow_pattern_template_create()``:: + + flow pattern_template {port_id} create [pattern_template_id {id}] + [relaxed {boolean}] template {item} [/ {item} [...]] / end + +If successful, it will show:: + + Pattern template #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same pattern items as ``flow create``, +their format is described in `Creating flow rules`_. + +Destroying pattern templates +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow pattern_template destroy`` destroys one or more pattern templates +from their template ID (as returned by ``flow pattern_template create``), +this command calls ``rte_flow_pattern_template_destroy()`` as many +times as necessary:: + + flow pattern_template {port_id} destroy pattern_template {id} [...] + +If successful, it will show:: + + Pattern template #[...] destroyed + +It does not report anything for pattern template IDs that do not exist.
+The usual error message is shown when a pattern template cannot be destroyed:: + + Caught error type [...] ([...]): [...] + +Creating actions templates +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow actions_template create`` creates the specified actions template. +It is bound to ``rte_flow_actions_template_create()``:: + + flow actions_template {port_id} create [actions_template_id {id}] + template {action} [/ {action} [...]] / end + mask {action} [/ {action} [...]] / end + +If successful, it will show:: + + Actions template #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same actions as ``flow create``, +their format is described in `Creating flow rules`_. + +Destroying actions templates +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow actions_template destroy`` destroys one or more actions templates +from their template ID (as returned by ``flow actions_template create``), +this command calls ``rte_flow_actions_template_destroy()`` as many +times as necessary:: + + flow actions_template {port_id} destroy actions_template {id} [...] + +If successful, it will show:: + + Actions template #[...] destroyed + +It does not report anything for actions template IDs that do not exist. +The usual error message is shown when an actions template cannot be destroyed:: + + Caught error type [...] ([...]): [...] 
+ Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From patchwork Wed Feb 9 21:38:05 2022 X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 107190 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev Subject: [PATCH v4 06/10] app/testpmd: implement rte flow table management Date: Wed, 9 Feb 2022 23:38:05 +0200 Message-ID: <20220209213809.1208269-7-akozyrev@nvidia.com> In-Reply-To: <20220209213809.1208269-1-akozyrev@nvidia.com> References: <20220206032526.816079-1-akozyrev@nvidia.com> <20220209213809.1208269-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_table API. Provide the command line interface for the flow table creation/destruction. Usage example: testpmd> flow template_table 0 create table_id 6 group 9 priority 4 ingress mode 1 rules_number 64 pattern_template 2 actions_template 4 testpmd> flow template_table 0 destroy table 6 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 315 ++++++++++++++++++++ app/test-pmd/config.c | 171 +++++++++++ app/test-pmd/testpmd.h | 17 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 53 ++++ 4 files changed, 556 insertions(+) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 34bc73eea3..3e89525445 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -58,6 +58,7 @@ enum index { COMMON_FLEX_TOKEN, COMMON_PATTERN_TEMPLATE_ID, COMMON_ACTIONS_TEMPLATE_ID, + COMMON_TABLE_ID, /* TOP-level command. */ ADD, @@ -78,6 +79,7 @@ enum index { CONFIGURE, PATTERN_TEMPLATE, ACTIONS_TEMPLATE, + TABLE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -112,6 +114,20 @@ enum index { ACTIONS_TEMPLATE_SPEC, ACTIONS_TEMPLATE_MASK, + /* Table arguments. */ + TABLE_CREATE, + TABLE_DESTROY, + TABLE_CREATE_ID, + TABLE_DESTROY_ID, + TABLE_GROUP, + TABLE_PRIORITY, + TABLE_INGRESS, + TABLE_EGRESS, + TABLE_TRANSFER, + TABLE_RULES_NUMBER, + TABLE_PATTERN_TEMPLATE, + TABLE_ACTIONS_TEMPLATE, + /* Tunnel arguments. */ TUNNEL_CREATE, TUNNEL_CREATE_TYPE, @@ -885,6 +901,18 @@ struct buffer { uint32_t *template_id; uint32_t template_id_n; } templ_destroy; /**< Template destroy arguments.
*/ + struct { + uint32_t id; + struct rte_flow_template_table_attr attr; + uint32_t *pat_templ_id; + uint32_t pat_templ_id_n; + uint32_t *act_templ_id; + uint32_t act_templ_id_n; + } table; /**< Table arguments. */ + struct { + uint32_t *table_id; + uint32_t table_id_n; + } table_destroy; /**< Template destroy arguments. */ struct { uint32_t *action_id; uint32_t action_id_n; @@ -1016,6 +1044,32 @@ static const enum index next_at_destroy_attr[] = { ZERO, }; +static const enum index next_table_subcmd[] = { + TABLE_CREATE, + TABLE_DESTROY, + ZERO, +}; + +static const enum index next_table_attr[] = { + TABLE_CREATE_ID, + TABLE_GROUP, + TABLE_PRIORITY, + TABLE_INGRESS, + TABLE_EGRESS, + TABLE_TRANSFER, + TABLE_RULES_NUMBER, + TABLE_PATTERN_TEMPLATE, + TABLE_ACTIONS_TEMPLATE, + END, + ZERO, +}; + +static const enum index next_table_destroy_attr[] = { + TABLE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2061,6 +2115,11 @@ static int parse_template(struct context *, const struct token *, static int parse_template_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_table(struct context *, const struct token *, + const char *, unsigned int, void *, unsigned int); +static int parse_table_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2134,6 +2193,8 @@ static int comp_pattern_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_actions_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_table_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. 
*/ static const struct token token_list[] = { @@ -2298,6 +2359,13 @@ static const struct token token_list[] = { .call = parse_int, .comp = comp_actions_template_id, }, + [COMMON_TABLE_ID] = { + .name = "{table_id}", + .type = "TABLE_ID", + .help = "table id", + .call = parse_int, + .comp = comp_table_id, + }, /* Top-level command. */ [FLOW] = { .name = "flow", @@ -2308,6 +2376,7 @@ static const struct token token_list[] = { CONFIGURE, PATTERN_TEMPLATE, ACTIONS_TEMPLATE, + TABLE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -2488,6 +2557,104 @@ static const struct token token_list[] = { .call = parse_template, }, /* Top-level command. */ + [TABLE] = { + .name = "template_table", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage template tables", + .next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_table, + }, + /* Sub-level commands. */ + [TABLE_CREATE] = { + .name = "create", + .help = "create template table", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_DESTROY] = { + .name = "destroy", + .help = "destroy template table", + .next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_table_destroy, + }, + /* Table arguments. 
*/ + [TABLE_CREATE_ID] = { + .name = "table_id", + .help = "specify table id to create", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)), + }, + [TABLE_DESTROY_ID] = { + .name = "table", + .help = "specify table id to destroy", + .next = NEXT(next_table_destroy_attr, + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table_destroy.table_id)), + .call = parse_table_destroy, + }, + [TABLE_GROUP] = { + .name = "group", + .help = "specify a group", + .next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.flow_attr.group)), + }, + [TABLE_PRIORITY] = { + .name = "priority", + .help = "specify a priority level", + .next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.flow_attr.priority)), + }, + [TABLE_EGRESS] = { + .name = "egress", + .help = "affect rule to egress", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_INGRESS] = { + .name = "ingress", + .help = "affect rule to ingress", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_TRANSFER] = { + .name = "transfer", + .help = "affect rule to transfer", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_RULES_NUMBER] = { + .name = "rules_number", + .help = "number of rules in table", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.nb_flows)), + }, + [TABLE_PATTERN_TEMPLATE] = { + .name = "pattern_template", + .help = "specify pattern template id", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table.pat_templ_id)), + .call = parse_table, + }, + [TABLE_ACTIONS_TEMPLATE] = { + .name = "actions_template", + .help = "specify actions template id", + .next = 
NEXT(next_table_attr, + NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table.act_templ_id)), + .call = parse_table, + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -7901,6 +8068,119 @@ parse_template_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for table create command. */ +static int +parse_table(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *template_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != TABLE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + } + switch (ctx->curr) { + case TABLE_CREATE: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.table.id = UINT32_MAX; + return len; + case TABLE_PATTERN_TEMPLATE: + out->args.table.pat_templ_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + template_id = out->args.table.pat_templ_id + + out->args.table.pat_templ_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + return len; + case TABLE_ACTIONS_TEMPLATE: + out->args.table.act_templ_id = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.table.pat_templ_id + + out->args.table.pat_templ_id_n), + sizeof(double)); + template_id = out->args.table.act_templ_id + + out->args.table.act_templ_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + 
return len; + case TABLE_INGRESS: + out->args.table.attr.flow_attr.ingress = 1; + return len; + case TABLE_EGRESS: + out->args.table.attr.flow_attr.egress = 1; + return len; + case TABLE_TRANSFER: + out->args.table.attr.flow_attr.transfer = 1; + return len; + default: + return -1; + } +} + +/** Parse tokens for table destroy command. */ +static int +parse_table_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *table_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command || out->command == TABLE) { + if (ctx->curr != TABLE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.table_destroy.table_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + table_id = out->args.table_destroy.table_id + + out->args.table_destroy.table_id_n++; + if ((uint8_t *)table_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = table_id; + ctx->objmask = NULL; + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8918,6 +9198,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token, return i; } +/** Complete available table IDs. 
*/ +static int +comp_table_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_table *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->table_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + /** Internal context. */ static struct context cmd_flow_context; @@ -9204,6 +9508,17 @@ cmd_flow_parsed(const struct buffer *in) in->args.templ_destroy.template_id_n, in->args.templ_destroy.template_id); break; + case TABLE_CREATE: + port_flow_template_table_create(in->port, in->args.table.id, + &in->args.table.attr, in->args.table.pat_templ_id_n, + in->args.table.pat_templ_id, in->args.table.act_templ_id_n, + in->args.table.act_templ_id); + break; + case TABLE_DESTROY: + port_flow_template_table_destroy(in->port, + in->args.table_destroy.table_id_n, + in->args.table_destroy.table_id); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 2ef7c3e07a..316c16901a 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1652,6 +1652,49 @@ template_alloc(uint32_t id, struct port_template **template, return 0; } +static int +table_alloc(uint32_t id, struct port_table **table, + struct port_table **list) +{ + struct port_table *lst = *list; + struct port_table **ppt; + struct port_table *pt = NULL; + + *table = NULL; + if (id == UINT32_MAX) { + /* taking first available ID */ + if (lst) { + if (lst->id == UINT32_MAX - 1) { + printf("Highest table ID is already" + " assigned, delete it first\n"); + return -ENOMEM; + } + id = lst->id + 1; + } else { + id = 0; + } + } + pt = calloc(1, sizeof(*pt)); + if (!pt) { + printf("Allocation of 
table failed\n"); + return -ENOMEM; + } + ppt = list; + while (*ppt && (*ppt)->id > id) + ppt = &(*ppt)->next; + if (*ppt && (*ppt)->id == id) { + printf("Table #%u is already assigned," + " delete it first\n", id); + free(pt); + return -EINVAL; + } + pt->next = *ppt; + pt->id = id; + *ppt = pt; + *table = pt; + return 0; +} + /** Get info about flow management resources. */ int port_flow_get_info(portid_t port_id) @@ -2281,6 +2324,134 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n, return ret; } +/** Create table */ +int +port_flow_template_table_create(portid_t port_id, uint32_t id, + const struct rte_flow_template_table_attr *table_attr, + uint32_t nb_pattern_templates, uint32_t *pattern_templates, + uint32_t nb_actions_templates, uint32_t *actions_templates) +{ + struct rte_port *port; + struct port_table *pt; + struct port_template *temp = NULL; + int ret; + uint32_t i; + struct rte_flow_error error; + struct rte_flow_pattern_template + *flow_pattern_templates[nb_pattern_templates]; + struct rte_flow_actions_template + *flow_actions_templates[nb_actions_templates]; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + for (i = 0; i < nb_pattern_templates; ++i) { + bool found = false; + temp = port->pattern_templ_list; + while (temp) { + if (pattern_templates[i] == temp->id) { + flow_pattern_templates[i] = + temp->template.pattern_template; + found = true; + break; + } + temp = temp->next; + } + if (!found) { + printf("Pattern template #%u is invalid\n", + pattern_templates[i]); + return -EINVAL; + } + } + for (i = 0; i < nb_actions_templates; ++i) { + bool found = false; + temp = port->actions_templ_list; + while (temp) { + if (actions_templates[i] == temp->id) { + flow_actions_templates[i] = + temp->template.actions_template; + found = true; + break; + } + temp = temp->next; + } + if (!found) { + printf("Actions template #%u is invalid\n", + 
actions_templates[i]); + return -EINVAL; + } + } + ret = table_alloc(id, &pt, &port->table_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x22, sizeof(error)); + pt->table = rte_flow_template_table_create(port_id, table_attr, + flow_pattern_templates, nb_pattern_templates, + flow_actions_templates, nb_actions_templates, + &error); + + if (!pt->table) { + uint32_t destroy_id = pt->id; + port_flow_template_table_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + pt->nb_pattern_templates = nb_pattern_templates; + pt->nb_actions_templates = nb_actions_templates; + printf("Template table #%u created\n", pt->id); + return 0; +} + +/** Destroy table */ +int +port_flow_template_table_destroy(portid_t port_id, + uint32_t n, const uint32_t *table) +{ + struct rte_port *port; + struct port_table **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->table_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_table *pt = *tmp; + + if (table[i] != pt->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x33, sizeof(error)); + + if (pt->table && + rte_flow_template_table_destroy(port_id, + pt->table, + &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pt->next; + printf("Template table #%u destroyed\n", pt->id); + free(pt); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. 
 */
int
port_flow_create(portid_t port_id,

diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c70b1fa4e8..4c6e775bad 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -177,6 +177,16 @@ struct port_template {
 	} template; /**< PMD opaque template object */
 };
 
+/** Descriptor for a flow table. */
+struct port_table {
+	struct port_table *next; /**< Next table in list. */
+	struct port_table *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Table ID. */
+	uint32_t nb_pattern_templates; /**< Number of pattern templates. */
+	uint32_t nb_actions_templates; /**< Number of actions templates. */
+	struct rte_flow_template_table *table; /**< PMD opaque table object. */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -259,6 +269,7 @@ struct rte_port {
 	uint8_t slave_flag; /**< bonding slave port */
 	struct port_template *pattern_templ_list; /**< Pattern templates. */
 	struct port_template *actions_templ_list; /**< Actions templates. */
+	struct port_table *table_list; /**< Flow tables. */
 	struct port_flow *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list; /**< Associated indirect actions.
 */
@@ -915,6 +926,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id,
 		const struct rte_flow_action *masks);
 int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 		const uint32_t *template);
+int port_flow_template_table_create(portid_t port_id, uint32_t id,
+		const struct rte_flow_template_table_attr *table_attr,
+		uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		uint32_t nb_actions_templates, uint32_t *actions_templates);
+int port_flow_template_table_destroy(portid_t port_id,
+		uint32_t n, const uint32_t *table);
 int port_flow_validate(portid_t port_id,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item *pattern,

diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index acb763bdf0..16b874250c 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3362,6 +3362,19 @@ following sections.
 
   flow actions_template {port_id} destroy actions_template {id} [...]
 
+- Create a table::
+
+   flow template_table {port_id} create
+       [table_id {id}]
+       [group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+- Destroy a table::
+
+   flow template_table {port_id} destroy table {id} [...]
+
 - Check whether a flow rule can be created::
 
   flow validate {port_id}
@@ -3544,6 +3557,46 @@ The usual error message is shown when an actions template cannot be destroyed::
 
   Caught error type [...] ([...]): [...]
 
+Creating template table
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table create`` creates the specified template table.
+It is bound to ``rte_flow_template_table_create()``::
+
+   flow template_table {port_id} create
+       [table_id {id}] [group {group_id}]
+       [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+If successful, it will show::
+
+   Template table #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Destroying template table
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table destroy`` destroys one or more template tables
+from their table ID (as returned by ``flow template_table create``).
+This command calls ``rte_flow_template_table_destroy()`` as many
+times as necessary::
+
+   flow template_table {port_id} destroy table {id} [...]
+
+If successful, it will show::
+
+   Template table #[...] destroyed
+
+It does not report anything for table IDs that do not exist.
+The usual error message is shown when a table cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Wed Feb 9 21:38:06 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 107189
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v4 07/10] app/testpmd: implement rte flow queue flow operations
Date: Wed, 9 Feb 2022 23:38:06 +0200
Message-ID: <20220209213809.1208269-8-akozyrev@nvidia.com>
In-Reply-To: <20220209213809.1208269-1-akozyrev@nvidia.com>
References: <20220206032526.816079-1-akozyrev@nvidia.com> <20220209213809.1208269-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:

  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 267 +++++++++++++++++++-
 app/test-pmd/config.c                       | 166 ++++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  57 +++++
 4 files changed, 496 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 3e89525445..f794a83a07 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -92,6 +93,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -114,6 +116,22 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_POSTPONE,
+	QUEUE_TEMPLATE_TABLE,
+	QUEUE_PATTERN_TEMPLATE,
+	QUEUE_ACTIONS_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_POSTPONE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -891,6 +909,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /** Async queue ID.
*/ + bool postpone; /** Postpone async operation */ union { struct { struct rte_flow_port_attr port_attr; @@ -921,6 +941,7 @@ struct buffer { uint32_t action_id; } ia; /* Indirect action query arguments */ struct { + uint32_t table_id; uint32_t pat_templ_id; uint32_t act_templ_id; struct rte_flow_attr attr; @@ -1070,6 +1091,18 @@ static const enum index next_table_destroy_attr[] = { ZERO, }; +static const enum index next_queue_subcmd[] = { + QUEUE_CREATE, + QUEUE_DESTROY, + ZERO, +}; + +static const enum index next_queue_destroy_attr[] = { + QUEUE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2120,6 +2153,12 @@ static int parse_table(struct context *, const struct token *, static int parse_table_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_qo(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_qo_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2195,6 +2234,8 @@ static int comp_actions_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_table_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_queue_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. */ static const struct token token_list[] = { @@ -2366,6 +2407,13 @@ static const struct token token_list[] = { .call = parse_int, .comp = comp_table_id, }, + [COMMON_QUEUE_ID] = { + .name = "{queue_id}", + .type = "QUEUE_ID", + .help = "queue id", + .call = parse_int, + .comp = comp_queue_id, + }, /* Top-level command. 
*/ [FLOW] = { .name = "flow", @@ -2388,7 +2436,8 @@ static const struct token token_list[] = { QUERY, ISOLATE, TUNNEL, - FLEX)), + FLEX, + QUEUE)), .call = parse_init, }, /* Top-level command. */ @@ -2655,6 +2704,84 @@ static const struct token token_list[] = { .call = parse_table, }, /* Top-level command. */ + [QUEUE] = { + .name = "queue", + .help = "queue a flow rule operation", + .next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_qo, + }, + /* Sub-level commands. */ + [QUEUE_CREATE] = { + .name = "create", + .help = "create a flow rule", + .next = NEXT(NEXT_ENTRY(QUEUE_TEMPLATE_TABLE), + NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qo, + }, + [QUEUE_DESTROY] = { + .name = "destroy", + .help = "destroy a flow rule", + .next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID), + NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qo_destroy, + }, + /* Queue arguments. 
*/ + [QUEUE_TEMPLATE_TABLE] = { + .name = "template table", + .help = "specify table id", + .next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE), + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.table_id)), + .call = parse_qo, + }, + [QUEUE_PATTERN_TEMPLATE] = { + .name = "pattern_template", + .help = "specify pattern template index", + .next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE), + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.pat_templ_id)), + .call = parse_qo, + }, + [QUEUE_ACTIONS_TEMPLATE] = { + .name = "actions_template", + .help = "specify actions template index", + .next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE), + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.act_templ_id)), + .call = parse_qo, + }, + [QUEUE_CREATE_POSTPONE] = { + .name = "postpone", + .help = "postpone create operation", + .next = NEXT(NEXT_ENTRY(ITEM_PATTERN), + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, postpone)), + .call = parse_qo, + }, + [QUEUE_DESTROY_POSTPONE] = { + .name = "postpone", + .help = "postpone destroy operation", + .next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID), + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, postpone)), + .call = parse_qo_destroy, + }, + [QUEUE_DESTROY_ID] = { + .name = "rule", + .help = "specify rule id to destroy", + .next = NEXT(next_queue_destroy_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.destroy.rule)), + .call = parse_qo_destroy, + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8181,6 +8308,111 @@ parse_table_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for queue create commands. 
*/ +static int +parse_qo(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != QUEUE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case QUEUE_CREATE: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case QUEUE_TEMPLATE_TABLE: + case QUEUE_PATTERN_TEMPLATE: + case QUEUE_ACTIONS_TEMPLATE: + case QUEUE_CREATE_POSTPONE: + return len; + case ITEM_PATTERN: + out->args.vc.pattern = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + ctx->object = out->args.vc.pattern; + ctx->objmask = NULL; + return len; + case ACTIONS: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.vc.pattern + + out->args.vc.pattern_n), + sizeof(double)); + ctx->object = out->args.vc.actions; + ctx->objmask = NULL; + return len; + default: + return -1; + } +} + +/** Parse tokens for queue destroy command. */ +static int +parse_qo_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *flow_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command || out->command == QUEUE) { + if (ctx->curr != QUEUE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.destroy.rule = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + switch (ctx->curr) { + case QUEUE_DESTROY_ID: + flow_id = out->args.destroy.rule + + out->args.destroy.rule_n++; + if ((uint8_t *)flow_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = flow_id; + ctx->objmask = NULL; + return len; + case QUEUE_DESTROY_POSTPONE: + return len; + default: + return -1; + } +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9222,6 +9454,28 @@ comp_table_id(struct context *ctx, const struct token *token, return i; } +/** Complete available queue IDs. */ +static int +comp_queue_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (i = 0; i < port->queue_nb; i++) { + if (buf && i == ent) + return snprintf(buf, size, "%u", i); + } + if (buf) + return -1; + return i; +} + /** Internal context. 
*/ static struct context cmd_flow_context; @@ -9519,6 +9773,17 @@ cmd_flow_parsed(const struct buffer *in) in->args.table_destroy.table_id_n, in->args.table_destroy.table_id); break; + case QUEUE_CREATE: + port_queue_flow_create(in->port, in->queue, in->postpone, + in->args.vc.table_id, in->args.vc.pat_templ_id, + in->args.vc.act_templ_id, in->args.vc.pattern, + in->args.vc.actions); + break; + case QUEUE_DESTROY: + port_queue_flow_destroy(in->port, in->queue, in->postpone, + in->args.destroy.rule_n, + in->args.destroy.rule); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 316c16901a..e8ae16a044 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2452,6 +2452,172 @@ port_flow_template_table_destroy(portid_t port_id, return ret; } +/** Enqueue create flow rule operation. */ +int +port_queue_flow_create(portid_t port_id, queueid_t queue_id, + bool postpone, uint32_t table_id, + uint32_t pattern_idx, uint32_t actions_idx, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions) +{ + struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone }; + struct rte_flow_q_op_res comp = { 0 }; + struct rte_flow *flow; + struct rte_port *port; + struct port_flow *pf; + struct port_table *pt; + uint32_t id = 0; + bool found; + int ret = 0; + struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL }; + struct rte_flow_action_age *age = age_action_get(actions); + + port = &ports[port_id]; + if (port->flow_list) { + if (port->flow_list->id == UINT32_MAX) { + printf("Highest rule ID is already assigned," + " delete it first"); + return -ENOMEM; + } + id = port->flow_list->id + 1; + } + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + found = false; + pt = port->table_list; + while (pt) { + if (table_id == pt->id) { + found = true; + break; + } + pt = pt->next; 
+ } + if (!found) { + printf("Table #%u is invalid\n", table_id); + return -EINVAL; + } + + if (pattern_idx >= pt->nb_pattern_templates) { + printf("Pattern template index #%u is invalid," + " %u templates present in the table\n", + pattern_idx, pt->nb_pattern_templates); + return -EINVAL; + } + if (actions_idx >= pt->nb_actions_templates) { + printf("Actions template index #%u is invalid," + " %u templates present in the table\n", + actions_idx, pt->nb_actions_templates); + return -EINVAL; + } + + pf = port_flow_new(NULL, pattern, actions, &error); + if (!pf) + return port_flow_complain(&error); + if (age) { + pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW; + age->context = &pf->age_type; + } + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x11, sizeof(error)); + flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr, + pt->table, pattern, pattern_idx, actions, actions_idx, &error); + if (!flow) { + uint32_t flow_id = pf->id; + port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id); + return port_flow_complain(&error); + } + + while (ret == 0) { + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x22, sizeof(error)); + ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error); + if (ret < 0) { + printf("Failed to pull queue\n"); + return -EINVAL; + } + } + + pf->next = port->flow_list; + pf->id = id; + pf->flow = flow; + port->flow_list = pf; + printf("Flow rule #%u creation enqueued\n", pf->id); + return 0; +} + +/** Enqueue number of destroy flow rules operations. 
*/ +int +port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, + bool postpone, uint32_t n, const uint32_t *rule) +{ + struct rte_flow_q_ops_attr op_attr = { .postpone = postpone }; + struct rte_flow_q_op_res comp = { 0 }; + struct rte_port *port; + struct port_flow **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + tmp = &port->flow_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_flow *pf = *tmp; + + if (rule[i] != pf->id) + continue; + /* + * Poisoning to make sure PMD + * update it in case of error. + */ + memset(&error, 0x33, sizeof(error)); + if (rte_flow_q_flow_destroy(port_id, queue_id, &op_attr, + pf->flow, &error)) { + ret = port_flow_complain(&error); + continue; + } + + while (ret == 0) { + /* + * Poisoning to make sure PMD + * update it in case of error. + */ + memset(&error, 0x44, sizeof(error)); + ret = rte_flow_q_pull(port_id, queue_id, + &comp, 1, &error); + if (ret < 0) { + printf("Failed to pull queue\n"); + return -EINVAL; + } + } + + printf("Flow rule #%u destruction enqueued\n", pf->id); + *tmp = pf->next; + free(pf); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. 
 */
int
port_flow_create(portid_t port_id,

diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 4c6e775bad..d0e1e3eeec 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -932,6 +932,13 @@ int port_flow_template_table_create(portid_t port_id, uint32_t id,
 		uint32_t nb_actions_templates, uint32_t *actions_templates);
 int port_flow_template_table_destroy(portid_t port_id,
 		uint32_t n, const uint32_t *table);
+int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+		bool postpone, uint32_t table_id,
+		uint32_t pattern_idx, uint32_t actions_idx,
+		const struct rte_flow_item *pattern,
+		const struct rte_flow_action *actions);
+int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+		bool postpone, uint32_t n, const uint32_t *rule);
 int port_flow_validate(portid_t port_id,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item *pattern,

diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 16b874250c..b802288c66 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3382,6 +3382,20 @@ following sections.
    pattern {item} [/ {item} [...]] / end
    actions {action} [/ {action} [...]] / end
 
+- Enqueue creation of a flow rule::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+- Enqueue destruction of specific flow rules::
+
+   flow queue {port_id} destroy {queue_id}
+       [postpone {boolean}] rule {rule_id} [...]
+
 - Create a flow rule::
 
   flow create {port_id}
@@ -3703,6 +3717,30 @@ one.
 
 **All unspecified object values are automatically initialized to 0.**
 
+Enqueueing creation of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue create`` adds a flow rule creation operation to a queue.
+It is bound to ``rte_flow_q_flow_create()``::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+If successful, it will return a flow rule ID usable with other commands::
+
+   Flow rule #[...] creation enqueued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items and actions as ``flow create``;
+their format is described in `Creating flow rules`_.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4418,6 +4456,25 @@ Non-existent rule IDs are ignored::
 
    Flow rule #0 destroyed
    testpmd>
 
+Enqueueing destruction of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue destroy`` adds destruction operations for one or more rules,
+identified by their rule ID (as returned by ``flow queue create``), to a queue.
+This command calls ``rte_flow_q_flow_destroy()`` as many times as necessary::
+
+   flow queue {port_id} destroy {queue_id}
+       [postpone {boolean}] rule {rule_id} [...]
+
+If successful, it will show::
+
+   Flow rule #[...] destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+ Querying flow rules ~~~~~~~~~~~~~~~~~~~ From patchwork Wed Feb 9 21:38:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 107191 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2ECB1A0032; Wed, 9 Feb 2022 22:39:32 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7085A41199; Wed, 9 Feb 2022 22:39:01 +0100 (CET) Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2065.outbound.protection.outlook.com [40.107.244.65]) by mails.dpdk.org (Postfix) with ESMTP id 207F040140 for ; Wed, 9 Feb 2022 22:38:59 +0100 (CET) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=aS36Nr0CTw9oCr37wvDJsf9a0x4NuYVVS7wf8+XEVEkE1vl5fwSdcHYLmiI4KEhXS2vDXudLhr/pHYu0AeurLaOvw3jjwea701FP/YKgjgTsV5oZ96/Y/xBVY4y3MNQmFHN7Bf9s/DJd9mOLWTcv3m2sReKRPoTip769c3kEYHlrvGeTYiKFSwSH1d5egnXqhHdWHWxysVJeA+b1zSTY9f66G4M7cwuctWLXIblJo3LtexIL5Ban1qNip9IqAlxh6rB2qbgeEzgFiPBwYddj9XC/jrSo9uF0O1CJtwRfyfD25VTl5R/I2Mz8GJ3vclm9ywwX6S6f81MjU+409y7FNQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=UuyOyf0BPbCLnC0M2G/BM16Lb2S8HgPsQ5OG23fv07U=; b=O/okYqUhghCp/zOL1sz5kLA3KapIreZBVz3G4JG2j9AwcpaPG4Rbnfc0k28RoMD3xUuxCC8p0Ew5gGcAVhZqfeAntd+t5win1gMnAl7oTcIqEIFPMs9PPvCkVsP4CHqWJbNlyxnBb3dwQYtmaAfV30kmufIYb+Urzd0Ixlfr6im7qXBgZH47CIgOQOUEu6WzfkfKR3wm6S2s/m5e66TsAulfD8r4tDNO2qkjcnbQauiL4c4zcwCmzHWIrUkVKOrZNzd0yjm9K1ir71Lig0ZpnTp1Fh8swtqYJYuHQ86TtkB5IGu5TWk6EHfWle85mKnEXXPVlEhUh2tiPo0mfZJk7A== 
From: Alexander Kozyrev
Subject: [PATCH v4 08/10] app/testpmd: implement rte flow push operations
Date: Wed, 9 Feb 2022 23:38:07 +0200
Message-ID: <20220209213809.1208269-9-akozyrev@nvidia.com>
In-Reply-To: <20220209213809.1208269-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_q_push API.
Provide the command line interface for pushing operations.
Usage example: flow push 0 queue 0

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 56 ++++++++++++++++++++-
 app/test-pmd/config.c                       | 28 +++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index f794a83a07..11240d6f04 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -94,6 +94,7 @@ enum index {
	TUNNEL,
	FLEX,
	QUEUE,
+	PUSH,

	/* Flex arguments */
	FLEX_ITEM_INIT,
@@ -132,6 +133,9 @@ enum index {
	QUEUE_DESTROY_ID,
	QUEUE_DESTROY_POSTPONE,

+	/* Push arguments. */
+	PUSH_QUEUE,
+
	/* Table arguments. */
	TABLE_CREATE,
	TABLE_DESTROY,
@@ -2159,6 +2163,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
			    const char *, unsigned int, void *,
			    unsigned int);
+static int parse_push(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
			const char *, unsigned int,
			void *, unsigned int);
@@ -2437,7 +2444,8 @@ static const struct token token_list[] = {
			      ISOLATE,
			      TUNNEL,
			      FLEX,
-			      QUEUE)),
+			      QUEUE,
+			      PUSH)),
		.call = parse_init,
	},
	/* Top-level command. */
@@ -2782,6 +2790,21 @@ static const struct token token_list[] = {
		.call = parse_qo_destroy,
	},
	/* Top-level command. */
+	[PUSH] = {
+		.name = "push",
+		.help = "push enqueued operations",
+		.next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_push,
+	},
+	/* Sub-level commands.
*/ + [PUSH_QUEUE] = { + .name = "queue", + .help = "specify queue id", + .next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8413,6 +8436,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token, } } +/** Parse tokens for push queue command. */ +static int +parse_push(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != PUSH) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + } + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9784,6 +9835,9 @@ cmd_flow_parsed(const struct buffer *in) in->args.destroy.rule_n, in->args.destroy.rule); break; + case PUSH: + port_queue_flow_push(in->port, in->queue); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index e8ae16a044..24660c01dd 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2618,6 +2618,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, return ret; } +/** Push all the queue operations in the queue to the NIC. 
*/ +int +port_queue_flow_push(portid_t port_id, queueid_t queue_id) +{ + struct rte_port *port; + struct rte_flow_error error; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + memset(&error, 0x55, sizeof(error)); + ret = rte_flow_q_push(port_id, queue_id, &error); + if (ret < 0) { + printf("Failed to push operations in the queue\n"); + return -EINVAL; + } + printf("Queue #%u operations pushed\n", queue_id); + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index d0e1e3eeec..03f135ff46 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -939,6 +939,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions); int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, bool postpone, uint32_t n, const uint32_t *rule); +int port_queue_flow_push(portid_t port_id, queueid_t queue_id); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index b802288c66..01e5e3c19f 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3396,6 +3396,10 @@ following sections. flow queue {port_id} destroy {queue_id} [postpone {boolean}] rule {rule_id} [...] +- Push enqueued operations:: + + flow push {port_id} queue {queue_id} + - Create a flow rule:: flow create {port_id} @@ -3611,6 +3615,23 @@ The usual error message is shown when a table cannot be destroyed:: Caught error type [...] ([...]): [...] 
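``port_queue_flow_push()`` above poisons the error struct (``memset(&error, 0x55, sizeof(error))``) before calling into the driver, so a PMD that reports failure without filling in the error is easy to spot. A self-contained sketch of that defensive pattern — the ``flow_error`` type and both functions are hypothetical stand-ins, not the ``rte_flow_error`` API:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for rte_flow_error. */
struct flow_error {
	int type;
	const char *message;
};

/* A buggy callee: reports failure but forgets to fill in the error. */
static int faulty_op(struct flow_error *err)
{
	(void)err;
	return -1;
}

/* Poison the error struct before the call; if the callee fails without
 * overwriting it, the 0x55 byte pattern is still visible afterwards. */
static int checked_call(int (*op)(struct flow_error *), struct flow_error *err)
{
	memset(err, 0x55, sizeof(*err));
	return op(err);
}
```

Seeing the poison pattern in a returned error is a strong hint that the driver, not the application, dropped the ball.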
+Pushing enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow push`` pushes all the outstanding enqueued operations
+to the underlying device immediately.
+It is bound to ``rte_flow_q_push()``::
+
+   flow push {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] operations pushed
+
+The usual error message is shown when operations cannot be pushed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Wed Feb 9 21:38:08 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 107192
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v4 09/10] app/testpmd: implement rte flow pull operations
Date: Wed, 9 Feb 2022 23:38:08 +0200
Message-ID: <20220209213809.1208269-10-akozyrev@nvidia.com>
In-Reply-To: <20220209213809.1208269-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_q_pull API.
Provide the command line interface for pulling operation results.
Usage example: flow pull 0 queue 0

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 56 +++++++++++++++-
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 127 insertions(+), 29 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 11240d6f04..26ef2ccfd4 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -95,6 +95,7 @@ enum index {
	FLEX,
	QUEUE,
	PUSH,
+	PULL,

	/* Flex arguments */
	FLEX_ITEM_INIT,
@@ -136,6 +137,9 @@ enum index {
	/* Push arguments. */
	PUSH_QUEUE,

+	/* Pull arguments. */
+	PULL_QUEUE,
+
	/* Table arguments.
*/ TABLE_CREATE, TABLE_DESTROY, @@ -2166,6 +2170,9 @@ static int parse_qo_destroy(struct context *, const struct token *, static int parse_push(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_pull(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2445,7 +2452,8 @@ static const struct token token_list[] = { TUNNEL, FLEX, QUEUE, - PUSH)), + PUSH, + PULL)), .call = parse_init, }, /* Top-level command. */ @@ -2805,6 +2813,21 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, queue)), }, /* Top-level command. */ + [PULL] = { + .name = "pull", + .help = "pull flow operations results", + .next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_pull, + }, + /* Sub-level commands. */ + [PULL_QUEUE] = { + .name = "queue", + .help = "specify queue id", + .next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8464,6 +8487,34 @@ parse_push(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for pull command. */ +static int +parse_pull(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != PULL) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + } + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9838,6 +9889,9 @@ cmd_flow_parsed(const struct buffer *in) case PUSH: port_queue_flow_push(in->port, in->queue); break; + case PULL: + port_queue_flow_pull(in->port, in->queue); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 24660c01dd..4937851c41 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2461,14 +2461,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions) { struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone }; - struct rte_flow_q_op_res comp = { 0 }; struct rte_flow *flow; struct rte_port *port; struct port_flow *pf; struct port_table *pt; uint32_t id = 0; bool found; - int ret = 0; struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL }; struct rte_flow_action_age *age = age_action_get(actions); @@ -2531,16 +2529,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, return port_flow_complain(&error); } - while (ret == 0) { - /* Poisoning to make sure PMDs update it in case of error. 
*/ - memset(&error, 0x22, sizeof(error)); - ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error); - if (ret < 0) { - printf("Failed to pull queue\n"); - return -EINVAL; - } - } - pf->next = port->flow_list; pf->id = id; pf->flow = flow; @@ -2555,7 +2543,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, bool postpone, uint32_t n, const uint32_t *rule) { struct rte_flow_q_ops_attr op_attr = { .postpone = postpone }; - struct rte_flow_q_op_res comp = { 0 }; struct rte_port *port; struct port_flow **tmp; uint32_t c = 0; @@ -2591,21 +2578,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, ret = port_flow_complain(&error); continue; } - - while (ret == 0) { - /* - * Poisoning to make sure PMD - * update it in case of error. - */ - memset(&error, 0x44, sizeof(error)); - ret = rte_flow_q_pull(port_id, queue_id, - &comp, 1, &error); - if (ret < 0) { - printf("Failed to pull queue\n"); - return -EINVAL; - } - } - printf("Flow rule #%u destruction enqueued\n", pf->id); *tmp = pf->next; free(pf); @@ -2646,6 +2618,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id) return ret; } +/** Pull queue operation results from the queue. 
 */
+int
+port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_q_op_res *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_q_op_res));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_q_pull(port_id, queue_id, res,
+			      port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to pull operation results\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
+/** Create flow rule.
 */
 int
 port_flow_create(portid_t port_id,

diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 03f135ff46..6fe829edab 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
		       const struct rte_flow_attr *attr,
		       const struct rte_flow_item *pattern,

diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 01e5e3c19f..d5d9125d50 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3400,6 +3400,10 @@ following sections.

    flow push {port_id} queue {queue_id}

+- Pull all operation results from a queue::
+
+   flow pull {port_id} queue {queue_id}
+
 - Create a flow rule::

    flow create {port_id}
@@ -3632,6 +3636,23 @@ The usual error message is shown when operations cannot be pushed::

    Caught error type [...] ([...]): [...]

+Pulling flow operation results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pull`` asks the underlying device for flow queue operation results
+and returns all the processed (successfully or not) operations.
+It is bound to ``rte_flow_q_pull()``::
+
+   flow pull {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] pulled #[...] operations (#[...] failed, #[...] succeeded)
+
+The usual error message is shown when operation results cannot be pulled::
+
+   Caught error type [...] ([...]): [...]
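``port_queue_flow_pull()`` tallies the pulled results to print the failed/succeeded counts. That counting step in isolation, as a self-contained sketch — the ``op_status``/``op_res`` types are illustrative stand-ins for ``rte_flow_q_op_res``, not the real definitions:

```c
#include <assert.h>

/* Illustrative stand-ins for rte_flow_q_op_res / RTE_FLOW_Q_OP_SUCCESS. */
enum op_status { OP_SUCCESS, OP_ERROR };

struct op_res {
	enum op_status status;
};

/* Count successful operations among n pulled results; the failure count
 * is then simply n minus this value. */
static int count_success(const struct op_res *res, int n)
{
	int ok = 0;
	for (int i = 0; i < n; i++)
		if (res[i].status == OP_SUCCESS)
			ok++;
	return ok;
}
```

This mirrors the ``Queue #N pulled R operations (F failed, S succeeded)`` summary line printed by the patch.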
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -3762,6 +3783,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.

+``flow pull`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^

@@ -4496,6 +4519,8 @@ message is shown when a rule cannot be destroyed::

    Caught error type [...] ([...]): [...]

+``flow pull`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~

From patchwork Wed Feb 9 21:38:09 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 107193
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v4 10/10] app/testpmd: implement rte flow queue indirect actions
Date: Wed, 9 Feb 2022 23:38:09 +0200
Message-ID: <20220209213809.1208269-11-akozyrev@nvidia.com>
In-Reply-To: <20220209213809.1208269-1-akozyrev@nvidia.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Add testpmd support for the rte_flow_q_action_handle API.
Provide the command line interface for enqueueing indirect
action create, update and destroy operations. Usage example:

  flow queue 0 indirect_action 0 create action_id 9
      ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
      action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 26ef2ccfd4..b9edb1d482 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -121,6 +121,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
 	QUEUE_CREATE_ID,
@@ -134,6 +135,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+
 	/* Push arguments. */
 	PUSH_QUEUE,
@@ -1102,6 +1123,7 @@ static const enum index next_table_destroy_attr[] = {
 
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
@@ -1111,6 +1133,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2167,6 +2219,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
@@ -2744,6 +2802,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue arguments. */
 	[QUEUE_TEMPLATE_TABLE] = {
 		.name = "template table",
@@ -2797,6 +2862,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments. */
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command. */
 	[PUSH] = {
 		.name = "push",
@@ -6209,6 +6358,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+				+ out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -9892,6 +10145,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->postpone,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					in->queue, in->postpone,
+					in->args.ia_destroy.action_id_n,
+					in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+					in->queue, in->postpone,
+					in->args.vc.attr.group,
+					in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 4937851c41..e69dd2feff 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2590,6 +2590,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool postpone, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone };
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_q_action_handle_create(port_id, queue_id, &attr,
+						      conf, action, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 postpone, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation. */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool postpone,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone };
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_q_action_handle_destroy(port_id, queue_id,
+						&attr, pia->handle, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation. */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool postpone, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone };
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (rte_flow_q_action_handle_update(port_id, queue_id, &attr,
+					    action_handle, action, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Push all the queue operations in the queue to the NIC. */
 int
 port_queue_flow_push(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 6fe829edab..167f1741dc 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -939,6 +939,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_indir_action_conf *conf,
+				    const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool postpone,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index d5d9125d50..65ecef754e 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4780,6 +4780,31 @@ port 0::
 
    testpmd> flow indirect_action 0 create action_id \
       ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` adds a creation operation for an
+indirect action to a queue. It is bound to
+``rte_flow_q_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+       [postpone {boolean}] action_id {indirect_action_id}
+       [ingress] [egress] [transfer] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -4809,6 +4834,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` adds an update operation for an
+indirect action to a queue. It is bound to
+``rte_flow_q_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -4832,6 +4876,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` adds an operation to a queue that
+destroys one or more indirect actions by their indirect action IDs (as
+returned by ``flow queue {port_id} indirect_action {queue_id} create``).
+It is bound to ``rte_flow_q_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~