From patchwork Thu Jun 25 16:03:48 2020
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 72200
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gregory Etelson
To: dev@dpdk.org
Cc: getelson@mellanox.com, matan@mellanox.com, rasland@mellanox.com,
 Eli Britstein, Ori Kam
Date: Thu, 25 Jun 2020 19:03:48 +0300
Message-Id: <20200625160348.26220-3-getelson@mellanox.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200625160348.26220-1-getelson@mellanox.com>
References: <20200625160348.26220-1-getelson@mellanox.com>
Subject: [dpdk-dev] [PATCH 2/2] ethdev: tunnel offload model
List-Id: DPDK patches and discussions

From: Eli Britstein

Hardware vendors implement tunneled traffic offload techniques
differently. Although the RTE flow API provides tools capable of
offloading all sorts of network stacks, a software application must
account for these hardware differences when compiling flow rules. As a
result, tunneled traffic flow rules that utilize hardware capabilities
can differ for the same traffic.

The tunnel port offload proposed in [1] provides software applications
with a unified rules model for tunneled traffic, regardless of the
underlying hardware.

- The model introduces the concept of a virtual tunnel port (VTP).
- The model uses a VTP to offload ingress tunneled network traffic
  with RTE flow rules.
- The model is implemented as a set of helper functions. Each PMD
  implements VTP offload according to its underlying hardware offload
  capabilities. Applications must query the PMD for VTP flow
  items / actions before using them in the creation of a VTP flow rule.

The model components:

- Virtual Tunnel Port (VTP) is a stateless software object that
  describes tunneled network traffic.
  A VTP object usually contains descriptions of the outer headers,
  tunnel headers and inner headers.
- Tunnel Steering flow Rule (TSR) detects tunneled packets and
  delegates them to the tunnel processing infrastructure, implemented
  in the PMD for optimal hardware utilization, for further processing.
- Tunnel Matching flow Rule (TMR) verifies the packet configuration and
  runs offload actions in case of a match.

Application actions:

1 Initialize a VTP object according to the tunnel network parameters.

2 Create a TSR flow rule:

2.1 Query the PMD for VTP actions: the application can query for VTP
    actions more than once.

    int
    rte_flow_tunnel_decap_set(uint16_t port_id,
                              struct rte_flow_tunnel *tunnel,
                              struct rte_flow_action **pmd_actions,
                              uint32_t *num_of_pmd_actions,
                              struct rte_flow_error *error);

2.2 Integrate the PMD actions into the TSR actions list.

2.3 Create the TSR flow rule:

    flow create group 0 match {tunnel items} / end actions {PMD actions} / {App actions} / end

3 Create a TMR flow rule:

3.1 Query the PMD for VTP items: the application can query for VTP
    items more than once.

    int
    rte_flow_tunnel_match(uint16_t port_id,
                          struct rte_flow_tunnel *tunnel,
                          struct rte_flow_item **pmd_items,
                          uint32_t *num_of_pmd_items,
                          struct rte_flow_error *error);

3.2 Integrate the PMD items into the TMR items list.

3.3 Create the TMR flow rule:

    flow create group 0 match {PMD items} / {APP items} / end actions {offload actions} / end

The model provides a helper function call to restore packets that miss
tunnel TMR rules to their original state:

    int
    rte_flow_tunnel_get_restore_info(uint16_t port_id,
                                     struct rte_mbuf *mbuf,
                                     struct rte_flow_restore_info *info,
                                     struct rte_flow_error *error);

The rte_flow_tunnel object filled by the call inside the
rte_flow_restore_info *info parameter can be used by the application to
create a new TMR rule for that tunnel.

The model requirements:

A software application must initialize the rte_flow_tunnel object with
the tunnel parameters before calling rte_flow_tunnel_decap_set() &
rte_flow_tunnel_match().
The PMD actions array obtained from rte_flow_tunnel_decap_set() must be
released by the application with a
rte_flow_tunnel_action_decap_release() call. The application can
release the actions after the TSR rule was created.

The PMD items array obtained with rte_flow_tunnel_match() must be
released by the application with a rte_flow_tunnel_item_release() call.
The application can release the items after the rule was created.
However, if the application needs to create an additional TMR rule for
the same tunnel, it will need to obtain the PMD items again.

The application cannot destroy the rte_flow_tunnel object before it
releases all the PMD actions & PMD items referencing that tunnel.

[1] https://mails.dpdk.org/archives/dev/2020-June/169656.html

Signed-off-by: Eli Britstein
Acked-by: Ori Kam
---
 doc/guides/prog_guide/rte_flow.rst       | 105 ++++++++++++
 lib/librte_ethdev/rte_ethdev_version.map |   5 +
 lib/librte_ethdev/rte_flow.c             | 112 +++++++++++++
 lib/librte_ethdev/rte_flow.h             | 196 +++++++++++++++++++++++
 lib/librte_ethdev/rte_flow_driver.h      |  32 ++++
 5 files changed, 450 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index d5dd18ce99..cfd98c2e7d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3010,6 +3010,111 @@ operations include:
 - Duplication of a complete flow rule description.
 - Pattern item or action name retrieval.
 
+Tunneled traffic offload
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Provide the software application with a unified rules model for
+tunneled traffic regardless of the underlying hardware.
+
+- The model introduces the concept of a virtual tunnel port (VTP).
+- The model uses a VTP to offload ingress tunneled network traffic
+  with RTE flow rules.
+- The model is implemented as a set of helper functions. Each PMD
+  implements VTP offload according to its underlying hardware offload
+  capabilities. Applications must query the PMD for VTP flow
+  items / actions before using them in the creation of a VTP flow rule.
+
+The model components:
+
+- Virtual Tunnel Port (VTP) is a stateless software object that
+  describes tunneled network traffic. A VTP object usually contains
+  descriptions of the outer headers, tunnel headers and inner headers.
+- Tunnel Steering flow Rule (TSR) detects tunneled packets and
+  delegates them to the tunnel processing infrastructure, implemented
+  in the PMD for optimal hardware utilization, for further processing.
+- Tunnel Matching flow Rule (TMR) verifies the packet configuration and
+  runs offload actions in case of a match.
+
+Application actions:
+
+1 Initialize a VTP object according to the tunnel network parameters.
+
+2 Create a TSR flow rule.
+
+2.1 Query the PMD for VTP actions. The application can query for VTP
+    actions more than once.
+
+  .. code-block:: c
+
+    int
+    rte_flow_tunnel_decap_set(uint16_t port_id,
+                              struct rte_flow_tunnel *tunnel,
+                              struct rte_flow_action **pmd_actions,
+                              uint32_t *num_of_pmd_actions,
+                              struct rte_flow_error *error);
+
+2.2 Integrate the PMD actions into the TSR actions list.
+
+2.3 Create the TSR flow rule.
+
+  .. code-block:: console
+
+    flow create group 0 match {tunnel items} / end actions {PMD actions} / {App actions} / end
+
+3 Create a TMR flow rule.
+
+3.1 Query the PMD for VTP items. The application can query for VTP
+    items more than once.
+
+  .. code-block:: c
+
+    int
+    rte_flow_tunnel_match(uint16_t port_id,
+                          struct rte_flow_tunnel *tunnel,
+                          struct rte_flow_item **pmd_items,
+                          uint32_t *num_of_pmd_items,
+                          struct rte_flow_error *error);
+
+3.2 Integrate the PMD items into the TMR items list.
+
+3.3 Create the TMR flow rule.
+
+  .. code-block:: console
+
+    flow create group 0 match {PMD items} / {APP items} / end actions {offload actions} / end
+
+The model provides a helper function call to restore packets that miss
+tunnel TMR rules to their original state:
+
+.. code-block:: c
+
+  int
+  rte_flow_tunnel_get_restore_info(uint16_t port_id,
+                                   struct rte_mbuf *mbuf,
+                                   struct rte_flow_restore_info *info,
+                                   struct rte_flow_error *error);
+
+The ``rte_flow_tunnel`` object filled by the call inside the
+``rte_flow_restore_info *info`` parameter can be used by the
+application to create a new TMR rule for that tunnel.
+
+The model requirements:
+
+A software application must initialize the ``rte_flow_tunnel`` object
+with the tunnel parameters before calling
+``rte_flow_tunnel_decap_set()`` & ``rte_flow_tunnel_match()``.
+
+The PMD actions array obtained from ``rte_flow_tunnel_decap_set()``
+must be released by the application with a
+``rte_flow_tunnel_action_decap_release()`` call. The application can
+release the actions after the TSR rule was created.
+
+The PMD items array obtained with ``rte_flow_tunnel_match()`` must be
+released by the application with a ``rte_flow_tunnel_item_release()``
+call. The application can release the items after the rule was created.
+However, if the application needs to create an additional TMR rule for
+the same tunnel, it will need to obtain the PMD items again.
+
+The application cannot destroy the ``rte_flow_tunnel`` object before it
+releases all the PMD actions & PMD items referencing that tunnel.
+
 Caveats
 -------
 
diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
index 7155056045..63800811df 100644
--- a/lib/librte_ethdev/rte_ethdev_version.map
+++ b/lib/librte_ethdev/rte_ethdev_version.map
@@ -241,4 +241,9 @@ EXPERIMENTAL {
 	__rte_ethdev_trace_rx_burst;
 	__rte_ethdev_trace_tx_burst;
 	rte_flow_get_aged_flows;
+	rte_flow_tunnel_decap_set;
+	rte_flow_tunnel_match;
+	rte_flow_tunnel_get_restore_info;
+	rte_flow_tunnel_action_decap_release;
+	rte_flow_tunnel_item_release;
 };
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index c19d25649f..2dc5bfbb3f 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -1268,3 +1268,115 @@ rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+int
+rte_flow_tunnel_decap_set(uint16_t port_id,
+			  struct rte_flow_tunnel *tunnel,
+			  struct rte_flow_action **actions,
+			  uint32_t *num_of_actions,
+			  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->tunnel_decap_set)) {
+		return flow_err(port_id,
+				ops->tunnel_decap_set(dev, tunnel, actions,
+						      num_of_actions, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_tunnel_match(uint16_t port_id,
+		      struct rte_flow_tunnel *tunnel,
+		      struct rte_flow_item **items,
+		      uint32_t *num_of_items,
+		      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->tunnel_match)) {
+		return flow_err(port_id,
+				ops->tunnel_match(dev, tunnel, items,
+						  num_of_items, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_tunnel_get_restore_info(uint16_t port_id,
+				 struct rte_mbuf *m,
+				 struct rte_flow_restore_info *restore_info,
+				 struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->get_restore_info)) {
+		return flow_err(port_id,
+				ops->get_restore_info(dev, m, restore_info,
+						      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_tunnel_action_decap_release(uint16_t port_id,
+				     struct rte_flow_action *actions,
+				     uint32_t num_of_actions,
+				     struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->action_release)) {
+		return flow_err(port_id,
+				ops->action_release(dev, actions,
+						    num_of_actions, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_tunnel_item_release(uint16_t port_id,
+			     struct rte_flow_item *items,
+			     uint32_t num_of_items,
+			     struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->item_release)) {
+		return flow_err(port_id,
+				ops->item_release(dev, items,
+						  num_of_items, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index b0e4199192..1374b6e5a7 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -3324,6 +3324,202 @@ int
 rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
 			uint32_t nb_contexts, struct rte_flow_error *error);
 
+/* Tunnel information. */
+__rte_experimental
+struct rte_flow_ip_tunnel_key {
+	rte_be64_t tun_id; /**< Tunnel identification. */
+	union {
+		struct {
+			rte_be32_t src_addr; /**< IPv4 source address. */
+			rte_be32_t dst_addr; /**< IPv4 destination address. */
+		} ipv4;
+		struct {
+			uint8_t src_addr[16]; /**< IPv6 source address. */
+			uint8_t dst_addr[16]; /**< IPv6 destination address. */
+		} ipv6;
+	} u;
+	bool is_ipv6; /**< True for valid IPv6 fields. Otherwise IPv4. */
+	rte_be16_t tun_flags; /**< Tunnel flags. */
+	uint8_t tos; /**< TOS for IPv4, TC for IPv6. */
+	uint8_t ttl; /**< TTL for IPv4, HL for IPv6. */
+	rte_be32_t label; /**< Flow Label for IPv6. */
+	rte_be16_t tp_src; /**< Tunnel port source. */
+	rte_be16_t tp_dst; /**< Tunnel port destination. */
+};
+
+
+/* Tunnel has a type and the key information. */
+__rte_experimental
+struct rte_flow_tunnel {
+	/**
+	 * Tunnel type, for example RTE_FLOW_ITEM_TYPE_VXLAN,
+	 * RTE_FLOW_ITEM_TYPE_NVGRE etc.
+	 */
+	enum rte_flow_item_type type;
+	struct rte_flow_ip_tunnel_key tun_info; /**< Tunnel key info. */
+};
+
+/**
+ * Indicate that the packet has a tunnel.
+ */
+#define RTE_FLOW_RESTORE_INFO_TUNNEL (1ULL << 0)
+
+/**
+ * Indicate that the packet has a non decapsulated tunnel header.
+ */
+#define RTE_FLOW_RESTORE_INFO_ENCAPSULATED (1ULL << 1)
+
+/**
+ * Indicate that the packet has a group_id.
+ */
+#define RTE_FLOW_RESTORE_INFO_GROUP_ID (1ULL << 2)
+
+/**
+ * Restore information structure to communicate the current packet processing
+ * state when some of the processing pipeline is done in hardware and should
+ * continue in software.
+ */
+__rte_experimental
+struct rte_flow_restore_info {
+	/**
+	 * Bitwise flags (RTE_FLOW_RESTORE_INFO_*) to indicate validation of
+	 * other fields in struct rte_flow_restore_info.
+	 */
+	uint64_t flags;
+	uint32_t group_id; /**< Group ID. */
+	struct rte_flow_tunnel tunnel; /**< Tunnel information. */
+};
+
+/**
+ * Allocate an array of actions to be used in rte_flow_create, to implement
+ * tunnel-decap-set for the given tunnel.
+ * Sample usage:
+ *   actions vxlan_decap / tunnel-decap-set(tunnel properties) /
+ *           jump group 0 / end
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] tunnel
+ *   Tunnel properties.
+ * @param[out] actions
+ *   Array of actions to be allocated by the PMD. This array should be
+ *   concatenated with the actions array provided to rte_flow_create.
+ * @param[out] num_of_actions
+ *   Number of actions allocated.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_tunnel_decap_set(uint16_t port_id,
+			  struct rte_flow_tunnel *tunnel,
+			  struct rte_flow_action **actions,
+			  uint32_t *num_of_actions,
+			  struct rte_flow_error *error);
+
+/**
+ * Allocate an array of items to be used in rte_flow_create, to implement
+ * tunnel-match for the given tunnel.
+ * Sample usage:
+ *   pattern tunnel-match(tunnel properties) / outer-header-matches /
+ *           inner-header-matches / end
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] tunnel
+ *   Tunnel properties.
+ * @param[out] items
+ *   Array of items to be allocated by the PMD. This array should be
+ *   concatenated with the items array provided to rte_flow_create.
+ * @param[out] num_of_items
+ *   Number of items allocated.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_tunnel_match(uint16_t port_id,
+		      struct rte_flow_tunnel *tunnel,
+		      struct rte_flow_item **items,
+		      uint32_t *num_of_items,
+		      struct rte_flow_error *error);
+
+/**
+ * Populate the current packet processing state, if exists, for the given mbuf.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] m
+ *   Mbuf struct.
+ * @param[out] info
+ *   Restore information. Upon success contains the HW state.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_tunnel_get_restore_info(uint16_t port_id,
+				 struct rte_mbuf *m,
+				 struct rte_flow_restore_info *info,
+				 struct rte_flow_error *error);
+
+/**
+ * Release the action array as allocated by rte_flow_tunnel_decap_set.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] actions
+ *   Array of actions to be released.
+ * @param[in] num_of_actions
+ *   Number of elements in actions array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_tunnel_action_decap_release(uint16_t port_id,
+				     struct rte_flow_action *actions,
+				     uint32_t num_of_actions,
+				     struct rte_flow_error *error);
+
+/**
+ * Release the item array as allocated by rte_flow_tunnel_match.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] items
+ *   Array of items to be released.
+ * @param[in] num_of_items
+ *   Number of elements in item array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_tunnel_item_release(uint16_t port_id,
+			     struct rte_flow_item *items,
+			     uint32_t num_of_items,
+			     struct rte_flow_error *error);
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ethdev/rte_flow_driver.h b/lib/librte_ethdev/rte_flow_driver.h
index 881cc469b7..ad1d7a2cdc 100644
--- a/lib/librte_ethdev/rte_flow_driver.h
+++ b/lib/librte_ethdev/rte_flow_driver.h
@@ -107,6 +107,38 @@ struct rte_flow_ops {
 		 void **context,
 		 uint32_t nb_contexts,
 		 struct rte_flow_error *err);
+	/** See rte_flow_tunnel_decap_set() */
+	int (*tunnel_decap_set)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_tunnel *tunnel,
+		 struct rte_flow_action **pmd_actions,
+		 uint32_t *num_of_actions,
+		 struct rte_flow_error *err);
+	/** See rte_flow_tunnel_match() */
+	int (*tunnel_match)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_tunnel *tunnel,
+		 struct rte_flow_item **pmd_items,
+		 uint32_t *num_of_items,
+		 struct rte_flow_error *err);
+	/** See rte_flow_tunnel_get_restore_info() */
+	int (*get_restore_info)
+		(struct rte_eth_dev *dev,
+		 struct rte_mbuf *m,
+		 struct rte_flow_restore_info *info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_tunnel_action_decap_release() */
+	int (*action_release)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_action *pmd_actions,
+		 uint32_t num_of_actions,
+		 struct rte_flow_error *err);
+	/** See rte_flow_tunnel_item_release() */
+	int (*item_release)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_item *pmd_items,
+		 uint32_t num_of_items,
+		 struct rte_flow_error *err);
 };
 
 /**