From patchwork Fri Dec 14 07:45:39 2018
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 48803
From: Jeff Guo
To: bernard.iremonger@intel.com, wenzhuo.lu@intel.com, shahafs@mellanox.com, thomas@monjalon.net, matan@mellanox.com
Cc: ferruh.yigit@intel.com, konstantin.ananyev@intel.com, dev@dpdk.org, jia.guo@intel.com, stephen@networkplumber.org, gaetan.rivet@6wind.com, qi.z.zhang@intel.com, arybchenko@solarflare.com, bruce.richardson@intel.com, shaopeng.he@intel.com, anatoly.burakov@intel.com
Date: Fri, 14 Dec 2018 15:45:39 +0800
Message-Id: <1544773540-89825-3-git-send-email-jia.guo@intel.com>
In-Reply-To: <1544773540-89825-1-git-send-email-jia.guo@intel.com>
References: <1544773540-89825-1-git-send-email-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH 2/3] ethdev: remove ethdev rmv interrupt
Since the EAL device event mechanism has been introduced, applications
can monitor EAL device events and handle device hot-unplug accordingly,
so the ethdev RMV event can be replaced by the EAL device event. This
patch aims to abandon the ethdev RMV interrupt: every usage of it in
PMDs and testpmd is removed, and the common way to detect device
hotplug is used instead.

Signed-off-by: Jeff Guo
---
 app/test-pmd/parameters.c               |  2 --
 app/test-pmd/testpmd.c                  | 40 +++------------------------------
 drivers/net/failsafe/failsafe_ether.c   | 12 +++++-----
 drivers/net/failsafe/failsafe_ops.c     |  3 +--
 drivers/net/failsafe/failsafe_private.h |  6 ++---
 drivers/net/mlx4/mlx4_intr.c            |  1 -
 drivers/net/mlx5/mlx5_ethdev.c          |  7 +++---
 lib/librte_ethdev/rte_ethdev.h          |  1 -
 8 files changed, 16 insertions(+), 56 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 38b4197..7819b30 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -529,8 +529,6 @@ parse_event_printing_config(const char *optarg, int enable)
 		mask = UINT32_C(1) << RTE_ETH_EVENT_IPSEC;
 	else if (!strcmp(optarg, "macsec"))
 		mask = UINT32_C(1) << RTE_ETH_EVENT_MACSEC;
-	else if (!strcmp(optarg, "intr_rmv"))
-		mask = UINT32_C(1) << RTE_ETH_EVENT_INTR_RMV;
 	else if (!strcmp(optarg, "dev_probed"))
 		mask = UINT32_C(1) << RTE_ETH_EVENT_NEW;
 	else if (!strcmp(optarg, "dev_released"))
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4c75587..bd44b21 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -357,7 +357,6 @@ static const char * const eth_event_desc[] = {
 	[RTE_ETH_EVENT_VF_MBOX] = "VF mbox",
 	[RTE_ETH_EVENT_IPSEC] = "IPsec",
 	[RTE_ETH_EVENT_MACSEC] = "MACsec",
-	[RTE_ETH_EVENT_INTR_RMV] = "device removal",
 	[RTE_ETH_EVENT_NEW] = "device probed",
 	[RTE_ETH_EVENT_DESTROY] = "device released",
 	[RTE_ETH_EVENT_MAX] = NULL,
@@ -372,8 +371,8 @@ uint32_t event_print_mask = (UINT32_C(1) << RTE_ETH_EVENT_UNKNOWN) |
 			(UINT32_C(1) << RTE_ETH_EVENT_QUEUE_STATE) |
 			(UINT32_C(1) << RTE_ETH_EVENT_INTR_RESET) |
 			(UINT32_C(1) << RTE_ETH_EVENT_IPSEC) |
-			(UINT32_C(1) << RTE_ETH_EVENT_MACSEC) |
-			(UINT32_C(1) << RTE_ETH_EVENT_INTR_RMV);
+			(UINT32_C(1) << RTE_ETH_EVENT_MACSEC);
+
 /*
  * Decide if all memory are locked for performance.
  */
@@ -2567,13 +2566,6 @@ eth_event_callback(portid_t port_id, enum rte_eth_event_type type, void *param,
 		ports[port_id].need_setup = 1;
 		ports[port_id].port_status = RTE_PORT_HANDLING;
 		break;
-	case RTE_ETH_EVENT_INTR_RMV:
-		if (port_id_is_invalid(port_id, DISABLED_WARN))
-			break;
-		if (rte_eal_alarm_set(100000,
-				rmv_port_callback, (void *)(intptr_t)port_id))
-			fprintf(stderr, "Could not set up deferred device removal\n");
-		break;
 	default:
 		break;
 	}
@@ -2626,19 +2618,7 @@ dev_event_callback(const char *device_name, enum rte_dev_event_type type,
 				device_name);
 			return;
 		}
-		/*
-		 * Because the user's callback is invoked in eal interrupt
-		 * callback, the interrupt callback need to be finished before
-		 * it can be unregistered when detaching device. So finish
-		 * callback soon and use a deferred removal to detach device
-		 * is need. It is a workaround, once the device detaching be
-		 * moved into the eal in the future, the deferred removal could
-		 * be deleted.
-		 */
-		if (rte_eal_alarm_set(100000,
-				rmv_port_callback, (void *)(intptr_t)port_id))
-			RTE_LOG(ERR, EAL,
-				"Could not set up deferred device removal\n");
+		rmv_port_callback((void *)(intptr_t)port_id);
 		break;
 	case RTE_DEV_EVENT_ADD:
 		RTE_LOG(ERR, EAL, "The device: %s has been added!\n",
@@ -3170,20 +3150,6 @@ main(int argc, char** argv)
 	init_config();
 	if (hot_plug) {
-		ret = rte_dev_hotplug_handle_enable();
-		if (ret) {
-			RTE_LOG(ERR, EAL,
-				"fail to enable hotplug handling.");
-			return -1;
-		}
-
-		ret = rte_dev_event_monitor_start();
-		if (ret) {
-			RTE_LOG(ERR, EAL,
-				"fail to start device event monitoring.");
-			return -1;
-		}
-
 		ret = rte_dev_event_callback_register(NULL,
 			dev_event_callback, NULL);
 		if (ret) {
diff --git a/drivers/net/failsafe/failsafe_ether.c b/drivers/net/failsafe/failsafe_ether.c
index 1783165..e3ddbfa 100644
--- a/drivers/net/failsafe/failsafe_ether.c
+++ b/drivers/net/failsafe/failsafe_ether.c
@@ -345,8 +345,7 @@ failsafe_eth_dev_unregister_callbacks(struct sub_device *sdev)
 	if (sdev == NULL)
 		return;
 	if (sdev->rmv_callback) {
-		ret = rte_eth_dev_callback_unregister(PORT_ID(sdev),
-						RTE_ETH_EVENT_INTR_RMV,
+		ret = rte_dev_event_callback_unregister(sdev->dev->name,
 						failsafe_eth_rmv_event_callback,
 						sdev);
 		if (ret)
@@ -559,10 +558,10 @@ failsafe_stats_increment(struct rte_eth_stats *to, struct rte_eth_stats *from)
 	}
 }

-int
-failsafe_eth_rmv_event_callback(uint16_t port_id __rte_unused,
-				enum rte_eth_event_type event __rte_unused,
-				void *cb_arg, void *out __rte_unused)
+void
+failsafe_eth_rmv_event_callback(const char *device_name __rte_unused,
+				enum rte_dev_event_type event __rte_unused,
+				void *cb_arg)
 {
 	struct sub_device *sdev = cb_arg;

@@ -577,7 +576,6 @@ failsafe_eth_rmv_event_callback(uint16_t port_id __rte_unused,
 	 */
 	sdev->remove = 1;
 	fs_unlock(sdev->fs_dev, 0);
-	return 0;
 }

 int
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 7f8bcd4..7868f42 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -140,8 +140,7 @@ fs_dev_configure(struct rte_eth_dev *dev)
 			return ret;
 		}
 		if (rmv_interrupt && sdev->rmv_callback == 0) {
-			ret = rte_eth_dev_callback_register(PORT_ID(sdev),
-					RTE_ETH_EVENT_INTR_RMV,
+			ret = rte_dev_event_callback_register(sdev->dev->name,
 					failsafe_eth_rmv_event_callback,
 					sdev);
 			if (ret)
diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h
index 7e31896..163bb98 100644
--- a/drivers/net/failsafe/failsafe_private.h
+++ b/drivers/net/failsafe/failsafe_private.h
@@ -224,9 +224,9 @@ void failsafe_eth_dev_unregister_callbacks(struct sub_device *sdev);
 void failsafe_dev_remove(struct rte_eth_dev *dev);
 void failsafe_stats_increment(struct rte_eth_stats *to,
 		struct rte_eth_stats *from);
-int failsafe_eth_rmv_event_callback(uint16_t port_id,
-				enum rte_eth_event_type type,
-				void *arg, void *out);
+void failsafe_eth_rmv_event_callback(const char *device_name,
+				enum rte_dev_event_type event,
+				void *cb_arg);
 int failsafe_eth_lsc_event_callback(uint16_t port_id,
 				enum rte_eth_event_type event,
 				void *cb_arg, void *out);
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index eeb982a..401cc84 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -180,7 +180,6 @@ mlx4_interrupt_handler(struct priv *priv)
 	enum { LSC, RMV, };
 	static const enum rte_eth_event_type type[] = {
 		[LSC] = RTE_ETH_EVENT_INTR_LSC,
-		[RMV] = RTE_ETH_EVENT_INTR_RMV,
 	};
 	uint32_t caught[RTE_DIM(type)] = { 0 };
 	struct ibv_async_event event;
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index d178ed6..7d1194f 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -1033,7 +1033,7 @@ mlx5_dev_status_handler(struct rte_eth_dev *dev)
 			ret |= (1 << RTE_ETH_EVENT_INTR_LSC);
 		else if (event.event_type == IBV_EVENT_DEVICE_FATAL &&
 			dev->data->dev_conf.intr_conf.rmv == 1)
-			ret |= (1 << RTE_ETH_EVENT_INTR_RMV);
+			ret |= (1 << RTE_DEV_EVENT_REMOVE);
 		else
 			DRV_LOG(DEBUG,
 				"port %u event type %d on not handled",
@@ -1060,8 +1060,9 @@ mlx5_dev_interrupt_handler(void *cb_arg)
 	events = mlx5_dev_status_handler(dev);
 	if (events & (1 << RTE_ETH_EVENT_INTR_LSC))
 		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
-	if (events & (1 << RTE_ETH_EVENT_INTR_RMV))
-		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RMV, NULL);
+	if (events & (1 << RTE_DEV_EVENT_REMOVE))
+		rte_dev_event_callback_process(dev->device->name,
+					       RTE_DEV_EVENT_REMOVE);
 }

 /**
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 1960f3a..6d54743 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -2619,7 +2619,6 @@ enum rte_eth_event_type {
 			/**< reset interrupt event, sent to VF on PF reset */
 	RTE_ETH_EVENT_VF_MBOX,  /**< message from the VF received by PF */
 	RTE_ETH_EVENT_MACSEC,   /**< MACsec offload related event */
-	RTE_ETH_EVENT_INTR_RMV, /**< device removal event */
 	RTE_ETH_EVENT_NEW,      /**< port is probed */
 	RTE_ETH_EVENT_DESTROY,  /**< port is released */
 	RTE_ETH_EVENT_IPSEC,    /**< IPsec offload related event */

From patchwork Tue Jul 13 13:17:13 2021
X-Patchwork-Submitter: Thomas Monjalon
X-Patchwork-Id: 95791
From: Thomas Monjalon
To: dev@dpdk.org
Cc: stable@dpdk.org, Ferruh Yigit, Andrew Rybchenko, Matan Azrad
Date: Tue, 13 Jul 2021 15:17:13 +0200
Message-Id: <20210713131714.964500-1-thomas@monjalon.net>
Subject: [dpdk-dev] [PATCH] ethdev: avoid unregistering a non-allocated callback

When registering a new event callback, if the allocation fails,
there is no need to unregister the callback, because it was never
registered in the first place.

Fixes: 9ec0b3869d8d ("ethdev: allow event registration for all ports")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon
Reviewed-by: Andrew Rybchenko
---
 lib/ethdev/rte_ethdev.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..1731854628 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4649,8 +4649,6 @@ rte_eth_dev_callback_register(uint16_t port_id,
 					user_cb, next);
 		} else {
 			rte_spinlock_unlock(&eth_dev_cb_lock);
-			rte_eth_dev_callback_unregister(port_id, event,
-							cb_fn, cb_arg);
 			return -ENOMEM;
 		}

From patchwork Thu Sep 16 02:56:36 2021
X-Patchwork-Submitter: "humin (Q)"
X-Patchwork-Id: 98978
From: "Min Hu (Connor)"
Date: Thu, 16 Sep 2021 10:56:36 +0800
Message-ID: <20210916025636.48024-1-humin29@huawei.com>
In-Reply-To: <29b75903-d212-c6e6-eedf-e3bc92ab816a@huawei.com>
References: <29b75903-d212-c6e6-eedf-e3bc92ab816a@huawei.com>
Subject: [dpdk-dev] [RFC] ethdev: improve link speed to string

Currently, converting a link speed to a string only supports specific
speeds, such as 10M, 100M and 1G. This patch adds support for any link
speed of 1 Mbps or above; at most one decimal place is kept for display.
Signed-off-by: Min Hu (Connor)
---
 lib/ethdev/rte_ethdev.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca9242..1d3b960305 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -2750,24 +2750,24 @@ rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link)
 const char *
 rte_eth_link_speed_to_str(uint32_t link_speed)
 {
-	switch (link_speed) {
-	case ETH_SPEED_NUM_NONE: return "None";
-	case ETH_SPEED_NUM_10M:  return "10 Mbps";
-	case ETH_SPEED_NUM_100M: return "100 Mbps";
-	case ETH_SPEED_NUM_1G:   return "1 Gbps";
-	case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
-	case ETH_SPEED_NUM_5G:   return "5 Gbps";
-	case ETH_SPEED_NUM_10G:  return "10 Gbps";
-	case ETH_SPEED_NUM_20G:  return "20 Gbps";
-	case ETH_SPEED_NUM_25G:  return "25 Gbps";
-	case ETH_SPEED_NUM_40G:  return "40 Gbps";
-	case ETH_SPEED_NUM_50G:  return "50 Gbps";
-	case ETH_SPEED_NUM_56G:  return "56 Gbps";
-	case ETH_SPEED_NUM_100G: return "100 Gbps";
-	case ETH_SPEED_NUM_200G: return "200 Gbps";
-	case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
-	default: return "Invalid";
+#define SPEED_STRING_LEN 16
+	static char name[SPEED_STRING_LEN];
+
+	if (link_speed == ETH_SPEED_NUM_NONE)
+		return "None";
+	if (link_speed == ETH_SPEED_NUM_UNKNOWN)
+		return "Unknown";
+	if (link_speed < ETH_SPEED_NUM_1G) {
+		snprintf(name, sizeof(name), "%u Mbps", link_speed);
+	} else if (link_speed % ETH_SPEED_NUM_1G != 0) {
+		snprintf(name, sizeof(name), "%.1f Gbps",
+			 (double)link_speed / ETH_SPEED_NUM_1G);
+	} else {
+		snprintf(name, sizeof(name), "%u Gbps",
+			 link_speed / ETH_SPEED_NUM_1G);
 	}
+
+	return (const char *)name;
 }

 int

From patchwork Fri Oct 1 09:07:23 2021
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 100208
From: Andrew Rybchenko
To: Igor Russkikh, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin, Igor Chauskin, Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, "Min Hu (Connor)", Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu, Shijith Thotton, Srisivasubramanian Srinivasan, Matan Azrad, Viacheslav Ovsiienko, Liron Himi, Stephen Hemminger, Long Li, Jerin Jacob, Devendra Singh Rawat, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang, Thomas Monjalon, Ferruh Yigit
Cc: dev@dpdk.org
Date: Fri, 1 Oct 2021 12:07:23 +0300
Message-Id: <20211001090723.1414911-5-andrew.rybchenko@oktetlabs.ru>
In-Reply-To: <20211001090723.1414911-1-andrew.rybchenko@oktetlabs.ru>
References: <20210604144225.287678-1-andrew.rybchenko@oktetlabs.ru> <20211001090723.1414911-1-andrew.rybchenko@oktetlabs.ru>
Subject: [dpdk-dev] [PATCH v9 5/5] ethdev: merge driver ops to get all xstats names and by ID

All xstats names may be retrieved by passing NULL ids. If a driver
does not support getting names by ID, its callback should return
-ENOTSUP when IDs are passed. In that case, the request is handled
at the ethdev layer by getting all names and filtering out only the
requested ones.

Signed-off-by: Andrew Rybchenko
---
v9:
    - document -ENOTSUP in callback
    - simplify documentation about supported cases

v8:
    - handle -ENOTSUP on ethdev to fallback to by-IDs handling in ethdev

 doc/guides/nics/features.rst            |  2 +-
 drivers/net/atlantic/atl_ethdev.c       |  5 ++
 drivers/net/axgbe/axgbe_ethdev.c        | 30 ++++-----
 drivers/net/bnx2x/bnx2x_ethdev.c        |  4 ++
 drivers/net/bnxt/bnxt_stats.c           |  4 ++
 drivers/net/bnxt/bnxt_stats.h           |  1 +
 drivers/net/cnxk/cnxk_ethdev.c          |  1 -
 drivers/net/cnxk/cnxk_ethdev.h          |  5 +-
 drivers/net/cnxk/cnxk_stats.c           | 18 +++---
 drivers/net/cxgbe/cxgbe_ethdev.c        | 17 ++---
 drivers/net/dpaa/dpaa_ethdev.c          | 13 ++--
 drivers/net/dpaa2/dpaa2_ethdev.c        | 13 ++--
 drivers/net/e1000/igb_ethdev.c          | 36 +++--------
 drivers/net/ena/ena_ethdev.c            |  6 +-
 drivers/net/failsafe/failsafe_ops.c     |  5 +-
 drivers/net/fm10k/fm10k_ethdev.c        |  4 ++
 drivers/net/hinic/hinic_pmd_ethdev.c    |  4 ++
 drivers/net/hns3/hns3_ethdev.c          |  1 -
 drivers/net/hns3/hns3_ethdev_vf.c       |  1 -
 drivers/net/hns3/hns3_stats.c           | 22 +++----
 drivers/net/hns3/hns3_stats.h           | 10 +--
 drivers/net/i40e/i40e_ethdev.c          |  5 ++
 drivers/net/iavf/iavf_ethdev.c          | 13 ++--
 drivers/net/ice/ice_ethdev.c            |  5 ++
 drivers/net/igc/igc_ethdev.c            | 10 +--
 drivers/net/ionic/ionic_ethdev.c        | 25 +-------
 drivers/net/ipn3ke/ipn3ke_representor.c |  5 +-
 drivers/net/ixgbe/ixgbe_ethdev.c        | 71 +++------------------
 drivers/net/liquidio/lio_ethdev.c       |  4 ++
 drivers/net/mlx5/mlx5.h                 |  1 +
 drivers/net/mlx5/mlx5_stats.c           |  7 ++-
 drivers/net/mvpp2/mrvl_ethdev.c         |  4 ++
 drivers/net/netvsc/hn_ethdev.c          |  5 +-
 drivers/net/octeontx2/otx2_ethdev.c     |  1 -
 drivers/net/octeontx2/otx2_ethdev.h     |  5 +-
 drivers/net/octeontx2/otx2_stats.c      | 14 ++---
 drivers/net/qede/qede_ethdev.c          |  4 ++
 drivers/net/sfc/sfc_ethdev.c            | 84 ++++++++++++-------------
 drivers/net/txgbe/txgbe_ethdev.c        |  7 +--
 drivers/net/txgbe/txgbe_ethdev_vf.c     |  4 ++
 drivers/net/vhost/rte_eth_vhost.c       |  4 ++
 drivers/net/virtio/virtio_ethdev.c      |  5 ++
 drivers/net/vmxnet3/vmxnet3_ethdev.c    |  5 ++
 lib/ethdev/ethdev_driver.h              | 21 ++++---
 lib/ethdev/rte_ethdev.c                 | 17 +++--
 45 files changed, 245 insertions(+), 283 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4fce8cd1c9..8fef2939fb 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -708,7 +708,7 @@ Extended stats
 Supports Extended Statistics, changes from driver to driver.

 * **[implements] eth_dev_ops**: ``xstats_get``, ``xstats_reset``, ``xstats_get_names``.
-* **[implements] eth_dev_ops**: ``xstats_get_by_id``, ``xstats_get_names_by_id``.
+* **[implements] eth_dev_ops**: ``xstats_get_by_id``.
 * **[related] API**: ``rte_eth_xstats_get()``, ``rte_eth_xstats_reset()``,
   ``rte_eth_xstats_get_names``, ``rte_eth_xstats_get_by_id()``,
   ``rte_eth_xstats_get_names_by_id()``, ``rte_eth_xstats_get_id_by_name()``.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519..f55d18ae9a 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -29,6 +29,7 @@ static int atl_dev_allmulticast_disable(struct rte_eth_dev *dev);
 static int atl_dev_link_update(struct rte_eth_dev *dev, int wait);

 static int atl_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+				    const uint64_t *ids,
 				    struct rte_eth_xstat_name *xstats_names,
 				    unsigned int size);

@@ -1003,12 +1004,16 @@ atl_dev_xstats_get_count(struct rte_eth_dev *dev)

 static int
 atl_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+			 const uint64_t *ids,
 			 struct rte_eth_xstat_name *xstats_names,
 			 unsigned int size)
 {
 	unsigned int i;
 	unsigned int count = atl_dev_xstats_get_count(dev);

+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (xstats_names) {
 		for (i = 0; i < size && i < count; i++) {
 			snprintf(xstats_names[i].name,
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index ebd5411fdd..6a55a211df 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -47,19 +47,14 @@ static int axgbe_dev_xstats_get(struct rte_eth_dev *dev,
 		struct rte_eth_xstat *stats, unsigned int n);
 static int
-axgbe_dev_xstats_get_names(struct rte_eth_dev *dev,
-			   struct rte_eth_xstat_name *xstats_names,
-			   unsigned int size);
-static int
 axgbe_dev_xstats_get_by_id(struct rte_eth_dev *dev,
 			   const uint64_t *ids,
 			   uint64_t *values,
 			   unsigned int n);
 static int
-axgbe_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
-				 const uint64_t *ids,
-				 struct rte_eth_xstat_name *xstats_names,
-				 unsigned int size);
+axgbe_dev_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids,
+			   struct rte_eth_xstat_name *xstats_names,
+			   unsigned int size);
 static int axgbe_dev_xstats_reset(struct rte_eth_dev *dev);
 static int axgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 			struct rte_eth_rss_reta_entry64 *reta_conf,
@@ -239,7 +234,6 @@ static const struct eth_dev_ops axgbe_eth_dev_ops = {
 	.xstats_get               = axgbe_dev_xstats_get,
 	.xstats_reset             = axgbe_dev_xstats_reset,
 	.xstats_get_names         = axgbe_dev_xstats_get_names,
-	.xstats_get_names_by_id   = axgbe_dev_xstats_get_names_by_id,
 	.xstats_get_by_id         = axgbe_dev_xstats_get_by_id,
 	.reta_update              = axgbe_dev_rss_reta_update,
 	.reta_query               = axgbe_dev_rss_reta_query,
@@ -1022,9 +1016,9 @@ axgbe_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *stats,
 }

 static int
-axgbe_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
-			   struct rte_eth_xstat_name *xstats_names,
-			   unsigned int n)
+axgbe_dev_xstats_get_all_names(__rte_unused struct rte_eth_dev *dev,
+			       struct rte_eth_xstat_name *xstats_names,
+			       unsigned int n)
 {
 	unsigned int i;

@@ -1075,18 +1069,18 @@ axgbe_dev_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 }

 static int
-axgbe_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
-				 const uint64_t *ids,
-				 struct rte_eth_xstat_name *xstats_names,
-				 unsigned int size)
+axgbe_dev_xstats_get_names(struct rte_eth_dev *dev,
+			   const uint64_t *ids,
+			   struct rte_eth_xstat_name *xstats_names,
+			   unsigned int size)
 {
 	struct rte_eth_xstat_name xstats_names_copy[AXGBE_XSTATS_COUNT];
 	unsigned int i;

 	if (!ids)
-		return axgbe_dev_xstats_get_names(dev, xstats_names, size);
+		return axgbe_dev_xstats_get_all_names(dev, xstats_names, size);

-	axgbe_dev_xstats_get_names(dev, xstats_names_copy, size);
+	axgbe_dev_xstats_get_all_names(dev, xstats_names_copy, size);

 	for (i = 0; i < size; i++) {
 		if (ids[i] >= AXGBE_XSTATS_COUNT) {
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a..b18d14d735 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -484,11 +484,15 @@ bnx2x_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)

 static int
 bnx2x_get_xstats_names(__rte_unused struct rte_eth_dev *dev,
+		       const uint64_t *ids,
 		       struct rte_eth_xstat_name *xstats_names,
 		       __rte_unused unsigned limit)
 {
 	unsigned int i, stat_cnt = RTE_DIM(bnx2x_xstats_strings);

+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (xstats_names != NULL)
 		for (i = 0; i < stat_cnt; i++)
 			strlcpy(xstats_names[i].name,
diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c
index 991eafc644..aca350402b 100644
--- a/drivers/net/bnxt/bnxt_stats.c
+++ b/drivers/net/bnxt/bnxt_stats.c
@@ -845,6 +845,7 @@ int bnxt_flow_stats_cnt(struct bnxt *bp)
 }

 int bnxt_dev_xstats_get_names_op(struct rte_eth_dev *eth_dev,
+				 const uint64_t *ids,
 				 struct rte_eth_xstat_name *xstats_names,
 				 __rte_unused unsigned int limit)
 {
@@ -862,6 +863,9 @@ int bnxt_dev_xstats_get_names_op(struct rte_eth_dev *eth_dev,
 	if (rc)
 		return rc;

+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (xstats_names != NULL) {
 		count = 0;
diff --git a/drivers/net/bnxt/bnxt_stats.h b/drivers/net/bnxt/bnxt_stats.h
index 1ca9b9c594..497380ae2d 100644
--- a/drivers/net/bnxt/bnxt_stats.h
+++ b/drivers/net/bnxt/bnxt_stats.h
@@ -13,6 +13,7 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
 		      struct rte_eth_stats *bnxt_stats);
 int bnxt_stats_reset_op(struct rte_eth_dev *eth_dev);
 int bnxt_dev_xstats_get_names_op(struct rte_eth_dev *eth_dev,
+				 const uint64_t *ids,
 				 struct rte_eth_xstat_name *xstats_names,
 				 __rte_unused unsigned int limit);
 int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 8629193d50..c208611e88 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1258,7 +1258,6 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
 	.xstats_get_names = cnxk_nix_xstats_get_names,
 	.xstats_reset = cnxk_nix_xstats_reset,
 	.xstats_get_by_id = cnxk_nix_xstats_get_by_id,
-	.xstats_get_names_by_id = cnxk_nix_xstats_get_names_by_id,
 	.fw_version_get = cnxk_nix_fw_version_get,
 	.rxq_info_get = cnxk_nix_rxq_info_get,
 	.txq_info_get = cnxk_nix_txq_info_get,
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 946629f72e..1165482baf 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -362,12 +362,9 @@ int cnxk_nix_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
 int cnxk_nix_xstats_get(struct rte_eth_dev *eth_dev,
 			struct rte_eth_xstat *xstats, unsigned int n);
 int cnxk_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+			      const uint64_t *ids,
 			      struct rte_eth_xstat_name *xstats_names,
 			      unsigned int limit);
-int cnxk_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
-				    const uint64_t *ids,
-				    struct rte_eth_xstat_name *xstats_names,
-				    unsigned int limit);
 int cnxk_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
 			      uint64_t *values, unsigned int n);
 int cnxk_nix_xstats_reset(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/cnxk/cnxk_stats.c b/drivers/net/cnxk/cnxk_stats.c
index 4b0deac05e..ae3eef3628 100644
--- a/drivers/net/cnxk/cnxk_stats.c
+++ b/drivers/net/cnxk/cnxk_stats.c
@@ -162,10 +162,10 @@ cnxk_nix_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *xstats,
 	return size;
 }

-int
-cnxk_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
-			  struct rte_eth_xstat_name *xstats_names,
-			  unsigned int limit)
+static int
+cnxk_nix_xstats_get_all_names(struct rte_eth_dev *eth_dev,
+			      struct rte_eth_xstat_name *xstats_names,
+			      unsigned int limit)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	struct roc_nix_xstat_name roc_xstats_name[limit];
@@ -226,10 +226,10 @@ cnxk_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
 }

 int
-cnxk_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
-				const uint64_t *ids,
-				struct rte_eth_xstat_name *xstats_names,
-				unsigned int limit)
+cnxk_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+			  const uint64_t *ids,
+			  struct rte_eth_xstat_name *xstats_names,
+			  unsigned int limit)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	uint32_t nix_cnt = roc_nix_num_xstats_get(&dev->nix);
@@ -247,7 +247,7 @@ cnxk_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
 	if (xstats_names == NULL)
 		return -ENOMEM;

-	cnxk_nix_xstats_get_names(eth_dev, xnames, stat_cnt);
+	cnxk_nix_xstats_get_all_names(eth_dev, xnames, stat_cnt);

 	for (i = 0; i < limit; i++) {
 		if (ids[i] >= stat_cnt)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 4929766d9a..371550069e 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -1005,10 +1005,10 @@ static int cxgbe_dev_xstats_get_by_id(struct rte_eth_dev *dev,
 }

 /* Get names of port extended statistics by ID. */
-static int cxgbe_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
-					    const uint64_t *ids,
-					    struct rte_eth_xstat_name *xnames,
-					    unsigned int n)
+static int cxgbe_dev_xstats_get_names(struct rte_eth_dev *dev,
+				      const uint64_t *ids,
+				      struct rte_eth_xstat_name *xnames,
+				      unsigned int n)
 {
 	struct port_info *pi = dev->data->dev_private;
 	struct rte_eth_xstat_name *xnames_copy;
@@ -1048,14 +1048,6 @@ static int cxgbe_dev_xstats_get(struct rte_eth_dev *dev,
 	return cxgbe_dev_xstats(dev, NULL, xstats, n);
 }

-/* Get names of port extended statistics. */
-static int cxgbe_dev_xstats_get_names(struct rte_eth_dev *dev,
-				      struct rte_eth_xstat_name *xstats_names,
-				      unsigned int n)
-{
-	return cxgbe_dev_xstats(dev, xstats_names, NULL, n);
-}
-
 /* Reset port extended statistics. */
 static int cxgbe_dev_xstats_reset(struct rte_eth_dev *dev)
 {
@@ -1620,7 +1612,6 @@ static const struct eth_dev_ops cxgbe_eth_dev_ops = {
 	.xstats_get = cxgbe_dev_xstats_get,
 	.xstats_get_by_id = cxgbe_dev_xstats_get_by_id,
 	.xstats_get_names = cxgbe_dev_xstats_get_names,
-	.xstats_get_names_by_id = cxgbe_dev_xstats_get_names_by_id,
 	.xstats_reset = cxgbe_dev_xstats_reset,
 	.flow_ctrl_get = cxgbe_flow_ctrl_get,
 	.flow_ctrl_set = cxgbe_flow_ctrl_set,
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1f80e8d744..06e293b17f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -758,9 +758,9 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 }

 static int
-dpaa_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
-		      struct rte_eth_xstat_name *xstats_names,
-		      unsigned int limit)
+dpaa_xstats_get_all_names(__rte_unused struct rte_eth_dev *dev,
+			  struct rte_eth_xstat_name *xstats_names,
+			  unsigned int limit)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);

@@ -813,7 +813,7 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 }

 static int
-dpaa_xstats_get_names_by_id(
+dpaa_xstats_get_names(
 	struct rte_eth_dev *dev,
 	const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names,
@@ -823,9 +823,9 @@ dpaa_xstats_get_names_by_id(
 	struct rte_eth_xstat_name xstats_names_copy[stat_cnt];

 	if (!ids)
-		return dpaa_xstats_get_names(dev, xstats_names, limit);
+		return dpaa_xstats_get_all_names(dev, xstats_names, limit);

-	dpaa_xstats_get_names(dev, xstats_names_copy, limit);
+	dpaa_xstats_get_all_names(dev, xstats_names_copy, limit);

 	for (i = 0; i < limit; i++) {
 		if (ids[i] >= stat_cnt) {
@@ -1585,7 +1585,6 @@ static struct eth_dev_ops dpaa_devops = {
 	.stats_get = dpaa_eth_stats_get,
 	.xstats_get = dpaa_dev_xstats_get,
 	.xstats_get_by_id = dpaa_xstats_get_by_id,
-	.xstats_get_names_by_id = dpaa_xstats_get_names_by_id,
 	.xstats_get_names = dpaa_xstats_get_names,
 	.xstats_reset =
dpaa_eth_stats_reset, .stats_reset = dpaa_eth_stats_reset, diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index ea191564fc..9aaeb0bc17 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -1711,9 +1711,9 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, } static int -dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev, - struct rte_eth_xstat_name *xstats_names, - unsigned int limit) +dpaa2_xstats_get_all_names(__rte_unused struct rte_eth_dev *dev, + struct rte_eth_xstat_name *xstats_names, + unsigned int limit) { unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings); @@ -1793,7 +1793,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids, } static int -dpaa2_xstats_get_names_by_id( +dpaa2_xstats_get_names( struct rte_eth_dev *dev, const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, @@ -1803,9 +1803,9 @@ dpaa2_xstats_get_names_by_id( struct rte_eth_xstat_name xstats_names_copy[stat_cnt]; if (!ids) - return dpaa2_xstats_get_names(dev, xstats_names, limit); + return dpaa2_xstats_get_all_names(dev, xstats_names, limit); - dpaa2_xstats_get_names(dev, xstats_names_copy, limit); + dpaa2_xstats_get_all_names(dev, xstats_names_copy, limit); for (i = 0; i < limit; i++) { if (ids[i] >= stat_cnt) { @@ -2413,7 +2413,6 @@ static struct eth_dev_ops dpaa2_ethdev_ops = { .stats_get = dpaa2_dev_stats_get, .xstats_get = dpaa2_dev_xstats_get, .xstats_get_by_id = dpaa2_xstats_get_by_id, - .xstats_get_names_by_id = dpaa2_xstats_get_names_by_id, .xstats_get_names = dpaa2_xstats_get_names, .stats_reset = dpaa2_dev_stats_reset, .xstats_reset = dpaa2_dev_stats_reset, diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c index 6510cd7ceb..8bf254a802 100644 --- a/drivers/net/e1000/igb_ethdev.c +++ b/drivers/net/e1000/igb_ethdev.c @@ -93,9 +93,6 @@ static int eth_igb_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids, 
uint64_t *values, unsigned int n); static int eth_igb_xstats_get_names(struct rte_eth_dev *dev, - struct rte_eth_xstat_name *xstats_names, - unsigned int size); -static int eth_igb_xstats_get_names_by_id(struct rte_eth_dev *dev, const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned int limit); static int eth_igb_stats_reset(struct rte_eth_dev *dev); @@ -166,6 +163,7 @@ static int eth_igbvf_stats_get(struct rte_eth_dev *dev, static int eth_igbvf_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned n); static int eth_igbvf_xstats_get_names(struct rte_eth_dev *dev, + const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned limit); static int eth_igbvf_stats_reset(struct rte_eth_dev *dev); @@ -343,7 +341,6 @@ static const struct eth_dev_ops eth_igb_ops = { .stats_get = eth_igb_stats_get, .xstats_get = eth_igb_xstats_get, .xstats_get_by_id = eth_igb_xstats_get_by_id, - .xstats_get_names_by_id = eth_igb_xstats_get_names_by_id, .xstats_get_names = eth_igb_xstats_get_names, .stats_reset = eth_igb_stats_reset, .xstats_reset = eth_igb_xstats_reset, @@ -1863,26 +1860,7 @@ eth_igb_xstats_reset(struct rte_eth_dev *dev) return 0; } -static int eth_igb_xstats_get_names(__rte_unused struct rte_eth_dev *dev, - struct rte_eth_xstat_name *xstats_names, - __rte_unused unsigned int size) -{ - unsigned i; - - if (xstats_names == NULL) - return IGB_NB_XSTATS; - - /* Note: limit checked in rte_eth_xstats_names() */ - - for (i = 0; i < IGB_NB_XSTATS; i++) { - strlcpy(xstats_names[i].name, rte_igb_stats_strings[i].name, - sizeof(xstats_names[i].name)); - } - - return IGB_NB_XSTATS; -} - -static int eth_igb_xstats_get_names_by_id(struct rte_eth_dev *dev, +static int eth_igb_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned int limit) { @@ -1902,7 +1880,7 @@ static int eth_igb_xstats_get_names_by_id(struct rte_eth_dev *dev, } else { struct rte_eth_xstat_name 
xstats_names_copy[IGB_NB_XSTATS]; - eth_igb_xstats_get_names_by_id(dev, NULL, xstats_names_copy, + eth_igb_xstats_get_names(dev, NULL, xstats_names_copy, IGB_NB_XSTATS); for (i = 0; i < limit; i++) { @@ -2035,11 +2013,15 @@ igbvf_read_stats_registers(struct e1000_hw *hw, struct e1000_vf_stats *hw_stats) } static int eth_igbvf_xstats_get_names(__rte_unused struct rte_eth_dev *dev, - struct rte_eth_xstat_name *xstats_names, - __rte_unused unsigned limit) + const uint64_t *ids, + struct rte_eth_xstat_name *xstats_names, + __rte_unused unsigned limit) { unsigned i; + if (ids != NULL) + return -ENOTSUP; + if (xstats_names != NULL) for (i = 0; i < IGBVF_NB_XSTATS; i++) { strlcpy(xstats_names[i].name, diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 4cebf60a68..a807e50dba 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -212,7 +212,7 @@ static void ena_interrupt_handler_rte(void *cb_arg); static void ena_timer_wd_callback(struct rte_timer *timer, void *arg); static void ena_destroy_device(struct rte_eth_dev *eth_dev); static int eth_ena_dev_init(struct rte_eth_dev *eth_dev); -static int ena_xstats_get_names(struct rte_eth_dev *dev, +static int ena_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned int n); static int ena_xstats_get(struct rte_eth_dev *dev, @@ -2585,12 +2585,16 @@ int ena_copy_eni_stats(struct ena_adapter *adapter) * Number of xstats names. 
  */
 static int ena_xstats_get_names(struct rte_eth_dev *dev,
+				const uint64_t *ids,
 				struct rte_eth_xstat_name *xstats_names,
 				unsigned int n)
 {
 	unsigned int xstats_count = ena_xstats_calc_num(dev->data);
 	unsigned int stat, i, count = 0;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (n < xstats_count || !xstats_names)
 		return xstats_count;
 
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e0..dae85b7677 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -990,12 +990,15 @@ __fs_xstats_get_names(struct rte_eth_dev *dev,
 }
 
 static int
-fs_xstats_get_names(struct rte_eth_dev *dev,
+fs_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids,
 		    struct rte_eth_xstat_name *xstats_names,
 		    unsigned int limit)
 {
 	int ret;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	fs_lock(dev, 0);
 	ret = __fs_xstats_get_names(dev, xstats_names, limit);
 	fs_unlock(dev, 0);
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e40..16af1751f9 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -1232,11 +1232,15 @@ fm10k_link_update(struct rte_eth_dev *dev,
 }
 
 static int fm10k_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+	const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names,
 	__rte_unused unsigned limit)
 {
 	unsigned i, q;
 	unsigned count = 0;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (xstats_names != NULL) {
 		/* Note: limit checked in rte_eth_xstats_names() */
 
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c01e2ec1d4..bc6f07d070 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -2280,6 +2280,7 @@ static void hinic_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
  *   Number of xstats names.
  */
 static int hinic_dev_xstats_get_names(struct rte_eth_dev *dev,
+				      const uint64_t *ids,
 				      struct rte_eth_xstat_name *xstats_names,
 				      __rte_unused unsigned int limit)
 {
@@ -2287,6 +2288,9 @@ static int hinic_dev_xstats_get_names(struct rte_eth_dev *dev,
 	int count = 0;
 	u16 i = 0, q_num;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (xstats_names == NULL)
 		return hinic_xstats_calc_num(nic_dev);
 
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972..87ae92080f 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -7413,7 +7413,6 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.xstats_get_names = hns3_dev_xstats_get_names,
 	.xstats_reset = hns3_dev_xstats_reset,
 	.xstats_get_by_id = hns3_dev_xstats_get_by_id,
-	.xstats_get_names_by_id = hns3_dev_xstats_get_names_by_id,
 	.dev_infos_get = hns3_dev_infos_get,
 	.fw_version_get = hns3_fw_version_get,
 	.rx_queue_setup = hns3_rx_queue_setup,
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c8..d65236e003 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -2905,7 +2905,6 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
 	.xstats_get_names = hns3_dev_xstats_get_names,
 	.xstats_reset = hns3_dev_xstats_reset,
 	.xstats_get_by_id = hns3_dev_xstats_get_by_id,
-	.xstats_get_names_by_id = hns3_dev_xstats_get_names_by_id,
 	.dev_infos_get = hns3vf_dev_infos_get,
 	.fw_version_get = hns3vf_fw_version_get,
 	.rx_queue_setup = hns3_rx_queue_setup,
diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
index 0fe853d626..abb2d42144 100644
--- a/drivers/net/hns3/hns3_stats.c
+++ b/drivers/net/hns3/hns3_stats.c
@@ -1189,7 +1189,7 @@ hns3_imissed_stats_name_get(struct rte_eth_dev *dev,
 }
 
 /*
- * Retrieve names of extended statistics of an Ethernet device.
+ * Retrieve all names of extended statistics of an Ethernet device.
  *
  * There is an assumption that 'xstat_names' and 'xstats' arrays are matched
  * by array index:
@@ -1212,10 +1212,10 @@ hns3_imissed_stats_name_get(struct rte_eth_dev *dev,
  * - A positive value lower or equal to size: success. The return value
  *   is the number of entries filled in the stats table.
  */
-int
-hns3_dev_xstats_get_names(struct rte_eth_dev *dev,
-			  struct rte_eth_xstat_name *xstats_names,
-			  __rte_unused unsigned int size)
+static int
+hns3_dev_xstats_get_all_names(struct rte_eth_dev *dev,
+			      struct rte_eth_xstat_name *xstats_names,
+			      __rte_unused unsigned int size)
 {
 	struct hns3_adapter *hns = dev->data->dev_private;
 	int cnt_stats = hns3_xstats_calc_num(dev);
@@ -1382,10 +1382,9 @@ hns3_dev_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
  * shall not be used by the caller.
  */
 int
-hns3_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
-				const uint64_t *ids,
-				struct rte_eth_xstat_name *xstats_names,
-				uint32_t size)
+hns3_dev_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids,
+			  struct rte_eth_xstat_name *xstats_names,
+			  uint32_t size)
 {
 	const uint32_t cnt_stats = hns3_xstats_calc_num(dev);
 	struct hns3_adapter *hns = dev->data->dev_private;
@@ -1401,7 +1400,8 @@ hns3_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
 		if (size < cnt_stats)
 			return cnt_stats;
 
-		return hns3_dev_xstats_get_names(dev, xstats_names, cnt_stats);
+		return hns3_dev_xstats_get_all_names(dev, xstats_names,
+						     cnt_stats);
 	}
 
 	len = cnt_stats * sizeof(struct rte_eth_xstat_name);
@@ -1412,7 +1412,7 @@ hns3_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
-	(void)hns3_dev_xstats_get_names(dev, names_copy, cnt_stats);
+	(void)hns3_dev_xstats_get_all_names(dev, names_copy, cnt_stats);
 
 	for (i = 0; i < size; i++) {
 		if (ids[i] >= cnt_stats) {
diff --git a/drivers/net/hns3/hns3_stats.h b/drivers/net/hns3/hns3_stats.h
index d1230f94cb..53fd1572f0 100644
--- a/drivers/net/hns3/hns3_stats.h
+++ b/drivers/net/hns3/hns3_stats.h
@@ -153,17 +153,13 @@ int hns3_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *rte_stats);
 int hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 			unsigned int n);
 int hns3_dev_xstats_reset(struct rte_eth_dev *dev);
-int hns3_dev_xstats_get_names(struct rte_eth_dev *dev,
-			      struct rte_eth_xstat_name *xstats_names,
-			      __rte_unused unsigned int size);
 int hns3_dev_xstats_get_by_id(struct rte_eth_dev *dev,
 			      const uint64_t *ids,
 			      uint64_t *values,
 			      uint32_t size);
-int hns3_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
-				    const uint64_t *ids,
-				    struct rte_eth_xstat_name *xstats_names,
-				    uint32_t size);
+int hns3_dev_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids,
+			      struct rte_eth_xstat_name *xstats_names,
+			      unsigned int size);
 int hns3_stats_reset(struct rte_eth_dev *dev);
 int hns3_tqp_stats_init(struct hns3_hw *hw);
 void hns3_tqp_stats_uninit(struct hns3_hw *hw);
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7a2a8281d2..832c9bff01 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -258,6 +258,7 @@ static int i40e_dev_stats_get(struct rte_eth_dev *dev,
 static int i40e_dev_xstats_get(struct rte_eth_dev *dev,
 			       struct rte_eth_xstat *xstats, unsigned n);
 static int i40e_dev_xstats_get_names(struct rte_eth_dev *dev,
+				     const uint64_t *ids,
 				     struct rte_eth_xstat_name *xstats_names,
 				     unsigned limit);
 static int i40e_dev_stats_reset(struct rte_eth_dev *dev);
@@ -3567,12 +3568,16 @@ i40e_xstats_calc_num(void)
 }
 
 static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     const uint64_t *ids,
 				     struct rte_eth_xstat_name *xstats_names,
 				     __rte_unused unsigned limit)
 {
 	unsigned count = 0;
 	unsigned i, prio;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (xstats_names == NULL)
 		return i40e_xstats_calc_num();
 
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 5a5a7f59e1..a7be7abf1a 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -87,8 +87,9 @@ static int iavf_dev_stats_reset(struct rte_eth_dev *dev);
 static int iavf_dev_xstats_get(struct rte_eth_dev *dev,
 			       struct rte_eth_xstat *xstats, unsigned int n);
 static int iavf_dev_xstats_get_names(struct rte_eth_dev *dev,
-				     struct rte_eth_xstat_name *xstats_names,
-				     unsigned int limit);
+				     const uint64_t *ids,
+				     struct rte_eth_xstat_name *xstats_names,
+				     unsigned int limit);
 static int iavf_dev_promiscuous_enable(struct rte_eth_dev *dev);
 static int iavf_dev_promiscuous_disable(struct rte_eth_dev *dev);
 static int iavf_dev_allmulticast_enable(struct rte_eth_dev *dev);
@@ -1611,11 +1612,15 @@ iavf_dev_stats_reset(struct rte_eth_dev *dev)
 }
 
 static int iavf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
-				     struct rte_eth_xstat_name *xstats_names,
-				     __rte_unused unsigned int limit)
+				     const uint64_t *ids,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
 {
 	unsigned int i;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (xstats_names != NULL)
 		for (i = 0; i < IAVF_NB_XSTATS; i++) {
 			snprintf(xstats_names[i].name,
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index ea3b5c02aa..adeb5a00f3 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -145,6 +145,7 @@ static int ice_stats_reset(struct rte_eth_dev *dev);
 static int ice_xstats_get(struct rte_eth_dev *dev,
 			  struct rte_eth_xstat *xstats, unsigned int n);
 static int ice_xstats_get_names(struct rte_eth_dev *dev,
+				const uint64_t *ids,
 				struct rte_eth_xstat_name *xstats_names,
 				unsigned int limit);
 static int ice_dev_flow_ops_get(struct rte_eth_dev *dev,
@@ -5420,12 +5421,16 @@ ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 }
 
 static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				const uint64_t *ids,
 				struct rte_eth_xstat_name *xstats_names,
 				__rte_unused unsigned int limit)
 {
 	unsigned int count = 0;
 	unsigned int i;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (!xstats_names)
 		return ice_xstats_calc_num();
 
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index be2c066111..969e515a5f 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -213,9 +213,6 @@ static int eth_igc_xstats_get_by_id(struct rte_eth_dev *dev,
 		const uint64_t *ids, uint64_t *values, unsigned int n);
 static int eth_igc_xstats_get_names(struct rte_eth_dev *dev,
-		struct rte_eth_xstat_name *xstats_names,
-		unsigned int size);
-static int eth_igc_xstats_get_names_by_id(struct rte_eth_dev *dev,
 		const uint64_t *ids,
 		struct rte_eth_xstat_name *xstats_names, unsigned int limit);
 static int eth_igc_xstats_reset(struct rte_eth_dev *dev);
@@ -280,7 +277,6 @@ static const struct eth_dev_ops eth_igc_ops = {
 	.stats_get = eth_igc_stats_get,
 	.xstats_get = eth_igc_xstats_get,
 	.xstats_get_by_id = eth_igc_xstats_get_by_id,
-	.xstats_get_names_by_id = eth_igc_xstats_get_names_by_id,
 	.xstats_get_names = eth_igc_xstats_get_names,
 	.stats_reset = eth_igc_xstats_reset,
 	.xstats_reset = eth_igc_xstats_reset,
@@ -1991,7 +1987,7 @@ eth_igc_xstats_reset(struct rte_eth_dev *dev)
 }
 
 static int
-eth_igc_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+eth_igc_xstats_get_all_names(__rte_unused struct rte_eth_dev *dev,
 	struct rte_eth_xstat_name *xstats_names, unsigned int size)
 {
 	unsigned int i;
@@ -2012,14 +2008,14 @@ eth_igc_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 }
 
 static int
-eth_igc_xstats_get_names_by_id(struct rte_eth_dev *dev,
+eth_igc_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names, unsigned int limit)
 {
 	unsigned int i;
 
 	if (!ids)
-		return eth_igc_xstats_get_names(dev, xstats_names, limit);
+		return eth_igc_xstats_get_all_names(dev, xstats_names, limit);
 
 	for (i = 0; i < limit; i++) {
 		if (ids[i] >= IGC_NB_XSTATS) {
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 344c076f30..d813e9f909 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -50,8 +50,6 @@ static int ionic_dev_xstats_get_by_id(struct rte_eth_dev *dev,
 	const uint64_t *ids, uint64_t *values, unsigned int n);
 static int ionic_dev_xstats_reset(struct rte_eth_dev *dev);
 static int ionic_dev_xstats_get_names(struct rte_eth_dev *dev,
-	struct rte_eth_xstat_name *xstats_names, unsigned int size);
-static int ionic_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
 	const uint64_t *ids, struct rte_eth_xstat_name *xstats_names,
 	unsigned int limit);
 static int ionic_dev_fw_version_get(struct rte_eth_dev *eth_dev,
@@ -119,7 +117,6 @@ static const struct eth_dev_ops ionic_eth_dev_ops = {
 	.xstats_get_by_id = ionic_dev_xstats_get_by_id,
 	.xstats_reset = ionic_dev_xstats_reset,
 	.xstats_get_names = ionic_dev_xstats_get_names,
-	.xstats_get_names_by_id = ionic_dev_xstats_get_names_by_id,
 	.fw_version_get = ionic_dev_fw_version_get,
 };
 
@@ -714,25 +711,7 @@ ionic_dev_stats_reset(struct rte_eth_dev *eth_dev)
 }
 
 static int
-ionic_dev_xstats_get_names(__rte_unused struct rte_eth_dev *eth_dev,
-	struct rte_eth_xstat_name *xstats_names,
-	__rte_unused unsigned int size)
-{
-	unsigned int i;
-
-	if (xstats_names != NULL) {
-		for (i = 0; i < IONIC_NB_HW_STATS; i++) {
-			snprintf(xstats_names[i].name,
-				sizeof(xstats_names[i].name),
-				"%s", rte_ionic_xstats_strings[i].name);
-		}
-	}
-
-	return IONIC_NB_HW_STATS;
-}
-
-static int
-ionic_dev_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ionic_dev_xstats_get_names(struct rte_eth_dev *eth_dev, const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names, unsigned int limit)
 {
@@ -751,7 +730,7 @@ ionic_dev_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
 		return IONIC_NB_HW_STATS;
 	}
 
-	ionic_dev_xstats_get_names_by_id(eth_dev, NULL, xstats_names_copy,
+	ionic_dev_xstats_get_names(eth_dev, NULL, xstats_names_copy,
 		IONIC_NB_HW_STATS);
 
 	for (i = 0; i < limit; i++) {
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa587..5312b955d6 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2329,13 +2329,16 @@ ipn3ke_rpst_xstats_get
 
 static int
 ipn3ke_rpst_xstats_get_names
-(__rte_unused struct rte_eth_dev *dev,
+(__rte_unused struct rte_eth_dev *dev, const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names,
 	__rte_unused unsigned int limit)
 {
 	unsigned int count = 0;
 	unsigned int i, prio;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (!xstats_names)
 		return ipn3ke_rpst_xstats_calc_num();
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index aae8b55d83..51b86ea5ce 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -167,15 +167,12 @@ ixgbe_dev_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 static int ixgbe_dev_stats_reset(struct rte_eth_dev *dev);
 static int ixgbe_dev_xstats_reset(struct rte_eth_dev *dev);
 static int ixgbe_dev_xstats_get_names(struct rte_eth_dev *dev,
+				      const uint64_t *ids,
 				      struct rte_eth_xstat_name *xstats_names,
 				      unsigned int size);
 static int ixgbevf_dev_xstats_get_names(struct rte_eth_dev *dev,
-		struct rte_eth_xstat_name *xstats_names, unsigned limit);
-static int ixgbe_dev_xstats_get_names_by_id(
-	struct rte_eth_dev *dev, const uint64_t *ids,
-	struct rte_eth_xstat_name *xstats_names,
-	unsigned int limit);
+		const uint64_t *ids,
+		struct rte_eth_xstat_name *xstats_names, unsigned limit);
 static int ixgbe_dev_queue_stats_mapping_set(struct rte_eth_dev *eth_dev,
 					     uint16_t queue_id,
 					     uint8_t stat_idx,
@@ -499,7 +496,6 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.stats_reset = ixgbe_dev_stats_reset,
 	.xstats_reset = ixgbe_dev_xstats_reset,
 	.xstats_get_names = ixgbe_dev_xstats_get_names,
-	.xstats_get_names_by_id = ixgbe_dev_xstats_get_names_by_id,
 	.queue_stats_mapping_set = ixgbe_dev_queue_stats_mapping_set,
 	.fw_version_get = ixgbe_fw_version_get,
 	.dev_infos_get = ixgbe_dev_info_get,
@@ -3381,61 +3377,7 @@ ixgbe_xstats_calc_num(void) {
 	(IXGBE_NB_TXQ_PRIO_STATS * IXGBE_NB_TXQ_PRIO_VALUES);
 }
 
-static int ixgbe_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
-	struct rte_eth_xstat_name *xstats_names, __rte_unused unsigned int size)
-{
-	const unsigned cnt_stats = ixgbe_xstats_calc_num();
-	unsigned stat, i, count;
-
-	if (xstats_names != NULL) {
-		count = 0;
-
-		/* Note: limit >= cnt_stats checked upstream
-		 * in rte_eth_xstats_names()
-		 */
-
-		/* Extended stats from ixgbe_hw_stats */
-		for (i = 0; i < IXGBE_NB_HW_STATS; i++) {
-			strlcpy(xstats_names[count].name,
-				rte_ixgbe_stats_strings[i].name,
-				sizeof(xstats_names[count].name));
-			count++;
-		}
-
-		/* MACsec Stats */
-		for (i = 0; i < IXGBE_NB_MACSEC_STATS; i++) {
-			strlcpy(xstats_names[count].name,
-				rte_ixgbe_macsec_strings[i].name,
-				sizeof(xstats_names[count].name));
-			count++;
-		}
-
-		/* RX Priority Stats */
-		for (stat = 0; stat < IXGBE_NB_RXQ_PRIO_STATS; stat++) {
-			for (i = 0; i < IXGBE_NB_RXQ_PRIO_VALUES; i++) {
-				snprintf(xstats_names[count].name,
-					sizeof(xstats_names[count].name),
-					"rx_priority%u_%s", i,
-					rte_ixgbe_rxq_strings[stat].name);
-				count++;
-			}
-		}
-
-		/* TX Priority Stats */
-		for (stat = 0; stat < IXGBE_NB_TXQ_PRIO_STATS; stat++) {
-			for (i = 0; i < IXGBE_NB_TXQ_PRIO_VALUES; i++) {
-				snprintf(xstats_names[count].name,
-					sizeof(xstats_names[count].name),
-					"tx_priority%u_%s", i,
-					rte_ixgbe_txq_strings[stat].name);
-				count++;
-			}
-		}
-	}
-	return cnt_stats;
-}
-
-static int ixgbe_dev_xstats_get_names_by_id(
+static int ixgbe_dev_xstats_get_names(
 	struct rte_eth_dev *dev,
 	const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names,
@@ -3497,8 +3439,7 @@ static int ixgbe_dev_xstats_get_names_by_id(
 		uint16_t size = ixgbe_xstats_calc_num();
 		struct rte_eth_xstat_name xstats_names_copy[size];
 
-		ixgbe_dev_xstats_get_names_by_id(dev, NULL, xstats_names_copy,
-						 size);
+		ixgbe_dev_xstats_get_names(dev, NULL, xstats_names_copy, size);
 
 		for (i = 0; i < limit; i++) {
 			if (ids[i] >= size) {
@@ -3512,6 +3453,7 @@ static int ixgbe_dev_xstats_get_names_by_id(
 }
 
 static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+	const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names, unsigned limit)
 {
 	unsigned i;
@@ -3519,6 +3461,9 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 	if (limit < IXGBEVF_NB_XSTATS && xstats_names != NULL)
 		return -ENOMEM;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (xstats_names != NULL)
 		for (i = 0; i < IXGBEVF_NB_XSTATS; i++)
 			strlcpy(xstats_names[i].name,
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index b72060a449..68b674c2a6 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -214,6 +214,7 @@ lio_dev_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *xstats,
 
 static int
 lio_dev_xstats_get_names(struct rte_eth_dev *eth_dev,
+			 const uint64_t *ids,
 			 struct rte_eth_xstat_name *xstats_names,
 			 unsigned limit __rte_unused)
 {
@@ -226,6 +227,9 @@ lio_dev_xstats_get_names(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (xstats_names == NULL)
 		return LIO_NB_XSTATS;
 
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3581414b78..4eab2554bc 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1620,6 +1620,7 @@ int mlx5_xstats_get(struct rte_eth_dev *dev,
 		    struct rte_eth_xstat *stats, unsigned int n);
 int mlx5_xstats_reset(struct rte_eth_dev *dev);
 int mlx5_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+			  const uint64_t *ids,
 			  struct rte_eth_xstat_name *xstats_names,
 			  unsigned int n);
 
diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c
index ae2f5668a7..d67afd6bc1 100644
--- a/drivers/net/mlx5/mlx5_stats.c
+++ b/drivers/net/mlx5/mlx5_stats.c
@@ -260,6 +260,8 @@ mlx5_xstats_reset(struct rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device structure.
+ * @param ids
+ *   Array of xstats IDs to get names
  * @param[out] xstats_names
  *   Buffer to insert names into.
  * @param n
@@ -269,7 +271,7 @@ mlx5_xstats_reset(struct rte_eth_dev *dev)
  *   Number of xstats names.
  */
 int
-mlx5_xstats_get_names(struct rte_eth_dev *dev,
+mlx5_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids,
 		      struct rte_eth_xstat_name *xstats_names, unsigned int n)
 {
 	unsigned int i;
@@ -277,6 +279,9 @@ mlx5_xstats_get_names(struct rte_eth_dev *dev,
 	struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
 	unsigned int mlx5_xstats_n = xstats_ctrl->mlx5_stats_n;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (n >= mlx5_xstats_n && xstats_names) {
 		for (i = 0; i != mlx5_xstats_n; ++i) {
 			strncpy(xstats_names[i].name,
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8d..554aba1be9 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -1689,11 +1689,15 @@ mrvl_xstats_reset(struct rte_eth_dev *dev)
  */
 static int
 mrvl_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+		      const uint64_t *ids,
 		      struct rte_eth_xstat_name *xstats_names,
 		      unsigned int size)
 {
 	unsigned int i;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (!xstats_names)
 		return RTE_DIM(mrvl_xstats_tbl);
 
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a405973..4260f0b4ab 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -851,13 +851,16 @@ hn_dev_xstats_count(struct rte_eth_dev *dev)
 }
 
 static int
-hn_dev_xstats_get_names(struct rte_eth_dev *dev,
+hn_dev_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids,
 			struct rte_eth_xstat_name *xstats_names,
 			unsigned int limit)
 {
 	unsigned int i, t, count = 0;
 	int ret;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (!xstats_names)
 		return hn_dev_xstats_count(dev);
 
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e..c1fd2ad7d5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2338,7 +2338,6 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
 	.xstats_get_names = otx2_nix_xstats_get_names,
 	.xstats_reset = otx2_nix_xstats_reset,
 	.xstats_get_by_id = otx2_nix_xstats_get_by_id,
-	.xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
 	.rxq_info_get = otx2_nix_rxq_info_get,
 	.txq_info_get = otx2_nix_txq_info_get,
 	.rx_burst_mode_get = otx2_rx_burst_mode_get,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index b1575f59a2..1eeb77c9dd 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -506,6 +506,7 @@ int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
 int otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
 			struct rte_eth_xstat *xstats, unsigned int n);
 int otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+			      const uint64_t *ids,
 			      struct rte_eth_xstat_name *xstats_names,
 			      unsigned int limit);
 int otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev);
@@ -513,10 +514,6 @@ int otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev);
 int otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev,
 			      const uint64_t *ids,
 			      uint64_t *values, unsigned int n);
-int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
-				    const uint64_t *ids,
-				    struct rte_eth_xstat_name *xstats_names,
-				    unsigned int limit);
 
 /* RSS */
 void otx2_nix_rss_set_key(struct otx2_eth_dev *dev,
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
index 3adf21608c..70bfaa3d77 100644
--- a/drivers/net/octeontx2/otx2_stats.c
+++ b/drivers/net/octeontx2/otx2_stats.c
@@ -200,8 +200,8 @@ otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
 	return count;
 }
 
-int
-otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+static int
+otx2_nix_xstats_get_all_names(struct rte_eth_dev *eth_dev,
 			  struct rte_eth_xstat_name *xstats_names,
 			  unsigned int limit)
 {
@@ -239,10 +239,10 @@ otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
 }
 
 int
-otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
-				const uint64_t *ids,
-				struct rte_eth_xstat_name *xstats_names,
-				unsigned int limit)
+otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+			  const uint64_t *ids,
+			  struct rte_eth_xstat_name *xstats_names,
+			  unsigned int limit)
 {
 	struct rte_eth_xstat_name xstats_names_copy[OTX2_NIX_NUM_XSTATS_REG];
 	uint16_t i;
@@ -256,7 +256,7 @@ otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
 	if (xstats_names == NULL)
 		return -ENOMEM;
 
-	otx2_nix_xstats_get_names(eth_dev, xstats_names_copy, limit);
+	otx2_nix_xstats_get_all_names(eth_dev, xstats_names_copy, limit);
 
 	for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
 		if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a4304e0eff..40443b30bf 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1718,6 +1718,7 @@ qede_get_xstats_count(struct qede_dev *qdev) {
 
 static int
 qede_get_xstats_names(struct rte_eth_dev *dev,
+		      const uint64_t *ids,
 		      struct rte_eth_xstat_name *xstats_names,
 		      __rte_unused unsigned int limit)
 {
@@ -1726,6 +1727,9 @@ qede_get_xstats_names(struct rte_eth_dev *dev,
 	const unsigned int stat_cnt = qede_get_xstats_count(qdev);
 	unsigned int i, qid, hw_fn, stat_idx = 0;
 
+	if (ids != NULL)
+		return -ENOTSUP;
+
 	if (xstats_names == NULL)
 		return stat_cnt;
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f212ca8ad6..d05a1fb5ca 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -763,43 +763,6 @@ sfc_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 	return nb_supported;
 }
 
-static int
-sfc_xstats_get_names(struct rte_eth_dev *dev,
-		     struct rte_eth_xstat_name *xstats_names,
-		     unsigned int xstats_count)
-{
-	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
-	struct sfc_port *port = &sa->port;
-	unsigned int i;
-	unsigned int nstats = 0;
-	unsigned int nb_written = 0;
-	int ret;
-
-	if (unlikely(xstats_names == NULL))
-		return sfc_xstats_get_nb_supported(sa);
-
-	for (i = 0; i < EFX_MAC_NSTATS; ++i) {
-		if (EFX_MAC_STAT_SUPPORTED(port->mac_stats_mask, i)) {
-			if (nstats < xstats_count) {
-				strlcpy(xstats_names[nstats].name,
-					efx_mac_stat_name(sa->nic, i),
-					sizeof(xstats_names[0].name));
-				nb_written++;
-			}
-			nstats++;
-		}
-	}
-
-	ret = sfc_sw_xstats_get_names(sa, xstats_names, xstats_count,
-				      &nb_written, &nstats);
-	if (ret != 0) {
-		SFC_ASSERT(ret < 0);
-		return ret;
-	}
-
-	return nstats;
-}
-
 static int
 sfc_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 		     uint64_t *values, unsigned int n)
@@ -837,10 +800,42 @@ sfc_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 }
 
 static int
-sfc_xstats_get_names_by_id(struct rte_eth_dev *dev,
-			   const uint64_t *ids,
-			   struct rte_eth_xstat_name *xstats_names,
-			   unsigned int size)
+sfc_xstats_get_all_names(struct sfc_adapter *sa,
+			 struct rte_eth_xstat_name *xstats_names,
+			 unsigned int xstats_count)
+{
+	struct sfc_port *port = &sa->port;
+	unsigned int i;
+	unsigned int nstats = 0;
+	unsigned int nb_written = 0;
+	int ret;
+
+	for (i = 0; i < EFX_MAC_NSTATS; ++i) {
+		if (EFX_MAC_STAT_SUPPORTED(port->mac_stats_mask, i)) {
+			if (nstats < xstats_count) {
+				strlcpy(xstats_names[nstats].name,
+					efx_mac_stat_name(sa->nic, i),
+					sizeof(xstats_names[0].name));
+				nb_written++;
+			}
+			nstats++;
+		}
+	}
+
+	ret = sfc_sw_xstats_get_names(sa, xstats_names, xstats_count,
+				      &nb_written, &nstats);
+	if (ret != 0) {
+		SFC_ASSERT(ret < 0);
+		return ret;
+	}
+
+	return nstats;
+}
+
+static int
+sfc_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids,
+		     struct rte_eth_xstat_name *xstats_names,
+		     unsigned int size)
 {
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
 	struct sfc_port *port = &sa->port;
 	unsigned int i;
 	int ret;
 
-	if
(unlikely(xstats_names == NULL && ids != NULL) || - unlikely(xstats_names != NULL && ids == NULL)) + if (unlikely(xstats_names == NULL && ids != NULL)) return -EINVAL; if (unlikely(xstats_names == NULL && ids == NULL)) return sfc_xstats_get_nb_supported(sa); + if (ids == NULL) + return sfc_xstats_get_all_names(sa, xstats_names, size); + /* * Names array could be filled in nonsequential order. Fill names with * string indicating invalid ID first. @@ -1905,7 +1902,6 @@ static const struct eth_dev_ops sfc_eth_dev_ops = { .txq_info_get = sfc_tx_queue_info_get, .fw_version_get = sfc_fw_version_get, .xstats_get_by_id = sfc_xstats_get_by_id, - .xstats_get_names_by_id = sfc_xstats_get_names_by_id, .pool_ops_supported = sfc_pool_ops_supported, }; diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c index b267da462b..be6e073141 100644 --- a/drivers/net/txgbe/txgbe_ethdev.c +++ b/drivers/net/txgbe/txgbe_ethdev.c @@ -2424,7 +2424,7 @@ txgbe_get_offset_by_id(uint32_t id, uint32_t *offset) return -1; } -static int txgbe_dev_xstats_get_names(struct rte_eth_dev *dev, +static int txgbe_dev_xstats_get_all_names(struct rte_eth_dev *dev, struct rte_eth_xstat_name *xstats_names, unsigned int limit) { unsigned int i, count; @@ -2450,7 +2450,7 @@ static int txgbe_dev_xstats_get_names(struct rte_eth_dev *dev, return i; } -static int txgbe_dev_xstats_get_names_by_id(struct rte_eth_dev *dev, +static int txgbe_dev_xstats_get_names(struct rte_eth_dev *dev, const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned int limit) @@ -2458,7 +2458,7 @@ static int txgbe_dev_xstats_get_names_by_id(struct rte_eth_dev *dev, unsigned int i; if (ids == NULL) - return txgbe_dev_xstats_get_names(dev, xstats_names, limit); + return txgbe_dev_xstats_get_all_names(dev, xstats_names, limit); for (i = 0; i < limit; i++) { if (txgbe_get_name_by_id(ids[i], xstats_names[i].name, @@ -5292,7 +5292,6 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = { .stats_reset = 
txgbe_dev_stats_reset, .xstats_reset = txgbe_dev_xstats_reset, .xstats_get_names = txgbe_dev_xstats_get_names, - .xstats_get_names_by_id = txgbe_dev_xstats_get_names_by_id, .queue_stats_mapping_set = txgbe_dev_queue_stats_mapping_set, .fw_version_get = txgbe_fw_version_get, .dev_supported_ptypes_get = txgbe_dev_supported_ptypes_get, diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 896da8a887..cbe2f46f34 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -348,10 +348,14 @@ static struct rte_pci_driver rte_txgbevf_pmd = { }; static int txgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev, + const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned int limit) { unsigned int i; + if (ids != NULL) + return -ENOTSUP; + if (limit < TXGBEVF_NB_XSTATS && xstats_names != NULL) return -ENOMEM; diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c index a202931e9a..e5d723bfd7 100644 --- a/drivers/net/vhost/rte_eth_vhost.c +++ b/drivers/net/vhost/rte_eth_vhost.c @@ -251,6 +251,7 @@ vhost_dev_xstats_reset(struct rte_eth_dev *dev) static int vhost_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused, + const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned int limit __rte_unused) { @@ -258,6 +259,9 @@ vhost_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused, int count = 0; int nstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT; + if (ids != NULL) + return -ENOTSUP; + if (!xstats_names) return nstats; for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) { diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index b60eeb24ab..da405e0896 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -60,6 +60,7 @@ static int virtio_dev_stats_get(struct rte_eth_dev *dev, static int virtio_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, 
unsigned n); static int virtio_dev_xstats_get_names(struct rte_eth_dev *dev, + const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned limit); static int virtio_dev_stats_reset(struct rte_eth_dev *dev); @@ -1045,6 +1046,7 @@ virtio_update_stats(struct rte_eth_dev *dev, struct rte_eth_stats *stats) } static int virtio_dev_xstats_get_names(struct rte_eth_dev *dev, + const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, __rte_unused unsigned limit) { @@ -1055,6 +1057,9 @@ static int virtio_dev_xstats_get_names(struct rte_eth_dev *dev, unsigned nstats = dev->data->nb_tx_queues * VIRTIO_NB_TXQ_XSTATS + dev->data->nb_rx_queues * VIRTIO_NB_RXQ_XSTATS; + if (ids != NULL) + return -ENOTSUP; + if (xstats_names != NULL) { /* Note: limit checked in rte_eth_xstats_names() */ diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c index 2f40ae907d..48767e6db1 100644 --- a/drivers/net/vmxnet3/vmxnet3_ethdev.c +++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c @@ -80,6 +80,7 @@ static int vmxnet3_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); static int vmxnet3_dev_stats_reset(struct rte_eth_dev *dev); static int vmxnet3_dev_xstats_get_names(struct rte_eth_dev *dev, + const uint64_t *ids, struct rte_eth_xstat_name *xstats, unsigned int n); static int vmxnet3_dev_xstats_get(struct rte_eth_dev *dev, @@ -1201,6 +1202,7 @@ vmxnet3_hw_stats_save(struct vmxnet3_hw *hw) static int vmxnet3_dev_xstats_get_names(struct rte_eth_dev *dev, + const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned int n) { @@ -1209,6 +1211,9 @@ vmxnet3_dev_xstats_get_names(struct rte_eth_dev *dev, dev->data->nb_tx_queues * RTE_DIM(vmxnet3_txq_stat_strings) + dev->data->nb_rx_queues * RTE_DIM(vmxnet3_rxq_stat_strings); + if (ids != NULL) + return -ENOTSUP; + if (!xstats_names || n < nstats) return nstats; diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index 96dd0ecaf3..98b9a2ad9a 100644 --- 
a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -227,29 +227,34 @@ typedef int (*eth_xstats_get_by_id_t)(struct rte_eth_dev *dev, */ typedef int (*eth_xstats_reset_t)(struct rte_eth_dev *dev); -typedef int (*eth_xstats_get_names_t)(struct rte_eth_dev *dev, - struct rte_eth_xstat_name *xstats_names, unsigned int size); -/**< @internal Get names of extended stats of an Ethernet device. */ - /** * @internal * Get names of extended stats of an Ethernet device. * + * If @p size is 0, get the number of available statistics. + * + * If @p ids is NULL, get names of all available statistics. + * + * Otherwise, get names of statistics specified by @p ids. + * * @param dev * ethdev handle of port. * @param ids - * IDs array to retrieve specific statistics. Must not be NULL. + * IDs array to retrieve specific statistics. * @param xstats_names * An rte_eth_xstat_name array of at least @p size elements to be filled. - * Must not be NULL. * @param size * Element count in @p ids and @p xstats_names. * * @return + * - A number greater than @p size and equal to the number of extended + * statistics if @p ids is NULL and @p size is too small to return + * names of available statistics. * - A number of filled in stats. + * - -ENOTSUP if non-NULL @p ids are not supported * - A negative value on error. */ -typedef int (*eth_xstats_get_names_by_id_t)(struct rte_eth_dev *dev, +typedef int (*eth_xstats_get_names_t)(struct rte_eth_dev *dev, const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned int size); @@ -936,8 +941,6 @@ struct eth_dev_ops { eth_xstats_get_by_id_t xstats_get_by_id; /**< Get extended device statistic values by ID. */ - eth_xstats_get_names_by_id_t xstats_get_names_by_id; - /**< Get name of extended device statistics by ID. */ eth_tm_ops_get_t tm_ops_get; /**< Get Traffic Management (TM) operations. 
*/ diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 655d7be3b5..c951c0ba35 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -2867,7 +2867,7 @@ eth_dev_get_xstats_count(uint16_t port_id) RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); dev = &rte_eth_devices[port_id]; if (dev->dev_ops->xstats_get_names != NULL) { - count = (*dev->dev_ops->xstats_get_names)(dev, NULL, 0); + count = (*dev->dev_ops->xstats_get_names)(dev, NULL, NULL, 0); if (count < 0) return eth_err(port_id, count); } else @@ -3005,7 +3005,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id, if (ids && !xstats_names) return -EINVAL; - if (ids && dev->dev_ops->xstats_get_names_by_id != NULL && size > 0) { + if (ids && dev->dev_ops->xstats_get_names != NULL && size > 0) { uint64_t ids_copy[size]; for (i = 0; i < size; i++) { @@ -3021,9 +3021,16 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id, ids_copy[i] = ids[i] - basic_count; } - if (no_basic_stat_requested) - return (*dev->dev_ops->xstats_get_names_by_id)(dev, + if (no_basic_stat_requested) { + ret = (*dev->dev_ops->xstats_get_names)(dev, ids_copy, xstats_names, size); + if (ret == 0 || ret != -ENOTSUP) + return ret; + /* + * Driver does not support getting names by IDs. + * Fallback to support on ethdev layer. + */ + } } /* Retrieve all stats */ @@ -3104,7 +3111,7 @@ rte_eth_xstats_get_names(uint16_t port_id, * to end of list. 
*/ cnt_driver_entries = (*dev->dev_ops->xstats_get_names)( - dev, + dev, NULL, xstats_names + cnt_used_entries, size - cnt_used_entries); if (cnt_driver_entries < 0) From patchwork Tue Oct 5 17:19:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Havlík Martin X-Patchwork-Id: 100547 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Martin Havlik To: xhavli56@stud.fit.vutbr.cz, Thomas Monjalon , Ferruh Yigit , Andrew Rybchenko , "Min Hu (Connor)" , Ajit Khaparde , Xueming Li , Bing Zhao , Chengchang Tang Cc: Jan Viktorin , dev@dpdk.org, chas3@att.com, haiyue.wang@intel.com, ivan.ilchenko@oktetlabs.ru, aman.deep.singh@intel.com, kirankn@juniper.net, lirongqing@baidu.com Date: Tue, 5 Oct 2021 19:19:14 +0200 Message-Id: <20211005171914.2936-1-xhavli56@stud.fit.vutbr.cz> X-Mailer: git-send-email 2.27.0 MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 1/2] lib/ethdev: introduce RTE_ETH_DEV_CAPA_FLOW_CREATE_BEFORE_START Not all PMDs allow RTE flow rules to be created before start. This capability will be set for the ones that allow it. Signed-off-by: Martin Havlik Acked-by: Ajit Khaparde --- lib/ethdev/rte_ethdev.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index bef24173cf..3115a6fccf 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -1448,6 +1448,8 @@ struct rte_eth_conf { #define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001 /** Device supports Tx queue setup after device started. */ #define RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002 +/** Device supports RTE Flow rule creation before device start.
*/ +#define RTE_ETH_DEV_CAPA_FLOW_CREATE_BEFORE_START 0x00000004 /**@}*/ /* From patchwork Wed Apr 20 08:16:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Feifei Wang X-Patchwork-Id: 109909 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Feifei Wang To: Thomas Monjalon , Ferruh Yigit , Andrew Rybchenko , Ray Kinsella Cc: dev@dpdk.org, nd@arm.com, Feifei Wang , Honnappa Nagarahalli , Ruifeng Wang Subject: [PATCH v1 3/5] ethdev: add API for direct rearm mode Date: Wed, 20 Apr 2022 16:16:48 +0800 Message-Id: <20220420081650.2043183-4-feifei.wang2@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220420081650.2043183-1-feifei.wang2@arm.com> References: <20220420081650.2043183-1-feifei.wang2@arm.com> MIME-Version: 1.0 Add API for enabling direct
rearm mode and for mapping RX and TX queues. Currently, the API supports 1:1(txq : rxq) mapping. Suggested-by: Honnappa Nagarahalli Signed-off-by: Feifei Wang Reviewed-by: Ruifeng Wang Reviewed-by: Honnappa Nagarahalli --- lib/ethdev/ethdev_driver.h | 15 +++++++++++++++ lib/ethdev/rte_ethdev.c | 14 ++++++++++++++ lib/ethdev/rte_ethdev.h | 31 +++++++++++++++++++++++++++++++ lib/ethdev/version.map | 1 + 4 files changed, 61 insertions(+) diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index 69d9dc21d8..22022f6da9 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -485,6 +485,16 @@ typedef int (*eth_rx_enable_intr_t)(struct rte_eth_dev *dev, typedef int (*eth_rx_disable_intr_t)(struct rte_eth_dev *dev, uint16_t rx_queue_id); +/** @internal Enable direct rearm of a receive queue of an Ethernet device. */ +typedef int (*eth_rx_direct_rearm_enable_t)(struct rte_eth_dev *dev, + uint16_t queue_id); + +/**< @internal map Rx/Tx queue of direct rearm mode */ +typedef int (*eth_rx_direct_rearm_map_t)(struct rte_eth_dev *dev, + uint16_t rx_queue_id, + uint16_t tx_port_id, + uint16_t tx_queue_id); + /** @internal Release memory resources allocated by given Rx/Tx queue. 
*/ typedef void (*eth_queue_release_t)(struct rte_eth_dev *dev, uint16_t queue_id); @@ -1152,6 +1162,11 @@ struct eth_dev_ops { /** Disable Rx queue interrupt */ eth_rx_disable_intr_t rx_queue_intr_disable; + /** Enable Rx queue direct rearm mode */ + eth_rx_direct_rearm_enable_t rx_queue_direct_rearm_enable; + /** Map Rx/Tx queue for direct rearm mode */ + eth_rx_direct_rearm_map_t rx_queue_direct_rearm_map; + eth_tx_queue_setup_t tx_queue_setup;/**< Set up device Tx queue */ eth_queue_release_t tx_queue_release; /**< Release Tx queue */ eth_tx_done_cleanup_t tx_done_cleanup;/**< Free Tx ring mbufs */ diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 29a3d80466..8e6f0284f4 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -2139,6 +2139,20 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, return eth_err(port_id, ret); } +int +rte_eth_direct_rxrearm_map(uint16_t rx_port_id, uint16_t rx_queue_id, + uint16_t tx_port_id, uint16_t tx_queue_id) +{ + struct rte_eth_dev *dev; + + dev = &rte_eth_devices[rx_port_id]; + (*dev->dev_ops->rx_queue_direct_rearm_enable)(dev, rx_queue_id); + (*dev->dev_ops->rx_queue_direct_rearm_map)(dev, rx_queue_id, + tx_port_id, tx_queue_id); + + return 0; +} + int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port) { diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 04cff8ee10..4a431fcbed 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -5190,6 +5190,37 @@ __rte_experimental int rte_eth_dev_hairpin_capability_get(uint16_t port_id, struct rte_eth_hairpin_cap *cap); +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Enable direct re-arm mode. In this mode the RX queue will be re-armed using + * buffers that have completed transmission on the transmit side. 
+ * + * @note + * It is assumed that the buffers that have completed transmission belong to the + * mempool used at the receive side, and have refcnt = 1. + * + * @param rx_port_id + * Port identifying the receive side. + * @param rx_queue_id + * The index of the receive queue identifying the receive side. + * The value must be in the range [0, nb_rx_queue - 1] previously supplied + * to rte_eth_dev_configure(). + * @param tx_port_id + * Port identifying the transmit side. + * @param tx_queue_id + * The index of the transmit queue identifying the transmit side. + * The value must be in the range [0, nb_tx_queue - 1] previously supplied + * to rte_eth_dev_configure(). + * + * @return + * - (0) if successful. + */ +__rte_experimental +int rte_eth_direct_rxrearm_map(uint16_t rx_port_id, uint16_t rx_queue_id, + uint16_t tx_port_id, uint16_t tx_queue_id); + /** * @warning * @b EXPERIMENTAL: this structure may change without prior notice. diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 20391ab29e..68d664498c 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -279,6 +279,7 @@ EXPERIMENTAL { rte_flow_async_action_handle_create; rte_flow_async_action_handle_destroy; rte_flow_async_action_handle_update; + rte_eth_direct_rxrearm_map; }; INTERNAL { From patchwork Wed Jun 1 06:39:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "humin (Q)" X-Patchwork-Id: 112195 X-Patchwork-Delegate: ferruh.yigit@amd.com From: "Min Hu (Connor)" To: Subject: [PATCH v4 1/2] ethdev: fix one address occupies two indexes in MAC addrs Date: Wed, 1 Jun 2022 14:39:48 +0800 Message-ID: <20220601063949.43202-2-humin29@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220601063949.43202-1-humin29@huawei.com> References: <20220514020049.57294-1-humin29@huawei.com> <20220601063949.43202-1-humin29@huawei.com> MIME-Version: 1.0 From: Huisong Li The dev->data->mac_addrs[0] entry is changed to a new MAC address when an application modifies the default MAC address with rte_eth_dev_default_mac_addr_set(). However, if the new default address has already been added as a non-default MAC address by rte_eth_dev_mac_addr_add(), rte_eth_dev_default_mac_addr_set() does not remove it from the mac_addrs list. As a result, one MAC address occupies two indexes in the list. Like: add(MAC1) add(MAC2) add(MAC3) add(MAC4) set_default(MAC3) default=MAC3, filters=MAC1, MAC2, MAC3, MAC4 In addition, some PMDs, such as i40e, ice, hns3 and so on, do remove the old default MAC address when a new default one is set.
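The duplicated-index behaviour described above can be reproduced with a minimal, self-contained model. The sketch below is not DPDK code: the fixed-size array and the add()/set_default()/count() helpers are illustrative stand-ins for dev->data->mac_addrs[], rte_eth_dev_mac_addr_add() and the pre-fix rte_eth_dev_default_mac_addr_set(), with small integers in place of Ethernet addresses.

```c
/*
 * Minimal model of the duplicated-index problem (not DPDK code).
 * mac_addrs[0] plays the role of the default address slot;
 * 0 means "slot empty".
 */
#include <string.h>

#define MAX_ADDRS 8

static int mac_addrs[MAX_ADDRS];

/* Stand-in for rte_eth_dev_mac_addr_add(): fill the first free non-default slot. */
static void add(int mac)
{
	for (int i = 1; i < MAX_ADDRS; i++) {
		if (mac_addrs[i] == 0) {
			mac_addrs[i] = mac;
			return;
		}
	}
}

/* Pre-fix set_default(): overwrite index 0 without removing an existing entry. */
static void set_default(int mac)
{
	mac_addrs[0] = mac;
}

/* How many slots hold this address. */
static int count(int mac)
{
	int n = 0;

	for (int i = 0; i < MAX_ADDRS; i++)
		if (mac_addrs[i] == mac)
			n++;
	return n;
}

/*
 * Reproduce the scenario from the commit message and report how many
 * indexes MAC3 ends up occupying (2 with the buggy behaviour).
 */
static int run_scenario(void)
{
	memset(mac_addrs, 0, sizeof(mac_addrs));
	add(1);
	add(2);
	add(3);
	add(4);
	set_default(3); /* MAC3 is now at index 0 AND at its original slot */
	return count(3);
}
```

With the fix described in this patch, set_default() would first remove an existing non-default entry for the same address, and run_scenario() would return 1 instead of 2.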
If the user then does set_default(MAC5), the mac_addrs list becomes default=MAC5, filters=(MAC1, MAC2, MAC3, MAC4). At this point, the user can still see MAC3 in the list, but packets with MAC3 are not actually received by the PMD. Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier") Cc: stable@dpdk.org Signed-off-by: Huisong Li Signed-off-by: Min Hu --- lib/ethdev/rte_ethdev.c | 39 +++++++++++++++++++++++++++++++++++++-- 1 file changed, 37 insertions(+), 2 deletions(-) diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 46c088dc88..fc9ca8d6fd 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -4260,7 +4260,10 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr) int rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr) { + uint64_t mac_pool_sel_bk = 0; struct rte_eth_dev *dev; + uint32_t pool; + int index; int ret; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); @@ -4278,16 +4281,48 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr) RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP); + /* + * If the address has been added as a non-default MAC address by + * rte_eth_dev_mac_addr_add API, it should be removed from + * dev->data->mac_addrs[].
+ */ + index = eth_dev_get_mac_addr_index(port_id, addr); + if (index > 0) { + /* remove address in NIC data structure */ + mac_pool_sel_bk = dev->data->mac_pool_sel[index]; + ret = rte_eth_dev_mac_addr_remove(port_id, addr); + if (ret < 0) { + RTE_ETHDEV_LOG(ERR, + "Delete MAC address from the MAC list of ethdev port %u.\n", + port_id); + return ret; + } + /* reset pool bitmap */ + dev->data->mac_pool_sel[index] = 0; + } + ret = (*dev->dev_ops->mac_addr_set)(dev, addr); if (ret < 0) - return ret; + goto back; /* Update default address in NIC data structure */ rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]); return 0; -} +back: + if (index > 0) { + pool = 0; + do { + if (mac_pool_sel_bk & UINT64_C(1)) + rte_eth_dev_mac_addr_add(port_id, addr, pool); + mac_pool_sel_bk >>= 1; + pool++; + } while (mac_pool_sel_bk); + } + + return ret; +} /* * Returns index into MAC address array of addr. Use 00:00:00:00:00:00 to find From patchwork Wed Jun 1 06:39:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "humin (Q)" X-Patchwork-Id: 112193 X-Patchwork-Delegate: ferruh.yigit@amd.com From: "Min Hu (Connor)" To: Subject: [PATCH v4 2/2] ethdev: document default and non-default MAC address Date: Wed, 1 Jun 2022 14:39:49 +0800 Message-ID: <20220601063949.43202-3-humin29@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220601063949.43202-1-humin29@huawei.com> References: <20220514020049.57294-1-humin29@huawei.com> <20220601063949.43202-1-humin29@huawei.com> MIME-Version: 1.0 From: Huisong Li The rte_eth_dev_data::mac_addrs field is a MAC address array. Index zero of this array holds the default address, and no other index may hold the same address as index 0. Breaking this rule may cause the following problems: 1) waste of MAC address space. 2) a fake MAC address in the MAC list that is not in the hardware MAC entries. 3) a MAC address assigned to a different pool. Signed-off-by: Huisong Li Signed-off-by: Min Hu --- lib/ethdev/ethdev_driver.h | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index 69d9dc21d8..d49e9138c6 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -115,7 +115,12 @@ struct rte_eth_dev_data { uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation failures */ - /** Device Ethernet link address. @see rte_eth_dev_release_port() */ + /** + * Device Ethernet link address.
The index zero of the array is as the + * index of the default address, and other indexes can't be the same + * as the address corresponding to index 0. + * @see rte_eth_dev_release_port() + */ struct rte_ether_addr *mac_addrs; /** Bitmap associating MAC addresses to pools */ uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR]; From patchwork Wed Jun 1 07:39:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jack Min X-Patchwork-Id: 112196 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 57C89A0547; Wed, 1 Jun 2022 09:39:46 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 497F740E09; Wed, 1 Jun 2022 09:39:46 +0200 (CEST) Received: from NAM10-BN7-obe.outbound.protection.outlook.com (mail-bn7nam10on2070.outbound.protection.outlook.com [40.107.92.70]) by mails.dpdk.org (Postfix) with ESMTP id C6BF740DF7 for ; Wed, 1 Jun 2022 09:39:44 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=a9VyBDpHpc/s0ZgxuxQXcfJFULfXsZFHpWIHgjA0AInbHMhoz4CyyDlDwFTzZ8K2RdgAlgIqCCkiW3TaVrpJQV/4XmfmBib24aWIEt8cpxe7WbnzDl6ytdKtkMz1+Vcr/u0BQFv2oj5Ki0qElEuQ9Qs0vYP98Z4j2yodOO6UMtuJC3B47q+ZDGuzq0RSheZhzWlD9aJzck6E60vp7PsYgcFb/c2dgd2ehfm99z0BAeNJEcaovGG3H2pHRQyi29jzKplcUp9rU/vJ2RuDdWlLaFmVgui4wzvv/KBHFwboNqbubblqBMhAqxRcgoX2NumqCtZInYbJWwRGfDbIE7z/4Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=aSiegVonyL7LN/41aBNoQvqCvRYU2P36bBXMLvFi3eM=; 
From: Xiaoyu Min To: , Ori Kam , Ferruh Yigit , Andrew Rybchenko CC: Subject: [RFC v2 1/2] ethdev: port flags for pre-configuration flow hints Date: Wed, 1 Jun 2022 15:39:18 +0800 Message-ID: <608febf8d5d3c434a1eddb2e56f425ebbd6ff0b4.1654063912.git.jackmin@nvidia.com> X-Mailer: git-send-email 2.36.1
The data-path focused flow rule management can manage flow rules in a more optimized way than the traditional one by using hints provided by the application in the initialization phase. In addition to the hints already carried in the port attributes, the application could provide more hints about its behaviour. One example is how the application handles a given flow rule: A. flows are created/destroyed on the same queue, but queried on a different queue or in a queue-less way (e.g. a counter query); B. all flow operations happen on exactly the same queue, in which case the PMD can optimize more than in case A because resources can be isolated and accessed per queue, without locking, for example. This patch adds a flag for the latter situation and could be extended to cover more situations. Signed-off-by: Xiaoyu Min Acked-by: Ori Kam --- lib/ethdev/rte_flow.h | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index d8827dd184..38439fcd1d 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4948,6 +4948,12 @@ rte_flow_info_get(uint16_t port_id, struct rte_flow_queue_info *queue_info, struct rte_flow_error *error); +/** + * Indicate all operations for a given flow rule will _strictly_ happen + * on the same queue (create/destroy/query/update). + */ +#define RTE_FLOW_PORT_FLAG_STRICT_QUEUE RTE_BIT32(0) + /** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. @@ -4972,6 +4978,11 @@ struct rte_flow_port_attr { * @see RTE_FLOW_ACTION_TYPE_METER */ uint32_t nb_meters; + /** + * Port flags.
+ * @see RTE_FLOW_PORT_FLAG_STRICT_QUEUE + */ + uint32_t flags; }; /** From patchwork Wed Jun 1 07:39:19 2022 X-Patchwork-Submitter: Jack Min X-Patchwork-Id: 112197 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Xiaoyu Min To: , Ori Kam , Ferruh Yigit , Andrew Rybchenko CC: Subject: [RFC v2 2/2] ethdev: queue-based flow aged report Date: Wed, 1 Jun 2022 15:39:19 +0800 Message-ID: <7a45693f478b1b721b4e05131141b526185a175c.1654063912.git.jackmin@nvidia.com> X-Mailer: git-send-email 2.36.1
When the application uses queue-based flow rule management and operates on the same flow rule on the same queue, e.g. create/destroy/query, the API for querying aged flow rules should also take a queue id parameter, just like the other queue-based flow APIs. This way, the PMD can work in a more optimized manner, since resources are isolated per queue and need no synchronization. If the application does use queue-based flow management but configures the port without RTE_FLOW_PORT_FLAG_STRICT_QUEUE, meaning it operates on a given flow rule on different queues, the queue id parameter will be ignored. In addition to the above change, another new API is added to help the application find out which queues have aged-out flows after the RTE_ETH_EVENT_FLOW_AGED event is received. The queried queue ids can then be used in the queue-based aged-flow query API above. Signed-off-by: Xiaoyu Min --- lib/ethdev/rte_flow.h | 82 ++++++++++++++++++++++++++++++++++++ lib/ethdev/rte_flow_driver.h | 13 ++++++ 2 files changed, 95 insertions(+) diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 38439fcd1d..a12becfe3b 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2810,6 +2810,7 @@ enum rte_flow_action_type { * See function rte_flow_get_aged_flows * see enum RTE_ETH_EVENT_FLOW_AGED * See struct rte_flow_query_age + * See function rte_flow_get_q_aged_flows */ RTE_FLOW_ACTION_TYPE_AGE, @@ -5624,6 +5625,87 @@ rte_flow_async_action_handle_update(uint16_t port_id, const void *update, void *user_data, struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Get flow queues which have aged out flows on a given port.
+ * + * The application can use this function to query which queues have aged-out flows after + * a RTE_ETH_EVENT_FLOW_AGED event is received, so the returned queue ids can be used to + * get the aged-out flows on a given queue by calling rte_flow_get_q_aged_flows. + * + * This function can be called from the event callback or synchronously regardless of the event. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in, out] queue_id + * Array of queue ids that will be set. + * @param[in] nb_queue_id + * Maximum number of queue ids that can be returned. + * This value should be equal to the size of the queue_id array. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * if nb_queue_id is 0, return the number of all queues which have aged-out flows. + * if nb_queue_id is not 0, return the number of queues which have aged-out flows + * reported in the queue_id array, otherwise a negative errno value. + * + * @see rte_flow_action_age + * @see RTE_ETH_EVENT_FLOW_AGED + */ + +__rte_experimental +int +rte_flow_get_aged_queues(uint16_t port_id, uint32_t queue_id[], uint32_t nb_queue_id, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Get aged-out flows of a given port on the given flow queue. + * + * The RTE_ETH_EVENT_FLOW_AGED event will be triggered when at least one new aged-out flow is + * detected on any flow queue after the last call to rte_flow_get_q_aged_flows. + * + * The application can use rte_flow_get_aged_queues to query which queues have aged-out + * flows after the RTE_ETH_EVENT_FLOW_AGED event. + * + * If the application configures the port attributes without RTE_FLOW_PORT_FLAG_STRICT_QUEUE, + * the @p queue_id will be ignored. + * This function can be called to get the aged flows asynchronously from the + * event callback or synchronously regardless of the event.
+ * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue to query. Ignored when RTE_FLOW_PORT_FLAG_STRICT_QUEUE is not set. + * @param[in, out] contexts + * The address of an array of pointers to the aged-out flow contexts. + * @param[in] nb_contexts + * The length of the context array of pointers. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * if nb_contexts is 0, return the number of all aged-out contexts. + * if nb_contexts is not 0, return the number of aged-out flows reported + * in the context array, otherwise a negative errno value. + * + * @see rte_flow_action_age + * @see RTE_ETH_EVENT_FLOW_AGED + * @see rte_flow_port_flag + */ + +__rte_experimental +int +rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts, + uint32_t nb_contexts, struct rte_flow_error *error); #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 2bff732d6a..b665170bf4 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -260,6 +260,19 @@ struct rte_flow_ops { const void *update, void *user_data, struct rte_flow_error *error); + /** See rte_flow_get_aged_queues() */ + int (*get_aged_queues) + (uint16_t port_id, + uint32_t queue_id[], + uint32_t nb_queue_id, + struct rte_flow_error *error); + /** See rte_flow_get_q_aged_flows() */ + int (*get_q_aged_flows) + (uint16_t port_id, + uint32_t queue_id, + void **contexts, + uint32_t nb_contexts, + struct rte_flow_error *error); }; /** From patchwork Wed Jun 1 07:49:27 2022 X-Patchwork-Submitter: "humin (Q)" X-Patchwork-Id: 112202 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: "Min Hu (Connor)" To: Subject: [PATCH v3 1/4] ethdev: introduce ethdev HW desc dump API Date: Wed, 1 Jun 2022 15:49:27 +0800 Message-ID: <20220601074930.10313-2-humin29@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220601074930.10313-1-humin29@huawei.com> References: <20220527023351.40577-1-humin29@huawei.com> <20220601074930.10313-1-humin29@huawei.com> Added the ethdev HW Rx desc dump API, which provides functions for querying HW descriptors from the device. HW descriptor info differs between NICs. The information reflects the I/O process, which is important for debugging. As the information differs between NICs, a new API is introduced.
Signed-off-by: Min Hu (Connor) --- doc/guides/rel_notes/release_22_07.rst | 7 ++++ lib/ethdev/ethdev_driver.h | 42 ++++++++++++++++++++++++ lib/ethdev/rte_ethdev.c | 44 ++++++++++++++++++++++++++ lib/ethdev/rte_ethdev.h | 44 ++++++++++++++++++++++++++ lib/ethdev/version.map | 2 ++ 5 files changed, 139 insertions(+) diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst index 8932a1d478..56c675121a 100644 --- a/doc/guides/rel_notes/release_22_07.rst +++ b/doc/guides/rel_notes/release_22_07.rst @@ -137,6 +137,13 @@ New Features * ``RTE_EVENT_QUEUE_ATTR_WEIGHT`` * ``RTE_EVENT_QUEUE_ATTR_AFFINITY`` +* **Added ethdev HW desc dump API, to dump Rx/Tx HW desc info from device.** + + Added the ethdev HW Rx desc dump API, which provides functions for querying + HW descriptors from the device. HW descriptor info differs between NICs. + The information reflects the I/O process, which is important for debugging. + As the information differs between NICs, a new API is introduced. + Removed Items ------------- diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index 69d9dc21d8..9c1726eb2d 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -1073,6 +1073,42 @@ typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev, */ typedef int (*eth_dev_priv_dump_t)(struct rte_eth_dev *dev, FILE *file); +/** + * @internal + * Dump ethdev Rx descriptor info to a file. + * + * @param file + * A pointer to a file for output. + * @param dev + * Port (ethdev) handle. + * @param queue_id + * The selected queue. + * @param desc_id + * The selected descriptor. + * @return + * Negative errno value on error, zero on success. + */ +typedef int (*eth_rx_hw_desc_dump_t)(FILE *file, const struct rte_eth_dev *dev, + uint16_t queue_id, uint16_t desc_id); + +/** + * @internal + * Dump ethdev Tx descriptor info to a file. + * + * @param file + * A pointer to a file for output. + * @param dev + * Port (ethdev) handle.
+ * @param queue_id + * The selected queue. + * @param desc_id + * The selected descriptor. + * @return + * Negative errno value on error, zero on success. + */ +typedef int (*eth_tx_hw_desc_dump_t)(FILE *file, const struct rte_eth_dev *dev, + uint16_t queue_id, uint16_t desc_id); + /** * @internal A structure containing the functions exported by an Ethernet driver. */ @@ -1283,6 +1319,12 @@ struct eth_dev_ops { /** Dump private info from device */ eth_dev_priv_dump_t eth_dev_priv_dump; + + /** Dump ethdev Rx descriptor info */ + eth_rx_hw_desc_dump_t eth_rx_hw_desc_dump; + + /** Dump ethdev Tx descriptor info */ + eth_tx_hw_desc_dump_t eth_tx_hw_desc_dump; }; /** diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 46c088dc88..bbd8439fa0 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -5876,6 +5876,50 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file) return eth_err(port_id, (*dev->dev_ops->eth_dev_priv_dump)(dev, file)); } +int +rte_eth_rx_hw_desc_dump(FILE *file, uint16_t port_id, uint16_t queue_id, + uint16_t desc_id) +{ + struct rte_eth_dev *dev; + int ret; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (queue_id >= dev->data->nb_rx_queues) { + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->eth_rx_hw_desc_dump, -ENOTSUP); + ret = (*dev->dev_ops->eth_rx_hw_desc_dump)(file, dev, queue_id, + desc_id); + + return ret; +} + +int +rte_eth_tx_hw_desc_dump(FILE *file, uint16_t port_id, uint16_t queue_id, + uint16_t desc_id) +{ + struct rte_eth_dev *dev; + int ret; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (queue_id >= dev->data->nb_tx_queues) { + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->eth_tx_hw_desc_dump, -ENOTSUP); + ret = 
(*dev->dev_ops->eth_tx_hw_desc_dump)(file, dev, queue_id, + desc_id); + + return ret; +} + RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO); RTE_INIT(ethdev_init_telemetry) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 02df65d923..56ae630209 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -5456,6 +5456,50 @@ typedef struct { __rte_experimental int rte_eth_dev_priv_dump(uint16_t port_id, FILE *file); +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Dump ethdev Rx descriptor info to a file. + * + * @param file + * A pointer to a file for output. + * @param port_id + * Port identifier of the Ethernet device. + * @param queue_id + * The selected queue. + * @param desc_id + * The selected descriptor. + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_eth_rx_hw_desc_dump(FILE *file, uint16_t port_id, uint16_t queue_id, + uint16_t desc_id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Dump ethdev Tx descriptor info to a file. + * + * @param file + * A pointer to a file for output. + * @param port_id + * Port identifier of the Ethernet device. + * @param queue_id + * The selected queue. + * @param desc_id + * The selected descriptor. + * @return + * - On success, zero. + * - On failure, a negative value.
+ */ +__rte_experimental +int rte_eth_tx_hw_desc_dump(FILE *file, uint16_t port_id, uint16_t queue_id, + uint16_t desc_id); + #include /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index daca7851f2..109f4ea818 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -285,6 +285,8 @@ EXPERIMENTAL { rte_mtr_color_in_protocol_priority_get; rte_mtr_color_in_protocol_set; rte_mtr_meter_vlan_table_update; + rte_eth_rx_hw_desc_dump; + rte_eth_tx_hw_desc_dump; }; INTERNAL { From patchwork Sat Jul 2 08:17:30 2022 X-Patchwork-Submitter: Dongdong Liu X-Patchwork-Id: 113633 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru From: Dongdong Liu To: , , , CC: , Huisong Li , Dongdong Liu Subject: [PATCH v2 3/3] ethdev: add the check for the validity of timestamp offload Date: Sat, 2 Jul 2022 16:17:30 +0800 Message-ID: <20220702081730.1168-4-liudongdong3@huawei.com> X-Mailer: git-send-email 2.22.0
In-Reply-To: <20220702081730.1168-1-liudongdong3@huawei.com> References: <20220628133959.21381-1-liudongdong3@huawei.com> <20220702081730.1168-1-liudongdong3@huawei.com> From: Huisong Li This patch adds the check for the validity of timestamp offload. Signed-off-by: Huisong Li Signed-off-by: Dongdong Liu --- lib/ethdev/rte_ethdev.c | 65 ++++++++++++++++++++++++++++++++++++++++- 1 file changed, 64 insertions(+), 1 deletion(-) diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 1979dc0850..9b8ba3a348 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -5167,15 +5167,48 @@ rte_eth_dev_set_mc_addr_list(uint16_t port_id, mc_addr_set, nb_mc_addr)); } +static int +rte_eth_timestamp_offload_valid(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_info dev_info; + struct rte_eth_rxmode *rxmode; + int ret; + + ret = rte_eth_dev_info_get(dev->data->port_id, &dev_info); + if (ret != 0) { + RTE_ETHDEV_LOG(ERR, "Cannot get port (%u) device information.\n", + dev->data->port_id); + return ret; + } + + if ((dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP) == 0) { + RTE_ETHDEV_LOG(ERR, "Driver does not support PTP.\n"); + return -ENOTSUP; + } + + rxmode = &dev->data->dev_conf.rxmode; + if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) == 0) { + RTE_ETHDEV_LOG(ERR, "Please enable 'RTE_ETH_RX_OFFLOAD_TIMESTAMP' offload.\n"); + return -EINVAL; + } + + return 0; +} + int rte_eth_timesync_enable(uint16_t port_id) { struct rte_eth_dev *dev; + int ret; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); dev = &rte_eth_devices[port_id];
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP); + ret = rte_eth_timestamp_offload_valid(dev); + if (ret != 0) + return ret; + return eth_err(port_id, (*dev->dev_ops->timesync_enable)(dev)); } @@ -5183,11 +5216,15 @@ int rte_eth_timesync_disable(uint16_t port_id) { struct rte_eth_dev *dev; + int ret; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); dev = &rte_eth_devices[port_id]; RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP); + ret = rte_eth_timestamp_offload_valid(dev); + if (ret != 0) + return ret; return eth_err(port_id, (*dev->dev_ops->timesync_disable)(dev)); } @@ -5196,6 +5233,7 @@ rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp, uint32_t flags) { struct rte_eth_dev *dev; + int ret; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); dev = &rte_eth_devices[port_id]; @@ -5207,7 +5245,12 @@ rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp, return -EINVAL; } - RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP); + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, + -ENOTSUP); + ret = rte_eth_timestamp_offload_valid(dev); + if (ret != 0) + return ret; + return eth_err(port_id, (*dev->dev_ops->timesync_read_rx_timestamp) (dev, timestamp, flags)); } @@ -5217,6 +5260,7 @@ rte_eth_timesync_read_tx_timestamp(uint16_t port_id, struct timespec *timestamp) { struct rte_eth_dev *dev; + int ret; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); dev = &rte_eth_devices[port_id]; @@ -5229,6 +5273,10 @@ rte_eth_timesync_read_tx_timestamp(uint16_t port_id, } RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP); + ret = rte_eth_timestamp_offload_valid(dev); + if (ret != 0) + return ret; + return eth_err(port_id, (*dev->dev_ops->timesync_read_tx_timestamp) (dev, timestamp)); } @@ -5237,11 +5285,16 @@ int rte_eth_timesync_adjust_time(uint16_t port_id, int64_t delta) { struct rte_eth_dev *dev; + int ret; 
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); dev = &rte_eth_devices[port_id]; RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_adjust_time, -ENOTSUP); + ret = rte_eth_timestamp_offload_valid(dev); + if (ret != 0) + return ret; + return eth_err(port_id, (*dev->dev_ops->timesync_adjust_time)(dev, delta)); } @@ -5249,6 +5302,7 @@ int rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp) { struct rte_eth_dev *dev; + int ret; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); dev = &rte_eth_devices[port_id]; @@ -5261,6 +5315,10 @@ rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp) } RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_time, -ENOTSUP); + ret = rte_eth_timestamp_offload_valid(dev); + if (ret != 0) + return ret; + return eth_err(port_id, (*dev->dev_ops->timesync_read_time)(dev, timestamp)); } @@ -5269,6 +5327,7 @@ int rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp) { struct rte_eth_dev *dev; + int ret; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); dev = &rte_eth_devices[port_id]; @@ -5281,6 +5340,10 @@ rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp) } RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_write_time, -ENOTSUP); + ret = rte_eth_timestamp_offload_valid(dev); + if (ret != 0) + return ret; + return eth_err(port_id, (*dev->dev_ops->timesync_write_time)(dev, timestamp)); } From patchwork Tue Jul 26 16:30:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 114225 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 28EB6A00C4; Tue, 26 Jul 2022 18:30:32 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with 
ESMTP id 000D14281C; Tue, 26 Jul 2022 18:30:27 +0200 (CEST) Received: from mail-pf1-f180.google.com (mail-pf1-f180.google.com [209.85.210.180]) by mails.dpdk.org (Postfix) with ESMTP id DA61740E0F for ; Tue, 26 Jul 2022 18:30:25 +0200 (CEST) Received: by mail-pf1-f180.google.com with SMTP id e16so13661000pfm.11 for ; Tue, 26 Jul 2022 09:30:25 -0700 (PDT) Received: from hermes.local
(204-195-120-218.wavecable.com. [204.195.120.218]) by smtp.gmail.com with ESMTPSA id y190-20020a6232c7000000b0051bbe085f16sm11844155pfy.104.2022.07.26.09.30.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 26 Jul 2022 09:30:25 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Thomas Monjalon , Ferruh Yigit , Andrew Rybchenko Subject: [PATCH v3 01/20] ethdev: reword dev_info_get description. Date: Tue, 26 Jul 2022 09:30:01 -0700 Message-Id: <20220726163020.15679-2-stephen@networkplumber.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220726163020.15679-1-stephen@networkplumber.org> References: <20220722214106.162640-1-stephen@networkplumber.org> <20220726163020.15679-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The original comment was redundant and contained a duplicated word ('of'). Signed-off-by: Stephen Hemminger --- lib/ethdev/rte_ethdev.h | 33 +++++---------------------------- 1 file changed, 5 insertions(+), 28 deletions(-) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index de9e970d4d11..d2eff20b8917 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -3356,34 +3356,11 @@ int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma, /** * Retrieve the contextual information of an Ethernet device.
* - * As part of this function, a number of of fields in dev_info will be - * initialized as follows: - * - * rx_desc_lim = lim - * tx_desc_lim = lim - * - * Where lim is defined within the rte_eth_dev_info_get as - * - * const struct rte_eth_desc_lim lim = { - * .nb_max = UINT16_MAX, - * .nb_min = 0, - * .nb_align = 1, - * .nb_seg_max = UINT16_MAX, - * .nb_mtu_seg_max = UINT16_MAX, - * }; - * - * device = dev->device - * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - * max_mtu = UINT16_MAX - * - * The following fields will be populated if support for dev_infos_get() - * exists for the device and the rte_eth_dev 'dev' has been populated - * successfully with a call to it: - * - * driver_name = dev->device->driver->name - * nb_rx_queues = dev->data->nb_rx_queues - * nb_tx_queues = dev->data->nb_tx_queues - * dev_flags = &dev->data->dev_flags + * The device information about driver, descriptors limits, + * capabilities, flags, and queues is returned. + * + * The fields are populated with generic values that then are + * overridden by the device driver specific values. * * @param port_id * The port identifier of the Ethernet device. 
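The "generic values first, then driver-specific overrides" flow that the reworded comment describes can be sketched with a small self-contained example. The struct and field names below are simplified stand-ins for illustration, not the real `rte_eth_dev_info` layout or the actual ethdev code path:

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for rte_eth_desc_lim / rte_eth_dev_info. */
struct desc_lim { uint16_t nb_max, nb_min, nb_align; };
struct dev_info { struct desc_lim rx_desc_lim; uint16_t max_mtu; };

/* Generic defaults, as the ethdev layer applies before calling the driver. */
static void info_set_defaults(struct dev_info *info)
{
	memset(info, 0, sizeof(*info));
	info->rx_desc_lim.nb_max = UINT16_MAX;
	info->rx_desc_lim.nb_min = 0;
	info->rx_desc_lim.nb_align = 1;
	info->max_mtu = UINT16_MAX;
}

/* A driver callback overrides only the fields it knows better. */
static void driver_infos_get(struct dev_info *info)
{
	info->rx_desc_lim.nb_max = 4096; /* hardware ring limit */
	info->max_mtu = 9600;            /* jumbo frame limit */
}

static void get_dev_info(struct dev_info *info)
{
	info_set_defaults(info);  /* generic values first */
	driver_infos_get(info);   /* then driver-specific overrides */
}
```

Fields the driver never touches (here `nb_align`) keep their generic defaults, which is why the new comment can describe the result in one sentence instead of enumerating every field.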
From patchwork Thu Aug 4 13:44:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ankur Dwivedi X-Patchwork-Id: 114617 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9BF51A00C4; Thu, 4 Aug 2022 15:48:17 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8D19742BE0; Thu, 4 Aug 2022 15:48:17 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 67BDB42BDB for ; Thu, 4 Aug 2022 15:48:15 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 274DK71p022469; Thu, 4 Aug 2022 06:45:56 -0700 Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3hqgf1xr24-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 04 Aug 2022 06:45:56 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Thu, 4 Aug 2022
06:45:52 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 4 Aug 2022 06:45:52 -0700 Received: from hyd1349.t110.caveonetworks.com.com (unknown [10.29.45.13]) by maili.marvell.com (Postfix) with ESMTP id 654783F7057; Thu, 4 Aug 2022 06:45:30 -0700 (PDT) From: Ankur Dwivedi To: CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Ankur Dwivedi Subject: [PATCH 1/6] ethdev: add trace points Date: Thu, 4 Aug 2022 19:14:25 +0530 Message-ID: <20220804134430.6192-2-adwivedi@marvell.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20220804134430.6192-1-adwivedi@marvell.com> References: <20220804134430.6192-1-adwivedi@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: otxNh5ddflM8-Y0RLStbzFOEcMm6tKaK X-Proofpoint-ORIG-GUID: otxNh5ddflM8-Y0RLStbzFOEcMm6tKaK X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.883,Hydra:6.0.517,FMLib:17.11.122.1 definitions=2022-08-04_03,2022-08-04_02,2022-06-22_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add trace points for ethdev functions. 
Signed-off-by: Ankur Dwivedi --- lib/ethdev/ethdev_private.c | 5 + lib/ethdev/ethdev_trace_points.c | 438 +++++++++++ lib/ethdev/rte_ethdev.c | 150 ++++ lib/ethdev/rte_ethdev_trace.h | 1182 ++++++++++++++++++++++++++++++ lib/ethdev/version.map | 147 ++++ 5 files changed, 1922 insertions(+) diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index 48090c879a..e483145816 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -5,6 +5,7 @@ #include #include "rte_ethdev.h" +#include "rte_ethdev_trace.h" #include "ethdev_driver.h" #include "ethdev_private.h" @@ -291,6 +292,8 @@ rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id, { const struct rte_eth_rxtx_callback *cb = opaque; + rte_eth_trace_call_rx_callbacks(port_id, queue_id, rx_pkts, nb_rx, + nb_pkts, opaque); while (cb != NULL) { nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx, nb_pkts, cb->param); @@ -306,6 +309,8 @@ rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id, { const struct rte_eth_rxtx_callback *cb = opaque; + rte_eth_trace_call_tx_callbacks(port_id, queue_id, tx_pkts, nb_pkts, + opaque); while (cb != NULL) { nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts, cb->param); diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c index 2919409a15..2e80401771 100644 --- a/lib/ethdev/ethdev_trace_points.c +++ b/lib/ethdev/ethdev_trace_points.c @@ -29,3 +29,441 @@ RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_burst, RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_tx_burst, lib.ethdev.tx.burst) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_add_first_rx_callback, + lib.ethdev.add_first_rx_callback) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_add_rx_callback, + lib.ethdev.add_rx_callback) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_add_tx_callback, + lib.ethdev.add_tx_callback) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_allmulticast_disable, + lib.ethdev.allmulticast_disable) + 
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_allmulticast_enable, + lib.ethdev.allmulticast_enable) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_allmulticast_get, + lib.ethdev.allmulticast_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_call_rx_callbacks, + lib.ethdev.call_rx_callbacks) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_call_tx_callbacks, + lib.ethdev.call_tx_callbacks) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_mtu, + lib.ethdev.set_mtu) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_adjust_nb_rx_tx_desc, + lib.ethdev.adjust_nb_rx_tx_desc) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_callback_register, + lib.ethdev.callback_register) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_callback_unregister, + lib.ethdev.callback_unregister) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_default_mac_addr_set, + lib.ethdev.default_mac_addr_set) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_flow_ctrl_get, + lib.ethdev.flow_ctrl_get) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_flow_ctrl_set, + lib.ethdev.flow_ctrl_set) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_fw_version_get, + lib.ethdev.fw_version_get) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_dcb_info, + lib.ethdev.get_dcb_info) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_eeprom, + lib.ethdev.get_eeprom) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_eeprom_length, + lib.ethdev.get_eeprom_length) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_mtu, + lib.ethdev.get_mtu) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_count_avail, + lib.ethdev.count_avail) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_count_total, + lib.ethdev.count_total) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_name_by_port, + lib.ethdev.get_name_by_port) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_port_by_name, + lib.ethdev.get_port_by_name) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_reg_info, + lib.ethdev.get_reg_info) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_sec_ctx, + lib.ethdev.get_sec_ctx) + 
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_supported_ptypes, + lib.ethdev.get_supported_ptypes) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_vlan_offload, + lib.ethdev.get_vlan_offload) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_info_get, + lib.ethdev.info_get) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_is_removed, + lib.ethdev.is_removed) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_is_valid_port, + lib.ethdev.is_valid_port) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_mac_addr_add, + lib.ethdev.mac_addr_add) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_mac_addr_remove, + lib.ethdev.mac_addr_remove) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_pool_ops_supported, + lib.ethdev.pool_ops_supported) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_priority_flow_ctrl_set, + lib.ethdev.priority_flow_ctrl_set) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_reset, + lib.ethdev.reset) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rss_hash_conf_get, + lib.ethdev.rss_hash_conf_get) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rss_hash_update, + lib.ethdev.rss_hash_update) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rss_reta_query, + lib.ethdev.rss_reta_query) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rss_reta_update, + lib.ethdev.rss_reta_update) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_intr_ctl, + lib.ethdev.rx_intr_ctl) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_intr_ctl_q, + lib.ethdev.rx_intr_ctl_q) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_intr_ctl_q_get_fd, + lib.ethdev.rx_intr_ctl_q_get_fd) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_intr_disable, + lib.ethdev.rx_intr_disable) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_intr_enable, + lib.ethdev.rx_intr_enable) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_offload_name, + lib.ethdev.rx_offload_name) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_queue_start, + lib.ethdev.rx_queue_start) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_queue_stop, + lib.ethdev.rx_queue_stop) + 
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_eeprom, + lib.ethdev.set_eeprom) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_link_down, + lib.ethdev.set_link_down) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_link_up, + lib.ethdev.set_link_up) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_mc_addr_list, + lib.ethdev.set_mc_addr_list) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_ptypes, + lib.ethdev.set_ptypes) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_rx_queue_stats_mapping, + lib.ethdev.set_rx_queue_stats_mapping) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_tx_queue_stats_mapping, + lib.ethdev.set_tx_queue_stats_mapping) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_vlan_ether_type, + lib.ethdev.set_vlan_ether_type) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_vlan_offload, + lib.ethdev.set_vlan_offload) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_vlan_pvid, + lib.ethdev.set_vlan_pvid) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_vlan_strip_on_queue, + lib.ethdev.set_vlan_strip_on_queue) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_socket_id, + lib.ethdev.socket_id) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_tx_offload_name, + lib.ethdev.tx_offload_name) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_tx_queue_start, + lib.ethdev.tx_queue_start) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_tx_queue_stop, + lib.ethdev.tx_queue_stop) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_uc_all_hash_table_set, + lib.ethdev.uc_all_hash_table_set) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_uc_hash_table_set, + lib.ethdev.uc_hash_table_set) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_udp_tunnel_port_add, + lib.ethdev.udp_tunnel_port_add) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_udp_tunnel_port_delete, + lib.ethdev.udp_tunnel_port_delete) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_vlan_filter, + lib.ethdev.vlan_filter) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_find_next, + lib.ethdev.find_next) + 
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_find_next_of, + lib.ethdev.find_next_of) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_find_next_owned_by, + lib.ethdev.find_next_owned_by) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_find_next_sibling, + lib.ethdev.find_next_sibling) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_iterator_cleanup, + lib.ethdev.iterator_cleanup) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_iterator_init, + lib.ethdev.iterator_init) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_iterator_next, + lib.ethdev.iterator_next) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_led_off, + lib.ethdev.led_off) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_led_on, + lib.ethdev.led_on) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_link_get, + lib.ethdev.link_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_link_get_nowait, + lib.ethdev.link_get_nowait) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_macaddr_get, + lib.ethdev.macaddr_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_promiscuous_disable, + lib.ethdev.promiscuous_disable) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_promiscuous_enable, + lib.ethdev.promiscuous_enable) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_promiscuous_get, + lib.ethdev.promiscuous_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_remove_rx_callback, + lib.ethdev.remove_rx_callback) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_remove_tx_callback, + lib.ethdev.remove_tx_callback) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_burst_mode_get, + lib.ethdev.rx_burst_mode_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_queue_info_get, + lib.ethdev.rx_queue_info_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_queue_setup, + lib.ethdev.rx_queue_setup) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_set_queue_rate_limit, + lib.ethdev.set_queue_rate_limit) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_speed_bitflag, + lib.ethdev.speed_bitflag) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_stats_get, + lib.ethdev.stats_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_stats_reset, + 
lib.ethdev.stats_reset) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_timesync_adjust_time, + lib.ethdev.timesync_adjust_time) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_timesync_disable, + lib.ethdev.timesync_disable) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_timesync_enable, + lib.ethdev.timesync_enable) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_timesync_read_rx_timestamp, + lib.ethdev.timesync_read_rx_timestamp) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_timesync_read_time, + lib.ethdev.timesync_read_time) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_timesync_read_tx_timestamp, + lib.ethdev.timesync_read_tx_timestamp) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_timesync_write_time, + lib.ethdev.timesync_write_time) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_buffer_count_callback, + lib.ethdev.tx_buffer_count_callback) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_buffer_drop_callback, + lib.ethdev.tx_buffer_drop_callback) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_buffer_init, + lib.ethdev.tx_buffer_init) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_buffer_set_err_callback, + lib.ethdev.tx_buffer_set_err_callback) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_burst_mode_get, + lib.ethdev.tx_burst_mode_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_done_cleanup, + lib.ethdev.tx_done_cleanup) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_queue_info_get, + lib.ethdev.tx_queue_info_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_queue_setup, + lib.ethdev.tx_queue_setup) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_xstats_get, + lib.ethdev.xstats_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_xstats_get_by_id, + lib.ethdev.xstats_get_by_id) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_xstats_get_id_by_name, + lib.ethdev.xstats_get_id_by_name) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_xstats_get_names, + lib.ethdev.xstats_get_names) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_xstats_get_names_by_id, + lib.ethdev.xstats_get_names_by_id) + 
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_xstats_reset, + lib.ethdev.xstats_reset) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_owner_delete, + lib.ethdev.owner_delete) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_owner_get, + lib.ethdev.owner_get) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_owner_new, + lib.ethdev.owner_new) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_owner_set, + lib.ethdev.owner_set) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_owner_unset, + lib.ethdev.owner_unset) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_module_eeprom, + lib.ethdev.get_module_eeprom) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_module_info, + lib.ethdev.get_module_info) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_read_clock, + lib.ethdev.read_clock) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_hairpin_capability_get, + lib.ethdev.hairpin_capability_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_hairpin_queue_setup, + lib.ethdev.rx.hairpin_queue_setup) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_hairpin_queue_setup, + lib.ethdev.tx.hairpin_queue_setup) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_hairpin_bind, + lib.ethdev.hairpin_bind) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_hairpin_get_peer_ports, + lib.ethdev.hairpin_get_peer_ports) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_hairpin_unbind, + lib.ethdev.hairpin_unbind) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_link_speed_to_str, + lib.ethdev.link_speed_to_str) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_link_to_str, + lib.ethdev.link_to_str) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_fec_get_capability, + lib.ethdev.fec_get_capability) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_fec_get, + lib.ethdev.fec_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_fec_set, + lib.ethdev.fec_set) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_get_monitor_addr, + lib.ethdev.get_monitor_addr) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_representor_info_get, + lib.ethdev.representor_info_get) + 
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_capability_name, + lib.ethdev.capability_name) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_conf_get, + lib.ethdev.conf_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_macaddrs_get, + lib.ethdev.macaddrs_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_metadata_negotiate, + lib.ethdev.rx_metadata_negotiate) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_priority_flow_ctrl_queue_configure, + lib.ethdev.priority_flow_ctrl_queue_configure) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_priority_flow_ctrl_queue_info_get, + lib.ethdev.priority_flow_ctrl_queue_info_get) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_priv_dump, + lib.ethdev.priv_dump) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_ip_reassembly_capability_get, + lib.ethdev.ip_reassembly_capability_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_ip_reassembly_conf_get, + lib.ethdev.ip_reassembly_conf_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_ip_reassembly_conf_set, + lib.ethdev.ip_reassembly_conf_set) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_avail_thresh_query, + lib.ethdev.rx_avail_thresh_query) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_avail_thresh_set, + lib.ethdev.rx_avail_thresh_set) diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 1979dc0850..a6fb370b22 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -167,6 +167,7 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str) char *cls_str = NULL; int str_size; + rte_eth_trace_iterator_init(iter, devargs_str); if (iter == NULL) { RTE_ETHDEV_LOG(ERR, "Cannot initialize NULL iterator\n"); return -EINVAL; @@ -273,6 +274,7 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str) uint16_t rte_eth_iterator_next(struct rte_dev_iterator *iter) { + rte_eth_trace_iterator_next(iter); if (iter == NULL) { RTE_ETHDEV_LOG(ERR, "Cannot get next device from NULL iterator\n"); @@ -308,6 +310,7 @@ rte_eth_iterator_next(struct rte_dev_iterator 
*iter) void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter) { + rte_eth_trace_iterator_cleanup(iter); if (iter == NULL) { RTE_ETHDEV_LOG(ERR, "Cannot do clean up from NULL iterator\n"); return; @@ -323,6 +326,7 @@ rte_eth_iterator_cleanup(struct rte_dev_iterator *iter) uint16_t rte_eth_find_next(uint16_t port_id) { + rte_eth_trace_find_next(port_id); while (port_id < RTE_MAX_ETHPORTS && rte_eth_devices[port_id].state == RTE_ETH_DEV_UNUSED) port_id++; @@ -345,6 +349,7 @@ rte_eth_find_next(uint16_t port_id) uint16_t rte_eth_find_next_of(uint16_t port_id, const struct rte_device *parent) { + rte_eth_trace_find_next_of(port_id, parent); port_id = rte_eth_find_next(port_id); while (port_id < RTE_MAX_ETHPORTS && rte_eth_devices[port_id].device != parent) @@ -356,6 +361,7 @@ rte_eth_find_next_of(uint16_t port_id, const struct rte_device *parent) uint16_t rte_eth_find_next_sibling(uint16_t port_id, uint16_t ref_port_id) { + rte_eth_trace_find_next_sibling(port_id, ref_port_id); RTE_ETH_VALID_PORTID_OR_ERR_RET(ref_port_id, RTE_MAX_ETHPORTS); return rte_eth_find_next_of(port_id, rte_eth_devices[ref_port_id].device); @@ -370,6 +376,7 @@ eth_dev_is_allocated(const struct rte_eth_dev *ethdev) int rte_eth_dev_is_valid_port(uint16_t port_id) { + rte_ethdev_trace_is_valid_port(port_id); if (port_id >= RTE_MAX_ETHPORTS || (rte_eth_devices[port_id].state == RTE_ETH_DEV_UNUSED)) return 0; @@ -389,6 +396,7 @@ eth_is_valid_owner_id(uint64_t owner_id) uint64_t rte_eth_find_next_owned_by(uint16_t port_id, const uint64_t owner_id) { + rte_eth_trace_find_next_owned_by(port_id, owner_id); port_id = rte_eth_find_next(port_id); while (port_id < RTE_MAX_ETHPORTS && rte_eth_devices[port_id].data->owner.id != owner_id) @@ -412,6 +420,7 @@ rte_eth_dev_owner_new(uint64_t *owner_id) *owner_id = eth_dev_shared_data->next_owner_id++; rte_spinlock_unlock(ð_dev_shared_data->ownership_lock); + rte_ethdev_trace_owner_new(*owner_id); return 0; } @@ -468,6 +477,7 @@ rte_eth_dev_owner_set(const 
uint16_t port_id, { int ret; + rte_ethdev_trace_owner_set(port_id, owner); eth_dev_shared_data_prepare(); rte_spinlock_lock(ð_dev_shared_data->ownership_lock); @@ -485,6 +495,7 @@ rte_eth_dev_owner_unset(const uint16_t port_id, const uint64_t owner_id) {.id = RTE_ETH_DEV_NO_OWNER, .name = ""}; int ret; + rte_ethdev_trace_owner_unset(port_id, owner_id); eth_dev_shared_data_prepare(); rte_spinlock_lock(ð_dev_shared_data->ownership_lock); @@ -525,6 +536,7 @@ rte_eth_dev_owner_delete(const uint64_t owner_id) rte_spinlock_unlock(ð_dev_shared_data->ownership_lock); + rte_ethdev_trace_owner_delete(owner_id, ret); return ret; } @@ -554,12 +566,14 @@ rte_eth_dev_owner_get(const uint16_t port_id, struct rte_eth_dev_owner *owner) rte_memcpy(owner, ðdev->data->owner, sizeof(*owner)); rte_spinlock_unlock(ð_dev_shared_data->ownership_lock); + rte_ethdev_trace_owner_get(port_id, owner); return 0; } int rte_eth_dev_socket_id(uint16_t port_id) { + rte_ethdev_trace_socket_id(port_id); RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -1); return rte_eth_devices[port_id].data->numa_node; } @@ -567,6 +581,7 @@ rte_eth_dev_socket_id(uint16_t port_id) void * rte_eth_dev_get_sec_ctx(uint16_t port_id) { + rte_ethdev_trace_get_sec_ctx(port_id); RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL); return rte_eth_devices[port_id].security_ctx; } @@ -582,6 +597,7 @@ rte_eth_dev_count_avail(void) RTE_ETH_FOREACH_DEV(p) count++; + rte_ethdev_trace_count_avail(count); return count; } @@ -593,6 +609,7 @@ rte_eth_dev_count_total(void) RTE_ETH_FOREACH_VALID_DEV(port) count++; + rte_ethdev_trace_count_total(count); return count; } @@ -613,6 +630,7 @@ rte_eth_dev_get_name_by_port(uint16_t port_id, char *name) * because it might be overwritten by VDEV PMD */ tmp = eth_dev_shared_data->data[port_id].name; strcpy(name, tmp); + rte_ethdev_trace_get_name_by_port(port_id, name); return 0; } @@ -635,6 +653,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id) RTE_ETH_FOREACH_VALID_DEV(pid) if 
 		    (!strcmp(name, eth_dev_shared_data->data[pid].name)) {
 			*port_id = pid;
+			rte_ethdev_trace_get_port_by_name(name, *port_id);
 			return 0;
 		}
@@ -705,6 +724,7 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_rx_queue_start(port_id, rx_queue_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -744,6 +764,7 @@ rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_rx_queue_stop(port_id, rx_queue_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -776,6 +797,7 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_tx_queue_start(port_id, tx_queue_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -815,6 +837,7 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_tx_queue_stop(port_id, tx_queue_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -844,6 +867,7 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
 uint32_t
 rte_eth_speed_bitflag(uint32_t speed, int duplex)
 {
+	rte_eth_trace_speed_bitflag(speed, duplex);
 	switch (speed) {
 	case RTE_ETH_SPEED_NUM_10M:
 		return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
@@ -889,6 +913,7 @@ rte_eth_dev_rx_offload_name(uint64_t offload)
 		}
 	}
 
+	rte_ethdev_trace_rx_offload_name(offload, name);
 	return name;
 }
@@ -905,6 +930,7 @@ rte_eth_dev_tx_offload_name(uint64_t offload)
 		}
 	}
 
+	rte_ethdev_trace_tx_offload_name(offload, name);
 	return name;
 }
@@ -921,6 +947,7 @@ rte_eth_dev_capability_name(uint64_t capability)
 		}
 	}
 
+	rte_ethdev_trace_capability_name(capability, name);
 	return name;
 }
@@ -1538,6 +1565,7 @@ rte_eth_dev_set_link_up(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_set_link_up(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -1550,6 +1578,7 @@ rte_eth_dev_set_link_down(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_set_link_down(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -1596,6 +1625,7 @@ rte_eth_dev_reset(uint16_t port_id)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_reset(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -1631,6 +1661,7 @@ rte_eth_dev_is_removed(uint16_t port_id)
 		/* Device is physically removed. */
 		dev->state = RTE_ETH_DEV_REMOVED;
 
+	rte_ethdev_trace_is_removed(port_id, ret);
 	return ret;
 }
@@ -1910,6 +1941,8 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	int i;
 	int count;
 
+	rte_eth_trace_rx_hairpin_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+					     conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -1985,6 +2018,8 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
+	rte_eth_trace_tx_queue_setup(port_id, tx_queue_id, nb_tx_desc,
+				     socket_id, tx_conf);
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
 		RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id);
 		return -EINVAL;
@@ -2076,6 +2111,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 	int count;
 	int ret;
 
+	rte_eth_trace_tx_hairpin_queue_setup(port_id, tx_queue_id, nb_tx_desc, conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2144,6 +2180,7 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_eth_trace_hairpin_bind(tx_port, rx_port);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port, -ENODEV);
 	dev = &rte_eth_devices[tx_port];
@@ -2168,6 +2205,7 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_eth_trace_hairpin_unbind(tx_port, rx_port);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port, -ENODEV);
 	dev = &rte_eth_devices[tx_port];
@@ -2193,6 +2231,7 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_eth_trace_hairpin_get_peer_ports(port_id, peer_ports, len, direction);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2226,6 +2265,7 @@ void
 rte_eth_tx_buffer_drop_callback(struct rte_mbuf **pkts, uint16_t unsent,
 		void *userdata __rte_unused)
 {
+	rte_eth_trace_tx_buffer_drop_callback(pkts, unsent);
 	rte_pktmbuf_free_bulk(pkts, unsent);
 }
@@ -2237,12 +2277,14 @@ rte_eth_tx_buffer_count_callback(struct rte_mbuf **pkts, uint16_t unsent,
 	rte_pktmbuf_free_bulk(pkts, unsent);
 	*count += unsent;
+	rte_eth_trace_tx_buffer_count_callback(pkts, unsent, *count);
 }
 
 int
 rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer,
 		buffer_tx_error_fn cbfn, void *userdata)
 {
+	rte_eth_trace_tx_buffer_set_err_callback(buffer, cbfn, userdata);
 	if (buffer == NULL) {
 		RTE_ETHDEV_LOG(ERR,
 			"Cannot set Tx buffer error callback to NULL buffer\n");
@@ -2259,6 +2301,7 @@ rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size)
 {
 	int ret = 0;
 
+	rte_eth_trace_tx_buffer_init(buffer, size);
 	if (buffer == NULL) {
 		RTE_ETHDEV_LOG(ERR, "Cannot initialize NULL buffer\n");
 		return -EINVAL;
@@ -2279,6 +2322,7 @@ rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_eth_trace_tx_done_cleanup(port_id, queue_id, free_cnt);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2296,6 +2340,7 @@ rte_eth_promiscuous_enable(uint16_t port_id)
 	struct rte_eth_dev *dev;
 	int diag = 0;
 
+	rte_eth_trace_promiscuous_enable(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2316,6 +2361,7 @@ rte_eth_promiscuous_disable(uint16_t port_id)
 	struct rte_eth_dev *dev;
 	int diag = 0;
 
+	rte_eth_trace_promiscuous_disable(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2337,6 +2383,7 @@ rte_eth_promiscuous_get(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_promiscuous_get(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2349,6 +2396,7 @@ rte_eth_allmulticast_enable(uint16_t port_id)
 	struct rte_eth_dev *dev;
 	int diag;
 
+	rte_eth_trace_allmulticast_enable(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2368,6 +2416,7 @@ rte_eth_allmulticast_disable(uint16_t port_id)
 	struct rte_eth_dev *dev;
 	int diag;
 
+	rte_eth_trace_allmulticast_disable(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2388,6 +2437,7 @@ rte_eth_allmulticast_get(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_allmulticast_get(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2416,6 +2466,7 @@ rte_eth_link_get(uint16_t port_id, struct rte_eth_link *eth_link)
 		*eth_link = dev->data->dev_link;
 	}
 
+	rte_eth_trace_link_get(port_id, eth_link);
 	return 0;
 }
@@ -2441,12 +2492,14 @@ rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link)
 		*eth_link = dev->data->dev_link;
 	}
 
+	rte_eth_trace_link_get_nowait(port_id, eth_link);
 	return 0;
 }
 
 const char *
 rte_eth_link_speed_to_str(uint32_t link_speed)
 {
+	rte_eth_trace_link_speed_to_str(link_speed);
 	switch (link_speed) {
 	case RTE_ETH_SPEED_NUM_NONE: return "None";
 	case RTE_ETH_SPEED_NUM_10M: return "10 Mbps";
@@ -2470,6 +2523,7 @@ rte_eth_link_speed_to_str(uint32_t link_speed)
 int
 rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 {
+	rte_eth_trace_link_to_str(str, len, eth_link);
 	if (str == NULL) {
 		RTE_ETHDEV_LOG(ERR, "Cannot convert link to NULL string\n");
 		return -EINVAL;
@@ -2502,6 +2556,7 @@ rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_stats_get(port_id, stats);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2524,6 +2579,7 @@ rte_eth_stats_reset(uint16_t port_id)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_eth_trace_stats_reset(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2598,6 +2654,7 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name,
 		return -ENOMEM;
 	}
 
+	rte_eth_trace_xstats_get_id_by_name(port_id, xstat_name, id);
 	/* Get count */
 	cnt_xstats = rte_eth_xstats_get_names_by_id(port_id, NULL, 0, NULL);
 	if (cnt_xstats < 0) {
@@ -2770,6 +2827,8 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id,
 			return -1;
 		}
 		xstats_names[i] = xstats_names_copy[ids[i]];
+		rte_eth_trace_xstats_get_names_by_id(port_id, &xstats_names[i],
+						     ids[i]);
 	}
 
 	free(xstats_names_copy);
@@ -2809,6 +2868,7 @@ rte_eth_xstats_get_names(uint16_t port_id,
 		cnt_used_entries += cnt_driver_entries;
 	}
 
+	rte_eth_trace_xstats_get_names(port_id, xstats_names, size, cnt_used_entries);
 	return cnt_used_entries;
 }
@@ -2884,6 +2944,7 @@ rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids,
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
+	rte_eth_trace_xstats_get_by_id(port_id, ids, values, size);
 	ret = eth_dev_get_xstats_count(port_id);
 	if (ret < 0)
 		return ret;
@@ -3005,6 +3066,8 @@ rte_eth_xstats_get(uint16_t port_id, struct rte_eth_xstat *xstats,
 	for ( ; i < count + xcount; i++)
 		xstats[i].id += count;
 
+	for (i = 0; i < n; i++)
+		rte_eth_trace_xstats_get(port_id, xstats[i], i);
 	return count + xcount;
 }
@@ -3017,6 +3080,7 @@ rte_eth_xstats_reset(uint16_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
+	rte_eth_trace_xstats_reset(port_id);
 	/* implemented by the driver */
 	if (dev->dev_ops->xstats_reset != NULL)
 		return eth_err(port_id, (*dev->dev_ops->xstats_reset)(dev));
@@ -3051,6 +3115,8 @@ int
 rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id, uint16_t tx_queue_id,
 		uint8_t stat_idx)
 {
+	rte_ethdev_trace_set_tx_queue_stats_mapping(port_id, tx_queue_id,
+						    stat_idx);
 	return eth_err(port_id, eth_dev_set_queue_stats_mapping(port_id,
 						tx_queue_id,
 						stat_idx, STAT_QMAP_TX));
@@ -3060,6 +3126,8 @@ int
 rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id, uint16_t rx_queue_id,
 		uint8_t stat_idx)
 {
+	rte_ethdev_trace_set_rx_queue_stats_mapping(port_id, rx_queue_id,
+						    stat_idx);
 	return eth_err(port_id, eth_dev_set_queue_stats_mapping(port_id,
 						rx_queue_id,
 						stat_idx, STAT_QMAP_RX));
@@ -3070,6 +3138,7 @@ rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_fw_version_get(port_id, fw_version, fw_size);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3141,6 +3210,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
 
 	dev_info->dev_flags = &dev->data->dev_flags;
 
+	rte_ethdev_trace_info_get(port_id, dev_info);
 	return 0;
 }
@@ -3149,6 +3219,7 @@ rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_conf_get(port_id, dev_conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3172,6 +3243,7 @@ rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	struct rte_eth_dev *dev;
 	const uint32_t *all_ptypes;
 
+	rte_ethdev_trace_get_supported_ptypes(port_id, ptype_mask, ptypes, num);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3217,6 +3289,7 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	unsigned int i, j;
 	int ret;
 
+	rte_ethdev_trace_set_ptypes(port_id, ptype_mask, set_ptypes, num);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3296,6 +3369,7 @@ rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
+	rte_eth_trace_macaddrs_get(port_id, ma, num);
 	if (ma == NULL) {
 		RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__);
 		return -EINVAL;
@@ -3318,6 +3392,7 @@ rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_macaddr_get(port_id, mac_addr);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3348,6 +3423,7 @@ rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu)
 	}
 
 	*mtu = dev->data->mtu;
+	rte_ethdev_trace_get_mtu(port_id, *mtu);
 	return 0;
 }
@@ -3389,6 +3465,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
 	if (ret == 0)
 		dev->data->mtu = mtu;
 
+	rte_ethdev_trace_set_mtu(port_id, mtu, ret);
 	return eth_err(port_id, ret);
 }
@@ -3398,6 +3475,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_vlan_filter(port_id, vlan_id, on);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3440,6 +3518,7 @@ rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_set_vlan_strip_on_queue(port_id, rx_queue_id, on);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3461,6 +3540,7 @@ rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_set_vlan_ether_type(port_id, vlan_type, tpid);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3481,6 +3561,7 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 	uint64_t dev_offloads;
 	uint64_t new_offloads;
 
+	rte_ethdev_trace_set_vlan_offload(port_id, offload_mask);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3583,6 +3664,7 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
 	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 		ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
 
+	rte_ethdev_trace_get_vlan_offload(port_id, ret);
 	return ret;
 }
@@ -3591,6 +3673,7 @@ rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_set_vlan_pvid(port_id, pvid, on);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3603,6 +3686,7 @@ rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_flow_ctrl_get(port_id, fc_conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3623,6 +3707,7 @@ rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_flow_ctrl_set(port_id, fc_conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3648,6 +3733,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_priority_flow_ctrl_set(port_id, pfc_conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3735,6 +3821,7 @@ rte_eth_dev_priority_flow_ctrl_queue_info_get(uint16_t port_id,
 		return -EINVAL;
 	}
 
+	rte_ethdev_trace_priority_flow_ctrl_queue_info_get(port_id, pfc_queue_info);
 	if (*dev->dev_ops->priority_flow_ctrl_queue_info_get)
 		return eth_err(port_id, (*dev->dev_ops->priority_flow_ctrl_queue_info_get)
 			(dev, pfc_queue_info));
@@ -3750,6 +3837,8 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_priority_flow_ctrl_queue_configure(port_id,
+							    pfc_queue_conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3865,6 +3954,7 @@ rte_eth_dev_rss_reta_update(uint16_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_rss_reta_update(port_id, reta_conf, reta_size);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3912,6 +4002,7 @@ rte_eth_dev_rss_reta_query(uint16_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_rss_reta_query(port_id, reta_conf, reta_size);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3941,6 +4032,7 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
 	enum rte_eth_rx_mq_mode mq_mode;
 	int ret;
 
+	rte_ethdev_trace_rss_hash_update(port_id, rss_conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3982,6 +4074,7 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_rss_hash_conf_get(port_id, rss_conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4003,6 +4096,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_udp_tunnel_port_add(port_id, udp_tunnel);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4029,6 +4123,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_udp_tunnel_port_delete(port_id, udp_tunnel);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4054,6 +4149,7 @@ rte_eth_led_on(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_led_on(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4066,6 +4162,7 @@ rte_eth_led_off(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_led_off(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4081,6 +4178,7 @@ rte_eth_fec_get_capability(uint16_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_eth_trace_fec_get_capability(port_id, speed_fec_capa, num);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4102,6 +4200,7 @@ rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_fec_get(port_id, fec_capa);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4121,6 +4220,7 @@ rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_fec_set(port_id, fec_capa);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4163,6 +4263,7 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 	uint64_t pool_mask;
 	int ret;
 
+	rte_ethdev_trace_mac_addr_add(port_id, addr, pool);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4221,6 +4322,7 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr)
 	struct rte_eth_dev *dev;
 	int index;
 
+	rte_ethdev_trace_mac_addr_remove(port_id, addr);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4260,6 +4362,7 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_default_mac_addr_set(port_id, addr);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4322,6 +4425,7 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr,
 	int ret;
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_uc_hash_table_set(port_id, addr, on);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4379,6 +4483,7 @@ rte_eth_dev_uc_all_hash_table_set(uint16_t port_id, uint8_t on)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_uc_all_hash_table_set(port_id, on);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4395,6 +4500,7 @@ int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
 	struct rte_eth_link link;
 	int ret;
 
+	rte_eth_trace_set_queue_rate_limit(port_id, queue_idx, tx_rate);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4428,6 +4534,7 @@ int rte_eth_rx_avail_thresh_set(uint16_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_rx_avail_thresh_set(port_id, queue_id, avail_thresh);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4462,6 +4569,7 @@ int rte_eth_rx_avail_thresh_query(uint16_t port_id, uint16_t *queue_id,
 	if (*queue_id >= dev->data->nb_rx_queues)
 		*queue_id = 0;
 
+	rte_eth_trace_rx_avail_thresh_query(port_id, *queue_id);
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_avail_thresh_query,
 				-ENOTSUP);
 	return eth_err(port_id, (*dev->dev_ops->rx_queue_avail_thresh_query)(dev,
 							queue_id, avail_thresh));
@@ -4493,6 +4601,7 @@ rte_eth_dev_callback_register(uint16_t port_id,
 	uint16_t next_port;
 	uint16_t last_port;
 
+	rte_ethdev_trace_callback_register(port_id, event, cb_fn, cb_arg);
 	if (cb_fn == NULL) {
 		RTE_ETHDEV_LOG(ERR,
 			"Cannot register ethdev port %u callback from NULL\n",
@@ -4560,6 +4669,7 @@ rte_eth_dev_callback_unregister(uint16_t port_id,
 	uint16_t next_port;
 	uint16_t last_port;
 
+	rte_ethdev_trace_callback_unregister(port_id, event, cb_fn, cb_arg);
 	if (cb_fn == NULL) {
 		RTE_ETHDEV_LOG(ERR,
 			"Cannot unregister ethdev port %u callback from NULL\n",
@@ -4619,6 +4729,7 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
 	uint16_t qid;
 	int rc;
 
+	rte_ethdev_trace_rx_intr_ctl(port_id, epfd, op, data);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4679,6 +4790,7 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
 		(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
 	fd = rte_intr_efds_index_get(intr_handle, efd_idx);
 
+	rte_ethdev_trace_rx_intr_ctl_q_get_fd(port_id, queue_id, fd);
 	return fd;
 }
@@ -4691,6 +4803,7 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
 	struct rte_intr_handle *intr_handle;
 	int rc;
 
+	rte_ethdev_trace_rx_intr_ctl_q(port_id, queue_id, epfd, op, data);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4729,6 +4842,7 @@ rte_eth_dev_rx_intr_enable(uint16_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_rx_intr_enable(port_id, queue_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4747,6 +4861,7 @@ rte_eth_dev_rx_intr_disable(uint16_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
+	rte_ethdev_trace_rx_intr_disable(port_id, queue_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -4769,6 +4884,7 @@ rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id,
 #endif
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_add_rx_callback(port_id, queue_id, fn, user_param);
 	/* check input parameters */
 	if (!rte_eth_dev_is_valid_port(port_id) || fn == NULL ||
 		    queue_id >= rte_eth_devices[port_id].data->nb_rx_queues) {
@@ -4824,6 +4940,7 @@ rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id,
 	rte_errno = ENOTSUP;
 	return NULL;
 #endif
+	rte_eth_trace_add_first_rx_callback(port_id, queue_id, fn, user_param);
 	/* check input parameters */
 	if (!rte_eth_dev_is_valid_port(port_id) || fn == NULL ||
 		queue_id >= rte_eth_devices[port_id].data->nb_rx_queues) {
@@ -4865,6 +4982,7 @@ rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id,
 #endif
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_add_tx_callback(port_id, queue_id, fn, user_param);
 	/* check input parameters */
 	if (!rte_eth_dev_is_valid_port(port_id) || fn == NULL ||
 		    queue_id >= rte_eth_devices[port_id].data->nb_tx_queues) {
@@ -4932,6 +5050,7 @@ rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
 	struct rte_eth_rxtx_callback **prev_cb;
 	int ret = -EINVAL;
 
+	rte_eth_trace_remove_rx_callback(port_id, queue_id, user_cb);
 	rte_spinlock_lock(&eth_dev_rx_cb_lock);
 	prev_cb = &dev->post_rx_burst_cbs[queue_id];
 	for (; *prev_cb != NULL; prev_cb = &cb->next) {
@@ -4966,6 +5085,7 @@ rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id,
 	struct rte_eth_rxtx_callback *cb;
 	struct rte_eth_rxtx_callback **prev_cb;
 
+	rte_eth_trace_remove_tx_callback(port_id, queue_id, user_cb);
 	rte_spinlock_lock(&eth_dev_tx_cb_lock);
 	prev_cb = &dev->pre_tx_burst_cbs[queue_id];
 	for (; *prev_cb != NULL; prev_cb = &cb->next) {
@@ -5024,6 +5144,7 @@ rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
 	qinfo->queue_state = dev->data->rx_queue_state[queue_id];
 
+	rte_eth_trace_rx_queue_info_get(port_id, queue_id, qinfo);
 	return 0;
 }
@@ -5069,6 +5190,7 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
 	qinfo->queue_state = dev->data->tx_queue_state[queue_id];
 
+	rte_eth_trace_tx_queue_info_get(port_id, queue_id, qinfo);
 	return 0;
 }
@@ -5078,6 +5200,7 @@ rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_rx_burst_mode_get(port_id, queue_id, mode);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5105,6 +5228,7 @@ rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_tx_burst_mode_get(port_id, queue_id, mode);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5132,6 +5256,7 @@ rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_get_monitor_addr(port_id, queue_id, pmc);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5159,6 +5284,7 @@ rte_eth_dev_set_mc_addr_list(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_set_mc_addr_list(port_id, mc_addr_set, nb_mc_addr);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5172,6 +5298,7 @@ rte_eth_timesync_enable(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_timesync_enable(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5184,6 +5311,7 @@ rte_eth_timesync_disable(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_timesync_disable(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5197,6 +5325,7 @@ rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp,
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_timesync_read_rx_timestamp(port_id, timestamp, flags);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5218,6 +5347,7 @@ rte_eth_timesync_read_tx_timestamp(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_timesync_read_tx_timestamp(port_id, timestamp);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5238,6 +5368,7 @@ rte_eth_timesync_adjust_time(uint16_t port_id, int64_t delta)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_timesync_adjust_time(port_id, delta);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5250,6 +5381,7 @@ rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_timesync_read_time(port_id, timestamp);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5270,6 +5402,7 @@ rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_timesync_write_time(port_id, timestamp);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5290,6 +5423,7 @@ rte_eth_read_clock(uint16_t port_id, uint64_t *clock)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_read_clock(port_id, clock);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5308,6 +5442,7 @@ rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_get_reg_info(port_id, info);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5327,6 +5462,7 @@ rte_eth_dev_get_eeprom_length(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_get_eeprom_length(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5339,6 +5475,7 @@ rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
{
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_get_eeprom(port_id, info);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5358,6 +5495,7 @@ rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_set_eeprom(port_id, info);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5378,6 +5516,7 @@ rte_eth_dev_get_module_info(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_get_module_info(port_id, modinfo);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5398,6 +5537,7 @@ rte_eth_dev_get_module_eeprom(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_get_module_eeprom(port_id, info);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5432,6 +5572,7 @@ rte_eth_dev_get_dcb_info(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_get_dcb_info(port_id, dcb_info);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5481,6 +5622,7 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint16_t port_id,
 	if (nb_tx_desc != NULL)
 		eth_dev_adjust_nb_desc(nb_tx_desc, &dev_info.tx_desc_lim);
 
+	rte_ethdev_trace_adjust_nb_rx_tx_desc(port_id, *nb_rx_desc, *nb_tx_desc);
 	return 0;
 }
@@ -5490,6 +5632,7 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_hairpin_capability_get(port_id, cap);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5510,6 +5653,7 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_pool_ops_supported(port_id, pool);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5772,6 +5916,7 @@ rte_eth_representor_info_get(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_representor_info_get(port_id, info);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5784,6 +5929,7 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_rx_metadata_negotiate(port_id, features);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5810,6 +5956,7 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_ip_reassembly_capability_get(port_id, reassembly_capa);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5840,6 +5987,7 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_ip_reassembly_conf_get(port_id, conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5869,6 +6017,7 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
 {
 	struct rte_eth_dev *dev;
 
+	rte_eth_trace_ip_reassembly_conf_set(port_id, conf);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -5905,6 +6054,7 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
 {
 	struct rte_eth_dev *dev;
 
+	rte_ethdev_trace_priv_dump(port_id);
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index 1491c815c3..de728d355d 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -88,6 +88,1188 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_u16(port_id);
 )
 
+RTE_TRACE_POINT(
+	rte_eth_trace_add_first_rx_callback,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		rte_rx_callback_fn fn, void *user_param),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(fn);
+	rte_trace_point_emit_ptr(user_param);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_add_rx_callback,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		rte_rx_callback_fn fn, void *user_param),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(fn);
+	rte_trace_point_emit_ptr(user_param);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_add_tx_callback,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		rte_tx_callback_fn fn, void *user_param),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(fn);
+	rte_trace_point_emit_ptr(user_param);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_allmulticast_disable,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_allmulticast_enable,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_allmulticast_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_call_rx_callbacks,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx,
+		uint16_t nb_pkts, void *opaque),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(rx_pkts);
+	rte_trace_point_emit_u16(nb_rx);
+	rte_trace_point_emit_u16(nb_pkts);
+	rte_trace_point_emit_ptr(opaque);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_call_tx_callbacks,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **tx_pkts, uint16_t nb_pkts,
+		void *opaque),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(tx_pkts);
+	rte_trace_point_emit_u16(nb_pkts);
+	rte_trace_point_emit_ptr(opaque);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_mtu,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t mtu, int ret),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(mtu);
+	rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_adjust_nb_rx_tx_desc,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t nb_rx_desc,
+		uint16_t nb_tx_desc),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(nb_rx_desc);
+	rte_trace_point_emit_u16(nb_tx_desc);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_callback_register,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, enum rte_eth_event_type event,
+		rte_eth_dev_cb_fn cb_fn, void *cb_arg),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_int(event);
+	rte_trace_point_emit_ptr(cb_fn);
+	rte_trace_point_emit_ptr(cb_arg);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_callback_unregister,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, enum rte_eth_event_type event,
+		rte_eth_dev_cb_fn cb_fn, void *cb_arg),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_int(event);
+	rte_trace_point_emit_ptr(cb_fn);
+	rte_trace_point_emit_ptr(cb_arg);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_default_mac_addr_set,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_ether_addr *addr),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(addr);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_flow_ctrl_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_fc_conf *fc_conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(fc_conf);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_flow_ctrl_set,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_fc_conf *fc_conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u32(fc_conf->high_water);
+	rte_trace_point_emit_u32(fc_conf->low_water);
+	rte_trace_point_emit_u16(fc_conf->pause_time);
+	rte_trace_point_emit_u16(fc_conf->send_xon);
+	rte_trace_point_emit_int(fc_conf->mode);
+	rte_trace_point_emit_u8(fc_conf->mac_ctrl_frame_fwd);
+	rte_trace_point_emit_u8(fc_conf->autoneg);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_fw_version_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, char *fw_version, size_t fw_size),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(fw_version);
+	rte_trace_point_emit_size_t(fw_size);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_dcb_info,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_dcb_info *dcb_info),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(dcb_info);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_eeprom,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_dev_eeprom_info *info),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(info);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_eeprom_length,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_mtu,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t mtu),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(mtu);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_count_avail,
+	RTE_TRACE_POINT_ARGS(uint16_t count),
+	rte_trace_point_emit_u16(count);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_count_total,
+	RTE_TRACE_POINT_ARGS(uint16_t count),
+	rte_trace_point_emit_u16(count);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_name_by_port,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, char *name),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_string(name);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_port_by_name,
+	RTE_TRACE_POINT_ARGS(const char *name, uint16_t port_id),
+	rte_trace_point_emit_string(name);
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_reg_info,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_dev_reg_info *info),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(info);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_sec_ctx,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_supported_ptypes,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t ptype_mask,
+		uint32_t *ptypes, int num),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u32(ptype_mask);
+	rte_trace_point_emit_ptr(ptypes);
+	rte_trace_point_emit_int(num);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_vlan_offload,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, int ret),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_info_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_dev_info *dev_info),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_string(dev_info->driver_name);
+	rte_trace_point_emit_u32(dev_info->if_index);
+	rte_trace_point_emit_u16(dev_info->min_mtu);
+	rte_trace_point_emit_u16(dev_info->max_mtu);
+	rte_trace_point_emit_u32(dev_info->min_rx_bufsize);
+	rte_trace_point_emit_u32(dev_info->max_rx_pktlen);
+	rte_trace_point_emit_u64(dev_info->rx_offload_capa);
+	rte_trace_point_emit_u64(dev_info->tx_offload_capa);
+	rte_trace_point_emit_u64(dev_info->rx_queue_offload_capa);
+	rte_trace_point_emit_u64(dev_info->tx_queue_offload_capa);
+	rte_trace_point_emit_u16(dev_info->reta_size);
+	rte_trace_point_emit_u8(dev_info->hash_key_size);
+	rte_trace_point_emit_u16(dev_info->nb_rx_queues);
+	rte_trace_point_emit_u16(dev_info->nb_tx_queues);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_is_removed,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, int ret),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_is_valid_port,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_mac_addr_add,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_ether_addr *addr,
+		uint32_t pool),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(addr);
+	rte_trace_point_emit_u32(pool);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_mac_addr_remove,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_ether_addr *addr),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(addr);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_pool_ops_supported,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, const char *pool),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(pool);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_priority_flow_ctrl_set,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_pfc_conf *pfc_conf),
+
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u32(pfc_conf->fc.high_water);
+	rte_trace_point_emit_u32(pfc_conf->fc.low_water);
+	rte_trace_point_emit_u16(pfc_conf->fc.pause_time);
+	rte_trace_point_emit_u16(pfc_conf->fc.send_xon);
+	rte_trace_point_emit_int(pfc_conf->fc.mode);
+	rte_trace_point_emit_u8(pfc_conf->fc.mac_ctrl_frame_fwd);
+	rte_trace_point_emit_u8(pfc_conf->fc.autoneg);
+	rte_trace_point_emit_u8(pfc_conf->priority);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_reset,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rss_hash_conf_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_rss_conf *rss_conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(rss_conf);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rss_hash_update,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_rss_conf *rss_conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(rss_conf->rss_key);
+	rte_trace_point_emit_u8(rss_conf->rss_key_len);
+	rte_trace_point_emit_u64(rss_conf->rss_hf);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rss_reta_query,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(reta_conf);
+	rte_trace_point_emit_u16(reta_size);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rss_reta_update,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u64(reta_conf->mask);
+	rte_trace_point_emit_u16(reta_size);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rx_intr_ctl,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, int epfd, int op, void *data),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_int(epfd);
+	rte_trace_point_emit_int(op);
+	rte_trace_point_emit_ptr(data);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rx_intr_ctl_q,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id, int epfd,
+		int op, void *data),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_int(epfd);
+	rte_trace_point_emit_int(op);
+	rte_trace_point_emit_ptr(data);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rx_intr_ctl_q_get_fd,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id, int fd),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_int(fd);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rx_intr_disable,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rx_intr_enable,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rx_offload_name,
+	RTE_TRACE_POINT_ARGS(uint64_t offload, const char *name),
+	rte_trace_point_emit_u64(offload);
+	rte_trace_point_emit_string(name);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rx_queue_start,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t rx_queue_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(rx_queue_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_rx_queue_stop,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t rx_queue_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(rx_queue_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_eeprom,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_dev_eeprom_info *info),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(info->data);
+	rte_trace_point_emit_u32(info->offset);
+	rte_trace_point_emit_u32(info->length);
+	rte_trace_point_emit_u32(info->magic);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_link_down,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_link_up,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_mc_addr_list,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_ether_addr *mc_addr_set,
+		uint32_t nb_mc_addr),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(mc_addr_set);
+	rte_trace_point_emit_u32(nb_mc_addr);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_ptypes,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t ptype_mask,
+		uint32_t *set_ptypes, unsigned int num),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u32(ptype_mask);
+	rte_trace_point_emit_ptr(set_ptypes);
+	rte_trace_point_emit_u32(num);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_rx_queue_stats_mapping,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t rx_queue_id,
+		uint8_t stat_idx),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(rx_queue_id);
+	rte_trace_point_emit_u8(stat_idx);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_tx_queue_stats_mapping,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id,
+		uint8_t stat_idx),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+	rte_trace_point_emit_u8(stat_idx);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_vlan_ether_type,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, enum rte_vlan_type vlan_type,
+		uint16_t tag_type),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_int(vlan_type);
+	rte_trace_point_emit_u16(tag_type);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_vlan_offload,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, int offload_mask),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_int(offload_mask);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_vlan_pvid,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t pvid, int on),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(pvid);
+	rte_trace_point_emit_int(on);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_set_vlan_strip_on_queue,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t rx_queue_id,
+		int on),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(rx_queue_id);
+	rte_trace_point_emit_int(on);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_socket_id,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_tx_offload_name,
+	RTE_TRACE_POINT_ARGS(uint64_t offload, const char *name),
+	rte_trace_point_emit_u64(offload);
+	rte_trace_point_emit_string(name);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_tx_queue_start,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_tx_queue_stop,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_uc_all_hash_table_set,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint8_t on),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u8(on);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_uc_hash_table_set,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_ether_addr *addr,
+		uint8_t on),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(addr);
+	rte_trace_point_emit_u8(on);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_udp_tunnel_port_add,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_udp_tunnel *tunnel_udp),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tunnel_udp->udp_port);
+	rte_trace_point_emit_u8(tunnel_udp->prot_type);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_udp_tunnel_port_delete,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_udp_tunnel *tunnel_udp),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tunnel_udp->udp_port);
+	rte_trace_point_emit_u8(tunnel_udp->prot_type);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_vlan_filter,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t vlan_id, int on),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(vlan_id);
+	rte_trace_point_emit_int(on);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_find_next,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_find_next_of,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id_start,
+		const struct rte_device *parent),
+	rte_trace_point_emit_u16(port_id_start);
+	rte_trace_point_emit_ptr(parent);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_find_next_owned_by,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		const uint64_t owner_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u64(owner_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_find_next_sibling,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id_start, uint16_t ref_port_id),
+	rte_trace_point_emit_u16(port_id_start);
+	rte_trace_point_emit_u16(ref_port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_iterator_cleanup,
+	RTE_TRACE_POINT_ARGS(struct rte_dev_iterator *iter),
+	rte_trace_point_emit_ptr(iter);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_iterator_init,
+	RTE_TRACE_POINT_ARGS(struct rte_dev_iterator *iter, const char *devargs),
+	rte_trace_point_emit_ptr(iter);
+	rte_trace_point_emit_ptr(devargs);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_iterator_next,
+	RTE_TRACE_POINT_ARGS(struct rte_dev_iterator *iter),
+	rte_trace_point_emit_ptr(iter);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_led_off,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_led_on,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_link_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_link *link),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u32(link->link_speed);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_link_get_nowait,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_link *link),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u32(link->link_speed);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_macaddr_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_ether_addr *mac_addr),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(mac_addr);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_promiscuous_disable,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_promiscuous_enable,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_promiscuous_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_remove_rx_callback,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		const struct rte_eth_rxtx_callback *user_cb),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(user_cb);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_remove_tx_callback,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		const struct rte_eth_rxtx_callback *user_cb),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(user_cb);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_rx_burst_mode_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		struct rte_eth_burst_mode *mode),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(mode);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_rx_queue_info_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		struct rte_eth_rxq_info *qinfo),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(qinfo->mp);
+	rte_trace_point_emit_u8(qinfo->scattered_rx);
+	rte_trace_point_emit_u8(qinfo->queue_state);
+	rte_trace_point_emit_u16(qinfo->nb_desc);
+	rte_trace_point_emit_u16(qinfo->rx_buf_size);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_rx_queue_setup,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t rx_queue_id,
+		uint16_t nb_rx_desc, unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mb_pool),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(rx_queue_id);
+	rte_trace_point_emit_u16(nb_rx_desc);
+	rte_trace_point_emit_u32(socket_id);
+	rte_trace_point_emit_ptr(rx_conf);
+	rte_trace_point_emit_ptr(mb_pool);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_set_queue_rate_limit,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_idx,
+		uint16_t tx_rate),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_idx);
+	rte_trace_point_emit_u16(tx_rate);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_speed_bitflag,
+	RTE_TRACE_POINT_ARGS(uint32_t speed, int duplex),
+	rte_trace_point_emit_u32(speed);
+	rte_trace_point_emit_int(duplex);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_stats_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_stats *stats),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(stats);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_stats_reset,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_timesync_adjust_time,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, int64_t delta),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_i64(delta);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_timesync_disable,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_timesync_enable,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_timesync_read_rx_timestamp,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct timespec *timestamp, uint32_t flags),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(timestamp);
+	rte_trace_point_emit_u32(flags);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_timesync_read_time,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct timespec *time),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(time);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_timesync_read_tx_timestamp,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct timespec *timestamp),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(timestamp);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_timesync_write_time,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, const struct timespec *time),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(time);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_tx_buffer_count_callback,
+	RTE_TRACE_POINT_ARGS(struct rte_mbuf **pkts, uint16_t unsent,
+		uint64_t count),
+	rte_trace_point_emit_ptr(pkts);
+	rte_trace_point_emit_u16(unsent);
+	rte_trace_point_emit_u64(count);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_tx_buffer_drop_callback,
+	RTE_TRACE_POINT_ARGS(struct rte_mbuf **pkts, uint16_t unsent),
+	rte_trace_point_emit_ptr(pkts);
+	rte_trace_point_emit_u16(unsent);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_tx_buffer_init,
+	RTE_TRACE_POINT_ARGS(struct rte_eth_dev_tx_buffer *buffer, uint16_t size),
+	rte_trace_point_emit_ptr(buffer);
+	rte_trace_point_emit_u16(size);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_tx_buffer_set_err_callback,
+	RTE_TRACE_POINT_ARGS(struct rte_eth_dev_tx_buffer *buffer,
+		buffer_tx_error_fn callback, void *userdata),
+	rte_trace_point_emit_ptr(buffer);
+	rte_trace_point_emit_ptr(callback);
+	rte_trace_point_emit_ptr(userdata);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_tx_burst_mode_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		struct rte_eth_burst_mode *mode),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(mode);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_tx_done_cleanup,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_u32(free_cnt);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_tx_queue_info_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		struct rte_eth_txq_info *qinfo),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_u16(qinfo->nb_desc);
+	rte_trace_point_emit_u8(qinfo->queue_state);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_tx_queue_setup,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id,
+		uint16_t nb_tx_desc, unsigned int socket_id,
+		const struct rte_eth_txconf *tx_conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+	rte_trace_point_emit_u16(nb_tx_desc);
+	rte_trace_point_emit_u32(socket_id);
+	rte_trace_point_emit_ptr(tx_conf);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_xstats_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_xstat xstats,
+		int i),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u64(xstats.id);
+	rte_trace_point_emit_u64(xstats.value);
+	rte_trace_point_emit_u32(i);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_xstats_get_by_id,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, const uint64_t *ids,
+		uint64_t *values, unsigned int size),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(ids);
+	rte_trace_point_emit_ptr(values);
+	rte_trace_point_emit_u32(size);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_xstats_get_id_by_name,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, const char *xstat_name,
+		uint64_t *id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_string(xstat_name);
+	rte_trace_point_emit_ptr(id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_xstats_get_names,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_xstat_name *xstats_names,
+		unsigned int size, int cnt_used_entries),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_string(xstats_names->name);
+	rte_trace_point_emit_u32(size);
+	rte_trace_point_emit_int(cnt_used_entries);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_xstats_get_names_by_id,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_xstat_name *xstats_names, uint64_t ids),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_string(xstats_names->name);
+	rte_trace_point_emit_u64(ids);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_xstats_reset,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_owner_delete,
+	RTE_TRACE_POINT_ARGS(const uint64_t owner_id, int ret),
+	rte_trace_point_emit_u64(owner_id);
+	rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_owner_get,
+	RTE_TRACE_POINT_ARGS(const uint16_t port_id,
+		struct rte_eth_dev_owner *owner),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u64(owner->id);
+	rte_trace_point_emit_string(owner->name);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_owner_new,
+	RTE_TRACE_POINT_ARGS(uint64_t owner_id),
+	rte_trace_point_emit_u64(owner_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_owner_set,
+	RTE_TRACE_POINT_ARGS(const uint16_t port_id,
+		const struct rte_eth_dev_owner *owner),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u64(owner->id);
+	rte_trace_point_emit_string(owner->name);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_owner_unset,
+	RTE_TRACE_POINT_ARGS(const uint16_t port_id,
+		const uint64_t owner_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u64(owner_id);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_module_eeprom,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_dev_eeprom_info *info),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(info);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_get_module_info,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_dev_module_info *modinfo),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(modinfo);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_read_clock,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint64_t *clk),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(clk);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_hairpin_capability_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_hairpin_cap *cap),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(cap);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_rx_hairpin_queue_setup,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t rx_queue_id,
+		uint16_t nb_rx_desc, const struct rte_eth_hairpin_conf *conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(rx_queue_id);
+	rte_trace_point_emit_u16(nb_rx_desc);
+	rte_trace_point_emit_ptr(conf);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_tx_hairpin_queue_setup,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id,
+		uint16_t nb_tx_desc, const struct rte_eth_hairpin_conf *conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+	rte_trace_point_emit_u16(nb_tx_desc);
+	rte_trace_point_emit_ptr(conf);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_hairpin_bind,
+	RTE_TRACE_POINT_ARGS(uint16_t tx_port, uint16_t rx_port),
+	rte_trace_point_emit_u16(tx_port);
+	rte_trace_point_emit_u16(rx_port);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_hairpin_get_peer_ports,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t *peer_ports,
+		size_t len, uint32_t direction),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(peer_ports);
+	rte_trace_point_emit_size_t(len);
+	rte_trace_point_emit_u32(direction);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_hairpin_unbind,
+	RTE_TRACE_POINT_ARGS(uint16_t tx_port, uint16_t rx_port),
+	rte_trace_point_emit_u16(tx_port);
+	rte_trace_point_emit_u16(rx_port);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_link_speed_to_str,
+	RTE_TRACE_POINT_ARGS(uint32_t link_speed),
+	rte_trace_point_emit_u32(link_speed);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_link_to_str,
+	RTE_TRACE_POINT_ARGS(char *str, size_t len,
+		const struct rte_eth_link *eth_link),
+	rte_trace_point_emit_ptr(str);
+	rte_trace_point_emit_size_t(len);
+	rte_trace_point_emit_ptr(eth_link);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_fec_get_capability,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_fec_capa *speed_fec_capa,
+		unsigned int num),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(speed_fec_capa);
+	rte_trace_point_emit_u32(num);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_fec_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t *fec_capa),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(fec_capa);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_fec_set,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t fec_capa),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u32(fec_capa);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_get_monitor_addr,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		struct rte_power_monitor_cond *pmc),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_ptr(pmc);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_representor_info_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_representor_info *info),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(info);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_capability_name,
+	RTE_TRACE_POINT_ARGS(uint64_t capability, const char *name),
+	rte_trace_point_emit_u64(capability);
+	rte_trace_point_emit_string(name);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_conf_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_eth_conf *dev_conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(dev_conf);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_macaddrs_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_ether_addr *ma,
+		unsigned int num),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(ma);
+	rte_trace_point_emit_u32(num);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_rx_metadata_negotiate,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint64_t *features),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(features);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_priority_flow_ctrl_queue_configure,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_pfc_queue_conf *pfc_queue_conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(pfc_queue_conf);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_priority_flow_ctrl_queue_info_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_pfc_queue_info *pfc_queue_info),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(pfc_queue_info);
+)
+
+RTE_TRACE_POINT(
+	rte_ethdev_trace_priv_dump,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id),
+	rte_trace_point_emit_u16(port_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_ip_reassembly_capability_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_ip_reassembly_params *capa),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(capa);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_ip_reassembly_conf_get,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		struct rte_eth_ip_reassembly_params *conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(conf);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_ip_reassembly_conf_set,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id,
+		const struct rte_eth_ip_reassembly_params *conf),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_ptr(conf);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_rx_avail_thresh_query,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+)
+
+RTE_TRACE_POINT(
+	rte_eth_trace_rx_avail_thresh_set,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		uint8_t avail_thresh),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_u8(avail_thresh);
+)
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index d46f31b63f..79bf947042 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map @@ -285,6 +285,153 @@ EXPERIMENTAL { rte_mtr_color_in_protocol_priority_get; rte_mtr_color_in_protocol_set; rte_mtr_meter_vlan_table_update; + + # added in 22.11 + __rte_eth_trace_add_first_rx_callback; + __rte_eth_trace_add_rx_callback; + __rte_eth_trace_add_tx_callback; + __rte_eth_trace_allmulticast_disable; + __rte_eth_trace_allmulticast_enable; + __rte_eth_trace_allmulticast_get; + __rte_eth_trace_call_rx_callbacks; + __rte_eth_trace_call_tx_callbacks; + __rte_ethdev_trace_count_avail; + __rte_ethdev_trace_count_total; + __rte_ethdev_trace_set_mtu; + __rte_ethdev_trace_adjust_nb_rx_tx_desc; + __rte_ethdev_trace_callback_register; + __rte_ethdev_trace_callback_unregister; + __rte_ethdev_trace_default_mac_addr_set; + __rte_ethdev_trace_flow_ctrl_get; + __rte_ethdev_trace_flow_ctrl_set; + __rte_ethdev_trace_fw_version_get; + __rte_ethdev_trace_get_dcb_info; + __rte_ethdev_trace_get_eeprom; + __rte_ethdev_trace_get_eeprom_length; + __rte_ethdev_trace_get_mtu; + __rte_ethdev_trace_get_name_by_port; + __rte_ethdev_trace_get_port_by_name; + __rte_ethdev_trace_get_reg_info; + __rte_ethdev_trace_get_sec_ctx; + __rte_ethdev_trace_get_supported_ptypes; + __rte_ethdev_trace_get_vlan_offload; + __rte_ethdev_trace_info_get; + __rte_ethdev_trace_is_removed; + __rte_ethdev_trace_is_valid_port; + __rte_ethdev_trace_mac_addr_add; + __rte_ethdev_trace_mac_addr_remove; + __rte_ethdev_trace_pool_ops_supported; + __rte_ethdev_trace_priority_flow_ctrl_set; + __rte_ethdev_trace_reset; + __rte_ethdev_trace_rss_hash_conf_get; + __rte_ethdev_trace_rss_hash_update; + __rte_ethdev_trace_rss_reta_query; + __rte_ethdev_trace_rss_reta_update; + __rte_ethdev_trace_rx_intr_ctl; + __rte_ethdev_trace_rx_intr_ctl_q; + __rte_ethdev_trace_rx_intr_ctl_q_get_fd; + __rte_ethdev_trace_rx_intr_disable; + __rte_ethdev_trace_rx_intr_enable; + __rte_ethdev_trace_rx_offload_name; + __rte_ethdev_trace_rx_queue_start; + __rte_ethdev_trace_rx_queue_stop; + 
__rte_ethdev_trace_set_eeprom; + __rte_ethdev_trace_set_link_down; + __rte_ethdev_trace_set_link_up; + __rte_ethdev_trace_set_mc_addr_list; + __rte_ethdev_trace_set_ptypes; + __rte_ethdev_trace_set_rx_queue_stats_mapping; + __rte_ethdev_trace_set_tx_queue_stats_mapping; + __rte_ethdev_trace_set_vlan_ether_type; + __rte_ethdev_trace_set_vlan_offload; + __rte_ethdev_trace_set_vlan_pvid; + __rte_ethdev_trace_set_vlan_strip_on_queue; + __rte_ethdev_trace_socket_id; + __rte_ethdev_trace_tx_offload_name; + __rte_ethdev_trace_tx_queue_start; + __rte_ethdev_trace_tx_queue_stop; + __rte_ethdev_trace_uc_all_hash_table_set; + __rte_ethdev_trace_uc_hash_table_set; + __rte_ethdev_trace_udp_tunnel_port_add; + __rte_ethdev_trace_udp_tunnel_port_delete; + __rte_ethdev_trace_vlan_filter; + __rte_eth_trace_find_next; + __rte_eth_trace_find_next_of; + __rte_eth_trace_find_next_owned_by; + __rte_eth_trace_find_next_sibling; + __rte_eth_trace_iterator_cleanup; + __rte_eth_trace_iterator_init; + __rte_eth_trace_iterator_next; + __rte_eth_trace_led_off; + __rte_eth_trace_led_on; + __rte_eth_trace_link_get; + __rte_eth_trace_link_get_nowait; + __rte_eth_trace_macaddr_get; + __rte_eth_trace_promiscuous_disable; + __rte_eth_trace_promiscuous_enable; + __rte_eth_trace_promiscuous_get; + __rte_eth_trace_remove_rx_callback; + __rte_eth_trace_remove_tx_callback; + __rte_eth_trace_rx_burst_mode_get; + __rte_eth_trace_rx_queue_info_get; + __rte_eth_trace_rx_queue_setup; + __rte_eth_trace_set_queue_rate_limit; + __rte_eth_trace_speed_bitflag; + __rte_eth_trace_stats_get; + __rte_eth_trace_stats_reset; + __rte_eth_trace_timesync_adjust_time; + __rte_eth_trace_timesync_disable; + __rte_eth_trace_timesync_enable; + __rte_eth_trace_timesync_read_rx_timestamp; + __rte_eth_trace_timesync_read_time; + __rte_eth_trace_timesync_read_tx_timestamp; + __rte_eth_trace_timesync_write_time; + __rte_eth_trace_tx_buffer_count_callback; + __rte_eth_trace_tx_buffer_drop_callback; + __rte_eth_trace_tx_buffer_init; + 
__rte_eth_trace_tx_buffer_set_err_callback; + __rte_eth_trace_tx_burst_mode_get; + __rte_eth_trace_tx_done_cleanup; + __rte_eth_trace_tx_queue_info_get; + __rte_eth_trace_tx_queue_setup; + __rte_eth_trace_xstats_get; + __rte_eth_trace_xstats_get_by_id; + __rte_eth_trace_xstats_get_id_by_name; + __rte_eth_trace_xstats_get_names; + __rte_eth_trace_xstats_get_names_by_id; + __rte_eth_trace_xstats_reset; + __rte_ethdev_trace_owner_delete; + __rte_ethdev_trace_owner_get; + __rte_ethdev_trace_owner_new; + __rte_ethdev_trace_owner_set; + __rte_ethdev_trace_owner_unset; + __rte_ethdev_trace_get_module_eeprom; + __rte_ethdev_trace_get_module_info; + __rte_ethdev_trace_hairpin_capability_get; + __rte_eth_trace_rx_hairpin_queue_setup; + __rte_eth_trace_tx_hairpin_queue_setup; + __rte_eth_trace_hairpin_bind; + __rte_eth_trace_hairpin_get_peer_ports; + __rte_eth_trace_hairpin_unbind; + __rte_eth_trace_link_speed_to_str; + __rte_eth_trace_link_to_str; + __rte_eth_trace_fec_get_capability; + __rte_eth_trace_fec_get; + __rte_eth_trace_fec_set; + __rte_eth_trace_get_monitor_addr; + __rte_eth_trace_representor_info_get; + __rte_ethdev_trace_capability_name; + __rte_ethdev_trace_conf_get; + __rte_eth_trace_macaddrs_get; + __rte_eth_trace_rx_metadata_negotiate; + __rte_ethdev_trace_priority_flow_ctrl_queue_configure; + __rte_ethdev_trace_priority_flow_ctrl_queue_info_get; + __rte_ethdev_trace_priv_dump; + __rte_eth_trace_ip_reassembly_capability_get; + __rte_eth_trace_ip_reassembly_conf_get; + __rte_eth_trace_ip_reassembly_conf_set; + __rte_eth_trace_rx_avail_thresh_query; + __rte_eth_trace_rx_avail_thresh_set; }; INTERNAL { From patchwork Thu Aug 4 13:44:26 2022 X-Patchwork-Submitter: Ankur Dwivedi X-Patchwork-Id: 114618 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Ankur Dwivedi
Subject: [PATCH 2/6] ethdev: add trace points for flow
Date: Thu, 4 Aug 2022 19:14:26 +0530
Message-ID: <20220804134430.6192-3-adwivedi@marvell.com>
In-Reply-To: <20220804134430.6192-1-adwivedi@marvell.com>
References: <20220804134430.6192-1-adwivedi@marvell.com>

Add trace points for the rte_flow specific functions in the ethdev library.
Signed-off-by: Ankur Dwivedi --- lib/ethdev/ethdev_trace_points.c | 117 +++++++++ lib/ethdev/rte_ethdev_trace.h | 405 +++++++++++++++++++++++++++++++ lib/ethdev/rte_flow.c | 54 +++++ lib/ethdev/version.map | 39 +++ 4 files changed, 615 insertions(+) diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c index 2e80401771..a8b974564c 100644 --- a/lib/ethdev/ethdev_trace_points.c +++ b/lib/ethdev/ethdev_trace_points.c @@ -467,3 +467,120 @@ RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_avail_thresh_query, RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_avail_thresh_set, lib.ethdev.rx_avail_thresh_set) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_copy, + lib.ethdev.flow.copy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_create, + lib.ethdev.flow.create) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_destroy, + lib.ethdev.flow.destroy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_error_set, + lib.ethdev.flow.error_set) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_flush, + lib.ethdev.flow.flush) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_isolate, + lib.ethdev.flow.isolate) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_query, + lib.ethdev.flow.query) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_validate, + lib.ethdev.flow.validate) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_conv, + lib.ethdev.flow.conv) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_dynf_metadata_register, + lib.ethdev.dynf_metadata_register) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_dev_dump, + lib.ethdev.flow.dev_dump) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_get_aged_flows, + lib.ethdev.flow.get_aged_flows) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_tunnel_decap_set, + lib.ethdev.flow.tunnel_decap_set) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_tunnel_match, + lib.ethdev.flow.tunnel_match) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_get_restore_info, + lib.ethdev.flow.get_restore_info) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_tunnel_action_decap_release, + lib.ethdev.flow.tunnel_action_decap_release) + 
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_tunnel_item_release, + lib.ethdev.flow.tunnel_item_release) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_handle_create, + lib.ethdev.flow.action_handle_create) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_handle_destroy, + lib.ethdev.flow.action_handle_destroy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_handle_update, + lib.ethdev.flow.action_handle_update) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_handle_query, + lib.ethdev.flow.action_handle_query) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_flex_item_create, + lib.ethdev.flow.flex_item_create) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_flex_item_release, + lib.ethdev.flow.flex_item_release) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_pick_transfer_proxy, + lib.ethdev.flow.pick_transfer_proxy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_info_get, + lib.ethdev.flow.info_get) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_configure, + lib.ethdev.flow.configure) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_pattern_template_create, + lib.ethdev.flow.pattern_template_create) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_pattern_template_destroy, + lib.ethdev.flow.pattern_template_destroy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_actions_template_create, + lib.ethdev.flow.actions_template_create) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_actions_template_destroy, + lib.ethdev.flow.actions_template_destroy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_template_table_create, + lib.ethdev.flow.template_table_create) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_template_table_destroy, + lib.ethdev.flow.template_table_destroy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_create, + lib.ethdev.flow.async_create) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_destroy, + lib.ethdev.flow.async_destroy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_push, + lib.ethdev.flow.push) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_pull, + lib.ethdev.flow.pull) + 
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_create, + lib.ethdev.flow.async_action_handle_create) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_destroy, + lib.ethdev.flow.async_action_handle_destroy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_update, + lib.ethdev.flow.async_action_handle_update) diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h index de728d355d..94d4b955b6 100644 --- a/lib/ethdev/rte_ethdev_trace.h +++ b/lib/ethdev/rte_ethdev_trace.h @@ -1270,6 +1270,411 @@ RTE_TRACE_POINT( rte_trace_point_emit_u8(avail_thresh); ) +RTE_TRACE_POINT( + rte_flow_trace_copy, + RTE_TRACE_POINT_ARGS(struct rte_flow_desc *fd, size_t len, + const struct rte_flow_attr *attr, + const struct rte_flow_item *items, + const struct rte_flow_action *actions), + rte_trace_point_emit_ptr(fd); + rte_trace_point_emit_size_t(len); + rte_trace_point_emit_u32(attr->group); + rte_trace_point_emit_u32(attr->priority); + rte_trace_point_emit_ptr(items); + rte_trace_point_emit_ptr(actions); +) + +RTE_TRACE_POINT( + rte_flow_trace_create, + RTE_TRACE_POINT_ARGS(uint16_t port_id, const struct rte_flow_attr *attr, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(attr->group); + rte_trace_point_emit_u32(attr->priority); + rte_trace_point_emit_ptr(pattern); + rte_trace_point_emit_ptr(actions); +) + +RTE_TRACE_POINT( + rte_flow_trace_destroy, + RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_flow *flow), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(flow); +) + +RTE_TRACE_POINT( + rte_flow_trace_error_set, + RTE_TRACE_POINT_ARGS(struct rte_flow_error *err_p, + int code, enum rte_flow_error_type type, + const void *cause, const char *message), + rte_trace_point_emit_ptr(err_p); + rte_trace_point_emit_int(code); + rte_trace_point_emit_int(type); + rte_trace_point_emit_ptr(cause); + 
rte_trace_point_emit_string(message); +) + +RTE_TRACE_POINT( + rte_flow_trace_flush, + RTE_TRACE_POINT_ARGS(uint16_t port_id), + rte_trace_point_emit_u16(port_id); +) + +RTE_TRACE_POINT( + rte_flow_trace_isolate, + RTE_TRACE_POINT_ARGS(uint16_t port_id, int set), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_int(set); +) + +RTE_TRACE_POINT( + rte_flow_trace_query, + RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_flow *flow, + const struct rte_flow_action *action, void *data), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(flow); + rte_trace_point_emit_ptr(action); + rte_trace_point_emit_ptr(data); +) + +RTE_TRACE_POINT( + rte_flow_trace_validate, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + const struct rte_flow_attr *attr, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(attr->group); + rte_trace_point_emit_u32(attr->priority); + rte_trace_point_emit_ptr(pattern); + rte_trace_point_emit_ptr(actions); +) + +RTE_TRACE_POINT( + rte_flow_trace_conv, + RTE_TRACE_POINT_ARGS(enum rte_flow_conv_op op, void *dst, + size_t size, const void *src), + rte_trace_point_emit_int(op); + rte_trace_point_emit_ptr(dst); + rte_trace_point_emit_size_t(size); + rte_trace_point_emit_ptr(src); +) + +RTE_TRACE_POINT( + rte_flow_trace_dynf_metadata_register, + RTE_TRACE_POINT_ARGS(int offset, uint64_t flag), + rte_trace_point_emit_int(offset); + rte_trace_point_emit_u64(flag); +) + +RTE_TRACE_POINT( + rte_flow_trace_dev_dump, + RTE_TRACE_POINT_ARGS(uint16_t port_id, struct rte_flow *flow), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(flow); +) + +RTE_TRACE_POINT( + rte_flow_trace_get_aged_flows, + RTE_TRACE_POINT_ARGS(uint16_t port_id, void **contexts, + uint32_t nb_contexts), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(contexts); + rte_trace_point_emit_u32(nb_contexts); +) + +RTE_TRACE_POINT( + 
rte_flow_trace_tunnel_decap_set, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_flow_tunnel *tunnel, + struct rte_flow_action **actions, + uint32_t *num_of_actions), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(tunnel); + rte_trace_point_emit_ptr(actions); + rte_trace_point_emit_ptr(num_of_actions); +) + +RTE_TRACE_POINT( + rte_flow_trace_tunnel_match, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_flow_tunnel *tunnel, + struct rte_flow_item **items, + uint32_t *num_of_items), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(tunnel); + rte_trace_point_emit_ptr(items); + rte_trace_point_emit_ptr(num_of_items); +) + +RTE_TRACE_POINT( + rte_flow_trace_get_restore_info, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_mbuf *m, + struct rte_flow_restore_info *info), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(m); + rte_trace_point_emit_ptr(info); +) + +RTE_TRACE_POINT( + rte_flow_trace_tunnel_action_decap_release, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_flow_action *actions, + uint32_t num_of_actions), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(actions); + rte_trace_point_emit_u32(num_of_actions); +) + +RTE_TRACE_POINT( + rte_flow_trace_tunnel_item_release, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_flow_item *items, + uint32_t num_of_items), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(items); + rte_trace_point_emit_u32(num_of_items); +) + +RTE_TRACE_POINT( + rte_flow_trace_action_handle_create, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(conf); + rte_trace_point_emit_ptr(action); +) + +RTE_TRACE_POINT( + rte_flow_trace_action_handle_destroy, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_flow_action_handle *handle), + rte_trace_point_emit_u16(port_id); + 
rte_trace_point_emit_ptr(handle); +) + +RTE_TRACE_POINT( + rte_flow_trace_action_handle_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_flow_action_handle *handle, + const void *update), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(handle); + rte_trace_point_emit_ptr(update); +) + +RTE_TRACE_POINT( + rte_flow_trace_action_handle_query, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + const struct rte_flow_action_handle *handle, + void *data), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(handle); + rte_trace_point_emit_ptr(data); +) + +RTE_TRACE_POINT( + rte_flow_trace_flex_item_create, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + const struct rte_flow_item_flex_conf *conf), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(conf); +) + +RTE_TRACE_POINT( + rte_flow_trace_flex_item_release, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + const struct rte_flow_item_flex_handle *handle), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(handle); +) + +RTE_TRACE_POINT( + rte_flow_trace_pick_transfer_proxy, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t *proxy_port_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(proxy_port_id); +) + +RTE_TRACE_POINT( + rte_flow_trace_info_get, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_flow_port_info *port_info, + struct rte_flow_queue_info *queue_info), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(port_info); + rte_trace_point_emit_ptr(queue_info); +) + +RTE_TRACE_POINT( + rte_flow_trace_configure, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr **queue_attr), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(port_attr); + rte_trace_point_emit_u16(nb_queue); + rte_trace_point_emit_ptr(queue_attr); +) + +RTE_TRACE_POINT( + rte_flow_trace_pattern_template_create, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + const 
struct rte_flow_pattern_template_attr *template_attr, + const struct rte_flow_item *pattern), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(template_attr); + rte_trace_point_emit_ptr(pattern); +) + +RTE_TRACE_POINT( + rte_flow_trace_pattern_template_destroy, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_flow_pattern_template *pattern_template), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(pattern_template); +) + +RTE_TRACE_POINT( + rte_flow_trace_actions_template_create, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + const struct rte_flow_actions_template_attr *template_attr, + const struct rte_flow_action *actions, + const struct rte_flow_action *masks), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(template_attr); + rte_trace_point_emit_ptr(actions); + rte_trace_point_emit_ptr(masks); +) + +RTE_TRACE_POINT( + rte_flow_trace_actions_template_destroy, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_flow_actions_template *actions_template), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(actions_template); +) + +RTE_TRACE_POINT( + rte_flow_trace_template_table_create, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + const struct rte_flow_template_table_attr *table_attr, + struct rte_flow_pattern_template **pattern_templates, + uint8_t nb_pattern_templates, + struct rte_flow_actions_template **actions_templates, + uint8_t nb_actions_templates), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(table_attr); + rte_trace_point_emit_ptr(pattern_templates); + rte_trace_point_emit_u8(nb_pattern_templates); + rte_trace_point_emit_ptr(actions_templates); + rte_trace_point_emit_u8(nb_actions_templates); +) + +RTE_TRACE_POINT( + rte_flow_trace_template_table_destroy, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_flow_template_table *template_table), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(template_table); +) + +RTE_TRACE_POINT( + rte_flow_trace_async_create, 
+ RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_template_table *template_table, + const struct rte_flow_item *pattern, + uint8_t pattern_template_index, + const struct rte_flow_action *actions, + uint8_t actions_template_index, + void *user_data), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); + rte_trace_point_emit_ptr(op_attr); + rte_trace_point_emit_ptr(template_table); + rte_trace_point_emit_ptr(pattern); + rte_trace_point_emit_u8(pattern_template_index); + rte_trace_point_emit_ptr(actions); + rte_trace_point_emit_u8(actions_template_index); + rte_trace_point_emit_ptr(user_data); +) + +RTE_TRACE_POINT( + rte_flow_trace_async_destroy, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, void *user_data), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); + rte_trace_point_emit_ptr(op_attr); + rte_trace_point_emit_ptr(flow); + rte_trace_point_emit_ptr(user_data); +) + +RTE_TRACE_POINT( + rte_flow_trace_push, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t queue_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); +) + +RTE_TRACE_POINT( + rte_flow_trace_pull, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t queue_id, + struct rte_flow_op_result *res, uint16_t n_res), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); + rte_trace_point_emit_ptr(res); + rte_trace_point_emit_u16(n_res); +) + +RTE_TRACE_POINT( + rte_flow_trace_async_action_handle_create, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + void *user_data), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); + rte_trace_point_emit_ptr(op_attr); + 
rte_trace_point_emit_ptr(indir_action_conf); + rte_trace_point_emit_ptr(action); + rte_trace_point_emit_ptr(user_data); +) + +RTE_TRACE_POINT( + rte_flow_trace_async_action_handle_destroy, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_handle *action_handle, + void *user_data), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); + rte_trace_point_emit_ptr(op_attr); + rte_trace_point_emit_ptr(action_handle); + rte_trace_point_emit_ptr(user_data); +) + +RTE_TRACE_POINT( + rte_flow_trace_async_action_handle_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_handle *action_handle, + const void *update, void *user_data), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); + rte_trace_point_emit_ptr(op_attr); + rte_trace_point_emit_ptr(action_handle); + rte_trace_point_emit_ptr(update); + rte_trace_point_emit_ptr(user_data); +) + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 501be9d602..e349d112f9 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -13,6 +13,7 @@ #include #include #include "rte_ethdev.h" +#include "rte_ethdev_trace.h" #include "rte_flow_driver.h" #include "rte_flow.h" @@ -284,6 +285,7 @@ rte_flow_dynf_metadata_register(void) goto error; rte_flow_dynf_metadata_offs = offset; rte_flow_dynf_metadata_mask = RTE_BIT64(flag); + rte_flow_trace_dynf_metadata_register(offset, RTE_BIT64(flag)); return 0; error: @@ -357,6 +359,7 @@ rte_flow_validate(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; int ret; + rte_flow_trace_validate(port_id, attr, pattern, actions); if (unlikely(!ops)) return -rte_errno; if (likely(!!ops->validate)) { @@ -382,6 +385,7 @@ rte_flow_create(uint16_t port_id, struct rte_flow *flow; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + 
rte_flow_trace_create(port_id, attr, pattern, actions); if (unlikely(!ops)) return NULL; if (likely(!!ops->create)) { @@ -407,6 +411,7 @@ rte_flow_destroy(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; + rte_flow_trace_destroy(port_id, flow); if (unlikely(!ops)) return -rte_errno; if (likely(!!ops->destroy)) { @@ -429,6 +434,7 @@ rte_flow_flush(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; + rte_flow_trace_flush(port_id); if (unlikely(!ops)) return -rte_errno; if (likely(!!ops->flush)) { @@ -454,6 +460,7 @@ rte_flow_query(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; + rte_flow_trace_query(port_id, flow, action, data); if (!ops) return -rte_errno; if (likely(!!ops->query)) { @@ -477,6 +484,7 @@ rte_flow_isolate(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; + rte_flow_trace_isolate(port_id, set); if (!ops) return -rte_errno; if (likely(!!ops->isolate)) { @@ -506,6 +514,7 @@ rte_flow_error_set(struct rte_flow_error *error, }; } rte_errno = code; + rte_flow_trace_error_set(error, code, type, cause, message); return -code; } @@ -1004,6 +1013,7 @@ rte_flow_conv(enum rte_flow_conv_op op, const void *src, struct rte_flow_error *error) { + rte_flow_trace_conv(op, dst, size, src); switch (op) { const struct rte_flow_attr *attr; @@ -1069,6 +1079,7 @@ rte_flow_copy(struct rte_flow_desc *desc, size_t len, RTE_BUILD_BUG_ON(sizeof(struct rte_flow_desc) < sizeof(struct rte_flow_conv_rule)); + rte_flow_trace_copy(desc, len, attr, items, actions); if (dst_size && (&dst->pattern != &desc->items || &dst->actions != &desc->actions || @@ -1099,6 +1110,7 @@ rte_flow_dev_dump(uint16_t port_id, struct rte_flow *flow, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; + rte_flow_trace_dev_dump(port_id, flow); if (unlikely(!ops)) return -rte_errno; if (likely(!!ops->dev_dump)) { 
@@ -1120,6 +1132,7 @@ rte_flow_get_aged_flows(uint16_t port_id, void **contexts, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; + rte_flow_trace_get_aged_flows(port_id, contexts, nb_contexts); if (unlikely(!ops)) return -rte_errno; if (likely(!!ops->get_aged_flows)) { @@ -1142,6 +1155,7 @@ rte_flow_action_handle_create(uint16_t port_id, struct rte_flow_action_handle *handle; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_action_handle_create(port_id, conf, action); if (unlikely(!ops)) return NULL; if (unlikely(!ops->action_handle_create)) { @@ -1165,6 +1179,7 @@ rte_flow_action_handle_destroy(uint16_t port_id, int ret; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_action_handle_destroy(port_id, handle); if (unlikely(!ops)) return -rte_errno; if (unlikely(!ops->action_handle_destroy)) @@ -1185,6 +1200,7 @@ rte_flow_action_handle_update(uint16_t port_id, int ret; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_action_handle_update(port_id, handle, update); if (unlikely(!ops)) return -rte_errno; if (unlikely(!ops->action_handle_update)) @@ -1205,6 +1221,7 @@ rte_flow_action_handle_query(uint16_t port_id, int ret; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_action_handle_query(port_id, handle, data); if (unlikely(!ops)) return -rte_errno; if (unlikely(!ops->action_handle_query)) @@ -1226,6 +1243,7 @@ rte_flow_tunnel_decap_set(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_tunnel_decap_set(port_id, tunnel, actions, num_of_actions); if (unlikely(!ops)) return -rte_errno; if (likely(!!ops->tunnel_decap_set)) { @@ -1249,6 +1267,7 @@ rte_flow_tunnel_match(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); 
+ rte_flow_trace_tunnel_match(port_id, tunnel, items, num_of_items); if (unlikely(!ops)) return -rte_errno; if (likely(!!ops->tunnel_match)) { @@ -1271,6 +1290,7 @@ rte_flow_get_restore_info(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_get_restore_info(port_id, m, restore_info); if (unlikely(!ops)) return -rte_errno; if (likely(!!ops->get_restore_info)) { @@ -1293,6 +1313,7 @@ rte_flow_tunnel_action_decap_release(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_tunnel_action_decap_release(port_id, actions, num_of_actions); if (unlikely(!ops)) return -rte_errno; if (likely(!!ops->tunnel_action_decap_release)) { @@ -1316,6 +1337,7 @@ rte_flow_tunnel_item_release(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_tunnel_item_release(port_id, items, num_of_items); if (unlikely(!ops)) return -rte_errno; if (likely(!!ops->tunnel_item_release)) { @@ -1336,6 +1358,7 @@ rte_flow_pick_transfer_proxy(uint16_t port_id, uint16_t *proxy_port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); struct rte_eth_dev *dev; + rte_flow_trace_pick_transfer_proxy(port_id, proxy_port_id); if (unlikely(ops == NULL)) return -rte_errno; @@ -1360,6 +1383,7 @@ rte_flow_flex_item_create(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); struct rte_flow_item_flex_handle *handle; + rte_flow_trace_flex_item_create(port_id, conf); if (unlikely(!ops)) return NULL; if (unlikely(!ops->flex_item_create)) { @@ -1383,6 +1407,7 @@ rte_flow_flex_item_release(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_flex_item_release(port_id, 
handle); if (unlikely(!ops || !ops->flex_item_release)) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -1400,6 +1425,7 @@ rte_flow_info_get(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_info_get(port_id, port_info, queue_info); if (unlikely(!ops)) return -rte_errno; if (dev->data->dev_configured == 0) { @@ -1433,6 +1459,7 @@ rte_flow_configure(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; + rte_flow_trace_configure(port_id, port_attr, nb_queue, queue_attr); if (unlikely(!ops)) return -rte_errno; if (dev->data->dev_configured == 0) { @@ -1476,6 +1503,8 @@ rte_flow_pattern_template_create(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); struct rte_flow_pattern_template *template; + rte_flow_trace_pattern_template_create(port_id, template_attr, + pattern); if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { @@ -1526,6 +1555,7 @@ rte_flow_pattern_template_destroy(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_pattern_template_destroy(port_id, pattern_template); if (unlikely(!ops)) return -rte_errno; if (unlikely(pattern_template == NULL)) @@ -1553,6 +1583,8 @@ rte_flow_actions_template_create(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); struct rte_flow_actions_template *template; + rte_flow_trace_actions_template_create(port_id, template_attr, actions, + masks); if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { @@ -1612,6 +1644,7 @@ rte_flow_actions_template_destroy(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_actions_template_destroy(port_id, 
actions_template); if (unlikely(!ops)) return -rte_errno; if (unlikely(actions_template == NULL)) @@ -1641,6 +1674,11 @@ rte_flow_template_table_create(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); struct rte_flow_template_table *table; + rte_flow_trace_template_table_create(port_id, table_attr, + pattern_templates, + nb_pattern_templates, + actions_templates, + nb_actions_templates); if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { @@ -1702,6 +1740,7 @@ rte_flow_template_table_destroy(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_template_table_destroy(port_id, template_table); if (unlikely(!ops)) return -rte_errno; if (unlikely(template_table == NULL)) @@ -1734,6 +1773,9 @@ rte_flow_async_create(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); struct rte_flow *flow; + rte_flow_trace_async_create(port_id, queue_id, op_attr, template_table, + pattern, pattern_template_index, actions, + actions_template_index, user_data); flow = ops->async_create(dev, queue_id, op_attr, template_table, pattern, pattern_template_index, @@ -1755,6 +1797,8 @@ rte_flow_async_destroy(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_async_destroy(port_id, queue_id, op_attr, flow, + user_data); return flow_err(port_id, ops->async_destroy(dev, queue_id, op_attr, flow, @@ -1770,6 +1814,7 @@ rte_flow_push(uint16_t port_id, struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + rte_flow_trace_push(port_id, queue_id); return flow_err(port_id, ops->push(dev, queue_id, error), error); @@ -1786,6 +1831,7 @@ rte_flow_pull(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; + 
rte_flow_trace_pull(port_id, queue_id, res, n_res); ret = ops->pull(dev, queue_id, res, n_res, error); return ret ? ret : flow_err(port_id, ret, error); } @@ -1803,6 +1849,9 @@ rte_flow_async_action_handle_create(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); struct rte_flow_action_handle *handle; + rte_flow_trace_async_action_handle_create(port_id, queue_id, op_attr, + indir_action_conf, action, + user_data); handle = ops->async_action_handle_create(dev, queue_id, op_attr, indir_action_conf, action, user_data, error); if (handle == NULL) @@ -1822,6 +1871,8 @@ rte_flow_async_action_handle_destroy(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; + rte_flow_trace_async_action_handle_destroy(port_id, queue_id, op_attr, + action_handle, user_data); ret = ops->async_action_handle_destroy(dev, queue_id, op_attr, action_handle, user_data, error); return flow_err(port_id, ret, error); @@ -1840,6 +1891,9 @@ rte_flow_async_action_handle_update(uint16_t port_id, const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; + rte_flow_trace_async_action_handle_update(port_id, queue_id, op_attr, + action_handle, update, + user_data); ret = ops->async_action_handle_update(dev, queue_id, op_attr, action_handle, update, user_data, error); return flow_err(port_id, ret, error); diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 79bf947042..848ec442f1 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -432,6 +432,45 @@ EXPERIMENTAL { __rte_eth_trace_ip_reassembly_conf_set; __rte_eth_trace_rx_avail_thresh_query; __rte_eth_trace_rx_avail_thresh_set; + __rte_flow_trace_action_handle_create; + __rte_flow_trace_action_handle_destroy; + __rte_flow_trace_action_handle_query; + __rte_flow_trace_action_handle_update; + __rte_flow_trace_actions_template_create; + __rte_flow_trace_actions_template_destroy; + __rte_flow_trace_async_action_handle_create; + 
__rte_flow_trace_async_action_handle_destroy; + __rte_flow_trace_async_action_handle_update; + __rte_flow_trace_async_create; + __rte_flow_trace_async_destroy; + __rte_flow_trace_conv; + __rte_flow_trace_configure; + __rte_flow_trace_copy; + __rte_flow_trace_create; + __rte_flow_trace_destroy; + __rte_flow_trace_dev_dump; + __rte_flow_trace_dynf_metadata_register; + __rte_flow_trace_error_set; + __rte_flow_trace_flex_item_create; + __rte_flow_trace_flex_item_release; + __rte_flow_trace_flush; + __rte_flow_trace_get_aged_flows; + __rte_flow_trace_get_restore_info; + __rte_flow_trace_info_get; + __rte_flow_trace_isolate; + __rte_flow_trace_pattern_template_create; + __rte_flow_trace_pattern_template_destroy; + __rte_flow_trace_pick_transfer_proxy; + __rte_flow_trace_push; + __rte_flow_trace_pull; + __rte_flow_trace_query; + __rte_flow_trace_template_table_create; + __rte_flow_trace_template_table_destroy; + __rte_flow_trace_tunnel_action_decap_release; + __rte_flow_trace_tunnel_decap_set; + __rte_flow_trace_tunnel_item_release; + __rte_flow_trace_tunnel_match; + __rte_flow_trace_validate; }; INTERNAL { From patchwork Thu Aug 4 13:44:27 2022 X-Patchwork-Submitter: Ankur Dwivedi X-Patchwork-Id: 114619 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru From: Ankur Dwivedi Subject: [PATCH 3/6] ethdev: add trace points for mtr Date: Thu, 4 Aug 2022 19:14:27 +0530 Message-ID: <20220804134430.6192-4-adwivedi@marvell.com> In-Reply-To: <20220804134430.6192-1-adwivedi@marvell.com> References: <20220804134430.6192-1-adwivedi@marvell.com> Add trace points for the rte_mtr-specific functions in the ethdev library. Signed-off-by: Ankur Dwivedi --- lib/ethdev/ethdev_trace_points.c | 57 ++++++++++ lib/ethdev/rte_ethdev_trace.h | 176 +++++++++++++++++++++++++++++++ lib/ethdev/rte_mtr.c | 27 +++++ lib/ethdev/version.map | 19 ++++ 4 files changed, 279 insertions(+) diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c index a8b974564c..673a0be13b 100644 --- a/lib/ethdev/ethdev_trace_points.c +++ b/lib/ethdev/ethdev_trace_points.c @@ -584,3 +584,60 @@ RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_destroy, RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_update, lib.ethdev.flow.async_action_handle_update) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_capabilities_get, + lib.ethdev.mtr.capabilities_get) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_create, + lib.ethdev.mtr.create) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_destroy, + lib.ethdev.mtr.destroy) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_disable, + lib.ethdev.mtr.meter_disable) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_dscp_table_update, + lib.ethdev.mtr.meter_dscp_table_update) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_enable, + lib.ethdev.mtr.meter_enable) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_profile_add, + lib.ethdev.mtr.meter_profile_add) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_profile_delete, + lib.ethdev.mtr.meter_profile_delete) +
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_profile_update, + lib.ethdev.mtr.meter_profile_update) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_stats_read, + lib.ethdev.mtr.stats_read) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_stats_update, + lib.ethdev.mtr.stats_update) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_policy_add, + lib.ethdev.mtr.meter_policy_add) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_policy_delete, + lib.ethdev.mtr.meter_policy_delete) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_policy_update, + lib.ethdev.mtr.meter_policy_update) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_policy_validate, + lib.ethdev.mtr.meter_policy_validate) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_vlan_table_update, + lib.ethdev.mtr.meter_vlan_table_update) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_color_in_protocol_get, + lib.ethdev.mtr.color_in_protocol_get) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_color_in_protocol_priority_get, + lib.ethdev.mtr.color_in_protocol_priority_get) + +RTE_TRACE_POINT_REGISTER(rte_mtr_trace_color_in_protocol_set, + lib.ethdev.mtr.color_in_protocol_set) diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h index 94d4b955b6..c07b5b3fb6 100644 --- a/lib/ethdev/rte_ethdev_trace.h +++ b/lib/ethdev/rte_ethdev_trace.h @@ -18,6 +18,7 @@ extern "C" { #include #include "rte_ethdev.h" +#include "rte_mtr.h" RTE_TRACE_POINT( rte_ethdev_trace_configure, @@ -1675,6 +1676,181 @@ RTE_TRACE_POINT( rte_trace_point_emit_ptr(user_data); ) +RTE_TRACE_POINT( + rte_mtr_trace_capabilities_get, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_mtr_capabilities *cap), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(cap); +) + +RTE_TRACE_POINT( + rte_mtr_trace_create, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, + struct rte_mtr_params *params, int shared), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); + rte_trace_point_emit_ptr(params); + 
rte_trace_point_emit_u32(params->meter_profile_id); + rte_trace_point_emit_int(params->use_prev_mtr_color); + rte_trace_point_emit_int(params->meter_enable); + rte_trace_point_emit_u64(params->stats_mask); + rte_trace_point_emit_u32(params->meter_policy_id); + rte_trace_point_emit_int(params->default_input_color); + rte_trace_point_emit_int(shared); +) + +RTE_TRACE_POINT( + rte_mtr_trace_destroy, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_disable, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_dscp_table_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, + enum rte_color *dscp_table), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); + rte_trace_point_emit_ptr(dscp_table); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_enable, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_profile_add, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + uint32_t meter_profile_id, + struct rte_mtr_meter_profile *profile), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(meter_profile_id); + rte_trace_point_emit_int(profile->alg); + rte_trace_point_emit_int(profile->packet_mode); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_profile_delete, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + uint32_t meter_profile_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(meter_profile_id); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_profile_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, + uint32_t meter_profile_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); + rte_trace_point_emit_u32(meter_profile_id); +) 
+ +RTE_TRACE_POINT( + rte_mtr_trace_stats_read, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, + struct rte_mtr_stats *stats, uint64_t *stats_mask, + int clear), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); + rte_trace_point_emit_ptr(stats); + rte_trace_point_emit_ptr(stats_mask); + rte_trace_point_emit_int(clear); +) + +RTE_TRACE_POINT( + rte_mtr_trace_stats_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, + uint64_t stats_mask), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); + rte_trace_point_emit_u64(stats_mask); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_policy_add, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t policy_id, + const struct rte_flow_action *actions), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(policy_id); + rte_trace_point_emit_ptr(actions); + rte_trace_point_emit_int(actions->type); + rte_trace_point_emit_ptr(actions->conf); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_policy_delete, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t policy_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(policy_id); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_policy_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, + uint32_t meter_policy_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); + rte_trace_point_emit_u32(meter_policy_id); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_policy_validate, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + const struct rte_flow_action *actions), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(actions); + rte_trace_point_emit_int(actions->type); + rte_trace_point_emit_ptr(actions->conf); +) + +RTE_TRACE_POINT( + rte_mtr_trace_meter_vlan_table_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, + enum rte_color *vlan_table), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); + rte_trace_point_emit_ptr(vlan_table); +) 
+ +RTE_TRACE_POINT( + rte_mtr_trace_color_in_protocol_get, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); +) + +RTE_TRACE_POINT( + rte_mtr_trace_color_in_protocol_priority_get, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, + enum rte_mtr_color_in_protocol proto), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); + rte_trace_point_emit_int(proto); +) + +RTE_TRACE_POINT( + rte_mtr_trace_color_in_protocol_set, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, + enum rte_mtr_color_in_protocol proto, uint32_t priority), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(mtr_id); + rte_trace_point_emit_int(proto); + rte_trace_point_emit_u32(priority); +) + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_mtr.c b/lib/ethdev/rte_mtr.c index c460e4f4e0..ff7265cd21 100644 --- a/lib/ethdev/rte_mtr.c +++ b/lib/ethdev/rte_mtr.c @@ -6,6 +6,7 @@ #include #include "rte_ethdev.h" +#include "rte_ethdev_trace.h" #include "rte_mtr_driver.h" #include "rte_mtr.h" @@ -63,6 +64,7 @@ rte_mtr_capabilities_get(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_capabilities_get(port_id, cap); return RTE_MTR_FUNC(port_id, capabilities_get)(dev, cap, error); } @@ -75,6 +77,7 @@ rte_mtr_meter_profile_add(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_meter_profile_add(port_id, meter_profile_id, profile); return RTE_MTR_FUNC(port_id, meter_profile_add)(dev, meter_profile_id, profile, error); } @@ -86,6 +89,7 @@ rte_mtr_meter_profile_delete(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_meter_profile_delete(port_id, meter_profile_id); return RTE_MTR_FUNC(port_id, meter_profile_delete)(dev, meter_profile_id, error); } @@ -97,6 +101,10 @@ 
rte_mtr_meter_policy_validate(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + int i; + + for (i = 0; i < RTE_COLORS; i++) + rte_mtr_trace_meter_policy_validate(port_id, policy->actions[i]); return RTE_MTR_FUNC(port_id, meter_policy_validate)(dev, policy, error); } @@ -109,6 +117,11 @@ rte_mtr_meter_policy_add(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + int i; + + for (i = 0; i < RTE_COLORS; i++) + rte_mtr_trace_meter_policy_add(port_id, policy_id, + policy->actions[i]); return RTE_MTR_FUNC(port_id, meter_policy_add)(dev, policy_id, policy, error); } @@ -120,6 +133,7 @@ rte_mtr_meter_policy_delete(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_meter_policy_delete(port_id, policy_id); return RTE_MTR_FUNC(port_id, meter_policy_delete)(dev, policy_id, error); } @@ -133,6 +147,7 @@ rte_mtr_create(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_create(port_id, mtr_id, params, shared); return RTE_MTR_FUNC(port_id, create)(dev, mtr_id, params, shared, error); } @@ -144,6 +159,7 @@ rte_mtr_destroy(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_destroy(port_id, mtr_id); return RTE_MTR_FUNC(port_id, destroy)(dev, mtr_id, error); } @@ -155,6 +171,7 @@ rte_mtr_meter_enable(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_meter_enable(port_id, mtr_id); return RTE_MTR_FUNC(port_id, meter_enable)(dev, mtr_id, error); } @@ -166,6 +183,7 @@ rte_mtr_meter_disable(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_meter_disable(port_id, mtr_id); return RTE_MTR_FUNC(port_id, meter_disable)(dev, mtr_id, error); } @@ -178,6 +196,7 
@@ rte_mtr_meter_profile_update(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_meter_profile_update(port_id, mtr_id, meter_profile_id); return RTE_MTR_FUNC(port_id, meter_profile_update)(dev, mtr_id, meter_profile_id, error); } @@ -190,6 +209,7 @@ rte_mtr_meter_policy_update(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_meter_policy_update(port_id, mtr_id, meter_policy_id); return RTE_MTR_FUNC(port_id, meter_policy_update)(dev, mtr_id, meter_policy_id, error); } @@ -202,6 +222,7 @@ rte_mtr_meter_dscp_table_update(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_meter_dscp_table_update(port_id, mtr_id, dscp_table); return RTE_MTR_FUNC(port_id, meter_dscp_table_update)(dev, mtr_id, dscp_table, error); } @@ -214,6 +235,7 @@ rte_mtr_meter_vlan_table_update(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_meter_vlan_table_update(port_id, mtr_id, vlan_table); return RTE_MTR_FUNC(port_id, meter_vlan_table_update)(dev, mtr_id, vlan_table, error); } @@ -227,6 +249,7 @@ rte_mtr_color_in_protocol_set(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_color_in_protocol_set(port_id, mtr_id, proto, priority); return RTE_MTR_FUNC(port_id, in_proto_set)(dev, mtr_id, proto, priority, error); } @@ -239,6 +262,7 @@ rte_mtr_color_in_protocol_get(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_color_in_protocol_get(port_id, mtr_id); return RTE_MTR_FUNC(port_id, in_proto_get)(dev, mtr_id, proto_mask, error); } @@ -252,6 +276,7 @@ rte_mtr_color_in_protocol_priority_get(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + 
rte_mtr_trace_color_in_protocol_priority_get(port_id, mtr_id, proto); return RTE_MTR_FUNC(port_id, in_proto_prio_get)(dev, mtr_id, proto, priority, error); } @@ -264,6 +289,7 @@ rte_mtr_stats_update(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_stats_update(port_id, mtr_id, stats_mask); return RTE_MTR_FUNC(port_id, stats_update)(dev, mtr_id, stats_mask, error); } @@ -278,6 +304,7 @@ rte_mtr_stats_read(uint16_t port_id, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_mtr_trace_stats_read(port_id, mtr_id, stats, stats_mask, clear); return RTE_MTR_FUNC(port_id, stats_read)(dev, mtr_id, stats, stats_mask, clear, error); } diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 848ec442f1..2e282bb457 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -471,6 +471,25 @@ EXPERIMENTAL { __rte_flow_trace_tunnel_item_release; __rte_flow_trace_tunnel_match; __rte_flow_trace_validate; + __rte_mtr_trace_capabilities_get; + __rte_mtr_trace_color_in_protocol_get; + __rte_mtr_trace_color_in_protocol_priority_get; + __rte_mtr_trace_color_in_protocol_set; + __rte_mtr_trace_create; + __rte_mtr_trace_destroy; + __rte_mtr_trace_meter_disable; + __rte_mtr_trace_meter_dscp_table_update; + __rte_mtr_trace_meter_enable; + __rte_mtr_trace_meter_policy_add; + __rte_mtr_trace_meter_policy_delete; + __rte_mtr_trace_meter_policy_update; + __rte_mtr_trace_meter_policy_validate; + __rte_mtr_trace_meter_profile_add; + __rte_mtr_trace_meter_profile_delete; + __rte_mtr_trace_meter_profile_update; + __rte_mtr_trace_meter_vlan_table_update; + __rte_mtr_trace_stats_read; + __rte_mtr_trace_stats_update; }; INTERNAL { From patchwork Thu Aug 4 13:44:28 2022 X-Patchwork-Submitter: Ankur Dwivedi X-Patchwork-Id: 114620 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru From: Ankur Dwivedi Subject: [PATCH 4/6] ethdev: add trace points for tm Date: Thu, 4 Aug 2022 19:14:28 +0530 Message-ID: <20220804134430.6192-5-adwivedi@marvell.com> In-Reply-To: <20220804134430.6192-1-adwivedi@marvell.com> References: <20220804134430.6192-1-adwivedi@marvell.com> Add trace points for the rte_tm-specific functions in the ethdev library.
Signed-off-by: Ankur Dwivedi --- lib/ethdev/ethdev_trace_points.c | 90 ++++++++++ lib/ethdev/rte_ethdev_trace.h | 283 +++++++++++++++++++++++++++++++ lib/ethdev/rte_tm.c | 40 +++++ lib/ethdev/version.map | 30 ++++ 4 files changed, 443 insertions(+) diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c index 673a0be13b..341901d031 100644 --- a/lib/ethdev/ethdev_trace_points.c +++ b/lib/ethdev/ethdev_trace_points.c @@ -641,3 +641,93 @@ RTE_TRACE_POINT_REGISTER(rte_mtr_trace_color_in_protocol_priority_get, RTE_TRACE_POINT_REGISTER(rte_mtr_trace_color_in_protocol_set, lib.ethdev.mtr.color_in_protocol_set) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_capabilities_get, + lib.ethdev.tm.capabilities_get) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_get_number_of_leaf_nodes, + lib.ethdev.tm.get_number_of_leaf_nodes) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_hierarchy_commit, + lib.ethdev.tm.hierarchy_commit) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_level_capabilities_get, + lib.ethdev.tm.level_capabilities_get) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_mark_ip_dscp, + lib.ethdev.tm.mark_ip_dscp) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_mark_ip_ecn, + lib.ethdev.tm.mark_ip_ecn) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_mark_vlan_dei, + lib.ethdev.tm.mark_vlan_dei) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_add, + lib.ethdev.tm.node_add) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_capabilities_get, + lib.ethdev.tm.node_capabilities_get) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_cman_update, + lib.ethdev.tm.node_cman_update) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_delete, + lib.ethdev.tm.node_delete) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_parent_update, + lib.ethdev.tm.node_parent_update) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_resume, + lib.ethdev.tm.node_resume) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_shaper_update, + lib.ethdev.tm.node_shaper_update) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_shared_shaper_update, + 
lib.ethdev.tm.node_shared_shaper_update) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_shared_wred_context_update, + lib.ethdev.tm.node_shared_wred_context_update) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_stats_read, + lib.ethdev.tm.node_stats_read) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_stats_update, + lib.ethdev.tm.node_stats_update) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_suspend, + lib.ethdev.tm.node_suspend) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_type_get, + lib.ethdev.tm.node_type_get) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_wfq_weight_mode_update, + lib.ethdev.tm.node_wfq_weight_mode_update) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_wred_context_update, + lib.ethdev.tm.node_wred_context_update) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_shaper_profile_add, + lib.ethdev.tm.shaper_profile_add) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_shaper_profile_delete, + lib.ethdev.tm.shaper_profile_delete) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_shared_shaper_add_update, + lib.ethdev.tm.shared_shaper_add_update) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_shared_shaper_delete, + lib.ethdev.tm.shared_shaper_delete) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_shared_wred_context_add_update, + lib.ethdev.tm.shared_wred_context_add_update) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_shared_wred_context_delete, + lib.ethdev.tm.shared_wred_context_delete) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_wred_profile_add, + lib.ethdev.tm.wred_profile_add) + +RTE_TRACE_POINT_REGISTER(rte_tm_trace_wred_profile_delete, + lib.ethdev.tm.wred_profile_delete) diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h index c07b5b3fb6..aa34a6a5e9 100644 --- a/lib/ethdev/rte_ethdev_trace.h +++ b/lib/ethdev/rte_ethdev_trace.h @@ -19,6 +19,7 @@ extern "C" { #include "rte_ethdev.h" #include "rte_mtr.h" +#include "rte_tm.h" RTE_TRACE_POINT( rte_ethdev_trace_configure, @@ -1851,6 +1852,288 @@ RTE_TRACE_POINT( rte_trace_point_emit_u32(priority); ) 
+RTE_TRACE_POINT( + rte_tm_trace_capabilities_get, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + struct rte_tm_capabilities *cap), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(cap); +) + +RTE_TRACE_POINT( + rte_tm_trace_get_number_of_leaf_nodes, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t n_leaf_nodes, + struct rte_tm_error *error), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(n_leaf_nodes); + rte_trace_point_emit_int(error->type); + rte_trace_point_emit_string(error->message); +) + +RTE_TRACE_POINT( + rte_tm_trace_hierarchy_commit, + RTE_TRACE_POINT_ARGS(uint16_t port_id, int clear_on_fail), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_int(clear_on_fail); +) + +RTE_TRACE_POINT( + rte_tm_trace_level_capabilities_get, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t level_id, + struct rte_tm_level_capabilities *cap), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(level_id); + rte_trace_point_emit_ptr(cap); +) + +RTE_TRACE_POINT( + rte_tm_trace_mark_ip_dscp, + RTE_TRACE_POINT_ARGS(uint16_t port_id, int mark_green, + int mark_yellow, int mark_red), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_int(mark_green); + rte_trace_point_emit_int(mark_yellow); + rte_trace_point_emit_int(mark_red); +) + +RTE_TRACE_POINT( + rte_tm_trace_mark_ip_ecn, + RTE_TRACE_POINT_ARGS(uint16_t port_id, int mark_green, + int mark_yellow, int mark_red), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_int(mark_green); + rte_trace_point_emit_int(mark_yellow); + rte_trace_point_emit_int(mark_red); +) + +RTE_TRACE_POINT( + rte_tm_trace_mark_vlan_dei, + RTE_TRACE_POINT_ARGS(uint16_t port_id, int mark_green, + int mark_yellow, int mark_red), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_int(mark_green); + rte_trace_point_emit_int(mark_yellow); + rte_trace_point_emit_int(mark_red); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_add, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + 
uint32_t parent_node_id, uint32_t priority, + uint32_t weight, uint32_t level_id, + struct rte_tm_node_params *params), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_u32(parent_node_id); + rte_trace_point_emit_u32(priority); + rte_trace_point_emit_u32(weight); + rte_trace_point_emit_u32(level_id); + rte_trace_point_emit_ptr(params); + rte_trace_point_emit_u32(params->shaper_profile_id); + rte_trace_point_emit_u32(params->n_shared_shapers); + rte_trace_point_emit_u64(params->stats_mask); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_capabilities_get, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + struct rte_tm_node_capabilities *cap), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_ptr(cap); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_cman_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + enum rte_tm_cman_mode cman), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_int(cman); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_delete, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_parent_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + uint32_t parent_node_id, uint32_t priority, + uint32_t weight), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_u32(parent_node_id); + rte_trace_point_emit_u32(priority); + rte_trace_point_emit_u32(weight); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_resume, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_shaper_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + uint32_t shaper_profile_id), + rte_trace_point_emit_u16(port_id); + 
rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_u32(shaper_profile_id); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_shared_shaper_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + uint32_t shared_shaper_id, int add), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_u32(shared_shaper_id); + rte_trace_point_emit_int(add); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_shared_wred_context_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + uint32_t shared_wred_context_id, int add), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_u32(shared_wred_context_id); + rte_trace_point_emit_int(add); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_stats_read, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + struct rte_tm_node_stats *stats, + uint64_t *stats_mask, int clear), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_ptr(stats); + rte_trace_point_emit_ptr(stats_mask); + rte_trace_point_emit_int(clear); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_stats_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + uint64_t stats_mask), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_u64(stats_mask); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_suspend, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_type_get, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + int *is_leaf), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_ptr(is_leaf); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_wfq_weight_mode_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + int *wfq_weight_mode, uint32_t n_sp_priorities), + rte_trace_point_emit_u16(port_id); + 
rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_ptr(wfq_weight_mode); + rte_trace_point_emit_u32(n_sp_priorities); +) + +RTE_TRACE_POINT( + rte_tm_trace_node_wred_context_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t node_id, + uint32_t wred_profile_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(node_id); + rte_trace_point_emit_u32(wred_profile_id); +) + +RTE_TRACE_POINT( + rte_tm_trace_shaper_profile_add, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t shaper_profile_id, + struct rte_tm_shaper_params *profile), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(shaper_profile_id); + rte_trace_point_emit_ptr(profile); + rte_trace_point_emit_u64(profile->committed.rate); + rte_trace_point_emit_u64(profile->committed.size); + rte_trace_point_emit_u64(profile->peak.rate); + rte_trace_point_emit_u64(profile->peak.size); + rte_trace_point_emit_i32(profile->pkt_length_adjust); + rte_trace_point_emit_int(profile->packet_mode); +) + +RTE_TRACE_POINT( + rte_tm_trace_shaper_profile_delete, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t shaper_profile_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(shaper_profile_id); +) + +RTE_TRACE_POINT( + rte_tm_trace_shared_shaper_add_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t shared_shaper_id, + uint32_t shaper_profile_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(shared_shaper_id); + rte_trace_point_emit_u32(shaper_profile_id); +) + +RTE_TRACE_POINT( + rte_tm_trace_shared_shaper_delete, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t shared_shaper_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(shared_shaper_id); +) + +RTE_TRACE_POINT( + rte_tm_trace_shared_wred_context_add_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t shared_wred_context_id, + uint32_t wred_profile_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(shared_wred_context_id); + 
rte_trace_point_emit_u32(wred_profile_id); +) + +RTE_TRACE_POINT( + rte_tm_trace_shared_wred_context_delete, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t shared_wred_context_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(shared_wred_context_id); +) + +RTE_TRACE_POINT( + rte_tm_trace_wred_profile_add, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t wred_profile_id, + struct rte_tm_wred_params *profile), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(wred_profile_id); + rte_trace_point_emit_ptr(profile); + rte_trace_point_emit_int(profile->packet_mode); +) + +RTE_TRACE_POINT( + rte_tm_trace_wred_profile_delete, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t wred_profile_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(wred_profile_id); +) + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_tm.c b/lib/ethdev/rte_tm.c index 9709454f35..b0b43eacc0 100644 --- a/lib/ethdev/rte_tm.c +++ b/lib/ethdev/rte_tm.c @@ -6,6 +6,7 @@ #include #include "rte_ethdev.h" +#include "rte_ethdev_trace.h" #include "rte_tm_driver.h" #include "rte_tm.h" @@ -79,6 +80,7 @@ rte_tm_get_number_of_leaf_nodes(uint16_t port_id, } *n_leaf_nodes = dev->data->nb_tx_queues; + rte_tm_trace_get_number_of_leaf_nodes(port_id, *n_leaf_nodes, error); return 0; } @@ -90,6 +92,7 @@ rte_tm_node_type_get(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_type_get(port_id, node_id, is_leaf); return RTE_TM_FUNC(port_id, node_type_get)(dev, node_id, is_leaf, error); } @@ -100,6 +103,7 @@ int rte_tm_capabilities_get(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_capabilities_get(port_id, cap); return RTE_TM_FUNC(port_id, capabilities_get)(dev, cap, error); } @@ -111,6 +115,7 @@ int rte_tm_level_capabilities_get(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = 
&rte_eth_devices[port_id]; + rte_tm_trace_level_capabilities_get(port_id, level_id, cap); return RTE_TM_FUNC(port_id, level_capabilities_get)(dev, level_id, cap, error); } @@ -122,6 +127,7 @@ int rte_tm_node_capabilities_get(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_capabilities_get(port_id, node_id, cap); return RTE_TM_FUNC(port_id, node_capabilities_get)(dev, node_id, cap, error); } @@ -133,6 +139,7 @@ int rte_tm_wred_profile_add(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_wred_profile_add(port_id, wred_profile_id, profile); return RTE_TM_FUNC(port_id, wred_profile_add)(dev, wred_profile_id, profile, error); } @@ -143,6 +150,7 @@ int rte_tm_wred_profile_delete(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_wred_profile_delete(port_id, wred_profile_id); return RTE_TM_FUNC(port_id, wred_profile_delete)(dev, wred_profile_id, error); } @@ -154,6 +162,8 @@ int rte_tm_shared_wred_context_add_update(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_shared_wred_context_add_update(port_id, shared_wred_context_id, + wred_profile_id); return RTE_TM_FUNC(port_id, shared_wred_context_add_update)(dev, shared_wred_context_id, wred_profile_id, error); } @@ -164,6 +174,7 @@ int rte_tm_shared_wred_context_delete(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_shared_wred_context_delete(port_id, shared_wred_context_id); return RTE_TM_FUNC(port_id, shared_wred_context_delete)(dev, shared_wred_context_id, error); } @@ -175,6 +186,7 @@ int rte_tm_shaper_profile_add(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_shaper_profile_add(port_id, shaper_profile_id, profile); 
return RTE_TM_FUNC(port_id, shaper_profile_add)(dev, shaper_profile_id, profile, error); } @@ -185,6 +197,7 @@ int rte_tm_shaper_profile_delete(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_shaper_profile_delete(port_id, shaper_profile_id); return RTE_TM_FUNC(port_id, shaper_profile_delete)(dev, shaper_profile_id, error); } @@ -196,6 +209,8 @@ int rte_tm_shared_shaper_add_update(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_shared_shaper_add_update(port_id, shared_shaper_id, + shaper_profile_id); return RTE_TM_FUNC(port_id, shared_shaper_add_update)(dev, shared_shaper_id, shaper_profile_id, error); } @@ -206,6 +221,7 @@ int rte_tm_shared_shaper_delete(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_shared_shaper_delete(port_id, shared_shaper_id); return RTE_TM_FUNC(port_id, shared_shaper_delete)(dev, shared_shaper_id, error); } @@ -221,6 +237,8 @@ int rte_tm_node_add(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_add(port_id, node_id, parent_node_id, priority, + weight, level_id, params); return RTE_TM_FUNC(port_id, node_add)(dev, node_id, parent_node_id, priority, weight, level_id, params, error); @@ -232,6 +250,7 @@ int rte_tm_node_delete(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_delete(port_id, node_id); return RTE_TM_FUNC(port_id, node_delete)(dev, node_id, error); } @@ -242,6 +261,7 @@ int rte_tm_node_suspend(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_suspend(port_id, node_id); return RTE_TM_FUNC(port_id, node_suspend)(dev, node_id, error); } @@ -252,6 +272,7 @@ int rte_tm_node_resume(uint16_t port_id, struct rte_tm_error *error) 
{ struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_resume(port_id, node_id); return RTE_TM_FUNC(port_id, node_resume)(dev, node_id, error); } @@ -262,6 +283,7 @@ int rte_tm_hierarchy_commit(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_hierarchy_commit(port_id, clear_on_fail); return RTE_TM_FUNC(port_id, hierarchy_commit)(dev, clear_on_fail, error); } @@ -275,6 +297,8 @@ int rte_tm_node_parent_update(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_parent_update(port_id, node_id, parent_node_id, + priority, weight); return RTE_TM_FUNC(port_id, node_parent_update)(dev, node_id, parent_node_id, priority, weight, error); } @@ -286,6 +310,7 @@ int rte_tm_node_shaper_update(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_shaper_update(port_id, node_id, shaper_profile_id); return RTE_TM_FUNC(port_id, node_shaper_update)(dev, node_id, shaper_profile_id, error); } @@ -298,6 +323,8 @@ int rte_tm_node_shared_shaper_update(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_shared_shaper_update(port_id, node_id, shared_shaper_id, + add); return RTE_TM_FUNC(port_id, node_shared_shaper_update)(dev, node_id, shared_shaper_id, add, error); } @@ -309,6 +336,7 @@ int rte_tm_node_stats_update(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_stats_update(port_id, node_id, stats_mask); return RTE_TM_FUNC(port_id, node_stats_update)(dev, node_id, stats_mask, error); } @@ -321,6 +349,8 @@ int rte_tm_node_wfq_weight_mode_update(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_wfq_weight_mode_update(port_id, node_id, wfq_weight_mode, + 
n_sp_priorities); return RTE_TM_FUNC(port_id, node_wfq_weight_mode_update)(dev, node_id, wfq_weight_mode, n_sp_priorities, error); } @@ -332,6 +362,7 @@ int rte_tm_node_cman_update(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_cman_update(port_id, node_id, cman); return RTE_TM_FUNC(port_id, node_cman_update)(dev, node_id, cman, error); } @@ -343,6 +374,7 @@ int rte_tm_node_wred_context_update(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_wred_context_update(port_id, node_id, wred_profile_id); return RTE_TM_FUNC(port_id, node_wred_context_update)(dev, node_id, wred_profile_id, error); } @@ -355,6 +387,9 @@ int rte_tm_node_shared_wred_context_update(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_shared_wred_context_update(port_id, node_id, + shared_wred_context_id, + add); return RTE_TM_FUNC(port_id, node_shared_wred_context_update)(dev, node_id, shared_wred_context_id, add, error); } @@ -368,6 +403,8 @@ int rte_tm_node_stats_read(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_node_stats_read(port_id, node_id, stats, stats_mask, + clear); return RTE_TM_FUNC(port_id, node_stats_read)(dev, node_id, stats, stats_mask, clear, error); } @@ -380,6 +417,7 @@ int rte_tm_mark_vlan_dei(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_mark_vlan_dei(port_id, mark_green, mark_yellow, mark_red); return RTE_TM_FUNC(port_id, mark_vlan_dei)(dev, mark_green, mark_yellow, mark_red, error); } @@ -392,6 +430,7 @@ int rte_tm_mark_ip_ecn(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_mark_ip_ecn(port_id, mark_green, mark_yellow, mark_red); return RTE_TM_FUNC(port_id, 
mark_ip_ecn)(dev, mark_green, mark_yellow, mark_red, error); } @@ -404,6 +443,7 @@ int rte_tm_mark_ip_dscp(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + rte_tm_trace_mark_ip_dscp(port_id, mark_green, mark_yellow, mark_red); return RTE_TM_FUNC(port_id, mark_ip_dscp)(dev, mark_green, mark_yellow, mark_red, error); } diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 2e282bb457..ee4012789f 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -490,6 +490,36 @@ EXPERIMENTAL { __rte_mtr_trace_meter_vlan_table_update; __rte_mtr_trace_stats_read; __rte_mtr_trace_stats_update; + __rte_tm_trace_capabilities_get; + __rte_tm_trace_get_number_of_leaf_nodes; + __rte_tm_trace_hierarchy_commit; + __rte_tm_trace_level_capabilities_get; + __rte_tm_trace_mark_ip_dscp; + __rte_tm_trace_mark_ip_ecn; + __rte_tm_trace_mark_vlan_dei; + __rte_tm_trace_node_add; + __rte_tm_trace_node_capabilities_get; + __rte_tm_trace_node_cman_update; + __rte_tm_trace_node_delete; + __rte_tm_trace_node_parent_update; + __rte_tm_trace_node_resume; + __rte_tm_trace_node_shaper_update; + __rte_tm_trace_node_shared_shaper_update; + __rte_tm_trace_node_shared_wred_context_update; + __rte_tm_trace_node_stats_read; + __rte_tm_trace_node_stats_update; + __rte_tm_trace_node_suspend; + __rte_tm_trace_node_type_get; + __rte_tm_trace_node_wfq_weight_mode_update; + __rte_tm_trace_node_wred_context_update; + __rte_tm_trace_shaper_profile_add; + __rte_tm_trace_shaper_profile_delete; + __rte_tm_trace_shared_shaper_add_update; + __rte_tm_trace_shared_shaper_delete; + __rte_tm_trace_shared_wred_context_add_update; + __rte_tm_trace_shared_wred_context_delete; + __rte_tm_trace_wred_profile_add; + __rte_tm_trace_wred_profile_delete; }; INTERNAL { From patchwork Thu Aug 4 13:44:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ankur Dwivedi X-Patchwork-Id: 114621 
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Ankur Dwivedi Subject: [PATCH 5/6] ethdev: add trace points for driver Date: Thu, 4 Aug 2022 19:14:29 +0530 Message-ID: <20220804134430.6192-6-adwivedi@marvell.com> In-Reply-To: <20220804134430.6192-1-adwivedi@marvell.com> References: <20220804134430.6192-1-adwivedi@marvell.com> List-Id: DPDK patches and discussions Adds trace points for ethdev driver specific functions in ethdev lib.
Signed-off-by: Ankur Dwivedi --- lib/ethdev/ethdev_driver.c | 29 +++++ lib/ethdev/ethdev_trace_points.c | 66 ++++++++++ lib/ethdev/rte_ethdev_trace.h | 200 +++++++++++++++++++++++++++++++ lib/ethdev/version.map | 22 ++++ 4 files changed, 317 insertions(+) diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index a285f213f0..74ec57f5fe 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -5,6 +5,7 @@ #include #include +#include "rte_ethdev_trace.h" #include "ethdev_driver.h" #include "ethdev_private.h" @@ -113,6 +114,7 @@ rte_eth_dev_allocate(const char *name) unlock: rte_spinlock_unlock(ð_dev_shared_data->ownership_lock); + rte_ethdev_trace_allocate(name, eth_dev); return eth_dev; } @@ -121,6 +123,7 @@ rte_eth_dev_allocated(const char *name) { struct rte_eth_dev *ethdev; + rte_ethdev_trace_allocated(name); eth_dev_shared_data_prepare(); rte_spinlock_lock(ð_dev_shared_data->ownership_lock); @@ -162,6 +165,7 @@ rte_eth_dev_attach_secondary(const char *name) } rte_spinlock_unlock(ð_dev_shared_data->ownership_lock); + rte_ethdev_trace_attach_secondary(name, eth_dev); return eth_dev; } @@ -173,6 +177,7 @@ rte_eth_dev_callback_process(struct rte_eth_dev *dev, struct rte_eth_dev_callback dev_cb; int rc = 0; + rte_ethdev_trace_callback_process(dev, event, ret_param); rte_spinlock_lock(ð_dev_cb_lock); TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) { if (cb_lst->cb_fn == NULL || cb_lst->event != event) @@ -195,6 +200,7 @@ rte_eth_dev_callback_process(struct rte_eth_dev *dev, void rte_eth_dev_probing_finish(struct rte_eth_dev *dev) { + rte_ethdev_trace_probing_finish(dev); if (dev == NULL) return; @@ -214,6 +220,7 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev) int rte_eth_dev_release_port(struct rte_eth_dev *eth_dev) { + rte_ethdev_trace_release_port(eth_dev); if (eth_dev == NULL) return -EINVAL; @@ -264,6 +271,9 @@ rte_eth_dev_create(struct rte_device *device, const char *name, struct rte_eth_dev *ethdev; int retval; + 
rte_ethdev_trace_create(device, name, priv_data_size, + ethdev_bus_specific_init, bus_init_params, + ethdev_init, init_params); RTE_FUNC_PTR_OR_ERR_RET(*ethdev_init, -EINVAL); if (rte_eal_process_type() == RTE_PROC_PRIMARY) { @@ -324,6 +334,7 @@ rte_eth_dev_destroy(struct rte_eth_dev *ethdev, { int ret; + rte_ethdev_trace_destroy(ethdev, ethdev_uninit); ethdev = rte_eth_dev_allocated(ethdev->data->name); if (!ethdev) return -ENODEV; @@ -342,6 +353,7 @@ rte_eth_dev_get_by_name(const char *name) { uint16_t pid; + rte_ethdev_trace_get_by_name(name); if (rte_eth_dev_get_port_by_name(name, &pid)) return NULL; @@ -351,6 +363,7 @@ rte_eth_dev_get_by_name(const char *name) int rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id) { + rte_ethdev_trace_is_rx_hairpin_queue(dev, queue_id); if (dev->data->rx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN) return 1; return 0; @@ -359,6 +372,7 @@ rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id) int rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id) { + rte_ethdev_trace_is_tx_hairpin_queue(dev, queue_id); if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN) return 1; return 0; @@ -367,6 +381,7 @@ rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id) void rte_eth_dev_internal_reset(struct rte_eth_dev *dev) { + rte_ethdev_trace_internal_reset(dev, dev->data->dev_started); if (dev->data->dev_started) { RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n", dev->data->port_id); @@ -451,6 +466,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) unsigned int i; int result = 0; + rte_eth_trace_devargs_parse(dargs, eth_da); memset(eth_da, 0, sizeof(*eth_da)); result = eth_dev_devargs_tokenise(&args, dargs); @@ -495,6 +511,7 @@ rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name, const struct rte_memzone *mz; int rc = 0; + rte_eth_trace_dma_zone_free(dev, 
ring_name, queue_id); rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, queue_id, ring_name); if (rc >= RTE_MEMZONE_NAMESIZE) { @@ -520,6 +537,7 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, const struct rte_memzone *mz; int rc; + rte_eth_trace_dma_zone_reserve(dev, ring_name, queue_id, size, align, socket_id); rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, queue_id, ring_name); if (rc >= RTE_MEMZONE_NAMESIZE) { @@ -553,6 +571,8 @@ rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue, { struct rte_eth_dev *dev; + rte_eth_trace_hairpin_queue_peer_bind(cur_port, cur_queue, peer_info, + direction); if (peer_info == NULL) return -EINVAL; @@ -571,6 +591,7 @@ rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue, { struct rte_eth_dev *dev; + rte_eth_trace_hairpin_queue_peer_unbind(cur_port, cur_queue, direction); /* No need to check the validity again. */ dev = &rte_eth_devices[cur_port]; RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_unbind, @@ -588,6 +609,8 @@ rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue, { struct rte_eth_dev *dev; + rte_eth_trace_hairpin_queue_peer_update(peer_port, peer_queue, cur_info, + peer_info, direction); /* Current queue information is not mandatory. 
*/ if (peer_info == NULL) return -EINVAL; @@ -626,6 +649,8 @@ rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset) if (flag_offset != NULL) *flag_offset = offset; + rte_eth_trace_ip_reassembly_dynfield_register(*field_offset, + *flag_offset); return 0; } @@ -729,6 +754,8 @@ rte_eth_representor_id_get(uint16_t port_id, } out: free(info); + rte_eth_trace_representor_id_get(port_id, type, controller, pf, + representor_port, *repr_id); return ret; } @@ -745,6 +772,7 @@ rte_eth_switch_domain_alloc(uint16_t *domain_id) eth_dev_switch_domains[i].state = RTE_ETH_SWITCH_DOMAIN_ALLOCATED; *domain_id = i; + rte_eth_trace_switch_domain_alloc(*domain_id); return 0; } } @@ -755,6 +783,7 @@ rte_eth_switch_domain_alloc(uint16_t *domain_id) int rte_eth_switch_domain_free(uint16_t domain_id) { + rte_eth_trace_switch_domain_free(domain_id); if (domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID || domain_id >= RTE_MAX_ETHPORTS) return -EINVAL; diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c index 341901d031..61539f379c 100644 --- a/lib/ethdev/ethdev_trace_points.c +++ b/lib/ethdev/ethdev_trace_points.c @@ -731,3 +731,69 @@ RTE_TRACE_POINT_REGISTER(rte_tm_trace_wred_profile_add, RTE_TRACE_POINT_REGISTER(rte_tm_trace_wred_profile_delete, lib.ethdev.tm.wred_profile_delete) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_allocate, + lib.ethdev.allocate) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_allocated, + lib.ethdev.allocated) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_attach_secondary, + lib.ethdev.attach_secondary) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_callback_process, + lib.ethdev.callback_process) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_create, + lib.ethdev.create) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_destroy, + lib.ethdev.destroy) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_by_name, + lib.ethdev.get_by_name) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_is_rx_hairpin_queue, +
lib.ethdev.is_rx_hairpin_queue) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_is_tx_hairpin_queue, + lib.ethdev.is_tx_hairpin_queue) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_probing_finish, + lib.ethdev.probing_finish) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_release_port, + lib.ethdev.release_port) + +RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_internal_reset, + lib.ethdev.internal_reset) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_devargs_parse, + lib.ethdev.devargs_parse) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_dma_zone_free, + lib.ethdev.dma_zone_free) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_dma_zone_reserve, + lib.ethdev.dma_zone_reserve) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_hairpin_queue_peer_bind, + lib.ethdev.hairpin_queue_peer_bind) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_hairpin_queue_peer_unbind, + lib.ethdev.hairpin_queue_peer_unbind) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_hairpin_queue_peer_update, + lib.ethdev.hairpin_queue_peer_update) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_ip_reassembly_dynfield_register, + lib.ethdev.ip_reassembly_dynfield_register) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_representor_id_get, + lib.ethdev.representor_id_get) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_switch_domain_alloc, + lib.ethdev.switch_domain_alloc) + +RTE_TRACE_POINT_REGISTER(rte_eth_trace_switch_domain_free, + lib.ethdev.switch_domain_free) diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h index aa34a6a5e9..a3c0b6fa76 100644 --- a/lib/ethdev/rte_ethdev_trace.h +++ b/lib/ethdev/rte_ethdev_trace.h @@ -17,6 +17,7 @@ extern "C" { #include +#include "ethdev_driver.h" #include "rte_ethdev.h" #include "rte_mtr.h" #include "rte_tm.h" @@ -2134,6 +2135,205 @@ RTE_TRACE_POINT( rte_trace_point_emit_u32(wred_profile_id); ) +RTE_TRACE_POINT( + rte_ethdev_trace_allocate, + RTE_TRACE_POINT_ARGS(const char *name, struct rte_eth_dev *eth_dev), + rte_trace_point_emit_string(name); + 
rte_trace_point_emit_u16(eth_dev->data->nb_rx_queues); + rte_trace_point_emit_u16(eth_dev->data->nb_tx_queues); + rte_trace_point_emit_u16(eth_dev->data->mtu); + rte_trace_point_emit_u16(eth_dev->data->port_id); + rte_trace_point_emit_int(eth_dev->state); +) + +RTE_TRACE_POINT( + rte_ethdev_trace_allocated, + RTE_TRACE_POINT_ARGS(const char *name), + rte_trace_point_emit_string(name); +) + +RTE_TRACE_POINT( + rte_ethdev_trace_attach_secondary, + RTE_TRACE_POINT_ARGS(const char *name, struct rte_eth_dev *eth_dev), + rte_trace_point_emit_string(name); + rte_trace_point_emit_u16(eth_dev->data->nb_rx_queues); + rte_trace_point_emit_u16(eth_dev->data->nb_tx_queues); + rte_trace_point_emit_u16(eth_dev->data->mtu); + rte_trace_point_emit_u16(eth_dev->data->port_id); + rte_trace_point_emit_int(eth_dev->state); +) + +RTE_TRACE_POINT( + rte_ethdev_trace_callback_process, + RTE_TRACE_POINT_ARGS(struct rte_eth_dev *dev, + enum rte_eth_event_type event, + void *ret_param), + rte_trace_point_emit_ptr(dev); + rte_trace_point_emit_int(event); + rte_trace_point_emit_ptr(ret_param); +) + +RTE_TRACE_POINT( + rte_ethdev_trace_create, + RTE_TRACE_POINT_ARGS(struct rte_device *device, const char *name, + size_t priv_data_size, + ethdev_bus_specific_init bus_specific_init, + void *bus_init_params, ethdev_init_t ethdev_init, + void *init_params), + rte_trace_point_emit_ptr(device); + rte_trace_point_emit_string(name); + rte_trace_point_emit_size_t(priv_data_size); + rte_trace_point_emit_ptr(bus_specific_init); + rte_trace_point_emit_ptr(bus_init_params); + rte_trace_point_emit_ptr(ethdev_init); + rte_trace_point_emit_ptr(init_params); +) + +RTE_TRACE_POINT( + rte_ethdev_trace_destroy, + RTE_TRACE_POINT_ARGS(struct rte_eth_dev *ethdev, ethdev_uninit_t ethdev_uninit), + rte_trace_point_emit_ptr(ethdev); + rte_trace_point_emit_ptr(ethdev_uninit); +) + +RTE_TRACE_POINT( + rte_ethdev_trace_get_by_name, + RTE_TRACE_POINT_ARGS(const char *name), + rte_trace_point_emit_string(name); +) + 
+RTE_TRACE_POINT( + rte_ethdev_trace_is_rx_hairpin_queue, + RTE_TRACE_POINT_ARGS(struct rte_eth_dev *dev, uint16_t queue_id), + rte_trace_point_emit_ptr(dev); + rte_trace_point_emit_u16(queue_id); +) + +RTE_TRACE_POINT( + rte_ethdev_trace_is_tx_hairpin_queue, + RTE_TRACE_POINT_ARGS(struct rte_eth_dev *dev, uint16_t queue_id), + rte_trace_point_emit_ptr(dev); + rte_trace_point_emit_u16(queue_id); +) + +RTE_TRACE_POINT( + rte_ethdev_trace_probing_finish, + RTE_TRACE_POINT_ARGS(struct rte_eth_dev *dev), + rte_trace_point_emit_ptr(dev); +) + +RTE_TRACE_POINT( + rte_ethdev_trace_release_port, + RTE_TRACE_POINT_ARGS(struct rte_eth_dev *eth_dev), + rte_trace_point_emit_ptr(eth_dev); +) + +RTE_TRACE_POINT( + rte_ethdev_trace_internal_reset, + RTE_TRACE_POINT_ARGS(struct rte_eth_dev *dev, + uint8_t dev_started), + rte_trace_point_emit_ptr(dev); + rte_trace_point_emit_u8(dev_started); +) + +RTE_TRACE_POINT( + rte_eth_trace_devargs_parse, + RTE_TRACE_POINT_ARGS(const char *devargs, struct rte_eth_devargs *eth_devargs), + rte_trace_point_emit_string(devargs); + rte_trace_point_emit_ptr(eth_devargs); + rte_trace_point_emit_u16(eth_devargs->nb_mh_controllers); + rte_trace_point_emit_u16(eth_devargs->nb_ports); + rte_trace_point_emit_u16(eth_devargs->nb_representor_ports); +) + +RTE_TRACE_POINT( + rte_eth_trace_dma_zone_free, + RTE_TRACE_POINT_ARGS(const struct rte_eth_dev *eth_dev, const char *name, + uint16_t queue_id), + rte_trace_point_emit_ptr(eth_dev); + rte_trace_point_emit_string(name); + rte_trace_point_emit_u16(queue_id); +) + +RTE_TRACE_POINT( + rte_eth_trace_dma_zone_reserve, + RTE_TRACE_POINT_ARGS(const struct rte_eth_dev *eth_dev, const char *name, + uint16_t queue_id, size_t size, unsigned int align, + int socket_id), + rte_trace_point_emit_ptr(eth_dev); + rte_trace_point_emit_string(name); + rte_trace_point_emit_u16(queue_id); + rte_trace_point_emit_size_t(size); + rte_trace_point_emit_u32(align); + rte_trace_point_emit_int(socket_id); +) + +RTE_TRACE_POINT( + 
rte_eth_trace_hairpin_queue_peer_bind, + RTE_TRACE_POINT_ARGS(uint16_t cur_port, uint16_t cur_queue, + struct rte_hairpin_peer_info *peer_info, + uint32_t direction), + rte_trace_point_emit_u16(cur_port); + rte_trace_point_emit_u16(cur_queue); + rte_trace_point_emit_ptr(peer_info); + rte_trace_point_emit_u32(direction); +) + +RTE_TRACE_POINT( + rte_eth_trace_hairpin_queue_peer_unbind, + RTE_TRACE_POINT_ARGS(uint16_t cur_port, uint16_t cur_queue, + uint32_t direction), + rte_trace_point_emit_u16(cur_port); + rte_trace_point_emit_u16(cur_queue); + rte_trace_point_emit_u32(direction); +) + +RTE_TRACE_POINT( + rte_eth_trace_hairpin_queue_peer_update, + RTE_TRACE_POINT_ARGS(uint16_t peer_port, uint16_t peer_queue, + struct rte_hairpin_peer_info *cur_info, + struct rte_hairpin_peer_info *peer_info, + uint32_t direction), + rte_trace_point_emit_u16(peer_port); + rte_trace_point_emit_u16(peer_queue); + rte_trace_point_emit_ptr(cur_info); + rte_trace_point_emit_ptr(peer_info); + rte_trace_point_emit_u32(direction); +) + +RTE_TRACE_POINT( + rte_eth_trace_ip_reassembly_dynfield_register, + RTE_TRACE_POINT_ARGS(int field_offset, int flag_offset), + rte_trace_point_emit_int(field_offset); + rte_trace_point_emit_int(flag_offset); +) + +RTE_TRACE_POINT( + rte_eth_trace_representor_id_get, + RTE_TRACE_POINT_ARGS(uint16_t port_id, + enum rte_eth_representor_type type, + int controller, int pf, int representor_port, + uint16_t repr_id), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_int(type); + rte_trace_point_emit_int(controller); + rte_trace_point_emit_int(pf); + rte_trace_point_emit_int(representor_port); + rte_trace_point_emit_u16(repr_id); +) + +RTE_TRACE_POINT( + rte_eth_trace_switch_domain_alloc, + RTE_TRACE_POINT_ARGS(uint16_t domain_id), + rte_trace_point_emit_u16(domain_id); +) + +RTE_TRACE_POINT( + rte_eth_trace_switch_domain_free, + RTE_TRACE_POINT_ARGS(uint16_t domain_id), + rte_trace_point_emit_u16(domain_id); +) + #ifdef __cplusplus } #endif diff --git 
a/lib/ethdev/version.map b/lib/ethdev/version.map index ee4012789f..ee3ff4793d 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -520,6 +520,28 @@ EXPERIMENTAL { __rte_tm_trace_shared_wred_context_delete; __rte_tm_trace_wred_profile_add; __rte_tm_trace_wred_profile_delete; + __rte_ethdev_trace_allocate; + __rte_ethdev_trace_allocated; + __rte_ethdev_trace_attach_secondary; + __rte_ethdev_trace_callback_process; + __rte_ethdev_trace_create; + __rte_ethdev_trace_destroy; + __rte_ethdev_trace_get_by_name; + __rte_ethdev_trace_is_rx_hairpin_queue; + __rte_ethdev_trace_is_tx_hairpin_queue; + __rte_ethdev_trace_probing_finish; + __rte_ethdev_trace_release_port; + __rte_ethdev_trace_internal_reset; + __rte_eth_trace_devargs_parse; + __rte_eth_trace_dma_zone_free; + __rte_eth_trace_dma_zone_reserve; + __rte_eth_trace_hairpin_queue_peer_bind; + __rte_eth_trace_hairpin_queue_peer_unbind; + __rte_eth_trace_hairpin_queue_peer_update; + __rte_eth_trace_ip_reassembly_dynfield_register; + __rte_eth_trace_representor_id_get; + __rte_eth_trace_switch_domain_alloc; + __rte_eth_trace_switch_domain_free; }; INTERNAL { From patchwork Fri Aug 12 03:13:13 2022 X-Patchwork-Submitter: "Ding, Xuan" X-Patchwork-Id: 114852 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru From: xuan.ding@intel.com To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, ferruh.yigit@xilinx.com, viacheslavo@nvidia.com, jerinj@marvell.com, cristian.dumitrescu@intel.com, aman.deep.singh@intel.com, yuying.zhang@intel.com, chas3@att.com, humin29@huawei.com, gakhil@marvell.com, qi.z.zhang@intel.com, xiao.w.wang@intel.com, qiming.yang@intel.com, wenjun1.wu@intel.com, mdr@ashroe.eu, ndabilpuram@marvell.com, kirankumark@marvell.com, skori@marvell.com, skoteshwar@marvell.com, grive@u256.net, beilei.xing@intel.com, zr@semihalf.com, lironh@marvell.com, mczekaj@marvell.com, nicolas.chautru@intel.com, orika@nvidia.com, konstantin.v.ananyev@yandex.ru, radu.nicolau@intel.com, roy.fan.zhang@intel.com, pbhagavatula@marvell.com, bruce.richardson@intel.com, anatoly.burakov@intel.com, jingjing.wu@intel.com, junfeng.guo@intel.com, jasvinder.singh@intel.com, maxime.coquelin@redhat.com, chenbo.xia@intel.com Cc: dev@dpdk.org, Xuan Ding Subject:
[PATCH v2] ethdev: remove header split Rx offload Date: Fri, 12 Aug 2022 03:13:13 +0000 Message-Id: <20220812031313.87385-1-xuan.ding@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220811092028.99919-1-xuan.ding@intel.com> References: <20220811092028.99919-1-xuan.ding@intel.com> From: Xuan Ding As announced in the deprecation note, this patch removes the Rx offload flag 'RTE_ETH_RX_OFFLOAD_HEADER_SPLIT' and the 'split_hdr_size' field from the structure 'rte_eth_rxmode'. The places where examples and apps initialize the 'split_hdr_size' field, and where drivers check that 'split_hdr_size' is 0, are also removed. Users can still use `RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT` for per-queue packet split offload, which is configured by 'rte_eth_rxseg_split'. Signed-off-by: Xuan Ding Acked-by: Andrew Rybchenko --- v2: * fix CI build error --- app/test-eventdev/test_perf_common.c | 1 - app/test-pipeline/init.c | 1 - app/test-pmd/cmdline.c | 12 ++++++------ app/test/test_link_bonding.c | 1 - app/test/test_link_bonding_mode4.c | 1 - app/test/test_link_bonding_rssconf.c | 2 -- app/test/test_pmd_perf.c | 1 - app/test/test_security_inline_proto.c | 1 - doc/guides/nics/fm10k.rst | 4 ---- doc/guides/nics/ixgbe.rst | 4 ---- doc/guides/rel_notes/deprecation.rst | 6 ------ doc/guides/rel_notes/release_22_11.rst | 5 +++++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 ++-- drivers/net/cnxk/cnxk_ethdev_ops.c | 1 - drivers/net/failsafe/failsafe_ops.c | 2 -- drivers/net/fm10k/fm10k_ethdev.c | 1 - drivers/net/fm10k/fm10k_rxtx_vec.c | 4 ---- drivers/net/i40e/i40e_rxtx_vec_common.h | 4 ---- drivers/net/mvneta/mvneta_ethdev.c | 5 ----- drivers/net/mvpp2/mrvl_ethdev.c | 5 ----- drivers/net/thunderx/nicvf_ethdev.c | 5 ----- examples/bbdev_app/main.c | 1 - 
examples/bond/main.c | 1 - examples/flow_filtering/main.c | 3 --- examples/ip_fragmentation/main.c | 1 - examples/ip_pipeline/link.c | 1 - examples/ip_reassembly/main.c | 1 - examples/ipsec-secgw/ipsec-secgw.c | 1 - examples/ipv4_multicast/main.c | 1 - examples/l2fwd-crypto/main.c | 1 - examples/l2fwd-event/l2fwd_common.c | 3 --- examples/l2fwd-jobstats/main.c | 3 --- examples/l2fwd-keepalive/main.c | 3 --- examples/l2fwd/main.c | 3 --- examples/l3fwd-graph/main.c | 1 - examples/l3fwd-power/main.c | 1 - examples/l3fwd/main.c | 1 - examples/link_status_interrupt/main.c | 3 --- examples/multi_process/symmetric_mp/main.c | 1 - examples/ntb/ntb_fwd.c | 1 - examples/pipeline/obj.c | 1 - examples/qos_meter/main.c | 1 - examples/qos_sched/init.c | 3 --- examples/vhost/main.c | 1 - examples/vmdq/main.c | 1 - examples/vmdq_dcb/main.c | 1 - lib/ethdev/rte_ethdev.c | 1 - lib/ethdev/rte_ethdev.h | 3 --- 48 files changed, 13 insertions(+), 100 deletions(-) diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c index 81420be73a..7474b9270a 100644 --- a/app/test-eventdev/test_perf_common.c +++ b/app/test-eventdev/test_perf_common.c @@ -1244,7 +1244,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt) struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS, - .split_hdr_size = 0, }, .rx_adv_conf = { .rss_conf = { diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c index eee0719b67..d146c44be0 100644 --- a/app/test-pipeline/init.c +++ b/app/test-pipeline/init.c @@ -68,7 +68,6 @@ struct app_params app = { static struct rte_eth_conf port_conf = { .rxmode = { - .split_hdr_size = 0, .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM, }, .rx_adv_conf = { diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index b4fe9dfb17..5787659c32 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -745,7 +745,7 @@ static void cmd_help_long_parsed(void *parsed_result, "port config rx_offload vlan_strip|" 
"ipv4_cksum|udp_cksum|tcp_cksum|tcp_lro|qinq_strip|" - "outer_ipv4_cksum|macsec_strip|header_split|" + "outer_ipv4_cksum|macsec_strip|" "vlan_filter|vlan_extend|jumbo_frame|scatter|" "buffer_split|timestamp|security|keep_crc on|off\n" " Enable or disable a per port Rx offloading" @@ -753,7 +753,7 @@ static void cmd_help_long_parsed(void *parsed_result, "port (port_id) rxq (queue_id) rx_offload vlan_strip|" "ipv4_cksum|udp_cksum|tcp_cksum|tcp_lro|qinq_strip|" - "outer_ipv4_cksum|macsec_strip|header_split|" + "outer_ipv4_cksum|macsec_strip|" "vlan_filter|vlan_extend|jumbo_frame|scatter|" "buffer_split|timestamp|security|keep_crc on|off\n" " Enable or disable a per queue Rx offloading" @@ -12522,7 +12522,7 @@ static cmdline_parse_token_string_t cmd_config_per_port_rx_offload_result_offloa (struct cmd_config_per_port_rx_offload_result, offload, "vlan_strip#ipv4_cksum#udp_cksum#tcp_cksum#tcp_lro#" "qinq_strip#outer_ipv4_cksum#macsec_strip#" - "header_split#vlan_filter#vlan_extend#jumbo_frame#" + "vlan_filter#vlan_extend#jumbo_frame#" "scatter#buffer_split#timestamp#security#" "keep_crc#rss_hash"); static cmdline_parse_token_string_t cmd_config_per_port_rx_offload_result_on_off = @@ -12604,7 +12604,7 @@ static cmdline_parse_inst_t cmd_config_per_port_rx_offload = { .data = NULL, .help_str = "port config rx_offload vlan_strip|ipv4_cksum|" "udp_cksum|tcp_cksum|tcp_lro|qinq_strip|outer_ipv4_cksum|" - "macsec_strip|header_split|vlan_filter|vlan_extend|" + "macsec_strip|vlan_filter|vlan_extend|" "jumbo_frame|scatter|buffer_split|timestamp|security|" "keep_crc|rss_hash on|off", .tokens = { @@ -12654,7 +12654,7 @@ static cmdline_parse_token_string_t cmd_config_per_queue_rx_offload_result_offlo (struct cmd_config_per_queue_rx_offload_result, offload, "vlan_strip#ipv4_cksum#udp_cksum#tcp_cksum#tcp_lro#" "qinq_strip#outer_ipv4_cksum#macsec_strip#" - "header_split#vlan_filter#vlan_extend#jumbo_frame#" + "vlan_filter#vlan_extend#jumbo_frame#" 
"scatter#buffer_split#timestamp#security#keep_crc"); static cmdline_parse_token_string_t cmd_config_per_queue_rx_offload_result_on_off = TOKEN_STRING_INITIALIZER @@ -12712,7 +12712,7 @@ static cmdline_parse_inst_t cmd_config_per_queue_rx_offload = { .help_str = "port rxq rx_offload " "vlan_strip|ipv4_cksum|" "udp_cksum|tcp_cksum|tcp_lro|qinq_strip|outer_ipv4_cksum|" - "macsec_strip|header_split|vlan_filter|vlan_extend|" + "macsec_strip|vlan_filter|vlan_extend|" "jumbo_frame|scatter|buffer_split|timestamp|security|" "keep_crc on|off", .tokens = { diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c index 194ed5a7ec..977ddc1c00 100644 --- a/app/test/test_link_bonding.c +++ b/app/test/test_link_bonding.c @@ -135,7 +135,6 @@ static uint16_t vlan_id = 0x100; static struct rte_eth_conf default_pmd_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, - .split_hdr_size = 0, }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c index d9b9c323c7..21c512c94b 100644 --- a/app/test/test_link_bonding_mode4.c +++ b/app/test/test_link_bonding_mode4.c @@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = { static struct rte_eth_conf default_pmd_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, - .split_hdr_size = 0, }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c index b3d71c6f3a..464fb2dbd0 100644 --- a/app/test/test_link_bonding_rssconf.c +++ b/app/test/test_link_bonding_rssconf.c @@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = { static struct rte_eth_conf default_pmd_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, - .split_hdr_size = 0, }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, @@ -92,7 +91,6 @@ static struct rte_eth_conf default_pmd_conf = { static struct rte_eth_conf rss_pmd_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS, - 
.split_hdr_size = 0, }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c index ec3dc251d1..34551e9b1a 100644 --- a/app/test/test_pmd_perf.c +++ b/app/test/test_pmd_perf.c @@ -62,7 +62,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS]; static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, - .split_hdr_size = 0, }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c index 5f26a04b06..0e5f69e636 100644 --- a/app/test/test_security_inline_proto.c +++ b/app/test/test_security_inline_proto.c @@ -73,7 +73,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS]; static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, - .split_hdr_size = 0, .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SECURITY, }, diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst index d6efac0917..c0a37d35cc 100644 --- a/doc/guides/nics/fm10k.rst +++ b/doc/guides/nics/fm10k.rst @@ -63,8 +63,6 @@ vPMD. They are: * Flow director -* Header split - * RX checksum offload Other features are supported using optional MACRO configuration. They include: @@ -82,8 +80,6 @@ will be checked: * ``RTE_ETH_RX_OFFLOAD_CHECKSUM`` -* ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT`` - * ``fdir_conf->mode`` diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst index ad1a3da610..868d4c08cc 100644 --- a/doc/guides/nics/ixgbe.rst +++ b/doc/guides/nics/ixgbe.rst @@ -77,8 +77,6 @@ They are: * FDIR -* Header split - * RX checksum off load Other features are supported using optional MACRO configuration. They include: @@ -95,8 +93,6 @@ To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be ch * RTE_ETH_RX_OFFLOAD_CHECKSUM -* RTE_ETH_RX_OFFLOAD_HEADER_SPLIT - * dev_conf fdir_conf->mode will also be checked. 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index e7583cae4c..7ceb0c9955 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -92,12 +92,6 @@ Deprecation Notices The ``rate`` parameter will be modified to ``uint32_t`` in DPDK 22.11 so that it can work for more than 64 Gbps. -* ethdev: Since no single PMD supports ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT`` - offload and the ``split_hdr_size`` field in structure ``rte_eth_rxmode`` - to enable per-port header split, they will be removed in DPDK 22.11. - The per-queue Rx packet split offload ``RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT`` - can still be used, and it is configured by ``rte_eth_rxseg_split``. - * ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field, and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``), will be removed in DPDK 20.11. diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 8c021cf050..d28e07b2d6 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -100,6 +100,11 @@ ABI Changes Also, make sure to start the actual text at the margin. ======================================================= + * ethdev: Removed the Rx offload flag ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT`` + and field ``split_hdr_size`` from the structure ``rte_eth_rxmode`` used + to configure header split. Instead, users can still use + ``RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT`` for per-queue packet split + offload, which is configured by ``rte_eth_rxseg_split``. 
Known Issues ------------ diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 330e34427d..0093fbfcff 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -1626,7 +1626,7 @@ Enable or disable a per port Rx offloading on all Rx queues of a port:: * ``offloading``: can be any of these offloading capability: vlan_strip, ipv4_cksum, udp_cksum, tcp_cksum, tcp_lro, qinq_strip, outer_ipv4_cksum, macsec_strip, - header_split, vlan_filter, vlan_extend, jumbo_frame, + vlan_filter, vlan_extend, jumbo_frame, scatter, timestamp, security, keep_crc, rss_hash This command should be run when the port is stopped, or else it will fail. @@ -1641,7 +1641,7 @@ Enable or disable a per queue Rx offloading only on a specific Rx queue:: * ``offloading``: can be any of these offloading capability: vlan_strip, ipv4_cksum, udp_cksum, tcp_cksum, tcp_lro, qinq_strip, outer_ipv4_cksum, macsec_strip, - header_split, vlan_filter, vlan_extend, jumbo_frame, + vlan_filter, vlan_extend, jumbo_frame, scatter, timestamp, security, keep_crc This command should be run when the port is stopped, or else it will fail. 
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c index 1592971073..8c81d8a862 100644 --- a/drivers/net/cnxk/cnxk_ethdev_ops.c +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c @@ -90,7 +90,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, {RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"}, {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"}, {RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"}, - {RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"}, {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"}, {RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"}, {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}, diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c index 55e21d635c..86b4749f30 100644 --- a/drivers/net/failsafe/failsafe_ops.c +++ b/drivers/net/failsafe/failsafe_ops.c @@ -1187,7 +1187,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev, RTE_ETH_RX_OFFLOAD_QINQ_STRIP | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_MACSEC_STRIP | - RTE_ETH_RX_OFFLOAD_HEADER_SPLIT | RTE_ETH_RX_OFFLOAD_VLAN_FILTER | RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | RTE_ETH_RX_OFFLOAD_SCATTER | @@ -1204,7 +1203,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev, RTE_ETH_RX_OFFLOAD_QINQ_STRIP | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_MACSEC_STRIP | - RTE_ETH_RX_OFFLOAD_HEADER_SPLIT | RTE_ETH_RX_OFFLOAD_VLAN_FILTER | RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | RTE_ETH_RX_OFFLOAD_SCATTER | diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c index 8bbd8b445d..3f96703991 100644 --- a/drivers/net/fm10k/fm10k_ethdev.c +++ b/drivers/net/fm10k/fm10k_ethdev.c @@ -1779,7 +1779,6 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev) RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM | - RTE_ETH_RX_OFFLOAD_HEADER_SPLIT | RTE_ETH_RX_OFFLOAD_RSS_HASH); } diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c 
b/drivers/net/fm10k/fm10k_rxtx_vec.c index 10ce5a7582..ad998e83bf 100644 --- a/drivers/net/fm10k/fm10k_rxtx_vec.c +++ b/drivers/net/fm10k/fm10k_rxtx_vec.c @@ -221,10 +221,6 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev) if (fconf->mode != RTE_FDIR_MODE_NONE) return -1; - /* no header split support */ - if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) - return -1; - return 0; #else RTE_SET_USED(dev); diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h index 959832ed6a..08266ce1f3 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_common.h +++ b/drivers/net/i40e/i40e_rxtx_vec_common.h @@ -220,10 +220,6 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev) if (fconf->mode != RTE_FDIR_MODE_NONE) return -1; - /* no header split support */ - if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) - return -1; - /* no QinQ support */ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) return -1; diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c index eef016aa0b..f9e5b96c77 100644 --- a/drivers/net/mvneta/mvneta_ethdev.c +++ b/drivers/net/mvneta/mvneta_ethdev.c @@ -121,11 +121,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev) return -EINVAL; } - if (dev->data->dev_conf.rxmode.split_hdr_size) { - MVNETA_LOG(INFO, "Split headers not supported"); - return -EINVAL; - } - if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) priv->multiseg = 1; diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c index 735efb6cfc..f0c093e0fd 100644 --- a/drivers/net/mvpp2/mrvl_ethdev.c +++ b/drivers/net/mvpp2/mrvl_ethdev.c @@ -490,11 +490,6 @@ mrvl_dev_configure(struct rte_eth_dev *dev) return -EINVAL; } - if (dev->data->dev_conf.rxmode.split_hdr_size) { - MRVL_LOG(INFO, "Split headers not supported"); - return -EINVAL; - } - if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) { MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n", 
dev->data->dev_conf.rxmode.mtu, diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c index 262c024560..b8b94fc4ff 100644 --- a/drivers/net/thunderx/nicvf_ethdev.c +++ b/drivers/net/thunderx/nicvf_ethdev.c @@ -2003,11 +2003,6 @@ nicvf_dev_configure(struct rte_eth_dev *dev) return -EINVAL; } - if (rxmode->split_hdr_size) { - PMD_INIT_LOG(INFO, "Rxmode does not support split header"); - return -EINVAL; - } - if (conf->dcb_capability_en) { PMD_INIT_LOG(INFO, "DCB enable not supported"); return -EINVAL; diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c index fc7e8b8174..ef1528e5ed 100644 --- a/examples/bbdev_app/main.c +++ b/examples/bbdev_app/main.c @@ -71,7 +71,6 @@ mbuf_input(struct rte_mbuf *mbuf) static const struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, - .split_hdr_size = 0, }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, diff --git a/examples/bond/main.c b/examples/bond/main.c index 4efebb3902..9b076bb39f 100644 --- a/examples/bond/main.c +++ b/examples/bond/main.c @@ -115,7 +115,6 @@ static struct rte_mempool *mbuf_pool; static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, - .split_hdr_size = 0, }, .rx_adv_conf = { .rss_conf = { diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c index bfc1949c84..f11f91a67c 100644 --- a/examples/flow_filtering/main.c +++ b/examples/flow_filtering/main.c @@ -133,9 +133,6 @@ init_port(void) uint16_t i; /* Ethernet port configured with default settings. 
8< */ struct rte_eth_conf port_conf = { - .rxmode = { - .split_hdr_size = 0, - }, .txmode = { .offloads = RTE_ETH_TX_OFFLOAD_VLAN_INSERT | diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c index 78205d2e12..69378f99e6 100644 --- a/examples/ip_fragmentation/main.c +++ b/examples/ip_fragmentation/main.c @@ -147,7 +147,6 @@ static struct rte_eth_conf port_conf = { .rxmode = { .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN, - .split_hdr_size = 0, .offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCATTER), }, diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c index 0290767af4..4d69ebebfb 100644 --- a/examples/ip_pipeline/link.c +++ b/examples/ip_pipeline/link.c @@ -47,7 +47,6 @@ static struct rte_eth_conf port_conf_default = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */ - .split_hdr_size = 0, /* Header split buffer size */ }, .rx_adv_conf = { .rss_conf = { diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c index 3ebf895aa0..4cfe09f9d7 100644 --- a/examples/ip_reassembly/main.c +++ b/examples/ip_reassembly/main.c @@ -163,7 +163,6 @@ static struct rte_eth_conf port_conf = { .mq_mode = RTE_ETH_MQ_RX_RSS, .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN, - .split_hdr_size = 0, .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM, }, .rx_adv_conf = { diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 815b9254ae..a0b221a447 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -234,7 +234,6 @@ struct lcore_conf lcore_conf[RTE_MAX_LCORE]; static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS, - .split_hdr_size = 0, .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM, }, .rx_adv_conf = { diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c index bdcaa3bcd1..a3bc977fdf 100644 
--- a/examples/ipv4_multicast/main.c +++ b/examples/ipv4_multicast/main.c @@ -111,7 +111,6 @@ static struct rte_eth_conf port_conf = { .rxmode = { .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN, - .split_hdr_size = 0, }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c index bf4b862379..cb7ba5cb4c 100644 --- a/examples/l2fwd-crypto/main.c +++ b/examples/l2fwd-crypto/main.c @@ -217,7 +217,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE]; static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, - .split_hdr_size = 0, }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c index 41a0d3f22f..162f880224 100644 --- a/examples/l2fwd-event/l2fwd_common.c +++ b/examples/l2fwd-event/l2fwd_common.c @@ -10,9 +10,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc) uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT; uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT; struct rte_eth_conf port_conf = { - .rxmode = { - .split_hdr_size = 0, - }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, }, diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c index 9e71ba2d4e..4be598110c 100644 --- a/examples/l2fwd-jobstats/main.c +++ b/examples/l2fwd-jobstats/main.c @@ -89,9 +89,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE]; struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS]; static struct rte_eth_conf port_conf = { - .rxmode = { - .split_hdr_size = 0, - }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, }, diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c index bd0aa7ea7a..72f9ece3c6 100644 --- a/examples/l2fwd-keepalive/main.c +++ b/examples/l2fwd-keepalive/main.c @@ -78,9 +78,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE]; struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS]; static struct rte_eth_conf port_conf = { - 
.rxmode = { - .split_hdr_size = 0, - }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, }, diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c index 281c6b7a3f..ca802b5fc5 100644 --- a/examples/l2fwd/main.c +++ b/examples/l2fwd/main.c @@ -93,9 +93,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE]; static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS]; static struct rte_eth_conf port_conf = { - .rxmode = { - .split_hdr_size = 0, - }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, }, diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c index 7f00c65609..4d409b3ee2 100644 --- a/examples/l3fwd-graph/main.c +++ b/examples/l3fwd-graph/main.c @@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default); static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS, - .split_hdr_size = 0, }, .rx_adv_conf = { .rss_conf = { diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c index 887c6eae3f..a0dc7009a7 100644 --- a/examples/l3fwd-power/main.c +++ b/examples/l3fwd-power/main.c @@ -250,7 +250,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default); static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS, - .split_hdr_size = 0, .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM, }, .rx_adv_conf = { diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index e090328fcc..865197baa8 100644 --- a/examples/l3fwd/main.c +++ b/examples/l3fwd/main.c @@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) / static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS, - .split_hdr_size = 0, .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM, }, .rx_adv_conf = { diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c index 9699e14ce6..4ac53c42a0 100644 --- a/examples/link_status_interrupt/main.c +++ b/examples/link_status_interrupt/main.c @@ -78,9 +78,6 @@ struct 
rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS]; /* Global configuration stored in a static structure. 8< */ static struct rte_eth_conf port_conf = { - .rxmode = { - .split_hdr_size = 0, - }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, }, diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c index 75237dee6e..1ff85875df 100644 --- a/examples/multi_process/symmetric_mp/main.c +++ b/examples/multi_process/symmetric_mp/main.c @@ -176,7 +176,6 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool, struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS, - .split_hdr_size = 0, .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM, }, .rx_adv_conf = { diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c index 81964d0308..8cfee864af 100644 --- a/examples/ntb/ntb_fwd.c +++ b/examples/ntb/ntb_fwd.c @@ -90,7 +90,6 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST; static struct rte_eth_conf eth_port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS, - .split_hdr_size = 0, }, .rx_adv_conf = { .rss_conf = { diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c index b79f044ac7..908b66274a 100644 --- a/examples/pipeline/obj.c +++ b/examples/pipeline/obj.c @@ -129,7 +129,6 @@ static struct rte_eth_conf port_conf_default = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */ - .split_hdr_size = 0, /* Header split buffer size */ }, .rx_adv_conf = { .rss_conf = { diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c index a0f78e4ad6..319d0a96b2 100644 --- a/examples/qos_meter/main.c +++ b/examples/qos_meter/main.c @@ -52,7 +52,6 @@ static struct rte_mempool *pool = NULL; static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS, - .split_hdr_size = 0, .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM, }, .rx_adv_conf = { diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c index 
8a0fb8a374..6bd342aba2 100644 --- a/examples/qos_sched/init.c +++ b/examples/qos_sched/init.c @@ -56,9 +56,6 @@ int mp_size = NB_MBUF; struct flow_conf qos_conf[MAX_DATA_STREAMS]; static struct rte_eth_conf port_conf = { - .rxmode = { - .split_hdr_size = 0, - }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, }, diff --git a/examples/vhost/main.c b/examples/vhost/main.c index 7e1666f42a..504f5540ae 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -125,7 +125,6 @@ static struct vhost_queue_ops vdev_queue_ops[RTE_MAX_VHOST_DEVICE]; static struct rte_eth_conf vmdq_conf_default = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY, - .split_hdr_size = 0, /* * VLAN strip is necessary for 1G NIC such as I350, * this fixes bug of ipv4 forwarding in guest can't diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c index 10410b8783..0fb9520c25 100644 --- a/examples/vmdq/main.c +++ b/examples/vmdq/main.c @@ -66,7 +66,6 @@ static uint8_t rss_enable; static const struct rte_eth_conf vmdq_conf_default = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY, - .split_hdr_size = 0, }, .txmode = { diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c index d2218f2cf7..dae858514a 100644 --- a/examples/vmdq_dcb/main.c +++ b/examples/vmdq_dcb/main.c @@ -69,7 +69,6 @@ static uint8_t rss_enable; static const struct rte_eth_conf vmdq_dcb_conf_default = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB, - .split_hdr_size = 0, }, .txmode = { .mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB, diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 1979dc0850..ba6e8801bf 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -92,7 +92,6 @@ static const struct { RTE_RX_OFFLOAD_BIT2STR(QINQ_STRIP), RTE_RX_OFFLOAD_BIT2STR(OUTER_IPV4_CKSUM), RTE_RX_OFFLOAD_BIT2STR(MACSEC_STRIP), - RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT), RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER), RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND), RTE_RX_OFFLOAD_BIT2STR(SCATTER), diff --git a/lib/ethdev/rte_ethdev.h 
b/lib/ethdev/rte_ethdev.h index de9e970d4d..1aaaa613b0 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -471,7 +471,6 @@ struct rte_eth_rxmode { uint32_t mtu; /**< Requested MTU. */ /** Maximum allowed size of LRO aggregated packet. */ uint32_t max_lro_pkt_size; - uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/ /** * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags. * Only offloads set on rx_offload_capa field on rte_eth_dev_info @@ -1617,7 +1616,6 @@ struct rte_eth_conf { #define RTE_ETH_RX_OFFLOAD_QINQ_STRIP RTE_BIT64(5) #define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_BIT64(6) #define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP RTE_BIT64(7) -#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT RTE_BIT64(8) #define RTE_ETH_RX_OFFLOAD_VLAN_FILTER RTE_BIT64(9) #define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND RTE_BIT64(10) #define RTE_ETH_RX_OFFLOAD_SCATTER RTE_BIT64(13) @@ -1642,7 +1640,6 @@ struct rte_eth_conf { #define DEV_RX_OFFLOAD_QINQ_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_QINQ_STRIP) RTE_ETH_RX_OFFLOAD_QINQ_STRIP #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM #define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_MACSEC_STRIP) RTE_ETH_RX_OFFLOAD_MACSEC_STRIP -#define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_DEPRECATED(DEV_RX_OFFLOAD_HEADER_SPLIT) RTE_ETH_RX_OFFLOAD_HEADER_SPLIT #define DEV_RX_OFFLOAD_VLAN_FILTER RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_FILTER) RTE_ETH_RX_OFFLOAD_VLAN_FILTER #define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_EXTEND) RTE_ETH_RX_OFFLOAD_VLAN_EXTEND #define DEV_RX_OFFLOAD_SCATTER RTE_DEPRECATED(DEV_RX_OFFLOAD_SCATTER) RTE_ETH_RX_OFFLOAD_SCATTER

From patchwork Fri Aug 12 19:18:15 2022
X-Patchwork-Submitter: Ivan Malov
X-Patchwork-Id: 114932
X-Patchwork-Delegate: thomas@monjalon.net
From: Ivan Malov
To: dev@dpdk.org
Subject: [PATCH 01/13] ethdev: strip experimental tag off Rx metadata negotiate API
Date: Fri, 12 Aug 2022 22:18:15 +0300
Message-Id: <20220812191827.3187441-2-ivan.malov@oktetlabs.ru>

rte_eth_rx_metadata_negotiate() was introduced in DPDK 21.11. Since then, no one has requested any fixes. At the same time, the API is required by series [1] in OvS for the new release.
[1] http://patchwork.ozlabs.org/project/openvswitch/list/?series=310415 Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko --- doc/guides/rel_notes/release_22_11.rst | 3 +++ lib/ethdev/rte_ethdev.h | 4 ---- lib/ethdev/version.map | 2 +- 3 files changed, 4 insertions(+), 5 deletions(-) diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 8c021cf050..6760ab8b87 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -84,6 +84,9 @@ API Changes Also, make sure to start the actual text at the margin. ======================================================= +* ethdev: promoted ``rte_eth_rx_metadata_negotiate()`` + from experimental to stable. + ABI Changes ----------- diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index de9e970d4d..e3f28283ce 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -5315,9 +5315,6 @@ int rte_eth_representor_info_get(uint16_t port_id, #define RTE_ETH_RX_METADATA_TUNNEL_ID RTE_BIT64(2) /** - * @warning - * @b EXPERIMENTAL: this API may change without prior notice - * * Negotiate the NIC's ability to deliver specific kinds of metadata to the PMD. * * Invoke this API before the first rte_eth_dev_configure() invocation @@ -5356,7 +5353,6 @@ int rte_eth_representor_info_get(uint16_t port_id, * - (-EIO) if the device is removed; * - (0) on success */ -__rte_experimental int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features); /** Flag to offload IP reassembly for IPv4 packets. 
*/ diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 03f52fee91..2ecc1af571 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -98,6 +98,7 @@ DPDK_23 { rte_eth_remove_rx_callback; rte_eth_remove_tx_callback; rte_eth_rx_burst_mode_get; + rte_eth_rx_metadata_negotiate; rte_eth_rx_queue_info_get; rte_eth_rx_queue_setup; rte_eth_set_queue_rate_limit; @@ -250,7 +251,6 @@ EXPERIMENTAL { rte_eth_dev_capability_name; rte_eth_dev_conf_get; rte_eth_macaddrs_get; - rte_eth_rx_metadata_negotiate; rte_flow_flex_item_create; rte_flow_flex_item_release; rte_flow_pick_transfer_proxy;

From patchwork Fri Aug 12 19:18:16 2022
X-Patchwork-Submitter: Ivan Malov
X-Patchwork-Id: 114934
X-Patchwork-Delegate: thomas@monjalon.net
From: Ivan Malov
To: dev@dpdk.org
Subject: [PATCH 02/13] ethdev: strip experimental tag off port ID items and actions
Date: Fri, 12 Aug 2022 22:18:16 +0300
Message-Id: <20220812191827.3187441-3-ivan.malov@oktetlabs.ru>

The following set of primitives has been introduced in 21.11: - RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR - RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT - RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR - RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT Since then, no one has requested any fixes. At the same time, the set is required by series [1] in OvS for the new release. [1] http://patchwork.ozlabs.org/project/openvswitch/list/?series=310415 Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko Acked-by: Ori Kam --- doc/guides/rel_notes/release_22_11.rst | 8 ++++++++ lib/ethdev/rte_flow.h | 6 ------ 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 6760ab8b87..f039b857e2 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -87,6 +87,14 @@ API Changes * ethdev: promoted ``rte_eth_rx_metadata_negotiate()`` from experimental to stable. 
+* ethdev: promoted the following flow primitives + from experimental to stable: + + - ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR`` + - ``RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT`` + - ``RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR`` + - ``RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT`` + ABI Changes ----------- diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a79f1e7ef0..e5d2d87403 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -1918,9 +1918,6 @@ static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = { #endif /** - * @warning - * @b EXPERIMENTAL: this structure may change without prior notice - * * Provides an ethdev port ID for use with the following items: * RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR, * RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT. @@ -3643,9 +3640,6 @@ struct rte_flow_action_meter_color { }; /** - * @warning - * @b EXPERIMENTAL: this structure may change without prior notice - * * Provides an ethdev port ID for use with the following actions: * RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, * RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT. 
From patchwork Fri Aug 12 19:18:17 2022
X-Patchwork-Submitter: Ivan Malov
X-Patchwork-Id: 114935
X-Patchwork-Delegate: thomas@monjalon.net
From: Ivan Malov
To: dev@dpdk.org
Subject: [PATCH 03/13] ethdev: remove experimental tag from flow transfer proxy API
Date: Fri, 12 Aug 2022 22:18:17 +0300
Message-Id: <20220812191827.3187441-4-ivan.malov@oktetlabs.ru>

rte_flow_pick_transfer_proxy() was first added to DPDK 21.11. Since then, no one has requested any fixes. At the same time, the API is required by series [1] in OvS for the new release. [1] http://patchwork.ozlabs.org/project/openvswitch/list/?series=310415 Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko Acked-by: Ori Kam --- doc/guides/rel_notes/release_22_11.rst | 3 +++ lib/ethdev/rte_flow.h | 4 ---- lib/ethdev/version.map | 2 +- 3 files changed, 4 insertions(+), 5 deletions(-) diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index f039b857e2..b74e90d27f 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -95,6 +95,9 @@ API Changes - ``RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR`` - ``RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT`` +* ethdev: promoted ``rte_flow_pick_transfer_proxy()`` + from experimental to stable. + ABI Changes ----------- diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index e5d2d87403..bc68fd5631 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4792,9 +4792,6 @@ rte_flow_tunnel_item_release(uint16_t port_id, struct rte_flow_error *error); /** - * @warning - * @b EXPERIMENTAL: this API may change without prior notice. - * * Get a proxy port to manage "transfer" flows. 
* * Managing "transfer" flows requires that the user communicate them @@ -4819,7 +4816,6 @@ rte_flow_tunnel_item_release(uint16_t port_id, * @return * 0 on success, a negative error code otherwise */ -__rte_experimental int rte_flow_pick_transfer_proxy(uint16_t port_id, uint16_t *proxy_port_id, struct rte_flow_error *error); diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 2ecc1af571..25e54f9d3e 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -132,6 +132,7 @@ DPDK_23 { rte_flow_error_set; rte_flow_flush; rte_flow_isolate; + rte_flow_pick_transfer_proxy; rte_flow_query; rte_flow_validate; @@ -253,7 +254,6 @@ EXPERIMENTAL { rte_eth_macaddrs_get; rte_flow_flex_item_create; rte_flow_flex_item_release; - rte_flow_pick_transfer_proxy; # added in 22.03 rte_eth_dev_priority_flow_ctrl_queue_configure;

From patchwork Fri Aug 12 19:18:24 2022
X-Patchwork-Submitter: Ivan Malov
X-Patchwork-Id: 114942
X-Patchwork-Delegate: thomas@monjalon.net
From: Ivan Malov
To: dev@dpdk.org
Subject: [PATCH 10/13] ethdev: remove deprecated flow item PF
Date: Fri, 12 Aug 2022 22:18:24 +0300
Message-Id: <20220812191827.3187441-11-ivan.malov@oktetlabs.ru>

Such deprecation was commenced in DPDK 21.11. Since then, no parties have objected. Remove. The patch breaks ABI. 
Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 9 ---- doc/guides/nics/features/bnxt.ini | 1 - doc/guides/nics/features/cxgbe.ini | 1 - doc/guides/nics/features/default.ini | 1 - doc/guides/nics/features/sfc.ini | 1 - doc/guides/nics/sfc_efx.rst | 2 - doc/guides/prog_guide/rte_flow.rst | 34 ------------- .../prog_guide/switch_representation.rst | 14 ------ doc/guides/rel_notes/release_22_11.rst | 5 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 - drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c | 4 -- drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 25 ---------- drivers/net/bnxt/tf_ulp/ulp_rte_parser.h | 5 -- drivers/net/cxgbe/cxgbe_flow.c | 21 -------- drivers/net/sfc/sfc_mae.c | 48 ------------------- lib/ethdev/rte_flow.c | 1 - lib/ethdev/rte_flow.h | 15 ------ 17 files changed, 5 insertions(+), 184 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 23889f7ab1..2722d5a48d 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -249,7 +249,6 @@ enum index { ITEM_INVERT, ITEM_ANY, ITEM_ANY_NUM, - ITEM_PF, ITEM_VF, ITEM_VF_ID, ITEM_PHY_PORT, @@ -1278,7 +1277,6 @@ static const enum index next_item[] = { ITEM_VOID, ITEM_INVERT, ITEM_ANY, - ITEM_PF, ITEM_VF, ITEM_PHY_PORT, ITEM_PORT_ID, @@ -3461,13 +3459,6 @@ static const struct token token_list[] = { .next = NEXT(item_any, NEXT_ENTRY(COMMON_UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY(struct rte_flow_item_any, num)), }, - [ITEM_PF] = { - .name = "pf", - .help = "match traffic from/to the physical function", - .priv = PRIV_ITEM(PF, 0), - .next = NEXT(NEXT_ENTRY(ITEM_NEXT)), - .call = parse_vc, - }, [ITEM_VF] = { .name = "vf", .help = "match traffic from/to a virtual function ID", diff --git a/doc/guides/nics/features/bnxt.ini b/doc/guides/nics/features/bnxt.ini index afb5414b49..259480d1df 100644 --- a/doc/guides/nics/features/bnxt.ini +++ b/doc/guides/nics/features/bnxt.ini @@ -63,7 +63,6 @@ ipv6 = 
Y gre = Y icmp = Y icmp6 = Y -pf = Y phy_port = Y port_id = Y port_representor = Y diff --git a/doc/guides/nics/features/cxgbe.ini b/doc/guides/nics/features/cxgbe.ini index f674803ec4..d869f2100f 100644 --- a/doc/guides/nics/features/cxgbe.ini +++ b/doc/guides/nics/features/cxgbe.ini @@ -39,7 +39,6 @@ Usage doc = Y eth = Y ipv4 = Y ipv6 = Y -pf = Y phy_port = Y tcp = Y udp = Y diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index d1db0c256a..aff236134e 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -121,7 +121,6 @@ meta = mpls = nsh = nvgre = -pf = pfcp = phy_port = port_id = diff --git a/doc/guides/nics/features/sfc.ini b/doc/guides/nics/features/sfc.ini index 2e798b5ef5..355174d5c2 100644 --- a/doc/guides/nics/features/sfc.ini +++ b/doc/guides/nics/features/sfc.ini @@ -47,7 +47,6 @@ ipv4 = Y ipv6 = Y mark = P nvgre = Y -pf = Y phy_port = Y port_id = Y port_representor = Y diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst index 39c6e23d5b..2dbc59e8f7 100644 --- a/doc/guides/nics/sfc_efx.rst +++ b/doc/guides/nics/sfc_efx.rst @@ -200,8 +200,6 @@ Supported pattern items (***transfer*** rules): - PHY_PORT (cannot repeat; conflicts with other traffic source items) -- PF (cannot repeat; conflicts with other traffic source items) - - VF (cannot repeat; conflicts with other traffic source items) - ETH diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 588914b231..72f0c3d346 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -535,37 +535,6 @@ Usage example, matching non-TCPv4 packets only: | 4 | END | +-------+----------+ -Item: ``PF`` -^^^^^^^^^^^^ - -This item is deprecated. Consider: - - `Item: PORT_REPRESENTOR`_ - - `Item: REPRESENTED_PORT`_ - -Matches traffic originating from (ingress) or going to (egress) the physical -function of the current device. 
- -If supported, should work even if the physical function is not managed by -the application and thus not associated with a DPDK port ID. - -- Can be combined with any number of `Item: VF`_ to match both PF and VF - traffic. -- ``spec``, ``last`` and ``mask`` must not be set. - -.. _table_rte_flow_item_pf: - -.. table:: PF - - +----------+-------+ - | Field | Value | - +==========+=======+ - | ``spec`` | unset | - +----------+-------+ - | ``last`` | unset | - +----------+-------+ - | ``mask`` | unset | - +----------+-------+ - Item: ``VF`` ^^^^^^^^^^^^ @@ -584,7 +553,6 @@ separate entities, should be addressed through their own DPDK port IDs. - Can be specified multiple times to match traffic addressed to several VF IDs. -- Can be combined with a PF item to match both PF and VF traffic. - Default ``mask`` matches any VF ID. .. _table_rte_flow_item_vf: @@ -2074,8 +2042,6 @@ This action is deprecated. Consider: Directs matching traffic to the physical function (PF) of the current device. -See `Item: PF`_. - - No configurable properties. .. _table_rte_flow_action_pf: diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst index 3da30fc779..6fd7b98bdc 100644 --- a/doc/guides/prog_guide/switch_representation.rst +++ b/doc/guides/prog_guide/switch_representation.rst @@ -624,25 +624,11 @@ Same restrictions as `PORT_ID pattern item`_. - Targets **A**, **B** or **C** in `traffic steering`_. -PF Pattern Item -^^^^^^^^^^^^^^^ - -Matches traffic originating from (ingress) or going to (egress) the physical -function of the current device. - -If supported, should work even if the physical function is not managed by -the application and thus not associated with a DPDK port ID. Its behavior is -otherwise similar to `PORT_ID pattern item`_ using PF port ID. - -- Matches **A** in `traffic steering`_. - PF Action ^^^^^^^^^ Directs matching traffic to the physical function of the current device. 
-Same restrictions as `PF pattern item`_. - - Targets **A** in `traffic steering`_. VF Pattern Item diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index c4ce32daed..b7469708af 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -76,6 +76,9 @@ Removed Items Also, make sure to start the actual text at the margin. ======================================================= +* ethdev: removed ``RTE_FLOW_ITEM_TYPE_PF``; + use ``RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT``. + API Changes ----------- @@ -122,6 +125,8 @@ ABI Changes Also, make sure to start the actual text at the margin. ======================================================= +* ethdev: enum ``RTE_FLOW_ITEM`` was affected by deprecation procedure. + Known Issues ------------ diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index c105200fe7..4446560369 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3665,8 +3665,6 @@ This section lists supported pattern items and their attributes, if any. - ``num {unsigned}``: number of layers covered. -- ``pf``: match traffic from/to the physical function. - - ``vf``: match traffic from/to a virtual function ID. - ``id {unsigned}``: VF ID. 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c index e9337ecd2c..17216426d8 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c @@ -260,10 +260,6 @@ struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = { .hdr_type = BNXT_ULP_HDR_TYPE_SUPPORTED, .proto_hdr_func = ulp_rte_item_any_handler }, - [RTE_FLOW_ITEM_TYPE_PF] = { - .hdr_type = BNXT_ULP_HDR_TYPE_SUPPORTED, - .proto_hdr_func = ulp_rte_pf_hdr_handler - }, [RTE_FLOW_ITEM_TYPE_VF] = { .hdr_type = BNXT_ULP_HDR_TYPE_SUPPORTED, .proto_hdr_func = ulp_rte_vf_hdr_handler diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c index 9edf3e8799..6a1d235f77 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c @@ -507,31 +507,6 @@ ulp_rte_parser_implicit_act_port_process(struct ulp_rte_parser_params *params) return BNXT_TF_RC_SUCCESS; } -/* Function to handle the parsing of RTE Flow item PF Header. */ -int32_t -ulp_rte_pf_hdr_handler(const struct rte_flow_item *item __rte_unused, - struct ulp_rte_parser_params *params) -{ - uint16_t port_id = 0; - uint16_t svif_mask = 0xFFFF; - uint32_t ifindex; - - /* Get the implicit port id */ - port_id = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF); - - /* perform the conversion from dpdk port to bnxt ifindex */ - if (ulp_port_db_dev_port_to_ulp_index(params->ulp_ctx, - port_id, - &ifindex)) { - BNXT_TF_DBG(ERR, "ParseErr:Portid is not valid\n"); - return BNXT_TF_RC_ERROR; - } - - /* Update the SVIF details */ - return ulp_rte_parser_svif_set(params, ifindex, svif_mask, - BNXT_ULP_DIR_INVALID); -} - /* Function to handle the parsing of RTE Flow item VF Header. 
*/ int32_t ulp_rte_vf_hdr_handler(const struct rte_flow_item *item, diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h index e4225d00f8..94918f6b4a 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h @@ -80,11 +80,6 @@ bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action actions[], void bnxt_ulp_rte_parser_post_process(struct ulp_rte_parser_params *params); -/* Function to handle the parsing of RTE Flow item PF Header. */ -int32_t -ulp_rte_pf_hdr_handler(const struct rte_flow_item *item, - struct ulp_rte_parser_params *params); - /* Function to handle the parsing of RTE Flow item VF Header. */ int32_t ulp_rte_vf_hdr_handler(const struct rte_flow_item *item, diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c index 6e460dfe2e..e4f9c152b5 100644 --- a/drivers/net/cxgbe/cxgbe_flow.c +++ b/drivers/net/cxgbe/cxgbe_flow.c @@ -288,22 +288,6 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item, return 0; } -static int -ch_rte_parsetype_pf(const void *dmask __rte_unused, - const struct rte_flow_item *item __rte_unused, - struct ch_filter_specification *fs, - struct rte_flow_error *e __rte_unused) -{ - struct rte_flow *flow = (struct rte_flow *)fs->private; - struct rte_eth_dev *dev = flow->dev; - struct adapter *adap = ethdev2adap(dev); - - CXGBE_FILL_FS(1, 1, pfvf_vld); - - CXGBE_FILL_FS(adap->pf, 0x7, pf); - return 0; -} - static int ch_rte_parsetype_vf(const void *dmask, const struct rte_flow_item *item, struct ch_filter_specification *fs, @@ -1022,11 +1006,6 @@ static struct chrte_fparse parseitem[] = { .dmask = &rte_flow_item_tcp_mask, }, - [RTE_FLOW_ITEM_TYPE_PF] = { - .fptr = ch_rte_parsetype_pf, - .dmask = NULL, - }, - [RTE_FLOW_ITEM_TYPE_VF] = { .fptr = ch_rte_parsetype_vf, .dmask = &(const struct rte_flow_item_vf){ diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index eb197fbdeb..e8da2d2a0d 100644 
--- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -1685,42 +1685,6 @@ sfc_mae_rule_parse_item_phy_port(const struct rte_flow_item *item, return 0; } -static int -sfc_mae_rule_parse_item_pf(const struct rte_flow_item *item, - struct sfc_flow_parse_ctx *ctx, - struct rte_flow_error *error) -{ - struct sfc_mae_parse_ctx *ctx_mae = ctx->mae; - const efx_nic_cfg_t *encp = efx_nic_cfg_get(ctx_mae->sa->nic); - efx_mport_sel_t mport_v; - int rc; - - if (ctx_mae->match_mport_set) { - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM, item, - "Can't handle multiple traffic source items"); - } - - rc = efx_mae_mport_by_pcie_function(encp->enc_pf, EFX_PCI_VF_INVALID, - &mport_v); - if (rc != 0) { - return rte_flow_error_set(error, rc, - RTE_FLOW_ERROR_TYPE_ITEM, item, - "Failed to convert the PF ID"); - } - - rc = efx_mae_match_spec_mport_set(ctx_mae->match_spec, &mport_v, NULL); - if (rc != 0) { - return rte_flow_error_set(error, rc, - RTE_FLOW_ERROR_TYPE_ITEM, item, - "Failed to set MPORT for the PF"); - } - - ctx_mae->match_mport_set = B_TRUE; - - return 0; -} - static int sfc_mae_rule_parse_item_vf(const struct rte_flow_item *item, struct sfc_flow_parse_ctx *ctx, @@ -2591,18 +2555,6 @@ static const struct sfc_flow_item sfc_flow_items[] = { .ctx_type = SFC_FLOW_PARSE_CTX_MAE, .parse = sfc_mae_rule_parse_item_phy_port, }, - { - .type = RTE_FLOW_ITEM_TYPE_PF, - .name = "PF", - /* - * In terms of RTE flow, this item is a META one, - * and its position in the pattern is don't care. 
- */ - .prev_layer = SFC_FLOW_ITEM_ANY_LAYER, - .layer = SFC_FLOW_ITEM_ANY_LAYER, - .ctx_type = SFC_FLOW_PARSE_CTX_MAE, - .parse = sfc_mae_rule_parse_item_pf, - }, { .type = RTE_FLOW_ITEM_TYPE_VF, .name = "VF", diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 501be9d602..6ece72bf36 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -97,7 +97,6 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = { MK_FLOW_ITEM(VOID, 0), MK_FLOW_ITEM(INVERT, 0), MK_FLOW_ITEM(ANY, sizeof(struct rte_flow_item_any)), - MK_FLOW_ITEM(PF, 0), MK_FLOW_ITEM(VF, sizeof(struct rte_flow_item_vf)), MK_FLOW_ITEM(PHY_PORT, sizeof(struct rte_flow_item_phy_port)), MK_FLOW_ITEM(PORT_ID, sizeof(struct rte_flow_item_port_id)), diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index bc68fd5631..97de98e232 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -188,20 +188,6 @@ enum rte_flow_item_type { */ RTE_FLOW_ITEM_TYPE_ANY, - /** - * @deprecated - * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR - * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT - * - * [META] - * - * Matches traffic originating from (ingress) or going to (egress) - * the physical function of the current device. - * - * No associated specification structure. - */ - RTE_FLOW_ITEM_TYPE_PF, - /** * @deprecated * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR @@ -732,7 +718,6 @@ static const struct rte_flow_item_any rte_flow_item_any_mask = { * * - Can be specified multiple times to match traffic addressed to several * VF IDs. - * - Can be combined with a PF item to match both PF and VF traffic. * * A zeroed mask can be used to match any VF ID. 
 */

From patchwork Fri Aug 12 19:18:25 2022
X-Patchwork-Submitter: Ivan Malov
X-Patchwork-Id: 114943
X-Patchwork-Delegate: thomas@monjalon.net
From: Ivan Malov
To: dev@dpdk.org
Cc: Ori Kam, Eli Britstein, Ilya Maximets, Thomas Monjalon, Stephen Hemminger, Jerin Jacob, Andrew Rybchenko, Aman Singh, Yuying Zhang, Ajit Khaparde, Somnath Kotur, Rahul Lakkireddy, Ferruh Yigit, Beilei Xing
Subject: [PATCH 11/13] ethdev: remove deprecated flow item VF
Date: Fri, 12 Aug 2022 22:18:25 +0300
Message-Id: <20220812191827.3187441-12-ivan.malov@oktetlabs.ru>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220812191827.3187441-1-ivan.malov@oktetlabs.ru>
References: <20220812191827.3187441-1-ivan.malov@oktetlabs.ru>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Such deprecation was commenced in DPDK 21.11.
Since then, no parties have objected. Remove.

The patch breaks ABI.

Signed-off-by: Ivan Malov
Reviewed-by: Andrew Rybchenko
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                   |  25 -
 doc/guides/nics/features/bnxt.ini             |   1 -
 doc/guides/nics/features/cxgbe.ini            |   1 -
 doc/guides/nics/features/default.ini          |   1 -
 doc/guides/nics/features/i40e.ini             |   1 -
 doc/guides/nics/features/sfc.ini              |   1 -
 doc/guides/nics/sfc_efx.rst                   |   2 -
 doc/guides/prog_guide/rte_flow.rst            |  40 +-
 doc/guides/rel_notes/release_22_11.rst        |   3 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |   4 -
 drivers/net/bnxt/bnxt_flow.c                  |  58 +-
 drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c |   4 -
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |  34 -
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h      |   5 -
 drivers/net/cxgbe/cxgbe_flow.c                |  36 -
 drivers/net/i40e/i40e_flow.c                  | 987 ++----------------
 drivers/net/sfc/sfc_mae.c                     |  77 --
 lib/ethdev/rte_flow.c                         |   1 -
 lib/ethdev/rte_flow.h                         |  46 -
 19 files changed, 110 insertions(+), 1217 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 2722d5a48d..31b906178c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -249,8 +249,6 @@ enum index {
 	ITEM_INVERT,
 	ITEM_ANY,
 	ITEM_ANY_NUM,
-	ITEM_VF,
-	ITEM_VF_ID,
 	ITEM_PHY_PORT,
 	ITEM_PHY_PORT_INDEX,
 	ITEM_PORT_ID,
@@ -1277,7 +1275,6 @@ static const enum index next_item[] = {
 	ITEM_VOID,
 	ITEM_INVERT,
 	ITEM_ANY,
-	ITEM_VF,
 	ITEM_PHY_PORT,
 	ITEM_PORT_ID,
 	ITEM_MARK,
@@ -1348,12 +1345,6 @@ static const enum index item_any[] = {
 	ZERO,
 };
 
-static const enum index item_vf[] = {
-	ITEM_VF_ID,
-	ITEM_NEXT,
-	ZERO,
-};
-
 static const enum index item_phy_port[] = {
 	ITEM_PHY_PORT_INDEX,
 	ITEM_NEXT,
@@ -3459,19 +3450,6 @@ static const struct token token_list[] = {
 		.next = NEXT(item_any, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
 		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_any, num)),
 	},
-	[ITEM_VF] = {
-		.name = "vf",
-		.help = "match traffic from/to a virtual function ID",
-		.priv = PRIV_ITEM(VF, sizeof(struct rte_flow_item_vf)),
-		.next = NEXT(item_vf),
-		.call = parse_vc,
-	},
-	[ITEM_VF_ID] = {
-		.name = "id",
-		.help = "VF ID",
-		.next = NEXT(item_vf, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_vf, id)),
-	},
 	[ITEM_PHY_PORT] = {
 		.name = "phy_port",
 		.help = "match traffic from/to a specific physical port",
@@ -10669,9 +10647,6 @@ flow_item_default_mask(const struct rte_flow_item *item)
 	case RTE_FLOW_ITEM_TYPE_ANY:
 		mask = &rte_flow_item_any_mask;
 		break;
-	case RTE_FLOW_ITEM_TYPE_VF:
-		mask = &rte_flow_item_vf_mask;
-		break;
 	case RTE_FLOW_ITEM_TYPE_PORT_ID:
 		mask = &rte_flow_item_port_id_mask;
 		break;
diff --git a/doc/guides/nics/features/bnxt.ini b/doc/guides/nics/features/bnxt.ini
index 259480d1df..860a0a8cf6 100644
--- a/doc/guides/nics/features/bnxt.ini
+++ b/doc/guides/nics/features/bnxt.ini
@@ -69,7 +69,6 @@ port_representor = Y
 represented_port = Y
 tcp = Y
 udp = Y
-vf = Y
 vlan = Y
 vxlan = Y
 
diff --git a/doc/guides/nics/features/cxgbe.ini b/doc/guides/nics/features/cxgbe.ini
index d869f2100f..3f11cc2ac0 100644
--- a/doc/guides/nics/features/cxgbe.ini
+++ b/doc/guides/nics/features/cxgbe.ini
@@ -42,7 +42,6 @@ ipv6 = Y
 phy_port = Y
 tcp = Y
 udp = Y
-vf = Y
 vlan = Y
 
 [rte_flow actions]
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index aff236134e..8fbe1de46a 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -135,7 +135,6 @@ sctp =
 tag =
 tcp =
 udp =
-vf =
 vlan =
 vxlan =
 vxlan_gpe =
diff --git a/doc/guides/nics/features/i40e.ini b/doc/guides/nics/features/i40e.ini
index dd18fec217..95e39aaba0 100644
--- a/doc/guides/nics/features/i40e.ini
+++ b/doc/guides/nics/features/i40e.ini
@@ -68,7 +68,6 @@ raw = Y
 sctp = Y
 tcp = Y
 udp = Y
-vf = Y
 vlan = Y
 vxlan = Y
 
diff --git a/doc/guides/nics/features/sfc.ini b/doc/guides/nics/features/sfc.ini
index 355174d5c2..363fc6d0ec 100644
--- a/doc/guides/nics/features/sfc.ini
+++ b/doc/guides/nics/features/sfc.ini
@@ -55,7 +55,6 @@ pppoes = Y
 represented_port = Y
 tcp = Y
 udp = Y
-vf = Y
 vlan = Y
 vxlan = Y
 
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 2dbc59e8f7..0e0088b09f 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -200,8 +200,6 @@ Supported pattern items (***transfer*** rules):
 
 - PHY_PORT (cannot repeat; conflicts with other traffic source items)
 
-- VF (cannot repeat; conflicts with other traffic source items)
-
 - ETH
 
 - VLAN (double-tagging is supported)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 72f0c3d346..85bf2bf123 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -535,40 +535,6 @@ Usage example, matching non-TCPv4 packets only:
    | 4     | END      |
    +-------+----------+
 
-Item: ``VF``
-^^^^^^^^^^^^
-
-This item is deprecated. Consider:
- - `Item: PORT_REPRESENTOR`_
- - `Item: REPRESENTED_PORT`_
-
-Matches traffic originating from (ingress) or going to (egress) a given
-virtual function of the current device.
-
-If supported, should work even if the virtual function is not managed by the
-application and thus not associated with a DPDK port ID.
-
-Note this pattern item does not match VF representors traffic which, as
-separate entities, should be addressed through their own DPDK port IDs.
-
-- Can be specified multiple times to match traffic addressed to several VF
-  IDs.
-- Default ``mask`` matches any VF ID.
-
-.. _table_rte_flow_item_vf:
-
-.. table:: VF
-
-   +----------+----------+---------------------------+
-   | Field    | Subfield | Value                     |
-   +==========+==========+===========================+
-   | ``spec`` | ``id``   | destination VF ID         |
-   +----------+----------+---------------------------+
-   | ``last`` | ``id``   | upper range value         |
-   +----------+----------+---------------------------+
-   | ``mask`` | ``id``   | zeroed to match any VF ID |
-   +----------+----------+---------------------------+
-
 Item: ``PHY_PORT``
 ^^^^^^^^^^^^^^^^^^
 
@@ -2063,13 +2029,11 @@ This action is deprecated. Consider:
 
 Directs matching traffic to a given virtual function of the current device.
 
-Packets matched by a VF pattern item can be redirected to their original VF
-ID instead of the specified one. This parameter may not be available and is
+Packets can be redirected to the VF they originate from,
+instead of the specified one. This parameter may not be available and is
 not guaranteed to work properly if the VF part is matched by a prior flow
 rule or if packets are not addressed to a VF in the first place.
 
-See `Item: VF`_.
-
 .. _table_rte_flow_action_vf:
 
 .. table:: VF
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index b7469708af..cf3d6e4efb 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -79,6 +79,9 @@ Removed Items
 * ethdev: removed ``RTE_FLOW_ITEM_TYPE_PF``;
   use ``RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT``.
 
+* ethdev: removed ``RTE_FLOW_ITEM_TYPE_VF``;
+  use ``RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT``.
+
 
 API Changes
 -----------
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 4446560369..17049e59f8 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3665,10 +3665,6 @@ This section lists supported pattern items and their attributes, if any.
 
   - ``num {unsigned}``: number of layers covered.
 
-- ``vf``: match traffic from/to a virtual function ID.
-
-  - ``id {unsigned}``: VF ID.
-
 - ``phy_port``: match traffic from/to a specific physical port.
 
   - ``index {unsigned}``: physical port index.
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index f8e10968e3..96ef00460c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -126,8 +126,7 @@ bnxt_filter_type_check(const struct rte_flow_item pattern[],
 }
 
 static int
-bnxt_validate_and_parse_flow_type(struct bnxt *bp,
-				  const struct rte_flow_attr *attr,
+bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				  const struct rte_flow_item pattern[],
 				  struct rte_flow_error *error,
 				  struct bnxt_filter_info *filter)
@@ -148,16 +147,13 @@ bnxt_validate_and_parse_flow_type(struct bnxt *bp,
 	const struct rte_flow_item_vxlan *vxlan_mask;
 	uint8_t vni_mask[] = {0xFF, 0xFF, 0xFF};
 	uint8_t tni_mask[] = {0xFF, 0xFF, 0xFF};
-	const struct rte_flow_item_vf *vf_spec;
 	uint32_t tenant_id_be = 0, valid_flags = 0;
 	bool vni_masked = 0;
 	bool tni_masked = 0;
 	uint32_t en_ethertype;
 	uint8_t inner = 0;
-	uint32_t vf = 0;
 	uint32_t en = 0;
 	int use_ntuple;
-	int dflt_vnic;
 
 	use_ntuple = bnxt_filter_type_check(pattern, error);
 	if (use_ntuple < 0)
@@ -680,56 +676,6 @@ bnxt_validate_and_parse_flow_type(struct bnxt *bp,
 			}
 			break;
-		case RTE_FLOW_ITEM_TYPE_VF:
-			vf_spec = item->spec;
-			vf = vf_spec->id;
-			if (!BNXT_PF(bp)) {
-				rte_flow_error_set(error,
-						   EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "Configuring on a VF!");
-				return -rte_errno;
-			}
-
-			if (vf >= bp->pdev->max_vfs) {
-				rte_flow_error_set(error,
-						   EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "Incorrect VF id!");
-				return -rte_errno;
-			}
-
-			if (!attr->transfer) {
-				rte_flow_error_set(error,
-						   ENOTSUP,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "Matching VF traffic without"
-						   " affecting it (transfer attribute)"
-						   " is unsupported");
-				return -rte_errno;
-			}
-
-			filter->mirror_vnic_id =
-			dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf);
-			if (dflt_vnic < 0) {
-				/* This simply indicates there's no driver
-				 * loaded. This is not an error.
-				 */
-				rte_flow_error_set
-					(error,
-					 EINVAL,
-					 RTE_FLOW_ERROR_TYPE_ITEM,
-					 item,
-					 "Unable to get default VNIC for VF");
-				return -rte_errno;
-			}
-
-			filter->mirror_vnic_id = dflt_vnic;
-			en |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID;
-			break;
 		default:
 			break;
 		}
@@ -1298,7 +1244,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 	int rc, use_ntuple;
 
 	rc =
-	bnxt_validate_and_parse_flow_type(bp, attr, pattern, error, filter);
+	bnxt_validate_and_parse_flow_type(attr, pattern, error, filter);
 	if (rc != 0)
 		goto ret;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c
index 17216426d8..23081fc99b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c
@@ -260,10 +260,6 @@ struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = {
 	.hdr_type = BNXT_ULP_HDR_TYPE_SUPPORTED,
 	.proto_hdr_func = ulp_rte_item_any_handler
 	},
-	[RTE_FLOW_ITEM_TYPE_VF] = {
-	.hdr_type = BNXT_ULP_HDR_TYPE_SUPPORTED,
-	.proto_hdr_func = ulp_rte_vf_hdr_handler
-	},
 	[RTE_FLOW_ITEM_TYPE_PHY_PORT] = {
 	.hdr_type = BNXT_ULP_HDR_TYPE_SUPPORTED,
 	.proto_hdr_func = ulp_rte_phy_port_hdr_handler
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 6a1d235f77..38799840dd 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -507,40 +507,6 @@ ulp_rte_parser_implicit_act_port_process(struct ulp_rte_parser_params *params)
 	return BNXT_TF_RC_SUCCESS;
 }
 
-/* Function to handle the parsing of RTE Flow item VF Header. */
-int32_t
-ulp_rte_vf_hdr_handler(const struct rte_flow_item *item,
-		       struct ulp_rte_parser_params *params)
-{
-	const struct rte_flow_item_vf *vf_spec = item->spec;
-	const struct rte_flow_item_vf *vf_mask = item->mask;
-	uint16_t mask = 0;
-	uint32_t ifindex;
-	int32_t rc = BNXT_TF_RC_PARSE_ERR;
-
-	/* Get VF rte_flow_item for Port details */
-	if (!vf_spec) {
-		BNXT_TF_DBG(ERR, "ParseErr:VF id is not valid\n");
-		return rc;
-	}
-	if (!vf_mask) {
-		BNXT_TF_DBG(ERR, "ParseErr:VF mask is not valid\n");
-		return rc;
-	}
-	mask = vf_mask->id;
-
-	/* perform the conversion from VF Func id to bnxt ifindex */
-	if (ulp_port_db_dev_func_id_to_ulp_index(params->ulp_ctx,
-						 vf_spec->id,
-						 &ifindex)) {
-		BNXT_TF_DBG(ERR, "ParseErr:Portid is not valid\n");
-		return rc;
-	}
-	/* Update the SVIF details */
-	return ulp_rte_parser_svif_set(params, ifindex, mask,
-				       BNXT_ULP_DIR_INVALID);
-}
-
 /* Parse items PORT_ID, PORT_REPRESENTOR and REPRESENTED_PORT. */
 int32_t
 ulp_rte_port_hdr_handler(const struct rte_flow_item *item,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 94918f6b4a..0e246abbd8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -80,11 +80,6 @@ bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action actions[],
 void
 bnxt_ulp_rte_parser_post_process(struct ulp_rte_parser_params *params);
 
-/* Function to handle the parsing of RTE Flow item VF Header. */
-int32_t
-ulp_rte_vf_hdr_handler(const struct rte_flow_item *item,
-		       struct ulp_rte_parser_params *params);
-
 /* Parse items PORT_ID, PORT_REPRESENTOR and REPRESENTED_PORT. */
 int32_t
 ulp_rte_port_hdr_handler(const struct rte_flow_item *item,
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index e4f9c152b5..8b4efc697b 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -288,35 +288,6 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
 	return 0;
 }
 
-static int
-ch_rte_parsetype_vf(const void *dmask, const struct rte_flow_item *item,
-		    struct ch_filter_specification *fs,
-		    struct rte_flow_error *e)
-{
-	const struct rte_flow_item_vf *umask = item->mask;
-	const struct rte_flow_item_vf *val = item->spec;
-	const struct rte_flow_item_vf *mask;
-
-	/* If user has not given any mask, then use chelsio supported mask. */
-	mask = umask ? umask : (const struct rte_flow_item_vf *)dmask;
-
-	CXGBE_FILL_FS(1, 1, pfvf_vld);
-
-	if (!val)
-		return 0; /* Wildcard, match all Vf */
-
-	if (val->id > UCHAR_MAX)
-		return rte_flow_error_set(e, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ITEM,
-					  item,
-					  "VF ID > MAX(255)");
-
-	if (val->id || (umask && umask->id))
-		CXGBE_FILL_FS(val->id, mask->id, vf);
-
-	return 0;
-}
-
 static int
 ch_rte_parsetype_udp(const void *dmask, const struct rte_flow_item *item,
 		     struct ch_filter_specification *fs,
@@ -1005,13 +976,6 @@ static struct chrte_fparse parseitem[] = {
 		.fptr = ch_rte_parsetype_tcp,
 		.dmask = &rte_flow_item_tcp_mask,
 	},
-
-	[RTE_FLOW_ITEM_TYPE_VF] = {
-		.fptr = ch_rte_parsetype_vf,
-		.dmask = &(const struct rte_flow_item_vf){
-			.id = 0xffffffff,
-		}
-	},
 };
 
 static int
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 4f3808cb5f..65a826d51c 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -62,7 +62,6 @@ static int i40e_flow_parse_ethertype_action(struct rte_eth_dev *dev,
 					    struct rte_flow_error *error,
 					    struct rte_eth_ethertype_filter *filter);
 static int i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
-					const struct rte_flow_attr *attr,
 					const struct rte_flow_item *pattern,
 					struct rte_flow_error *error,
 					struct i40e_fdir_filter_conf *filter);
@@ -148,1171 +147,508 @@ const struct rte_flow_ops i40e_flow_ops = {
 static union i40e_filter_t cons_filter;
 static enum rte_filter_type cons_filter_type = RTE_ETH_FILTER_NONE;
 
-/* internal pattern w/o VOID items */
-struct rte_flow_item g_items[32];
-
-/* Pattern matched ethertype filter */
-static enum rte_flow_item_type pattern_ethertype[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-/* Pattern matched flow director filter */
-static enum rte_flow_item_type pattern_fdir_ipv4[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_udp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_tcp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_sctp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_gtpc[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_GTPC,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_gtpu[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_GTPU,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_gtpu_ipv4[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_GTPU,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_gtpu_ipv6[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_GTPU,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_udp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_tcp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_sctp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_gtpc[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_GTPC,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_gtpu[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_GTPU,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_gtpu_ipv4[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_GTPU,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_gtpu_ipv6[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_GTPU,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_vlan[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
+/* internal pattern w/o VOID items */
+struct rte_flow_item g_items[32];
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_3[] = {
+/* Pattern matched ethertype filter */
+static enum rte_flow_item_type pattern_ethertype[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_1[] = {
+/* Pattern matched flow director filter */
+static enum rte_flow_item_type pattern_fdir_ipv4[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_2[] = {
+static enum rte_flow_item_type pattern_fdir_ipv4_udp[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_UDP,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_3[] = {
+static enum rte_flow_item_type pattern_fdir_ipv4_tcp[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV4,
 	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_1[] = {
+static enum rte_flow_item_type pattern_fdir_ipv4_sctp[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV4,
 	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_2[] = {
+static enum rte_flow_item_type pattern_fdir_ipv4_gtpc[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_GTPC,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_3[] = {
+static enum rte_flow_item_type pattern_fdir_ipv4_gtpu[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_2[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_3[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV6,
 	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_GTPU,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_2[] = {
+static enum rte_flow_item_type pattern_fdir_ipv4_gtpu_ipv4[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_IPV4,
 	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_GTPU,
+	RTE_FLOW_ITEM_TYPE_IPV4,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_3[] = {
+static enum rte_flow_item_type pattern_fdir_ipv4_gtpu_ipv6[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
-	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_IPV4,
 	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_1[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_GTPU,
 	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_2[] = {
+static enum rte_flow_item_type pattern_fdir_ipv6[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_3[] = {
+static enum rte_flow_item_type pattern_fdir_ipv6_udp[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_UDP,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_1[] = {
+static enum rte_flow_item_type pattern_fdir_ipv6_tcp[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_2[] = {
+static enum rte_flow_item_type pattern_fdir_ipv6_sctp[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV6,
 	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_3[] = {
+static enum rte_flow_item_type pattern_fdir_ipv6_gtpc[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_VLAN,
 	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_RAW,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_vf[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_VF,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_udp_vf[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
 	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_VF,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_tcp_vf[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_VF,
-	RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_sctp_vf[] = {
-	RTE_FLOW_ITEM_TYPE_ETH,
-	RTE_FLOW_ITEM_TYPE_IPV4,
-	RTE_FLOW_ITEM_TYPE_SCTP,
-	RTE_FLOW_ITEM_TYPE_VF,
+	RTE_FLOW_ITEM_TYPE_GTPC,
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
-static enum rte_flow_item_type pattern_fdir_ipv6_vf[] = {
+static enum rte_flow_item_type 
pattern_fdir_ipv6_gtpu[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_VF, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_GTPU, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_udp_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_gtpu_ipv4[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_UDP, - RTE_FLOW_ITEM_TYPE_VF, + RTE_FLOW_ITEM_TYPE_GTPU, + RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_tcp_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_gtpu_ipv6[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_TCP, - RTE_FLOW_ITEM_TYPE_VF, - RTE_FLOW_ITEM_TYPE_END, -}; - -static enum rte_flow_item_type pattern_fdir_ipv6_sctp_vf[] = { - RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_GTPU, RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_SCTP, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ethertype_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_ethertype_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ethertype_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_ethertype_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ethertype_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_ethertype_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_RAW, - 
RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_TCP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, 
RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_TCP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_TCP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type 
pattern_fdir_ipv6_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_TCP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_TCP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_TCP, 
RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ethertype_vlan_vf[] = { +static enum rte_flow_item_type pattern_fdir_ethertype_vlan[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_UDP, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp[] = { RTE_FLOW_ITEM_TYPE_ETH, 
RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_TCP, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_SCTP, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_UDP, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_TCP, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_SCTP, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, 
RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, 
RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, @@ -1320,32 +656,29 @@ static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_3_vf[] = { RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_TCP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_TCP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, @@ -1353,32 +686,29 @@ static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_3_vf[] = { RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_2_vf[] = { +static enum rte_flow_item_type 
pattern_fdir_vlan_ipv4_sctp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, @@ -1386,62 +716,56 @@ static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_3_vf[] = { RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type 
pattern_fdir_vlan_ipv6_udp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, @@ -1449,32 +773,29 @@ static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_3_vf[] = { RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_1[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_TCP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_TCP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, @@ -1482,32 +803,29 @@ static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_3_vf[] = { RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_1_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_1[] = { 
RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_2_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; -static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_3_vf[] = { +static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV6, @@ -1515,7 +833,6 @@ static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_3_vf[] = { RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, RTE_FLOW_ITEM_TYPE_RAW, - RTE_FLOW_ITEM_TYPE_VF, RTE_FLOW_ITEM_TYPE_END, }; @@ -1765,78 +1082,6 @@ static struct i40e_valid_pattern i40e_supported_patterns[] = { { pattern_fdir_vlan_ipv6_sctp_raw_1, i40e_flow_parse_fdir_filter }, { pattern_fdir_vlan_ipv6_sctp_raw_2, i40e_flow_parse_fdir_filter }, { pattern_fdir_vlan_ipv6_sctp_raw_3, i40e_flow_parse_fdir_filter }, - /* FDIR - support VF item */ - { pattern_fdir_ipv4_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_udp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_tcp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_sctp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_udp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_tcp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_sctp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ethertype_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ethertype_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ethertype_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_raw_1_vf, 
i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_udp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_udp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_udp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_tcp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_tcp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_tcp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_sctp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_sctp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv4_sctp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_udp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_udp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_udp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_tcp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_tcp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_tcp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_sctp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_sctp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ipv6_sctp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ethertype_vlan_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_udp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_tcp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_sctp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_udp_vf, 
i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_tcp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_sctp_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ethertype_vlan_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ethertype_vlan_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_ethertype_vlan_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_udp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_udp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_udp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_tcp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_tcp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_tcp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_sctp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_sctp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv4_sctp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_udp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_udp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_udp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_tcp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_tcp_raw_2_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_tcp_raw_3_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_sctp_raw_1_vf, i40e_flow_parse_fdir_filter }, - { pattern_fdir_vlan_ipv6_sctp_raw_2_vf, 
i40e_flow_parse_fdir_filter },
-	{ pattern_fdir_vlan_ipv6_sctp_raw_3_vf, i40e_flow_parse_fdir_filter },
 	/* VXLAN */
 	{ pattern_vxlan_1, i40e_flow_parse_vxlan_filter },
 	{ pattern_vxlan_2, i40e_flow_parse_vxlan_filter },
@@ -2348,7 +1593,6 @@ i40e_flow_set_filter_spi(struct i40e_fdir_filter_conf *filter,
  */
 static int
 i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
-			     const struct rte_flow_attr *attr,
 			     const struct rte_flow_item *pattern,
 			     struct rte_flow_error *error,
 			     struct i40e_fdir_filter_conf *filter)
@@ -2365,7 +1609,6 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 	const struct rte_flow_item_gtp *gtp_spec, *gtp_mask;
 	const struct rte_flow_item_esp *esp_spec, *esp_mask;
 	const struct rte_flow_item_raw *raw_spec, *raw_mask;
-	const struct rte_flow_item_vf *vf_spec;
 	const struct rte_flow_item_l2tpv3oip *l2tpv3oip_spec, *l2tpv3oip_mask;
 
 	uint8_t pctype = 0;
@@ -3067,29 +2310,6 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			filter->input.flow_ext.raw_id = raw_id;
 			filter->input.flow_ext.is_flex_flow = true;
 			break;
-		case RTE_FLOW_ITEM_TYPE_VF:
-			vf_spec = item->spec;
-			if (!attr->transfer) {
-				rte_flow_error_set(error, ENOTSUP,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "Matching VF traffic"
-						   " without affecting it"
-						   " (transfer attribute)"
-						   " is unsupported");
-				return -rte_errno;
-			}
-			filter->input.flow_ext.is_vf = 1;
-			filter->input.flow_ext.dst_id = vf_spec->id;
-			if (filter->input.flow_ext.is_vf &&
-			    filter->input.flow_ext.dst_id >= pf->vf_num) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "Invalid VF ID for FDIR.");
-				return -rte_errno;
-			}
-			break;
 		case RTE_FLOW_ITEM_TYPE_L2TPV3OIP:
 			l2tpv3oip_spec = item->spec;
 			l2tpv3oip_mask = item->mask;
@@ -3277,8 +2497,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
 			&filter->fdir_filter;
 	int ret;
 
-	ret = i40e_flow_parse_fdir_pattern(dev, attr, pattern, error,
-					   fdir_filter);
+	ret = i40e_flow_parse_fdir_pattern(dev, pattern, error, fdir_filter);
 	if (ret)
 		return ret;
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index e8da2d2a0d..06de659ee2 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -1685,71 +1685,6 @@ sfc_mae_rule_parse_item_phy_port(const struct rte_flow_item *item,
 	return 0;
 }
 
-static int
-sfc_mae_rule_parse_item_vf(const struct rte_flow_item *item,
-			   struct sfc_flow_parse_ctx *ctx,
-			   struct rte_flow_error *error)
-{
-	struct sfc_mae_parse_ctx *ctx_mae = ctx->mae;
-	const efx_nic_cfg_t *encp = efx_nic_cfg_get(ctx_mae->sa->nic);
-	const struct rte_flow_item_vf supp_mask = {
-		.id = 0xffffffff,
-	};
-	const void *def_mask = &rte_flow_item_vf_mask;
-	const struct rte_flow_item_vf *spec = NULL;
-	const struct rte_flow_item_vf *mask = NULL;
-	efx_mport_sel_t mport_v;
-	int rc;
-
-	if (ctx_mae->match_mport_set) {
-		return rte_flow_error_set(error, ENOTSUP,
-				RTE_FLOW_ERROR_TYPE_ITEM, item,
-				"Can't handle multiple traffic source items");
-	}
-
-	rc = sfc_flow_parse_init(item,
-				 (const void **)&spec, (const void **)&mask,
-				 (const void *)&supp_mask, def_mask,
-				 sizeof(struct rte_flow_item_vf), error);
-	if (rc != 0)
-		return rc;
-
-	if (mask->id != supp_mask.id) {
-		return rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM, item,
-				"Bad mask in the VF pattern item");
-	}
-
-	/*
-	 * If "spec" is not set, the item requests any VF related to the
-	 * PF of the current DPDK port (but not the PF itself).
-	 * Reject this match criterion as unsupported.
-	 */
-	if (spec == NULL) {
-		return rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM, item,
-				"Bad spec in the VF pattern item");
-	}
-
-	rc = efx_mae_mport_by_pcie_function(encp->enc_pf, spec->id, &mport_v);
-	if (rc != 0) {
-		return rte_flow_error_set(error, rc,
-				RTE_FLOW_ERROR_TYPE_ITEM, item,
-				"Failed to convert the PF + VF IDs");
-	}
-
-	rc = efx_mae_match_spec_mport_set(ctx_mae->match_spec, &mport_v, NULL);
-	if (rc != 0) {
-		return rte_flow_error_set(error, rc,
-				RTE_FLOW_ERROR_TYPE_ITEM, item,
-				"Failed to set MPORT for the PF + VF");
-	}
-
-	ctx_mae->match_mport_set = B_TRUE;
-
-	return 0;
-}
-
 /*
  * Having this field ID in a field locator means that this
  * locator cannot be used to actually set the field at the
@@ -2555,18 +2490,6 @@ static const struct sfc_flow_item sfc_flow_items[] = {
 		.ctx_type = SFC_FLOW_PARSE_CTX_MAE,
 		.parse = sfc_mae_rule_parse_item_phy_port,
 	},
-	{
-		.type = RTE_FLOW_ITEM_TYPE_VF,
-		.name = "VF",
-		/*
-		 * In terms of RTE flow, this item is a META one,
-		 * and its position in the pattern is don't care.
-		 */
-		.prev_layer = SFC_FLOW_ITEM_ANY_LAYER,
-		.layer = SFC_FLOW_ITEM_ANY_LAYER,
-		.ctx_type = SFC_FLOW_PARSE_CTX_MAE,
-		.parse = sfc_mae_rule_parse_item_vf,
-	},
 	{
 		.type = RTE_FLOW_ITEM_TYPE_ETH,
 		.name = "ETH",
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 6ece72bf36..65c74687e3 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -97,7 +97,6 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(VOID, 0),
 	MK_FLOW_ITEM(INVERT, 0),
 	MK_FLOW_ITEM(ANY, sizeof(struct rte_flow_item_any)),
-	MK_FLOW_ITEM(VF, sizeof(struct rte_flow_item_vf)),
 	MK_FLOW_ITEM(PHY_PORT, sizeof(struct rte_flow_item_phy_port)),
 	MK_FLOW_ITEM(PORT_ID, sizeof(struct rte_flow_item_port_id)),
 	MK_FLOW_ITEM(RAW, sizeof(struct rte_flow_item_raw)),
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 97de98e232..0a98db9c1c 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -188,20 +188,6 @@ enum rte_flow_item_type {
 	 */
 	RTE_FLOW_ITEM_TYPE_ANY,
 
-	/**
-	 * @deprecated
-	 * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
-	 * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
-	 *
-	 * [META]
-	 *
-	 * Matches traffic originating from (ingress) or going to (egress) a
-	 * given virtual function of the current device.
-	 *
-	 * See struct rte_flow_item_vf.
-	 */
-	RTE_FLOW_ITEM_TYPE_VF,
-
 	/**
 	 * @deprecated
 	 * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
@@ -700,38 +686,6 @@ static const struct rte_flow_item_any rte_flow_item_any_mask = {
 };
 #endif
 
-/**
- * @deprecated
- * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
- * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
- *
- * RTE_FLOW_ITEM_TYPE_VF
- *
- * Matches traffic originating from (ingress) or going to (egress) a given
- * virtual function of the current device.
- *
- * If supported, should work even if the virtual function is not managed by
- * the application and thus not associated with a DPDK port ID.
- * - * Note this pattern item does not match VF representors traffic which, as - * separate entities, should be addressed through their own DPDK port IDs. - * - * - Can be specified multiple times to match traffic addressed to several - * VF IDs. - * - * A zeroed mask can be used to match any VF ID. - */ -struct rte_flow_item_vf { - uint32_t id; /**< VF ID. */ -}; - -/** Default mask for RTE_FLOW_ITEM_TYPE_VF. */ -#ifndef __cplusplus -static const struct rte_flow_item_vf rte_flow_item_vf_mask = { - .id = 0x00000000, -}; -#endif - /** * @deprecated * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR From patchwork Fri Aug 12 19:18:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 114944 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9E58CA0543; Fri, 12 Aug 2022 21:19:42 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4574542C66; Fri, 12 Aug 2022 21:18:41 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id 6A43A40A7F for ; Fri, 12 Aug 2022 21:18:32 +0200 (CEST) Received: from bree.oktetlabs.ru (bree.oktetlabs.ru [192.168.34.5]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPS id 1467CC9; Fri, 12 Aug 2022 22:18:32 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 1467CC9 Authentication-Results: shelob.oktetlabs.ru/1467CC9; dkim=none; dkim-atps=neutral From: Ivan Malov To: dev@dpdk.org Cc: Ori Kam , Eli Britstein , Ilya Maximets , Thomas Monjalon , Stephen 
Hemminger , Jerin Jacob , Andrew Rybchenko , Aman Singh , Yuying Zhang , Ajit Khaparde , Somnath Kotur , Rahul Lakkireddy , Ferruh Yigit , Matan Azrad , Viacheslav Ovsiienko Subject: [PATCH 12/13] ethdev: remove deprecated flow item PHY PORT Date: Fri, 12 Aug 2022 22:18:26 +0300 Message-Id: <20220812191827.3187441-13-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220812191827.3187441-1-ivan.malov@oktetlabs.ru> References: <20220812191827.3187441-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Such deprecation was commenced in DPDK 21.11. Since then, no parties have objected. Remove. The patch breaks ABI. Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 24 ------ doc/guides/nics/features/bnxt.ini | 1 - doc/guides/nics/features/cxgbe.ini | 1 - doc/guides/nics/features/default.ini | 1 - doc/guides/nics/features/mlx5.ini | 1 - doc/guides/nics/features/sfc.ini | 1 - doc/guides/nics/sfc_efx.rst | 2 - doc/guides/prog_guide/rte_flow.rst | 45 ----------- doc/guides/rel_notes/release_22_11.rst | 3 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 - drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c | 4 - drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 75 ------------------- drivers/net/bnxt/tf_ulp/ulp_rte_parser.h | 5 -- drivers/net/cxgbe/cxgbe_flow.c | 32 -------- drivers/net/sfc/sfc_mae.c | 69 ----------------- lib/ethdev/rte_flow.c | 1 - lib/ethdev/rte_flow.h | 56 -------------- 17 files changed, 3 insertions(+), 322 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 31b906178c..758c1f0efa 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -249,8 +249,6 @@ enum index { ITEM_INVERT, ITEM_ANY, ITEM_ANY_NUM, - ITEM_PHY_PORT, - 
ITEM_PHY_PORT_INDEX, ITEM_PORT_ID, ITEM_PORT_ID_ID, ITEM_MARK, @@ -1275,7 +1273,6 @@ static const enum index next_item[] = { ITEM_VOID, ITEM_INVERT, ITEM_ANY, - ITEM_PHY_PORT, ITEM_PORT_ID, ITEM_MARK, ITEM_RAW, @@ -1345,12 +1342,6 @@ static const enum index item_any[] = { ZERO, }; -static const enum index item_phy_port[] = { - ITEM_PHY_PORT_INDEX, - ITEM_NEXT, - ZERO, -}; - static const enum index item_port_id[] = { ITEM_PORT_ID_ID, ITEM_NEXT, @@ -3450,21 +3441,6 @@ static const struct token token_list[] = { .next = NEXT(item_any, NEXT_ENTRY(COMMON_UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY(struct rte_flow_item_any, num)), }, - [ITEM_PHY_PORT] = { - .name = "phy_port", - .help = "match traffic from/to a specific physical port", - .priv = PRIV_ITEM(PHY_PORT, - sizeof(struct rte_flow_item_phy_port)), - .next = NEXT(item_phy_port), - .call = parse_vc, - }, - [ITEM_PHY_PORT_INDEX] = { - .name = "index", - .help = "physical port index", - .next = NEXT(item_phy_port, NEXT_ENTRY(COMMON_UNSIGNED), - item_param), - .args = ARGS(ARGS_ENTRY(struct rte_flow_item_phy_port, index)), - }, [ITEM_PORT_ID] = { .name = "port_id", .help = "match traffic from/to a given DPDK port ID", diff --git a/doc/guides/nics/features/bnxt.ini b/doc/guides/nics/features/bnxt.ini index 860a0a8cf6..c05bcff909 100644 --- a/doc/guides/nics/features/bnxt.ini +++ b/doc/guides/nics/features/bnxt.ini @@ -63,7 +63,6 @@ ipv6 = Y gre = Y icmp = Y icmp6 = Y -phy_port = Y port_id = Y port_representor = Y represented_port = Y diff --git a/doc/guides/nics/features/cxgbe.ini b/doc/guides/nics/features/cxgbe.ini index 3f11cc2ac0..295816ab9d 100644 --- a/doc/guides/nics/features/cxgbe.ini +++ b/doc/guides/nics/features/cxgbe.ini @@ -39,7 +39,6 @@ Usage doc = Y eth = Y ipv4 = Y ipv6 = Y -phy_port = Y tcp = Y udp = Y vlan = Y diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 8fbe1de46a..7ed5bd8cb9 100644 --- a/doc/guides/nics/features/default.ini +++ 
b/doc/guides/nics/features/default.ini @@ -122,7 +122,6 @@ mpls = nsh = nvgre = pfcp = -phy_port = port_id = port_representor = ppp = diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini index e056516deb..e5974063c8 100644 --- a/doc/guides/nics/features/mlx5.ini +++ b/doc/guides/nics/features/mlx5.ini @@ -76,7 +76,6 @@ mark = Y meta = Y mpls = Y nvgre = Y -phy_port = Y port_id = Y tag = Y tcp = Y diff --git a/doc/guides/nics/features/sfc.ini b/doc/guides/nics/features/sfc.ini index 363fc6d0ec..3dac105e35 100644 --- a/doc/guides/nics/features/sfc.ini +++ b/doc/guides/nics/features/sfc.ini @@ -47,7 +47,6 @@ ipv4 = Y ipv6 = Y mark = P nvgre = Y -phy_port = Y port_id = Y port_representor = Y pppoed = Y diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst index 0e0088b09f..6eca86e96f 100644 --- a/doc/guides/nics/sfc_efx.rst +++ b/doc/guides/nics/sfc_efx.rst @@ -198,8 +198,6 @@ Supported pattern items (***transfer*** rules): - PORT_ID (cannot repeat; conflicts with other traffic source items) -- PHY_PORT (cannot repeat; conflicts with other traffic source items) - - ETH - VLAN (double-tagging is supported) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 85bf2bf123..9cf4261494 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -535,44 +535,6 @@ Usage example, matching non-TCPv4 packets only: | 4 | END | +-------+----------+ -Item: ``PHY_PORT`` -^^^^^^^^^^^^^^^^^^ - -This item is deprecated. Consider: - - `Item: PORT_REPRESENTOR`_ - - `Item: REPRESENTED_PORT`_ - -Matches traffic originating from (ingress) or going to (egress) a physical -port of the underlying device. - -The first PHY_PORT item overrides the physical port normally associated with -the specified DPDK input port (port_id). This item can be provided several -times to match additional physical ports. 
- -Note that physical ports are not necessarily tied to DPDK input ports -(port_id) when those are not under DPDK control. Possible values are -specific to each device, they are not necessarily indexed from zero and may -not be contiguous. - -As a device property, the list of allowed values as well as the value -associated with a port_id should be retrieved by other means. - -- Default ``mask`` matches any port index. - -.. _table_rte_flow_item_phy_port: - -.. table:: PHY_PORT - - +----------+-----------+--------------------------------+ - | Field | Subfield | Value | - +==========+===========+================================+ - | ``spec`` | ``index`` | physical port index | - +----------+-----------+--------------------------------+ - | ``last`` | ``index`` | upper range value | - +----------+-----------+--------------------------------+ - | ``mask`` | ``index`` | zeroed to match any port index | - +----------+-----------+--------------------------------+ - Item: ``PORT_ID`` ^^^^^^^^^^^^^^^^^ @@ -586,11 +548,6 @@ port ID. Normally only supported if the port ID in question is known by the underlying PMD and related to the device the flow rule is created against. -This must not be confused with `Item: PHY_PORT`_ which refers to the -physical port of a device, whereas `Item: PORT_ID`_ refers to a ``struct -rte_eth_dev`` object on the application side (also known as "port -representor" depending on the kind of underlying device). - - Default ``mask`` matches the specified DPDK port ID. .. _table_rte_flow_item_port_id: @@ -2056,8 +2013,6 @@ This action is deprecated. Consider: Directs matching traffic to a given physical port index of the underlying device. -See `Item: PHY_PORT`_. - .. _table_rte_flow_action_phy_port: .. 
table:: PHY_PORT diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index cf3d6e4efb..343f40a041 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -82,6 +82,9 @@ Removed Items * ethdev: removed ``RTE_FLOW_ITEM_TYPE_VF``; use ``RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT``. +* ethdev: removed ``RTE_FLOW_ITEM_TYPE_PHY_PORT``; + use ``RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT``. + API Changes ----------- diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 17049e59f8..b9c2d7a6fe 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3665,10 +3665,6 @@ This section lists supported pattern items and their attributes, if any. - ``num {unsigned}``: number of layers covered. -- ``phy_port``: match traffic from/to a specific physical port. - - - ``index {unsigned}``: physical port index. - - ``port_id``: match traffic from/to a given DPDK port ID. - ``id {unsigned}``: DPDK port ID. 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c index 23081fc99b..66cd2fba7e 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c @@ -260,10 +260,6 @@ struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = { .hdr_type = BNXT_ULP_HDR_TYPE_SUPPORTED, .proto_hdr_func = ulp_rte_item_any_handler }, - [RTE_FLOW_ITEM_TYPE_PHY_PORT] = { - .hdr_type = BNXT_ULP_HDR_TYPE_SUPPORTED, - .proto_hdr_func = ulp_rte_phy_port_hdr_handler - }, [RTE_FLOW_ITEM_TYPE_PORT_ID] = { .hdr_type = BNXT_ULP_HDR_TYPE_SUPPORTED, .proto_hdr_func = ulp_rte_port_hdr_handler diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c index 38799840dd..3faafcf788 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c @@ -571,81 +571,6 @@ ulp_rte_port_hdr_handler(const struct rte_flow_item *item, return ulp_rte_parser_svif_set(params, ifindex, mask, item_dir); } -/* Function to handle the parsing of RTE Flow item phy port Header. 
*/ -int32_t -ulp_rte_phy_port_hdr_handler(const struct rte_flow_item *item, - struct ulp_rte_parser_params *params) -{ - const struct rte_flow_item_phy_port *port_spec = item->spec; - const struct rte_flow_item_phy_port *port_mask = item->mask; - uint16_t mask = 0; - int32_t rc = BNXT_TF_RC_ERROR; - uint16_t svif; - enum bnxt_ulp_direction_type dir; - struct ulp_rte_hdr_field *hdr_field; - - /* Copy the rte_flow_item for phy port into hdr_field */ - if (!port_spec) { - BNXT_TF_DBG(ERR, "ParseErr:Phy Port id is not valid\n"); - return rc; - } - if (!port_mask) { - BNXT_TF_DBG(ERR, "ParseErr:Phy Port mask is not valid\n"); - return rc; - } - mask = port_mask->index; - - /* Update the match port type */ - ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_MATCH_PORT_TYPE, - BNXT_ULP_INTF_TYPE_PHY_PORT); - - /* Compute the Hw direction */ - bnxt_ulp_rte_parser_direction_compute(params); - - /* Direction validation */ - dir = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_DIRECTION); - if (dir == BNXT_ULP_DIR_EGRESS) { - BNXT_TF_DBG(ERR, - "Parse Err:Phy ports are valid only for ingress\n"); - return BNXT_TF_RC_PARSE_ERR; - } - - /* Get the physical port details from port db */ - rc = ulp_port_db_phy_port_svif_get(params->ulp_ctx, port_spec->index, - &svif); - if (rc) { - BNXT_TF_DBG(ERR, "Failed to get port details\n"); - return BNXT_TF_RC_PARSE_ERR; - } - - /* Update the SVIF details */ - svif = rte_cpu_to_be_16(svif); - hdr_field = ¶ms->hdr_field[BNXT_ULP_PROTO_HDR_FIELD_SVIF_IDX]; - memcpy(hdr_field->spec, &svif, sizeof(svif)); - memcpy(hdr_field->mask, &mask, sizeof(mask)); - hdr_field->size = sizeof(svif); - ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_SVIF_FLAG, - rte_be_to_cpu_16(svif)); - if (!mask) { - uint32_t port_id = 0; - uint16_t phy_port = 0; - - /* Validate the control port */ - port_id = ULP_COMP_FLD_IDX_RD(params, - BNXT_ULP_CF_IDX_DEV_PORT_ID); - if (ulp_port_db_phy_port_get(params->ulp_ctx, - port_id, &phy_port) || - (uint16_t)port_spec->index != phy_port) { - 
BNXT_TF_DBG(ERR, "Mismatch of control and phy_port\n"); - return BNXT_TF_RC_PARSE_ERR; - } - ULP_BITMAP_SET(params->hdr_bitmap.bits, - BNXT_ULP_HDR_BIT_SVIF_IGNORE); - memset(hdr_field->mask, 0xFF, sizeof(mask)); - } - return BNXT_TF_RC_SUCCESS; -} - /* Function to handle the update of proto header based on field values */ static void ulp_rte_l2_proto_type_update(struct ulp_rte_parser_params *param, diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h index 0e246abbd8..5a9b056b16 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h @@ -85,11 +85,6 @@ int32_t ulp_rte_port_hdr_handler(const struct rte_flow_item *item, struct ulp_rte_parser_params *params); -/* Function to handle the parsing of RTE Flow item port Header. */ -int32_t -ulp_rte_phy_port_hdr_handler(const struct rte_flow_item *item, - struct ulp_rte_parser_params *params); - /* Function to handle the RTE item Ethernet Header. */ int32_t ulp_rte_eth_hdr_handler(const struct rte_flow_item *item, diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c index 8b4efc697b..d383334415 100644 --- a/drivers/net/cxgbe/cxgbe_flow.c +++ b/drivers/net/cxgbe/cxgbe_flow.c @@ -208,31 +208,6 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item, return 0; } -static int -ch_rte_parsetype_port(const void *dmask, const struct rte_flow_item *item, - struct ch_filter_specification *fs, - struct rte_flow_error *e) -{ - const struct rte_flow_item_phy_port *val = item->spec; - const struct rte_flow_item_phy_port *umask = item->mask; - const struct rte_flow_item_phy_port *mask; - - mask = umask ? 
umask : (const struct rte_flow_item_phy_port *)dmask; - - if (!val) - return 0; /* Wildcard, match all physical ports */ - - if (val->index > 0x7) - return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, - item, - "port index up to 0x7 is supported"); - - if (val->index || (umask && umask->index)) - CXGBE_FILL_FS(val->index, mask->index, iport); - - return 0; -} - static int ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item, struct ch_filter_specification *fs, @@ -926,13 +901,6 @@ static struct chrte_fparse parseitem[] = { } }, - [RTE_FLOW_ITEM_TYPE_PHY_PORT] = { - .fptr = ch_rte_parsetype_port, - .dmask = &(const struct rte_flow_item_phy_port){ - .index = 0x7, - } - }, - [RTE_FLOW_ITEM_TYPE_VLAN] = { .fptr = ch_rte_parsetype_vlan, .dmask = &(const struct rte_flow_item_vlan){ diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index 06de659ee2..4ddb63cbe5 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -1628,63 +1628,6 @@ sfc_mae_rule_parse_item_ethdev_based(const struct rte_flow_item *item, return 0; } -static int -sfc_mae_rule_parse_item_phy_port(const struct rte_flow_item *item, - struct sfc_flow_parse_ctx *ctx, - struct rte_flow_error *error) -{ - struct sfc_mae_parse_ctx *ctx_mae = ctx->mae; - const struct rte_flow_item_phy_port supp_mask = { - .index = 0xffffffff, - }; - const void *def_mask = &rte_flow_item_phy_port_mask; - const struct rte_flow_item_phy_port *spec = NULL; - const struct rte_flow_item_phy_port *mask = NULL; - efx_mport_sel_t mport_v; - int rc; - - if (ctx_mae->match_mport_set) { - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM, item, - "Can't handle multiple traffic source items"); - } - - rc = sfc_flow_parse_init(item, - (const void **)&spec, (const void **)&mask, - (const void *)&supp_mask, def_mask, - sizeof(struct rte_flow_item_phy_port), error); - if (rc != 0) - return rc; - - if (mask->index != supp_mask.index) { - return 
rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, item, - "Bad mask in the PHY_PORT pattern item"); - } - - /* If "spec" is not set, could be any physical port */ - if (spec == NULL) - return 0; - - rc = efx_mae_mport_by_phy_port(spec->index, &mport_v); - if (rc != 0) { - return rte_flow_error_set(error, rc, - RTE_FLOW_ERROR_TYPE_ITEM, item, - "Failed to convert the PHY_PORT index"); - } - - rc = efx_mae_match_spec_mport_set(ctx_mae->match_spec, &mport_v, NULL); - if (rc != 0) { - return rte_flow_error_set(error, rc, - RTE_FLOW_ERROR_TYPE_ITEM, item, - "Failed to set MPORT for the PHY_PORT"); - } - - ctx_mae->match_mport_set = B_TRUE; - - return 0; -} - /* * Having this field ID in a field locator means that this * locator cannot be used to actually set the field at the @@ -2478,18 +2421,6 @@ static const struct sfc_flow_item sfc_flow_items[] = { .ctx_type = SFC_FLOW_PARSE_CTX_MAE, .parse = sfc_mae_rule_parse_item_ethdev_based, }, - { - .type = RTE_FLOW_ITEM_TYPE_PHY_PORT, - .name = "PHY_PORT", - /* - * In terms of RTE flow, this item is a META one, - * and its position in the pattern is don't care. 
- */ - .prev_layer = SFC_FLOW_ITEM_ANY_LAYER, - .layer = SFC_FLOW_ITEM_ANY_LAYER, - .ctx_type = SFC_FLOW_PARSE_CTX_MAE, - .parse = sfc_mae_rule_parse_item_phy_port, - }, { .type = RTE_FLOW_ITEM_TYPE_ETH, .name = "ETH", diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 65c74687e3..e7ccdb772e 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -97,7 +97,6 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = { MK_FLOW_ITEM(VOID, 0), MK_FLOW_ITEM(INVERT, 0), MK_FLOW_ITEM(ANY, sizeof(struct rte_flow_item_any)), - MK_FLOW_ITEM(PHY_PORT, sizeof(struct rte_flow_item_phy_port)), MK_FLOW_ITEM(PORT_ID, sizeof(struct rte_flow_item_port_id)), MK_FLOW_ITEM(RAW, sizeof(struct rte_flow_item_raw)), MK_FLOW_ITEM(ETH, sizeof(struct rte_flow_item_eth)), diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 0a98db9c1c..066e8c8a99 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -188,20 +188,6 @@ enum rte_flow_item_type { */ RTE_FLOW_ITEM_TYPE_ANY, - /** - * @deprecated - * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR - * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT - * - * [META] - * - * Matches traffic originating from (ingress) or going to (egress) a - * physical port of the underlying device. - * - * See struct rte_flow_item_phy_port. - */ - RTE_FLOW_ITEM_TYPE_PHY_PORT, - /** * @deprecated * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR @@ -686,41 +672,6 @@ static const struct rte_flow_item_any rte_flow_item_any_mask = { }; #endif -/** - * @deprecated - * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR - * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT - * - * RTE_FLOW_ITEM_TYPE_PHY_PORT - * - * Matches traffic originating from (ingress) or going to (egress) a - * physical port of the underlying device. - * - * The first PHY_PORT item overrides the physical port normally associated - * with the specified DPDK input port (port_id). This item can be provided - * several times to match additional physical ports. 
- * - * Note that physical ports are not necessarily tied to DPDK input ports - * (port_id) when those are not under DPDK control. Possible values are - * specific to each device, they are not necessarily indexed from zero and - * may not be contiguous. - * - * As a device property, the list of allowed values as well as the value - * associated with a port_id should be retrieved by other means. - * - * A zeroed mask can be used to match any port index. - */ -struct rte_flow_item_phy_port { - uint32_t index; /**< Physical port index. */ -}; - -/** Default mask for RTE_FLOW_ITEM_TYPE_PHY_PORT. */ -#ifndef __cplusplus -static const struct rte_flow_item_phy_port rte_flow_item_phy_port_mask = { - .index = 0x00000000, -}; -#endif - /** * @deprecated * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR @@ -734,11 +685,6 @@ static const struct rte_flow_item_phy_port rte_flow_item_phy_port_mask = { * Normally only supported if the port ID in question is known by the * underlying PMD and related to the device the flow rule is created * against. - * - * This must not be confused with @p PHY_PORT which refers to the physical - * port of a device, whereas @p PORT_ID refers to a struct rte_eth_dev - * object on the application side (also known as "port representor" - * depending on the kind of underlying device). */ struct rte_flow_item_port_id { uint32_t id; /**< DPDK port ID. */ @@ -3023,8 +2969,6 @@ struct rte_flow_action_vf { * * Directs packets to a given physical port index of the underlying * device. - * - * @see RTE_FLOW_ITEM_TYPE_PHY_PORT */ struct rte_flow_action_phy_port { uint32_t original:1; /**< Use original port index if possible. 
*/ From patchwork Fri Aug 12 19:18:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 114945 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 879CDA0543; Fri, 12 Aug 2022 21:19:48 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 19C9842C6B; Fri, 12 Aug 2022 21:18:42 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id ACF8042C01 for ; Fri, 12 Aug 2022 21:18:32 +0200 (CEST) Received: from bree.oktetlabs.ru (bree.oktetlabs.ru [192.168.34.5]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPS id 62714CB; Fri, 12 Aug 2022 22:18:32 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 62714CB Authentication-Results: shelob.oktetlabs.ru/62714CB; dkim=none; dkim-atps=neutral From: Ivan Malov To: dev@dpdk.org Cc: Ori Kam , Eli Britstein , Ilya Maximets , Thomas Monjalon , Stephen Hemminger , Jerin Jacob , Andrew Rybchenko , Aman Singh , Yuying Zhang , Ajit Khaparde , Somnath Kotur , Rahul Lakkireddy , Ferruh Yigit , Hemant Agrawal , Sachin Saxena Subject: [PATCH 13/13] ethdev: remove deprecated flow action PHY PORT Date: Fri, 12 Aug 2022 22:18:27 +0300 Message-Id: <20220812191827.3187441-14-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220812191827.3187441-1-ivan.malov@oktetlabs.ru> References: <20220812191827.3187441-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK 
patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Such deprecation was commenced in DPDK 21.11. Since then, no parties have objected. Remove. The patch breaks ABI. Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 35 ------------- doc/guides/nics/features/bnxt.ini | 1 - doc/guides/nics/features/cxgbe.ini | 1 - doc/guides/nics/features/default.ini | 1 - doc/guides/nics/features/dpaa2.ini | 1 - doc/guides/nics/features/sfc.ini | 1 - doc/guides/nics/sfc_efx.rst | 2 - doc/guides/prog_guide/rte_flow.rst | 22 --------- doc/guides/rel_notes/release_22_11.rst | 5 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 5 -- drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c | 4 -- drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 49 ------------------- drivers/net/bnxt/tf_ulp/ulp_rte_parser.h | 5 -- drivers/net/cxgbe/cxgbe_flow.c | 6 --- drivers/net/dpaa2/dpaa2_flow.c | 13 +---- drivers/net/sfc/sfc_mae.c | 36 -------------- lib/ethdev/rte_flow.c | 1 - lib/ethdev/rte_flow.h | 28 ----------- 18 files changed, 6 insertions(+), 210 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 758c1f0efa..80f4c0bbef 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -487,9 +487,6 @@ enum index { ACTION_VF, ACTION_VF_ORIGINAL, ACTION_VF_ID, - ACTION_PHY_PORT, - ACTION_PHY_PORT_ORIGINAL, - ACTION_PHY_PORT_INDEX, ACTION_PORT_ID, ACTION_PORT_ID_ORIGINAL, ACTION_PORT_ID_ID, @@ -1799,7 +1796,6 @@ static const enum index next_action[] = { ACTION_RSS, ACTION_PF, ACTION_VF, - ACTION_PHY_PORT, ACTION_PORT_ID, ACTION_METER, ACTION_METER_COLOR, @@ -1893,13 +1889,6 @@ static const enum index action_vf[] = { ZERO, }; -static const enum index action_phy_port[] = { - ACTION_PHY_PORT_ORIGINAL, - ACTION_PHY_PORT_INDEX, - ACTION_NEXT, - ZERO, -}; - static const enum index action_port_id[] = { ACTION_PORT_ID_ORIGINAL, 
ACTION_PORT_ID_ID, @@ -5240,30 +5229,6 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct rte_flow_action_vf, id)), .call = parse_vc_conf, }, - [ACTION_PHY_PORT] = { - .name = "phy_port", - .help = "direct packets to physical port index", - .priv = PRIV_ACTION(PHY_PORT, - sizeof(struct rte_flow_action_phy_port)), - .next = NEXT(action_phy_port), - .call = parse_vc, - }, - [ACTION_PHY_PORT_ORIGINAL] = { - .name = "original", - .help = "use original port index if possible", - .next = NEXT(action_phy_port, NEXT_ENTRY(COMMON_BOOLEAN)), - .args = ARGS(ARGS_ENTRY_BF(struct rte_flow_action_phy_port, - original, 1)), - .call = parse_vc_conf, - }, - [ACTION_PHY_PORT_INDEX] = { - .name = "index", - .help = "physical port index", - .next = NEXT(action_phy_port, NEXT_ENTRY(COMMON_UNSIGNED)), - .args = ARGS(ARGS_ENTRY(struct rte_flow_action_phy_port, - index)), - .call = parse_vc_conf, - }, [ACTION_PORT_ID] = { .name = "port_id", .help = "direct matching traffic to a given DPDK port ID", diff --git a/doc/guides/nics/features/bnxt.ini b/doc/guides/nics/features/bnxt.ini index c05bcff909..b2d54f06aa 100644 --- a/doc/guides/nics/features/bnxt.ini +++ b/doc/guides/nics/features/bnxt.ini @@ -82,7 +82,6 @@ of_push_vlan = Y of_set_vlan_pcp = Y of_set_vlan_vid = Y pf = Y -phy_port = Y port_id = Y port_representor = Y represented_port = Y diff --git a/doc/guides/nics/features/cxgbe.ini b/doc/guides/nics/features/cxgbe.ini index 295816ab9d..a9dbcd0573 100644 --- a/doc/guides/nics/features/cxgbe.ini +++ b/doc/guides/nics/features/cxgbe.ini @@ -51,7 +51,6 @@ of_pop_vlan = Y of_push_vlan = Y of_set_vlan_pcp = Y of_set_vlan_vid = Y -phy_port = Y queue = Y set_ipv4_dst = Y set_ipv4_src = Y diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 7ed5bd8cb9..f7192cb0da 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -170,7 +170,6 @@ of_set_vlan_pcp = of_set_vlan_vid = passthru = pf = 
-phy_port = port_id = port_representor = queue = diff --git a/doc/guides/nics/features/dpaa2.ini b/doc/guides/nics/features/dpaa2.ini index 53148ad467..cedc234f26 100644 --- a/doc/guides/nics/features/dpaa2.ini +++ b/doc/guides/nics/features/dpaa2.ini @@ -45,7 +45,6 @@ vlan = Y [rte_flow actions] drop = Y -phy_port = Y port_id = Y queue = Y represented_port = Y diff --git a/doc/guides/nics/features/sfc.ini b/doc/guides/nics/features/sfc.ini index 3dac105e35..f5ac644278 100644 --- a/doc/guides/nics/features/sfc.ini +++ b/doc/guides/nics/features/sfc.ini @@ -70,7 +70,6 @@ of_push_vlan = Y of_set_vlan_pcp = Y of_set_vlan_vid = Y pf = Y -phy_port = Y port_id = Y port_representor = Y represented_port = Y diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst index 6eca86e96f..fcad671da2 100644 --- a/doc/guides/nics/sfc_efx.rst +++ b/doc/guides/nics/sfc_efx.rst @@ -244,8 +244,6 @@ Supported actions (***transfer*** rules): - MARK -- PHY_PORT - - PF - VF diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 9cf4261494..becf7c29c9 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -2003,28 +2003,6 @@ rule or if packets are not addressed to a VF in the first place. | ``id`` | VF ID | +--------------+--------------------------------+ -Action: ``PHY_PORT`` -^^^^^^^^^^^^^^^^^^^^ - -This action is deprecated. Consider: - - `Action: PORT_REPRESENTOR`_ - - `Action: REPRESENTED_PORT`_ - -Directs matching traffic to a given physical port index of the underlying -device. - -.. _table_rte_flow_action_phy_port: - -.. 
table:: PHY_PORT - - +--------------+-------------------------------------+ - | Field | Value | - +==============+=====================================+ - | ``original`` | use original port index if possible | - +--------------+-------------------------------------+ - | ``index`` | physical port index | - +--------------+-------------------------------------+ - Action: ``PORT_ID`` ^^^^^^^^^^^^^^^^^^^ This action is deprecated. Consider: diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 343f40a041..a7a2bf2c60 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -85,6 +85,9 @@ Removed Items * ethdev: removed ``RTE_FLOW_ITEM_TYPE_PHY_PORT``; use ``RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT``. +* ethdev: removed ``RTE_FLOW_ACTION_TYPE_PHY_PORT``; + use ``RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT``. + API Changes ----------- @@ -133,6 +136,8 @@ ABI Changes * ethdev: enum ``RTE_FLOW_ITEM`` was affected by deprecation procedure. +* ethdev: enum ``RTE_FLOW_ACTION`` was affected by deprecation procedure. + Known Issues ------------ diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index b9c2d7a6fe..710d69ddca 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -4014,11 +4014,6 @@ This section lists supported actions and their attributes, if any. - ``original {boolean}``: use original VF ID if possible. - ``id {unsigned}``: VF ID. -- ``phy_port``: direct packets to physical port index. - - - ``original {boolean}``: use original port index if possible. - - ``index {unsigned}``: physical port index. - - ``port_id``: direct matching traffic to a given DPDK port ID. - ``original {boolean}``: use original DPDK port ID if possible. 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c index 66cd2fba7e..042425ff5c 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_handler_tbl.c @@ -61,10 +61,6 @@ struct bnxt_ulp_rte_act_info ulp_act_info[] = { .act_type = BNXT_ULP_ACT_TYPE_SUPPORTED, .proto_act_func = ulp_rte_vf_act_handler }, - [RTE_FLOW_ACTION_TYPE_PHY_PORT] = { - .act_type = BNXT_ULP_ACT_TYPE_SUPPORTED, - .proto_act_func = ulp_rte_phy_port_act_handler - }, [RTE_FLOW_ACTION_TYPE_PORT_ID] = { .act_type = BNXT_ULP_ACT_TYPE_SUPPORTED, .proto_act_func = ulp_rte_port_act_handler diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c index 3faafcf788..1be649a16c 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c @@ -2255,55 +2255,6 @@ ulp_rte_port_act_handler(const struct rte_flow_action *act_item, return ulp_rte_parser_act_port_set(param, ifindex, act_dir); } -/* Function to handle the parsing of RTE Flow action phy_port. 
*/ -int32_t -ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item, - struct ulp_rte_parser_params *prm) -{ - const struct rte_flow_action_phy_port *phy_port; - uint32_t pid; - int32_t rc; - uint16_t pid_s; - enum bnxt_ulp_direction_type dir; - - phy_port = action_item->conf; - if (!phy_port) { - BNXT_TF_DBG(ERR, - "ParseErr: Invalid Argument\n"); - return BNXT_TF_RC_PARSE_ERR; - } - - if (phy_port->original) { - BNXT_TF_DBG(ERR, - "Parse Err:Port Original not supported\n"); - return BNXT_TF_RC_PARSE_ERR; - } - dir = ULP_COMP_FLD_IDX_RD(prm, BNXT_ULP_CF_IDX_DIRECTION); - if (dir != BNXT_ULP_DIR_EGRESS) { - BNXT_TF_DBG(ERR, - "Parse Err:Phy ports are valid only for egress\n"); - return BNXT_TF_RC_PARSE_ERR; - } - /* Get the physical port details from port db */ - rc = ulp_port_db_phy_port_vport_get(prm->ulp_ctx, phy_port->index, - &pid_s); - if (rc) { - BNXT_TF_DBG(ERR, "Failed to get port details\n"); - return -EINVAL; - } - - pid = pid_s; - pid = rte_cpu_to_be_32(pid); - memcpy(&prm->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VPORT], - &pid, BNXT_ULP_ACT_PROP_SZ_VPORT); - - /* Update the action port set bit */ - ULP_COMP_FLD_IDX_WR(prm, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 1); - ULP_COMP_FLD_IDX_WR(prm, BNXT_ULP_CF_IDX_ACT_PORT_TYPE, - BNXT_ULP_INTF_TYPE_PHY_PORT); - return BNXT_TF_RC_SUCCESS; -} - /* Function to handle the parsing of RTE Flow action pop vlan. */ int32_t ulp_rte_of_pop_vlan_act_handler(const struct rte_flow_action *a __rte_unused, diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h index 5a9b056b16..f59b10e88b 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h @@ -194,11 +194,6 @@ int32_t ulp_rte_port_act_handler(const struct rte_flow_action *act_item, struct ulp_rte_parser_params *params); -/* Function to handle the parsing of RTE Flow action phy_port. 
*/ -int32_t -ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item, - struct ulp_rte_parser_params *params); - /* Function to handle the parsing of RTE Flow action pop vlan. */ int32_t ulp_rte_of_pop_vlan_act_handler(const struct rte_flow_action *action_item, diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c index d383334415..d66672a9e6 100644 --- a/drivers/net/cxgbe/cxgbe_flow.c +++ b/drivers/net/cxgbe/cxgbe_flow.c @@ -598,7 +598,6 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a, const struct rte_flow_action_set_ipv4 *ipv4; const struct rte_flow_action_set_ipv6 *ipv6; const struct rte_flow_action_set_tp *tp_port; - const struct rte_flow_action_phy_port *port; const struct rte_flow_action_set_mac *mac; int item_index; u16 tmp_vlan; @@ -645,10 +644,6 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a, case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: fs->newvlan = VLAN_REMOVE; break; - case RTE_FLOW_ACTION_TYPE_PHY_PORT: - port = (const struct rte_flow_action_phy_port *)a->conf; - fs->eport = port->index; - break; case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: item_index = cxgbe_get_flow_item_index(items, RTE_FLOW_ITEM_TYPE_IPV4); @@ -836,7 +831,6 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow, goto action_switch; case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: - case RTE_FLOW_ACTION_TYPE_PHY_PORT: case RTE_FLOW_ACTION_TYPE_MAC_SWAP: case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 25616b0035..df06c3862e 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -83,7 +83,6 @@ static const enum rte_flow_action_type dpaa2_supported_action_type[] = { RTE_FLOW_ACTION_TYPE_END, RTE_FLOW_ACTION_TYPE_QUEUE, - RTE_FLOW_ACTION_TYPE_PHY_PORT, RTE_FLOW_ACTION_TYPE_PORT_ID, RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, RTE_FLOW_ACTION_TYPE_RSS @@ -92,7 +91,6 @@ enum 
rte_flow_action_type dpaa2_supported_action_type[] = { static const enum rte_flow_action_type dpaa2_supported_fs_action_type[] = { RTE_FLOW_ACTION_TYPE_QUEUE, - RTE_FLOW_ACTION_TYPE_PHY_PORT, RTE_FLOW_ACTION_TYPE_PORT_ID, RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, }; @@ -3281,17 +3279,11 @@ static inline struct rte_eth_dev * dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv, const struct rte_flow_action *action) { - const struct rte_flow_action_phy_port *phy_port; const struct rte_flow_action_port_id *port_id; int idx = -1; struct rte_eth_dev *dest_dev; - if (action->type == RTE_FLOW_ACTION_TYPE_PHY_PORT) { - phy_port = (const struct rte_flow_action_phy_port *) - action->conf; - if (!phy_port->original) - idx = phy_port->index; - } else if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) { + if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) { port_id = (const struct rte_flow_action_port_id *) action->conf; if (!port_id->original) @@ -3345,7 +3337,6 @@ dpaa2_flow_verify_action( } break; case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - case RTE_FLOW_ACTION_TYPE_PHY_PORT: case RTE_FLOW_ACTION_TYPE_PORT_ID: if (!dpaa2_flow_redirect_dev(priv, &actions[j])) { DPAA2_PMD_ERR("Invalid port id of action"); @@ -3523,7 +3514,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow, switch (actions[j].type) { case RTE_FLOW_ACTION_TYPE_QUEUE: case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - case RTE_FLOW_ACTION_TYPE_PHY_PORT: case RTE_FLOW_ACTION_TYPE_PORT_ID: memset(&action, 0, sizeof(struct dpni_fs_action_cfg)); flow->action = actions[j].type; @@ -4098,7 +4088,6 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev, switch (flow->action) { case RTE_FLOW_ACTION_TYPE_QUEUE: case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - case RTE_FLOW_ACTION_TYPE_PHY_PORT: case RTE_FLOW_ACTION_TYPE_PORT_ID: if (priv->num_rx_tc > 1) { /* Remove entry from QoS table first */ diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index 4ddb63cbe5..421bb6da95 100644 --- a/drivers/net/sfc/sfc_mae.c +++ 
b/drivers/net/sfc/sfc_mae.c @@ -3463,36 +3463,6 @@ sfc_mae_rule_parse_action_count(struct sfc_adapter *sa, return rc; } -static int -sfc_mae_rule_parse_action_phy_port(struct sfc_adapter *sa, - const struct rte_flow_action_phy_port *conf, - efx_mae_actions_t *spec) -{ - efx_mport_sel_t mport; - uint32_t phy_port; - int rc; - - if (conf->original != 0) - phy_port = efx_nic_cfg_get(sa->nic)->enc_assigned_port; - else - phy_port = conf->index; - - rc = efx_mae_mport_by_phy_port(phy_port, &mport); - if (rc != 0) { - sfc_err(sa, "failed to convert phys. port ID %u to m-port selector: %s", - phy_port, strerror(rc)); - return rc; - } - - rc = efx_mae_action_set_populate_deliver(spec, &mport); - if (rc != 0) { - sfc_err(sa, "failed to request action DELIVER with m-port selector 0x%08x: %s", - mport.sel, strerror(rc)); - } - - return rc; -} - static int sfc_mae_rule_parse_action_pf_vf(struct sfc_adapter *sa, const struct rte_flow_action_vf *vf_conf, @@ -3626,7 +3596,6 @@ static const char * const action_names[] = { [RTE_FLOW_ACTION_TYPE_COUNT] = "COUNT", [RTE_FLOW_ACTION_TYPE_FLAG] = "FLAG", [RTE_FLOW_ACTION_TYPE_MARK] = "MARK", - [RTE_FLOW_ACTION_TYPE_PHY_PORT] = "PHY_PORT", [RTE_FLOW_ACTION_TYPE_PF] = "PF", [RTE_FLOW_ACTION_TYPE_VF] = "VF", [RTE_FLOW_ACTION_TYPE_PORT_ID] = "PORT_ID", @@ -3745,11 +3714,6 @@ sfc_mae_rule_parse_action(struct sfc_adapter *sa, custom_error = B_TRUE; } break; - case RTE_FLOW_ACTION_TYPE_PHY_PORT: - SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_PHY_PORT, - bundle->actions_mask); - rc = sfc_mae_rule_parse_action_phy_port(sa, action->conf, spec); - break; case RTE_FLOW_ACTION_TYPE_PF: SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_PF, bundle->actions_mask); diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index e7ccdb772e..eeb9398e77 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -188,7 +188,6 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = { MK_FLOW_ACTION(RSS, sizeof(struct rte_flow_action_rss)), 
MK_FLOW_ACTION(PF, 0), MK_FLOW_ACTION(VF, sizeof(struct rte_flow_action_vf)), - MK_FLOW_ACTION(PHY_PORT, sizeof(struct rte_flow_action_phy_port)), MK_FLOW_ACTION(PORT_ID, sizeof(struct rte_flow_action_port_id)), MK_FLOW_ACTION(METER, sizeof(struct rte_flow_action_meter)), MK_FLOW_ACTION(SECURITY, sizeof(struct rte_flow_action_security)), diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 066e8c8a99..8c33e84ee8 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2224,18 +2224,6 @@ enum rte_flow_action_type { */ RTE_FLOW_ACTION_TYPE_VF, - /** - * @deprecated - * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR - * @see RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT - * - * Directs packets to a given physical port index of the underlying - * device. - * - * See struct rte_flow_action_phy_port. - */ - RTE_FLOW_ACTION_TYPE_PHY_PORT, - /** * @deprecated * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR @@ -2960,22 +2948,6 @@ struct rte_flow_action_vf { uint32_t id; /**< VF ID. */ }; -/** - * @deprecated - * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR - * @see RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT - * - * RTE_FLOW_ACTION_TYPE_PHY_PORT - * - * Directs packets to a given physical port index of the underlying - * device. - */ -struct rte_flow_action_phy_port { - uint32_t original:1; /**< Use original port index if possible. */ - uint32_t reserved:31; /**< Reserved, must be zero. */ - uint32_t index; /**< Physical port index. 
*/ -}; - /** * @deprecated * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR

From patchwork Sun Aug 14 18:46:20 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Akhil Goyal
X-Patchwork-Id: 114956
X-Patchwork-Delegate: thomas@monjalon.net
From: Akhil Goyal
Cc: Akhil Goyal
Subject: [PATCH 3/3] ethdev: add MACsec flow item
Date: Mon, 15 Aug 2022 00:16:20 +0530
Message-ID: <20220814184620.512343-4-gakhil@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220814184620.512343-1-gakhil@marvell.com>
References: <20220814184620.512343-1-gakhil@marvell.com>
List-Id: DPDK patches and discussions

A new flow item is defined for MACsec flows, which can be offloaded to an inline device. If a flow matches the MACsec header, the device processes the packet according to the security session created via the rte_security API. If an error occurs during MACsec processing in hardware, the PMD notifies the application with the events defined in this patch.
Signed-off-by: Akhil Goyal Acked-by: Ori Kam --- lib/ethdev/rte_ethdev.h | 55 +++++++++++++++++++++++++++++++++++++++++ lib/ethdev/rte_flow.h | 18 ++++++++++++++ 2 files changed, 73 insertions(+) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index de9e970d4d..24661b01e9 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -3864,6 +3864,61 @@ rte_eth_tx_buffer_count_callback(struct rte_mbuf **pkts, uint16_t unsent, int rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt); +/** + * Subtypes for MACsec offload event(@ref RTE_ETH_EVENT_MACSEC) raised by + * Ethernet device. + */ +enum rte_eth_macsec_event_subtype { + RTE_ETH_MACSEC_SUBEVENT_UNKNOWN, + /* subevents of RTE_ETH_MACSEC_EVENT_SECTAG_VAL_ERR sectag validation events + * RTE_ETH_MACSEC_EVENT_RX_SECTAG_V_EQ1 + * Validation check: SecTag.TCI.V = 1 + * RTE_ETH_MACSEC_EVENT_RX_SECTAG_E_EQ0_C_EQ1 + * Validation check: SecTag.TCI.E = 0 && SecTag.TCI.C = 1 + * RTE_ETH_MACSEC_EVENT_RX_SECTAG_SL_GTE48 + * Validation check: SecTag.SL >= 'd48 + * RTE_ETH_MACSEC_EVENT_RX_SECTAG_ES_EQ1_SC_EQ1 + * Validation check: SecTag.TCI.ES = 1 && SecTag.TCI.SC = 1 + * RTE_ETH_MACSEC_EVENT_RX_SECTAG_SC_EQ1_SCB_EQ1 + * Validation check: SecTag.TCI.SC = 1 && SecTag.TCI.SCB = 1 + */ + RTE_ETH_MACSEC_SUBEVENT_RX_SECTAG_V_EQ1, + RTE_ETH_MACSEC_SUBEVENT_RX_SECTAG_E_EQ0_C_EQ1, + RTE_ETH_MACSEC_SUBEVENT_RX_SECTAG_SL_GTE48, + RTE_ETH_MACSEC_SUBEVENT_RX_SECTAG_ES_EQ1_SC_EQ1, + RTE_ETH_MACSEC_SUBEVENT_RX_SECTAG_SC_EQ1_SCB_EQ1, +}; + +enum rte_eth_macsec_event_type { + RTE_ETH_MACSEC_EVENT_UNKNOWN, + RTE_ETH_MACSEC_EVENT_SECTAG_VAL_ERR, + RTE_ETH_MACSEC_EVENT_RX_SA_PN_HARD_EXP, + RTE_ETH_MACSEC_EVENT_RX_SA_PN_SOFT_EXP, + RTE_ETH_MACSEC_EVENT_TX_SA_PN_HARD_EXP, + RTE_ETH_MACSEC_EVENT_TX_SA_PN_SOFT_EXP, + /* Notifies Invalid SA event */ + RTE_ETH_MACSEC_EVENT_SA_NOT_VALID, +}; + +/** + * Descriptor for @ref RTE_ETH_EVENT_MACSEC event. 
Used by eth dev to send extra + * information of the MACsec offload event. + */ +struct rte_eth_event_macsec_desc { + enum rte_eth_macsec_event_type type; + enum rte_eth_macsec_event_subtype subtype; + /** + * Event specific metadata. + * + * For the following events, *userdata* registered + * with the *rte_security_session* would be returned + * as metadata, + * + * @see struct rte_security_session_conf + */ + uint64_t metadata; +}; + /** * Subtypes for IPsec offload event(@ref RTE_ETH_EVENT_IPSEC) raised by * eth device. diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a79f1e7ef0..4114c84a02 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -35,6 +35,7 @@ #include #include #include +#include #ifdef __cplusplus extern "C" { @@ -668,6 +669,13 @@ enum rte_flow_item_type { * See struct rte_flow_item_gre_opt. */ RTE_FLOW_ITEM_TYPE_GRE_OPTION, + + /** + * Matches MACsec Ethernet Header. + * + * See struct rte_flow_item_macsec. + */ + RTE_FLOW_ITEM_TYPE_MACSEC, }; /** @@ -1214,6 +1222,16 @@ struct rte_flow_item_gre_opt { struct rte_gre_hdr_opt_sequence sequence; }; +/** + * RTE_FLOW_ITEM_TYPE_MACSEC. + * + * Matches MACsec header. 
+ */ +struct rte_flow_item_macsec { + struct rte_macsec_hdr macsec_hdr; +}; + + /** * RTE_FLOW_ITEM_TYPE_FUZZY *

From patchwork Thu Aug 18 09:37:44 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Morten Brørup
X-Patchwork-Id: 115233
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Morten Brørup
To: thomas@monjalon.net, ferruh.yigit@xilinx.com, andrew.rybchenko@oktetlabs.ru
Cc: dev@dpdk.org, Morten Brørup
Subject: [PATCH] ethdev: rte_eth_rx_queue_count is a dataplane function
Date: Thu, 18 Aug 2022 11:37:44 +0200
Message-Id: <20220818093744.76157-1-mb@smartsharesystems.com>
X-Mailer: git-send-email 2.17.1
List-Id: DPDK patches and discussions

Applications may use rte_eth_rx_queue_count() in the RX stage of the dataplane, so only check the function parameters if built with RTE_ETHDEV_DEBUG_RX.
Signed-off-by: Morten Brørup Acked-by: Ferruh Yigit Reviewed-by: Andrew Rybchenko --- lib/ethdev/rte_ethdev.h | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index de9e970d4d..8d5d9b42bf 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -5681,6 +5681,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, /** * Get the number of used descriptors of a Rx queue * + * Since it's a dataplane function, no check is performed on port_id and + * queue_id. The caller must therefore ensure that the port is enabled + * and the queue is configured and running. + * * @param port_id * The port identifier of the Ethernet device. * @param queue_id @@ -5688,8 +5692,8 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, * @return * The number of used descriptors in the specific queue, or: * - (-ENODEV) if *port_id* is invalid. - * (-EINVAL) if *queue_id* is invalid - * (-ENOTSUP) if the device does not support this function + * - (-EINVAL) if *queue_id* is invalid + * - (-ENOTSUP) if the device does not support this function */ static inline int rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) @@ -5697,6 +5701,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) struct rte_eth_fp_ops *p; void *qd; +#ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { RTE_ETHDEV_LOG(ERR, @@ -5704,16 +5709,19 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) port_id, queue_id); return -EINVAL; } +#endif /* fetch pointer to queue data */ p = &rte_eth_fp_ops[port_id]; qd = p->rxq.data[queue_id]; +#ifdef RTE_ETHDEV_DEBUG_RX RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); - RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP); if (qd == NULL) return -EINVAL; +#endif + RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP); return (int)(*p->rx_queue_count)(qd); } From patchwork Tue Aug 23 10:22:55 2022 
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sunil Kumar Kori
X-Patchwork-Id: 115357
X-Patchwork-Delegate: ferruh.yigit@amd.com
To: Cristian Dumitrescu, Aman Singh, Yuying Zhang, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Jasvinder Singh, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v2 1/1] ethdev: add protocol param to color table update
Date: Tue, 23 Aug 2022 15:52:55 +0530
Message-ID: <20220823102255.2905191-1-skori@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220707065743.76932-1-skori@marvell.com>
References: <20220707065743.76932-1-skori@marvell.com>
List-Id: DPDK patches and discussions

From: Sunil Kumar Kori

Using rte_mtr_color_in_protocol_set(), the user can enable a combination of protocol headers, such as outer_vlan and outer_ip, on a given meter object. But rte_mtr_meter_vlan_table_update() and rte_mtr_meter_dscp_table_update() carry no information about which table, inner or outer, needs to be updated for a given protocol header. Adding a protocol parameter lets the user provide the required protocol information so that the corresponding inner or outer table is updated.
If the user wishes to configure both the inner and outer tables, the API must be called twice with the correct protocol information. Signed-off-by: Sunil Kumar Kori Acked-by: Cristian Dumitrescu --- v1..v2: - Rebase on ToT of dpdk-next-net-mrvl/for-next-net branch. - Remove "Depends On:" tag as dependent patch is merged. app/test-pmd/cmdline_mtr.c | 42 ++++++++++++++++----- doc/guides/testpmd_app_ug/testpmd_funcs.rst | 8 ++-- drivers/net/cnxk/cnxk_ethdev_mtr.c | 20 +++++++++- drivers/net/softnic/rte_eth_softnic_meter.c | 4 +- lib/ethdev/rte_mtr.c | 8 ++-- lib/ethdev/rte_mtr.h | 7 +++- lib/ethdev/rte_mtr_driver.h | 4 +- 7 files changed, 70 insertions(+), 23 deletions(-) diff --git a/app/test-pmd/cmdline_mtr.c b/app/test-pmd/cmdline_mtr.c index 833273da0d..f517a328b7 100644 --- a/app/test-pmd/cmdline_mtr.c +++ b/app/test-pmd/cmdline_mtr.c @@ -297,8 +297,8 @@ parse_meter_color_str(char *c_str, uint32_t *use_prev_meter_color, } static int -parse_multi_token_string(char *t_str, uint16_t *port_id, - uint32_t *mtr_id, enum rte_color **dscp_table) +parse_multi_token_string(char *t_str, uint16_t *port_id, uint32_t *mtr_id, + enum rte_mtr_color_in_protocol *proto, enum rte_color **dscp_table) { char *token; uint64_t val; @@ -326,6 +326,16 @@ parse_multi_token_string(char *t_str, uint16_t *port_id, *mtr_id = val; + /* Third token: protocol */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return 0; + + if (strcmp(token, "outer_ip") == 0) + *proto = RTE_MTR_COLOR_IN_PROTO_OUTER_IP; + else if (strcmp(token, "inner_ip") == 0) + *proto = RTE_MTR_COLOR_IN_PROTO_INNER_IP; + ret = parse_dscp_table_entries(t_str, dscp_table); if (ret != 0) return -1; @@ -335,7 +345,7 @@ parse_multi_token_string(char *t_str, uint16_t *port_id, static int parse_multi_token_vlan_str(char *t_str, uint16_t *port_id, uint32_t *mtr_id, - enum rte_color **vlan_table) + enum rte_mtr_color_in_protocol *proto, enum rte_color **vlan_table) { uint64_t val; char *token; @@ -363,6 +373,16 @@
parse_multi_token_vlan_str(char *t_str, uint16_t *port_id, uint32_t *mtr_id, *mtr_id = val; + /* Third token: protocol */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return 0; + + if (strcmp(token, "outer_vlan") == 0) + *proto = RTE_MTR_COLOR_IN_PROTO_OUTER_VLAN; + else if (strcmp(token, "inner_vlan") == 0) + *proto = RTE_MTR_COLOR_IN_PROTO_INNER_VLAN; + ret = parse_vlan_table_entries(t_str, vlan_table); if (ret != 0) return -1; @@ -1388,6 +1408,7 @@ static void cmd_set_port_meter_dscp_table_parsed(void *parsed_result, __rte_unused void *data) { struct cmd_set_port_meter_dscp_table_result *res = parsed_result; + enum rte_mtr_color_in_protocol proto = 0; struct rte_mtr_error error; enum rte_color *dscp_table = NULL; char *t_str = res->token_string; @@ -1396,7 +1417,8 @@ static void cmd_set_port_meter_dscp_table_parsed(void *parsed_result, int ret; /* Parse string */ - ret = parse_multi_token_string(t_str, &port_id, &mtr_id, &dscp_table); + ret = parse_multi_token_string(t_str, &port_id, &mtr_id, &proto, + &dscp_table); if (ret) { fprintf(stderr, " Multi token string parse error\n"); return; @@ -1406,7 +1428,7 @@ static void cmd_set_port_meter_dscp_table_parsed(void *parsed_result, goto free_table; /* Update Meter DSCP Table*/ - ret = rte_mtr_meter_dscp_table_update(port_id, mtr_id, + ret = rte_mtr_meter_dscp_table_update(port_id, mtr_id, proto, dscp_table, &error); if (ret != 0) print_err_msg(&error); @@ -1418,7 +1440,7 @@ static void cmd_set_port_meter_dscp_table_parsed(void *parsed_result, cmdline_parse_inst_t cmd_set_port_meter_dscp_table = { .f = cmd_set_port_meter_dscp_table_parsed, .data = NULL, - .help_str = "set port meter dscp table " + .help_str = "set port meter dscp table " "[ ... 
]", .tokens = { (void *)&cmd_set_port_meter_dscp_table_set, @@ -1461,6 +1483,7 @@ static void cmd_set_port_meter_vlan_table_parsed(void *parsed_result, __rte_unused void *data) { struct cmd_set_port_meter_vlan_table_result *res = parsed_result; + enum rte_mtr_color_in_protocol proto = 0; struct rte_mtr_error error; enum rte_color *vlan_table = NULL; char *t_str = res->token_string; @@ -1469,7 +1492,8 @@ static void cmd_set_port_meter_vlan_table_parsed(void *parsed_result, int ret; /* Parse string */ - ret = parse_multi_token_vlan_str(t_str, &port_id, &mtr_id, &vlan_table); + ret = parse_multi_token_vlan_str(t_str, &port_id, &mtr_id, &proto, + &vlan_table); if (ret) { fprintf(stderr, " Multi token string parse error\n"); return; @@ -1479,7 +1503,7 @@ static void cmd_set_port_meter_vlan_table_parsed(void *parsed_result, goto free_table; /* Update Meter VLAN Table*/ - ret = rte_mtr_meter_vlan_table_update(port_id, mtr_id, + ret = rte_mtr_meter_vlan_table_update(port_id, mtr_id, proto, vlan_table, &error); if (ret != 0) print_err_msg(&error); @@ -1491,7 +1515,7 @@ static void cmd_set_port_meter_vlan_table_parsed(void *parsed_result, cmdline_parse_inst_t cmd_set_port_meter_vlan_table = { .f = cmd_set_port_meter_vlan_table_parsed, .data = NULL, - .help_str = "set port meter vlan table " + .help_str = "set port meter vlan table " "[ ... 
]", .tokens = { (void *)&cmd_set_port_meter_vlan_table_set, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 330e34427d..ce40e3b6f2 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -2597,15 +2597,15 @@ set port meter dscp table Set meter dscp table for the ethernet device:: - testpmd> set port meter dscp table (port_id) (mtr_id) [(dscp_tbl_entry0) \ - (dscp_tbl_entry1)...(dscp_tbl_entry63)] + testpmd> set port meter dscp table (port_id) (mtr_id) (proto) \ + [(dscp_tbl_entry0) (dscp_tbl_entry1)...(dscp_tbl_entry63)] set port meter vlan table ~~~~~~~~~~~~~~~~~~~~~~~~~ Set meter VLAN table for the Ethernet device:: - testpmd> set port meter vlan table (port_id) (mtr_id) [(vlan_tbl_entry0) \ - (vlan_tbl_entry1)...(vlan_tbl_entry15)] + testpmd> set port meter vlan table (port_id) (mtr_id) (proto) \ + [(vlan_tbl_entry0) (vlan_tbl_entry1)...(vlan_tbl_entry15)] set port meter protocol ~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/drivers/net/cnxk/cnxk_ethdev_mtr.c b/drivers/net/cnxk/cnxk_ethdev_mtr.c index be2cb7d628..0fa18f01c7 100644 --- a/drivers/net/cnxk/cnxk_ethdev_mtr.c +++ b/drivers/net/cnxk/cnxk_ethdev_mtr.c @@ -720,6 +720,7 @@ cnxk_nix_mtr_disable(struct rte_eth_dev *eth_dev, uint32_t mtr_id, static int cnxk_nix_mtr_dscp_table_update(struct rte_eth_dev *eth_dev, uint32_t mtr_id, + enum rte_mtr_color_in_protocol proto, enum rte_color *dscp_table, struct rte_mtr_error *error) { @@ -750,7 +751,7 @@ cnxk_nix_mtr_dscp_table_update(struct rte_eth_dev *eth_dev, uint32_t mtr_id, table.count = ROC_NIX_BPF_PRECOLOR_TBL_SIZE_DSCP; - switch (dev->proto) { + switch (proto) { case RTE_MTR_COLOR_IN_PROTO_OUTER_IP: table.mode = ROC_NIX_BPF_PC_MODE_DSCP_OUTER; break; @@ -764,6 +765,13 @@ cnxk_nix_mtr_dscp_table_update(struct rte_eth_dev *eth_dev, uint32_t mtr_id, goto exit; } + if (dev->proto != proto) { + rc = -rte_mtr_error_set(error, EINVAL, + 
RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL, + "input color protocol is not configured"); + goto exit; + } + for (i = 0; i < ROC_NIX_BPF_PRECOLOR_TBL_SIZE_DSCP; i++) table.color[i] = nix_dscp_tbl[i]; @@ -784,6 +792,7 @@ cnxk_nix_mtr_dscp_table_update(struct rte_eth_dev *eth_dev, uint32_t mtr_id, static int cnxk_nix_mtr_vlan_table_update(struct rte_eth_dev *eth_dev, uint32_t mtr_id, + enum rte_mtr_color_in_protocol proto, enum rte_color *vlan_table, struct rte_mtr_error *error) { @@ -814,7 +823,7 @@ cnxk_nix_mtr_vlan_table_update(struct rte_eth_dev *eth_dev, uint32_t mtr_id, table.count = ROC_NIX_BPF_PRECOLOR_TBL_SIZE_VLAN; - switch (dev->proto) { + switch (proto) { case RTE_MTR_COLOR_IN_PROTO_OUTER_VLAN: table.mode = ROC_NIX_BPF_PC_MODE_VLAN_OUTER; break; @@ -828,6 +837,13 @@ cnxk_nix_mtr_vlan_table_update(struct rte_eth_dev *eth_dev, uint32_t mtr_id, goto exit; } + if (dev->proto != proto) { + rc = -rte_mtr_error_set(error, EINVAL, + RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL, + "input color protocol is not configured"); + goto exit; + } + for (i = 0; i < ROC_NIX_BPF_PRECOLOR_TBL_SIZE_VLAN; i++) table.color[i] = nix_vlan_tbl[i]; diff --git a/drivers/net/softnic/rte_eth_softnic_meter.c b/drivers/net/softnic/rte_eth_softnic_meter.c index 6b02f43e31..3e635a3cfe 100644 --- a/drivers/net/softnic/rte_eth_softnic_meter.c +++ b/drivers/net/softnic/rte_eth_softnic_meter.c @@ -636,7 +636,7 @@ pmd_mtr_meter_profile_update(struct rte_eth_dev *dev, /* MTR object meter DSCP table update */ static int pmd_mtr_meter_dscp_table_update(struct rte_eth_dev *dev, - uint32_t mtr_id, + uint32_t mtr_id, enum rte_mtr_color_in_protocol proto, enum rte_color *dscp_table, struct rte_mtr_error *error) { @@ -648,6 +648,8 @@ pmd_mtr_meter_dscp_table_update(struct rte_eth_dev *dev, uint32_t table_id, i; int status; + RTE_SET_USED(proto); + /* MTR object id must be valid */ m = softnic_mtr_find(p, mtr_id); if (m == NULL) diff --git a/lib/ethdev/rte_mtr.c b/lib/ethdev/rte_mtr.c index c460e4f4e0..e4dff20f76 
100644 --- a/lib/ethdev/rte_mtr.c +++ b/lib/ethdev/rte_mtr.c @@ -197,25 +197,25 @@ rte_mtr_meter_policy_update(uint16_t port_id, /** MTR object meter DSCP table update */ int rte_mtr_meter_dscp_table_update(uint16_t port_id, - uint32_t mtr_id, + uint32_t mtr_id, enum rte_mtr_color_in_protocol proto, enum rte_color *dscp_table, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; return RTE_MTR_FUNC(port_id, meter_dscp_table_update)(dev, - mtr_id, dscp_table, error); + mtr_id, proto, dscp_table, error); } /** MTR object meter VLAN table update */ int rte_mtr_meter_vlan_table_update(uint16_t port_id, - uint32_t mtr_id, + uint32_t mtr_id, enum rte_mtr_color_in_protocol proto, enum rte_color *vlan_table, struct rte_mtr_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; return RTE_MTR_FUNC(port_id, meter_vlan_table_update)(dev, - mtr_id, vlan_table, error); + mtr_id, proto, vlan_table, error); } /** Set the input color protocol on MTR object */ diff --git a/lib/ethdev/rte_mtr.h b/lib/ethdev/rte_mtr.h index 008bc84f0d..5e4f7ba73b 100644 --- a/lib/ethdev/rte_mtr.h +++ b/lib/ethdev/rte_mtr.h @@ -913,6 +913,8 @@ rte_mtr_meter_policy_update(uint16_t port_id, * The port identifier of the Ethernet device. * @param[in] mtr_id * MTR object ID. Needs to be valid. + * @param[in] proto + * Input color protocol. * @param[in] dscp_table * When non-NULL: it points to a pre-allocated and pre-populated table with * exactly 64 elements providing the input color for each value of the @@ -927,7 +929,7 @@ rte_mtr_meter_policy_update(uint16_t port_id, __rte_experimental int rte_mtr_meter_dscp_table_update(uint16_t port_id, - uint32_t mtr_id, + uint32_t mtr_id, enum rte_mtr_color_in_protocol proto, enum rte_color *dscp_table, struct rte_mtr_error *error); @@ -938,6 +940,8 @@ rte_mtr_meter_dscp_table_update(uint16_t port_id, * The port identifier of the Ethernet device. * @param[in] mtr_id * MTR object ID. Needs to be valid. 
+ * @param[in] proto + * Input color protocol. * @param[in] vlan_table * When non-NULL: it points to a pre-allocated and pre-populated table with * exactly 16 elements providing the input color for each value of the @@ -952,6 +956,7 @@ rte_mtr_meter_dscp_table_update(uint16_t port_id, __rte_experimental int rte_mtr_meter_vlan_table_update(uint16_t port_id, uint32_t mtr_id, + enum rte_mtr_color_in_protocol proto, enum rte_color *vlan_table, struct rte_mtr_error *error); diff --git a/lib/ethdev/rte_mtr_driver.h b/lib/ethdev/rte_mtr_driver.h index f7dca9a54c..a8b652a607 100644 --- a/lib/ethdev/rte_mtr_driver.h +++ b/lib/ethdev/rte_mtr_driver.h @@ -93,13 +93,13 @@ typedef int (*rte_mtr_meter_policy_update_t)(struct rte_eth_dev *dev, /** @internal MTR object meter DSCP table update. */ typedef int (*rte_mtr_meter_dscp_table_update_t)(struct rte_eth_dev *dev, - uint32_t mtr_id, + uint32_t mtr_id, enum rte_mtr_color_in_protocol proto, enum rte_color *dscp_table, struct rte_mtr_error *error); /** @internal mtr object meter vlan table update. 
*/ typedef int (*rte_mtr_meter_vlan_table_update_t)(struct rte_eth_dev *dev, - uint32_t mtr_id, + uint32_t mtr_id, enum rte_mtr_color_in_protocol proto, enum rte_color *vlan_table, struct rte_mtr_error *error);

From patchwork Wed Sep 7 02:40:20 2022
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 115992
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Rongwei Liu
To: Aman Singh, Yuying Zhang, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v1] ethdev: add direction info when creating the transfer table
Date: Wed, 7 Sep 2022 05:40:20 +0300
Message-ID: <20220907024020.2474860-1-rongweil@nvidia.com>
A transfer domain rule can match traffic of both wire and VF origin, which consumes the underlying resources of both directions. In customer deployments, a single flow table usually matches traffic in only one direction: either from the wire or from a VF.

Introduce a new member, transfer_mode, in rte_flow_attr to indicate the flow table's direction property: from wire, from VF, or bidirectional (the default). This saves underlying memory and also improves the insertion rate. By default, the transfer domain remains bidirectional, so there is no behavior change.

1. Match wire-origin traffic only:
   flow template_table 0 create group 0 priority 0 transfer wire_orig...
2. Match VF-origin traffic only:
   flow template_table 0 create group 0 priority 0 transfer vf_orig...
Signed-off-by: Rongwei Liu Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 26 +++++++++++++++++++++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 3 ++- lib/ethdev/rte_flow.h | 9 ++++++- 3 files changed, 36 insertions(+), 2 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 7f50028eb7..b25b595e82 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -177,6 +177,8 @@ enum index { TABLE_INGRESS, TABLE_EGRESS, TABLE_TRANSFER, + TABLE_TRANSFER_WIRE_ORIG, + TABLE_TRANSFER_VF_ORIG, TABLE_RULES_NUMBER, TABLE_PATTERN_TEMPLATE, TABLE_ACTIONS_TEMPLATE, @@ -1141,6 +1143,8 @@ static const enum index next_table_attr[] = { TABLE_INGRESS, TABLE_EGRESS, TABLE_TRANSFER, + TABLE_TRANSFER_WIRE_ORIG, + TABLE_TRANSFER_VF_ORIG, TABLE_RULES_NUMBER, TABLE_PATTERN_TEMPLATE, TABLE_ACTIONS_TEMPLATE, @@ -2881,6 +2885,18 @@ static const struct token token_list[] = { .next = NEXT(next_table_attr), .call = parse_table, }, + [TABLE_TRANSFER_WIRE_ORIG] = { + .name = "wire_orig", + .help = "affect rule direction to transfer", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_TRANSFER_VF_ORIG] = { + .name = "vf_orig", + .help = "affect rule direction to transfer", + .next = NEXT(next_table_attr), + .call = parse_table, + }, [TABLE_RULES_NUMBER] = { .name = "rules_number", .help = "number of rules in table", @@ -8894,6 +8910,16 @@ parse_table(struct context *ctx, const struct token *token, case TABLE_TRANSFER: out->args.table.attr.flow_attr.transfer = 1; return len; + case TABLE_TRANSFER_WIRE_ORIG: + if (!out->args.table.attr.flow_attr.transfer) + return -1; + out->args.table.attr.flow_attr.transfer_mode = 1; + return len; + case TABLE_TRANSFER_VF_ORIG: + if (!out->args.table.attr.flow_attr.transfer) + return -1; + out->args.table.attr.flow_attr.transfer_mode = 2; + return len; default: return -1; } diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 
330e34427d..603b7988dd 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3332,7 +3332,8 @@ It is bound to ``rte_flow_template_table_create()``:: flow template_table {port_id} create [table_id {id}] [group {group_id}] - [priority {level}] [ingress] [egress] [transfer] + [priority {level}] [ingress] [egress] + [transfer [vf_orig] [wire_orig]] rules_number {number} pattern_template {pattern_template_id} actions_template {actions_template_id} diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a79f1e7ef0..512b08d817 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -130,7 +130,14 @@ struct rte_flow_attr { * through a suitable port. @see rte_flow_pick_transfer_proxy(). */ uint32_t transfer:1; - uint32_t reserved:29; /**< Reserved, must be zero. */ + /** + * 0 means bidirection, + * 0x1 origin uplink, + * 0x2 origin vport, + * N/A both set. + */ + uint32_t transfer_mode:2; + uint32_t reserved:27; /**< Reserved, must be zero. 
*/ }; /**

From patchwork Tue Sep 13 07:13:44 2022
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 116238
X-Patchwork-Delegate: david.marchand@redhat.com
From: Chengwen Feng
Subject: [PATCH v4 4/4] ethdev: support telemetry private dump
Date: Tue, 13 Sep 2022 07:13:44 +0000
Message-ID: <20220913071344.38612-5-fengchengwen@huawei.com>
In-Reply-To: <20220913071344.38612-1-fengchengwen@huawei.com>
This patch supports telemetry private dump of an ethdev port.

Signed-off-by: Chengwen Feng
Acked-by: Morten Brørup
--- lib/ethdev/rte_ethdev.c | 47 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 47 insertions(+) diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 1979dc0850..a19b1215be 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include #include @@ -5644,6 +5645,48 @@ eth_dev_handle_port_xstats(const char *cmd __rte_unused, return 0; } +#ifndef RTE_EXEC_ENV_WINDOWS +static int +eth_dev_handle_port_dump_priv(const char *cmd __rte_unused, + const char *params, + struct rte_tel_data *d) +{ + char *buf, *end_param; + int port_id, ret; + FILE *f; + + if (params == NULL || strlen(params) == 0 || !isdigit(*params)) + return -EINVAL; + + port_id = strtoul(params, &end_param, 0); + if (*end_param != '\0') + RTE_ETHDEV_LOG(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); + if (!rte_eth_dev_is_valid_port(port_id)) + return -EINVAL; + + buf = calloc(sizeof(char), RTE_TEL_MAX_SINGLE_STRING_LEN); + if (buf == NULL) + return -ENOMEM; + + f = fmemopen(buf, RTE_TEL_MAX_SINGLE_STRING_LEN - 1, "w+"); + if (f == NULL) { + free(buf); + return -EINVAL; + } + + ret = rte_eth_dev_priv_dump(port_id, f); + fclose(f); + if (ret == 0) { + rte_tel_data_start_dict(d); + rte_tel_data_string(d, buf); + } + + free(buf); + return 0; +} +#endif /* !RTE_EXEC_ENV_WINDOWS */ + static int eth_dev_handle_port_link_status(const char *cmd __rte_unused, const char *params, @@ -5927,6 +5970,10 @@ RTE_INIT(ethdev_init_telemetry) "Returns the common stats for a port. Parameters: int port_id"); rte_telemetry_register_cmd("/ethdev/xstats", eth_dev_handle_port_xstats, "Returns the extended stats for a port.
Parameters: int port_id"); +#ifndef RTE_EXEC_ENV_WINDOWS + rte_telemetry_register_cmd("/ethdev/dump_priv", eth_dev_handle_port_dump_priv, + "Returns dump private information for a port. Parameters: int port_id"); +#endif rte_telemetry_register_cmd("/ethdev/link_status", eth_dev_handle_port_link_status, "Returns the link status for a port. Parameters: int port_id");

From patchwork Thu Sep 15 07:07:30 2022
X-Patchwork-Submitter: Hanumanth Pothula
X-Patchwork-Id: 116325
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Hanumanth Pothula
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v4 1/3] ethdev: Add support for multiple mbuf pools per Rx queue
Date: Thu, 15 Sep 2022 12:37:30 +0530
Message-ID: <20220915070732.182542-1-hpothula@marvell.com>
In-Reply-To: <20220902070047.2812906-1-hpothula@marvell.com>

This patch adds support for the multiple mempool capability. Some HW can choose a memory pool based on the packet's size. This capability allows a PMD to choose a memory pool based on the packet's length.
This is often useful for saving memory: the application can create different pools to steer packets of specific sizes, enabling effective use of memory.

For example, say the HW supports three pools:
- pool-1 size is 2K
- pool-2 size is > 2K and < 4K
- pool-3 size is > 4K

Here,
- pool-1 can accommodate packets with sizes < 2K
- pool-2 can accommodate packets with sizes > 2K and < 4K
- pool-3 can accommodate packets with sizes > 4K

With the multiple mempool capability enabled in SW, an application may create three pools of different sizes and pass them to the PMD, allowing the PMD to program the HW based on packet lengths. Packets shorter than 2K are then received on pool-1, packets with lengths between 2K and 4K on pool-2, and packets longer than 4K on pool-3.

Signed-off-by: Hanumanth Pothula

v4:
- Renamed offload capability from RTE_ETH_RX_OFFLOAD_BUFFER_SORT to RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL.
- In struct rte_eth_rxconf, defined a new pointer holding an array of type struct rte_eth_rx_mempool (memory pools). This array is used by the PMD to program multiple mempools.
v3:
- Implemented pool sort capability as a new Rx offload capability, RTE_ETH_RX_OFFLOAD_BUFFER_SORT.
v2:
- Along with spec changes, uploading testpmd and driver changes.
--- lib/ethdev/rte_ethdev.c | 78 ++++++++++++++++++++++++++++++++++------- lib/ethdev/rte_ethdev.h | 24 +++++++++++++ 2 files changed, 89 insertions(+), 13 deletions(-) diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 1979dc0850..8618d6b01d 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -1634,6 +1634,45 @@ rte_eth_dev_is_removed(uint16_t port_id) return ret; } +static int +rte_eth_rx_queue_check_mempool(const struct rte_eth_rx_mempool *rx_mempool, + uint16_t n_pool, uint32_t *mbp_buf_size, + const struct rte_eth_dev_info *dev_info) +{ + uint16_t pool_idx; + + if (n_pool > dev_info->max_pools) { + RTE_ETHDEV_LOG(ERR, + "Invalid capabilities, max pools supported %u\n", + dev_info->max_pools); + return -EINVAL; + } + + for (pool_idx = 0; pool_idx < n_pool; pool_idx++) { + struct rte_mempool *mpl = rx_mempool[pool_idx].mp; + + if (mpl == NULL) { + RTE_ETHDEV_LOG(ERR, "null mempool pointer\n"); + return -EINVAL; + } + + *mbp_buf_size = rte_pktmbuf_data_room_size(mpl); + if (*mbp_buf_size < dev_info->min_rx_bufsize + + RTE_PKTMBUF_HEADROOM) { + RTE_ETHDEV_LOG(ERR, + "%s mbuf_data_room_size %u < %u (RTE_PKTMBUF_HEADROOM=%u + min_rx_bufsize(dev)=%u)\n", + mpl->name, *mbp_buf_size, + RTE_PKTMBUF_HEADROOM + dev_info->min_rx_bufsize, + RTE_PKTMBUF_HEADROOM, + dev_info->min_rx_bufsize); + return -EINVAL; + } + + } + + return 0; +} + static int rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg, uint16_t n_seg, uint32_t *mbp_buf_size, @@ -1733,7 +1772,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (mp != NULL) { /* Single pool configuration check. 
*/ - if (rx_conf != NULL && rx_conf->rx_nseg != 0) { + if (rx_conf != NULL && + (rx_conf->rx_nseg != 0 || rx_conf->rx_npool)) { RTE_ETHDEV_LOG(ERR, "Ambiguous segment configuration\n"); return -EINVAL; @@ -1763,30 +1803,42 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, dev_info.min_rx_bufsize); return -EINVAL; } - } else { - const struct rte_eth_rxseg_split *rx_seg; - uint16_t n_seg; + } else if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT || + rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) { - /* Extended multi-segment configuration check. */ - if (rx_conf == NULL || rx_conf->rx_seg == NULL || rx_conf->rx_nseg == 0) { + /* Extended multi-segment/pool configuration check. */ + if (rx_conf == NULL || + (rx_conf->rx_seg == NULL && rx_conf->rx_mempool == NULL) || + (rx_conf->rx_nseg == 0 && rx_conf->rx_npool == 0)) { RTE_ETHDEV_LOG(ERR, "Memory pool is null and no extended configuration provided\n"); return -EINVAL; } - rx_seg = (const struct rte_eth_rxseg_split *)rx_conf->rx_seg; - n_seg = rx_conf->rx_nseg; - if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + const struct rte_eth_rxseg_split *rx_seg = + (const struct rte_eth_rxseg_split *)rx_conf->rx_seg; + uint16_t n_seg = rx_conf->rx_nseg; ret = rte_eth_rx_queue_check_split(rx_seg, n_seg, &mbp_buf_size, &dev_info); - if (ret != 0) + if (ret) return ret; - } else { - RTE_ETHDEV_LOG(ERR, "No Rx segmentation offload configured\n"); - return -EINVAL; } + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) { + const struct rte_eth_rx_mempool *rx_mempool = + (const struct rte_eth_rx_mempool *)rx_conf->rx_mempool; + ret = rte_eth_rx_queue_check_mempool(rx_mempool, + rx_conf->rx_npool, + &mbp_buf_size, + &dev_info); + if (ret) + return ret; + + } + } else { + RTE_ETHDEV_LOG(ERR, "No Rx offload is configured\n"); + return -EINVAL; } /* Use default specified by driver, if nb_rx_desc is zero */ diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 
b62ac5bb6f..17deec2cbd 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -1035,6 +1035,11 @@ union rte_eth_rxseg { /* The other features settings should be added here. */ }; +/* A common structure used to describe mbuf pools per Rx queue */ +struct rte_eth_rx_mempool { + struct rte_mempool *mp; +}; + /** * A structure used to configure an Rx ring of an Ethernet port. */ @@ -1067,6 +1072,23 @@ struct rte_eth_rxconf { */ union rte_eth_rxseg *rx_seg; + /** + * Points to an array of mempools. + * + * This provides support for multiple mbuf pools per Rx queue. + * + * This is often useful for saving the memory where the application can + * create a different pools to steer the specific size of the packet, thus + * enabling effective use of memory. + * + * Note that on Rx scatter enable, a packet may be delivered using a chain + * of mbufs obtained from single mempool or multiple mempools based on + * the NIC implementation. + * + */ + struct rte_eth_rx_mempool *rx_mempool; + uint16_t rx_npool; /** < number of mempools */ + uint64_t reserved_64s[2]; /**< Reserved for future fields */ void *reserved_ptrs[2]; /**< Reserved for future fields */ }; @@ -1395,6 +1417,7 @@ struct rte_eth_conf { #define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_BIT64(18) #define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19) #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20) +#define RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL RTE_BIT64(21) #define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \ @@ -1615,6 +1638,7 @@ struct rte_eth_dev_info { /** Configured number of Rx/Tx queues */ uint16_t nb_rx_queues; /**< Number of Rx queues. */ uint16_t nb_tx_queues; /**< Number of Tx queues. 
*/ + uint16_t max_pools; /** Rx parameter recommendations */ struct rte_eth_dev_portconf default_rxportconf; /** Tx parameter recommendations */

From patchwork Thu Sep 15 12:45:19 2022
X-Patchwork-Submitter: "lihuisong (C)"
X-Patchwork-Id: 116352
X-Patchwork-Delegate: thomas@monjalon.net
From: Huisong Li
Subject: [PATCH V2 3/6] ethdev: fix push new event
Date: Thu, 15 Sep 2022 20:45:19 +0800
Message-ID: <20220915124522.5407-4-lihuisong@huawei.com>
In-Reply-To: <20220915124522.5407-1-lihuisong@huawei.com>
MIME-Version: 1.0 X-Originating-IP: [10.28.79.22] X-BeenThere: dev@dpdk.org List-Id: DPDK patches and discussions Errors-To: dev-bounces@dpdk.org The 'state' field in struct rte_eth_dev may be consulted by the application when it receives one of these events. For example, when the application receives a NEW event, it may call rte_eth_dev_socket_id() to get the socket ID of this port in order to set up the attached port. rte_eth_dev_socket_id() relies on 'state', so if the state is not changed to RTE_ETH_DEV_ATTACHED before the NEW event is pushed, the application fails to get the socket ID. This patch therefore moves the event push after the state update. Fixes: 99a2dd955fba ("lib: remove librte_ prefix from directory names") Cc: stable@dpdk.org Signed-off-by: Huisong Li --- lib/ethdev/ethdev_driver.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index a285f213f0..a6616f072b 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -206,9 +206,9 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev) if (rte_eal_process_type() == RTE_PROC_SECONDARY) eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev); - rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL); dev->state = RTE_ETH_DEV_ATTACHED; + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL); } int From patchwork Mon Sep 19 12:15:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sunil Kumar Kori X-Patchwork-Id: 116426 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org
[217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B7FD8A00C3; Mon, 19 Sep 2022 14:18:05 +0200 (CEST) From: To: Ferruh Yigit , Thomas Monjalon , Andrew Rybchenko , "Ray Kinsella" CC: , Jerin Jacob Subject: [PATCH v2 1/1] ethdev: support congestion management Date: Mon, 19 Sep 2022 17:45:34 +0530 Message-ID: <20220919121534.1058884-1-skori@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220713130340.2886839-1-jerinj@marvell.com> References: <20220713130340.2886839-1-jerinj@marvell.com> MIME-Version: 1.0 From: Jerin Jacob NIC HW controllers often come with congestion management support on various HW objects such as Rx queue depth or mempool queue depth. They can also support various modes of operation, such as RED (Random Early Discard) or WRED, on those HW objects. This patch adds a framework to express such modes (enum rte_cman_mode) and introduces enum rte_eth_cman_obj to enumerate the different objects the modes can operate on. This patch adds the RTE_CMAN_RED mode of operation and the RTE_ETH_CMAN_OBJ_RX_QUEUE and RTE_ETH_CMAN_OBJ_RX_QUEUE_MEMPOOL objects. Introduced reserved fields in the configuration structure, backed by rte_eth_cman_config_init(), to allow adding new configuration parameters without ABI breakage. Added the rte_eth_cman_info_get() API to get information such as supported modes and objects. Added the rte_eth_cman_config_init() and rte_eth_cman_config_set() APIs to configure congestion management on those objects with the associated mode. Finally, added the rte_eth_cman_config_get() API to retrieve the applied configuration.
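[Editorial aside, not part of the patch: the three fields of struct rte_cman_red_params map directly onto the classic RED marking curve. A self-contained sketch of that curve — hypothetical helper name, thresholds as integer percentages, probability scaled by 10000 — under the standard RED definition:]

```c
/* Hypothetical helper, not DPDK code: RED marking probability for an
 * average queue fill level 'avg' (percent), given parameters like those
 * in struct rte_cman_red_params. Returns the probability scaled by
 * 10000. maxp_inv is the inverse of the maximum marking probability,
 * so the ramp tops out at 10000 / maxp_inv as avg approaches max_th. */
static unsigned int
red_mark_prob_x10000(unsigned int avg, unsigned int min_th,
                     unsigned int max_th, unsigned int maxp_inv)
{
    if (avg < min_th)
        return 0;           /* below min_th: never mark/drop */
    if (avg >= max_th)
        return 10000;       /* at or above max_th: always drop */
    /* linear ramp from 0 at min_th up to 1/maxp_inv near max_th */
    return (10000u * (avg - min_th)) / ((max_th - min_th) * maxp_inv);
}
```

[For example, min_th=20, max_th=80, maxp_inv=2 gives 2500 (a 25% marking probability) at avg=50, the midpoint of the ramp.]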
Signed-off-by: Jerin Jacob --- v1..v2: - Fix review comments (Akhil Goyal) rfc..v1: - Added RED specification (http://www.aciri.org/floyd/papers/red/red.html) link - Fixed doxygen comment issue (Min Hu) doc/guides/nics/features.rst | 12 +++ doc/guides/nics/features/default.ini | 1 + lib/eal/include/meson.build | 1 + lib/eal/include/rte_cman.h | 55 ++++++++++ lib/ethdev/ethdev_driver.h | 25 +++++ lib/ethdev/meson.build | 1 + lib/ethdev/rte_cman.c | 101 ++++++++++++++++++ lib/ethdev/rte_ethdev.h | 151 +++++++++++++++++++++++++++ lib/ethdev/version.map | 6 ++ 9 files changed, 353 insertions(+) create mode 100644 lib/eal/include/rte_cman.h create mode 100644 lib/ethdev/rte_cman.c diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst index 7f6cb914a5..aa22d8bb22 100644 --- a/doc/guides/nics/features.rst +++ b/doc/guides/nics/features.rst @@ -727,6 +727,18 @@ Supports configuring per-queue stat counter mapping. ``rte_eth_dev_set_tx_queue_stats_mapping()``. +.. _nic_features_congestion_management: + +Congestion management +--------------------- + +Supports congestion management. + +* **[implements] eth_dev_ops**: ``cman_info_get``, ``cman_config_set``, ``cman_config_get``. +* **[related] API**: ``rte_eth_cman_info_get()``, ``rte_eth_cman_config_init()``, + ``rte_eth_cman_config_set()``, ``rte_eth_cman_config_get()``. + + .. 
_nic_features_fw_version: FW version diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index d1db0c256a..38a5767b06 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -60,6 +60,7 @@ Tx descriptor status = Basic stats = Extended stats = Stats per queue = +Congestion management = FW version = EEPROM dump = Module EEPROM dump = diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build index fd6e844224..e569ba7cf4 100644 --- a/lib/eal/include/meson.build +++ b/lib/eal/include/meson.build @@ -10,6 +10,7 @@ headers += files( 'rte_branch_prediction.h', 'rte_bus.h', 'rte_class.h', + 'rte_cman.h', 'rte_common.h', 'rte_compat.h', 'rte_debug.h', diff --git a/lib/eal/include/rte_cman.h b/lib/eal/include/rte_cman.h new file mode 100644 index 0000000000..1d84ddf0fb --- /dev/null +++ b/lib/eal/include/rte_cman.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2022 Marvell International Ltd. + */ + +#ifndef RTE_CMAN_H +#define RTE_CMAN_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include + +/** + * @file + * Congestion management related parameters for DPDK. + */ + +/** Congestion management modes */ +enum rte_cman_mode { + /** + * Congestion based on Random Early Detection. + * + * https://en.wikipedia.org/wiki/Random_early_detection + * http://www.aciri.org/floyd/papers/red/red.html + * @see struct rte_cman_red_params + */ + RTE_CMAN_RED = RTE_BIT64(0), +}; + +/** + * RED based congestion management configuration parameters. + */ +struct rte_cman_red_params { + /** + * Minimum threshold (min_th) value + * + * Value expressed as percentage. Value must be in 0 to 100(inclusive). + */ + uint8_t min_th; + /** + * Maximum threshold (max_th) value + * + * Value expressed as percentage. Value must be in 0 to 100(inclusive). 
+ */ + uint8_t max_th; + /** Inverse of packet marking probability maximum value (maxp = 1 / maxp_inv) */ + uint16_t maxp_inv; +}; + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_CMAN_H */ diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index 5101868ea7..9b6ad5c5c1 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -1093,6 +1093,22 @@ typedef int (*eth_rx_queue_avail_thresh_query_t)(struct rte_eth_dev *dev, uint16_t *rx_queue_id, uint8_t *avail_thresh); +/** @internal Get congestion management information. */ +typedef int (*eth_cman_info_get_t)(struct rte_eth_dev *dev, + struct rte_eth_cman_info *info); + +/** @internal Init congestion management structure with default values. */ +typedef int (*eth_cman_config_init_t)(struct rte_eth_dev *dev, + struct rte_eth_cman_config *config); + +/** @internal Configure congestion management on a port. */ +typedef int (*eth_cman_config_set_t)(struct rte_eth_dev *dev, + struct rte_eth_cman_config *config); + +/** @internal Retrieve congestion management configuration of a port. */ +typedef int (*eth_cman_config_get_t)(struct rte_eth_dev *dev, + struct rte_eth_cman_config *config); + /** * @internal A structure containing the functions exported by an Ethernet driver. 
*/ @@ -1308,6 +1324,15 @@ struct eth_dev_ops { eth_rx_queue_avail_thresh_set_t rx_queue_avail_thresh_set; /** Query Rx queue available descriptors threshold event */ eth_rx_queue_avail_thresh_query_t rx_queue_avail_thresh_query; + + /** Get congestion management information */ + eth_cman_info_get_t cman_info_get; + /** Initialize congestion management structure with default values */ + eth_cman_config_init_t cman_config_init; + /** Configure congestion management */ + eth_cman_config_set_t cman_config_set; + /** Retrieve congestion management configuration */ + eth_cman_config_get_t cman_config_get; }; /** diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index 47bb2625b0..59ad49114f 100644 --- a/lib/ethdev/meson.build +++ b/lib/ethdev/meson.build @@ -7,6 +7,7 @@ sources = files( 'ethdev_profile.c', 'ethdev_trace_points.c', 'rte_class_eth.c', + 'rte_cman.c', 'rte_ethdev.c', 'rte_flow.c', 'rte_mtr.c', diff --git a/lib/ethdev/rte_cman.c b/lib/ethdev/rte_cman.c new file mode 100644 index 0000000000..2093c247d1 --- /dev/null +++ b/lib/ethdev/rte_cman.c @@ -0,0 +1,101 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2022 Marvell International Ltd. 
+ */ + +#include + +#include +#include "rte_ethdev.h" +#include "ethdev_driver.h" + +static int +eth_err(uint16_t port_id, int ret) +{ + if (ret == 0) + return 0; + + if (rte_eth_dev_is_removed(port_id)) + return -EIO; + + return ret; +} + +#define RTE_CMAN_FUNC_ERR_RET(func) \ +do { \ + if (func == NULL) { \ + RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); \ + return -ENOTSUP; \ + } \ +} while (0) + +/* Get congestion management information for a port */ +int +rte_eth_cman_info_get(uint16_t port_id, struct rte_eth_cman_info *info) +{ + struct rte_eth_dev *dev; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (info == NULL) { + RTE_ETHDEV_LOG(ERR, "congestion management info is NULL\n"); + return -EINVAL; + } + + RTE_CMAN_FUNC_ERR_RET(dev->dev_ops->cman_info_get); + return eth_err(port_id, (*dev->dev_ops->cman_info_get)(dev, info)); +} + +/* Initialize congestion management structure with default values */ +int +rte_eth_cman_config_init(uint16_t port_id, struct rte_eth_cman_config *config) +{ + struct rte_eth_dev *dev; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (config == NULL) { + RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + return -EINVAL; + } + + RTE_CMAN_FUNC_ERR_RET(dev->dev_ops->cman_config_init); + return eth_err(port_id, (*dev->dev_ops->cman_config_init)(dev, config)); +} + +/* Configure congestion management on a port */ +int +rte_eth_cman_config_set(uint16_t port_id, struct rte_eth_cman_config *config) +{ + struct rte_eth_dev *dev; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (config == NULL) { + RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + return -EINVAL; + } + + RTE_CMAN_FUNC_ERR_RET(dev->dev_ops->cman_config_set); + return eth_err(port_id, (*dev->dev_ops->cman_config_set)(dev, config)); +} + +/* Retrieve congestion management configuration of a port */ +int 
+rte_eth_cman_config_get(uint16_t port_id, struct rte_eth_cman_config *config) +{ + struct rte_eth_dev *dev; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (config == NULL) { + RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + return -EINVAL; + } + + RTE_CMAN_FUNC_ERR_RET(dev->dev_ops->cman_config_get); + return eth_err(port_id, (*dev->dev_ops->cman_config_get)(dev, config)); +} diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index de9e970d4d..f4bb644c6a 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -160,6 +160,7 @@ extern "C" { #define RTE_ETHDEV_DEBUG_TX #endif +#include #include #include #include @@ -5506,6 +5507,156 @@ typedef struct { __rte_experimental int rte_eth_dev_priv_dump(uint16_t port_id, FILE *file); +/* Congestion management */ + +/** Enumerate list of ethdev congestion management objects */ +enum rte_eth_cman_obj { + /** Congestion management based on Rx queue depth */ + RTE_ETH_CMAN_OBJ_RX_QUEUE = RTE_BIT64(0), + /** + * Congestion management based on mempool depth associated with Rx queue + * @see rte_eth_rx_queue_setup() + */ + RTE_ETH_CMAN_OBJ_RX_QUEUE_MEMPOOL = RTE_BIT64(1), +}; + +/** + * @warning + * @b EXPERIMENTAL: this structure may change, or be removed, without prior notice + * + * A structure used to retrieve information of ethdev congestion management. + */ +struct rte_eth_cman_info { + /** + * Set of supported congestion management modes + * @see enum rte_cman_mode + */ + uint64_t modes_supported; + /** + * Set of supported congestion management objects + * @see enum rte_eth_cman_obj + */ + uint64_t objs_supported; + /** Reserved for future fields */ + uint8_t rsvd[8]; +}; + +/** + * @warning + * @b EXPERIMENTAL: this structure may change, or be removed, without prior notice + * + * A structure used to configure the ethdev congestion management. 
+ */ +struct rte_eth_cman_config { + /** Congestion management object */ + enum rte_eth_cman_obj obj; + /** Congestion management mode */ + enum rte_cman_mode mode; + union { + /** + * Rx queue to configure congestion management. + * + * Valid when object is RTE_ETH_CMAN_OBJ_RX_QUEUE or + * RTE_ETH_CMAN_OBJ_RX_QUEUE_MEMPOOL. + */ + uint16_t rx_queue; + /** Reserved for future fields */ + uint8_t rsvd_obj_params[4]; + } obj_param; + union { + /** + * RED configuration parameters. + * + * Valid when mode is RTE_CMAN_RED. + */ + struct rte_cman_red_params red; + /** Reserved for future fields */ + uint8_t rsvd_mode_params[4]; + } mode_param; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Retrieve the information for ethdev congestion management + * + * @param port_id + * The port identifier of the Ethernet device. + * @param info + * A pointer to a structure of type *rte_eth_cman_info* to be filled with + * the information about congestion management. + * @return + * - (0) if successful. + * - (-ENOTSUP) if support for cman_info_get does not exist. + * - (-ENODEV) if *port_id* invalid. + * - (-EINVAL) if bad parameter. + */ +__rte_experimental +int rte_eth_cman_info_get(uint16_t port_id, struct rte_eth_cman_info *info); + +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Initialize the ethdev congestion management configuration structure with default values. + * + * @param port_id + * The port identifier of the Ethernet device. + * @param config + * A pointer to a structure of type *rte_eth_cman_config* to be initialized + * with default value. + * @return + * - (0) if successful. + * - (-ENOTSUP) if support for cman_config_init does not exist. + * - (-ENODEV) if *port_id* invalid. + * - (-EINVAL) if bad parameter. 
+ */ +__rte_experimental +int rte_eth_cman_config_init(uint16_t port_id, struct rte_eth_cman_config *config); + +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Configure ethdev congestion management + * + * @param port_id + * The port identifier of the Ethernet device. + * @param config + * A pointer to a structure of type *rte_eth_cman_config* to be configured. + * @return + * - (0) if successful. + * - (-ENOTSUP) if support for cman_config_set does not exist. + * - (-ENODEV) if *port_id* invalid. + * - (-EINVAL) if bad parameter. + */ +__rte_experimental +int rte_eth_cman_config_set(uint16_t port_id, struct rte_eth_cman_config *config); + +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Retrieve the applied ethdev congestion management parameters for the given port. + * + * @param port_id + * The port identifier of the Ethernet device. + * @param config + * A pointer to a structure of type *rte_eth_cman_config* to retrieve + * congestion management parameters for the given object. + * Application must fill all parameters except mode_param parameter in + * struct rte_eth_cman_config. + * + * @return + * - (0) if successful. + * - (-ENOTSUP) if support for cman_config_get does not exist. + * - (-ENODEV) if *port_id* invalid. + * - (-EINVAL) if bad parameter. 
+ */ +__rte_experimental +int rte_eth_cman_config_get(uint16_t port_id, struct rte_eth_cman_config *config); + #include /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 03f52fee91..ea9b9497ad 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -285,6 +285,12 @@ EXPERIMENTAL { rte_mtr_color_in_protocol_priority_get; rte_mtr_color_in_protocol_set; rte_mtr_meter_vlan_table_update; + + # added in 22.11 + rte_eth_cman_config_get; + rte_eth_cman_config_init; + rte_eth_cman_config_set; + rte_eth_cman_info_get; }; INTERNAL { From patchwork Mon Sep 19 15:50:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Savisko X-Patchwork-Id: 116433 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru From: Michael Savisko To: CC: , , , , Aman Singh , Yuying Zhang , Thomas Monjalon , Ferruh Yigit , Andrew Rybchenko Subject: [PATCH v3] ethdev: add send to kernel action Date: Mon, 19 Sep 2022 18:50:13 +0300 Message-ID: <20220919155013.61473-1-michaelsav@nvidia.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220914093219.11728-1-michaelsav@nvidia.com> References: <20220914093219.11728-1-michaelsav@nvidia.com> MIME-Version: 1.0 In some cases an application may receive a packet that should have been received by the kernel. In this case the application uses KNI or other means to transfer the packet to the kernel. With a bifurcated driver we can have a rule to route packets matching a pattern (example: IPv4 packets) to the DPDK application, while the rest of the traffic is received by the kernel. But if we want to receive most of the traffic in DPDK except for a specific pattern (example: ICMP packets) that should be processed by the kernel, then it is easier to re-route these packets with a single rule. This commit introduces a new rte_flow action which allows the application to re-route packets directly to the kernel without software involvement. Add a new testpmd rte_flow action 'send_to_kernel'. The application may use this action to route the packet to the kernel while still in the HW.
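[Editorial illustration, not part of the patch: the bifurcated setup described above can be pictured with a tiny self-contained model — plain C, no DPDK calls — of the dispatch decision the hardware performs once a single rule such as "ICMP -> send_to_kernel" is installed, while all other traffic stays with the DPDK application.]

```c
enum pkt_dest { DEST_DPDK_APP, DEST_KERNEL };

/* Illustrative model only: a single hardware rule matching IPv4
 * protocol 1 (ICMP) with the send_to_kernel action re-routes those
 * packets to the kernel driver sharing the device; everything else
 * is received by the DPDK application. In a real deployment this
 * decision is made in NIC hardware, not in software. */
static enum pkt_dest
dispatch_by_ip_proto(unsigned char ip_proto)
{
    return (ip_proto == 1 /* IPPROTO_ICMP */) ? DEST_KERNEL
                                              : DEST_DPDK_APP;
}
```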
Example with testpmd command: flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800 type mask 0xffff / end actions send_to_kernel / end Signed-off-by: Michael Savisko Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 9 +++++++++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 ++ lib/ethdev/rte_flow.c | 1 + lib/ethdev/rte_flow.h | 10 ++++++++++ 4 files changed, 22 insertions(+) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 7f50028eb7..042f6b34a6 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -612,6 +612,7 @@ enum index { ACTION_PORT_REPRESENTOR_PORT_ID, ACTION_REPRESENTED_PORT, ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID, + ACTION_SEND_TO_KERNEL, }; /** Maximum size for pattern in struct rte_flow_item_raw. */ @@ -1872,6 +1873,7 @@ static const enum index next_action[] = { ACTION_CONNTRACK_UPDATE, ACTION_PORT_REPRESENTOR, ACTION_REPRESENTED_PORT, + ACTION_SEND_TO_KERNEL, ZERO, }; @@ -6341,6 +6343,13 @@ static const struct token token_list[] = { .help = "submit a list of associated actions for red", .next = NEXT(next_action), }, + [ACTION_SEND_TO_KERNEL] = { + .name = "send_to_kernel", + .help = "send packets to kernel", + .priv = PRIV_ACTION(SEND_TO_KERNEL, 0), + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc, + }, /* Top-level command. */ [ADD] = { diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 330e34427d..c259c8239a 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -4189,6 +4189,8 @@ This section lists supported actions and their attributes, if any. - ``ethdev_port_id {unsigned}``: ethdev port ID +- ``send_to_kernel``: send packets to kernel. 
+ Destroying flow rules ~~~~~~~~~~~~~~~~~~~~~ diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 501be9d602..627c671ce4 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -259,6 +259,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = { MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)), MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)), MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)), + MK_FLOW_ACTION(SEND_TO_KERNEL, 0), }; int diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a79f1e7ef0..bf076087b3 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2879,6 +2879,16 @@ enum rte_flow_action_type { * @see struct rte_flow_action_ethdev */ RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, + + /** + * Send packets to the kernel, without going to userspace at all. + * The packets will be received by the kernel driver sharing + * the same device as the DPDK port. + * This is an ingress action only. + * + * No associated configuration structure. 
+ */ + RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL, }; /** From patchwork Mon Sep 19 16:37:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dariusz Sosnowski X-Patchwork-Id: 116439 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 795DEA00C3; Mon, 19 Sep 2022 18:39:02 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 669A541148; Mon, 19 Sep 2022 18:39:02 +0200 (CEST) Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2067.outbound.protection.outlook.com [40.107.220.67]) by mails.dpdk.org (Postfix) with ESMTP id 6BC5A41145 for ; Mon, 19 Sep 2022 18:39:01 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=VStP0fe6J6Qt9ie+UkY/1KFumNZniTb5dZCMj9thn9bqe9xN4SUb9gcnJOjPWbG1yACOUeF5zs0W9bEP/rdDpWZIy9qOwhi0z+XOxRuQf1kbBFeSZ53K2dvhVLY20ZZ+UaBL6/aUKK6Co6xBhdXX1eSmwK4M3H+x8gWh1b1RFRC1tYFv1KYgit84CuQ31PDCjxbDha9jI8H8PVhRVMStqHPqPEqigF8W1zbCdn/PuAUznAaiq5DjUbUHuSeKdPcYL12PVab7EPoptwoMLefeNCIM1ICJkTL43UIYDY3Xv9GN45PgQmYeNZDqdAdvBzBwCVxJGA1cn+3eEFzyc1qBcw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=ZOKW75G1mlOoqHfLqcR1ZUrGUmxAFrzD1qmndRJOimw=; 
From: Dariusz Sosnowski
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH 1/7] ethdev: introduce hairpin memory capabilities
Date: Mon, 19 Sep 2022 16:37:24 +0000
Message-ID: <20220919163731.1540454-2-dsosnowski@nvidia.com>
In-Reply-To: <20220919163731.1540454-1-dsosnowski@nvidia.com>
References: <20220919163731.1540454-1-dsosnowski@nvidia.com>
This patch introduces new hairpin queue configuration options through the rte_eth_hairpin_conf struct, allowing the Rx and Tx hairpin queue memory configuration to be tuned.

Hairpin configuration is extended with the following fields:

- use_locked_device_memory - If set, the PMD will use specialized on-device memory to store Rx or Tx hairpin queue data.
- use_rte_memory - If set, the PMD will use DPDK-managed memory to store Rx or Tx hairpin queue data.
- force_memory - If set, the PMD will be forced to use the provided memory settings. If no appropriate resources are available, device start will fail. If unset and no resources are available, the PMD will fall back to the default resource type for the given queue.

Hairpin capabilities are also extended to allow verification of support for given hairpin memory configurations. Struct rte_eth_hairpin_cap is extended with two additional fields of type rte_eth_hairpin_queue_cap:

- rx_cap - memory capabilities of hairpin Rx queues.
- tx_cap - memory capabilities of hairpin Tx queues.

Struct rte_eth_hairpin_queue_cap exposes whether a given queue type supports the use_locked_device_memory and use_rte_memory flags.
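The interaction of the three new flags is easiest to see as code. The sketch below is self-contained, with simplified stand-in structs (not the real rte_ethdev definitions), mirroring the checks hairpin queue setup performs on the new fields:

```c
#include <stdint.h>

/* Simplified stand-ins for the new ethdev fields (illustration only;
 * not the real rte_eth_hairpin_queue_cap/rte_eth_hairpin_conf). */
struct hairpin_queue_cap {
	uint32_t locked_device_memory:1;
	uint32_t rte_memory:1;
};

struct hairpin_conf {
	uint32_t use_locked_device_memory:1;
	uint32_t use_rte_memory:1;
	uint32_t force_memory:1;
};

/* Mirrors the checks hairpin queue setup performs on the new fields:
 * returns 0 when the requested memory settings are acceptable for the
 * advertised capabilities, -1 otherwise. */
static int
validate_hairpin_memory(const struct hairpin_conf *conf,
			const struct hairpin_queue_cap *cap)
{
	if (conf->use_locked_device_memory && !cap->locked_device_memory)
		return -1; /* locked device memory not supported */
	if (conf->use_rte_memory && !cap->rte_memory)
		return -1; /* DPDK-managed memory not supported */
	if (conf->use_locked_device_memory && conf->use_rte_memory)
		return -1; /* the two memory types are mutually exclusive */
	if (conf->force_memory &&
	    !conf->use_locked_device_memory && !conf->use_rte_memory)
		return -1; /* forcing requires one memory type to be set */
	return 0;
}
```

Note that force_memory only changes what happens on allocation failure (fail vs. fall back), which is why it is rejected when neither memory type is selected.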
Signed-off-by: Dariusz Sosnowski --- lib/ethdev/rte_ethdev.c | 44 ++++++++++++++++++++++++++++ lib/ethdev/rte_ethdev.h | 65 ++++++++++++++++++++++++++++++++++++++++- 2 files changed, 108 insertions(+), 1 deletion(-) diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 1979dc0850..edcec08231 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -1945,6 +1945,28 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, conf->peer_count, cap.max_rx_2_tx); return -EINVAL; } + if (conf->use_locked_device_memory && !cap.rx_cap.locked_device_memory) { + RTE_ETHDEV_LOG(ERR, + "Attempt to use locked device memory for Rx queue, which is not supported"); + return -EINVAL; + } + if (conf->use_rte_memory && !cap.rx_cap.rte_memory) { + RTE_ETHDEV_LOG(ERR, + "Attempt to use DPDK memory for Rx queue, which is not supported"); + return -EINVAL; + } + if (conf->use_locked_device_memory && conf->use_rte_memory) { + RTE_ETHDEV_LOG(ERR, + "Attempt to use mutually exclusive memory settings for Rx queue"); + return -EINVAL; + } + if (conf->force_memory && + !conf->use_locked_device_memory && + !conf->use_rte_memory) { + RTE_ETHDEV_LOG(ERR, + "Attempt to force Rx queue memory settings, but none is set"); + return -EINVAL; + } if (conf->peer_count == 0) { RTE_ETHDEV_LOG(ERR, "Invalid value for number of peers for Rx queue(=%u), should be: > 0", @@ -2111,6 +2133,28 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, conf->peer_count, cap.max_tx_2_rx); return -EINVAL; } + if (conf->use_locked_device_memory && !cap.tx_cap.locked_device_memory) { + RTE_ETHDEV_LOG(ERR, + "Attempt to use locked device memory for Tx queue, which is not supported"); + return -EINVAL; + } + if (conf->use_rte_memory && !cap.tx_cap.rte_memory) { + RTE_ETHDEV_LOG(ERR, + "Attempt to use DPDK memory for Tx queue, which is not supported"); + return -EINVAL; + } + if (conf->use_locked_device_memory && conf->use_rte_memory) { + RTE_ETHDEV_LOG(ERR, + 
"Attempt to use mutually exclusive memory settings for Tx queue"); + return -EINVAL; + } + if (conf->force_memory && + !conf->use_locked_device_memory && + !conf->use_rte_memory) { + RTE_ETHDEV_LOG(ERR, + "Attempt to force Tx queue memory settings, but none is set"); + return -EINVAL; + } if (conf->peer_count == 0) { RTE_ETHDEV_LOG(ERR, "Invalid value for number of peers for Tx queue(=%u), should be: > 0", diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index de9e970d4d..e179b0e79b 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -1273,6 +1273,28 @@ struct rte_eth_txconf { void *reserved_ptrs[2]; /**< Reserved for future fields */ }; +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * A structure used to return the Tx or Rx hairpin queue capabilities that are supported. + */ +struct rte_eth_hairpin_queue_cap { + /** + * When set, a specialized on-device memory type can be used as a backing + * storage for a given hairpin queue type. + */ + uint32_t locked_device_memory:1; + + /** + * When set, memory managed by DPDK can be used as a backing storage + * for a given hairpin queue type. + */ + uint32_t rte_memory:1; + + uint32_t reserved:30; /**< Reserved for future fields */ +}; + /** * @warning * @b EXPERIMENTAL: this API may change, or be removed, without prior notice @@ -1287,6 +1309,8 @@ struct rte_eth_hairpin_cap { /** Max number of Tx queues to be connected to one Rx queue. */ uint16_t max_tx_2_rx; uint16_t max_nb_desc; /**< The max num of descriptors. */ + struct rte_eth_hairpin_queue_cap rx_cap; /**< Rx hairpin queue capabilities. */ + struct rte_eth_hairpin_queue_cap tx_cap; /**< Tx hairpin queue capabilities. */ }; #define RTE_ETH_MAX_HAIRPIN_PEERS 32 @@ -1334,7 +1358,46 @@ struct rte_eth_hairpin_conf { * configured automatically during port start. */ uint32_t manual_bind:1; - uint32_t reserved:14; /**< Reserved bits. 
*/ + + /** + * Use locked device memory as a backing storage. + * + * - When set, PMD will attempt to use on-device memory as a backing storage for descriptors + * and/or data in hairpin queue. + * - When clear, PMD will use default memory type as a backing storage. Please refer to PMD + * documentation for details. + * + * API user should check if PMD supports this configuration flag using + * @see rte_eth_dev_hairpin_capability_get. + */ + uint32_t use_locked_device_memory:1; + + /** + * Use DPDK memory as backing storage. + * + * - When set, PMD will attempt to use memory managed by DPDK as a backing storage + * for descriptors and/or data in hairpin queue. + * - When clear, PMD will use default memory type as a backing storage. Please refer + * to PMD documentation for details. + * + * API user should check if PMD supports this configuration flag using + * @see rte_eth_dev_hairpin_capability_get. + */ + uint32_t use_rte_memory:1; + + /** + * Force usage of hairpin memory configuration. + * + * - When set, PMD will attempt to use specified memory settings and + * if resource allocation fails, then hairpin queue setup will result in an + * error. + * - When clear, PMD will attempt to use specified memory settings and + * if resource allocation fails, then PMD will retry allocation with default + * configuration. + */ + uint32_t force_memory:1; + + uint32_t reserved:11; /**< Reserved bits.
*/ struct rte_eth_hairpin_peer peers[RTE_ETH_MAX_HAIRPIN_PEERS]; };

From patchwork Tue Sep 20 07:10:36 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116468
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Suanming Mou
To: Ori Kam, Aman Singh, Yuying Zhang, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v1] ethdev: add async flow connection tracking configuration
Date: Tue, 20 Sep 2022 10:10:36 +0300
Message-ID: <20220920071036.20878-1-suanmingm@nvidia.com>
In-Reply-To: <20220809132534.24441-1-suanmingm@nvidia.com>
References: <20220809132534.24441-1-suanmingm@nvidia.com>
In the queue-based async flow engine, in order to optimize the flow insertion rate, the PMD can use hints from the application to pre-allocate resources during the initialization phase for actions such as count/meter/aging. This commit adds the connection tracking action hint.

Signed-off-by: Suanming Mou Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 10 ++++++++++ doc/guides/rel_notes/release_22_11.rst | 6 ++++++ lib/ethdev/rte_flow.h | 10 ++++++++++ 3 files changed, 26 insertions(+) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 7f50028eb7..c9cbf381c4 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -219,6 +219,7 @@ enum index { CONFIG_COUNTERS_NUMBER, CONFIG_AGING_OBJECTS_NUMBER, CONFIG_METERS_NUMBER, + CONFIG_CONN_TRACK_NUMBER, /* Indirect action arguments */ INDIRECT_ACTION_CREATE, @@ -1081,6 +1082,7 @@ static const enum index next_config_attr[] = { CONFIG_COUNTERS_NUMBER, CONFIG_AGING_OBJECTS_NUMBER, CONFIG_METERS_NUMBER, + CONFIG_CONN_TRACK_NUMBER, END, ZERO, }; @@ -2667,6 +2669,14 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, args.configure.port_attr.nb_meters)), }, + [CONFIG_CONN_TRACK_NUMBER] = { + .name = "conn_tracks_number", + .help = "number of connection trackings", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_conn_tracks)), + }, /* Top-level command.
*/ [PATTERN_TEMPLATE] = { .name = "pattern_template", diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 8c021cf050..d5e64ff9a1 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -55,6 +55,12 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Added configuration for asynchronous flow connection tracking.** + + Added connection tracking action number hint to ``rte_flow_configure`` + and ``rte_flow_info_get``. + PMD can prepare the connection tracking resources according to the hint. + Removed Items ------------- diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a79f1e7ef0..c2747abc55 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4902,6 +4902,11 @@ struct rte_flow_port_info { * @see RTE_FLOW_ACTION_TYPE_METER */ uint32_t max_nb_meters; + /** + * Maximum number of connection trackings. + * @see RTE_FLOW_ACTION_TYPE_CONNTRACK + */ + uint32_t max_nb_conn_tracks; }; /** @@ -4971,6 +4976,11 @@ struct rte_flow_port_attr { * @see RTE_FLOW_ACTION_TYPE_METER */ uint32_t nb_meters; + /** + * Number of connection trackings to configure.
+ * @see RTE_FLOW_ACTION_TYPE_CONNTRACK + */ + uint32_t nb_conn_tracks; }; /**

From patchwork Tue Sep 20 07:11:41 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116469
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
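An aside on the hint mechanism above: the point of nb_conn_tracks is that the PMD can pre-allocate connection-tracking resources once, at configure time, so rule insertion on the hot path never allocates. A self-contained toy model (all names hypothetical, not the real PMD code):

```c
#include <stdlib.h>

/* Toy connection-tracking pool, pre-allocated at configure time from
 * the application's nb_conn_tracks hint (illustration only; the names
 * are hypothetical, not the real PMD implementation). */
struct ct_pool {
	unsigned int capacity;	/* from the nb_conn_tracks hint */
	unsigned int used;
	int *objs;		/* stand-in for conn-track contexts */
};

/* Configure phase: allocate everything up front. */
static int
ct_pool_configure(struct ct_pool *pool, unsigned int nb_conn_tracks)
{
	pool->objs = calloc(nb_conn_tracks, sizeof(*pool->objs));
	if (pool->objs == NULL)
		return -1;
	pool->capacity = nb_conn_tracks;
	pool->used = 0;
	return 0;
}

/* Hot path: no allocation, just hand out a pre-allocated slot. */
static int *
ct_create(struct ct_pool *pool)
{
	if (pool->used == pool->capacity)
		return NULL; /* hint exceeded: resources exhausted */
	return &pool->objs[pool->used++];
}
```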
From: Suanming Mou
To: Ori Kam, Aman Singh, Yuying Zhang, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ray Kinsella
Subject: [PATCH v1] ethdev: add indirect action async query
Date: Tue, 20 Sep 2022 10:11:41 +0300
Message-ID: <20220920071141.21769-1-suanmingm@nvidia.com>
In-Reply-To: <20220809132824.25890-1-suanmingm@nvidia.com>
References: <20220809132824.25890-1-suanmingm@nvidia.com>
rte_flow_action_handle_create/destroy/update() have their own asynchronous counterparts, rte_flow_async_action_handle_create/destroy/update(), to accelerate indirect action operations in the queue-based flow engine. However, the asynchronous query function for indirect actions was missing. This patch adds the rte_flow_async_action_handle_query() function, corresponding to rte_flow_action_handle_query(). The new asynchronous function enqueues the query to the hardware, similarly to asynchronous flow management, and returns immediately, freeing the CPU for other tasks. The application can get the query results from rte_flow_pull() when the hardware completes its work.
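The submit-then-pull pattern the commit message describes can be modelled with a toy queue. This is an illustration of the calling pattern only, not the ethdev implementation; the function names merely echo rte_flow_async_action_handle_query() and rte_flow_pull(), and everything here is a stand-in:

```c
#include <stdbool.h>

#define QUEUE_DEPTH 8u

/* Toy model of a flow queue: enqueued query operations are completed
 * only when the application pulls results (illustration only; not the
 * ethdev implementation). */
struct flow_queue {
	int pending[QUEUE_DEPTH];	/* stand-in for query operations */
	unsigned int head, tail;	/* monotonically increasing */
};

/* Analogous to rte_flow_async_action_handle_query(): enqueue the query
 * and return immediately; the result is produced later, so the CPU is
 * free between submission and completion. */
static bool
async_query(struct flow_queue *q, int action_id)
{
	if (q->tail - q->head == QUEUE_DEPTH)
		return false; /* queue full */
	q->pending[q->tail % QUEUE_DEPTH] = action_id;
	q->tail++;
	return true;
}

/* Analogous to rte_flow_pull(): drain completed operations and report
 * how many results the application received. */
static unsigned int
pull_results(struct flow_queue *q)
{
	unsigned int n = q->tail - q->head;

	q->head = q->tail;
	return n;
}
```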
Signed-off-by: Suanming Mou Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 34 +++ app/test-pmd/config.c | 240 ++++++++++++++------ app/test-pmd/testpmd.h | 28 +++ doc/guides/prog_guide/rte_flow.rst | 16 ++ doc/guides/rel_notes/release_22_11.rst | 5 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 19 ++ lib/ethdev/rte_flow.c | 18 ++ lib/ethdev/rte_flow.h | 44 ++++ lib/ethdev/rte_flow_driver.h | 9 + lib/ethdev/version.map | 3 + 10 files changed, 345 insertions(+), 71 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 7f50028eb7..0223286c1a 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -145,6 +145,7 @@ enum index { QUEUE_INDIRECT_ACTION_CREATE, QUEUE_INDIRECT_ACTION_UPDATE, QUEUE_INDIRECT_ACTION_DESTROY, + QUEUE_INDIRECT_ACTION_QUERY, /* Queue indirect action create arguments */ QUEUE_INDIRECT_ACTION_CREATE_ID, @@ -161,6 +162,9 @@ enum index { QUEUE_INDIRECT_ACTION_DESTROY_ID, QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE, + /* Queue indirect action query arguments */ + QUEUE_INDIRECT_ACTION_QUERY_POSTPONE, + /* Push arguments. 
*/ PUSH_QUEUE, @@ -1171,6 +1175,7 @@ static const enum index next_qia_subcmd[] = { QUEUE_INDIRECT_ACTION_CREATE, QUEUE_INDIRECT_ACTION_UPDATE, QUEUE_INDIRECT_ACTION_DESTROY, + QUEUE_INDIRECT_ACTION_QUERY, ZERO, }; @@ -1197,6 +1202,12 @@ static const enum index next_qia_destroy_attr[] = { ZERO, }; +static const enum index next_qia_query_attr[] = { + QUEUE_INDIRECT_ACTION_QUERY_POSTPONE, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -3013,6 +3024,14 @@ static const struct token token_list[] = { .next = NEXT(next_qia_destroy_attr), .call = parse_qia_destroy, }, + [QUEUE_INDIRECT_ACTION_QUERY] = { + .name = "query", + .help = "query indirect action", + .next = NEXT(next_qia_query_attr, + NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)), + .call = parse_qia, + }, /* Indirect action destroy arguments. */ [QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = { .name = "postpone", @@ -3038,6 +3057,14 @@ static const struct token token_list[] = { NEXT_ENTRY(COMMON_BOOLEAN)), .args = ARGS(ARGS_ENTRY(struct buffer, postpone)), }, + /* Indirect action query arguments. */ + [QUEUE_INDIRECT_ACTION_QUERY_POSTPONE] = { + .name = "postpone", + .help = "postpone query operation", + .next = NEXT(next_qia_query_attr, + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, postpone)), + }, /* Indirect action create arguments.
*/ [QUEUE_INDIRECT_ACTION_CREATE_ID] = { .name = "action_id", @@ -6682,6 +6709,8 @@ parse_qia(struct context *ctx, const struct token *token, (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), sizeof(double)); out->args.vc.attr.group = UINT32_MAX; + /* fallthrough */ + case QUEUE_INDIRECT_ACTION_QUERY: out->command = ctx->curr; ctx->objdata = 0; ctx->object = out; @@ -10509,6 +10538,11 @@ cmd_flow_parsed(const struct buffer *in) in->args.vc.attr.group, in->args.vc.actions); break; + case QUEUE_INDIRECT_ACTION_QUERY: + port_queue_action_handle_query(in->port, + in->queue, in->postpone, + in->args.vc.attr.group); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index a2939867c4..4c51ed03a8 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2080,44 +2080,18 @@ port_action_handle_update(portid_t port_id, uint32_t id, return 0; } -int -port_action_handle_query(portid_t port_id, uint32_t id) +static void +port_action_handle_query_dump(uint32_t type, union port_action_query *query) { - struct rte_flow_error error; - struct port_indirect_action *pia; - union { - struct rte_flow_query_count count; - struct rte_flow_query_age age; - struct rte_flow_action_conntrack ct; - } query; - - pia = action_get_by_id(port_id, id); - if (!pia) - return -EINVAL; - switch (pia->type) { - case RTE_FLOW_ACTION_TYPE_AGE: - case RTE_FLOW_ACTION_TYPE_COUNT: - break; - default: - fprintf(stderr, - "Indirect action %u (type: %d) on port %u doesn't support query\n", - id, pia->type, port_id); - return -ENOTSUP; - } - /* Poisoning to make sure PMDs update it in case of error. 
*/ - memset(&error, 0x55, sizeof(error)); - memset(&query, 0, sizeof(query)); - if (rte_flow_action_handle_query(port_id, pia->handle, &query, &error)) - return port_flow_complain(&error); - switch (pia->type) { + switch (type) { case RTE_FLOW_ACTION_TYPE_AGE: printf("Indirect AGE action:\n" " aged: %u\n" " sec_since_last_hit_valid: %u\n" " sec_since_last_hit: %" PRIu32 "\n", - query.age.aged, - query.age.sec_since_last_hit_valid, - query.age.sec_since_last_hit); + query->age.aged, + query->age.sec_since_last_hit_valid, + query->age.sec_since_last_hit); break; case RTE_FLOW_ACTION_TYPE_COUNT: printf("Indirect COUNT action:\n" @@ -2125,10 +2099,10 @@ port_action_handle_query(portid_t port_id, uint32_t id) " bytes_set: %u\n" " hits: %" PRIu64 "\n" " bytes: %" PRIu64 "\n", - query.count.hits_set, - query.count.bytes_set, - query.count.hits, - query.count.bytes); + query->count.hits_set, + query->count.bytes_set, + query->count.hits, + query->count.bytes); break; case RTE_FLOW_ACTION_TYPE_CONNTRACK: printf("Conntrack Context:\n" @@ -2138,47 +2112,76 @@ port_action_handle_query(portid_t port_id, uint32_t id) " Factor: %u, Retrans: %u, TCP flags: %u\n" " Last Seq: %u, Last ACK: %u\n" " Last Win: %u, Last End: %u\n", - query.ct.peer_port, - query.ct.is_original_dir ? "Original" : "Reply", - query.ct.enable, query.ct.live_connection, - query.ct.selective_ack, query.ct.challenge_ack_passed, - query.ct.last_direction ? "Original" : "Reply", - query.ct.liberal_mode, query.ct.state, - query.ct.max_ack_window, query.ct.retransmission_limit, - query.ct.last_index, query.ct.last_seq, - query.ct.last_ack, query.ct.last_window, - query.ct.last_end); + query->ct.peer_port, + query->ct.is_original_dir ? "Original" : "Reply", + query->ct.enable, query->ct.live_connection, + query->ct.selective_ack, query->ct.challenge_ack_passed, + query->ct.last_direction ? 
"Original" : "Reply", + query->ct.liberal_mode, query->ct.state, + query->ct.max_ack_window, query->ct.retransmission_limit, + query->ct.last_index, query->ct.last_seq, + query->ct.last_ack, query->ct.last_window, + query->ct.last_end); printf(" Original Dir:\n" " scale: %u, fin: %u, ack seen: %u\n" " unacked data: %u\n Sent end: %u," " Reply end: %u, Max win: %u, Max ACK: %u\n", - query.ct.original_dir.scale, - query.ct.original_dir.close_initiated, - query.ct.original_dir.last_ack_seen, - query.ct.original_dir.data_unacked, - query.ct.original_dir.sent_end, - query.ct.original_dir.reply_end, - query.ct.original_dir.max_win, - query.ct.original_dir.max_ack); + query->ct.original_dir.scale, + query->ct.original_dir.close_initiated, + query->ct.original_dir.last_ack_seen, + query->ct.original_dir.data_unacked, + query->ct.original_dir.sent_end, + query->ct.original_dir.reply_end, + query->ct.original_dir.max_win, + query->ct.original_dir.max_ack); printf(" Reply Dir:\n" " scale: %u, fin: %u, ack seen: %u\n" " unacked data: %u\n Sent end: %u," " Reply end: %u, Max win: %u, Max ACK: %u\n", - query.ct.reply_dir.scale, - query.ct.reply_dir.close_initiated, - query.ct.reply_dir.last_ack_seen, - query.ct.reply_dir.data_unacked, - query.ct.reply_dir.sent_end, - query.ct.reply_dir.reply_end, - query.ct.reply_dir.max_win, - query.ct.reply_dir.max_ack); + query->ct.reply_dir.scale, + query->ct.reply_dir.close_initiated, + query->ct.reply_dir.last_ack_seen, + query->ct.reply_dir.data_unacked, + query->ct.reply_dir.sent_end, + query->ct.reply_dir.reply_end, + query->ct.reply_dir.max_win, + query->ct.reply_dir.max_ack); + break; + default: + fprintf(stderr, + "Indirect action (type: %d) doesn't support query\n", + type); + break; + } + +} + +int +port_action_handle_query(portid_t port_id, uint32_t id) +{ + struct rte_flow_error error; + struct port_indirect_action *pia; + union port_action_query query; + + pia = action_get_by_id(port_id, id); + if (!pia) + return -EINVAL; + 
switch (pia->type) { + case RTE_FLOW_ACTION_TYPE_AGE: + case RTE_FLOW_ACTION_TYPE_COUNT: break; default: fprintf(stderr, "Indirect action %u (type: %d) on port %u doesn't support query\n", id, pia->type, port_id); - break; + return -ENOTSUP; } + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x55, sizeof(error)); + memset(&query, 0, sizeof(query)); + if (rte_flow_action_handle_query(port_id, pia->handle, &query, &error)) + return port_flow_complain(&error); + port_action_handle_query_dump(pia->type, &query); return 0; } @@ -2670,6 +2673,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, bool found; struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL }; struct rte_flow_action_age *age = age_action_get(actions); + struct queue_job *job; port = &ports[port_id]; if (port->flow_list) { @@ -2713,9 +2717,18 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, return -EINVAL; } + job = calloc(1, sizeof(*job)); + if (!job) { + printf("Queue flow create job allocate failed\n"); + return -ENOMEM; + } + job->type = QUEUE_JOB_TYPE_FLOW_CREATE; + pf = port_flow_new(NULL, pattern, actions, &error); - if (!pf) + if (!pf) { + free(job); return port_flow_complain(&error); + } if (age) { pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW; age->context = &pf->age_type; @@ -2723,16 +2736,18 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, /* Poisoning to make sure PMDs update it in case of error. 
*/ memset(&error, 0x11, sizeof(error)); flow = rte_flow_async_create(port_id, queue_id, &op_attr, pt->table, - pattern, pattern_idx, actions, actions_idx, NULL, &error); + pattern, pattern_idx, actions, actions_idx, job, &error); if (!flow) { uint32_t flow_id = pf->id; port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id); + free(job); return port_flow_complain(&error); } pf->next = port->flow_list; pf->id = id; pf->flow = flow; + job->pf = pf; port->flow_list = pf; printf("Flow rule #%u creation enqueued\n", pf->id); return 0; @@ -2748,6 +2763,7 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, struct port_flow **tmp; uint32_t c = 0; int ret = 0; + struct queue_job *job; if (port_id_is_invalid(port_id, ENABLED_WARN) || port_id == (portid_t)RTE_PORT_ALL) @@ -2774,14 +2790,22 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, * update it in case of error. */ memset(&error, 0x33, sizeof(error)); + job = calloc(1, sizeof(*job)); + if (!job) { + printf("Queue flow destroy job allocate failed\n"); + return -ENOMEM; + } + job->type = QUEUE_JOB_TYPE_FLOW_DESTROY; + job->pf = pf; + if (rte_flow_async_destroy(port_id, queue_id, &op_attr, - pf->flow, NULL, &error)) { + pf->flow, job, &error)) { + free(job); ret = port_flow_complain(&error); continue; } printf("Flow rule #%u destruction enqueued\n", pf->id); *tmp = pf->next; - free(pf); break; } if (i == n) @@ -2803,6 +2827,7 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, struct port_indirect_action *pia; int ret; struct rte_flow_error error; + struct queue_job *job; ret = action_alloc(port_id, id, &pia); if (ret) @@ -2813,6 +2838,13 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, printf("Queue #%u is invalid\n", queue_id); return -EINVAL; } + job = calloc(1, sizeof(*job)); + if (!job) { + printf("Queue action create job allocate failed\n"); + return -ENOMEM; + } + job->type = QUEUE_JOB_TYPE_ACTION_CREATE; + job->pia = pia; if (action->type 
== RTE_FLOW_ACTION_TYPE_AGE) { struct rte_flow_action_age *age = @@ -2824,11 +2856,12 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, /* Poisoning to make sure PMDs update it in case of error. */ memset(&error, 0x88, sizeof(error)); pia->handle = rte_flow_async_action_handle_create(port_id, queue_id, - &attr, conf, action, NULL, &error); + &attr, conf, action, job, &error); if (!pia->handle) { uint32_t destroy_id = pia->id; port_queue_action_handle_destroy(port_id, queue_id, postpone, 1, &destroy_id); + free(job); return port_flow_complain(&error); } pia->type = action->type; @@ -2847,6 +2880,7 @@ port_queue_action_handle_destroy(portid_t port_id, struct port_indirect_action **tmp; uint32_t c = 0; int ret = 0; + struct queue_job *job; if (port_id_is_invalid(port_id, ENABLED_WARN) || port_id == (portid_t)RTE_PORT_ALL) @@ -2873,17 +2907,23 @@ port_queue_action_handle_destroy(portid_t port_id, * of error. */ memset(&error, 0x99, sizeof(error)); + job = calloc(1, sizeof(*job)); + if (!job) { + printf("Queue action destroy job allocate failed\n"); + return -ENOMEM; + } + job->type = QUEUE_JOB_TYPE_ACTION_DESTROY; + job->pia = pia; if (pia->handle && rte_flow_async_action_handle_destroy(port_id, - queue_id, &attr, pia->handle, NULL, &error)) { + queue_id, &attr, pia->handle, job, &error)) { ret = port_flow_complain(&error); continue; } *tmp = pia->next; printf("Indirect action #%u destruction queued\n", pia->id); - free(pia); break; } if (i == n) @@ -2903,6 +2943,7 @@ port_queue_action_handle_update(portid_t port_id, struct rte_port *port; struct rte_flow_error error; struct rte_flow_action_handle *action_handle; + struct queue_job *job; action_handle = port_action_handle_get_by_id(port_id, id); if (!action_handle) @@ -2914,8 +2955,56 @@ port_queue_action_handle_update(portid_t port_id, return -EINVAL; } + job = calloc(1, sizeof(*job)); + if (!job) { + printf("Queue action update job allocate failed\n"); + return -ENOMEM; + } + job->type = 
QUEUE_JOB_TYPE_ACTION_UPDATE; + if (rte_flow_async_action_handle_update(port_id, queue_id, &attr, - action_handle, action, NULL, &error)) { + action_handle, action, job, &error)) { + free(job); + return port_flow_complain(&error); + } + printf("Indirect action #%u update queued\n", id); + return 0; +} + +/** Enqueue indirect action query operation. */ +int +port_queue_action_handle_query(portid_t port_id, + uint32_t queue_id, bool postpone, uint32_t id) +{ + const struct rte_flow_op_attr attr = { .postpone = postpone}; + struct rte_port *port; + struct rte_flow_error error; + struct rte_flow_action_handle *action_handle; + struct port_indirect_action *pia; + struct queue_job *job; + + pia = action_get_by_id(port_id, id); + action_handle = pia ? pia->handle : NULL; + if (!action_handle) + return -EINVAL; + + port = &ports[port_id]; + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + job = calloc(1, sizeof(*job)); + if (!job) { + printf("Queue action query job allocate failed\n"); + return -ENOMEM; + } + job->type = QUEUE_JOB_TYPE_ACTION_QUERY; + job->pia = pia; + + if (rte_flow_async_action_handle_query(port_id, queue_id, &attr, + action_handle, &job->query, job, &error)) { + free(job); return port_flow_complain(&error); } printf("Indirect action #%u update queued\n", id); @@ -2960,6 +3049,7 @@ port_queue_flow_pull(portid_t port_id, queueid_t queue_id) int ret = 0; int success = 0; int i; + struct queue_job *job; if (port_id_is_invalid(port_id, ENABLED_WARN) || port_id == (portid_t)RTE_PORT_ALL) @@ -2989,6 +3079,14 @@ port_queue_flow_pull(portid_t port_id, queueid_t queue_id) for (i = 0; i < ret; i++) { if (res[i].status == RTE_FLOW_OP_SUCCESS) success++; + job = (struct queue_job *)res[i].user_data; + if (job->type == QUEUE_JOB_TYPE_FLOW_DESTROY) + free(job->pf); + else if (job->type == QUEUE_JOB_TYPE_ACTION_DESTROY) + free(job->pia); + else if (job->type == QUEUE_JOB_TYPE_ACTION_QUERY) +
port_action_handle_query_dump(job->pia->type, &job->query); + free(job); } printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n", queue_id, ret, ret - success, success); diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index fb2f5195d3..c7a96d062c 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -105,6 +105,15 @@ enum { /**< allocate mempool natively, use rte_pktmbuf_pool_create_extbuf */ }; +enum { + QUEUE_JOB_TYPE_FLOW_CREATE, + QUEUE_JOB_TYPE_FLOW_DESTROY, + QUEUE_JOB_TYPE_ACTION_CREATE, + QUEUE_JOB_TYPE_ACTION_DESTROY, + QUEUE_JOB_TYPE_ACTION_UPDATE, + QUEUE_JOB_TYPE_ACTION_QUERY, +}; + /** * The data structure associated with RX and TX packet burst statistics * that are recorded for each forwarding stream. @@ -220,6 +229,23 @@ struct port_indirect_action { enum age_action_context_type age_type; /**< Age action context type. */ }; +/* Descriptor for action query data. */ +union port_action_query { + struct rte_flow_query_count count; + struct rte_flow_query_age age; + struct rte_flow_action_conntrack ct; +}; + +/* Descriptor for queue job. */ +struct queue_job { + uint32_t type; /**< Job type. 
*/ + union { + struct port_flow *pf; + struct port_indirect_action *pia; + }; + union port_action_query query; +}; + struct port_flow_tunnel { LIST_ENTRY(port_flow_tunnel) chain; struct rte_flow_action *pmd_actions; @@ -980,6 +1006,8 @@ int port_queue_action_handle_destroy(portid_t port_id, int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id, bool postpone, uint32_t id, const struct rte_flow_action *action); +int port_queue_action_handle_query(portid_t port_id, uint32_t queue_id, + bool postpone, uint32_t id); int port_queue_flow_push(portid_t port_id, queueid_t queue_id); int port_queue_flow_pull(portid_t port_id, queueid_t queue_id); int port_flow_validate(portid_t port_id, diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 588914b231..9e6aadf954 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3911,6 +3911,22 @@ Asynchronous version of indirect action update API. void *user_data, struct rte_flow_error *error); +Enqueue indirect action query operation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Asynchronous version of indirect action query API. + +.. code-block:: c + + int + rte_flow_async_action_handle_query(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + void *data, + void *user_data, + struct rte_flow_error *error); + Push enqueued operations ~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 8c021cf050..597b28ede1 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -55,6 +55,11 @@ New Features Also, make sure to start the actual text at the margin. 
======================================================= +* **Added support for queue based async query in rte_flow.** + + Added new API ``rte_flow_async_action_handle_query()``, to query the + action asynchronously. + Removed Items ------------- diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 330e34427d..d2c6e385db 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -4676,6 +4676,25 @@ Query indirect action having id 100:: testpmd> flow indirect_action 0 query 100 +Enqueueing query of indirect actions +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue indirect_action query`` adds query operation for an indirect +action to a queue. It is bound to ``rte_flow_async_action_handle_query()``:: + + flow queue {port_id} indirect_action {queue_id} query + {indirect_action_id} [postpone {boolean}] + +If successful, it will show:: + + Indirect action #[...] query queued + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +``flow queue pull`` must be called to retrieve the operation status. 
+ Sample QinQ flow rules ~~~~~~~~~~~~~~~~~~~~~~ diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 501be9d602..eb6a6b737e 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -1844,3 +1844,21 @@ rte_flow_async_action_handle_update(uint16_t port_id, action_handle, update, user_data, error); return flow_err(port_id, ret, error); } + +int +rte_flow_async_action_handle_query(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + const struct rte_flow_action_handle *action_handle, + void *data, + void *user_data, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + ret = ops->async_action_handle_query(dev, queue_id, op_attr, + action_handle, data, user_data, error); + return flow_err(port_id, ret, error); +} diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a79f1e7ef0..7554c46e72 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -5612,6 +5612,50 @@ rte_flow_async_action_handle_update(uint16_t port_id, const void *update, void *user_data, struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action query operation. + * + * Retrieve action-specific data such as counters. + * Data is gathered by a special action which may be present/referenced in + * more than one flow rule definition. + * Data will be available only when the completion event returns. + * + * @see rte_flow_action_handle_query + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to query the action. + * @param[in] op_attr + * Indirect action query operation attributes. + * @param[in] action_handle + * Handle for the action object to query. + * @param[in, out] data + * Pointer to storage for the associated query data type.
+ * The out data will be available only when completion event returns + * from rte_flow_pull. + * @param[in] user_data + * The user data that will be returned on the completion events. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_async_action_handle_query(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + const struct rte_flow_action_handle *action_handle, + void *data, + void *user_data, + struct rte_flow_error *error); #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 2bff732d6a..7289deb538 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -260,6 +260,15 @@ struct rte_flow_ops { const void *update, void *user_data, struct rte_flow_error *error); + /** See rte_flow_async_action_handle_query() */ + int (*async_action_handle_query) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + const struct rte_flow_action_handle *action_handle, + void *data, + void *user_data, + struct rte_flow_error *error); }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 03f52fee91..722081a8c7 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -285,6 +285,9 @@ EXPERIMENTAL { rte_mtr_color_in_protocol_priority_get; rte_mtr_color_in_protocol_set; rte_mtr_meter_vlan_table_update; + + # added in 22.11 + rte_flow_async_action_handle_query; }; INTERNAL { From patchwork Wed Sep 21 02:11:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 116508 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org 
From: Alexander Kozyrev Subject: [PATCH v4 1/7] ethdev: add meter color flow matching item Date: Wed, 21 Sep 2022 05:11:27 +0300 Message-ID: <20220921021133.2982954-2-akozyrev@nvidia.com> In-Reply-To: <20220921021133.2982954-1-akozyrev@nvidia.com> References: <20220601034408.2579943-1-akozyrev@nvidia.com> <20220921021133.2982954-1-akozyrev@nvidia.com> Provide an ability to use a Color Marker set by a Meter as a matching item in Flow API. The Color Marker reflects the metering result by setting the metadata for a packet to a particular codepoint: green, yellow or red.
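As a rough sketch of the spec/mask matching semantics this item follows, consider the self-contained mock below. The mock_* types stand in for the real rte_color and rte_flow_item_meter_color definitions, and the single-field comparison simplifies the byte-wise masking rte_flow actually applies: a packet matches when its color codepoint equals the item's spec under the item's mask.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for rte_color and rte_flow_item_meter_color;
 * the values mirror the green/yellow/red codepoints described above. */
enum mock_color { MOCK_GREEN = 0, MOCK_YELLOW, MOCK_RED, MOCK_COLORS };

struct mock_item_meter_color {
	enum mock_color color; /* meter color marker */
};

/* Analogous to the default rte_flow_item_meter_color_mask added by this
 * patch (.color = RTE_COLORS): every color bit participates in the match. */
static const struct mock_item_meter_color mock_meter_color_mask = {
	.color = MOCK_COLORS,
};

/* A packet matches when its masked color equals the masked spec. */
static bool
meter_color_match(enum mock_color pkt_color,
		  const struct mock_item_meter_color *spec,
		  const struct mock_item_meter_color *mask)
{
	return ((unsigned)pkt_color & (unsigned)mask->color) ==
	       ((unsigned)spec->color & (unsigned)mask->color);
}
```

With the full default mask, a spec of yellow matches only yellow-marked packets; a zeroed mask would match any color, as with other rte_flow items.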
Signed-off-by: Alexander Kozyrev --- doc/guides/prog_guide/rte_flow.rst | 7 +++++++ doc/guides/rel_notes/release_22_11.rst | 3 +++ lib/ethdev/rte_flow.c | 1 + lib/ethdev/rte_flow.h | 24 ++++++++++++++++++++++++ 4 files changed, 35 insertions(+) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 588914b231..018def1033 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1651,6 +1651,13 @@ Matches a PPP header. - ``proto_id``: PPP protocol identifier. - Default ``mask`` matches addr, ctrl, proto_id. +Item: ``METER_COLOR`` +^^^^^^^^^^^^^^^^^^^^^ + +Matches Color Marker set by a Meter. + +- ``color``: Metering color marker. + Actions ~~~~~~~ diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 8c021cf050..f6c02bb5e7 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -55,6 +55,9 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Extended Metering and Marking support in the Flow API.** + + * Added METER_COLOR item to match Color Marker set by a Meter. Removed Items ------------- diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 501be9d602..99247b599d 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -159,6 +159,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = { rte_flow_item_flex_conv), MK_FLOW_ITEM(L2TPV2, sizeof(struct rte_flow_item_l2tpv2)), MK_FLOW_ITEM(PPP, sizeof(struct rte_flow_item_ppp)), + MK_FLOW_ITEM(METER_COLOR, sizeof(struct rte_flow_item_meter_color)), }; /** Generate flow_action[] entry. */ diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a79f1e7ef0..d49f5fd1b7 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -668,6 +668,14 @@ enum rte_flow_item_type { * See struct rte_flow_item_gre_opt. 
	 */
	RTE_FLOW_ITEM_TYPE_GRE_OPTION,
+
+	/**
+	 * Matches Meter Color Marker.
+	 *
+	 * See struct rte_flow_item_meter_color.
+	 */
+
+	RTE_FLOW_ITEM_TYPE_METER_COLOR,
 };
 
 /**
@@ -2198,6 +2206,22 @@ struct rte_flow_item_flex_conf {
 	uint32_t nb_outputs;
 };
 
+/**
+ * RTE_FLOW_ITEM_TYPE_METER_COLOR.
+ *
+ * Matches Color Marker set by a Meter.
+ */
+struct rte_flow_item_meter_color {
+	enum rte_color color; /**< Meter color marker. */
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_METER_COLOR. */
+#ifndef __cplusplus
+static const struct rte_flow_item_meter_color rte_flow_item_meter_color_mask = {
+	.color = RTE_COLORS,
+};
+#endif
+
 /**
  * Action types.
  *

From patchwork Wed Sep 21 02:11:28 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 116509
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Alexander Kozyrev
Subject: [PATCH v4 2/7] ethdev: allow meter color marker modification
Date: Wed, 21 Sep 2022 05:11:28 +0300
Message-ID: <20220921021133.2982954-3-akozyrev@nvidia.com>
In-Reply-To: <20220921021133.2982954-1-akozyrev@nvidia.com>
References: <20220601034408.2579943-1-akozyrev@nvidia.com> <20220921021133.2982954-1-akozyrev@nvidia.com>
Extend the modify_field Flow API with support for Meter Color Marker modifications. It allows setting a packet's metadata to any color marker: green, yellow or red. A user can specify an initial packet color for the Meter API or create simple Metering and Marking flow rules based on their own coloring algorithm.

Signed-off-by: Alexander Kozyrev
---
 doc/guides/rel_notes/release_22_11.rst | 1 +
 lib/ethdev/rte_flow.h                  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index f6c02bb5e7..a7651f69ba 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -58,6 +58,7 @@ New Features
 * **Extended Metering and Marking support in the Flow API.**
 
   * Added METER_COLOR item to match Color Marker set by a Meter.
+  * Added ability to set Color Marker via modify_field Flow API.
 
 Removed Items
 -------------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index d49f5fd1b7..fddd47e7b5 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3713,6 +3713,7 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_VALUE,		/**< Immediate value. */
 	RTE_FLOW_FIELD_IPV4_ECN,	/**< IPv4 ECN. */
 	RTE_FLOW_FIELD_IPV6_ECN,	/**< IPv6 ECN. */
+	RTE_FLOW_FIELD_METER_COLOR,	/**< Meter color marker. */
 };
 
 /**

From patchwork Wed Sep 21 02:11:29 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 116510
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Alexander Kozyrev
Subject: [PATCH v4 3/7] ethdev: get meter profile/policy objects
Date: Wed, 21 Sep 2022 05:11:29 +0300
Message-ID: <20220921021133.2982954-4-akozyrev@nvidia.com>
In-Reply-To: <20220921021133.2982954-1-akozyrev@nvidia.com>
References: <20220601034408.2579943-1-akozyrev@nvidia.com> <20220921021133.2982954-1-akozyrev@nvidia.com>
Introduce a new Meter API to retrieve Meter profile and policy objects using the profile/policy IDs previously created with the meter_profile_add() and meter_policy_create() functions. This allows saving the pointers and avoiding any lookups in the corresponding lists, for quick access during flow rule creation. It also eliminates the need for CIR, CBS and EBS calculations and conversion to a PMD-specific format when the profile is used. The pointers are destroyed and cannot be used after the corresponding meter_profile_delete() or meter_policy_delete() is called.

Signed-off-by: Alexander Kozyrev
---
 .../traffic_metering_and_policing.rst  |  7 ++++
 doc/guides/rel_notes/release_22_11.rst |  1 +
 lib/ethdev/rte_flow.h                  |  7 ++++
 lib/ethdev/rte_mtr.c                   | 41 +++++++++++++++++++
 lib/ethdev/rte_mtr.h                   | 40 ++++++++++++++++++
 lib/ethdev/rte_mtr_driver.h            | 19 +++++++++
 lib/ethdev/version.map                 |  4 ++
 7 files changed, 119 insertions(+)

diff --git a/doc/guides/prog_guide/traffic_metering_and_policing.rst b/doc/guides/prog_guide/traffic_metering_and_policing.rst
index d1958a023d..2ce3236ad8 100644
--- a/doc/guides/prog_guide/traffic_metering_and_policing.rst
+++ b/doc/guides/prog_guide/traffic_metering_and_policing.rst
@@ -107,6 +107,13 @@ traffic meter and policing library.
    to the list of meter actions (``struct rte_mtr_meter_policy_params::actions``)
    specified per color as show in :numref:`figure_rte_mtr_chaining`.
 
+#. The ``rte_mtr_meter_profile_get()`` and ``rte_mtr_meter_policy_get()``
+   API functions are available for getting the object pointers directly.
+   These pointers allow quick access to profile/policy objects and are
+   required by the ``RTE_FLOW_ACTION_TYPE_METER_MARK`` action.
+   This action may omit the policy definition to provide flexibility
+   to match a color later with the ``RTE_FLOW_ITEM_TYPE_METER_COLOR`` item.
+
 Protocol based input color selection
 ------------------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index a7651f69ba..7969609788 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -59,6 +59,7 @@ New Features
 
   * Added METER_COLOR item to match Color Marker set by a Meter.
   * Added ability to set Color Marker via modify_field Flow API.
+  * Added Meter API to get a pointer to profile/policy by their ID.
 
 Removed Items
 -------------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index fddd47e7b5..edf69fc44f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3826,6 +3826,13 @@ struct rte_flow_action {
  */
 struct rte_flow;
 
+/**
+ * Opaque type for Meter profile object returned by MTR API.
+ *
+ * This handle can be used to create Meter actions instead of profile ID.
+ */
+struct rte_flow_meter_profile;
+
 /**
  * @warning
  * @b EXPERIMENTAL: this structure may change without prior notice
diff --git a/lib/ethdev/rte_mtr.c b/lib/ethdev/rte_mtr.c
index c460e4f4e0..9e79b744da 100644
--- a/lib/ethdev/rte_mtr.c
+++ b/lib/ethdev/rte_mtr.c
@@ -56,6 +56,25 @@ rte_mtr_ops_get(uint16_t port_id, struct rte_mtr_error *error)
 	ops->func;					\
 })
 
+#define RTE_MTR_HNDL_FUNC(port_id, func)		\
+({							\
+	const struct rte_mtr_ops *ops =			\
+		rte_mtr_ops_get(port_id, error);	\
+	if (ops == NULL)				\
+		return NULL;				\
+							\
+	if (ops->func == NULL) {			\
+		rte_mtr_error_set(error,		\
+			ENOSYS,				\
+			RTE_MTR_ERROR_TYPE_UNSPECIFIED,	\
+			NULL,				\
+			rte_strerror(ENOSYS));		\
+		return NULL;				\
+	}						\
+							\
+	ops->func;					\
+})
+
 /* MTR capabilities get */
 int
 rte_mtr_capabilities_get(uint16_t port_id,
@@ -90,6 +109,17 @@ rte_mtr_meter_profile_delete(uint16_t port_id,
 		meter_profile_id, error);
 }
 
+/** MTR meter profile get */
+struct rte_flow_meter_profile *
+rte_mtr_meter_profile_get(uint16_t port_id,
+	uint32_t meter_profile_id,
+	struct rte_mtr_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	return RTE_MTR_HNDL_FUNC(port_id, meter_profile_get)(dev,
+		meter_profile_id, error);
+}
+
 /* MTR meter policy validate */
 int
 rte_mtr_meter_policy_validate(uint16_t port_id,
@@ -124,6 +154,17 @@ rte_mtr_meter_policy_delete(uint16_t port_id,
 		policy_id, error);
 }
 
+/** MTR meter policy get */
+struct rte_flow_meter_policy *
+rte_mtr_meter_policy_get(uint16_t port_id,
+	uint32_t policy_id,
+	struct rte_mtr_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	return RTE_MTR_HNDL_FUNC(port_id, meter_policy_get)(dev,
+		policy_id, error);
+}
+
 /** MTR object create */
 int
 rte_mtr_create(uint16_t port_id,
diff --git a/lib/ethdev/rte_mtr.h b/lib/ethdev/rte_mtr.h
index 008bc84f0d..58f0d26215 100644
--- a/lib/ethdev/rte_mtr.h
+++ b/lib/ethdev/rte_mtr.h
@@ -623,6 +623,26 @@ rte_mtr_meter_profile_delete(uint16_t port_id,
 	uint32_t meter_profile_id,
 	struct rte_mtr_error *error);
 
+/**
+ * Meter profile object get
+ *
+ * Get meter profile object for a given meter profile ID.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] meter_profile_id
+ *   Meter profile ID. Needs to be valid.
+ * @param[out] error
+ *   Error details. Filled in only on error, when not NULL.
+ * @return
+ *   A valid handle in case of success, NULL otherwise.
+ */
+__rte_experimental
+struct rte_flow_meter_profile *
+rte_mtr_meter_profile_get(uint16_t port_id,
+	uint32_t meter_profile_id,
+	struct rte_mtr_error *error);
+
 /**
  * Check whether a meter policy can be created on a given port.
  *
@@ -679,6 +699,26 @@ rte_mtr_meter_policy_add(uint16_t port_id,
 	struct rte_mtr_meter_policy_params *policy,
 	struct rte_mtr_error *error);
 
+/**
+ * Meter policy object get
+ *
+ * Get meter policy object for a given meter policy ID.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] policy_id
+ *   Meter policy ID. Needs to be valid.
+ * @param[out] error
+ *   Error details. Filled in only on error, when not NULL.
+ * @return
+ *   A valid handle in case of success, NULL otherwise.
+ */
+__rte_experimental
+struct rte_flow_meter_policy *
+rte_mtr_meter_policy_get(uint16_t port_id,
+	uint32_t policy_id,
+	struct rte_mtr_error *error);
+
 /**
  * Define meter policy action list:
  * GREEN - GREEN, YELLOW - YELLOW, RED - RED
diff --git a/lib/ethdev/rte_mtr_driver.h b/lib/ethdev/rte_mtr_driver.h
index f7dca9a54c..948a629b93 100644
--- a/lib/ethdev/rte_mtr_driver.h
+++ b/lib/ethdev/rte_mtr_driver.h
@@ -41,6 +41,12 @@ typedef int (*rte_mtr_meter_profile_delete_t)(struct rte_eth_dev *dev,
 	uint32_t meter_profile_id,
 	struct rte_mtr_error *error);
 
+/** @internal MTR meter profile get. */
+typedef struct rte_flow_meter_profile *
+(*rte_mtr_meter_profile_get_t)(struct rte_eth_dev *dev,
+	uint32_t meter_profile_id,
+	struct rte_mtr_error *error);
+
 /** @internal MTR meter policy validate. */
 typedef int (*rte_mtr_meter_policy_validate_t)(struct rte_eth_dev *dev,
 	struct rte_mtr_meter_policy_params *policy,
@@ -57,6 +63,13 @@ typedef int (*rte_mtr_meter_policy_delete_t)(struct rte_eth_dev *dev,
 	uint32_t policy_id,
 	struct rte_mtr_error *error);
 
+/** @internal MTR meter policy get. */
+typedef struct rte_flow_meter_policy *
+(*rte_mtr_meter_policy_get_t)(struct rte_eth_dev *dev,
+	uint32_t policy_id,
+	struct rte_mtr_error *error);
+
+
 /** @internal MTR object create. */
 typedef int (*rte_mtr_create_t)(struct rte_eth_dev *dev,
 	uint32_t mtr_id,
@@ -194,6 +207,12 @@ struct rte_mtr_ops {
 
 	/** MTR object meter policy update */
 	rte_mtr_meter_policy_update_t meter_policy_update;
+
+	/** MTR meter profile get */
+	rte_mtr_meter_profile_get_t meter_profile_get;
+
+	/** MTR meter policy get */
+	rte_mtr_meter_policy_get_t meter_policy_get;
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 03f52fee91..1fec250c85 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -285,6 +285,10 @@ EXPERIMENTAL {
 	rte_mtr_color_in_protocol_priority_get;
 	rte_mtr_color_in_protocol_set;
 	rte_mtr_meter_vlan_table_update;
+
+	# added in 22.11
+	rte_mtr_meter_profile_get;
+	rte_mtr_meter_policy_get;
 };
 
 INTERNAL {

From patchwork Wed Sep 21 02:11:30 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 116511
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Alexander Kozyrev
Subject: [PATCH v4 4/7] ethdev: add meter color mark flow action
Date: Wed, 21 Sep 2022 05:11:30 +0300
Message-ID: <20220921021133.2982954-5-akozyrev@nvidia.com>
In-Reply-To: <20220921021133.2982954-1-akozyrev@nvidia.com>
References: <20220601034408.2579943-1-akozyrev@nvidia.com> <20220921021133.2982954-1-akozyrev@nvidia.com>
Create a new Flow API action: METER_MARK. It meters a packet stream and marks its packets with colors. The marking is done in the metadata, not in a packet field. Unlike the METER action, it performs no policing at all. A user has the flexibility to create any policies with the help of the METER_COLOR item later; only the meter profile is mandatory here.
Signed-off-by: Alexander Kozyrev --- doc/guides/prog_guide/rte_flow.rst | 25 +++++++++++++++++++++++ doc/guides/rel_notes/release_22_11.rst | 1 + lib/ethdev/rte_flow.c | 1 + lib/ethdev/rte_flow.h | 28 ++++++++++++++++++++++++++ 4 files changed, 55 insertions(+) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 018def1033..5b87d9f61e 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3411,6 +3411,31 @@ This action is meant to use the same structure as `Action: PORT_REPRESENTOR`_. See also `Item: REPRESENTED_PORT`_. +Action: ``METER_MARK`` +^^^^^^^^^^^^^^^^^^^^^^ + +Meters a packet stream and marks its packets with colors. + +Unlike the ``METER`` action, policing is optional and may be +performed later with the help of the ``METER_COLOR`` item. +The profile and/or policy objects have to be created +using the rte_mtr_profile_add()/rte_mtr_policy_add() API. +Pointers to these objects are used as action parameters +and need to be retrieved using the rte_mtr_profile_get() API +and rte_mtr_policy_get() API respectively. + +.. _table_rte_flow_action_meter_mark: + +.. table:: METER_MARK + + +------------------+----------------------+ + | Field | Value | + +==================+======================+ + | ``profile`` | Meter profile object | + +------------------+----------------------+ + | ``policy`` | Meter policy object | + +------------------+----------------------+ + Negative types ~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 7969609788..401552ff84 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -60,6 +60,7 @@ New Features * Added METER_COLOR item to match Color Marker set by a Meter. * Added ability to set Color Marker via modify_field Flow API. * Added Meter API to get a pointer to profile/policy by their ID. 
+ * Added METER_MARK action for Metering with lockless profile/policy access. Removed Items ------------- diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 99247b599d..7ff024f33e 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -260,6 +260,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = { MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)), MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)), MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)), + MK_FLOW_ACTION(METER_MARK, sizeof(struct rte_flow_action_meter_mark)), }; int diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index edf69fc44f..74e7ddf73a 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2903,6 +2903,14 @@ enum rte_flow_action_type { * @see struct rte_flow_action_ethdev */ RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, + + /** + * Traffic metering and marking (MTR). + * + * @see struct rte_flow_action_meter_mark + * See file rte_mtr.h for MTR profile object configuration. + */ + RTE_FLOW_ACTION_TYPE_METER_MARK, }; /** @@ -3774,6 +3782,25 @@ struct rte_flow_action_modify_field { uint32_t width; /**< Number of bits to use from a source field. */ }; +/** + * RTE_FLOW_ACTION_TYPE_METER_MARK + * + * Traffic metering and marking (MTR). + * + * Meters a packet stream and marks its packets either + * green, yellow, or red according to the specified profile. + * The policy is optional and may be specified for defining + * subsequent actions based on a color assigned by MTR. + * Alternatively, the METER_COLOR item may be used for this. + */ +struct rte_flow_action_meter_mark { + + /**< Profile config retrieved with rte_mtr_profile_get(). */ + struct rte_flow_meter_profile *profile; + /**< Policy config retrieved with rte_mtr_policy_get(). */ + struct rte_flow_meter_policy *policy; +}; + /* Mbuf dynamic field offset for metadata.
*/ extern int32_t rte_flow_dynf_metadata_offs;
From patchwork Wed Sep 21 14:32:02 2022
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 116570
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Dariusz Sosnowski
To: Ori Kam, Aman Singh, Yuying Zhang, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v2] ethdev: add GTP PSC QFI field
Date: Wed, 21 Sep 2022 14:32:02 +0000
Message-ID: <20220921143202.1790802-1-dsosnowski@nvidia.com>
In-Reply-To: <20220921101839.1711058-1-dsosnowski@nvidia.com>
References: <20220921101839.1711058-1-dsosnowski@nvidia.com>
This patch introduces support for the GTP PSC QFI field in the modify_field action and adds the corresponding testpmd CLI command support.

An example of copying the GTP QFI field using the modify_field action: modify_field op set dst_type meta src_type gtp_psc_qfi width 8

An example of setting the GTP QFI field value to 0x1f using the modify_field action: modify_field op set dst_type gtp_psc_qfi src_type value src_value 1f width 8

Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
v2: * Squashed ethdev and testpmd commits.

app/test-pmd/cmdline_flow.c | 2 +- lib/ethdev/rte_flow.h | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 7f50028eb7..b9673314b1 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -796,7 +796,7 @@ static const char *const modify_field_ids[] = { "udp_port_src", "udp_port_dst", "vxlan_vni", "geneve_vni", "gtp_teid", "tag", "mark", "meta", "pointer", "value", - "ipv4_ecn", "ipv6_ecn", NULL + "ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", NULL }; /** Maximum number of subsequent tokens and arguments on the stack. */ diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a79f1e7ef0..e64831f8f1 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -3689,6 +3689,7 @@ enum rte_flow_field_id { RTE_FLOW_FIELD_VALUE, /**< Immediate value. */ RTE_FLOW_FIELD_IPV4_ECN, /**< IPv4 ECN. */ RTE_FLOW_FIELD_IPV6_ECN, /**< IPv6 ECN. */ + RTE_FLOW_FIELD_GTP_PSC_QFI, /**< GTP QFI.
*/ }; /**
From patchwork Wed Sep 21 14:54:07 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 116572
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Michael Baum
CC: Matan Azrad, Raslan Darawsheh, Ori Kam
Subject: [PATCH 1/3] ethdev: add strict queue to pre-configuration flow hints
Date: Wed, 21 Sep 2022 17:54:07 +0300
Message-ID: <20220921145409.511328-2-michaelba@nvidia.com>
In-Reply-To: <20220921145409.511328-1-michaelba@nvidia.com>
References: <20220921145409.511328-1-michaelba@nvidia.com>
The data-path-focused (queue-based) flow rule management can handle flow rules in a more optimized way than the traditional API by using hints that the application provides during the initialization phase. In addition to the current hints in the port attributes, the application can provide more hints about its behaviour. One example is how the application operates on a given flow rule:

A. It creates/destroys the flow on the same queue, but queries the flow on a different queue or in a queue-less way (e.g. a counter query).
B. All flow operations happen on exactly the same queue, in which case the PMD can work in a more optimized way than in case A, because resources can be isolated per queue and accessed without locking.

This patch adds a flag describing the second situation; it can be extended to cover more situations.
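A minimal sketch of how an application might consume the new flag: check the supported flags reported by the port (as in rte_flow_info_get()) before requesting strict-queue mode in the port attributes (as in rte_flow_configure()). The struct definitions below are trimmed stand-ins so the snippet is self-contained; `choose_port_flags` is a hypothetical helper, not part of the API.

```c
#include <stdint.h>

/* The flag value added by this patch (RTE_BIT32(0) in rte_flow.h). */
#define RTE_FLOW_PORT_FLAG_STRICT_QUEUE (UINT32_C(1) << 0)

/* Trimmed stand-ins for the rte_flow port info/attr structs. */
struct rte_flow_port_info { uint32_t supported_flags; };
struct rte_flow_port_attr { uint32_t flags; };

/* Request strict-queue mode only if the port reports support for it;
 * otherwise fall back to the default (no flags set). */
uint32_t
choose_port_flags(const struct rte_flow_port_info *info)
{
	if (info->supported_flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE)
		return RTE_FLOW_PORT_FLAG_STRICT_QUEUE;
	return 0;
}
```

The returned value would then be placed into `rte_flow_port_attr.flags` before configuring the port.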
Signed-off-by: Michael Baum Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 10 ++++++++++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 ++-- lib/ethdev/rte_flow.h | 14 ++++++++++++++ 3 files changed, 26 insertions(+), 2 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 7f50028eb7..a982083d27 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -219,6 +219,7 @@ enum index { CONFIG_COUNTERS_NUMBER, CONFIG_AGING_OBJECTS_NUMBER, CONFIG_METERS_NUMBER, + CONFIG_FLAGS, /* Indirect action arguments */ INDIRECT_ACTION_CREATE, @@ -1081,6 +1082,7 @@ static const enum index next_config_attr[] = { CONFIG_COUNTERS_NUMBER, CONFIG_AGING_OBJECTS_NUMBER, CONFIG_METERS_NUMBER, + CONFIG_FLAGS, END, ZERO, }; @@ -2667,6 +2669,14 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, args.configure.port_attr.nb_meters)), }, + [CONFIG_FLAGS] = { + .name = "flags", + .help = "configuration flags", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.flags)), + }, /* Top-level command. */ [PATTERN_TEMPLATE] = { .name = "pattern_template", diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 330e34427d..6c12e0286c 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3082,7 +3082,7 @@ following sections. [queues_number {number}] [queues_size {size}] [counters_number {number}] [aging_counters_number {number}] - [meters_number {number}] + [meters_number {number}] [flags {number}] - Create a pattern template:: flow pattern_template {port_id} create [pattern_template_id {id}] @@ -3233,7 +3233,7 @@ for asynchronous flow creation/destruction operations. 
It is bound to [queues_number {number}] [queues_size {size}] [counters_number {number}] [aging_counters_number {number}] - [meters_number {number}] + [meters_number {number}] [flags {number}] If successful, it will show:: diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a79f1e7ef0..c552771472 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4874,6 +4874,12 @@ rte_flow_flex_item_release(uint16_t port_id, const struct rte_flow_item_flex_handle *handle, struct rte_flow_error *error); +/** + * Indicate all operations for a given flow rule will _strictly_ + * happen on the same queue (create/destroy/query/update). + */ +#define RTE_FLOW_PORT_FLAG_STRICT_QUEUE RTE_BIT32(0) + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice. @@ -4902,6 +4908,10 @@ struct rte_flow_port_info { * @see RTE_FLOW_ACTION_TYPE_METER */ uint32_t max_nb_meters; + /** + * Port supported flags (RTE_FLOW_PORT_FLAG_*). + */ + uint32_t supported_flags; }; /** @@ -4971,6 +4981,10 @@ struct rte_flow_port_attr { * @see RTE_FLOW_ACTION_TYPE_METER */ uint32_t nb_meters; + /** + * Port flags (RTE_FLOW_PORT_FLAG_*). 
+ */ + uint32_t flags; }; /**
From patchwork Wed Sep 21 14:54:08 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 116573
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Michael Baum
CC: Matan Azrad, Raslan Darawsheh, Ori Kam
Subject: [PATCH 2/3] ethdev: add queue-based API to report aged flow rules
Date: Wed, 21 Sep 2022 17:54:08 +0300
Message-ID: <20220921145409.511328-3-michaelba@nvidia.com>
In-Reply-To: <20220921145409.511328-1-michaelba@nvidia.com>
References: <20220921145409.511328-1-michaelba@nvidia.com>
When an application uses queue-based flow rule management and operates on the same flow rule on the same queue (e.g. create/destroy/query), the API for querying aged flow rules should also take a queue ID parameter, just like the other queue-based flow APIs. This way, the PMD can work more efficiently, since resources are isolated per queue and need no synchronization. If the application does use queue-based flow management but configures the port without RTE_FLOW_PORT_FLAG_STRICT_QUEUE, meaning it may operate on a given flow rule from different queues, the queue ID parameter is ignored.

Signed-off-by: Michael Baum
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 |  17 ++-
 app/test-pmd/config.c                       | 159 +++++++++++++++++++-
 app/test-pmd/testpmd.h                      |   1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  86 ++++++++++-
 lib/ethdev/rte_flow.c                       |  22 +++
 lib/ethdev/rte_flow.h                       |  48 +++++-
 lib/ethdev/rte_flow_driver.h                |   7 +
 lib/ethdev/version.map                      |   3 +
 8 files changed, 333 insertions(+), 10 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index a982083d27..4fb90a92cb 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -127,6 +127,7 @@ enum index { /* Queue arguments. */ QUEUE_CREATE, QUEUE_DESTROY, + QUEUE_AGED, QUEUE_INDIRECT_ACTION, /* Queue create arguments.
*/ @@ -1159,6 +1160,7 @@ static const enum index next_table_destroy_attr[] = { static const enum index next_queue_subcmd[] = { QUEUE_CREATE, QUEUE_DESTROY, + QUEUE_AGED, QUEUE_INDIRECT_ACTION, ZERO, }; @@ -2942,6 +2944,13 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, queue)), .call = parse_qo_destroy, }, + [QUEUE_AGED] = { + .name = "aged", + .help = "list and destroy aged flows", + .next = NEXT(next_aged_attr, NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_aged, + }, [QUEUE_INDIRECT_ACTION] = { .name = "indirect_action", .help = "queue indirect actions", @@ -8640,8 +8649,8 @@ parse_aged(struct context *ctx, const struct token *token, /* Nothing else to do if there is no buffer. */ if (!out) return len; - if (!out->command) { - if (ctx->curr != AGED) + if (!out->command || out->command == QUEUE) { + if (ctx->curr != AGED && ctx->curr != QUEUE_AGED) return -1; if (sizeof(*out) > size) return -1; @@ -10496,6 +10505,10 @@ cmd_flow_parsed(const struct buffer *in) case PULL: port_queue_flow_pull(in->port, in->queue); break; + case QUEUE_AGED: + port_queue_flow_aged(in->port, in->queue, + in->args.aged.destroy); + break; case QUEUE_INDIRECT_ACTION_CREATE: port_queue_action_handle_create( in->port, in->queue, in->postpone, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index a2939867c4..31952467fb 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2662,6 +2662,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions) { struct rte_flow_op_attr op_attr = { .postpone = postpone }; + struct rte_flow_attr flow_attr = { 0 }; struct rte_flow *flow; struct rte_port *port; struct port_flow *pf; @@ -2713,7 +2714,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, return -EINVAL; } - pf = port_flow_new(NULL, pattern, actions, &error); + pf = port_flow_new(&flow_attr, pattern, actions, &error); if (!pf) return 
port_flow_complain(&error); if (age) { @@ -2950,6 +2951,162 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id) return ret; } +/** Pull queue operation results from the queue. */ +static int +port_queue_aged_flow_destroy(portid_t port_id, queueid_t queue_id, + const uint32_t *rule, int nb_flows) +{ + struct rte_port *port = &ports[port_id]; + struct rte_flow_op_result *res; + struct rte_flow_error error; + uint32_t n = nb_flows; + int ret = 0; + int i; + + res = calloc(port->queue_sz, sizeof(struct rte_flow_op_result)); + if (!res) { + printf("Failed to allocate memory for pulled results\n"); + return -ENOMEM; + } + + memset(&error, 0x66, sizeof(error)); + while (nb_flows > 0) { + int success = 0; + + if (n > port->queue_sz) + n = port->queue_sz; + ret = port_queue_flow_destroy(port_id, queue_id, true, n, rule); + if (ret < 0) { + free(res); + return ret; + } + ret = rte_flow_push(port_id, queue_id, &error); + if (ret < 0) { + printf("Failed to push operations in the queue: %s\n", + strerror(-ret)); + free(res); + return ret; + } + while (success < nb_flows) { + ret = rte_flow_pull(port_id, queue_id, res, + port->queue_sz, &error); + if (ret < 0) { + printf("Failed to pull a operation results: %s\n", + strerror(-ret)); + free(res); + return ret; + } + + for (i = 0; i < ret; i++) { + if (res[i].status == RTE_FLOW_OP_SUCCESS) + success++; + } + } + rule += n; + nb_flows -= n; + n = nb_flows; + } + + free(res); + return ret; +} + +/** List simply and destroy all aged flows per queue. 
*/ +void +port_queue_flow_aged(portid_t port_id, uint32_t queue_id, uint8_t destroy) +{ + void **contexts; + int nb_context, total = 0, idx; + uint32_t *rules = NULL; + struct rte_port *port; + struct rte_flow_error error; + enum age_action_context_type *type; + union { + struct port_flow *pf; + struct port_indirect_action *pia; + } ctx; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return; + port = &ports[port_id]; + if (queue_id >= port->queue_nb) { + printf("Error: queue #%u is invalid\n", queue_id); + return; + } + total = rte_flow_get_q_aged_flows(port_id, queue_id, NULL, 0, &error); + if (total < 0) { + port_flow_complain(&error); + return; + } + printf("Port %u queue %u total aged flows: %d\n", + port_id, queue_id, total); + if (total == 0) + return; + contexts = calloc(total, sizeof(void *)); + if (contexts == NULL) { + printf("Cannot allocate contexts for aged flow\n"); + return; + } + printf("%-20s\tID\tGroup\tPrio\tAttr\n", "Type"); + nb_context = rte_flow_get_q_aged_flows(port_id, queue_id, contexts, + total, &error); + if (nb_context > total) { + printf("Port %u queue %u get aged flows count(%d) > total(%d)\n", + port_id, queue_id, nb_context, total); + free(contexts); + return; + } + if (destroy) { + rules = malloc(sizeof(uint32_t) * nb_context); + if (rules == NULL) + printf("Cannot allocate memory for destroy aged flow\n"); + } + total = 0; + for (idx = 0; idx < nb_context; idx++) { + if (!contexts[idx]) { + printf("Error: get Null context in port %u queue %u\n", + port_id, queue_id); + continue; + } + type = (enum age_action_context_type *)contexts[idx]; + switch (*type) { + case ACTION_AGE_CONTEXT_TYPE_FLOW: + ctx.pf = container_of(type, struct port_flow, age_type); + printf("%-20s\t%" PRIu32 "\t%" PRIu32 "\t%" PRIu32 + "\t%c%c%c\t\n", + "Flow", + ctx.pf->id, + ctx.pf->rule.attr->group, + ctx.pf->rule.attr->priority, + ctx.pf->rule.attr->ingress ? 'i' : '-', + ctx.pf->rule.attr->egress ? 
'e' : '-', + ctx.pf->rule.attr->transfer ? 't' : '-'); + if (rules != NULL) { + rules[total] = ctx.pf->id; + total++; + } + break; + case ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION: + ctx.pia = container_of(type, + struct port_indirect_action, + age_type); + printf("%-20s\t%" PRIu32 "\n", "Indirect action", + ctx.pia->id); + break; + default: + printf("Error: invalid context type %u\n", port_id); + break; + } + } + if (rules != NULL) { + port_queue_aged_flow_destroy(port_id, queue_id, rules, total); + free(rules); + } + printf("\n%d flows destroyed\n", total); + free(contexts); +} + /** Pull queue operation results from the queue. */ int port_queue_flow_pull(portid_t port_id, queueid_t queue_id) diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index fb2f5195d3..4e24dd9ee0 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -982,6 +982,7 @@ int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id, const struct rte_flow_action *action); int port_queue_flow_push(portid_t port_id, queueid_t queue_id); int port_queue_flow_pull(portid_t port_id, queueid_t queue_id); +void port_queue_flow_aged(portid_t port_id, uint32_t queue_id, uint8_t destroy); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 6c12e0286c..e68b852e29 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3085,9 +3085,10 @@ following sections. [meters_number {number}] [flags {number}] - Create a pattern template:: + flow pattern_template {port_id} create [pattern_template_id {id}] [relaxed {boolean}] [ingress] [egress] [transfer] - template {item} [/ {item} [...]] / end + template {item} [/ {item} [...]] / end - Destroy a pattern template:: @@ -3186,6 +3187,10 @@ following sections. 
flow aged {port_id} [destroy] +- Enqueue list and destroy aged flow rules:: + + flow queue {port_id} aged {queue_id} [destroy] + - Tunnel offload - create a tunnel stub:: flow tunnel create {port_id} type {tunnel_type} @@ -4427,7 +4432,7 @@ Disabling isolated mode:: testpmd> Dumping HW internal information -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``flow dump`` dumps the hardware's internal representation information of all flows. It is bound to ``rte_flow_dev_dump()``:: @@ -4443,10 +4448,10 @@ Otherwise, it will complain error occurred:: Caught error type [...] ([...]): [...] Listing and destroying aged flow rules -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``flow aged`` simply lists aged flow rules be get from api ``rte_flow_get_aged_flows``, -and ``destroy`` parameter can be used to destroy those flow rules in PMD. +and ``destroy`` parameter can be used to destroy those flow rules in PMD:: flow aged {port_id} [destroy] @@ -4481,7 +4486,7 @@ will be ID 3, ID 1, ID 0:: 1 0 0 i-- 0 0 0 i-- -If attach ``destroy`` parameter, the command will destroy all the list aged flow rules. 
+If attach ``destroy`` parameter, the command will destroy all the list aged flow rules:: testpmd> flow aged 0 destroy Port 0 total aged flows: 4 @@ -4499,6 +4504,77 @@ If attach ``destroy`` parameter, the command will destroy all the list aged flow testpmd> flow aged 0 Port 0 total aged flows: 0 + +Enqueueing listing and destroying aged flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue aged`` simply lists aged flow rules be get from +``rte_flow_get_q_aged_flows`` API, and ``destroy`` parameter can be used to +destroy those flow rules in PMD:: + + flow queue {port_id} aged {queue_id} [destroy] + +Listing current aged flow rules:: + + testpmd> flow queue 0 aged 0 + Port 0 queue 0 total aged flows: 0 + testpmd> flow queue 0 create 0 ingress tanle 0 item_template 0 action_template 0 + pattern eth / ipv4 src is 2.2.2.14 / end + actions age timeout 5 / queue index 0 / end + Flow rule #0 creation enqueued + testpmd> flow queue 0 create 0 ingress tanle 0 item_template 0 action_template 0 + pattern eth / ipv4 src is 2.2.2.15 / end + actions age timeout 4 / queue index 0 / end + Flow rule #1 creation enqueued + testpmd> flow queue 0 create 0 ingress tanle 0 item_template 0 action_template 0 + pattern eth / ipv4 src is 2.2.2.16 / end + actions age timeout 4 / queue index 0 / end + Flow rule #2 creation enqueued + testpmd> flow queue 0 create 0 ingress tanle 0 item_template 0 action_template 0 + pattern eth / ipv4 src is 2.2.2.17 / end + actions age timeout 4 / queue index 0 / end + Flow rule #3 creation enqueued + testpmd> flow pull 0 queue 0 + Queue #0 pulled 4 operations (0 failed, 4 succeeded) + +Aged Rules are simply list as command ``flow queue {port_id} list {queue_id}``, +but strip the detail rule information, all the aged flows are sorted by the +longest timeout time. 
For example, if those rules is configured in the same time, +ID 2 will be the first aged out rule, the next will be ID 3, ID 1, ID 0:: + + testpmd> flow queue 0 aged 0 + Port 0 queue 0 total aged flows: 4 + ID Group Prio Attr + 2 0 0 --- + 3 0 0 --- + 1 0 0 --- + 0 0 0 --- + + 0 flows destroyed + +If attach ``destroy`` parameter, the command will destroy all the list aged flow rules:: + + testpmd> flow queue 0 aged 0 destroy + Port 0 queue 0 total aged flows: 4 + ID Group Prio Attr + 2 0 0 --- + 3 0 0 --- + 1 0 0 --- + 0 0 0 --- + Flow rule #2 destruction enqueued + Flow rule #3 destruction enqueued + Flow rule #1 destruction enqueued + Flow rule #0 destruction enqueued + + 4 flows destroyed + testpmd> flow queue 0 aged 0 + Port 0 total aged flows: 0 + +.. note:: + + The queue must be empty before attaching ``destroy`` parameter. + + Creating indirect actions ~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 501be9d602..5c95ac7f8b 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -1133,6 +1133,28 @@ rte_flow_get_aged_flows(uint16_t port_id, void **contexts, NULL, rte_strerror(ENOTSUP)); } +int +rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts, + uint32_t nb_contexts, struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->get_q_aged_flows)) { + fts_enter(dev); + ret = ops->get_q_aged_flows(dev, queue_id, contexts, + nb_contexts, error); + fts_exit(dev); + return flow_err(port_id, ret, error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + struct rte_flow_action_handle * rte_flow_action_handle_create(uint16_t port_id, const struct rte_flow_indir_action_conf *conf, diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 
c552771472..d830b02321 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2930,8 +2930,8 @@ struct rte_flow_action_queue { * on the flow. RTE_ETH_EVENT_FLOW_AGED event is triggered when a * port detects new aged-out flows. * - * The flow context and the flow handle will be reported by the - * rte_flow_get_aged_flows API. + * The flow context and the flow handle will be reported by the either + * rte_flow_get_aged_flows or rte_flow_get_q_aged_flows APIs. */ struct rte_flow_action_age { uint32_t timeout:24; /**< Time in seconds. */ @@ -4443,6 +4443,50 @@ int rte_flow_get_aged_flows(uint16_t port_id, void **contexts, uint32_t nb_contexts, struct rte_flow_error *error); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Get aged-out flows of a given port on the given flow queue. + * + * If application configure port attribute with RTE_FLOW_PORT_FLAG_STRICT_QUEUE, + * there is no RTE_ETH_EVENT_FLOW_AGED event and this function must be called to + * get the aged flows synchronously. + * + * If application configure port attribute without + * RTE_FLOW_PORT_FLAG_STRICT_QUEUE, RTE_ETH_EVENT_FLOW_AGED event will be + * triggered at least one new aged out flow was detected on any flow queue after + * the last call to rte_flow_get_q_aged_flows. + * In addition, the @p queue_id will be ignored. + * This function can be called to get the aged flows asynchronously from the + * event callback or synchronously regardless the event. + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue to query. Ignored when RTE_FLOW_PORT_FLAG_STRICT_QUEUE not set. + * @param[in, out] contexts + * The address of an array of pointers to the aged-out flows contexts. + * @param[in] nb_contexts + * The length of context array pointers. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. 
+ * + * @return + * if nb_contexts is 0, return the amount of all aged contexts. + * if nb_contexts is not 0 , return the amount of aged flows reported + * in the context array, otherwise negative errno value. + * + * @see rte_flow_action_age + * @see RTE_ETH_EVENT_FLOW_AGED + * @see rte_flow_port_flag + */ +__rte_experimental +int +rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts, + uint32_t nb_contexts, struct rte_flow_error *error); + /** * Specify indirect action object configuration */ diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 2bff732d6a..f0a03bf149 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -84,6 +84,13 @@ struct rte_flow_ops { void **context, uint32_t nb_contexts, struct rte_flow_error *err); + /** See rte_flow_get_q_aged_flows() */ + int (*get_q_aged_flows) + (struct rte_eth_dev *dev, + uint32_t queue_id, + void **contexts, + uint32_t nb_contexts, + struct rte_flow_error *error); /** See rte_flow_action_handle_create() */ struct rte_flow_action_handle *(*action_handle_create) (struct rte_eth_dev *dev, diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 03f52fee91..4a40d24d8f 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -285,6 +285,9 @@ EXPERIMENTAL { rte_mtr_color_in_protocol_priority_get; rte_mtr_color_in_protocol_set; rte_mtr_meter_vlan_table_update; + + # added in 22.11 + rte_flow_get_q_aged_flows; }; INTERNAL {

From patchwork Wed Sep 21 14:54:09 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 116574
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Michael Baum
To:
CC: Matan Azrad, Raslan Darawsheh, "Ori Kam"
Subject: [PATCH 3/3] ethdev: add structure for indirect AGE update
Date: Wed, 21 Sep 2022 17:54:09 +0300
Message-ID: <20220921145409.511328-4-michaelba@nvidia.com>
In-Reply-To: <20220921145409.511328-1-michaelba@nvidia.com>
References: <20220921145409.511328-1-michaelba@nvidia.com>
Add a new structure for indirect AGE update.

This new structure enables:
1. Updating the timeout value.
2. Stopping AGE checking.
3. Starting AGE checking.
4. Restarting AGE checking.
Signed-off-by: Michael Baum
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c        | 66 ++++++++++++++++++++++++++++++
 app/test-pmd/config.c              | 18 +++++++-
 doc/guides/prog_guide/rte_flow.rst | 25 +++++++++--
 lib/ethdev/rte_flow.h              | 27 ++++++++++++
 4 files changed, 132 insertions(+), 4 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 4fb90a92cb..a315fd9ded 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -583,6 +583,9 @@ enum index { ACTION_SET_IPV6_DSCP_VALUE, ACTION_AGE, ACTION_AGE_TIMEOUT, + ACTION_AGE_UPDATE, + ACTION_AGE_UPDATE_TIMEOUT, + ACTION_AGE_UPDATE_TOUCH, ACTION_SAMPLE, ACTION_SAMPLE_RATIO, ACTION_SAMPLE_INDEX, @@ -1869,6 +1872,7 @@ static const enum index next_action[] = { ACTION_SET_IPV4_DSCP, ACTION_SET_IPV6_DSCP, ACTION_AGE, + ACTION_AGE_UPDATE, ACTION_SAMPLE, ACTION_INDIRECT, ACTION_MODIFY_FIELD, @@ -2113,6 +2117,14 @@ static const enum index action_age[] = { ZERO, }; +static const enum index action_age_update[] = { + ACTION_AGE_UPDATE, + ACTION_AGE_UPDATE_TIMEOUT, + ACTION_AGE_UPDATE_TOUCH, + ACTION_NEXT, + ZERO, +}; + static const enum index action_sample[] = { ACTION_SAMPLE, ACTION_SAMPLE_RATIO, @@ -2191,6 +2203,9 @@ static int parse_vc_spec(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); static int parse_vc_conf(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_vc_conf_timeout(struct context *, const struct token *, + const char *, unsigned int, void *, + unsigned int); static int parse_vc_item_ecpri_type(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -6194,6 +6209,30 @@ static const struct token token_list[] = { .next = NEXT(action_age, NEXT_ENTRY(COMMON_UNSIGNED)), .call = parse_vc_conf, }, + [ACTION_AGE_UPDATE] = { + .name = "age_update", + .help = "update aging parameter", + .next = NEXT(action_age_update), + .priv =
PRIV_ACTION(AGE, + sizeof(struct rte_flow_update_age)), + .call = parse_vc, + }, + [ACTION_AGE_UPDATE_TIMEOUT] = { + .name = "timeout", + .help = "age timeout update value", + .args = ARGS(ARGS_ENTRY_BF(struct rte_flow_update_age, + timeout, 24)), + .next = NEXT(action_age_update, NEXT_ENTRY(COMMON_UNSIGNED)), + .call = parse_vc_conf_timeout, + }, + [ACTION_AGE_UPDATE_TOUCH] = { + .name = "touch", + .help = "this flow is touched", + .next = NEXT(action_age_update, NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY_BF(struct rte_flow_update_age, + touch, 1)), + .call = parse_vc_conf, + }, [ACTION_SAMPLE] = { .name = "sample", .help = "set a sample action", @@ -7031,6 +7070,33 @@ parse_vc_conf(struct context *ctx, const struct token *token, return len; } +/** Parse action configuration field. */ +static int +parse_vc_conf_timeout(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_update_age *update; + + (void)size; + if (ctx->curr != ACTION_AGE_UPDATE_TIMEOUT) + return -1; + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Update the timeout is valid. */ + update = (struct rte_flow_update_age *)out->args.vc.data; + update->timeout_valid = 1; + return len; +} + /** Parse eCPRI common header type field. 
*/ static int parse_vc_item_ecpri_type(struct context *ctx, const struct token *token, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 31952467fb..45495385d7 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2065,6 +2065,7 @@ port_action_handle_update(portid_t port_id, uint32_t id, if (!pia) return -EINVAL; switch (pia->type) { + case RTE_FLOW_ACTION_TYPE_AGE: case RTE_FLOW_ACTION_TYPE_CONNTRACK: update = action->conf; break; @@ -2904,6 +2905,8 @@ port_queue_action_handle_update(portid_t port_id, struct rte_port *port; struct rte_flow_error error; struct rte_flow_action_handle *action_handle; + struct port_indirect_action *pia; + const void *update; action_handle = port_action_handle_get_by_id(port_id, id); if (!action_handle) @@ -2915,8 +2918,21 @@ port_queue_action_handle_update(portid_t port_id, return -EINVAL; } + pia = action_get_by_id(port_id, id); + if (!pia) + return -EINVAL; + + switch (pia->type) { + case RTE_FLOW_ACTION_TYPE_AGE: + update = action->conf; + break; + default: + update = action; + break; + } + if (rte_flow_async_action_handle_update(port_id, queue_id, &attr, - action_handle, action, NULL, &error)) { + action_handle, update, NULL, &error)) { return port_flow_complain(&error); } printf("Indirect action #%u update queued\n", id); diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 588914b231..dae9121279 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -2958,7 +2958,7 @@ Otherwise, RTE_FLOW_ERROR_TYPE_ACTION error will be returned. Action: ``AGE`` ^^^^^^^^^^^^^^^ -Set ageing timeout configuration to a flow. +Set aging timeout configuration to a flow. Event RTE_ETH_EVENT_FLOW_AGED will be reported if timeout passed without any matching on the flow. @@ -2977,8 +2977,8 @@ timeout passed without any matching on the flow. 
| ``context`` | user input flow context | +--------------+---------------------------------+ -Query structure to retrieve ageing status information of a -shared AGE action, or a flow rule using the AGE action: +Query structure to retrieve aging status information of an +indirect AGE action, or a flow rule using the AGE action: .. _table_rte_flow_query_age: @@ -2994,6 +2994,25 @@ shared AGE action, or a flow rule using the AGE action: | ``sec_since_last_hit`` | out | Seconds since last traffic hit | +------------------------------+-----+----------------------------------------+ +Update structure to modify the parameters of an indirect AGE action. +The update structure is used by ``rte_flow_action_handle_update()`` function. + +.. _table_rte_flow_update_age: + +.. table:: AGE update + + +-------------------+--------------------------------------------------------------+ + | Field | Value | + +===================+==============================================================+ + | ``timeout`` | 24 bits timeout value | + +-------------------+--------------------------------------------------------------+ + | ``timeout_valid`` | 1 bit, timeout value is valid | + +-------------------+--------------------------------------------------------------+ + | ``touch`` | 1 bit, touch the AGE action to set ``sec_since_last_hit`` 0 | + +-------------------+--------------------------------------------------------------+ + | ``reserved`` | 6 bits reserved, must be zero | + +-------------------+--------------------------------------------------------------+ + Action: ``SAMPLE`` ^^^^^^^^^^^^^^^^^^ diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index d830b02321..a21d437cf8 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2954,6 +2954,33 @@ struct rte_flow_query_age { uint32_t sec_since_last_hit:24; /**< Seconds since last traffic hit. 
*/ }; +/** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice + * + * RTE_FLOW_ACTION_TYPE_AGE + * + * Update indirect AGE action attributes: + * - Timeout can be updated including stop/start action: + * +-------------+-------------+------------------------------+ + * | Old Timeout | New Timeout | Updating | + * +=============+=============+==============================+ + * | 0 | positive | Start aging with new value | + * +-------------+-------------+------------------------------+ + * | positive | 0 | Stop aging | + * +-------------+-------------+------------------------------+ + * | positive | positive | Change timeout to new value | + * +-------------+-------------+------------------------------+ + * - sec_since_last_hit can be reset. + */ +struct rte_flow_update_age { + uint32_t reserved:6; /**< Reserved, must be zero. */ + uint32_t timeout_valid:1; /**< The timeout is valid for update. */ + uint32_t timeout:24; /**< Time in seconds. */ + uint32_t touch:1; + /**< Means that aging should assume packet passed the aging. 
*/ +}; + /** * @warning * @b EXPERIMENTAL: this structure may change without prior notice From patchwork Thu Sep 22 07:41:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 116636 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 03E0DA0543; Thu, 22 Sep 2022 09:48:20 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2DAE741145; Thu, 22 Sep 2022 09:48:11 +0200 (CEST) Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id 2A9FB400D7 for ; Thu, 22 Sep 2022 09:48:07 +0200 (CEST) Received: from dggpeml500024.china.huawei.com (unknown [172.30.72.56]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4MY6jS2vkszlWT6; Thu, 22 Sep 2022 15:43:56 +0800 (CST) Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.31; Thu, 22 Sep 2022 15:48:04 +0800 From: Chengwen Feng To: , , CC: , , , , , Subject: [PATCH v9 1/5] ethdev: support get port error handling mode Date: Thu, 22 Sep 2022 07:41:47 +0000 Message-ID: <20220922074151.39450-2-fengchengwen@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220922074151.39450-1-fengchengwen@huawei.com> References: <20220128124831.427-1-kalesh-anakkur.purayil@broadcom.com> <20220922074151.39450-1-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list 
List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch supports getting a port's error handling mode via the rte_eth_dev_info_get() API. Currently, the defined modes include: 1) NONE: it means no error handling modes are supported by this port. 2) PASSIVE: passive error handling; after the PMD detects that a reset is required, it reports the RTE_ETH_EVENT_INTR_RESET event, and the application invokes rte_eth_dev_reset() to recover the port. Signed-off-by: Chengwen Feng --- app/test-pmd/config.c | 2 ++ drivers/net/e1000/igb_ethdev.c | 2 ++ drivers/net/ena/ena_ethdev.c | 2 ++ drivers/net/iavf/iavf_ethdev.c | 2 ++ drivers/net/ixgbe/ixgbe_ethdev.c | 2 ++ drivers/net/txgbe/txgbe_ethdev_vf.c | 2 ++ lib/ethdev/rte_ethdev.h | 19 ++++++++++++++++++- 7 files changed, 30 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 86054455d2..0c10c663e9 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -922,6 +922,8 @@ port_infos_display(portid_t port_id) printf("Switch Rx domain: %u\n", dev_info.switch_info.rx_domain); } + if (dev_info.err_handle_mode == RTE_ETH_ERROR_HANDLE_MODE_PASSIVE) + printf("Device error handling mode: passive\n"); } void diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c index a9c18b27e8..dea69c9db1 100644 --- a/drivers/net/e1000/igb_ethdev.c +++ b/drivers/net/e1000/igb_ethdev.c @@ -2341,6 +2341,8 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->rx_desc_lim = rx_desc_lim; dev_info->tx_desc_lim = tx_desc_lim; + dev_info->err_handle_mode = RTE_ETH_ERROR_HANDLE_MODE_PASSIVE; + return 0; } diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 3e88bcda6c..efcb163027 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -2482,6 +2482,8 @@ static int ena_infos_get(struct rte_eth_dev *dev,
dev_info->default_rxportconf.ring_size = ENA_DEFAULT_RING_SIZE; dev_info->default_txportconf.ring_size = ENA_DEFAULT_RING_SIZE; + dev_info->err_handle_mode = RTE_ETH_ERROR_HANDLE_MODE_PASSIVE; + return 0; } diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index 652f0d00a5..b2ef2dc366 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -1178,6 +1178,8 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) .nb_align = IAVF_ALIGN_RING_DESC, }; + dev_info->err_handle_mode = RTE_ETH_ERROR_HANDLE_MODE_PASSIVE; + return 0; } diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c index f31bbb7895..7b68b171e6 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.c +++ b/drivers/net/ixgbe/ixgbe_ethdev.c @@ -4056,6 +4056,8 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev, dev_info->rx_desc_lim = rx_desc_lim; dev_info->tx_desc_lim = tx_desc_lim; + dev_info->err_handle_mode = RTE_ETH_ERROR_HANDLE_MODE_PASSIVE; + return 0; } diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index f52cd8bc19..3b1f7c913b 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -521,6 +521,8 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev, dev_info->rx_desc_lim = rx_desc_lim; dev_info->tx_desc_lim = tx_desc_lim; + dev_info->err_handle_mode = RTE_ETH_ERROR_HANDLE_MODE_PASSIVE; + return 0; } diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index de9e970d4d..930b0a2fff 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -1848,6 +1848,19 @@ enum rte_eth_representor_type { RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */ }; +/** + * Ethernet device error handling mode. + */ +enum rte_eth_err_handle_mode { + /** No error handling modes are supported. 
*/ + RTE_ETH_ERROR_HANDLE_MODE_NONE, + /** Passive error handling; after the PMD detects that a reset is + * required, it reports the @see RTE_ETH_EVENT_INTR_RESET event, and + * the application invokes @see rte_eth_dev_reset to recover the port. + */ + RTE_ETH_ERROR_HANDLE_MODE_PASSIVE, +}; + /** * A structure used to retrieve the contextual information of * an Ethernet device, such as the controlling driver of the @@ -1908,8 +1921,12 @@ struct rte_eth_dev_info { * embedded managed interconnect/switch. */ struct rte_eth_switch_info switch_info; + /** Supported error handling mode. @see enum rte_eth_err_handle_mode */ + uint8_t err_handle_mode; - uint64_t reserved_64s[2]; /**< Reserved for future fields */ + uint8_t reserved_8; /**< Reserved for future fields */ + uint16_t reserved_16s[3]; /**< Reserved for future fields */ + uint64_t reserved_64; /**< Reserved for future fields */ void *reserved_ptrs[2]; /**< Reserved for future fields */ }; From patchwork Thu Sep 22 07:41:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 116639 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5E076A0543; Thu, 22 Sep 2022 09:48:36 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7EABB42B6E; Thu, 22 Sep 2022 09:48:13 +0200 (CEST) Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) by mails.dpdk.org (Postfix) with ESMTP id 9045A40F17 for ; Thu, 22 Sep 2022 09:48:07 +0200 (CEST) Received: from dggpeml500024.china.huawei.com (unknown [172.30.72.54]) by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4MY6lk36rpzHppZ; Thu, 22 Sep 2022 15:45:54 +0800 (CST) Received: from localhost.localdomain (10.67.165.24) by
dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.31; Thu, 22 Sep 2022 15:48:04 +0800 From: Chengwen Feng To: , , CC: , , , , , Subject: [PATCH v9 2/5] ethdev: support proactive error handling mode Date: Thu, 22 Sep 2022 07:41:48 +0000 Message-ID: <20220922074151.39450-3-fengchengwen@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220922074151.39450-1-fengchengwen@huawei.com> References: <20220128124831.427-1-kalesh-anakkur.purayil@broadcom.com> <20220922074151.39450-1-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Kalesh AP Some PMDs (e.g. hns3) can detect hardware or firmware errors and try to recover from them. In this process, the PMD sets the data path pointers to dummy functions (which will prevent the crash), and also makes sure that control path operations fail with return code -EBUSY. The above error handling mode is known as RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE (proactive error handling mode). In some service scenarios, the application needs to be aware of such events to determine whether to migrate services. So three events were introduced: 1) RTE_ETH_EVENT_ERR_RECOVERING: used to notify the application that the PMD detected an error and the recovery is being started. Upon receiving the event, the application should not invoke any control path APIs until receiving the RTE_ETH_EVENT_RECOVERY_SUCCESS or RTE_ETH_EVENT_RECOVERY_FAILED event.
2) RTE_ETH_EVENT_RECOVERY_SUCCESS: used to notify the application that it recovered successfully from the error; the PMD has already re-configured the port to the state prior to the error. 3) RTE_ETH_EVENT_RECOVERY_FAILED: used to notify the application that recovery from the error failed; the port is no longer usable. The application should close the port. Signed-off-by: Kalesh AP Signed-off-by: Somnath Kotur Signed-off-by: Chengwen Feng Reviewed-by: Ajit Khaparde --- app/test-pmd/config.c | 2 ++ doc/guides/prog_guide/poll_mode_drv.rst | 39 +++++++++++++++++++++++++ doc/guides/rel_notes/release_22_11.rst | 12 ++++++++ lib/ethdev/rte_ethdev.h | 33 +++++++++++++++++++++ 4 files changed, 86 insertions(+) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 0c10c663e9..b716d2a15f 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -924,6 +924,8 @@ port_infos_display(portid_t port_id) } if (dev_info.err_handle_mode == RTE_ETH_ERROR_HANDLE_MODE_PASSIVE) printf("Device error handling mode: passive\n"); + else if (dev_info.err_handle_mode == RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE) + printf("Device error handling mode: proactive\n"); } void diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst index 9d081b1cba..232dc459b0 100644 --- a/doc/guides/prog_guide/poll_mode_drv.rst +++ b/doc/guides/prog_guide/poll_mode_drv.rst @@ -627,3 +627,42 @@ by application. The PMD itself should not call rte_eth_dev_reset(). The PMD can trigger the application to handle reset event. It is duty of application to handle all synchronization before it calls rte_eth_dev_reset(). + +The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``. + +Proactive Error Handling Mode +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If the PMD supports ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``, it means that once it detects +hardware or firmware errors, the PMD will try to recover from the errors.
In +this process, the PMD sets the data path pointers to dummy functions (which +will prevent the crash), and also makes sure that control path operations fail +with return code -EBUSY. + +Also in this process, from the perspective of the application, services are +affected. For example, the Rx/Tx burst APIs cannot receive or send packets, +and the control path APIs return failure. + +In some service scenarios, the application needs to be aware of the event to +determine whether to migrate services. So three events were introduced: + +* RTE_ETH_EVENT_ERR_RECOVERING: used to notify the application that the PMD detected + an error and the recovery is being started. Upon receiving the event, the + application should not invoke any control path APIs until receiving the + RTE_ETH_EVENT_RECOVERY_SUCCESS or RTE_ETH_EVENT_RECOVERY_FAILED event. + + +* RTE_ETH_EVENT_RECOVERY_SUCCESS: used to notify the application that it + recovered successfully from the error; the PMD has already re-configured the port to + the state prior to the error. + +* RTE_ETH_EVENT_RECOVERY_FAILED: used to notify the application that + recovery from the error failed; the port is no longer usable. The + application should close the port. + +.. note:: + * Before the PMD reports the recovery result, the PMD may report the + ``RTE_ETH_EVENT_ERR_RECOVERING`` event again, because a larger error + may occur during the recovery. + * The error handling mode supported by the PMD can be reported through + the ``rte_eth_dev_info_get`` API. diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 8c021cf050..fc85e5fa87 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -55,6 +55,18 @@ New Features Also, make sure to start the actual text at the margin.
======================================================= +* **Added proactive error handling mode for ethdev.** + + Added proactive error handling mode for ethdev, and three events were + introduced: + + * Added new event: ``RTE_ETH_EVENT_ERR_RECOVERING`` for the PMD to report + that the port is recovering from an error. + * Added new event: ``RTE_ETH_EVENT_RECOVERY_SUCCESS`` for the PMD to report + that the port recovered successfully from an error. + * Added new event: ``RTE_ETH_EVENT_RECOVERY_FAILED`` for the PMD to report + that the port recovery from an error failed. + Removed Items ------------- diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 930b0a2fff..d3e81b98a7 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -1859,6 +1859,12 @@ enum rte_eth_err_handle_mode { * application invoke @see rte_eth_dev_reset to recover the port. */ RTE_ETH_ERROR_HANDLE_MODE_PASSIVE, + /** Proactive error handling; after the PMD detects that a reset is + * required, it reports the @see RTE_ETH_EVENT_ERR_RECOVERING event, + * does the recovery internally, and finally reports the recovery result + * event (@see RTE_ETH_EVENT_RECOVERY_*). + */ + RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE, }; /** @@ -3944,6 +3950,33 @@ enum rte_eth_event_type { * @see rte_eth_rx_avail_thresh_set() */ RTE_ETH_EVENT_RX_AVAIL_THRESH, + /** Port recovering from a hardware or firmware error. + * If the PMD supports proactive error recovery, it should trigger this + * event to notify the application that it detected an error and the + * recovery is being started. Upon receiving the event, the application + * should not invoke any control path APIs (such as + * rte_eth_dev_configure/rte_eth_dev_stop...) until receiving the + * RTE_ETH_EVENT_RECOVERY_SUCCESS or RTE_ETH_EVENT_RECOVERY_FAILED + * event. + * The PMD will set the data path pointers to dummy functions, and + * re-set the data path pointers to non-dummy functions before reporting + * the RTE_ETH_EVENT_RECOVERY_SUCCESS event.
It means that the application + * cannot send or receive any packets during this period. + * @note Before the PMD reports the recovery result, the PMD may report + * the RTE_ETH_EVENT_ERR_RECOVERING event again, because a larger error + * may occur during the recovery. + */ + RTE_ETH_EVENT_ERR_RECOVERING, + /** Port recovered successfully from the error. + * The PMD has already re-configured the port to the state prior to the + * error. + */ + RTE_ETH_EVENT_RECOVERY_SUCCESS, + /** Port recovery from the error failed. + * It means that the port is no longer usable. The application + * should close the port. + */ + RTE_ETH_EVENT_RECOVERY_FAILED, RTE_ETH_EVENT_MAX /**< max value of this enum */ }; From patchwork Fri Sep 23 07:43:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dongdong Liu X-Patchwork-Id: 116719 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D12B0A0544; Fri, 23 Sep 2022 09:45:08 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 377164282D; Fri, 23 Sep 2022 09:45:00 +0200 (CEST) Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id D748E400D7 for ; Fri, 23 Sep 2022 09:44:56 +0200 (CEST) Received: from kwepemi500017.china.huawei.com (unknown [172.30.72.56]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4MYkbJ1lB7zlWN0; Fri, 23 Sep 2022 15:40:44 +0800 (CST) Received: from localhost.localdomain (10.28.79.22) by kwepemi500017.china.huawei.com (7.221.188.110) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.31; Fri, 23 Sep 2022 15:44:53 +0800 From: Dongdong Liu To: , , , , , CC: "Min Hu (Connor)" , Dongdong Liu
Subject: [PATCH v4 1/3] ethdev: introduce ethdev desc dump API Date: Fri, 23 Sep 2022 15:43:14 +0800 Message-ID: <20220923074316.25077-2-liudongdong3@huawei.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20220923074316.25077-1-liudongdong3@huawei.com> References: <20220527023351.40577-1-humin29@huawei.com> <20220923074316.25077-1-liudongdong3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.28.79.22] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemi500017.china.huawei.com (7.221.188.110) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: "Min Hu (Connor)" Added the ethdev Rx/Tx desc dump API, which provides functions for querying descriptors from the device. HW descriptor info differs between NICs. The information illustrates the I/O process, which is important for debugging. As the information is different between NICs, the new API is introduced. Signed-off-by: Min Hu (Connor) Signed-off-by: Dongdong Liu Acked-by: Ray Kinsella --- doc/guides/rel_notes/release_22_11.rst | 7 ++++ lib/ethdev/ethdev_driver.h | 46 +++++++++++++++++++++++ lib/ethdev/rte_ethdev.c | 52 ++++++++++++++++++++++++++ lib/ethdev/rte_ethdev.h | 49 ++++++++++++++++++++++++ lib/ethdev/version.map | 2 + 5 files changed, 156 insertions(+) diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index f60161765b..d3f3f2e50c 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -55,6 +55,13 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Added ethdev desc dump API, to dump Rx/Tx desc info from device.** + +Added the ethdev Rx/Tx desc dump API, which provides functions for querying +descriptors from the device.
The descriptor info differs between NICs. +The information illustrates the I/O process, which is important for debugging. +As the information is different between NICs, the new API is introduced. +The dump format is vendor-specific. Removed Items ------------- diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index a0e0b2ae88..76808dae89 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -1093,6 +1093,47 @@ typedef int (*eth_rx_queue_avail_thresh_query_t)(struct rte_eth_dev *dev, uint16_t *rx_queue_id, uint8_t *avail_thresh); + +/** + * @internal + * Dump Rx descriptor info to a file. + * + * It is used for debugging, not a dataplane API. + * + * @param file + * A pointer to a file for output. + * @param dev + * Port (ethdev) handle. + * @param queue_id + * The selected queue. + * @param num + * The number of the descriptors to dump. + * @return + * Negative errno value on error, zero on success. + */ +typedef int (*eth_rx_hw_desc_dump_t)(FILE *file, const struct rte_eth_dev *dev, + uint16_t queue_id, uint16_t num); + +/** + * @internal + * Dump Tx descriptor info to a file. + * + * This API is used for debugging, not a dataplane API. + * + * @param file + * A pointer to a file for output. + * @param dev + * Port (ethdev) handle. + * @param queue_id + * The selected queue. + * @param num + * The number of the descriptors to dump. + * @return + * Negative errno value on error, zero on success. + */ +typedef int (*eth_tx_hw_desc_dump_t)(FILE *file, const struct rte_eth_dev *dev, + uint16_t queue_id, uint16_t num); + /** * @internal A structure containing the functions exported by an Ethernet driver.
*/ @@ -1308,6 +1349,11 @@ struct eth_dev_ops { eth_rx_queue_avail_thresh_set_t rx_queue_avail_thresh_set; /** Query Rx queue available descriptors threshold event */ eth_rx_queue_avail_thresh_query_t rx_queue_avail_thresh_query; + + /** Dump Rx descriptor info */ + eth_rx_hw_desc_dump_t eth_rx_hw_desc_dump; + /** Dump Tx descriptor info */ + eth_tx_hw_desc_dump_t eth_tx_hw_desc_dump; }; /** diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 1979dc0850..2093275d87 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -5917,6 +5917,58 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file) return eth_err(port_id, (*dev->dev_ops->eth_dev_priv_dump)(dev, file)); } +int +rte_eth_rx_hw_desc_dump(FILE *file, uint16_t port_id, uint16_t queue_id, + uint16_t num) +{ + struct rte_eth_dev *dev; + int ret; + + if (file == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + return -EINVAL; + } + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (queue_id >= dev->data->nb_rx_queues) { + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->eth_rx_hw_desc_dump, -ENOTSUP); + ret = (*dev->dev_ops->eth_rx_hw_desc_dump)(file, dev, queue_id, num); + + return ret; +} + +int +rte_eth_tx_hw_desc_dump(FILE *file, uint16_t port_id, uint16_t queue_id, + uint16_t num) +{ + struct rte_eth_dev *dev; + int ret; + + if (file == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + return -EINVAL; + } + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (queue_id >= dev->data->nb_tx_queues) { + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->eth_tx_hw_desc_dump, -ENOTSUP); + ret = (*dev->dev_ops->eth_tx_hw_desc_dump)(file, dev, queue_id, num); + + return ret; +} + RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO); 
RTE_INIT(ethdev_init_telemetry) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index b62ac5bb6f..4671e6b28e 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -5221,6 +5221,55 @@ typedef struct { __rte_experimental int rte_eth_dev_priv_dump(uint16_t port_id, FILE *file); +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Dump ethdev Rx descriptor info to a file. + * + * This API is used for debugging, not a dataplane API. + * + * @param file + * A pointer to a file for output. + * @param port_id + * The port identifier of the Ethernet device. + * @param queue_id + * The selected queue. + * @param num + * The number of the descriptors to dump. + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_eth_rx_hw_desc_dump(FILE *file, uint16_t port_id, uint16_t queue_id, + uint16_t num); + +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Dump ethdev Tx descriptor info to a file. + * + * This API is used for debugging, not a dataplane API. + * + * @param file + * A pointer to a file for output. + * @param port_id + * The port identifier of the Ethernet device. + * @param queue_id + * The selected queue. + * @param num + * The number of the descriptors to dump. + * @return + * - On success, zero. + * - On failure, a negative value.
+ */ +__rte_experimental +int rte_eth_tx_hw_desc_dump(FILE *file, uint16_t port_id, uint16_t queue_id, + uint16_t num); + + #include /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 03f52fee91..3c7c75b582 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -285,6 +285,8 @@ EXPERIMENTAL { rte_mtr_color_in_protocol_priority_get; rte_mtr_color_in_protocol_set; rte_mtr_meter_vlan_table_update; + rte_eth_rx_hw_desc_dump; + rte_eth_tx_hw_desc_dump; }; INTERNAL { From patchwork Fri Sep 23 13:16:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satha Koteswara Rao Kottidi X-Patchwork-Id: 116735 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D1042A0544; Fri, 23 Sep 2022 15:17:32 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C2483406A2; Fri, 23 Sep 2022 15:17:32 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id ABD7E4003F for ; Fri, 23 Sep 2022 15:17:30 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 28N74Xag026331; Fri, 23 Sep 2022 06:17:27 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : mime-version : content-type; s=pfpt0220; bh=S99wgxs89voQ3jAeFyA5XPZ3xljiEGnzoHA4At5hMHg=; b=Mb/VTj9RjPQ3U/m1Ep7CsRdK/w0JL647yC6+BifEWy3LQwXQV+dGx3AZJ8SYPNZP4H4C Qkyjcc6WJ20PcnhysxjhTupuJQCJUMqXc/dOoQLjF10Ok0FiLb+grWqU7KuMlhv/h/wz iywr60Myi5jZc+PSoj7vDKEXVREgFLuJ5lgKRrtr2PEzD05aTbJ3SGMm5fXPTRCeztj5 meYelzqY9RPW1YjOM1TBWjRRmS0y29ArLAeKubHco9tKW/Z7fLN9FcZ12/iUjrTL8+Nq 
Kl0vLSx7xl3LGppuCePw2m+q+rmUxo4iwBIJ5xPfo+T+EfHUM8VYQstS2ugx/jQTuiLY OQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3jrmx5dakg-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 23 Sep 2022 06:17:27 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Fri, 23 Sep 2022 06:17:26 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.18 via Frontend Transport; Fri, 23 Sep 2022 06:17:25 -0700 Received: from cavium.localdomain (unknown [10.28.34.26]) by maili.marvell.com (Postfix) with ESMTP id 8D9DD3F7050; Fri, 23 Sep 2022 06:17:20 -0700 (PDT) From: To: Aman Singh , Yuying Zhang , Ajit Khaparde , "Somnath Kotur" , Nithin Dabilpuram , Kiran Kumar K , "Sunil Kumar Kori" , Satha Rao , "Qiming Yang" , Wenjun Wu , Jiawen Wu , Jian Wang , "Thomas Monjalon" , Ferruh Yigit , Andrew Rybchenko CC: , , , , Subject: [PATCH] ethdev: queue rate parameter changed from 16b to 32b Date: Fri, 23 Sep 2022 09:16:58 -0400 Message-ID: <1663939018-18898-1-git-send-email-skoteshwar@marvell.com> X-Mailer: git-send-email 1.8.3.1 MIME-Version: 1.0 X-Proofpoint-GUID: YkvnzRtofyUJhJhtQonEvhQk4Aw4hYRg X-Proofpoint-ORIG-GUID: YkvnzRtofyUJhJhtQonEvhQk4Aw4hYRg X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.895,Hydra:6.0.528,FMLib:17.11.122.1 definitions=2022-09-23_04,2022-09-22_02,2022-06-22_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Satha Rao The rate parameter was modified to uint32_t so that it can express rates higher than 64 Gbps.
Change-Id: I7115c22a4dfdda84d820b221bf33839a7b57f2cd Signed-off-by: Satha Rao --- app/test-pmd/cmdline.c | 8 ++++---- app/test-pmd/config.c | 4 ++-- app/test-pmd/testpmd.h | 4 ++-- drivers/net/bnxt/rte_pmd_bnxt.c | 4 ++-- drivers/net/bnxt/rte_pmd_bnxt.h | 2 +- drivers/net/cnxk/cnxk_ethdev.h | 19 ++++++++----------- drivers/net/cnxk/cnxk_tm.c | 4 ++-- drivers/net/ixgbe/ixgbe_ethdev.c | 4 ++-- drivers/net/ixgbe/ixgbe_ethdev.h | 4 ++-- drivers/net/ixgbe/rte_pmd_ixgbe.c | 2 +- drivers/net/ixgbe/rte_pmd_ixgbe.h | 2 +- drivers/net/txgbe/txgbe_ethdev.c | 2 +- drivers/net/txgbe/txgbe_ethdev.h | 2 +- lib/ethdev/ethdev_driver.h | 2 +- lib/ethdev/rte_ethdev.c | 2 +- lib/ethdev/rte_ethdev.h | 2 +- 16 files changed, 32 insertions(+), 35 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 51321de..adfdc1d 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -8106,7 +8106,7 @@ struct cmd_queue_rate_limit_result { cmdline_fixed_string_t queue; uint8_t queue_num; cmdline_fixed_string_t rate; - uint16_t rate_num; + uint32_t rate_num; }; static void cmd_queue_rate_limit_parsed(void *parsed_result, @@ -8147,7 +8147,7 @@ static void cmd_queue_rate_limit_parsed(void *parsed_result, rate, "rate"); static cmdline_parse_token_num_t cmd_queue_rate_limit_ratenum = TOKEN_NUM_INITIALIZER(struct cmd_queue_rate_limit_result, - rate_num, RTE_UINT16); + rate_num, RTE_UINT32); static cmdline_parse_inst_t cmd_queue_rate_limit = { .f = cmd_queue_rate_limit_parsed, @@ -8174,7 +8174,7 @@ struct cmd_vf_rate_limit_result { cmdline_fixed_string_t vf; uint8_t vf_num; cmdline_fixed_string_t rate; - uint16_t rate_num; + uint32_t rate_num; cmdline_fixed_string_t q_msk; uint64_t q_msk_val; }; @@ -8218,7 +8218,7 @@ static void cmd_vf_rate_limit_parsed(void *parsed_result, rate, "rate"); static cmdline_parse_token_num_t cmd_vf_rate_limit_ratenum = TOKEN_NUM_INITIALIZER(struct cmd_vf_rate_limit_result, - rate_num, RTE_UINT16); + rate_num, RTE_UINT32); static 
From patchwork Fri Sep 23 13:45:15 2022
X-Patchwork-Submitter: Satha Koteswara Rao Kottidi
X-Patchwork-Id: 116736
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
To: Aman Singh, Yuying Zhang, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Qiming Yang, Wenjun Wu, Jiawen Wu, Jian Wang, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v2] ethdev: queue rate parameter changed from 16b to 32b
Date: Fri, 23 Sep 2022 09:45:15 -0400
Message-ID: <1663940715-19619-1-git-send-email-skoteshwar@marvell.com>
In-Reply-To: <1663939018-18898-1-git-send-email-skoteshwar@marvell.com>
References: <1663939018-18898-1-git-send-email-skoteshwar@marvell.com>
List-Id: DPDK patches and discussions
From: Satha Rao

The rate parameter is changed to uint32_t so that rates above 64 Gbps
(the ceiling of a 16-bit value expressed in Mbps) can be configured.

Signed-off-by: Satha Rao
---
v2: Fixed checkpatch warnings

 app/test-pmd/cmdline.c            | 8 ++++----
 app/test-pmd/config.c             | 4 ++--
 app/test-pmd/testpmd.h            | 4 ++--
 drivers/net/bnxt/rte_pmd_bnxt.c   | 4 ++--
 drivers/net/bnxt/rte_pmd_bnxt.h   | 2 +-
 drivers/net/cnxk/cnxk_ethdev.h    | 19 ++++++++-----------
 drivers/net/cnxk/cnxk_tm.c        | 4 ++--
 drivers/net/ixgbe/ixgbe_ethdev.c  | 4 ++--
 drivers/net/ixgbe/ixgbe_ethdev.h  | 4 ++--
 drivers/net/ixgbe/rte_pmd_ixgbe.c | 2 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.h | 2 +-
 drivers/net/txgbe/txgbe_ethdev.c  | 2 +-
 drivers/net/txgbe/txgbe_ethdev.h  | 2 +-
 lib/ethdev/ethdev_driver.h        | 2 +-
 lib/ethdev/rte_ethdev.c           | 2 +-
 lib/ethdev/rte_ethdev.h           | 2 +-
 16 files changed, 32 insertions(+), 35 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 51321de..adfdc1d 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -8106,7 +8106,7 @@ struct cmd_queue_rate_limit_result {
 	cmdline_fixed_string_t queue;
 	uint8_t queue_num;
 	cmdline_fixed_string_t rate;
-	uint16_t rate_num;
+	uint32_t rate_num;
 };
 
 static void cmd_queue_rate_limit_parsed(void *parsed_result,
@@ -8147,7 +8147,7 @@ static void cmd_queue_rate_limit_parsed(void *parsed_result,
 				 rate, "rate");
 static cmdline_parse_token_num_t cmd_queue_rate_limit_ratenum =
 	TOKEN_NUM_INITIALIZER(struct cmd_queue_rate_limit_result,
-				rate_num, RTE_UINT16);
+				rate_num, RTE_UINT32);
 
 static cmdline_parse_inst_t cmd_queue_rate_limit = {
 	.f = cmd_queue_rate_limit_parsed,
@@ -8174,7 +8174,7 @@ struct cmd_vf_rate_limit_result {
 	cmdline_fixed_string_t vf;
 	uint8_t vf_num;
 	cmdline_fixed_string_t rate;
-	uint16_t rate_num;
+	uint32_t rate_num;
 	cmdline_fixed_string_t q_msk;
 	uint64_t q_msk_val;
 };
@@ -8218,7 +8218,7 @@ static void cmd_vf_rate_limit_parsed(void *parsed_result,
 				 rate, "rate");
 static cmdline_parse_token_num_t cmd_vf_rate_limit_ratenum =
 	TOKEN_NUM_INITIALIZER(struct cmd_vf_rate_limit_result,
-				rate_num, RTE_UINT16);
+				rate_num, RTE_UINT32);
 static cmdline_parse_token_string_t cmd_vf_rate_limit_q_msk =
 	TOKEN_STRING_INITIALIZER(struct cmd_vf_rate_limit_result,
				q_msk, "queue_mask");

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index c90cdfe..6dd543d 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5914,7 +5914,7 @@ struct igb_ring_desc_16_bytes {
 }
 
 int
-set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
+set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint32_t rate)
 {
 	int diag;
 	struct rte_eth_link link;
@@ -5942,7 +5942,7 @@ struct igb_ring_desc_16_bytes {
 }
 
 int
-set_vf_rate_limit(portid_t port_id, uint16_t vf, uint16_t rate, uint64_t q_msk)
+set_vf_rate_limit(portid_t port_id, uint16_t vf, uint32_t rate, uint64_t q_msk)
 {
 	int diag = -ENOTSUP;

diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ddf5e21..0af3aa1 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1097,8 +1097,8 @@ void port_rss_reta_info(portid_t port_id,
 		  uint16_t nb_rx_desc, unsigned int socket_id,
 		  struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp);
 
-int set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate);
-int set_vf_rate_limit(portid_t port_id, uint16_t vf, uint16_t rate,
+int set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint32_t rate);
+int set_vf_rate_limit(portid_t port_id, uint16_t vf, uint32_t rate,
 		      uint64_t q_msk);
 
 int set_rxq_avail_thresh(portid_t port_id, uint16_t queue_id,

diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index 77ecbef..4dc38a2 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -172,12 +172,12 @@ int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf,
 }
 
 int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf,
-				uint16_t tx_rate, uint64_t q_msk)
+				uint32_t tx_rate, uint64_t q_msk)
 {
 	struct rte_eth_dev *eth_dev;
 	struct rte_eth_dev_info dev_info;
 	struct bnxt *bp;
-	uint16_t tot_rate = 0;
+	uint32_t tot_rate = 0;
 	uint64_t idx;
 	int rc;

diff --git a/drivers/net/bnxt/rte_pmd_bnxt.h b/drivers/net/bnxt/rte_pmd_bnxt.h
index 86b8d71..174c18a 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.h
+++ b/drivers/net/bnxt/rte_pmd_bnxt.h
@@ -184,7 +184,7 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
  *   - (-EINVAL) if *vf* or *mac_addr* is invalid.
  */
 int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf,
-				uint16_t tx_rate, uint64_t q_msk);
+				uint32_t tx_rate, uint64_t q_msk);
 
 /**
  * Get VF's statistics

diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index c09e9bf..17c820a 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -557,17 +557,14 @@ int cnxk_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
 uint64_t cnxk_nix_rxq_mbuf_setup(struct cnxk_eth_dev *dev);
 int cnxk_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *ops);
-int cnxk_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
-				     uint16_t queue_idx, uint16_t tx_rate);
-int cnxk_nix_tm_mark_vlan_dei(struct rte_eth_dev *eth_dev, int mark_green,
-			      int mark_yellow, int mark_red,
-			      struct rte_tm_error *error);
-int cnxk_nix_tm_mark_ip_ecn(struct rte_eth_dev *eth_dev, int mark_green,
-			    int mark_yellow, int mark_red,
-			    struct rte_tm_error *error);
-int cnxk_nix_tm_mark_ip_dscp(struct rte_eth_dev *eth_dev, int mark_green,
-			     int mark_yellow, int mark_red,
-			     struct rte_tm_error *error);
+int cnxk_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev, uint16_t queue_idx,
+				     uint32_t tx_rate);
+int cnxk_nix_tm_mark_vlan_dei(struct rte_eth_dev *eth_dev, int mark_green, int mark_yellow,
+			      int mark_red, struct rte_tm_error *error);
+int cnxk_nix_tm_mark_ip_ecn(struct rte_eth_dev *eth_dev, int mark_green, int mark_yellow,
+			    int mark_red, struct rte_tm_error *error);
+int cnxk_nix_tm_mark_ip_dscp(struct rte_eth_dev *eth_dev, int mark_green, int mark_yellow,
+			     int mark_red, struct rte_tm_error *error);
 
 /* MTR */
 int cnxk_nix_mtr_ops_get(struct rte_eth_dev *dev, void *ops);

diff --git a/drivers/net/cnxk/cnxk_tm.c b/drivers/net/cnxk/cnxk_tm.c
index d45e70a..a36f45d 100644
--- a/drivers/net/cnxk/cnxk_tm.c
+++ b/drivers/net/cnxk/cnxk_tm.c
@@ -750,8 +750,8 @@ struct rte_tm_ops cnxk_tm_ops = {
 }
 
 int
-cnxk_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
-				 uint16_t queue_idx, uint16_t tx_rate_mbps)
+cnxk_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev, uint16_t queue_idx,
+				 uint32_t tx_rate_mbps)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	uint64_t tx_rate = tx_rate_mbps * (uint64_t)1E6;

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 1dfad0e..9ff8ee0 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -2475,7 +2475,7 @@ static int eth_ixgbevf_pci_remove(struct rte_pci_device *pci_dev)
 
 int
 ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
-			uint16_t tx_rate, uint64_t q_msk)
+			uint32_t tx_rate, uint64_t q_msk)
 {
 	struct ixgbe_hw *hw;
 	struct ixgbe_vf_info *vfinfo;
@@ -6090,7 +6090,7 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
 
 int
 ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
-			   uint16_t queue_idx, uint16_t tx_rate)
+			   uint16_t queue_idx, uint32_t tx_rate)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t rf_dec, rf_int;

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 0773a7e..b4db3f4 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -753,13 +753,13 @@ void ixgbe_fdir_stats_get(struct rte_eth_dev *dev,
 int ixgbe_vt_check(struct ixgbe_hw *hw);
 int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
-			    uint16_t tx_rate, uint64_t q_msk);
+			    uint32_t tx_rate, uint64_t q_msk);
 bool is_ixgbe_supported(struct rte_eth_dev *dev);
 int ixgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
 void ixgbe_tm_conf_init(struct rte_eth_dev *dev);
 void ixgbe_tm_conf_uninit(struct rte_eth_dev *dev);
 int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev, uint16_t queue_idx,
-			       uint16_t tx_rate);
+			       uint32_t tx_rate);
 int ixgbe_rss_conf_init(struct ixgbe_rte_flow_rss_conf *out,
 			const struct rte_flow_action_rss *in);
 int ixgbe_action_rss_same(const struct rte_flow_action_rss *comp,

diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index 9729f85..4ff7f37 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -498,7 +498,7 @@ int
 rte_pmd_ixgbe_set_vf_rate_limit(uint16_t port, uint16_t vf,
-				uint16_t tx_rate, uint64_t q_msk)
+				uint32_t tx_rate, uint64_t q_msk)
 {
 	struct rte_eth_dev *dev;

diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 426fe58..7ca1126 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -380,7 +380,7 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
  *   - (-EINVAL) if bad parameter.
  */
 int rte_pmd_ixgbe_set_vf_rate_limit(uint16_t port, uint16_t vf,
-				    uint16_t tx_rate, uint64_t q_msk);
+				    uint32_t tx_rate, uint64_t q_msk);
 
 /**
  * Set all the TCs' bandwidth weight.

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 4422472..86ef979 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3764,7 +3764,7 @@ static int txgbe_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
 
 int
 txgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
-			   uint16_t queue_idx, uint16_t tx_rate)
+			   uint16_t queue_idx, uint32_t tx_rate)
 {
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	uint32_t bcnrc_val;

diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index e425ab4..5171a6c 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -586,7 +586,7 @@ int txgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
 void txgbe_tm_conf_init(struct rte_eth_dev *dev);
 void txgbe_tm_conf_uninit(struct rte_eth_dev *dev);
 int txgbe_set_queue_rate_limit(struct rte_eth_dev *dev, uint16_t queue_idx,
-			       uint16_t tx_rate);
+			       uint32_t tx_rate);
 int txgbe_rss_conf_init(struct txgbe_rte_flow_rss_conf *out,
 			const struct rte_flow_action_rss *in);
 int txgbe_action_rss_same(const struct rte_flow_action_rss *comp,

diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index a0e0b2a..a89450c 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -598,7 +598,7 @@ typedef int (*eth_uc_all_hash_table_set_t)(struct rte_eth_dev *dev,
 
 /** @internal Set queue Tx rate. */
 typedef int (*eth_set_queue_rate_limit_t)(struct rte_eth_dev *dev,
 					  uint16_t queue_idx,
-					  uint16_t tx_rate);
+					  uint32_t tx_rate);
 
 /** @internal Add tunneling UDP port.
 */
 typedef int (*eth_udp_tunnel_port_add_t)(struct rte_eth_dev *dev,

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1979dc0..4b11dae 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4388,7 +4388,7 @@ enum {
 }
 
 int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
-				 uint16_t tx_rate)
+				 uint32_t tx_rate)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index b62ac5b..7149dd7 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4165,7 +4165,7 @@ int rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr,
  *   - (-EINVAL) if bad parameter.
  */
 int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
-				 uint16_t tx_rate);
+				 uint32_t tx_rate);
 
 /**
  * Configuration of Receive Side Scaling hash computation of Ethernet device.

From patchwork Fri Sep 23 14:43:16 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116746
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: Alexander Kozyrev
Subject: [PATCH 09/27] ethdev: add meter profiles/policies config
Date: Fri, 23 Sep 2022 17:43:16 +0300
Message-ID: <20220923144334.27736-10-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Alexander Kozyrev

Provide the ability to specify the number of meter profiles and policies,
alongside the number of meters, during Flow engine configuration.

Signed-off-by: Alexander Kozyrev
---
 lib/ethdev/rte_flow.h | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a79f1e7ef0..abb475bdee 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4898,10 +4898,20 @@ struct rte_flow_port_info {
 	 */
 	uint32_t max_nb_aging_objects;
 	/**
-	 * Maximum number traffic meters.
+	 * Maximum number of traffic meters.
 	 * @see RTE_FLOW_ACTION_TYPE_METER
 	 */
 	uint32_t max_nb_meters;
+	/**
+	 * Maximum number of traffic meter profiles.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t max_nb_meter_profiles;
+	/**
+	 * Maximum number of traffic meter policies.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t max_nb_meter_policies;
 };
 
 /**
@@ -4971,6 +4981,16 @@ struct rte_flow_port_attr {
 	 * @see RTE_FLOW_ACTION_TYPE_METER
 	 */
 	uint32_t nb_meters;
+	/**
+	 * Number of traffic meter profiles to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meter_profiles;
+	/**
+	 * Number of traffic meter policies to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meter_policies;
 };
 
 /**

From patchwork Fri Sep 23 14:43:27 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116759
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH 20/27] lib/ethdev: add connection tracking configuration
Date: Fri, 23 Sep 2022 17:43:27 +0300
Message-ID: <20220923144334.27736-21-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
This commit adds the maximum connection tracking number configuration for the async flow engine.

Signed-off-by: Suanming Mou
---
 lib/ethdev/rte_flow.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index abb475bdee..e9a1bce38b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4991,6 +4991,11 @@ struct rte_flow_port_attr {
	 * @see RTE_FLOW_ACTION_TYPE_METER
	 */
	uint32_t nb_meter_policies;
+	/**
+	 * Number of connection tracking objects to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_CONNTRACK
+	 */
+	uint32_t nb_cts;
 };

 /**

From patchwork Sat Sep 24 13:57:56 2022
X-Patchwork-Submitter: Akhil Goyal
X-Patchwork-Id: 116819
X-Patchwork-Delegate: gakhil@marvell.com
From: Akhil Goyal
Subject: [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes
Date: Sat, 24 Sep 2022 19:27:56 +0530
Message-ID: <20220924135758.3402392-2-gakhil@marvell.com>
In-Reply-To: <20220924135758.3402392-1-gakhil@marvell.com>
References: <20220416192530.173895-8-gakhil@marvell.com> <20220924135758.3402392-1-gakhil@marvell.com>
From: Vamsi Attunuru

Patch adds new event subtypes for notifying expiry events upon reaching IPsec SA soft packet expiry and hard packet/byte expiry limits.

Signed-off-by: Vamsi Attunuru
Signed-off-by: Akhil Goyal
Acked-by: Thomas Monjalon
---
 lib/ethdev/rte_ethdev.h | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 2e783536c1..d730676a0e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3875,8 +3875,26 @@ enum rte_eth_event_ipsec_subtype {
	RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW,
	/** Soft time expiry of SA */
	RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY,
-	/** Soft byte expiry of SA */
+	/**
+	 * Soft byte expiry of SA determined by @ref bytes_soft_limit
+	 * defined in @ref rte_security_ipsec_lifetime
+	 */
	RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY,
+	/**
+	 * Soft packet expiry of SA determined by @ref packets_soft_limit
+	 * defined in @ref rte_security_ipsec_lifetime
+	 */
+	RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
+	/**
+	 * Hard byte expiry of SA determined by @ref bytes_hard_limit
+	 * defined in @ref rte_security_ipsec_lifetime
+	 */
+	RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
+	/**
+	 * Hard packet expiry of SA determined by @ref packets_hard_limit
+	 * defined in @ref rte_security_ipsec_lifetime
+	 */
+	RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
	/** Max value of this enum */
	RTE_ETH_EVENT_IPSEC_MAX
 };
@@ -3898,6 +3916,9 @@ struct rte_eth_event_ipsec_desc {
	 * - @ref RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW
	 * - @ref RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY
	 * - @ref RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY
+	 * - @ref RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY
+	 * - @ref RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY
+	 * - @ref RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY
	 *
	 * @see struct rte_security_session_conf
	 *

From patchwork Mon Sep 26 09:40:36 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 116834
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Yuan Wang
To: dev@dpdk.org, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ray Kinsella
Cc: xiaoyun.li@intel.com, aman.deep.singh@intel.com, yuying.zhang@intel.com, qi.z.zhang@intel.com, qiming.yang@intel.com, jerinjacobk@gmail.com, viacheslavo@nvidia.com, stephen@networkplumber.org, xuan.ding@intel.com, hpothula@marvell.com, yaqi.tang@intel.com, Yuan Wang, Wenxuan Wu
Subject: [PATCH v5 1/4] ethdev: introduce protocol header API
Date: Mon, 26 Sep 2022 17:40:36 +0800
Message-Id: <20220926094039.1572741-2-yuanx.wang@intel.com>
In-Reply-To: <20220926094039.1572741-1-yuanx.wang@intel.com>
References: <20220812181552.2908067-1-yuanx.wang@intel.com> <20220926094039.1572741-1-yuanx.wang@intel.com>

Add a new ethdev API to retrieve supported protocol headers of a PMD, which helps to configure protocol header based buffer split.

Signed-off-by: Yuan Wang
Signed-off-by: Xuan Ding
Signed-off-by: Wenxuan Wu
Reviewed-by: Andrew Rybchenko
---
 doc/guides/rel_notes/release_22_11.rst |  5 ++++
 lib/ethdev/ethdev_driver.h | 15 ++++++++++++
 lib/ethdev/rte_ethdev.c | 33 ++++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.h | 30 +++++++++++++++++++++++
 lib/ethdev/version.map | 3 +++
 5 files changed, 86 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 235ac9bf94..8e5bdde46a 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -59,6 +59,11 @@ New Features

 * Added support to set device link down/up.

+* **Added new ethdev API for PMD to get buffer split supported protocol types.**
+
+  Added ``rte_eth_buffer_split_get_supported_hdr_ptypes()``, to get supported
+  header protocols of a PMD to split.
+
 Removed Items
 -------------

diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 8cd8eb8685..791b264610 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1055,6 +1055,18 @@ typedef int (*eth_ip_reassembly_conf_get_t)(struct rte_eth_dev *dev,
 typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
		const struct rte_eth_ip_reassembly_params *conf);

+/**
+ * @internal
+ * Get supported header protocols of a PMD to split.
+ *
+ * @param dev
+ *   Ethdev handle of port.
+ *
+ * @return
+ *   An array pointer to store supported protocol headers.
+ */
+typedef const uint32_t *(*eth_buffer_split_supported_hdr_ptypes_get_t)(struct rte_eth_dev *dev);
+
 /**
  * @internal
  * Dump private info from device to a file.
@@ -1302,6 +1314,9 @@ struct eth_dev_ops {
	/** Set IP reassembly configuration */
	eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;

+	/** Get supported header ptypes to split */
+	eth_buffer_split_supported_hdr_ptypes_get_t buffer_split_supported_hdr_ptypes_get;
+
	/** Dump private info from device */
	eth_dev_priv_dump_t eth_dev_priv_dump;

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0c2c1088c0..1f0a7f8f3f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6002,6 +6002,39 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
	return eth_err(port_id, (*dev->dev_ops->eth_dev_priv_dump)(dev, file));
 }

+int
+rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes, int num)
+{
+	int i, j;
+	struct rte_eth_dev *dev;
+	const uint32_t *all_types;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+
+	if (ptypes == NULL && num > 0) {
+		RTE_ETHDEV_LOG(ERR,
+			"Cannot get ethdev port %u supported header protocol types to NULL when array size is non zero\n",
+			port_id);
+		return -EINVAL;
+	}
+
+	if (*dev->dev_ops->buffer_split_supported_hdr_ptypes_get == NULL)
+		return -ENOTSUP;
+	all_types = (*dev->dev_ops->buffer_split_supported_hdr_ptypes_get)(dev);
+
+	if (!all_types)
+		return 0;
+
+	for (i = 0, j = 0; all_types[i] != RTE_PTYPE_UNKNOWN; ++i) {
+		if (j < num)
+			ptypes[j] = all_types[i];
+		j++;
+	}
+
+	return j;
+}
+
 RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);

 RTE_INIT(ethdev_init_telemetry)

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 45d17ddd13..c440e3863a 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5924,6 +5924,36 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
	return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
 }

+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get supported header protocols to split on Rx.
+ *
+ * When a packet type is announced to be split, it *must* be supported by
+ * the PMD. For instance, if eth-ipv4, eth-ipv4-udp is announced, the PMD must
+ * return the following packet types for these packets:
+ *   - Ether/IPv4 -> RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4
+ *   - Ether/IPv4/UDP -> RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP
+ *
+ * @param port_id
+ *   The port identifier of the device.
+ * @param[out] ptypes
+ *   An array pointer to store supported protocol headers, allocated by caller.
+ *   These ptypes are composed with RTE_PTYPE_*.
+ * @param num
+ *   Size of the array pointed by param ptypes.
+ * @return
+ *   - (>=0) Number of supported ptypes. If the number of types exceeds num,
+ *     only num entries will be filled into the ptypes array, but the full
+ *     count of supported ptypes will be returned.
+ *   - (-ENOTSUP) if header protocol is not supported by device.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-EINVAL) if bad parameter.
+ */
+__rte_experimental
+int rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes, int num);
+
 #ifdef __cplusplus
 }
 #endif

diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 03f52fee91..e496c8d938 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -285,6 +285,9 @@ EXPERIMENTAL {
	rte_mtr_color_in_protocol_priority_get;
	rte_mtr_color_in_protocol_set;
	rte_mtr_meter_vlan_table_update;
+
+	# added in 22.11
+	rte_eth_buffer_split_get_supported_hdr_ptypes;
 };

 INTERNAL {

From patchwork Mon Sep 26 09:40:37 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 116835
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Yuan Wang
To: dev@dpdk.org, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: mdr@ashroe.eu, xiaoyun.li@intel.com, aman.deep.singh@intel.com, yuying.zhang@intel.com, qi.z.zhang@intel.com, qiming.yang@intel.com, jerinjacobk@gmail.com, viacheslavo@nvidia.com, stephen@networkplumber.org, xuan.ding@intel.com, hpothula@marvell.com, yaqi.tang@intel.com, Yuan Wang, Wenxuan Wu
Subject: [PATCH v5 2/4] ethdev: introduce protocol hdr based buffer split
Date: Mon, 26 Sep 2022 17:40:37 +0800
Message-Id: <20220926094039.1572741-3-yuanx.wang@intel.com>
In-Reply-To: <20220926094039.1572741-1-yuanx.wang@intel.com>
References: <20220812181552.2908067-1-yuanx.wang@intel.com> <20220926094039.1572741-1-yuanx.wang@intel.com>

Currently, Rx buffer split supports length based split. With the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and an Rx packet segment configured, the PMD will be able to split the received packets into multiple segments.

However, length based buffer split is not suitable for NICs that do split based on protocol headers. Given an arbitrarily variable length in an Rx packet segment, it is almost impossible to pass a fixed protocol header to the driver.
Besides, the existence of tunneling results in various packet compositions, which makes the situation even worse.

This patch extends the current buffer split to support protocol header based buffer split. A new proto_hdr field is introduced in the reserved field of the rte_eth_rxseg_split structure to specify the protocol header. The proto_hdr field defines the split position of the packet: splitting will always happen after the protocol header defined in the Rx packet segment. When the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and the corresponding protocol header is configured, the driver will split the ingress packets into multiple segments.

Examples of proto_hdr field definitions:

To split after ETH-IPV4-UDP, it should be defined as
RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP

For inner ETH-IPV4-UDP, it should be defined as
RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP

struct rte_eth_rxseg_split {
	struct rte_mempool *mp; /* memory pools to allocate segment from */
	uint16_t length; /* segment maximal data length, configures split point */
	uint16_t offset; /* data offset from beginning of mbuf data buffer */
	/**
	 * Proto_hdr defines a bit mask of the protocol sequence as
	 * RTE_PTYPE_*, configures split point. The last RTE_PTYPE*
	 * in the mask indicates the split position.
	 * For non-tunneling packets, the complete protocol sequence
	 * should be defined.
	 * For tunneling packets, for simplicity, only the tunnel and
	 * inner protocol sequence should be defined.
	 */
	uint32_t proto_hdr;
};

If protocol header split is supported by a PMD, the rte_eth_buffer_split_get_supported_hdr_ptypes function can be used to obtain a list of these protocol headers.
For example, let's suppose we configured the Rx queue with the following segments:

  seg0 - pool0, proto_hdr0=RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4, off0=2B
  seg1 - pool1, proto_hdr1=RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP, off1=128B
  seg2 - pool2, off2=0B

A packet consisting of ETH_IPV4_UDP_PAYLOAD will be split as follows:

  seg0 - ipv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
  seg1 - udp header @ 128 in mbuf from pool1
  seg2 - payload @ 0 in mbuf from pool2

Note: the NIC will only do the split when the packets exactly match all the protocol headers in the segments. For example, if ARP packets are received with the above config, the NIC won't do the split for ARP packets since they do not contain an ipv4 header and udp header. These packets will be put into the last valid mempool, with zero offset.

Now buffer split can be configured in two modes. For length based buffer split, the mp, length and offset fields in the Rx packet segment should be configured, while the proto_hdr field will be ignored. For protocol header based buffer split, the mp, offset and proto_hdr fields should be configured, while the length field will be ignored.

The split limitations imposed by the underlying driver are reported in the rte_eth_dev_info->rx_seg_capa field. The memory attributes for the split parts may differ as well: DPDK memory and external memory, respectively.
Signed-off-by: Yuan Wang
Signed-off-by: Xuan Ding
Signed-off-by: Wenxuan Wu
---
 doc/guides/rel_notes/release_22_11.rst |  7 +++
 lib/ethdev/rte_ethdev.c | 74 ++++++++++++++++++++++----
 lib/ethdev/rte_ethdev.h | 29 +++++++++-
 3 files changed, 98 insertions(+), 12 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 8e5bdde46a..cce1f6e50c 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -64,6 +64,13 @@ New Features
   Added ``rte_eth_buffer_split_get_supported_hdr_ptypes()``, to get supported
   header protocols of a PMD to split.

+* **Added protocol header based buffer split.**
+
+  Ethdev: The ``reserved`` field in the ``rte_eth_rxseg_split`` structure is
+  replaced with ``proto_hdr`` to support protocol header based buffer split.
+  User can choose length or protocol header to configure buffer split
+  according to NIC's capability.
+
 Removed Items
 -------------

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1f0a7f8f3f..27ec19faed 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1649,9 +1649,10 @@ rte_eth_dev_is_removed(uint16_t port_id)
 }

 static int
-rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
-			uint16_t n_seg, uint32_t *mbp_buf_size,
-			const struct rte_eth_dev_info *dev_info)
+rte_eth_rx_queue_check_split(uint16_t port_id,
+			const struct rte_eth_rxseg_split *rx_seg,
+			uint16_t n_seg, uint32_t *mbp_buf_size,
+			const struct rte_eth_dev_info *dev_info)
 {
	const struct rte_eth_rxseg_capa *seg_capa = &dev_info->rx_seg_capa;
	struct rte_mempool *mp_first;
@@ -1674,6 +1675,7 @@ rte_eth_rx_queue_check_split(uint16_t port_id,
		struct rte_mempool *mpl = rx_seg[seg_idx].mp;
		uint32_t length = rx_seg[seg_idx].length;
		uint32_t offset = rx_seg[seg_idx].offset;
+		uint32_t proto_hdr = rx_seg[seg_idx].proto_hdr;

		if (mpl == NULL) {
			RTE_ETHDEV_LOG(ERR, "null mempool pointer\n");
@@ -1707,13 +1709,63 @@ rte_eth_rx_queue_check_split(uint16_t port_id,
		}
		offset += seg_idx != 0 ? 0 : RTE_PKTMBUF_HEADROOM;
		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
-		length = length != 0 ? length : *mbp_buf_size;
-		if (*mbp_buf_size < length + offset) {
-			RTE_ETHDEV_LOG(ERR,
-				"%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n",
-				mpl->name, *mbp_buf_size,
-				length + offset, length, offset);
-			return -EINVAL;
+
+		if (proto_hdr > 0) {
+			/* Split based on protocol headers. */
+
+			/* skip the payload */
+			if (proto_hdr == RTE_PTYPE_ALL_MASK)
+				continue;
+
+			int ptype_cnt;
+
+			ptype_cnt = rte_eth_buffer_split_get_supported_hdr_ptypes(port_id, NULL, 0);
+			if (ptype_cnt <= 0) {
+				RTE_ETHDEV_LOG(ERR,
+					"Port %u failed to get supported buffer split header protocols\n",
+					port_id);
+				return -EINVAL;
+			}
+
+			uint32_t ptypes[ptype_cnt];
+			int i;
+
+			ptype_cnt = rte_eth_buffer_split_get_supported_hdr_ptypes(port_id,
+								ptypes, ptype_cnt);
+			if (ptype_cnt < 0) {
+				RTE_ETHDEV_LOG(ERR,
+					"Port %u failed to get supported buffer split header protocols\n",
+					port_id);
+				return -EINVAL;
+			}
+
+			for (i = 0; i < ptype_cnt; i++)
+				if (ptypes[i] == proto_hdr)
+					break;
+			if (i == ptype_cnt) {
+				RTE_ETHDEV_LOG(ERR,
+					"Requested Rx split header protocol 0x%x is not supported.\n",
+					proto_hdr);
+				return -EINVAL;
+			}
+
+			if (*mbp_buf_size < offset) {
+				RTE_ETHDEV_LOG(ERR,
+					"%s mbuf_data_room_size %u < %u (segment offset)\n",
+					mpl->name, *mbp_buf_size,
+					offset);
+				return -EINVAL;
+			}
+		} else {
+			/* Split at fixed length. */
+			length = length != 0 ? length : *mbp_buf_size;
+			if (*mbp_buf_size < length + offset) {
+				RTE_ETHDEV_LOG(ERR,
+					"%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n",
+					mpl->name, *mbp_buf_size,
+					length + offset, length, offset);
+				return -EINVAL;
+			}
		}
	}
	return 0;
@@ -1793,7 +1845,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
		n_seg = rx_conf->rx_nseg;

		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
-			ret = rte_eth_rx_queue_check_split(rx_seg, n_seg,
+			ret = rte_eth_rx_queue_check_split(port_id, rx_seg, n_seg,
							   &mbp_buf_size,
							   &dev_info);
			if (ret != 0)

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index c440e3863a..ba7c11f735 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -994,6 +994,9 @@ struct rte_eth_txmode {
 *   specified in the first array element, the second buffer, from the
 *   pool in the second element, and so on.
 *
+ * - The proto_hdrs in the elements define the split position of
+ *   received packets.
+ *
 * - The offsets from the segment description elements specify
 *   the data offset from the buffer beginning except the first mbuf.
 *   The first segment offset is added with RTE_PKTMBUF_HEADROOM.
@@ -1015,12 +1018,36 @@ struct rte_eth_txmode {
 *     - pool from the last valid element
 *     - the buffer size from this pool
 *     - zero offset
+ *
+ * - Length based buffer split:
+ *     - mp, length, offset should be configured.
+ *     - The proto_hdr field will be ignored.
+ *
+ * - Protocol header based buffer split:
+ *     - mp, offset, proto_hdr should be configured.
+ *     - The length field will be ignored.
+ *
+ * - For protocol header based buffer split, if the received packets
+ *   don't exactly match all protocol headers in the elements, packets
+ *   will not be split.
+ *   These packets will be put into:
+ *     - pool from the last valid element
+ *     - the buffer size from this pool
+ *     - zero offset
 */
 struct rte_eth_rxseg_split {
	struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
	uint16_t length; /**< Segment data length, configures split point. */
	uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
-	uint32_t reserved; /**< Reserved field. */
+	/**
+	 * Proto_hdr defines a bit mask of the protocol sequence as RTE_PTYPE_*,
+	 * configures split point. The last RTE_PTYPE* in the mask indicates the
+	 * split position.
+	 * For non-tunneling packets, the complete protocol sequence should be defined.
+	 * For tunneling packets, for simplicity, only the tunnel and inner
+	 * protocol sequence should be defined.
+	 */
+	uint32_t proto_hdr;
 };

 /**