Patch Detail
GET /api/patches/112304/?format=api
{ "id": 112304, "url": "http://patches.dpdk.org/api/patches/112304/?format=api", "web_url": "http://patches.dpdk.org/project/dpdk/patch/20220603124821.1148119-4-spiked@nvidia.com/", "project": { "id": 1, "url": "http://patches.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<20220603124821.1148119-4-spiked@nvidia.com>", "list_archive_url": "https://inbox.dpdk.org/dev/20220603124821.1148119-4-spiked@nvidia.com", "date": "2022-06-03T12:48:17", "name": "[v4,3/7] ethdev: introduce Rx queue based fill threshold", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "38b1129d70a81d6ea76f09b106193c61d6672450", "submitter": { "id": 2637, "url": "http://patches.dpdk.org/api/people/2637/?format=api", "name": "Spike Du", "email": "spiked@nvidia.com" }, "delegate": { "id": 3961, "url": "http://patches.dpdk.org/api/users/3961/?format=api", "username": "arybchenko", "first_name": "Andrew", "last_name": "Rybchenko", "email": "andrew.rybchenko@oktetlabs.ru" }, "mbox": "http://patches.dpdk.org/project/dpdk/patch/20220603124821.1148119-4-spiked@nvidia.com/mbox/", "series": [ { "id": 23319, "url": "http://patches.dpdk.org/api/series/23319/?format=api", "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=23319", "date": "2022-06-03T12:48:14", "name": "introduce per-queue fill threshold and host shaper", "version": 4, "mbox": "http://patches.dpdk.org/series/23319/mbox/" } ], "comments": "http://patches.dpdk.org/api/patches/112304/comments/", "check": "warning", "checks": "http://patches.dpdk.org/api/patches/112304/checks/", "tags": {}, "related": [], "headers": { "From": "Spike Du <spiked@nvidia.com>", "To": "<matan@nvidia.com>, <viacheslavo@nvidia.com>, <orika@nvidia.com>,\n <thomas@monjalon.net>, Wenzhuo Lu <wenzhuo.lu@intel.com>, Beilei Xing\n <beilei.xing@intel.com>, Bernard Iremonger <bernard.iremonger@intel.com>,\n \"Ray Kinsella\" <mdr@ashroe.eu>, Neil Horman <nhorman@tuxdriver.com>", "CC": "<andrew.rybchenko@oktetlabs.ru>, <stephen@networkplumber.org>,\n <mb@smartsharesystems.com>, <dev@dpdk.org>, <rasland@nvidia.com>", "Subject": "[PATCH v4 3/7] ethdev: introduce Rx queue based fill threshold", "Date": "Fri, 3 Jun 2022 15:48:17 +0300", "Message-ID": "<20220603124821.1148119-4-spiked@nvidia.com>", "X-Mailer": "git-send-email 2.27.0", "In-Reply-To": "<20220603124821.1148119-1-spiked@nvidia.com>", "References": "<20220524152041.737154-1-spiked@nvidia.com>\n <20220603124821.1148119-1-spiked@nvidia.com>", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "Content-Type": "text/plain", "List-Id": "DPDK patches and discussions <dev.dpdk.org>" }, "content": "Fill threshold describes the fullness of a Rx queue. If the Rx\nqueue fullness is above the threshold, the device will trigger the event\nRTE_ETH_EVENT_RX_FILL_THRESH.\nFill threshold is defined as a percentage of Rx queue size with valid\nvalue of [0,99].\nSetting fill threshold to 0 means disable it, which is the default.\nAdd fill threshold configuration and query driver callbacks in eth_dev_ops.\nAdd command line options to support fill_thresh per-rxq configure.\n- Command syntax:\n set port <port_id> rxq <rxq_id> fill_thresh <fill_thresh_num>\n\n- Example commands:\nTo configure fill_thresh as 30% of rxq size on port 1 rxq 0:\ntestpmd> set port 1 rxq 0 fill_thresh 30\n\nTo disable fill_thresh on port 1 rxq 0:\ntestpmd> set port 1 rxq 0 fill_thresh 0\n\nSigned-off-by: Spike Du <spiked@nvidia.com>\n---\n app/test-pmd/cmdline.c | 68 +++++++++++++++++++++++++++++++++++++++++++\n app/test-pmd/config.c | 21 ++++++++++++++\n app/test-pmd/testpmd.c | 18 ++++++++++++\n app/test-pmd/testpmd.h | 2 ++\n lib/ethdev/ethdev_driver.h | 22 ++++++++++++++\n lib/ethdev/rte_ethdev.c | 52 +++++++++++++++++++++++++++++++++\n lib/ethdev/rte_ethdev.h | 72 ++++++++++++++++++++++++++++++++++++++++++++++\n lib/ethdev/version.map | 2 ++\n 8 files changed, 257 insertions(+)", "diff": "diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c\nindex 
0410bad..918581e 100644\n--- a/app/test-pmd/cmdline.c\n+++ b/app/test-pmd/cmdline.c\n@@ -17823,6 +17823,73 @@ struct cmd_show_port_flow_transfer_proxy_result {\n \t}\n };\n \n+/* *** SET FILL THRESHOLD FOR A RXQ OF A PORT *** */\n+struct cmd_rxq_fill_thresh_result {\n+\tcmdline_fixed_string_t set;\n+\tcmdline_fixed_string_t port;\n+\tuint16_t port_num;\n+\tcmdline_fixed_string_t rxq;\n+\tuint16_t rxq_num;\n+\tcmdline_fixed_string_t fill_thresh;\n+\tuint16_t fill_thresh_num;\n+};\n+\n+static void cmd_rxq_fill_thresh_parsed(void *parsed_result,\n+\t\t__rte_unused struct cmdline *cl,\n+\t\t__rte_unused void *data)\n+{\n+\tstruct cmd_rxq_fill_thresh_result *res = parsed_result;\n+\tint ret = 0;\n+\n+\tif ((strcmp(res->set, \"set\") == 0) && (strcmp(res->port, \"port\") == 0)\n+\t && (strcmp(res->rxq, \"rxq\") == 0)\n+\t && (strcmp(res->fill_thresh, \"fill_thresh\") == 0))\n+\t\tret = set_rxq_fill_thresh(res->port_num, res->rxq_num,\n+\t\t\t\t res->fill_thresh_num);\n+\tif (ret < 0)\n+\t\tprintf(\"rxq_fill_thresh_cmd error: (%s)\\n\", strerror(-ret));\n+\n+}\n+\n+cmdline_parse_token_string_t cmd_rxq_fill_thresh_set =\n+\tTOKEN_STRING_INITIALIZER(struct cmd_rxq_fill_thresh_result,\n+\t\t\t\tset, \"set\");\n+cmdline_parse_token_string_t cmd_rxq_fill_thresh_port =\n+\tTOKEN_STRING_INITIALIZER(struct cmd_rxq_fill_thresh_result,\n+\t\t\t\tport, \"port\");\n+cmdline_parse_token_num_t cmd_rxq_fill_thresh_portnum =\n+\tTOKEN_NUM_INITIALIZER(struct cmd_rxq_fill_thresh_result,\n+\t\t\t\tport_num, RTE_UINT16);\n+cmdline_parse_token_string_t cmd_rxq_fill_thresh_rxq =\n+\tTOKEN_STRING_INITIALIZER(struct cmd_rxq_fill_thresh_result,\n+\t\t\t\trxq, \"rxq\");\n+cmdline_parse_token_num_t cmd_rxq_fill_thresh_rxqnum =\n+\tTOKEN_NUM_INITIALIZER(struct cmd_rxq_fill_thresh_result,\n+\t\t\t\trxq_num, RTE_UINT8);\n+cmdline_parse_token_string_t cmd_rxq_fill_thresh_fill_thresh =\n+\tTOKEN_STRING_INITIALIZER(struct cmd_rxq_fill_thresh_result,\n+\t\t\t\tfill_thresh, 
\"fill_thresh\");\n+cmdline_parse_token_num_t cmd_rxq_fill_thresh_fill_threshnum =\n+\tTOKEN_NUM_INITIALIZER(struct cmd_rxq_fill_thresh_result,\n+\t\t\t\tfill_thresh_num, RTE_UINT16);\n+\n+cmdline_parse_inst_t cmd_rxq_fill_thresh = {\n+\t.f = cmd_rxq_fill_thresh_parsed,\n+\t.data = (void *)0,\n+\t.help_str = \"set port <port_id> rxq <rxq_id> fill_thresh <fill_thresh_num>\"\n+\t\t\"Set fill_thresh for rxq on port_id\",\n+\t.tokens = {\n+\t\t(void *)&cmd_rxq_fill_thresh_set,\n+\t\t(void *)&cmd_rxq_fill_thresh_port,\n+\t\t(void *)&cmd_rxq_fill_thresh_portnum,\n+\t\t(void *)&cmd_rxq_fill_thresh_rxq,\n+\t\t(void *)&cmd_rxq_fill_thresh_rxqnum,\n+\t\t(void *)&cmd_rxq_fill_thresh_fill_thresh,\n+\t\t(void *)&cmd_rxq_fill_thresh_fill_threshnum,\n+\t\tNULL,\n+\t},\n+};\n+\n /* ******************************************************************************** */\n \n /* list of instructions */\n@@ -18110,6 +18177,7 @@ struct cmd_show_port_flow_transfer_proxy_result {\n \t(cmdline_parse_inst_t *)&cmd_show_capability,\n \t(cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,\n \t(cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,\n+\t(cmdline_parse_inst_t *)&cmd_rxq_fill_thresh,\n \tNULL,\n };\n \ndiff --git a/app/test-pmd/config.c b/app/test-pmd/config.c\nindex 1b1e738..d0c519b 100644\n--- a/app/test-pmd/config.c\n+++ b/app/test-pmd/config.c\n@@ -6342,3 +6342,24 @@ struct igb_ring_desc_16_bytes {\n \t\tprintf(\" %s\\n\", buf);\n \t}\n }\n+\n+int\n+set_rxq_fill_thresh(portid_t port_id, uint16_t queue_idx, uint16_t fill_thresh)\n+{\n+\tstruct rte_eth_link link;\n+\tint ret;\n+\n+\tif (port_id_is_invalid(port_id, ENABLED_WARN))\n+\t\treturn -EINVAL;\n+\tret = eth_link_get_nowait_print_err(port_id, &link);\n+\tif (ret < 0)\n+\t\treturn -EINVAL;\n+\tif (fill_thresh > 99)\n+\t\treturn -EINVAL;\n+\tret = rte_eth_rx_fill_thresh_set(port_id, queue_idx, fill_thresh);\n+\n+\tif (ret)\n+\t\treturn ret;\n+\treturn 0;\n+}\n+\ndiff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c\nindex 
767765d..1209230 100644\n--- a/app/test-pmd/testpmd.c\n+++ b/app/test-pmd/testpmd.c\n@@ -420,6 +420,7 @@ struct fwd_engine * fwd_engines[] = {\n \t[RTE_ETH_EVENT_NEW] = \"device probed\",\n \t[RTE_ETH_EVENT_DESTROY] = \"device released\",\n \t[RTE_ETH_EVENT_FLOW_AGED] = \"flow aged\",\n+\t[RTE_ETH_EVENT_RX_FILL_THRESH] = \"rxq fill threshold reached\",\n \t[RTE_ETH_EVENT_MAX] = NULL,\n };\n \n@@ -3616,6 +3617,10 @@ struct pmd_test_command {\n eth_event_callback(portid_t port_id, enum rte_eth_event_type type, void *param,\n \t\t void *ret_param)\n {\n+\tstruct rte_eth_dev_info dev_info;\n+\tuint16_t rxq_id;\n+\tuint8_t fill_thresh;\n+\tint ret;\n \tRTE_SET_USED(param);\n \tRTE_SET_USED(ret_param);\n \n@@ -3647,6 +3652,19 @@ struct pmd_test_command {\n \t\tports[port_id].port_status = RTE_PORT_CLOSED;\n \t\tprintf(\"Port %u is closed\\n\", port_id);\n \t\tbreak;\n+\tcase RTE_ETH_EVENT_RX_FILL_THRESH:\n+\t\tret = rte_eth_dev_info_get(port_id, &dev_info);\n+\t\tif (ret != 0)\n+\t\t\tbreak;\n+\t\t/* fill_thresh query API rewinds rxq_id, no need to check max rxq num. 
*/\n+\t\tfor (rxq_id = 0; ; rxq_id++) {\n+\t\t\tret = rte_eth_rx_fill_thresh_query(port_id, &rxq_id, &fill_thresh);\n+\t\t\tif (ret <= 0)\n+\t\t\t\tbreak;\n+\t\t\tprintf(\"Received fill_thresh event, port:%d rxq_id:%d\\n\",\n+\t\t\t port_id, rxq_id);\n+\t\t}\n+\t\tbreak;\n \tdefault:\n \t\tbreak;\n \t}\ndiff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h\nindex 78a5f4e..c7a144e 100644\n--- a/app/test-pmd/testpmd.h\n+++ b/app/test-pmd/testpmd.h\n@@ -1173,6 +1173,8 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,\n void flex_item_create(portid_t port_id, uint16_t flex_id, const char *filename);\n void flex_item_destroy(portid_t port_id, uint16_t flex_id);\n void port_flex_item_flush(portid_t port_id);\n+int set_rxq_fill_thresh(portid_t port_id, uint16_t queue_idx,\n+\t\t\tuint16_t fill_thresh);\n \n extern int flow_parse(const char *src, void *result, unsigned int size,\n \t\t struct rte_flow_attr **attr,\ndiff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h\nindex 69d9dc2..7ef7dba 100644\n--- a/lib/ethdev/ethdev_driver.h\n+++ b/lib/ethdev/ethdev_driver.h\n@@ -470,6 +470,23 @@ typedef int (*eth_rx_queue_setup_t)(struct rte_eth_dev *dev,\n \t\t\t\t const struct rte_eth_rxconf *rx_conf,\n \t\t\t\t struct rte_mempool *mb_pool);\n \n+/**\n+ * @internal Set Rx queue fill threshold.\n+ * @see rte_eth_rx_fill_thresh_set()\n+ */\n+typedef int (*eth_rx_queue_fill_thresh_set_t)(struct rte_eth_dev *dev,\n+\t\t\t\t uint16_t rx_queue_id,\n+\t\t\t\t uint8_t fill_thresh);\n+\n+/**\n+ * @internal Query queue fill threshold event.\n+ * @see rte_eth_rx_fill_thresh_query()\n+ */\n+\n+typedef int (*eth_rx_queue_fill_thresh_query_t)(struct rte_eth_dev *dev,\n+\t\t\t\t\tuint16_t *rx_queue_id,\n+\t\t\t\t\tuint8_t *fill_thresh);\n+\n /** @internal Setup a transmit queue of an Ethernet device. 
*/\n typedef int (*eth_tx_queue_setup_t)(struct rte_eth_dev *dev,\n \t\t\t\t uint16_t tx_queue_id,\n@@ -1168,6 +1185,11 @@ struct eth_dev_ops {\n \t/** Priority flow control queue configure */\n \tpriority_flow_ctrl_queue_config_t priority_flow_ctrl_queue_config;\n \n+\t/** Set Rx queue fill threshold. */\n+\teth_rx_queue_fill_thresh_set_t rx_queue_fill_thresh_set;\n+\t/** Query Rx queue fill threshold event. */\n+\teth_rx_queue_fill_thresh_query_t rx_queue_fill_thresh_query;\n+\n \t/** Set Unicast Table Array */\n \teth_uc_hash_table_set_t uc_hash_table_set;\n \t/** Set Unicast hash bitmap */\ndiff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c\nindex a175867..69a1f75 100644\n--- a/lib/ethdev/rte_ethdev.c\n+++ b/lib/ethdev/rte_ethdev.c\n@@ -4424,6 +4424,58 @@ int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,\n \t\t\t\t\t\t\tqueue_idx, tx_rate));\n }\n \n+int rte_eth_rx_fill_thresh_set(uint16_t port_id, uint16_t queue_id,\n+\t\t\t uint8_t fill_thresh)\n+{\n+\tstruct rte_eth_dev *dev;\n+\tstruct rte_eth_dev_info dev_info;\n+\tint ret;\n+\n+\tRTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);\n+\tdev = &rte_eth_devices[port_id];\n+\n+\tret = rte_eth_dev_info_get(port_id, &dev_info);\n+\tif (ret != 0)\n+\t\treturn ret;\n+\n+\tif (queue_id > dev_info.max_rx_queues) {\n+\t\tRTE_ETHDEV_LOG(ERR,\n+\t\t\t\"Set queue fill thresh: port %u: invalid queue ID=%u.\\n\",\n+\t\t\tport_id, queue_id);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (fill_thresh > 99)\n+\t\treturn -EINVAL;\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_fill_thresh_set, -ENOTSUP);\n+\treturn eth_err(port_id, (*dev->dev_ops->rx_queue_fill_thresh_set)(dev,\n+\t\t\t\t\t\t\t queue_id, fill_thresh));\n+}\n+\n+int rte_eth_rx_fill_thresh_query(uint16_t port_id, uint16_t *queue_id,\n+\t\t\t\t uint8_t *fill_thresh)\n+{\n+\tstruct rte_eth_dev_info dev_info;\n+\tstruct rte_eth_dev *dev;\n+\tint ret;\n+\n+\tRTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);\n+\tdev = 
&rte_eth_devices[port_id];\n+\n+\tret = rte_eth_dev_info_get(port_id, &dev_info);\n+\tif (ret != 0)\n+\t\treturn ret;\n+\n+\tif (queue_id == NULL)\n+\t\treturn -EINVAL;\n+\tif (*queue_id >= dev_info.max_rx_queues)\n+\t\t*queue_id = 0;\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_fill_thresh_query, -ENOTSUP);\n+\treturn eth_err(port_id, (*dev->dev_ops->rx_queue_fill_thresh_query)(dev,\n+\t\t\t\t\t\t\t queue_id, fill_thresh));\n+}\n+\n RTE_INIT(eth_dev_init_fp_ops)\n {\n \tuint32_t i;\ndiff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h\nindex 04225bb..d44e5da 100644\n--- a/lib/ethdev/rte_ethdev.h\n+++ b/lib/ethdev/rte_ethdev.h\n@@ -1931,6 +1931,14 @@ struct rte_eth_rxq_info {\n \tuint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */\n \tuint16_t nb_desc; /**< configured number of RXDs. */\n \tuint16_t rx_buf_size; /**< hardware receive buffer size. */\n+\t/**\n+\t * Per-queue Rx fill threshold defined as percentage of Rx queue\n+\t * size. If Rx queue receives traffic higher than this percentage,\n+\t * the event RTE_ETH_EVENT_RX_FILL_THESH is triggered.\n+\t * Value 0 means threshold monitoring is disabled, no event is\n+\t * triggered.\n+\t */\n+\tuint8_t fill_thresh;\n } __rte_cache_min_aligned;\n \n /**\n@@ -3672,6 +3680,65 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,\n */\n int rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on);\n \n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Set Rx queue based fill threshold.\n+ *\n+ * @param port_id\n+ * The port identifier of the Ethernet device.\n+ * @param queue_id\n+ * The index of the receive queue.\n+ * @param fill_thresh\n+ * The fill threshold percentage of Rx queue size which describes\n+ * the fullness of Rx queue. 
If the Rx queue fullness is above it,\n+ * the device will trigger the event RTE_ETH_EVENT_RX_FILL_THRESH.\n+ * [1-99] to set a new fill thresold.\n+ * 0 to disable thresold monitoring.\n+ *\n+ * @return\n+ * - 0 if successful.\n+ * - negative if failed.\n+ */\n+__rte_experimental\n+int rte_eth_rx_fill_thresh_set(uint16_t port_id, uint16_t queue_id,\n+\t\t\t uint8_t fill_thresh);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Query Rx queue based fill threshold event.\n+ * The function queries all queues in the port circularly until one\n+ * pending fill_thresh event is found or no pending fill_thresh event is found.\n+ *\n+ * @param port_id\n+ * The port identifier of the Ethernet device.\n+ * @param queue_id\n+ * The API caller sets the starting Rx queue id in the pointer.\n+ * If the queue_id is bigger than maximum queue id of the port,\n+ * it's rewinded to 0 so that application can keep calling\n+ * this function to handle all pending fill_thresh events in the queues\n+ * with a simple increment between calls.\n+ * If a Rx queue has pending fill_thresh event, the pointer is updated\n+ * with this Rx queue id; otherwise this pointer's content is\n+ * unchanged.\n+ * @param fill_thresh\n+ * The pointer to the fill threshold percentage of Rx queue.\n+ * If Rx queue with pending fill_thresh event is found, the queue's fill_thresh\n+ * percentage is stored in this pointer, otherwise the pointer's\n+ * content is unchanged.\n+ *\n+ * @return\n+ * - 1 if a Rx queue with pending fill_thresh event is found.\n+ * - 0 if no Rx queue with pending fill_thresh event is found.\n+ * - -EINVAL if queue_id is NULL.\n+ */\n+__rte_experimental\n+int rte_eth_rx_fill_thresh_query(uint16_t port_id, uint16_t *queue_id,\n+\t\t\t\t uint8_t *fill_thresh);\n+\n typedef void (*buffer_tx_error_fn)(struct rte_mbuf **unsent, uint16_t count,\n \t\tvoid *userdata);\n \n@@ -3877,6 +3944,11 @@ enum rte_eth_event_type {\n \tRTE_ETH_EVENT_DESTROY, 
/**< port is released */\n \tRTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */\n \tRTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */\n+\t/**\n+\t * Fill threshold value is exceeded in a queue.\n+\t * @see rte_eth_rx_fill_thresh_set()\n+\t */\n+\tRTE_ETH_EVENT_RX_FILL_THRESH,\n \tRTE_ETH_EVENT_MAX /**< max value of this enum */\n };\n \ndiff --git a/lib/ethdev/version.map b/lib/ethdev/version.map\nindex daca785..29b1fe8 100644\n--- a/lib/ethdev/version.map\n+++ b/lib/ethdev/version.map\n@@ -285,6 +285,8 @@ EXPERIMENTAL {\n \trte_mtr_color_in_protocol_priority_get;\n \trte_mtr_color_in_protocol_set;\n \trte_mtr_meter_vlan_table_update;\n+\trte_eth_rx_fill_thresh_set;\n+\trte_eth_rx_fill_thresh_query;\n };\n \n INTERNAL {\n", "prefixes": [ "v4", "3/7" ] }