Patch Detail

GET: Show a patch.
PATCH: Update a patch.
PUT: Update a patch.

GET /api/patches/101984/?format=api
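The endpoint returns a JSON patch object like the one below. A minimal sketch of consuming it with Python's standard library — the sample string is a trimmed copy of fields from this very record, and the live-fetch variant is only indicated in a comment since it needs network access:

```python
import json

# Trimmed sample of the object returned by
# GET /api/patches/101984/?format=api (values copied from this page).
sample = '''
{
  "id": 101984,
  "name": "[v8,3/6] app/testpmd: new parameter to enable shared Rx queue",
  "state": "superseded",
  "archived": true,
  "submitter": {"name": "Xueming Li", "email": "xuemingl@nvidia.com"},
  "series": [{"id": 19736, "name": "ethdev: introduce shared Rx queue", "version": 8}],
  "mbox": "http://patches.dpdk.org/project/dpdk/patch/20211018120842.2058637-4-xuemingl@nvidia.com/mbox/"
}
'''

patch = json.loads(sample)
# e.g. report the patch, its submitter, and its lifecycle state
summary = f'{patch["name"]} by {patch["submitter"]["name"]} [{patch["state"]}]'
print(summary)

# A live query would fetch the same structure over HTTP, e.g.:
#   import urllib.request
#   patch = json.load(urllib.request.urlopen(
#       "http://patches.dpdk.org/api/patches/101984/?format=api"))
```

The `mbox` field points at a raw mailbox rendering of the same patch, which is what `git am` consumes.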
{
  "id": 101984,
  "url": "http://patches.dpdk.org/api/patches/101984/?format=api",
  "web_url": "http://patches.dpdk.org/project/dpdk/patch/20211018120842.2058637-4-xuemingl@nvidia.com/",
  "project": {
    "id": 1,
    "url": "http://patches.dpdk.org/api/projects/1/?format=api",
    "name": "DPDK",
    "link_name": "dpdk",
    "list_id": "dev.dpdk.org",
    "list_email": "dev@dpdk.org",
    "web_url": "http://core.dpdk.org",
    "scm_url": "git://dpdk.org/dpdk",
    "webscm_url": "http://git.dpdk.org/dpdk",
    "list_archive_url": "https://inbox.dpdk.org/dev",
    "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
    "commit_url_format": ""
  },
  "msgid": "<20211018120842.2058637-4-xuemingl@nvidia.com>",
  "list_archive_url": "https://inbox.dpdk.org/dev/20211018120842.2058637-4-xuemingl@nvidia.com",
  "date": "2021-10-18T12:08:39",
  "name": "[v8,3/6] app/testpmd: new parameter to enable shared Rx queue",
  "commit_ref": null,
  "pull_url": null,
  "state": "superseded",
  "archived": true,
  "hash": "8a9397c2ca39e71496c23e2cbc0b55bb4c4f8e0a",
  "submitter": {
    "id": 1904,
    "url": "http://patches.dpdk.org/api/people/1904/?format=api",
    "name": "Xueming Li",
    "email": "xuemingl@nvidia.com"
  },
  "delegate": {
    "id": 319,
    "url": "http://patches.dpdk.org/api/users/319/?format=api",
    "username": "fyigit",
    "first_name": "Ferruh",
    "last_name": "Yigit",
    "email": "ferruh.yigit@amd.com"
  },
  "mbox": "http://patches.dpdk.org/project/dpdk/patch/20211018120842.2058637-4-xuemingl@nvidia.com/mbox/",
  "series": [
    {
      "id": 19736,
      "url": "http://patches.dpdk.org/api/series/19736/?format=api",
      "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=19736",
      "date": "2021-10-18T12:08:36",
      "name": "ethdev: introduce shared Rx queue",
      "version": 8,
      "mbox": "http://patches.dpdk.org/series/19736/mbox/"
    }
  ],
  "comments": "http://patches.dpdk.org/api/patches/101984/comments/",
  "check": "success",
  "checks": "http://patches.dpdk.org/api/patches/101984/checks/",
  "tags": {},
  "related": [],
  "headers": {
    "Return-Path": "<dev-bounces@dpdk.org>",
    "X-Original-To": "patchwork@inbox.dpdk.org",
    "From": "Xueming Li <xuemingl@nvidia.com>",
    "To": "<dev@dpdk.org>",
    "CC": "<xuemingl@nvidia.com>, Jerin Jacob <jerinjacobk@gmail.com>, Ferruh Yigit\n <ferruh.yigit@intel.com>, Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,\n Viacheslav Ovsiienko <viacheslavo@nvidia.com>, Thomas Monjalon\n <thomas@monjalon.net>, Lior Margalit <lmargalit@nvidia.com>, \"Ananyev\n Konstantin\" <konstantin.ananyev@intel.com>, Xiaoyun Li <xiaoyun.li@intel.com>",
    "Date": "Mon, 18 Oct 2021 20:08:39 +0800",
    "Message-ID": "<20211018120842.2058637-4-xuemingl@nvidia.com>",
    "X-Mailer": "git-send-email 2.33.0",
    "In-Reply-To": "<20211018120842.2058637-1-xuemingl@nvidia.com>",
    "References": "<20211018120842.2058637-1-xuemingl@nvidia.com>",
    "MIME-Version": "1.0",
    "Content-Transfer-Encoding": "8bit",
    "Content-Type": "text/plain",
    "Subject": "[dpdk-dev] [PATCH 
v8 3/6] app/testpmd: new parameter to enable\n shared Rx queue", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "Adds \"--rxq-share=X\" parameter to enable shared RxQ, share if device\nsupports, otherwise fallback to standard RxQ.\n\nShare group number grows per X ports. X defaults to MAX, implies all\nports join share group 1. Queue ID is mapped equally with shared Rx\nqueue ID.\n\nForwarding engine \"shared-rxq\" should be used which Rx only and update\nstream statistics correctly.\n\nSigned-off-by: Xueming Li <xuemingl@nvidia.com>\n---\n app/test-pmd/config.c | 7 ++++++-\n app/test-pmd/parameters.c | 13 +++++++++++++\n app/test-pmd/testpmd.c | 20 +++++++++++++++++---\n app/test-pmd/testpmd.h | 2 ++\n doc/guides/testpmd_app_ug/run_app.rst | 7 +++++++\n 5 files changed, 45 insertions(+), 4 deletions(-)", "diff": "diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c\nindex c0616dcd2fd..f8fb8961cae 100644\n--- a/app/test-pmd/config.c\n+++ b/app/test-pmd/config.c\n@@ -2713,7 +2713,12 @@ rxtx_config_display(void)\n \t\t\tprintf(\" RX threshold registers: pthresh=%d hthresh=%d \"\n \t\t\t\t\" wthresh=%d\\n\",\n \t\t\t\tpthresh_tmp, hthresh_tmp, wthresh_tmp);\n-\t\t\tprintf(\" RX Offloads=0x%\"PRIx64\"\\n\", offloads_tmp);\n+\t\t\tprintf(\" RX Offloads=0x%\"PRIx64, offloads_tmp);\n+\t\t\tif (rx_conf->share_group > 0)\n+\t\t\t\tprintf(\" share_group=%u share_qid=%u\",\n+\t\t\t\t 
rx_conf->share_group,\n+\t\t\t\t rx_conf->share_qid);\n+\t\t\tprintf(\"\\n\");\n \t\t}\n \n \t\t/* per tx queue config only for first queue to be less verbose */\ndiff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c\nindex 3f94a82e321..30dae326310 100644\n--- a/app/test-pmd/parameters.c\n+++ b/app/test-pmd/parameters.c\n@@ -167,6 +167,7 @@ usage(char* progname)\n \tprintf(\" --tx-ip=src,dst: IP addresses in Tx-only mode\\n\");\n \tprintf(\" --tx-udp=src[,dst]: UDP ports in Tx-only mode\\n\");\n \tprintf(\" --eth-link-speed: force link speed.\\n\");\n+\tprintf(\" --rxq-share: number of ports per shared rxq groups, defaults to MAX(1 group)\\n\");\n \tprintf(\" --disable-link-check: disable check on link status when \"\n \t \"starting/stopping ports.\\n\");\n \tprintf(\" --disable-device-start: do not automatically start port\\n\");\n@@ -607,6 +608,7 @@ launch_args_parse(int argc, char** argv)\n \t\t{ \"rxpkts\",\t\t\t1, 0, 0 },\n \t\t{ \"txpkts\",\t\t\t1, 0, 0 },\n \t\t{ \"txonly-multi-flow\",\t\t0, 0, 0 },\n+\t\t{ \"rxq-share\",\t\t\t2, 0, 0 },\n \t\t{ \"eth-link-speed\",\t\t1, 0, 0 },\n \t\t{ \"disable-link-check\",\t\t0, 0, 0 },\n \t\t{ \"disable-device-start\",\t0, 0, 0 },\n@@ -1271,6 +1273,17 @@ launch_args_parse(int argc, char** argv)\n \t\t\t}\n \t\t\tif (!strcmp(lgopts[opt_idx].name, \"txonly-multi-flow\"))\n \t\t\t\ttxonly_multi_flow = 1;\n+\t\t\tif (!strcmp(lgopts[opt_idx].name, \"rxq-share\")) {\n+\t\t\t\tif (optarg == NULL) {\n+\t\t\t\t\trxq_share = UINT32_MAX;\n+\t\t\t\t} else {\n+\t\t\t\t\tn = atoi(optarg);\n+\t\t\t\t\tif (n >= 0)\n+\t\t\t\t\t\trxq_share = (uint32_t)n;\n+\t\t\t\t\telse\n+\t\t\t\t\t\trte_exit(EXIT_FAILURE, \"rxq-share must be >= 0\\n\");\n+\t\t\t\t}\n+\t\t\t}\n \t\t\tif (!strcmp(lgopts[opt_idx].name, \"no-flush-rx\"))\n \t\t\t\tno_flush_rx = 1;\n \t\t\tif (!strcmp(lgopts[opt_idx].name, \"eth-link-speed\")) {\ndiff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c\nindex 97ae52e17ec..123142ed110 100644\n--- 
a/app/test-pmd/testpmd.c\n+++ b/app/test-pmd/testpmd.c\n@@ -498,6 +498,11 @@ uint8_t record_core_cycles;\n */\n uint8_t record_burst_stats;\n \n+/*\n+ * Number of ports per shared Rx queue group, 0 disable.\n+ */\n+uint32_t rxq_share;\n+\n unsigned int num_sockets = 0;\n unsigned int socket_ids[RTE_MAX_NUMA_NODES];\n \n@@ -3393,14 +3398,23 @@ dev_event_callback(const char *device_name, enum rte_dev_event_type type,\n }\n \n static void\n-rxtx_port_config(struct rte_port *port)\n+rxtx_port_config(portid_t pid)\n {\n \tuint16_t qid;\n \tuint64_t offloads;\n+\tstruct rte_port *port = &ports[pid];\n \n \tfor (qid = 0; qid < nb_rxq; qid++) {\n \t\toffloads = port->rx_conf[qid].offloads;\n \t\tport->rx_conf[qid] = port->dev_info.default_rxconf;\n+\n+\t\tif (rxq_share > 0 &&\n+\t\t (port->dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)) {\n+\t\t\t/* Non-zero share group to enable RxQ share. */\n+\t\t\tport->rx_conf[qid].share_group = pid / rxq_share + 1;\n+\t\t\tport->rx_conf[qid].share_qid = qid; /* Equal mapping. 
*/\n+\t\t}\n+\n \t\tif (offloads != 0)\n \t\t\tport->rx_conf[qid].offloads = offloads;\n \n@@ -3558,7 +3572,7 @@ init_port_config(void)\n \t\t\t\tport->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;\n \t\t}\n \n-\t\trxtx_port_config(port);\n+\t\trxtx_port_config(pid);\n \n \t\tret = eth_macaddr_get_print_err(pid, &port->eth_addr);\n \t\tif (ret != 0)\n@@ -3772,7 +3786,7 @@ init_port_dcb_config(portid_t pid,\n \n \tmemcpy(&rte_port->dev_conf, &port_conf, sizeof(struct rte_eth_conf));\n \n-\trxtx_port_config(rte_port);\n+\trxtx_port_config(pid);\n \t/* VLAN filter */\n \trte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;\n \tfor (i = 0; i < RTE_DIM(vlan_tags); i++)\ndiff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h\nindex 5863b2f43f3..3dfaaad94c0 100644\n--- a/app/test-pmd/testpmd.h\n+++ b/app/test-pmd/testpmd.h\n@@ -477,6 +477,8 @@ extern enum tx_pkt_split tx_pkt_split;\n \n extern uint8_t txonly_multi_flow;\n \n+extern uint32_t rxq_share;\n+\n extern uint16_t nb_pkt_per_burst;\n extern uint16_t nb_pkt_flowgen_clones;\n extern int nb_flows_flowgen;\ndiff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst\nindex 640eadeff73..ff5908dcd50 100644\n--- a/doc/guides/testpmd_app_ug/run_app.rst\n+++ b/doc/guides/testpmd_app_ug/run_app.rst\n@@ -389,6 +389,13 @@ The command line options are:\n \n Generate multiple flows in txonly mode.\n \n+* ``--rxq-share=[X]``\n+\n+ Create queues in shared Rx queue mode if device supports.\n+ Group number grows per X ports. X defaults to MAX, implies all ports\n+ join share group 1. Forwarding engine \"shared-rxq\" should be used\n+ which Rx only and update stream statistics correctly.\n+\n * ``--eth-link-speed``\n \n Set a forced link speed to the ethernet port::\n", "prefixes": [ "v8", "3/6" ] }
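The `--rxq-share=X` hunk in `rxtx_port_config()` above assigns port `pid` to share group `pid / X + 1` (groups are 1-based because group 0 means sharing disabled) and maps each queue one-to-one via `share_qid = qid`. A small Python sketch of that arithmetic — the helper name is ours, not part of the patch:

```python
def share_group(pid: int, rxq_share: int) -> int:
    """Share group for a port, mirroring the rxtx_port_config() hunk:
    the group number grows by one for every rxq_share ports, starting
    at 1 since share_group == 0 disables RxQ sharing."""
    return pid // rxq_share + 1

# With --rxq-share=2, every two consecutive ports share a group:
print([share_group(pid, 2) for pid in range(6)])  # -> [1, 1, 2, 2, 3, 3]

# With no argument the patch sets rxq_share = UINT32_MAX, so every
# realistic port ID lands in share group 1 (all ports share):
UINT32_MAX = 2**32 - 1
print(share_group(63, UINT32_MAX))  # -> 1
```

This matches the commit message: the group number grows per X ports, and the default (MAX) puts all ports in share group 1.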