From patchwork Thu Mar 21 18:47:08 2024
X-Patchwork-Submitter: Sivaprasad Tummala
X-Patchwork-Id: 138674
X-Patchwork-Delegate: thomas@monjalon.net
From: Sivaprasad Tummala
Subject: [PATCH v6 02/14] examples/l3fwd-power: fix queue ID restriction
Date: Thu, 21 Mar 2024 19:47:08 +0100
Message-ID: <20240321184721.69040-3-sivaprasad.tummala@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240321184721.69040-1-sivaprasad.tummala@amd.com>
References: <20240318173146.24303-1-sivaprasad.tummala@amd.com> <20240321184721.69040-1-sivaprasad.tummala@amd.com>
List-Id: DPDK patches and discussions

Currently, the application supports queue IDs only up to 255 and a maximum
of 256 queues, irrespective of what the device supports. This also limits
the number of active lcores to 256. This patch removes these constraints by
widening the queue ID type to 16 bits, supporting queue IDs up to 65535.
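To illustrate the change (this standalone sketch is not part of the diff
below), the Rx interrupt epoll user data now packs the port ID into the
upper 16 bits and the queue ID into the lower 16 bits, instead of 8 bits
each. The local LEN2MASK macro is only a stand-in mirroring DPDK's
RTE_LEN2MASK so the example builds without DPDK headers; the port and
queue values are made up:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    /* stand-in for RTE_LEN2MASK(ln, tp) from rte_common.h */
    #define LEN2MASK(ln, tp) \
            ((tp)((uint64_t)-1 >> (sizeof(uint64_t) * CHAR_BIT - (ln))))

    int main(void)
    {
            uint16_t portid = 3;
            uint16_t queueid = 300; /* > 255: would not fit in uint8_t */

            /* pack, as event_register() now does */
            uint32_t data = (uint32_t)portid << (sizeof(uint16_t) * CHAR_BIT) | queueid;

            /* unpack, as sleep_until_rx_interrupt() now does */
            uint16_t port_id = data >> (sizeof(uint16_t) * CHAR_BIT);
            uint16_t queue_id = data & LEN2MASK(sizeof(uint16_t) * CHAR_BIT, uint16_t);

            printf("port %hu queue %hu\n", port_id, queue_id); /* port 3 queue 300 */
            return 0;
    }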
Fixes: f88e7c175a68 ("examples/l3fwd-power: add high/regular perf cores options")
Cc: radu.nicolau@intel.com
Cc: stable@dpdk.org

Signed-off-by: Sivaprasad Tummala
Acked-by: Morten Brørup
Acked-by: Ferruh Yigit
---
 examples/l3fwd-power/main.c      | 49 ++++++++++++++++----------------
 examples/l3fwd-power/main.h      |  2 +-
 examples/l3fwd-power/perf_core.c | 10 +++++--
 3 files changed, 32 insertions(+), 29 deletions(-)

diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f4adcf41b5..1881b1b194 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -214,7 +214,7 @@ enum freq_scale_hint_t
 struct lcore_rx_queue {
 	uint16_t port_id;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	enum freq_scale_hint_t freq_up_hint;
 	uint32_t zero_rx_packet_count;
 	uint32_t idle_hint;
@@ -838,7 +838,7 @@ sleep_until_rx_interrupt(int num, int lcore)
 	struct rte_epoll_event event[num];
 	int n, i;
 	uint16_t port_id;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	void *data;
 
 	if (status[lcore].wakeup) {
@@ -850,9 +850,9 @@ sleep_until_rx_interrupt(int num, int lcore)
 	n = rte_epoll_wait(RTE_EPOLL_PER_THREAD, event, num, 10);
 	for (i = 0; i < n; i++) {
 		data = event[i].epdata.data;
-		port_id = ((uintptr_t)data) >> CHAR_BIT;
+		port_id = ((uintptr_t)data) >> (sizeof(uint16_t) * CHAR_BIT);
 		queue_id = ((uintptr_t)data) &
-			RTE_LEN2MASK(CHAR_BIT, uint8_t);
+			RTE_LEN2MASK((sizeof(uint16_t) * CHAR_BIT), uint16_t);
 		RTE_LOG(INFO, L3FWD_POWER,
 			"lcore %u is waked up from rx interrupt on"
 			" port %d queue %d\n",
@@ -867,7 +867,7 @@ static void turn_on_off_intr(struct lcore_conf *qconf, bool on)
 {
 	int i;
 	struct lcore_rx_queue *rx_queue;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	uint16_t port_id;
 
 	for (i = 0; i < qconf->n_rx_queue; ++i) {
@@ -887,7 +887,7 @@ static void turn_on_off_intr(struct lcore_conf *qconf, bool on)
 static int event_register(struct lcore_conf *qconf)
 {
 	struct lcore_rx_queue *rx_queue;
-	uint8_t queueid;
+	uint16_t queueid;
 	uint16_t portid;
 	uint32_t data;
 	int ret;
@@ -897,7 +897,7 @@ static int event_register(struct lcore_conf *qconf)
 		rx_queue = &(qconf->rx_queue_list[i]);
 		portid = rx_queue->port_id;
 		queueid = rx_queue->queue_id;
-		data = portid << CHAR_BIT | queueid;
+		data = portid << (sizeof(uint16_t) * CHAR_BIT) | queueid;
 
 		ret = rte_eth_dev_rx_intr_ctl_q(portid, queueid,
 				RTE_EPOLL_PER_THREAD,
@@ -917,8 +917,7 @@ static int main_intr_loop(__rte_unused void *dummy)
 	unsigned int lcore_id;
 	uint64_t prev_tsc, diff_tsc, cur_tsc;
 	int i, j, nb_rx;
-	uint8_t queueid;
-	uint16_t portid;
+	uint16_t portid, queueid;
 	struct lcore_conf *qconf;
 	struct lcore_rx_queue *rx_queue;
 	uint32_t lcore_rx_idle_count = 0;
@@ -946,7 +945,7 @@ static int main_intr_loop(__rte_unused void *dummy)
 		portid = qconf->rx_queue_list[i].port_id;
 		queueid = qconf->rx_queue_list[i].queue_id;
 		RTE_LOG(INFO, L3FWD_POWER,
-				" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+				" -- lcoreid=%u portid=%u rxqueueid=%hu\n",
 				lcore_id, portid, queueid);
 	}
 
@@ -1083,8 +1082,7 @@ main_telemetry_loop(__rte_unused void *dummy)
 	unsigned int lcore_id;
 	uint64_t prev_tsc, diff_tsc, cur_tsc, prev_tel_tsc;
 	int i, j, nb_rx;
-	uint8_t queueid;
-	uint16_t portid;
+	uint16_t portid, queueid;
 	struct lcore_conf *qconf;
 	struct lcore_rx_queue *rx_queue;
 	uint64_t ep_nep[2] = {0}, fp_nfp[2] = {0};
@@ -1114,7 +1112,7 @@ main_telemetry_loop(__rte_unused void *dummy)
 		portid = qconf->rx_queue_list[i].port_id;
 		queueid = qconf->rx_queue_list[i].queue_id;
 		RTE_LOG(INFO, L3FWD_POWER, " -- lcoreid=%u portid=%u "
-			"rxqueueid=%hhu\n", lcore_id, portid, queueid);
+			"rxqueueid=%hu\n", lcore_id, portid, queueid);
 	}
 
 	while (!is_done()) {
@@ -1205,8 +1203,7 @@ main_legacy_loop(__rte_unused void *dummy)
 	uint64_t prev_tsc, diff_tsc, cur_tsc, tim_res_tsc, hz;
 	uint64_t prev_tsc_power = 0, cur_tsc_power, diff_tsc_power;
 	int i, j, nb_rx;
-	uint8_t queueid;
-	uint16_t portid;
+	uint16_t portid, queueid;
 	struct lcore_conf *qconf;
 	struct lcore_rx_queue *rx_queue;
 	enum freq_scale_hint_t lcore_scaleup_hint;
@@ -1234,7 +1231,7 @@ main_legacy_loop(__rte_unused void *dummy)
 		portid = qconf->rx_queue_list[i].port_id;
 		queueid = qconf->rx_queue_list[i].queue_id;
 		RTE_LOG(INFO, L3FWD_POWER, " -- lcoreid=%u portid=%u "
-			"rxqueueid=%hhu\n", lcore_id, portid, queueid);
+			"rxqueueid=%hu\n", lcore_id, portid, queueid);
 	}
 
 	/* add into event wait list */
@@ -1399,14 +1396,14 @@ main_legacy_loop(__rte_unused void *dummy)
 static int
 check_lcore_params(void)
 {
-	uint8_t queue, lcore;
-	uint16_t i;
+	uint16_t queue, i;
+	uint8_t lcore;
 	int socketid;
 
 	for (i = 0; i < nb_lcore_params; ++i) {
 		queue = lcore_params[i].queue_id;
 		if (queue >= MAX_RX_QUEUE_PER_PORT) {
-			printf("invalid queue number: %hhu\n", queue);
+			printf("invalid queue number: %hu\n", queue);
 			return -1;
 		}
 		lcore = lcore_params[i].lcore_id;
@@ -1451,7 +1448,7 @@ check_port_config(void)
 	return 0;
 }
 
-static uint8_t
+static uint16_t
 get_port_n_rx_queues(const uint16_t port)
 {
 	int queue = -1;
@@ -1462,7 +1459,7 @@ get_port_n_rx_queues(const uint16_t port)
 				lcore_params[i].queue_id > queue)
 			queue = lcore_params[i].queue_id;
 	}
-	return (uint8_t)(++queue);
+	return (uint16_t)(++queue);
 }
 
 static int
@@ -1661,6 +1658,8 @@ parse_config(const char *q_arg)
 	char *str_fld[_NUM_FLD];
 	int i;
 	unsigned size;
+	unsigned int max_fld[_NUM_FLD] = {USHRT_MAX,
+					USHRT_MAX, UCHAR_MAX};
 
 	nb_lcore_params = 0;
 
@@ -1681,7 +1680,7 @@ parse_config(const char *q_arg)
 			errno = 0;
 			int_fld[i] = strtoul(str_fld[i], &end, 0);
 			if (errno != 0 || end == str_fld[i] || int_fld[i] >
-									255)
+								max_fld[i])
 				return -1;
 		}
 		if (nb_lcore_params >= MAX_LCORE_PARAMS) {
@@ -1692,7 +1691,7 @@ parse_config(const char *q_arg)
 		lcore_params_array[nb_lcore_params].port_id =
 				(uint8_t)int_fld[FLD_PORT];
 		lcore_params_array[nb_lcore_params].queue_id =
-				(uint8_t)int_fld[FLD_QUEUE];
+				(uint16_t)int_fld[FLD_QUEUE];
 		lcore_params_array[nb_lcore_params].lcore_id =
 				(uint8_t)int_fld[FLD_LCORE];
 		++nb_lcore_params;
@@ -2501,8 +2500,8 @@ main(int argc, char **argv)
 	uint64_t hz;
 	uint32_t n_tx_queue, nb_lcores;
 	uint32_t dev_rxq_num, dev_txq_num;
-	uint8_t nb_rx_queue, queue, socketid;
-	uint16_t portid;
+	uint8_t socketid;
+	uint16_t portid, nb_rx_queue, queue;
 	const char *ptr_strings[NUM_TELSTATS];
 
 	/* init EAL */
diff --git a/examples/l3fwd-power/main.h b/examples/l3fwd-power/main.h
index 258de98f5b..40b5194726 100644
--- a/examples/l3fwd-power/main.h
+++ b/examples/l3fwd-power/main.h
@@ -9,7 +9,7 @@
 #define MAX_LCORE_PARAMS 1024
 
 struct lcore_params {
 	uint16_t port_id;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	uint8_t lcore_id;
 } __rte_cache_aligned;
 
diff --git a/examples/l3fwd-power/perf_core.c b/examples/l3fwd-power/perf_core.c
index 41ef6d0c9a..3088935ee0 100644
--- a/examples/l3fwd-power/perf_core.c
+++ b/examples/l3fwd-power/perf_core.c
@@ -22,7 +22,7 @@ static uint16_t nb_hp_lcores;
 struct perf_lcore_params {
 	uint16_t port_id;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	uint8_t high_perf;
 	uint8_t lcore_idx;
 } __rte_cache_aligned;
@@ -132,6 +132,8 @@ parse_perf_config(const char *q_arg)
 	char *str_fld[_NUM_FLD];
 	int i;
 	unsigned int size;
+	unsigned int max_fld[_NUM_FLD] = {USHRT_MAX, USHRT_MAX,
+					UCHAR_MAX, UCHAR_MAX};
 
 	nb_prf_lc_prms = 0;
 
@@ -152,7 +154,9 @@ parse_perf_config(const char *q_arg)
 		for (i = 0; i < _NUM_FLD; i++) {
 			errno = 0;
 			int_fld[i] = strtoul(str_fld[i], &end, 0);
-			if (errno != 0 || end == str_fld[i] || int_fld[i] > 255)
+			if (errno != 0 || end == str_fld[i] || int_fld[i] >
+								max_fld[i])
+				return -1;
 		}
 		if (nb_prf_lc_prms >= MAX_LCORE_PARAMS) {
@@ -163,7 +167,7 @@ parse_perf_config(const char *q_arg)
 		prf_lc_prms[nb_prf_lc_prms].port_id =
 				(uint8_t)int_fld[FLD_PORT];
 		prf_lc_prms[nb_prf_lc_prms].queue_id =
-				(uint8_t)int_fld[FLD_QUEUE];
+				(uint16_t)int_fld[FLD_QUEUE];
 		prf_lc_prms[nb_prf_lc_prms].high_perf =
 				!!(uint8_t)int_fld[FLD_LCORE_HP];
 		prf_lc_prms[nb_prf_lc_prms].lcore_idx =