From patchwork Mon Nov 21 18:07:56 2022
X-Patchwork-Submitter: Hanumanth Pothula
X-Patchwork-Id: 120002
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Hanumanth Pothula
To: Aman Singh, Yuying Zhang
Subject: [PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature
Date: Mon, 21 Nov 2022 23:37:56 +0530
Message-ID: <20221121180756.3924770-1-hpothula@marvell.com>
In-Reply-To: <20221121143347.3923255-1-hpothula@marvell.com>
References: <20221121143347.3923255-1-hpothula@marvell.com>
List-Id: DPDK patches and discussions

Validate the ethdev parameter 'max_rx_mempools' to know whether the
device supports the multi-mempool feature or not. Also, add a new
testpmd command-line argument, multi-rx-mempool, to control the
multi-mempool feature. It is disabled by default.

Bugzilla ID: 1128
Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

Signed-off-by: Hanumanth Pothula
Reviewed-by: Ferruh Yigit
Tested-by: Yingya Han
Tested-by: Yaqi Tang

---
v7:
 - Updated the testpmd argument name from multi-mempool to multi-rx-mempool.
 - Updated the definition of the testpmd argument mbuf-size.
 - Fixed indentation.
v6:
 - Updated the run_app.rst file with the multi-mempool argument.
 - Defined and populated multi_mempool at the related arguments.
 - Invoking rte_eth_dev_info_get() within the multi-mempool condition.
v5:
 - Added a testpmd argument to enable the multi-mempool feature.
 - Simplified the logic to distinguish between multi-mempool,
   multi-segment and single pool/segment configurations.
v4:
 - Updated the if condition.
v3:
 - Simplified the conditional check.
 - Corrected spelling of "whether".
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/parameters.c             |  7 ++-
 app/test-pmd/testpmd.c                | 64 ++++++++++++++++-----------
 app/test-pmd/testpmd.h                |  1 +
 doc/guides/testpmd_app_ug/run_app.rst |  4 ++
 4 files changed, 50 insertions(+), 26 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index aed4cdcb84..af9ec39cf9 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -88,7 +88,8 @@ usage(char* progname)
	       "in NUMA mode.\n");
	printf("  --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
	       "N bytes. If multiple numbers are specified the extra pools "
-	       "will be created to receive with packet split features\n");
+	       "will be created to receive packets based on the features "
+	       "supported, like buffer-split, multi-mempool.\n");
	printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
	       "in mbuf pools.\n");
	printf("  --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
@@ -155,6 +156,7 @@ usage(char* progname)
	printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
	printf("  --txpkts=X[,Y]*: set TX segment sizes"
	       " or total packet length.\n");
+	printf("  --multi-rx-mempool: enable multi-mempool support\n");
	printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
@@ -669,6 +671,7 @@ launch_args_parse(int argc, char** argv)
		{ "rxpkts",			1, 0, 0 },
		{ "rxhdrs",			1, 0, 0 },
		{ "txpkts",			1, 0, 0 },
+		{ "multi-rx-mempool",		0, 0, 0 },
		{ "txonly-multi-flow",		0, 0, 0 },
		{ "rxq-share",			2, 0, 0 },
		{ "eth-link-speed",		1, 0, 0 },
@@ -1295,6 +1298,8 @@ launch_args_parse(int argc, char** argv)
				else
					rte_exit(EXIT_FAILURE, "bad txpkts\n");
			}
+			if (!strcmp(lgopts[opt_idx].name, "multi-rx-mempool"))
+				multi_rx_mempool = 1;
			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
				txonly_multi_flow = 1;
			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..716937925e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
  */
 uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
+uint8_t  multi_rx_mempool; /**< Enables multi-mempool feature */
 uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
 uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
@@ -2659,24 +2660,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
	uint32_t prev_hdrs = 0;
	int ret;

-	/* Verify Rx queue configuration is single pool and segment or
-	 * multiple pool/segment.
-	 * @see rte_eth_rxconf::rx_mempools
-	 * @see rte_eth_rxconf::rx_seg
-	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
-		/* Single pool/segment configuration */
-		rx_conf->rx_seg = NULL;
-		rx_conf->rx_nseg = 0;
-		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
-					     nb_rx_desc, socket_id,
-					     rx_conf, mp);
-		goto exit;
-	}
-
-	if (rx_pkt_nb_segs > 1 ||
-	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+	if ((rx_pkt_nb_segs > 1) &&
+	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
		/* multi-segment configuration */
		for (i = 0; i < rx_pkt_nb_segs; i++) {
			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
@@ -2701,22 +2687,50 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
		}
		rx_conf->rx_nseg = rx_pkt_nb_segs;
		rx_conf->rx_seg = rx_useg;
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+					     socket_id, rx_conf, NULL);
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+	} else if (multi_rx_mempool == 1) {
		/* multi-pool configuration */
+		struct rte_eth_dev_info dev_info;
+
+		if (mbuf_data_size_n <= 1) {
+			RTE_LOG(ERR, EAL, "invalid number of mempools %u",
+				mbuf_data_size_n);
+			return -EINVAL;
+		}
+		ret = rte_eth_dev_info_get(port_id, &dev_info);
+		if (ret != 0)
+			return ret;
+		if (dev_info.max_rx_mempools == 0) {
+			RTE_LOG(ERR, EAL, "device doesn't support requested multi-mempool configuration");
+			return -ENOTSUP;
+		}
		for (i = 0; i < mbuf_data_size_n; i++) {
			mpx = mbuf_pool_find(socket_id, i);
			rx_mempool[i] = mpx ? mpx : mp;
		}
		rx_conf->rx_mempools = rx_mempool;
		rx_conf->rx_nmempool = mbuf_data_size_n;
-	}
-	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
					     socket_id, rx_conf, NULL);
-	rx_conf->rx_seg = NULL;
-	rx_conf->rx_nseg = 0;
-	rx_conf->rx_mempools = NULL;
-	rx_conf->rx_nmempool = 0;
-exit:
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+	} else {
+		/* Single pool/segment configuration */
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+					     socket_id, rx_conf, mp);
+	}
+
	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
						RTE_ETH_QUEUE_STATE_STOPPED :
						RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index aaf69c349a..0596d38cd2 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -589,6 +589,7 @@ extern uint32_t max_rx_pkt_len;
 extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
 extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
+extern uint8_t  multi_rx_mempool; /**< Enables multi-mempool feature. */
 extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 extern uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 610e442924..af84b2260a 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -365,6 +365,10 @@ The command line options are:
     Set TX segment sizes or total packet length. Valid for ``tx-only``
     and ``flowgen`` forwarding modes.

+*   ``--multi-rx-mempool``
+
+    Enable multi-mempool support, i.e. multiple mbuf pools per Rx queue.
+
 *   ``--txonly-multi-flow``

     Generate multiple flows in txonly mode.