From patchwork Tue Apr 2 09:53:33 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 52090
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Pavan Nikhilesh Bhagavatula
To: Jerin Jacob Kollanukkaran, "thomas@monjalon.net",
 "arybchenko@solarflare.com", "ferruh.yigit@intel.com",
 "bernard.iremonger@intel.com", "alialnu@mellanox.com"
Cc: "dev@dpdk.org", Pavan Nikhilesh Bhagavatula
Subject: [dpdk-dev] [PATCH v6 3/4] app/testpmd: move pkt prepare logic into
 a separate function
Date: Tue, 2 Apr 2019 09:53:33 +0000
Message-ID: <20190402095255.848-3-pbhagavatula@marvell.com>
In-Reply-To: <20190402095255.848-1-pbhagavatula@marvell.com>
References: <20190228194128.14236-1-pbhagavatula@marvell.com>
 <20190402095255.848-1-pbhagavatula@marvell.com>
x-mailer: git-send-email 2.21.0

From: Pavan Nikhilesh

Move the packet prepare logic into a separate function so that
it can be reused later.
Signed-off-by: Pavan Nikhilesh
---
 app/test-pmd/txonly.c | 163 +++++++++++++++++++++---------------------
 1 file changed, 83 insertions(+), 80 deletions(-)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 65171c1d1..56ca0ad24 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -148,6 +148,80 @@ setup_pkt_udp_ip_headers(struct ipv4_hdr *ip_hdr,
 	ip_hdr->hdr_checksum = (uint16_t) ip_cksum;
 }
 
+static inline bool
+pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
+		struct ether_hdr *eth_hdr, const uint16_t vlan_tci,
+		const uint16_t vlan_tci_outer, const uint64_t ol_flags)
+{
+	struct rte_mbuf *pkt_segs[RTE_MAX_SEGS_PER_PKT];
+	uint8_t ip_var = RTE_PER_LCORE(_ip_var);
+	struct rte_mbuf *pkt_seg;
+	uint32_t nb_segs, pkt_len;
+	uint8_t i;
+
+	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
+		nb_segs = random() % tx_pkt_nb_segs + 1;
+	else
+		nb_segs = tx_pkt_nb_segs;
+
+	if (nb_segs > 1) {
+		if (rte_mempool_get_bulk(mbp, (void **)pkt_segs, nb_segs))
+			return false;
+	}
+
+	rte_pktmbuf_reset_headroom(pkt);
+	pkt->data_len = tx_pkt_seg_lengths[0];
+	pkt->ol_flags = ol_flags;
+	pkt->vlan_tci = vlan_tci;
+	pkt->vlan_tci_outer = vlan_tci_outer;
+	pkt->l2_len = sizeof(struct ether_hdr);
+	pkt->l3_len = sizeof(struct ipv4_hdr);
+
+	pkt_len = pkt->data_len;
+	pkt_seg = pkt;
+	for (i = 1; i < nb_segs; i++) {
+		pkt_seg->next = pkt_segs[i - 1];
+		pkt_seg = pkt_seg->next;
+		pkt_seg->data_len = tx_pkt_seg_lengths[i];
+		pkt_len += pkt_seg->data_len;
+	}
+	pkt_seg->next = NULL; /* Last segment of packet. */
+	/*
+	 * Copy headers in first packet segment(s).
+	 */
+	copy_buf_to_pkt(eth_hdr, sizeof(*eth_hdr), pkt, 0);
+	copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
+			sizeof(struct ether_hdr));
+	if (txonly_multi_flow) {
+		struct ipv4_hdr *ip_hdr;
+		uint32_t addr;
+
+		ip_hdr = rte_pktmbuf_mtod_offset(pkt,
+				struct ipv4_hdr *,
+				sizeof(struct ether_hdr));
+		/*
+		 * Generate multiple flows by varying IP src addr. This
+		 * enables packets are well distributed by RSS in
+		 * receiver side if any and txonly mode can be a decent
+		 * packet generator for developer's quick performance
+		 * regression test.
+		 */
+		addr = (IP_DST_ADDR | (ip_var++ << 8)) + rte_lcore_id();
+		ip_hdr->src_addr = rte_cpu_to_be_32(addr);
+	}
+	copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
+			sizeof(struct ether_hdr) +
+			sizeof(struct ipv4_hdr));
+	/*
+	 * Complete first mbuf of packet and append it to the
+	 * burst of packets to be transmitted.
+	 */
+	pkt->nb_segs = nb_segs;
+	pkt->pkt_len = pkt_len;
+
+	return true;
+}
+
 /*
  * Transmit a burst of multi-segments packets.
  */
@@ -155,10 +229,8 @@ static void
 pkt_burst_transmit(struct fwd_stream *fs)
 {
 	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
-	struct rte_mbuf *pkt_segs[RTE_MAX_SEGS_PER_PKT];
 	struct rte_port *txp;
 	struct rte_mbuf *pkt;
-	struct rte_mbuf *pkt_seg;
 	struct rte_mempool *mbp;
 	struct ether_hdr eth_hdr;
 	uint16_t nb_tx;
@@ -166,15 +238,12 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	uint16_t vlan_tci, vlan_tci_outer;
 	uint32_t retry;
 	uint64_t ol_flags = 0;
-	uint8_t ip_var = RTE_PER_LCORE(_ip_var);
-	uint8_t i;
 	uint64_t tx_offloads;
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	uint64_t start_tsc;
 	uint64_t end_tsc;
 	uint64_t core_cycles;
 #endif
-	uint32_t nb_segs, pkt_len;
 
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	start_tsc = rte_rdtsc();
@@ -201,85 +270,19 @@ pkt_burst_transmit(struct fwd_stream *fs)
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
 		pkt = rte_mbuf_raw_alloc(mbp);
-		if (pkt == NULL) {
-		nomore_mbuf:
-			if (nb_pkt == 0)
-				return;
+		if (pkt == NULL)
+			break;
+		if (unlikely(!pkt_burst_prepare(pkt, mbp, &eth_hdr, vlan_tci,
+						vlan_tci_outer, ol_flags))) {
+			rte_pktmbuf_free(pkt);
 			break;
 		}
-
-		/*
-		 * Using raw alloc is good to improve performance,
-		 * but some consumers may use the headroom and so
-		 * decrement data_off. We need to make sure it is
-		 * reset to default value.
-		 */
-		rte_pktmbuf_reset_headroom(pkt);
-		pkt->data_len = tx_pkt_seg_lengths[0];
-		pkt_seg = pkt;
-
-		if (tx_pkt_split == TX_PKT_SPLIT_RND)
-			nb_segs = random() % tx_pkt_nb_segs + 1;
-		else
-			nb_segs = tx_pkt_nb_segs;
-
-		if (nb_segs > 1) {
-			if (rte_mempool_get_bulk(mbp, (void **)pkt_segs,
-							nb_segs)) {
-				rte_pktmbuf_free(pkt);
-				goto nomore_mbuf;
-			}
-		}
-
-		pkt_len = pkt->data_len;
-		for (i = 1; i < nb_segs; i++) {
-			pkt_seg->next = pkt_segs[i - 1];
-			pkt_seg = pkt_seg->next;
-			pkt_seg->data_len = tx_pkt_seg_lengths[i];
-			pkt_len += pkt_seg->data_len;
-		}
-		pkt_seg->next = NULL; /* Last segment of packet. */
-
-		/*
-		 * Copy headers in first packet segment(s).
-		 */
-		copy_buf_to_pkt(&eth_hdr, sizeof(eth_hdr), pkt, 0);
-		copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
-				sizeof(struct ether_hdr));
-		if (txonly_multi_flow) {
-			struct ipv4_hdr *ip_hdr;
-			uint32_t addr;
-
-			ip_hdr = rte_pktmbuf_mtod_offset(pkt,
-					struct ipv4_hdr *,
-					sizeof(struct ether_hdr));
-			/*
-			 * Generate multiple flows by varying IP src addr. This
-			 * enables packets are well distributed by RSS in
-			 * receiver side if any and txonly mode can be a decent
-			 * packet generator for developer's quick performance
-			 * regression test.
-			 */
-			addr = (IP_DST_ADDR | (ip_var++ << 8)) + rte_lcore_id();
-			ip_hdr->src_addr = rte_cpu_to_be_32(addr);
-		}
-		copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
-				sizeof(struct ether_hdr) +
-				sizeof(struct ipv4_hdr));
-
-		/*
-		 * Complete first mbuf of packet and append it to the
-		 * burst of packets to be transmitted.
-		 */
-		pkt->nb_segs = nb_segs;
-		pkt->pkt_len = pkt_len;
-		pkt->ol_flags = ol_flags;
-		pkt->vlan_tci = vlan_tci;
-		pkt->vlan_tci_outer = vlan_tci_outer;
-		pkt->l2_len = sizeof(struct ether_hdr);
-		pkt->l3_len = sizeof(struct ipv4_hdr);
 		pkts_burst[nb_pkt] = pkt;
 	}
+
+	if (nb_pkt == 0)
+		return;
+
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt);
 	/*
 	 * Retry if necessary
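
The net effect of the patch is easiest to see from the caller's side: pkt_burst_transmit() now only
allocates an mbuf, calls the helper, and transmits whatever was successfully prepared. Below is a
minimal sketch of that allocate/prepare/transmit pattern, not testpmd code: the helper pointer
prepare_one_pkt, the BURST_SZ constant, and the port/queue/mempool arguments are hypothetical
stand-ins, while the error handling mirrors the patch (a failed prepare frees the mbuf and ends the
burst early, and nothing is sent when no packet was built).

    #include <stdbool.h>
    #include <rte_mbuf.h>
    #include <rte_ethdev.h>

    #define BURST_SZ 32	/* hypothetical burst size; testpmd uses nb_pkt_per_burst */

    static uint16_t
    send_burst(uint16_t port, uint16_t queue, struct rte_mempool *mbp,
    	   bool (*prepare_one_pkt)(struct rte_mbuf *pkt, struct rte_mempool *mbp))
    {
    	struct rte_mbuf *burst[BURST_SZ];
    	uint16_t nb_pkt;

    	for (nb_pkt = 0; nb_pkt < BURST_SZ; nb_pkt++) {
    		struct rte_mbuf *pkt = rte_mbuf_raw_alloc(mbp);

    		if (pkt == NULL)
    			break;			/* pool exhausted: send what we have */
    		if (!prepare_one_pkt(pkt, mbp)) {
    			rte_pktmbuf_free(pkt);	/* segment allocation failed inside prepare */
    			break;
    		}
    		burst[nb_pkt] = pkt;
    	}

    	if (nb_pkt == 0)
    		return 0;			/* nothing was built, nothing to send */

    	return rte_eth_tx_burst(port, queue, burst, nb_pkt);
    }

Keeping the helper's failure path to a simple bool return is what lets the caller drop the old
nomore_mbuf goto label: every early exit now falls through to the same "transmit what we have"
logic.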