From patchwork Sun Mar 31 13:14:20 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 51949
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Pavan Nikhilesh Bhagavatula
To: Jerin Jacob Kollanukkaran, thomas@monjalon.net,
 arybchenko@solarflare.com, ferruh.yigit@intel.com,
 bernard.iremonger@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh Bhagavatula
Date: Sun, 31 Mar 2019 13:14:20 +0000
Message-ID: <20190331131341.12924-1-pbhagavatula@marvell.com>
References: <20190228194128.14236-1-pbhagavatula@marvell.com>
In-Reply-To: <20190228194128.14236-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v5 1/2] app/testpmd: optimize testpmd txonly mode

From: Pavan Nikhilesh

Optimize testpmd txonly mode by
1. Moving the per-packet Ethernet header initialization above the loop.
2. Using bulk ops to allocate segments instead of an inner loop for
   every segment.

Also, move the packet prepare logic into a separate function so that it
can be reused later.

Signed-off-by: Pavan Nikhilesh

---
v5 Changes:
- Remove unnecessary change to struct rte_port *txp (movement). (Bernard)

v4 Changes:
- Fix packet len calculation.

v3 Changes:
- Split the patches for easier review. (Thomas)
- Remove unnecessary assignments to 0. (Bernard)

v2 Changes:
- Use bulk ops for fetching segments. (Andrew Rybchenko)
- Fallback to rte_mbuf_raw_alloc if bulk get fails. (Andrew Rybchenko)
- Fix mbufs not being freed when there are no more mbufs available
  for segments. (Andrew Rybchenko)
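A note on the bulk-segment allocation in change 2, for reviewers less
familiar with the mempool API: rte_mempool_get_bulk() is all-or-nothing,
i.e. it either fills every slot of the object table and returns 0, or
returns an error having taken nothing from the pool. That is why a
failed bulk get needs no per-segment unwinding (the old nomore_mbuf
path). The stand-alone sketch below illustrates the pattern; the helper
name, its parameters, and the local MAX_SEGS define are illustrative
only, not part of this patch:

#include <stdint.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>

#define MAX_SEGS 255	/* illustrative; testpmd uses RTE_MAX_SEGS_PER_PKT */

/*
 * Chain nb_segs - 1 extra segments onto an already-allocated head mbuf.
 * rte_mempool_get_bulk() either fills the whole table and returns 0 or
 * takes nothing from the pool, so on failure the caller frees only the
 * head mbuf; no partially built chain exists.
 */
static int
chain_segments(struct rte_mempool *mbp, struct rte_mbuf *head,
	       const uint16_t *seg_lens, uint32_t nb_segs)
{
	struct rte_mbuf *segs[MAX_SEGS];
	struct rte_mbuf *cur = head;
	uint32_t pkt_len = seg_lens[0];
	uint32_t i;

	/* One call replaces nb_segs - 1 per-segment allocations. */
	if (nb_segs > 1 &&
	    rte_mempool_get_bulk(mbp, (void **)segs, nb_segs - 1) != 0)
		return -1;

	head->data_len = seg_lens[0];
	for (i = 1; i < nb_segs; i++) {
		cur->next = segs[i - 1];
		cur = cur->next;
		cur->data_len = seg_lens[i];
		pkt_len += seg_lens[i];
	}
	cur->next = NULL;	/* terminate the segment chain */
	head->nb_segs = (uint16_t)nb_segs;
	head->pkt_len = pkt_len;
	return 0;
}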
 app/test-pmd/txonly.c | 139 +++++++++++++++++++++++-------------------
 1 file changed, 76 insertions(+), 63 deletions(-)

--
2.21.0

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 1f08b6ed3..9c0147089 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -147,6 +147,63 @@ setup_pkt_udp_ip_headers(struct ipv4_hdr *ip_hdr,
 	ip_hdr->hdr_checksum = (uint16_t) ip_cksum;
 }

+static inline bool
+pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
+		struct ether_hdr *eth_hdr, const uint16_t vlan_tci,
+		const uint16_t vlan_tci_outer, const uint64_t ol_flags)
+{
+	struct rte_mbuf *pkt_segs[RTE_MAX_SEGS_PER_PKT];
+	struct rte_mbuf *pkt_seg;
+	uint32_t nb_segs, pkt_len;
+	uint8_t i;
+
+	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
+		nb_segs = random() % tx_pkt_nb_segs + 1;
+	else
+		nb_segs = tx_pkt_nb_segs;
+
+	if (nb_segs > 1) {
+		if (rte_mempool_get_bulk(mbp, (void **)pkt_segs, nb_segs - 1))
+			return false;
+	}
+
+	rte_pktmbuf_reset_headroom(pkt);
+	pkt->data_len = tx_pkt_seg_lengths[0];
+	pkt->ol_flags = ol_flags;
+	pkt->vlan_tci = vlan_tci;
+	pkt->vlan_tci_outer = vlan_tci_outer;
+	pkt->l2_len = sizeof(struct ether_hdr);
+	pkt->l3_len = sizeof(struct ipv4_hdr);
+
+	pkt_len = pkt->data_len;
+	pkt_seg = pkt;
+	for (i = 1; i < nb_segs; i++) {
+		pkt_seg->next = pkt_segs[i - 1];
+		pkt_seg = pkt_seg->next;
+		pkt_seg->data_len = tx_pkt_seg_lengths[i];
+		pkt_len += pkt_seg->data_len;
+	}
+	pkt_seg->next = NULL; /* Last segment of packet. */
+	/*
+	 * Copy headers in first packet segment(s).
+	 */
+	copy_buf_to_pkt(eth_hdr, sizeof(struct ether_hdr), pkt, 0);
+	copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
+			sizeof(struct ether_hdr));
+	copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
+			sizeof(struct ether_hdr) +
+			sizeof(struct ipv4_hdr));
+
+	/*
+	 * Complete first mbuf of packet and append it to the
+	 * burst of packets to be transmitted.
+	 */
+	pkt->nb_segs = nb_segs;
+	pkt->pkt_len = pkt_len;
+
+	return true;
+}
+
 /*
  * Transmit a burst of multi-segments packets.
  */
@@ -156,7 +213,6 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
 	struct rte_port *txp;
 	struct rte_mbuf *pkt;
-	struct rte_mbuf *pkt_seg;
 	struct rte_mempool *mbp;
 	struct ether_hdr eth_hdr;
 	uint16_t nb_tx;
@@ -164,14 +220,12 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	uint16_t vlan_tci, vlan_tci_outer;
 	uint32_t retry;
 	uint64_t ol_flags = 0;
-	uint8_t i;
 	uint64_t tx_offloads;
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	uint64_t start_tsc;
 	uint64_t end_tsc;
 	uint64_t core_cycles;
 #endif
-	uint32_t nb_segs, pkt_len;

 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	start_tsc = rte_rdtsc();
@@ -188,72 +242,31 @@ pkt_burst_transmit(struct fwd_stream *fs)
 		ol_flags |= PKT_TX_QINQ_PKT;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
+
+	/*
+	 * Initialize Ethernet header.
+	 */
+	ether_addr_copy(&peer_eth_addrs[fs->peer_addr], &eth_hdr.d_addr);
+	ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.s_addr);
+	eth_hdr.ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
 		pkt = rte_mbuf_raw_alloc(mbp);
-		if (pkt == NULL) {
-		nomore_mbuf:
-			if (nb_pkt == 0)
-				return;
+		if (pkt == NULL)
+			break;
+		if (unlikely(!pkt_burst_prepare(pkt, mbp,
+						&eth_hdr, vlan_tci,
+						vlan_tci_outer,
+						ol_flags))) {
+			rte_mempool_put(mbp, pkt);
 			break;
 		}
-
-		/*
-		 * Using raw alloc is good to improve performance,
-		 * but some consumers may use the headroom and so
-		 * decrement data_off. We need to make sure it is
-		 * reset to default value.
-		 */
-		rte_pktmbuf_reset_headroom(pkt);
-		pkt->data_len = tx_pkt_seg_lengths[0];
-		pkt_seg = pkt;
-		if (tx_pkt_split == TX_PKT_SPLIT_RND)
-			nb_segs = random() % tx_pkt_nb_segs + 1;
-		else
-			nb_segs = tx_pkt_nb_segs;
-		pkt_len = pkt->data_len;
-		for (i = 1; i < nb_segs; i++) {
-			pkt_seg->next = rte_mbuf_raw_alloc(mbp);
-			if (pkt_seg->next == NULL) {
-				pkt->nb_segs = i;
-				rte_pktmbuf_free(pkt);
-				goto nomore_mbuf;
-			}
-			pkt_seg = pkt_seg->next;
-			pkt_seg->data_len = tx_pkt_seg_lengths[i];
-			pkt_len += pkt_seg->data_len;
-		}
-		pkt_seg->next = NULL; /* Last segment of packet. */
-
-		/*
-		 * Initialize Ethernet header.
-		 */
-		ether_addr_copy(&peer_eth_addrs[fs->peer_addr],&eth_hdr.d_addr);
-		ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.s_addr);
-		eth_hdr.ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
-
-		/*
-		 * Copy headers in first packet segment(s).
-		 */
-		copy_buf_to_pkt(&eth_hdr, sizeof(eth_hdr), pkt, 0);
-		copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
-				sizeof(struct ether_hdr));
-		copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
-				sizeof(struct ether_hdr) +
-				sizeof(struct ipv4_hdr));
-
-		/*
-		 * Complete first mbuf of packet and append it to the
-		 * burst of packets to be transmitted.
-		 */
-		pkt->nb_segs = nb_segs;
-		pkt->pkt_len = pkt_len;
-		pkt->ol_flags = ol_flags;
-		pkt->vlan_tci = vlan_tci;
-		pkt->vlan_tci_outer = vlan_tci_outer;
-		pkt->l2_len = sizeof(struct ether_hdr);
-		pkt->l3_len = sizeof(struct ipv4_hdr);
 		pkts_burst[nb_pkt] = pkt;
 	}
+
+	if (nb_pkt == 0)
+		return;
+
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt);
 	/*
 	 * Retry if necessary