From patchwork Sat Feb 3 03:11:50 2018
From: Mallesh Koujalagi
To: dev@dpdk.org
Cc: mtetsuyah@gmail.com, ferruh.yigit@intel.com, malleshx.koujalagi@intel.com
Date: Fri, 2 Feb 2018 19:11:50 -0800
Message-Id: <1517627510-60932-1-git-send-email-malleshx.koujalagi@intel.com>
Subject: [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
X-Patchwork-Id: 34902
X-Patchwork-Delegate: ferruh.yigit@amd.com

Bulk allocation and freeing of mbufs, instead of per-packet alloc/free
calls, increases throughput by more than ~2% on a single core.
Signed-off-by: Mallesh Koujalagi
---
 drivers/net/null/rte_eth_null.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 9385ffd..247ede0 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -130,10 +130,11 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		return 0;
 
 	packet_size = h->internals->packet_size;
+
+	if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
+		return 0;
+
 	for (i = 0; i < nb_bufs; i++) {
-		bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
-		if (!bufs[i])
-			break;
 		rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *),
 					h->dummy_packet, packet_size);
 		bufs[i]->data_len = (uint16_t)packet_size;
@@ -149,18 +150,15 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 static uint16_t
 eth_null_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 {
-	int i;
 	struct null_queue *h = q;
 
 	if ((q == NULL) || (bufs == NULL))
 		return 0;
 
-	for (i = 0; i < nb_bufs; i++)
-		rte_pktmbuf_free(bufs[i]);
+	rte_mempool_put_bulk(bufs[0]->pool, (void **)bufs, nb_bufs);
+	rte_atomic64_add(&h->tx_pkts, nb_bufs);
 
-	rte_atomic64_add(&(h->tx_pkts), i);
-
-	return i;
+	return nb_bufs;
 }
 
 static uint16_t