From patchwork Thu Jan 28 09:45:59 2016
From: "Chen Jing D(Mark)"
To: michael.qiu@intel.com, konstantin.ananyev@intel.com
Cc: dev@dpdk.org
Date: Thu, 28 Jan 2016 17:45:59 +0800
Subject: [dpdk-dev] [PATCH] fm10k: optimize legacy TX func
Message-Id: <1453974359-20895-1-git-send-email-jing.d.chen@intel.com>
X-Patchwork-Id: 10177
X-Patchwork-Delegate: bruce.richardson@intel.com
List-Id: patches and discussions about DPDK
When the legacy TX func frees a bunch of mbufs, it frees them one by one. This change scans the free list and merges requests that belong to the same pool, then frees them with a single bulk call, which reduces the cycles spent on freeing mbufs.

Signed-off-by: Chen Jing D(Mark)
Acked-by: Shaopeng He
---
 doc/guides/rel_notes/release_2_3.rst |    2 +
 drivers/net/fm10k/fm10k_rxtx.c       |   59 ++++++++++++++++++++++++++++-----
 2 files changed, 52 insertions(+), 9 deletions(-)

diff --git a/doc/guides/rel_notes/release_2_3.rst b/doc/guides/rel_notes/release_2_3.rst
index 99de186..20ce78d 100644
--- a/doc/guides/rel_notes/release_2_3.rst
+++ b/doc/guides/rel_notes/release_2_3.rst
@@ -3,7 +3,9 @@ DPDK Release 2.3
 
 New Features
 ------------
 
+* **Optimize fm10k Tx func.**
+  * Free multiple mbufs at a time to reduce freeing mbuf cycles.
 
 Resolved Issues
 ---------------

diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index e958865..f3de691 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -369,6 +369,51 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rcv;
 }
 
+/*
+ * Free multiple TX mbufs at a time if they are in the same pool
+ *
+ * @txep: software desc ring position where freeing starts
+ * @num: number of descs to free
+ *
+ */
+static inline void tx_free_bulk_mbuf(struct rte_mbuf **txep, int num)
+{
+	struct rte_mbuf *m, *free[RTE_FM10K_TX_MAX_FREE_BUF_SZ];
+	int i;
+	int nb_free = 0;
+
+	if (unlikely(num == 0))
+		return;
+
+	m = __rte_pktmbuf_prefree_seg(txep[0]);
+	if (likely(m != NULL)) {
+		free[0] = m;
+		nb_free = 1;
+		for (i = 1; i < num; i++) {
+			m = __rte_pktmbuf_prefree_seg(txep[i]);
+			if (likely(m != NULL)) {
+				if (likely(m->pool == free[0]->pool))
+					free[nb_free++] = m;
+				else {
+					rte_mempool_put_bulk(free[0]->pool,
+							(void *)free, nb_free);
+					free[0] = m;
+					nb_free = 1;
+				}
+			}
+			txep[i] = NULL;
+		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+	} else {
+		for (i = 1; i < num; i++) {
+			m = __rte_pktmbuf_prefree_seg(txep[i]);
+			if (m != NULL)
+				rte_mempool_put(m->pool, m);
+			txep[i] = NULL;
+		}
+	}
+}
+
 static inline void tx_free_descriptors(struct fm10k_tx_queue *q)
 {
 	uint16_t next_rs, count = 0;
@@ -385,11 +430,7 @@ static inline void tx_free_descriptors(struct fm10k_tx_queue *q)
 	 * including nb_desc */
 	if (q->last_free > next_rs) {
 		count = q->nb_desc - q->last_free;
-		while (q->last_free < q->nb_desc) {
-			rte_pktmbuf_free_seg(q->sw_ring[q->last_free]);
-			q->sw_ring[q->last_free] = NULL;
-			++q->last_free;
-		}
+		tx_free_bulk_mbuf(&q->sw_ring[q->last_free], count);
 		q->last_free = 0;
 	}
 
@@ -397,10 +438,10 @@ static inline void tx_free_descriptors(struct fm10k_tx_queue *q)
 	q->nb_free += count + (next_rs + 1 - q->last_free);
 
 	/* free buffers from last_free, up to and including next_rs */
-	while (q->last_free <= next_rs) {
-		rte_pktmbuf_free_seg(q->sw_ring[q->last_free]);
-		q->sw_ring[q->last_free] = NULL;
-		++q->last_free;
+	if (q->last_free <= next_rs) {
+		count = next_rs - q->last_free + 1;
+		tx_free_bulk_mbuf(&q->sw_ring[q->last_free], count);
+		q->last_free += count;
 	}
 
 	if (q->last_free == q->nb_desc)