From patchwork Thu Feb 26 23:15:06 2015
X-Patchwork-Submitter: "vadim.suraev@gmail.com"
X-Patchwork-Id: 3752
From: "vadim.suraev@gmail.com"
To: dev@dpdk.org
Date: Fri, 27 Feb 2015 01:15:06 +0200
Message-Id: <1424992506-20484-1-git-send-email-vadim.suraev@gmail.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [dpdk-dev] [PATCH] rte_mbuf: scattered pktmbufs freeing optimization

From: "vadim.suraev@gmail.com"

The new function rte_pktmbuf_free_bulk makes freeing long scattered
(chained) pktmbufs belonging to the same pool more efficient, using
rte_mempool_put_bulk rather than calling rte_mempool_put for each
segment.

Unlike rte_pktmbuf_free, which calls rte_pktmbuf_free_seg, this
function calls __rte_pktmbuf_prefree_seg for each segment. Whenever a
non-NULL pointer is returned, it is placed in an array; when the array
is full, or when the last segment has been processed, the array is
flushed with rte_mempool_put_bulk. In the multiple-producer case this
performs about 3 times better.

Signed-off-by: vadim.suraev@gmail.com
---
 lib/librte_mbuf/rte_mbuf.h | 55 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 17ba791..1d6f848 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -824,6 +824,61 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
 	}
 }
 
+/* Maximum number of mbufs accumulated before one bulk free in
+ * rte_pktmbuf_free_bulk */
+#define MAX_MBUF_FREE_SIZE 32
+
+/* If RTE_LIBRTE_MBUF_DEBUG is enabled, check that all mbufs belong to
+ * the same mempool */
+#ifdef RTE_LIBRTE_MBUF_DEBUG
+
+#define RTE_MBUF_MEMPOOL_CHECK1(m) struct rte_mempool *first_buffers_mempool = (m) ? \
+	(m)->pool : NULL
+
+#define RTE_MBUF_MEMPOOL_CHECK2(m) RTE_MBUF_ASSERT(first_buffers_mempool == (m)->pool)
+
+#else
+
+#define RTE_MBUF_MEMPOOL_CHECK1(m)
+
+#define RTE_MBUF_MEMPOOL_CHECK2(m)
+
+#endif
+
+/**
+ * Free a chained (scattered) mbuf back into its original mempool.
+ *
+ * All the mbufs in the chain must belong to the same mempool.
+ *
+ * @param head
+ *   The head of the mbuf chain to be freed
+ */
+static inline void __attribute__((always_inline))
+rte_pktmbuf_free_bulk(struct rte_mbuf *head)
+{
+	void *mbufs[MAX_MBUF_FREE_SIZE];
+	unsigned mbufs_count = 0;
+	struct rte_mbuf *next;
+
+	RTE_MBUF_MEMPOOL_CHECK1(head);
+
+	while (head) {
+		next = head->next;
+		head->next = NULL;
+		if (__rte_pktmbuf_prefree_seg(head)) {
+			RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(head) == 0);
+			RTE_MBUF_MEMPOOL_CHECK2(head);
+			mbufs[mbufs_count++] = head;
+		}
+		head = next;
+		if (mbufs_count == MAX_MBUF_FREE_SIZE) {
+			rte_mempool_put_bulk(((struct rte_mbuf *)mbufs[0])->pool,
+					     mbufs, mbufs_count);
+			mbufs_count = 0;
+		}
+	}
+	if (mbufs_count > 0)
+		rte_mempool_put_bulk(((struct rte_mbuf *)mbufs[0])->pool,
+				     mbufs, mbufs_count);
+}
+
 /**
  * Creates a "clone" of the given packet mbuf.
  *
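
Usage note (not part of the patch): a minimal sketch of where the new
API could slot in, e.g. freeing the unsent tail after a partial
rte_eth_tx_burst(). The names drop_unsent, pkts, nb_pkts and nb_sent
are hypothetical; EAL/port/mempool setup is elided, and every segment
of a given chain is assumed to come from a single mempool, as the
function requires.

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical drop path: each pkts[i] is the head of a possibly
 * multi-segment chain; free the whole chain with one bulk-capable
 * call instead of per-segment rte_pktmbuf_free(). */
static void
drop_unsent(struct rte_mbuf **pkts, uint16_t nb_pkts, uint16_t nb_sent)
{
	uint16_t i;

	for (i = nb_sent; i < nb_pkts; i++)
		rte_pktmbuf_free_bulk(pkts[i]);
}

The gain comes from amortizing the mempool put across up to
MAX_MBUF_FREE_SIZE segments at a time; with a multi-producer mempool
each individual put is an atomic ring operation, which is consistent
with the roughly 3x improvement quoted in the commit message.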