Message ID | 1426628169-1735-1-git-send-email-vadim.suraev@gmail.com (mailing list archive) |
---|---|
State | Changes Requested, archived |
Headers | From: "vadim.suraev@gmail.com" <vadim.suraev@gmail.com>; To: dev@dpdk.org; Date: Tue, 17 Mar 2015 23:36:09 +0200; Message-Id: <1426628169-1735-1-git-send-email-vadim.suraev@gmail.com>; Subject: [dpdk-dev] [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest; List-Id: patches and discussions about DPDK <dev.dpdk.org> |
Commit Message
vadim.suraev@gmail.com
March 17, 2015, 9:36 p.m. UTC
From: "vadim.suraev@gmail.com" <vadim.suraev@gmail.com>

This patch adds mbuf bulk allocation/freeing functions and unittest

Signed-off-by: Vadim Suraev <vadim.suraev@gmail.com>
---
New in v2:
    - function rte_pktmbuf_alloc_bulk added
    - function rte_pktmbuf_bulk_free added
    - function rte_pktmbuf_free_chain added
    - applied reviewers' comments

 app/test/test_mbuf.c       | 94 +++++++++++++++++++++++++++++++++++++++++++-
 lib/librte_mbuf/rte_mbuf.h | 89 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 182 insertions(+), 1 deletion(-)
Comments
Hi Vadim,

> -----Original Message-----
> From: vadim.suraev@gmail.com [mailto:vadim.suraev@gmail.com]
> Sent: Tuesday, March 17, 2015 9:36 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; stephen@networkplumber.org; Ananyev, Konstantin; vadim.suraev@gmail.com
> Subject: [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
>
> From: "vadim.suraev@gmail.com" <vadim.suraev@gmail.com>
>
> This patch adds mbuf bulk allocation/freeing functions and unittest
>
> Signed-off-by: Vadim Suraev <vadim.suraev@gmail.com>
> ---
> New in v2:
>     - function rte_pktmbuf_alloc_bulk added
>     - function rte_pktmbuf_bulk_free added
>     - function rte_pktmbuf_free_chain added
>     - applied reviewers' comments
>
>  app/test/test_mbuf.c       | 94 +++++++++++++++++++++++++++++++++++++++++++-
>  lib/librte_mbuf/rte_mbuf.h | 89 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 182 insertions(+), 1 deletion(-)
>
> diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> index 1ff66cb..b20c6a4 100644
> --- a/app/test/test_mbuf.c
> +++ b/app/test/test_mbuf.c
> @@ -77,6 +77,7 @@
>  #define REFCNT_RING_SIZE (REFCNT_MBUF_NUM * REFCNT_MAX_REF)
>
>  #define MAKE_STRING(x) # x
> +#define MBUF_POOL_LOCAL_CACHE_SIZE 32
>
>  static struct rte_mempool *pktmbuf_pool = NULL;
>
> @@ -405,6 +406,84 @@ test_pktmbuf_pool(void)
>  	return ret;
>  }
>
> +/* test pktmbuf bulk allocation and freeing
> +*/
> +static int
> +test_pktmbuf_pool_bulk(void)
> +{
> +	unsigned i;
> +	/* size of mempool - size of local cache, otherwise may fail */
> +	unsigned mbufs_to_allocate = NB_MBUF - MBUF_POOL_LOCAL_CACHE_SIZE;
> +	struct rte_mbuf *m[mbufs_to_allocate];
> +	int ret = 0;
> +	unsigned mbuf_count_before_allocation = rte_mempool_count(pktmbuf_pool);
> +
> +	for (i = 0; i < mbufs_to_allocate; i++)
> +		m[i] = NULL;
> +	/* alloc NB_MBUF-MBUF_POOL_LOCAL_CACHE_SIZE mbufs */
> +	ret = rte_pktmbuf_alloc_bulk(pktmbuf_pool, m, mbufs_to_allocate);
> +	if (ret) {
> +		printf("cannot allocate %d mbufs bulk mempool_cnt=%d ret=%d\n",
> +		       mbufs_to_allocate,
> +		       rte_mempool_count(pktmbuf_pool),
> +		       ret);
> +		return -1;
> +	}
> +	if ((rte_mempool_count(pktmbuf_pool) + mbufs_to_allocate) !=
> +	    mbuf_count_before_allocation) {
> +		printf("mempool count %d + allocated %d != initial %d\n",
> +		       rte_mempool_count(pktmbuf_pool),
> +		       mbufs_to_allocate,
> +		       mbuf_count_before_allocation);
> +		return -1;
> +	}
> +	/* free them */
> +	rte_pktmbuf_bulk_free(m, mbufs_to_allocate);
> +
> +	if (rte_mempool_count(pktmbuf_pool) != mbuf_count_before_allocation) {
> +		printf("mempool count %d != initial %d\n",
> +		       rte_mempool_count(pktmbuf_pool),
> +		       mbuf_count_before_allocation);
> +		return -1;
> +	}
> +	for (i = 0; i < mbufs_to_allocate; i++)
> +		m[i] = NULL;
> +
> +	/* alloc NB_MBUF-MBUF_POOL_LOCAL_CACHE_SIZE mbufs */
> +	ret = rte_pktmbuf_alloc_bulk(pktmbuf_pool, m, mbufs_to_allocate);
> +	if (ret) {
> +		printf("cannot allocate %d mbufs bulk mempool_cnt=%d ret=%d\n",
> +		       mbufs_to_allocate,
> +		       rte_mempool_count(pktmbuf_pool),
> +		       ret);
> +		return -1;
> +	}
> +	if ((rte_mempool_count(pktmbuf_pool) + mbufs_to_allocate) !=
> +	    mbuf_count_before_allocation) {
> +		printf("mempool count %d + allocated %d != initial %d\n",
> +		       rte_mempool_count(pktmbuf_pool),
> +		       mbufs_to_allocate,
> +		       mbuf_count_before_allocation);
> +		return -1;
> +	}
> +
> +	/* chain it */
> +	for (i = 0; i < mbufs_to_allocate - 1; i++) {
> +		m[i]->next = m[i + 1];
> +		m[0]->nb_segs++;
> +	}
> +	/* free them */
> +	rte_pktmbuf_free_chain(m[0]);
> +
> +	if (rte_mempool_count(pktmbuf_pool) != mbuf_count_before_allocation) {
> +		printf("mempool count %d != initial %d\n",
> +		       rte_mempool_count(pktmbuf_pool),
> +		       mbuf_count_before_allocation);
> +		return -1;
> +	}
> +	return ret;
> +}
> +
>  /*
>   * test that the pointer to the data on a packet mbuf is set properly
>   */
> @@ -766,7 +845,8 @@ test_mbuf(void)
>  	if (pktmbuf_pool == NULL) {
>  		pktmbuf_pool =
>  			rte_mempool_create("test_pktmbuf_pool", NB_MBUF,
> -					   MBUF_SIZE, 32,
> +					   MBUF_SIZE,
> +					   MBUF_POOL_LOCAL_CACHE_SIZE,
>  					   sizeof(struct rte_pktmbuf_pool_private),
>  					   rte_pktmbuf_pool_init, NULL,
>  					   rte_pktmbuf_init, NULL,
> @@ -790,6 +870,18 @@ test_mbuf(void)
>  		return -1;
>  	}
>
> +	/* test bulk allocation and freeing */
> +	if (test_pktmbuf_pool_bulk() < 0) {
> +		printf("test_pktmbuf_pool_bulk() failed\n");
> +		return -1;
> +	}
> +
> +	/* once again to ensure all mbufs were freed */
> +	if (test_pktmbuf_pool_bulk() < 0) {
> +		printf("test_pktmbuf_pool_bulk() failed\n");
> +		return -1;
> +	}
> +
>  	/* test that the pointer to the data on a packet mbuf is set properly */
>  	if (test_pktmbuf_pool_ptr() < 0) {
>  		printf("test_pktmbuf_pool_ptr() failed\n");
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 17ba791..995237d 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -825,6 +825,95 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
>  }
>
>  /**
> + * Allocate a bulk of mbufs, initiate refcnt and resets
> + *
> + * @param pool
> + *   memory pool to allocate from
> + * @param mbufs
> + *   Array of pointers to mbuf
> + * @param count
> + *   Array size
> + */
> +static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
> +					 struct rte_mbuf **mbufs,
> +					 unsigned count)
> +{
> +	unsigned idx;
> +	int rc = 0;
> +
> +	rc = rte_mempool_get_bulk(pool, (void **)mbufs, count);
> +	if (unlikely(rc))
> +		return rc;
> +
> +	for (idx = 0; idx < count; idx++) {
> +		RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0);
> +		rte_mbuf_refcnt_set(mbufs[idx], 1);
> +		rte_pktmbuf_reset(mbufs[idx]);
> +	}
> +	return rc;
> +}
> +
> +/**
> + * Free a bulk of mbufs into its original mempool.
> + * This function assumes refcnt equals 1
> + * as well as the freed mbufs are direct

I think you forgot to mention in the comments one more requirement for that function:
all mbufs have to be from the same mempool.

> + *
> + * @param mbufs
> + *   Array of pointers to mbuf
> + * @param count
> + *   Array size
> + */
> +static inline void rte_pktmbuf_bulk_free(struct rte_mbuf **mbufs,
> +					 unsigned count)
> +{
> +	unsigned idx;
> +
> +	RTE_MBUF_ASSERT(count > 0);
> +
> +	for (idx = 0; idx < count; idx++) {
> +		rte_mbuf_refcnt_update(mbufs[idx], -1);

You can do just:
	rte_mbuf_refcnt_set(m, 0);
here and move your assert above it. Something like:
	RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 1);
	rte_mbuf_refcnt_set(m, 0);

Also probably would be a good thing to add one more assert here, something like:
	RTE_MBUF_ASSERT(mbufs[idx]->pool == mbufs[0]->pool);

> +		RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0);
> +	}
> +	rte_mempool_put_bulk(mbufs[0]->pool, (void **)mbufs, count);
> +}
> +
> +/**
> + * Free chained (scattered) mbufs into its original mempool(s).
> + *
> + * @param head
> + *   The head of mbufs to be freed chain. Must not be NULL
> + */
> +static inline void rte_pktmbuf_free_chain(struct rte_mbuf *head)
> +{
> +	struct rte_mbuf *mbufs[head->nb_segs];
> +	unsigned mbufs_count = 0;
> +	struct rte_mbuf *next;
> +
> +	while (head) {
> +		next = head->next;
> +		head->next = NULL;

Shouldn't the line above be inside an if (head != NULL) {...} block?

> +		head = __rte_pktmbuf_prefree_seg(head);
> +		if (likely(head != NULL)) {
> +			RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(head) == 0);

I don't think there is any use of the assert above.
If prefree_seg returns a non-NULL value, it sets refcnt to 0 for that mbuf.

> +			if (likely((!mbufs_count) ||
> +				   (head->pool == mbufs[0]->pool)))
> +				mbufs[mbufs_count++] = head;
> +			else {
> +				rte_mempool_put_bulk(mbufs[0]->pool,
> +						     (void **)mbufs,
> +						     mbufs_count);
> +				mbufs_count = 0;
> +			}
> +		}
> +		head = next;
> +	}
> +	if (mbufs_count > 0)
> +		rte_mempool_put_bulk(mbufs[0]->pool,
> +				     (void **)mbufs,
> +				     mbufs_count);
> +}
> +
> +/**
>  * Creates a "clone" of the given packet mbuf.
>  *
>  * Walks through all segments of the given packet mbuf, and for each of them:
> --
> 1.7.9.5
Hi, Konstantin,

> Shouldn't the line above be inside if (head != NULL) {...} block?

This is removed as Olivier commented before:

>> +{
>> +	if (likely(head != NULL)) {
>
> I think we should remove this test. The other mbuf functions do not
> check this.

Regards,
Vadim.
Hi Vadim,

> From: Vadim Suraev [mailto:vadim.suraev@gmail.com]
> Sent: Wednesday, March 18, 2015 5:19 AM
> To: Ananyev, Konstantin
> Cc: dev@dpdk.org; olivier.matz@6wind.com; stephen@networkplumber.org
> Subject: Re: [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
>
> Hi, Konstantin,
>
> > Shouldn't the line above be inside if (head != NULL) {...} block?
> This is removed as Olivier commented before:
>
> >> +{
> >> +	if (likely(head != NULL)) {
>
> > I think we should remove this test. The other mbuf functions do not
> > check this.
> Regards,
> Vadim.

I meant that in my opinion it should be:

	while (head) {
		next = head->next;
-		head->next = NULL;
		head = __rte_pktmbuf_prefree_seg(head);
		if (likely(head != NULL)) {
+			head->next = NULL;
			RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(head) == 0);

Same as rte_pktmbuf_free() is doing.

Konstantin
Hi, Konstantin,

Got it. To make the same, nulling the next should be inside the block, as you said.

One question arises here: if a segment in the chain has refcnt > 1 (so its next is not assigned NULL), and the next segment has refcnt == 1 (so it is freed), do you think this scenario is real/should be considered? If so, the former can only be safely freed by calling rte_pktmbuf_free_seg, which does not iterate. So why keep next pointing to something?

Regards,
Vadim
> > > + } > > > + /* free them */ > > > + rte_pktmbuf_free_chain(m[0]); > > > + > > > + if (rte_mempool_count(pktmbuf_pool) != > mbuf_count_before_allocation) { > > > + printf("mempool count %d != initial %d\n", > > > + rte_mempool_count(pktmbuf_pool), > > > + mbuf_count_before_allocation); > > > + return -1; > > > + } > > > + return ret; > > > +} > > > + > > > /* > > > * test that the pointer to the data on a packet mbuf is set properly > > > */ > > > @@ -766,7 +845,8 @@ test_mbuf(void) > > > if (pktmbuf_pool == NULL) { > > > pktmbuf_pool = > > > rte_mempool_create("test_pktmbuf_pool", NB_MBUF, > > > - MBUF_SIZE, 32, > > > + MBUF_SIZE, > > > + MBUF_POOL_LOCAL_CACHE_SIZE, > > > sizeof(struct > rte_pktmbuf_pool_private), > > > rte_pktmbuf_pool_init, NULL, > > > rte_pktmbuf_init, NULL, > > > @@ -790,6 +870,18 @@ test_mbuf(void) > > > return -1; > > > } > > > > > > + /* test bulk allocation and freeing */ > > > + if (test_pktmbuf_pool_bulk() < 0) { > > > + printf("test_pktmbuf_pool_bulk() failed\n"); > > > + return -1; > > > + } > > > + > > > + /* once again to ensure all mbufs were freed */ > > > + if (test_pktmbuf_pool_bulk() < 0) { > > > + printf("test_pktmbuf_pool_bulk() failed\n"); > > > + return -1; > > > + } > > > + > > > /* test that the pointer to the data on a packet mbuf is set > properly */ > > > if (test_pktmbuf_pool_ptr() < 0) { > > > printf("test_pktmbuf_pool_ptr() failed\n"); > > > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h > > > index 17ba791..995237d 100644 > > > --- a/lib/librte_mbuf/rte_mbuf.h > > > +++ b/lib/librte_mbuf/rte_mbuf.h > > > @@ -825,6 +825,95 @@ static inline void rte_pktmbuf_free(struct > rte_mbuf *m) > > > } > > > > > > /** > > > + * Allocate a bulk of mbufs, initiate refcnt and resets > > > + * > > > + * @param pool > > > + * memory pool to allocate from > > > + * @param mbufs > > > + * Array of pointers to mbuf > > > + * @param count > > > + * Array size > > > + */ > > > +static inline int 
rte_pktmbuf_alloc_bulk(struct rte_mempool *pool, > > > + struct rte_mbuf **mbufs, > > > + unsigned count) > > > +{ > > > + unsigned idx; > > > + int rc = 0; > > > + > > > + rc = rte_mempool_get_bulk(pool, (void **)mbufs, count); > > > + if (unlikely(rc)) > > > + return rc; > > > + > > > + for (idx = 0; idx < count; idx++) { > > > + RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0); > > > + rte_mbuf_refcnt_set(mbufs[idx], 1); > > > + rte_pktmbuf_reset(mbufs[idx]); > > > + } > > > + return rc; > > > +} > > > + > > > +/** > > > + * Free a bulk of mbufs into its original mempool. > > > + * This function assumes refcnt equals 1 > > > + * as well as the freed mbufs are direct > > I think your forgot to mention in comments one more requirement for that > function: > > all mbufs have to be from the same mempool. > > > > > + * > > > + * @param mbufs > > > + * Array of pointers to mbuf > > > + * @param count > > > + * Array size > > > + */ > > > +static inline void rte_pktmbuf_bulk_free(struct rte_mbuf **mbufs, > > > + unsigned count) > > > +{ > > > + unsigned idx; > > > + > > > + RTE_MBUF_ASSERT(count > 0); > > > + > > > + for (idx = 0; idx < count; idx++) { > > > + rte_mbuf_refcnt_update(mbufs[idx], -1); > > > > You can do just: > > rte_mbuf_refcnt_set(m, 0); > > here and move your assert above it. > > Something like: > > RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 1); > > rte_mbuf_refcnt_set(m, 0); > > > > Also probably would be a good thing to add one more assert here, > > something like: > > RTE_MBUF_ASSERT(mbufs[idx]->pool == mufs[0]->pool); > > > > > + RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0); > > > + } > > > + rte_mempool_put_bulk(mbufs[0]->pool, (void **)mbufs, count); > > > +} > > > + > > > +/** > > > + * Free chained (scattered) mbufs into its original mempool(s). > > > + * > > > + * @param head > > > + * The head of mbufs to be freed chain. 
Must not be NULL > > > + */ > > > +static inline void rte_pktmbuf_free_chain(struct rte_mbuf *head) > > > +{ > > > + struct rte_mbuf *mbufs[head->nb_segs]; > > > + unsigned mbufs_count = 0; > > > + struct rte_mbuf *next; > > > + > > > + while (head) { > > > + next = head->next; > > > + head->next = NULL; > > > > Shouldn't the line above be inside if (head != NULL) {...} block? > > > > > + head = __rte_pktmbuf_prefree_seg(head); > > > + if (likely(head != NULL)) { > > > + RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(head) == 0); > > > > I don't think there is any use of the assert above. > > If prefree_seg returns non-NULL value, it sets refcnt to 0 for that mbuf. > > > > > + if (likely((!mbufs_count) || > > > + (head->pool == mbufs[0]->pool))) > > > + mbufs[mbufs_count++] = head; > > > + else { > > > + rte_mempool_put_bulk(mbufs[0]->pool, > > > + (void **)mbufs, > > > + mbufs_count); > > > + mbufs_count = 0; > > > + } > > > + } > > > + head = next; > > > + } > > > + if (mbufs_count > 0) > > > + rte_mempool_put_bulk(mbufs[0]->pool, > > > + (void **)mbufs, > > > + mbufs_count); > > > +} > > > + > > > +/** > > > * Creates a "clone" of the given packet mbuf. > > > * > > > * Walks through all segments of the given packet mbuf, and for each > of them: > > > -- > > > 1.7.9.5 > >
> From: Vadim Suraev [mailto:vadim.suraev@gmail.com]
> Sent: Wednesday, March 18, 2015 10:41 AM
> To: Ananyev, Konstantin
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
>
> Hi, Konstantin,
>
> Got it. To make it the same, the nulling of next should be inside the block, as you said.
> One question arises here: if a segment in the chain has refcnt > 1 (so its next is not assigned NULL), and the next segment has refcnt == 1 (so it is freed), do you think this scenario is real/should be considered? If so, the former can be safely freed only by calling rte_pktmbuf_free_seg, which does not iterate. So why keep next pointing to something?

I think we need it, not just to keep things the same as rte_pktmbuf_free(), but because it is the right thing to do.
Let's say you have a packet in 2 mbufs chained together, and both mbufs have refcnt==2.
Then:

rte_pktmbuf_free(first_mbuf);
rte_pktmbuf_free(first_mbuf);

would work correctly and free both mbufs back to the mempool, while after:

rte_pktmbuf_free_chain(first_mbuf);
rte_pktmbuf_free_chain(first_mbuf);

we would have first_mbuf freed back into the mempool, while the second would get lost (a memory leak).
Basically, free() shouldn't modify any field inside the mbuf, except refcnt, if rte_mbuf_refcnt_update(m, -1) > 0.

About your case, when first_mbuf->refcnt==2 and second_mbuf->refcnt==1:
right now, rte_pktmbuf_free() can't handle such cases properly, and, as far as I know, such a situation is not considered a valid one.

Konstantin

> Regards,
> Vadim
>
> On Wed, Mar 18, 2015 at 11:56 AM, Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
> > Hi Vadim,
> >
> > From: Vadim Suraev [mailto:vadim.suraev@gmail.com]
> > Sent: Wednesday, March 18, 2015 5:19 AM
> > To: Ananyev, Konstantin
> > Cc: dev@dpdk.org; olivier.matz@6wind.com; stephen@networkplumber.org
> > Subject: Re: [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
> >
> > Hi, Konstantin,
> >
> > > Shouldn't the line above be inside the if (head != NULL) {...} block?
> > This is removed as Olivier commented before:
> >
> > >> +{
> > >> +	if (likely(head != NULL)) {
> >
> > > I think we should remove this test. The other mbuf functions do not
> > > check this.
> > Regards,
> > Vadim.
>
> I meant that in my opinion it should be:
>
> 	while (head) {
> 		next = head->next;
> -		head->next = NULL;
> 		head = __rte_pktmbuf_prefree_seg(head);
> 		if (likely(head != NULL)) {
> +			head->next = NULL;
> 			RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(head) == 0);
>
> Same as rte_pktmbuf_free() is doing.
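Konstantin's leak argument can be sketched with a self-contained toy model. This is plain C, not DPDK: struct seg and prefree() are simplified stand-ins for struct rte_mbuf and __rte_pktmbuf_prefree_seg(), and "freeing" here is only refcnt bookkeeping. The point is where next gets cleared relative to the refcnt check:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for an mbuf segment: only the fields the
 * argument depends on. */
struct seg {
	int refcnt;
	struct seg *next;
};

/* Stand-in for __rte_pktmbuf_prefree_seg(): drops one reference and
 * reports whether the segment is now free (refcnt reached 0). */
static int prefree(struct seg *s)
{
	return --s->refcnt == 0;
}

/* Variant from the v2 patch: next is cleared before the refcnt check,
 * so a still-referenced head loses its link to the rest of the chain. */
static void free_chain_clear_early(struct seg *head)
{
	while (head != NULL) {
		struct seg *next = head->next;
		head->next = NULL;
		prefree(head);
		head = next;
	}
}

/* Variant Konstantin suggests: clear next only for segments that are
 * really being freed, mirroring rte_pktmbuf_free(). */
static void free_chain_clear_late(struct seg *head)
{
	while (head != NULL) {
		struct seg *next = head->next;
		if (prefree(head))
			head->next = NULL;
		head = next;
	}
}
```

With a two-segment chain where both segments start at refcnt 2, calling the early-clearing variant twice releases only the first segment: the first call already severed its link, so the second call never reaches the second segment. The late-clearing variant releases both.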
Hi Konstantin,

On 03/18/2015 04:13 PM, Ananyev, Konstantin wrote:
>
>> From: Vadim Suraev [mailto:vadim.suraev@gmail.com]
>> Sent: Wednesday, March 18, 2015 10:41 AM
>> To: Ananyev, Konstantin
>> Cc: dev@dpdk.org
>> Subject: Re: [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
>>
>> Hi, Konstantin,
>>
>> Got it. To make it the same, the nulling of next should be inside the block, as you said.
>> One question arises here: if a segment in the chain has refcnt > 1 (so its next is not assigned NULL), and the next segment has refcnt == 1 (so it is freed), do you think this scenario is real/should be considered? If so, the former can be safely freed only by calling rte_pktmbuf_free_seg, which does not iterate. So why keep next pointing to something?
>
> I think we need it, not just to keep things the same as rte_pktmbuf_free(), but because it is the right thing to do.
> Let's say you have a packet in 2 mbufs chained together, and both mbufs have refcnt==2.
> Then:
>
> rte_pktmbuf_free(first_mbuf);
> rte_pktmbuf_free(first_mbuf);
>
> would work correctly and free both mbufs back to the mempool, while after:
>
> rte_pktmbuf_free_chain(first_mbuf);
> rte_pktmbuf_free_chain(first_mbuf);
>
> we would have first_mbuf freed back into the mempool, while the second would get lost (a memory leak).
> Basically, free() shouldn't modify any field inside the mbuf, except refcnt, if rte_mbuf_refcnt_update(m, -1) > 0.
>
> About your case, when first_mbuf->refcnt==2 and second_mbuf->refcnt==1:
> right now, rte_pktmbuf_free() can't handle such cases properly, and, as far as I know, such a situation is not considered a valid one.

I'm not sure I understand what you are saying. To me, the case you are
describing is similar to the case below, and it should work properly:

/* allocate a packet and clone it. After that, m1 has a
 * refcnt of 2 */
m1 = rte_pktmbuf_alloc();
clone1 = rte_pktmbuf_clone(m1);

/* allocate another packet */
m2 = rte_pktmbuf_alloc();

/* chain m2 after m1, updating fields like total length.
 * After that, m1 has 2 segments, the first one has a refcnt
 * of 1 and the second has a refcnt of 2 */
mbuf_concat(m1, m2);

/* This will decrement the refcnt on the first segment and
 * free the second segment */
rte_pktmbuf_free(m1);

/* free the indirect mbuf, and as the refcnt is 1 on the
 * direct mbuf (m1), also release it */
rte_pktmbuf_free(clone1);

Am I missing something?

Thanks,
Olivier
Hi Olivier,

> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Thursday, March 19, 2015 8:13 AM
> To: Ananyev, Konstantin; vadim.suraev@gmail.com
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
>
> Hi Konstantin,
>
> On 03/18/2015 04:13 PM, Ananyev, Konstantin wrote:
> >
> >> From: Vadim Suraev [mailto:vadim.suraev@gmail.com]
> >> Sent: Wednesday, March 18, 2015 10:41 AM
> >> To: Ananyev, Konstantin
> >> Cc: dev@dpdk.org
> >> Subject: Re: [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
> >>
> >> Hi, Konstantin,
> >>
> >> Got it. To make it the same, the nulling of next should be inside the block, as you said.
> >> One question arises here: if a segment in the chain has refcnt > 1 (so its next is not assigned NULL), and the next segment has refcnt == 1 (so it is freed), do you think this scenario is real/should be considered? If so, the former can be safely freed only by calling rte_pktmbuf_free_seg, which does not iterate. So why keep next pointing to something?
> >
> > I think we need it, not just to keep things the same as rte_pktmbuf_free(), but because it is the right thing to do.
> > Let's say you have a packet in 2 mbufs chained together, and both mbufs have refcnt==2.
> > Then:
> >
> > rte_pktmbuf_free(first_mbuf);
> > rte_pktmbuf_free(first_mbuf);
> >
> > would work correctly and free both mbufs back to the mempool, while after:
> >
> > rte_pktmbuf_free_chain(first_mbuf);
> > rte_pktmbuf_free_chain(first_mbuf);
> >
> > we would have first_mbuf freed back into the mempool, while the second would get lost (a memory leak).
> > Basically, free() shouldn't modify any field inside the mbuf, except refcnt, if rte_mbuf_refcnt_update(m, -1) > 0.
> >
> > About your case, when first_mbuf->refcnt==2 and second_mbuf->refcnt==1:
> > right now, rte_pktmbuf_free() can't handle such cases properly, and, as far as I know, such a situation is not considered a valid one.
>
> I'm not sure I understand what you are saying. To me, the case you are
> describing is similar to the case below, and it should work properly:
>
> /* allocate a packet and clone it. After that, m1 has a
>  * refcnt of 2 */
> m1 = rte_pktmbuf_alloc();
> clone1 = rte_pktmbuf_clone(m1);
>
> /* allocate another packet */
> m2 = rte_pktmbuf_alloc();
>
> /* chain m2 after m1, updating fields like total length.
>  * After that, m1 has 2 segments, the first one has a refcnt
>  * of 1 and the second has a refcnt of 2 */
> mbuf_concat(m1, m2);
>
> /* This will decrement the refcnt on the first segment and
>  * free the second segment */
> rte_pktmbuf_free(m1);
>
> /* free the indirect mbuf, and as the refcnt is 1 on the
>  * direct mbuf (m1), also release it */
> rte_pktmbuf_free(clone1);
>
> Am I missing something?

The scenario you described would work, I believe, as the second reference for m1 is from an indirect mbuf.
So rte_pktmbuf_free(clone1) would just call __rte_mbuf_raw_free(m1).

The scenario I am talking about is:
no indirect mbufs pointing to m1's data buffer, and:

m1->next == m2; m1->refcnt == 2;
m2->next == NULL; m2->refcnt == 1;

And then:

rte_pktmbuf_free(m1); /* after that m2 is freed, but m1->next == m2 */
rte_pktmbuf_free(m1); /* would call rte_pktmbuf_free_seg(m2) */

That one would not work correctly, and I think it is considered an invalid case right now.

Konstantin

>
> Thanks,
> Olivier
Hi Konstantin,

On 03/19/2015 11:47 AM, Ananyev, Konstantin wrote:
>>>> Hi, Konstantin,
>>>>
>>>> Got it. To make it the same, the nulling of next should be inside the block, as you said.
>>>> One question arises here: if a segment in the chain has refcnt > 1 (so its next is not assigned NULL), and the next segment has refcnt == 1 (so it is freed), do you think this scenario is real/should be considered? If so, the former can be safely freed only by calling rte_pktmbuf_free_seg, which does not iterate. So why keep next pointing to something?
>>>
>>> I think we need it, not just to keep things the same as rte_pktmbuf_free(), but because it is the right thing to do.
>>> Let's say you have a packet in 2 mbufs chained together, and both mbufs have refcnt==2.
>>> Then:
>>>
>>> rte_pktmbuf_free(first_mbuf);
>>> rte_pktmbuf_free(first_mbuf);
>>>
>>> would work correctly and free both mbufs back to the mempool, while after:
>>>
>>> rte_pktmbuf_free_chain(first_mbuf);
>>> rte_pktmbuf_free_chain(first_mbuf);
>>>
>>> we would have first_mbuf freed back into the mempool, while the second would get lost (a memory leak).
>>> Basically, free() shouldn't modify any field inside the mbuf, except refcnt, if rte_mbuf_refcnt_update(m, -1) > 0.
>>>
>>> About your case, when first_mbuf->refcnt==2 and second_mbuf->refcnt==1:
>>> right now, rte_pktmbuf_free() can't handle such cases properly, and, as far as I know, such a situation is not considered a valid one.
>>
>> I'm not sure I understand what you are saying. To me, the case you are
>> describing is similar to the case below, and it should work properly:
>>
>> /* allocate a packet and clone it. After that, m1 has a
>>  * refcnt of 2 */
>> m1 = rte_pktmbuf_alloc();
>> clone1 = rte_pktmbuf_clone(m1);
>>
>> /* allocate another packet */
>> m2 = rte_pktmbuf_alloc();
>>
>> /* chain m2 after m1, updating fields like total length.
>>  * After that, m1 has 2 segments, the first one has a refcnt
>>  * of 1 and the second has a refcnt of 2 */
>> mbuf_concat(m1, m2);
>>
>> /* This will decrement the refcnt on the first segment and
>>  * free the second segment */
>> rte_pktmbuf_free(m1);
>>
>> /* free the indirect mbuf, and as the refcnt is 1 on the
>>  * direct mbuf (m1), also release it */
>> rte_pktmbuf_free(clone1);
>>
>> Am I missing something?
>
> The scenario you described would work, I believe, as the second reference for m1 is from an indirect mbuf.
> So rte_pktmbuf_free(clone1) would just call __rte_mbuf_raw_free(m1).
>
> The scenario I am talking about is:
> no indirect mbufs pointing to m1's data buffer, and:
>
> m1->next == m2; m1->refcnt == 2;
> m2->next == NULL; m2->refcnt == 1;
>
> And then:
>
> rte_pktmbuf_free(m1); /* after that m2 is freed, but m1->next == m2 */
> rte_pktmbuf_free(m1); /* would call rte_pktmbuf_free_seg(m2) */
>
> That one would not work correctly, and I think it is considered an invalid case right now.

Ok, I agree this is invalid and should not happen.

Thanks,
Olivier
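The invalid case agreed on above can also be made concrete with a toy model. Again plain C, not DPDK: free_seg()/free_pkt() are simplified stand-ins for rte_pktmbuf_free_seg()/rte_pktmbuf_free(), and the use_after_free counter stands in for the mempool corruption a real double free would cause:

```c
#include <assert.h>
#include <stddef.h>

/* Toy segment: a refcnt plus a flag standing in for "already returned
 * to the mempool". */
struct seg {
	int refcnt;
	int freed;
	struct seg *next;
};

static int use_after_free; /* counts invalid accesses to freed segments */

/* Models rte_pktmbuf_free_seg(): releases the segment once its
 * refcnt drops to zero; next is cleared only at that point. */
static void free_seg(struct seg *s)
{
	if (s->freed) {
		/* touching a segment that is already back in the pool */
		use_after_free++;
		return;
	}
	if (--s->refcnt == 0) {
		s->freed = 1;
		s->next = NULL;
	}
}

/* Models rte_pktmbuf_free(): walks the chain, freeing each segment. */
static void free_pkt(struct seg *m)
{
	while (m != NULL) {
		struct seg *next = m->next;
		free_seg(m);
		m = next;
	}
}
```

Starting from m1->refcnt==2, m2->refcnt==1 (no indirect mbufs), the first free_pkt() releases m2 but leaves m1 alive with next still pointing at it; the second free_pkt() then walks from m1 into the already-freed m2, which is exactly the double free Konstantin describes.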
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 1ff66cb..b20c6a4 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -77,6 +77,7 @@
 #define REFCNT_RING_SIZE (REFCNT_MBUF_NUM * REFCNT_MAX_REF)
 
 #define MAKE_STRING(x) # x
+#define MBUF_POOL_LOCAL_CACHE_SIZE 32
 
 static struct rte_mempool *pktmbuf_pool = NULL;
 
@@ -405,6 +406,84 @@ test_pktmbuf_pool(void)
 	return ret;
 }
 
+/* test pktmbuf bulk allocation and freeing
+*/
+static int
+test_pktmbuf_pool_bulk(void)
+{
+	unsigned i;
+	/* size of mempool - size of local cache, otherwise may fail */
+	unsigned mbufs_to_allocate = NB_MBUF - MBUF_POOL_LOCAL_CACHE_SIZE;
+	struct rte_mbuf *m[mbufs_to_allocate];
+	int ret = 0;
+	unsigned mbuf_count_before_allocation = rte_mempool_count(pktmbuf_pool);
+
+	for (i = 0; i < mbufs_to_allocate; i++)
+		m[i] = NULL;
+	/* alloc NB_MBUF-MBUF_POOL_LOCAL_CACHE_SIZE mbufs */
+	ret = rte_pktmbuf_alloc_bulk(pktmbuf_pool, m, mbufs_to_allocate);
+	if (ret) {
+		printf("cannot allocate %d mbufs bulk mempool_cnt=%d ret=%d\n",
+		       mbufs_to_allocate,
+		       rte_mempool_count(pktmbuf_pool),
+		       ret);
+		return -1;
+	}
+	if ((rte_mempool_count(pktmbuf_pool) + mbufs_to_allocate) !=
+	    mbuf_count_before_allocation) {
+		printf("mempool count %d + allocated %d != initial %d\n",
+		       rte_mempool_count(pktmbuf_pool),
+		       mbufs_to_allocate,
+		       mbuf_count_before_allocation);
+		return -1;
+	}
+	/* free them */
+	rte_pktmbuf_bulk_free(m, mbufs_to_allocate);
+
+	if (rte_mempool_count(pktmbuf_pool) != mbuf_count_before_allocation) {
+		printf("mempool count %d != initial %d\n",
+		       rte_mempool_count(pktmbuf_pool),
+		       mbuf_count_before_allocation);
+		return -1;
+	}
+	for (i = 0; i < mbufs_to_allocate; i++)
+		m[i] = NULL;
+
+	/* alloc NB_MBUF-MBUF_POOL_LOCAL_CACHE_SIZE mbufs */
+	ret = rte_pktmbuf_alloc_bulk(pktmbuf_pool, m, mbufs_to_allocate);
+	if (ret) {
+		printf("cannot allocate %d mbufs bulk mempool_cnt=%d ret=%d\n",
+		       mbufs_to_allocate,
+		       rte_mempool_count(pktmbuf_pool),
+		       ret);
+		return -1;
+	}
+	if ((rte_mempool_count(pktmbuf_pool) + mbufs_to_allocate) !=
+	    mbuf_count_before_allocation) {
+		printf("mempool count %d + allocated %d != initial %d\n",
+		       rte_mempool_count(pktmbuf_pool),
+		       mbufs_to_allocate,
+		       mbuf_count_before_allocation);
+		return -1;
+	}
+
+	/* chain it */
+	for (i = 0; i < mbufs_to_allocate - 1; i++) {
+		m[i]->next = m[i + 1];
+		m[0]->nb_segs++;
+	}
+	/* free them */
+	rte_pktmbuf_free_chain(m[0]);
+
+	if (rte_mempool_count(pktmbuf_pool) != mbuf_count_before_allocation) {
+		printf("mempool count %d != initial %d\n",
+		       rte_mempool_count(pktmbuf_pool),
+		       mbuf_count_before_allocation);
+		return -1;
+	}
+	return ret;
+}
+
 /*
  * test that the pointer to the data on a packet mbuf is set properly
  */
@@ -766,7 +845,8 @@ test_mbuf(void)
 	if (pktmbuf_pool == NULL) {
 		pktmbuf_pool =
 			rte_mempool_create("test_pktmbuf_pool", NB_MBUF,
-					MBUF_SIZE, 32,
+					MBUF_SIZE,
+					MBUF_POOL_LOCAL_CACHE_SIZE,
 					sizeof(struct rte_pktmbuf_pool_private),
 					rte_pktmbuf_pool_init, NULL,
 					rte_pktmbuf_init, NULL,
@@ -790,6 +870,18 @@ test_mbuf(void)
 		return -1;
 	}
 
+	/* test bulk allocation and freeing */
+	if (test_pktmbuf_pool_bulk() < 0) {
+		printf("test_pktmbuf_pool_bulk() failed\n");
+		return -1;
+	}
+
+	/* once again to ensure all mbufs were freed */
+	if (test_pktmbuf_pool_bulk() < 0) {
+		printf("test_pktmbuf_pool_bulk() failed\n");
+		return -1;
+	}
+
 	/* test that the pointer to the data on a packet mbuf is set properly */
 	if (test_pktmbuf_pool_ptr() < 0) {
 		printf("test_pktmbuf_pool_ptr() failed\n");
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 17ba791..995237d 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -825,6 +825,95 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
 }
 
 /**
+ * Allocate a bulk of mbufs, initiate refcnt and resets
+ *
+ * @param pool
+ *    memory pool to allocate from
+ * @param mbufs
+ *    Array of pointers to mbuf
+ * @param count
+ *    Array size
+ */
+static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
+					 struct rte_mbuf **mbufs,
+					 unsigned count)
+{
+	unsigned idx;
+	int rc = 0;
+
+	rc = rte_mempool_get_bulk(pool, (void **)mbufs, count);
+	if (unlikely(rc))
+		return rc;
+
+	for (idx = 0; idx < count; idx++) {
+		RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0);
+		rte_mbuf_refcnt_set(mbufs[idx], 1);
+		rte_pktmbuf_reset(mbufs[idx]);
+	}
+	return rc;
+}
+
+/**
+ * Free a bulk of mbufs into its original mempool.
+ * This function assumes refcnt equals 1
+ * as well as the freed mbufs are direct
+ *
+ * @param mbufs
+ *    Array of pointers to mbuf
+ * @param count
+ *    Array size
+ */
+static inline void rte_pktmbuf_bulk_free(struct rte_mbuf **mbufs,
+					 unsigned count)
+{
+	unsigned idx;
+
+	RTE_MBUF_ASSERT(count > 0);
+
+	for (idx = 0; idx < count; idx++) {
+		rte_mbuf_refcnt_update(mbufs[idx], -1);
+		RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0);
+	}
+	rte_mempool_put_bulk(mbufs[0]->pool, (void **)mbufs, count);
+}
+
+/**
+ * Free chained (scattered) mbufs into its original mempool(s).
+ *
+ * @param head
+ *    The head of mbufs to be freed chain. Must not be NULL
+ */
+static inline void rte_pktmbuf_free_chain(struct rte_mbuf *head)
+{
+	struct rte_mbuf *mbufs[head->nb_segs];
+	unsigned mbufs_count = 0;
+	struct rte_mbuf *next;
+
+	while (head) {
+		next = head->next;
+		head->next = NULL;
+		head = __rte_pktmbuf_prefree_seg(head);
+		if (likely(head != NULL)) {
+			RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(head) == 0);
+			if (likely((!mbufs_count) ||
+				   (head->pool == mbufs[0]->pool)))
+				mbufs[mbufs_count++] = head;
+			else {
+				rte_mempool_put_bulk(mbufs[0]->pool,
+						     (void **)mbufs,
+						     mbufs_count);
+				mbufs_count = 0;
+			}
+		}
+		head = next;
+	}
+	if (mbufs_count > 0)
+		rte_mempool_put_bulk(mbufs[0]->pool,
+				     (void **)mbufs,
+				     mbufs_count);
+}
+
+/**
  * Creates a "clone" of the given packet mbuf.
  *
  * Walks through all segments of the given packet mbuf, and for each of them:
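The batching idea behind rte_pktmbuf_free_chain() — accumulate consecutive segments from the same mempool and return them with one bulk put, flushing whenever the pool changes — can be sketched standalone. This is a simplified model, not the patch itself: pools are plain integers, put_bulk() only counts calls, refcnt handling is omitted, and the model re-adds the segment that triggered a flush to the next batch (an assumption about the intended behavior):

```c
#include <assert.h>
#include <stddef.h>

/* Toy segment: a pool id and a link, nothing else. */
struct seg {
	int pool;
	struct seg *next;
};

static int bulk_puts; /* counts rte_mempool_put_bulk()-style calls */

/* Stand-in for rte_mempool_put_bulk(): just records that one bulk
 * return happened. */
static void put_bulk(int pool, struct seg **segs, unsigned n)
{
	(void)pool; (void)segs; (void)n;
	bulk_puts++;
}

/* Models the batching loop of rte_pktmbuf_free_chain(). */
static void free_chain(struct seg *head, unsigned nb_segs)
{
	struct seg *batch[nb_segs];
	unsigned cnt = 0;

	while (head != NULL) {
		struct seg *next = head->next;
		if (cnt == 0 || head->pool == batch[0]->pool) {
			batch[cnt++] = head;
		} else {
			/* pool changed: flush the batch, then start a new
			 * one with the current segment so it is not lost */
			put_bulk(batch[0]->pool, batch, cnt);
			cnt = 0;
			batch[cnt++] = head;
		}
		head = next;
	}
	if (cnt > 0)
		put_bulk(batch[0]->pool, batch, cnt);
}
```

For a chain whose segments come from pools 1, 1, 2, 2, 1, the model makes three bulk puts ({1,1}, {2,2}, {1}) instead of five per-segment frees, which is the saving the patch is after.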