From patchwork Wed May 10 15:13:10 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Olivier Matz
X-Patchwork-Id: 24204
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Cc: gregory@weka.io, thomas@monjalon.net
Date: Wed, 10 May 2017 17:13:10 +0200
Message-Id: <20170510151310.20505-1-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <1802735.EECn92tGJO@polaris>
References: <1802735.EECn92tGJO@polaris>
Subject: [dpdk-dev] [PATCH v3] mbuf: fix bulk allocation when debug enabled
List-Id: DPDK patches and discussions

From: Gregory Etelson <gregory@weka.io>

The debug assertions done when allocating a raw mbuf are not correct
since commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool"),
which triggers a panic when this function is used in debug mode.

Change the expected number of segments to 1 instead of 0, and
factorize these sanity checks.

Fixes: 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool")

Signed-off-by: Gregory Etelson <gregory@weka.io>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mbuf/rte_mbuf.h | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 9097f1859..1cb03109c 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -788,6 +788,13 @@ rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
 void
 rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
 
+#define MBUF_RAW_ALLOC_CHECK(m) do {                       \
+	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);          \
+	RTE_ASSERT((m)->next == NULL);                     \
+	RTE_ASSERT((m)->nb_segs == 1);                     \
+	__rte_mbuf_sanity_check(m, 0);                     \
+} while (0)
+
 /**
  * Allocate an unitialized mbuf from mempool *mp*.
  *
@@ -815,11 +822,7 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
 	if (rte_mempool_get(mp, &mb) < 0)
 		return NULL;
 	m = (struct rte_mbuf *)mb;
-	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
-	RTE_ASSERT(m->next == NULL);
-	RTE_ASSERT(m->nb_segs == 1);
-	__rte_mbuf_sanity_check(m, 0);
-
+	MBUF_RAW_ALLOC_CHECK(m);
 	return m;
 }
 
@@ -1152,26 +1155,22 @@ static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
 	switch (count % 4) {
 	case 0:
 		while (idx != count) {
-			RTE_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0);
-			rte_mbuf_refcnt_set(mbufs[idx], 1);
+			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 3:
-			RTE_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0);
-			rte_mbuf_refcnt_set(mbufs[idx], 1);
+			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 2:
-			RTE_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0);
-			rte_mbuf_refcnt_set(mbufs[idx], 1);
+			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 1:
-			RTE_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0);
-			rte_mbuf_refcnt_set(mbufs[idx], 1);
+			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
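
For reference, the sketch below shows the kind of caller that hit the panic before this fix. It is not part of the patch: the pool name, sizes and error handling are illustrative only. In a build with mbuf debugging and assertions enabled (e.g. CONFIG_RTE_LIBRTE_MBUF_DEBUG and CONFIG_RTE_ENABLE_ASSERT), the old bulk path asserted refcnt == 0, while since commit 8f094a9ac5d7 mbufs rest in the pool with refcnt 1, so the first rte_pktmbuf_alloc_bulk() call panicked; with MBUF_RAW_ALLOC_CHECK() the same call passes.

/* Illustrative caller, not part of the patch; names and sizes are arbitrary. */
#include <stdio.h>

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

#define BURST 32

int
main(int argc, char **argv)
{
	struct rte_mbuf *bufs[BURST];
	struct rte_mempool *pool;
	unsigned int i;

	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL initialization failed\n");
		return -1;
	}

	/* Since commit 8f094a9ac5d7, mbufs sit in the pool with
	 * refcnt == 1, nb_segs == 1 and next == NULL. */
	pool = rte_pktmbuf_pool_create("demo_pool", 8191, 256, 0,
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       rte_socket_id());
	if (pool == NULL) {
		fprintf(stderr, "cannot create mbuf pool\n");
		return -1;
	}

	/* Before the fix, a debug build panicked here because the bulk
	 * path still asserted refcnt == 0; MBUF_RAW_ALLOC_CHECK() now
	 * verifies the actual in-pool state instead. */
	if (rte_pktmbuf_alloc_bulk(pool, bufs, BURST) != 0) {
		fprintf(stderr, "bulk allocation failed\n");
		return -1;
	}

	for (i = 0; i < BURST; i++)
		rte_pktmbuf_free(bufs[i]);

	return 0;
}

Factorizing the assertions into MBUF_RAW_ALLOC_CHECK() also keeps rte_mbuf_raw_alloc() and the unrolled bulk path checking the same invariants, so the expected in-pool mbuf state only has to be updated in one place.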