[dpdk-dev,v2] mbuf: use refcnt = 0 when debugging

Message ID 1502122274-15657-1-git-send-email-ciwillia@brocade.com (mailing list archive)
State Superseded, archived
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Chas Williams Aug. 7, 2017, 4:11 p.m. UTC
  After commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool") it is
much harder to detect a "double free".  If the developer makes a copy
of an mbuf pointer and frees it twice, this condition is never detected
and the mbuf gets returned to the pool twice.

Since this requires extra work to track, make this behavior conditional
on CONFIG_RTE_LIBRTE_MBUF_DEBUG.

Signed-off-by: Chas Williams <ciwillia@brocade.com>
---
 lib/librte_mbuf/rte_mbuf.c |  2 +-
 lib/librte_mbuf/rte_mbuf.h | 42 +++++++++++++++++++++++++++++++++++-------
 2 files changed, 36 insertions(+), 8 deletions(-)
  

Comments

Radu Nicolau Sept. 4, 2017, 2:27 p.m. UTC | #1
On 8/7/2017 5:11 PM, Charles (Chas) Williams wrote:
> After commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool") is it
> much harder to detect a "double free".  If the developer makes a copy
> of an mbuf pointer and frees it twice, this condition is never detected
> and the mbuf gets returned to the pool twice.
>
> Since this requires extra work to track, make this behavior conditional
> on CONFIG_RTE_LIBRTE_MBUF_DEBUG.
>
> Signed-off-by: Chas Williams <ciwillia@brocade.com>
> ---
>
> @@ -1304,10 +1329,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
>   			m->next = NULL;
>   			m->nb_segs = 1;
>   		}
> +#ifdef RTE_LIBRTE_MBUF_DEBUG
> +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> +#endif
>   
>   		return m;
>   
> -       } else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
> +	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
Why replace the use of atomic operation?
>   
>   
>   		if (RTE_MBUF_INDIRECT(m))
> @@ -1317,7 +1345,7 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
>   			m->next = NULL;
>   			m->nb_segs = 1;
>   		}
> -		rte_mbuf_refcnt_set(m, 1);
> +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
>   
>   		return m;
>   	}
Reviewed-by:  Radu Nicolau <radu.nicolau@intel.com>
  
Chas Williams Sept. 6, 2017, 10:46 a.m. UTC | #2
[Note: My former email address is going away eventually.  I am moving the
conversation to my other email address which is a bit more permanent.]

On Mon, 2017-09-04 at 15:27 +0100, Radu Nicolau wrote:
> 
> On 8/7/2017 5:11 PM, Charles (Chas) Williams wrote:
> > After commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool") is it
> > much harder to detect a "double free".  If the developer makes a copy
> > of an mbuf pointer and frees it twice, this condition is never detected
> > and the mbuf gets returned to the pool twice.
> >
> > Since this requires extra work to track, make this behavior conditional
> > on CONFIG_RTE_LIBRTE_MBUF_DEBUG.
> >
> > Signed-off-by: Chas Williams <ciwillia@brocade.com>
> > ---
> >
> > @@ -1304,10 +1329,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> >   			m->next = NULL;
> >   			m->nb_segs = 1;
> >   		}
> > +#ifdef RTE_LIBRTE_MBUF_DEBUG
> > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > +#endif
> >   
> >   		return m;
> >   
> > -       } else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
> > +	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
> Why replace the use of atomic operation?

It doesn't.  rte_mbuf_refcnt_update() is also atomic(ish) but it is slightly
more optimal.  This whole section is a little hazy actually.  It looks like
rte_pktmbuf_prefree_seg() unwraps rte_mbuf_refcnt_update() so it can avoid
setting the refcnt when the refcnt is already the 'correct' value.

> >   
> >   
> >   		if (RTE_MBUF_INDIRECT(m))
> > @@ -1317,7 +1345,7 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> >   			m->next = NULL;
> >   			m->nb_segs = 1;
> >   		}
> > -		rte_mbuf_refcnt_set(m, 1);
> > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> >   
> >   		return m;
> >   	}
> Reviewed-by:  Radu Nicolau <radu.nicolau@intel.com>

Thanks for the review.
  
Ananyev, Konstantin Sept. 6, 2017, 11:58 a.m. UTC | #3
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chas Williams
> Sent: Wednesday, September 6, 2017 11:46 AM
> To: Nicolau, Radu <radu.nicolau@intel.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; cw817q@att.com
> Subject: Re: [dpdk-dev] [PATCH v2] mbuf: use refcnt = 0 when debugging
> 
> [Note: My former email address is going away eventually.  I am moving the
> conversation to my other email address which is a bit more permanent.]
> 
> On Mon, 2017-09-04 at 15:27 +0100, Radu Nicolau wrote:
> >
> > On 8/7/2017 5:11 PM, Charles (Chas) Williams wrote:
> > > After commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool") is it
> > > much harder to detect a "double free".  If the developer makes a copy
> > > of an mbuf pointer and frees it twice, this condition is never detected
> > > and the mbuf gets returned to the pool twice.
> > >
> > > Since this requires extra work to track, make this behavior conditional
> > > on CONFIG_RTE_LIBRTE_MBUF_DEBUG.
> > >
> > > Signed-off-by: Chas Williams <ciwillia@brocade.com>
> > > ---
> > >
> > > @@ -1304,10 +1329,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > >   			m->next = NULL;
> > >   			m->nb_segs = 1;
> > >   		}
> > > +#ifdef RTE_LIBRTE_MBUF_DEBUG
> > > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > > +#endif
> > >
> > >   		return m;
> > >
> > > -       } else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
> > > +	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
> > Why replace the use of atomic operation?
> 
> It doesn't.  rte_mbuf_refcnt_update() is also atomic(ish) but it slightly more
> optimal.  This whole section is a little hazy actually.  It looks like
> rte_pktmbuf_prefree_seg() unwraps rte_mbuf_refcnt_update() so they can avoid
> setting the refcnt when the refcnt is already the 'correct' value.

You don't need to use refcnt_update() here - if you take that path it already means
that m->refcnt_atomic != 1.
In fact, I think using refcnt_update() here might be a bit slower - as it means extra read.
Konstantin

> 
> > >
> > >
> > >   		if (RTE_MBUF_INDIRECT(m))
> > > @@ -1317,7 +1345,7 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > >   			m->next = NULL;
> > >   			m->nb_segs = 1;
> > >   		}
> > > -		rte_mbuf_refcnt_set(m, 1);
> > > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > >
> > >   		return m;
> > >   	}
> > Reviewed-by:  Radu Nicolau <radu.nicolau@intel.com>
> 
> Thanks for the review.
  
Chas Williams Sept. 6, 2017, 1:55 p.m. UTC | #4
On Wed, 2017-09-06 at 11:58 +0000, Ananyev, Konstantin wrote:
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chas Williams
> > Sent: Wednesday, September 6, 2017 11:46 AM
> > To: Nicolau, Radu <radu.nicolau@intel.com>; dev@dpdk.org
> > Cc: olivier.matz@6wind.com; cw817q@att.com
> > Subject: Re: [dpdk-dev] [PATCH v2] mbuf: use refcnt = 0 when debugging
> > 
> > [Note: My former email address is going away eventually.  I am moving the
> > conversation to my other email address which is a bit more permanent.]
> > 
> > On Mon, 2017-09-04 at 15:27 +0100, Radu Nicolau wrote:
> > >
> > > On 8/7/2017 5:11 PM, Charles (Chas) Williams wrote:
> > > > After commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool") is it
> > > > much harder to detect a "double free".  If the developer makes a copy
> > > > of an mbuf pointer and frees it twice, this condition is never detected
> > > > and the mbuf gets returned to the pool twice.
> > > >
> > > > Since this requires extra work to track, make this behavior conditional
> > > > on CONFIG_RTE_LIBRTE_MBUF_DEBUG.
> > > >
> > > > Signed-off-by: Chas Williams <ciwillia@brocade.com>
> > > > ---
> > > >
> > > > @@ -1304,10 +1329,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > >   			m->next = NULL;
> > > >   			m->nb_segs = 1;
> > > >   		}
> > > > +#ifdef RTE_LIBRTE_MBUF_DEBUG
> > > > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > > > +#endif
> > > >
> > > >   		return m;
> > > >
> > > > -       } else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
> > > > +	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
> > > Why replace the use of atomic operation?
> > 
> > It doesn't.  rte_mbuf_refcnt_update() is also atomic(ish) but it slightly more
> > optimal.  This whole section is a little hazy actually.  It looks like
> > rte_pktmbuf_prefree_seg() unwraps rte_mbuf_refcnt_update() so they can avoid
> > setting the refcnt when the refcnt is already the 'correct' value.
> 
> You don't need to use refcnt_update() here - if you take that path it already means
> that m->refcnt_atomic != 1.
> In fact, I think using refcnt_update () here might be a bit slower - as it means extra read.
> Konstantin

Yes, that is somewhat the point.  If an mbuf can have a refcnt of 0,
then we want to go into rte_mbuf_refcnt_update(), which detects the
0 -> -1 transition.  I could check for refcnt = 0 explicitly in
prefree_seg(), but I just restored the previous call into
refcnt_update() since we already have a routine for this.

> > > >
> > > >
> > > >   		if (RTE_MBUF_INDIRECT(m))
> > > > @@ -1317,7 +1345,7 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > >   			m->next = NULL;
> > > >   			m->nb_segs = 1;
> > > >   		}
> > > > -		rte_mbuf_refcnt_set(m, 1);
> > > > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > > >
> > > >   		return m;
> > > >   	}
> > > Reviewed-by:  Radu Nicolau <radu.nicolau@intel.com>
> > 
> > Thanks for the review.
  
Ananyev, Konstantin Sept. 6, 2017, 2:53 p.m. UTC | #5
> -----Original Message-----
> From: Chas Williams [mailto:3chas3@gmail.com]
> Sent: Wednesday, September 6, 2017 2:56 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu <radu.nicolau@intel.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; cw817q@att.com
> Subject: Re: [dpdk-dev] [PATCH v2] mbuf: use refcnt = 0 when debugging
> 
> On Wed, 2017-09-06 at 11:58 +0000, Ananyev, Konstantin wrote:
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chas Williams
> > > Sent: Wednesday, September 6, 2017 11:46 AM
> > > To: Nicolau, Radu <radu.nicolau@intel.com>; dev@dpdk.org
> > > Cc: olivier.matz@6wind.com; cw817q@att.com
> > > Subject: Re: [dpdk-dev] [PATCH v2] mbuf: use refcnt = 0 when debugging
> > >
> > > [Note: My former email address is going away eventually.  I am moving the
> > > conversation to my other email address which is a bit more permanent.]
> > >
> > > On Mon, 2017-09-04 at 15:27 +0100, Radu Nicolau wrote:
> > > >
> > > > On 8/7/2017 5:11 PM, Charles (Chas) Williams wrote:
> > > > > After commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool") is it
> > > > > much harder to detect a "double free".  If the developer makes a copy
> > > > > of an mbuf pointer and frees it twice, this condition is never detected
> > > > > and the mbuf gets returned to the pool twice.
> > > > >
> > > > > Since this requires extra work to track, make this behavior conditional
> > > > > on CONFIG_RTE_LIBRTE_MBUF_DEBUG.
> > > > >
> > > > > Signed-off-by: Chas Williams <ciwillia@brocade.com>
> > > > > ---
> > > > >
> > > > > @@ -1304,10 +1329,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > >   			m->next = NULL;
> > > > >   			m->nb_segs = 1;
> > > > >   		}
> > > > > +#ifdef RTE_LIBRTE_MBUF_DEBUG
> > > > > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > > > > +#endif
> > > > >
> > > > >   		return m;
> > > > >
> > > > > -       } else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
> > > > > +	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
> > > > Why replace the use of atomic operation?
> > >
> > > It doesn't.  rte_mbuf_refcnt_update() is also atomic(ish) but it slightly more
> > > optimal.  This whole section is a little hazy actually.  It looks like
> > > rte_pktmbuf_prefree_seg() unwraps rte_mbuf_refcnt_update() so they can avoid
> > > setting the refcnt when the refcnt is already the 'correct' value.
> >
> > You don't need to use refcnt_update() here - if you take that path it already means
> > that m->refcnt_atomic != 1.
> > In fact, I think using refcnt_update () here might be a bit slower - as it means extra read.
> > Konstantin
> 
> Yes, that is somewhat the point.  If a mbuf can have a refcnt of 0,
> then we want to go into rte_mbuf_refcnt_update() which detects 0 -> -1.

Wouldn't __rte_mbuf_sanity_check(m, 0) at the start of prefree_seg()
already catch it?
Konstantin

> I could explicitly check this in prefree_seg but I was just restored the
> previous call into refcnt_update.  I could explicitly check for refcnt =
> 0 in prefree_seg() but since we do have a routine for this...
> 
> > > > >
> > > > >
> > > > >   		if (RTE_MBUF_INDIRECT(m))
> > > > > @@ -1317,7 +1345,7 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > >   			m->next = NULL;
> > > > >   			m->nb_segs = 1;
> > > > >   		}
> > > > > -		rte_mbuf_refcnt_set(m, 1);
> > > > > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > > > >
> > > > >   		return m;
> > > > >   	}
> > > > Reviewed-by:  Radu Nicolau <radu.nicolau@intel.com>
> > >
> > > Thanks for the review.
  
Chas Williams Sept. 7, 2017, 3:55 p.m. UTC | #6
On Wed, 2017-09-06 at 14:53 +0000, Ananyev, Konstantin wrote:
> 
> > -----Original Message-----
> > From: Chas Williams [mailto:3chas3@gmail.com]
> > Sent: Wednesday, September 6, 2017 2:56 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu <radu.nicolau@intel.com>; dev@dpdk.org
> > Cc: olivier.matz@6wind.com; cw817q@att.com
> > Subject: Re: [dpdk-dev] [PATCH v2] mbuf: use refcnt = 0 when debugging
> > 
> > On Wed, 2017-09-06 at 11:58 +0000, Ananyev, Konstantin wrote:
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chas Williams
> > > > Sent: Wednesday, September 6, 2017 11:46 AM
> > > > To: Nicolau, Radu <radu.nicolau@intel.com>; dev@dpdk.org
> > > > Cc: olivier.matz@6wind.com; cw817q@att.com
> > > > Subject: Re: [dpdk-dev] [PATCH v2] mbuf: use refcnt = 0 when debugging
> > > >
> > > > [Note: My former email address is going away eventually.  I am moving the
> > > > conversation to my other email address which is a bit more permanent.]
> > > >
> > > > On Mon, 2017-09-04 at 15:27 +0100, Radu Nicolau wrote:
> > > > >
> > > > > On 8/7/2017 5:11 PM, Charles (Chas) Williams wrote:
> > > > > > After commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool") is it
> > > > > > much harder to detect a "double free".  If the developer makes a copy
> > > > > > of an mbuf pointer and frees it twice, this condition is never detected
> > > > > > and the mbuf gets returned to the pool twice.
> > > > > >
> > > > > > Since this requires extra work to track, make this behavior conditional
> > > > > > on CONFIG_RTE_LIBRTE_MBUF_DEBUG.
> > > > > >
> > > > > > Signed-off-by: Chas Williams <ciwillia@brocade.com>
> > > > > > ---
> > > > > >
> > > > > > @@ -1304,10 +1329,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > > >   			m->next = NULL;
> > > > > >   			m->nb_segs = 1;
> > > > > >   		}
> > > > > > +#ifdef RTE_LIBRTE_MBUF_DEBUG
> > > > > > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > > > > > +#endif
> > > > > >
> > > > > >   		return m;
> > > > > >
> > > > > > -       } else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
> > > > > > +	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
> > > > > Why replace the use of atomic operation?
> > > >
> > > > It doesn't.  rte_mbuf_refcnt_update() is also atomic(ish) but it slightly more
> > > > optimal.  This whole section is a little hazy actually.  It looks like
> > > > rte_pktmbuf_prefree_seg() unwraps rte_mbuf_refcnt_update() so they can avoid
> > > > setting the refcnt when the refcnt is already the 'correct' value.
> > >
> > > You don't need to use refcnt_update() here - if you take that path it already means
> > > that m->refcnt_atomic != 1.
> > > In fact, I think using refcnt_update () here might be a bit slower - as it means extra read.
> > > Konstantin
> > 
> > Yes, that is somewhat the point.  If a mbuf can have a refcnt of 0,
> > then we want to go into rte_mbuf_refcnt_update() which detects 0 -> -1.
> 
> Woulnd't __rte_mbuf_sanity_check(m, 0) at the start of prefree_seg()
> already catch it?
> Konstantin

Yes!  I didn't notice that so I will just drop change and issue a v3 today
sometime.

Thanks!
 
> > I could explicitly check this in prefree_seg but I was just restored the
> > previous call into refcnt_update.  I could explicitly check for refcnt =
> > 0 in prefree_seg() but since we do have a routine for this...
> > 
> > > > > >
> > > > > >
> > > > > >   		if (RTE_MBUF_INDIRECT(m))
> > > > > > @@ -1317,7 +1345,7 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > > >   			m->next = NULL;
> > > > > >   			m->nb_segs = 1;
> > > > > >   		}
> > > > > > -		rte_mbuf_refcnt_set(m, 1);
> > > > > > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > > > > >
> > > > > >   		return m;
> > > > > >   	}
> > > > > Reviewed-by:  Radu Nicolau <radu.nicolau@intel.com>
> > > >
> > > > Thanks for the review.
  

Patch

diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 26a62b8..b0d222c 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -145,7 +145,7 @@  rte_pktmbuf_init(struct rte_mempool *mp,
 	m->pool = mp;
 	m->nb_segs = 1;
 	m->port = 0xff;
-	rte_mbuf_refcnt_set(m, 1);
+	rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
 	m->next = NULL;
 }
 
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index eaed7ee..3e00373 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -671,11 +671,15 @@  struct rte_pktmbuf_pool_private {
 
 #ifdef RTE_LIBRTE_MBUF_DEBUG
 
+#define RTE_MBUF_UNUSED_CNT 0
+
 /**  check mbuf type in debug mode */
 #define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
 
 #else /*  RTE_LIBRTE_MBUF_DEBUG */
 
+#define RTE_MBUF_UNUSED_CNT 1
+
 /**  check mbuf type in debug mode */
 #define __rte_mbuf_sanity_check(m, is_h) do { } while (0)
 
@@ -721,6 +725,9 @@  rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
 static inline uint16_t
 rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
 {
+#ifdef RTE_LIBRTE_MBUF_DEBUG
+	RTE_ASSERT(rte_mbuf_refcnt_read(m) != 0);
+#endif
 	/*
 	 * The atomic_add is an expensive operation, so we don't want to
 	 * call it in the case where we know we are the uniq holder of
@@ -791,10 +798,9 @@  void
 rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
 
 #define MBUF_RAW_ALLOC_CHECK(m) do {				\
-	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);		\
+	RTE_ASSERT(rte_mbuf_refcnt_read(m) == RTE_MBUF_UNUSED_CNT); \
 	RTE_ASSERT((m)->next == NULL);				\
 	RTE_ASSERT((m)->nb_segs == 1);				\
-	__rte_mbuf_sanity_check(m, 0);				\
 } while (0)
 
 /**
@@ -825,6 +831,10 @@  static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
 		return NULL;
 	m = (struct rte_mbuf *)mb;
 	MBUF_RAW_ALLOC_CHECK(m);
+#ifdef RTE_LIBRTE_MBUF_DEBUG
+	rte_mbuf_refcnt_set(m, 1);
+#endif
+	__rte_mbuf_sanity_check(m, 0);
 	return m;
 }
 
@@ -846,10 +856,9 @@  static __rte_always_inline void
 rte_mbuf_raw_free(struct rte_mbuf *m)
 {
 	RTE_ASSERT(RTE_MBUF_DIRECT(m));
-	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
+	RTE_ASSERT(rte_mbuf_refcnt_read(m) == RTE_MBUF_UNUSED_CNT);
 	RTE_ASSERT(m->next == NULL);
 	RTE_ASSERT(m->nb_segs == 1);
-	__rte_mbuf_sanity_check(m, 0);
 	rte_mempool_put(m->pool, m);
 }
 
@@ -1159,21 +1168,37 @@  static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
 	case 0:
 		while (idx != count) {
 			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
+#ifdef RTE_LIBRTE_MBUF_DEBUG
+			rte_mbuf_refcnt_set(mbufs[idx], 1);
+#endif
+			__rte_mbuf_sanity_check(mbufs[idx], 0);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 3:
 			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
+#ifdef RTE_LIBRTE_MBUF_DEBUG
+			rte_mbuf_refcnt_set(mbufs[idx], 1);
+#endif
+			__rte_mbuf_sanity_check(mbufs[idx], 0);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 2:
 			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
+#ifdef RTE_LIBRTE_MBUF_DEBUG
+			rte_mbuf_refcnt_set(mbufs[idx], 1);
+#endif
+			__rte_mbuf_sanity_check(mbufs[idx], 0);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 1:
 			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
+#ifdef RTE_LIBRTE_MBUF_DEBUG
+			rte_mbuf_refcnt_set(mbufs[idx], 1);
+#endif
+			__rte_mbuf_sanity_check(mbufs[idx], 0);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
@@ -1271,7 +1296,7 @@  static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
 	if (rte_mbuf_refcnt_update(md, -1) == 0) {
 		md->next = NULL;
 		md->nb_segs = 1;
-		rte_mbuf_refcnt_set(md, 1);
+		rte_mbuf_refcnt_set(md, RTE_MBUF_UNUSED_CNT);
 		rte_mbuf_raw_free(md);
 	}
 }
@@ -1304,10 +1329,13 @@  rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 			m->next = NULL;
 			m->nb_segs = 1;
 		}
+#ifdef RTE_LIBRTE_MBUF_DEBUG
+		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
+#endif
 
 		return m;
 
-       } else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
+	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
 
 
 		if (RTE_MBUF_INDIRECT(m))
@@ -1317,7 +1345,7 @@  rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 			m->next = NULL;
 			m->nb_segs = 1;
 		}
-		rte_mbuf_refcnt_set(m, 1);
+		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
 
 		return m;
 	}