mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess

Message ID 20221114071439.38902-1-changfengnan@bytedance.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/iol-intel-Functional fail Functional Testing issues
ci/iol-aarch64-unit-testing success Testing PASS
ci/github-robot: build fail github build: failed
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/intel-Testing fail Testing issues

Commit Message

Fengnan Chang Nov. 14, 2022, 7:14 a.m. UTC
  rte_mempool_create() puts the tailq entry into the rte_mempool_tailq
list before populating, while pool_data is only set during populate. So
in multi-process mode, if process A creates a mempool, process B can get
the mempool through rte_mempool_lookup() before pool_data is set; if B
then calls rte_mempool_lookup(), it will cause a segmentation fault.
Fix this by putting the tailq entry into rte_mempool_tailq after populate.

Signed-off-by: changfengnan <changfengnan@bytedance.com>
---
 lib/mempool/rte_mempool.c | 40 +++++++++++++++++++--------------------
 1 file changed, 19 insertions(+), 21 deletions(-)
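
To make the reported failure mode concrete, here is a minimal sketch of the secondary-process sequence described above (the pool name and the timing window are hypothetical; the crash path through the ring ops follows lib/mempool):

#include <stdio.h>
#include <rte_mempool.h>

/* Secondary process, racing against a primary that is between
 * rte_mempool_create_empty() and its first populate call. */
static void
check_pool_count(void)
{
	/* Succeeds as soon as the primary has inserted the tailq entry,
	 * even though the pool is not populated yet. */
	struct rte_mempool *mp = rte_mempool_lookup("hypothetical_pool");

	if (mp == NULL)
		return;

	/* rte_mempool_avail_count() calls the ops' get_count(); for the
	 * default ring driver that is rte_ring_count(mp->pool_data), and
	 * with pool_data still NULL this dereferences a null pointer. */
	printf("avail: %u\n", rte_mempool_avail_count(mp));
}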
  

Comments

Morten Brørup Nov. 14, 2022, 9:55 a.m. UTC | #1
> From: changfengnan [mailto:changfengnan@bytedance.com]
> Sent: Monday, 14 November 2022 08.15
> 
> rte_mempool_create put tailq entry into rte_mempool_tailq list before
> populate, and pool_data set when populate. So in multi process, if
> process A create mempool, and process B can get mempool through
> rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> it will cause segment fault.
> Fix this by put tailq entry into rte_mempool_tailq after populate.
> 
> Signed-off-by: changfengnan <changfengnan@bytedance.com>
> ---

Good catch.

You must use your real name (not your username) before the email address in the sign-off line, or the patch cannot be accepted. Please refer to: https://doc.dpdk.org/guides/contributing/patches.html#commit-messages-body

With a proper sign-off line,
Acked-by: Morten Brørup <mb@smartsharesystems.com>
  
Fengnan Chang Nov. 14, 2022, 11:23 a.m. UTC | #2
It seems that this modification ignores certain cases, for example
calling rte_mempool_create_empty() directly; maybe adding a new flag bit
to indicate when to put the tailq entry into the rte_mempool_tailq list
would be a better way?

Looking forward to any suggestions.

Thanks.
Fengnan Chang.

changfengnan <changfengnan@bytedance.com> wrote on Mon, Nov 14, 2022 at 15:14:
>
> rte_mempool_create put tailq entry into rte_mempool_tailq list before
> populate, and pool_data set when populate. So in multi process, if
> process A create mempool, and process B can get mempool through
> rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> it will cause segment fault.
> Fix this by put tailq entry into rte_mempool_tailq after populate.
>
> Signed-off-by: changfengnan <changfengnan@bytedance.com>
> ---
>  lib/mempool/rte_mempool.c | 40 +++++++++++++++++++--------------------
>  1 file changed, 19 insertions(+), 21 deletions(-)
>
> diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> index 4c78071a34..b23d6138ff 100644
> --- a/lib/mempool/rte_mempool.c
> +++ b/lib/mempool/rte_mempool.c
> @@ -798,9 +798,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
>         int socket_id, unsigned flags)
>  {
>         char mz_name[RTE_MEMZONE_NAMESIZE];
> -       struct rte_mempool_list *mempool_list;
>         struct rte_mempool *mp = NULL;
> -       struct rte_tailq_entry *te = NULL;
>         const struct rte_memzone *mz = NULL;
>         size_t mempool_size;
>         unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
> @@ -820,8 +818,6 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
>                           RTE_CACHE_LINE_MASK) != 0);
>  #endif
>
> -       mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> -
>         /* asked for zero items */
>         if (n == 0) {
>                 rte_errno = EINVAL;
> @@ -866,14 +862,6 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
>         private_data_size = (private_data_size +
>                              RTE_MEMPOOL_ALIGN_MASK) & (~RTE_MEMPOOL_ALIGN_MASK);
>
> -
> -       /* try to allocate tailq entry */
> -       te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
> -       if (te == NULL) {
> -               RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
> -               goto exit_unlock;
> -       }
> -
>         mempool_size = RTE_MEMPOOL_HEADER_SIZE(mp, cache_size);
>         mempool_size += private_data_size;
>         mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
> @@ -908,7 +896,6 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
>         mp->private_data_size = private_data_size;
>         STAILQ_INIT(&mp->elt_list);
>         STAILQ_INIT(&mp->mem_list);
> -
>         /*
>          * local_cache pointer is set even if cache_size is zero.
>          * The local_cache points to just past the elt_pa[] array.
> @@ -922,12 +909,6 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
>                         mempool_cache_init(&mp->local_cache[lcore_id],
>                                            cache_size);
>         }
> -
> -       te->data = mp;
> -
> -       rte_mcfg_tailq_write_lock();
> -       TAILQ_INSERT_TAIL(mempool_list, te, next);
> -       rte_mcfg_tailq_write_unlock();
>         rte_mcfg_mempool_write_unlock();
>
>         rte_mempool_trace_create_empty(name, n, elt_size, cache_size,
> @@ -936,7 +917,6 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
>
>  exit_unlock:
>         rte_mcfg_mempool_write_unlock();
> -       rte_free(te);
>         rte_mempool_free(mp);
>         return NULL;
>  }
> @@ -951,11 +931,22 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
>  {
>         int ret;
>         struct rte_mempool *mp;
> +       struct rte_mempool_list *mempool_list;
> +       struct rte_tailq_entry *te = NULL;
> +
> +       /* try to allocate tailq entry */
> +       te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
> +       if (te == NULL) {
> +               RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
> +               return NULL;
> +       }
>
>         mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>                 private_data_size, socket_id, flags);
> -       if (mp == NULL)
> +       if (mp == NULL) {
> +               rte_free(te);
>                 return NULL;
> +       }
>
>         /*
>          * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> @@ -984,12 +975,19 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
>         if (obj_init)
>                 rte_mempool_obj_iter(mp, obj_init, obj_init_arg);
>
> +       te->data = mp;
> +       mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> +       rte_mcfg_tailq_write_lock();
> +       TAILQ_INSERT_TAIL(mempool_list, te, next);
> +       rte_mcfg_tailq_write_unlock();
> +
>         rte_mempool_trace_create(name, n, elt_size, cache_size,
>                 private_data_size, mp_init, mp_init_arg, obj_init,
>                 obj_init_arg, flags, mp);
>         return mp;
>
>   fail:
> +       rte_free(te);
>         rte_mempool_free(mp);
>         return NULL;
>  }
> --
> 2.37.0 (Apple Git-136)
>
  
David Marchand Nov. 14, 2022, 8:43 p.m. UTC | #3
On Mon, Nov 14, 2022 at 9:13 AM changfengnan <changfengnan@bytedance.com> wrote:
>
> rte_mempool_create put tailq entry into rte_mempool_tailq list before
> populate, and pool_data set when populate. So in multi process, if
> process A create mempool, and process B can get mempool through
> rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> it will cause segment fault.

I fail to see how pool_data impacts rte_mempool_lookup.
Something is fishy about this commitlog.


> Fix this by put tailq entry into rte_mempool_tailq after populate.

Moving the tailq manipulation to rte_mempool_create() only is probably incorrect.
An application is allowed to call rte_mempool_create_empty() and
rte_mempool_populate().
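
For reference, a sketch of that two-step construction (names and sizes are made up): with the tailq insertion moved into rte_mempool_create() only, a pool built this way would never be registered, so rte_mempool_lookup() would stop finding it.

#include <rte_mempool.h>

/* Two-step construction the mempool API explicitly allows. */
static struct rte_mempool *
create_pool_two_step(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("app_pool", 4096, 2048, 256, 0,
				      SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return NULL;

	/* choose the ops backend, then populate; only after populate
	 * is pool_data valid */
	if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	return mp;
}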

I did not look in depth, but it is likely the reason why testpmd (as
run with devtools/test-null.sh) won't pass anymore.
The CI reported this issue in various envs.

We can't take this patch.


>
> Signed-off-by: changfengnan <changfengnan@bytedance.com>

Please use your real name.
  
Fengnan Chang Nov. 15, 2022, 1:51 a.m. UTC | #4
David Marchand <david.marchand@redhat.com> wrote on Tue, Nov 15, 2022 at 04:44:
>
> On Mon, Nov 14, 2022 at 9:13 AM changfengnan <changfengnan@bytedance.com> wrote:
> >
> > rte_mempool_create put tailq entry into rte_mempool_tailq list before
> > populate, and pool_data set when populate. So in multi process, if
> > process A create mempool, and process B can get mempool through
> > rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> > it will cause segment fault.
>
> I fail to see how pool_data impacts rte_mempool_lookup.
> Something is fishy about this commitlog.

Oh, it's my fault in this commit message. Correction: if B gets the
mempool through rte_mempool_lookup() before pool_data is set and then
calls rte_mempool_avail_count(), it will cause a segmentation fault.

>
>
> > Fix this by put tailq entry into rte_mempool_tailq after populate.
>
> Moving tailq manipulation to rte_mempool_create only, is probably incorrect.
> An application is allowed to call rte_mempool_create_empty() and
> rte_mempool_populate().
>
> I did not look in depth, but It is likely the reason why testpmd (as
> run with devtools/test-null.sh) won't pass anymore.
> The CI reported this issue in various envs.
>
> We can't take this patch.

Yeah, this version makes the CI fail.
I didn't notice before that rte_mempool_create_empty() can be called
directly; maybe adding a new flag bit to indicate when to put the tailq
entry into the rte_mempool_tailq list is a better way. If there is no
better idea, I'll send a new version.

>
>
> >
> > Signed-off-by: changfengnan <changfengnan@bytedance.com>
>
> Please use your real name.

It's my real name.

>
>
> --
> David Marchand
>
  
Morten Brørup Nov. 15, 2022, 7:23 a.m. UTC | #5
> From: Fengnan Chang [mailto:changfengnan@bytedance.com]
> Sent: Tuesday, 15 November 2022 02.51
> 
> David Marchand <david.marchand@redhat.com> wrote on Tue, Nov 15, 2022 at 04:44:
> >
> > On Mon, Nov 14, 2022 at 9:13 AM changfengnan
> <changfengnan@bytedance.com> wrote:

[...]

> > > Signed-off-by: changfengnan <changfengnan@bytedance.com>
> >
> > Please use your real name.
> 
> It's my real name.

The name in the sign-off line must be Fengnan Chang <changfengnan@bytedance.com>, not changfengnan <changfengnan@bytedance.com>.

-Morten
  
David Marchand Nov. 15, 2022, 7:47 a.m. UTC | #6
On Tue, Nov 15, 2022 at 2:51 AM Fengnan Chang
<changfengnan@bytedance.com> wrote:
>
> David Marchand <david.marchand@redhat.com> wrote on Tue, Nov 15, 2022 at 04:44:
> >
> > On Mon, Nov 14, 2022 at 9:13 AM changfengnan <changfengnan@bytedance.com> wrote:
> > >
> > > rte_mempool_create put tailq entry into rte_mempool_tailq list before
> > > populate, and pool_data set when populate. So in multi process, if
> > > process A create mempool, and process B can get mempool through
> > > rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> > > it will cause segment fault.
> >
> > I fail to see how pool_data impacts rte_mempool_lookup.
> > Something is fishy about this commitlog.
>
> oh, it's my fault about this commit. correct: if B can get mempool through
> rte_mempool_lookup before pool_data set, and call rte_mempool_avail_count,
> it will cause segment fault.

Ok, now it makes more sense :-).

>
> >
> >
> > > Fix this by put tailq entry into rte_mempool_tailq after populate.
> >
> > Moving tailq manipulation to rte_mempool_create only, is probably incorrect.
> > An application is allowed to call rte_mempool_create_empty() and
> > rte_mempool_populate().
> >
> > I did not look in depth, but It is likely the reason why testpmd (as
> > run with devtools/test-null.sh) won't pass anymore.
> > The CI reported this issue in various envs.
> >
> > We can't take this patch.
>
> Yeah, this version makes CI fail.
> I didn't notice rte_mempool_create_empty will called directly before, maybe
> add a new flag bit to indicate when to put tailq entry into rte_mempool_tailq
> list is a better way. If no better idea, I'll send a new version.

I don't think we need another flag.
Can we "publish" the mempool at the mempool_ops_alloc_once stage?


>
> >
> >
> > >
> > > Signed-off-by: changfengnan <changfengnan@bytedance.com>
> >
> > Please use your real name.
>
> It's my real name.

Sorry, I meant your full name, like Fengnan Chang <changfengnan@bytedance.com>
  
Olivier Matz Nov. 15, 2022, 8:29 a.m. UTC | #7
Hi,

On Tue, Nov 15, 2022 at 08:47:15AM +0100, David Marchand wrote:
> On Tue, Nov 15, 2022 at 2:51 AM Fengnan Chang
> <changfengnan@bytedance.com> wrote:
> >
> > David Marchand <david.marchand@redhat.com> wrote on Tue, Nov 15, 2022 at 04:44:
> > >
> > > On Mon, Nov 14, 2022 at 9:13 AM changfengnan <changfengnan@bytedance.com> wrote:
> > > >
> > > > rte_mempool_create put tailq entry into rte_mempool_tailq list before
> > > > populate, and pool_data set when populate. So in multi process, if
> > > > process A create mempool, and process B can get mempool through
> > > > rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> > > > it will cause segment fault.
> > >
> > > I fail to see how pool_data impacts rte_mempool_lookup.
> > > Something is fishy about this commitlog.
> >
> > oh, it's my fault about this commit. correct: if B can get mempool through
> > rte_mempool_lookup before pool_data set, and call rte_mempool_avail_count,
> > it will cause segment fault.
> 
> Ok, now it makes more sense :-).
> 
> >
> > >
> > >
> > > > Fix this by put tailq entry into rte_mempool_tailq after populate.
> > >
> > > Moving tailq manipulation to rte_mempool_create only, is probably incorrect.
> > > An application is allowed to call rte_mempool_create_empty() and
> > > rte_mempool_populate().
> > >
> > > I did not look in depth, but It is likely the reason why testpmd (as
> > > run with devtools/test-null.sh) won't pass anymore.
> > > The CI reported this issue in various envs.
> > >
> > > We can't take this patch.
> >
> > Yeah, this version makes CI fail.
> > I didn't notice rte_mempool_create_empty will called directly before, maybe
> > add a new flag bit to indicate when to put tailq entry into rte_mempool_tailq
> > list is a better way. If no better idea, I'll send a new version.
> 
> I don't think we need an other flag.
> Can we "publish" the mempool at the mempool_ops_alloc_once stage?

mempool_ops_alloc_once() seems to be the proper place, yes.

Alternatively, I suppose this issue can be fixed in the secondary
application:

- it can wait until the RTE_MEMPOOL_F_POOL_CREATED flag is present before
  using the mempool (see the sketch after this list).

- or it can wait for the RTE_MEMPOOL_EVENT_READY event

- or it can wait until the whole initialization of the primary
  application is finished by another means (a sort of lock). I don't know
  the exact use case, but to me, it looks sane to do that; it would
  protect from other similar issues.
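
A minimal sketch of the first option, done entirely in the secondary process (the 1 ms poll interval is arbitrary, and a real application would add a timeout):

#include <rte_cycles.h>
#include <rte_mempool.h>

/* Secondary process: do not touch the pool until the primary's first
 * populate call has set RTE_MEMPOOL_F_POOL_CREATED, i.e. until
 * pool_data is valid. */
static struct rte_mempool *
lookup_ready_mempool(const char *name)
{
	struct rte_mempool *mp;

	for (;;) {
		mp = rte_mempool_lookup(name);
		if (mp != NULL && (mp->flags & RTE_MEMPOOL_F_POOL_CREATED))
			return mp;
		rte_delay_ms(1);
	}
}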


Olivier

> 
> 
> >
> > >
> > >
> > > >
> > > > Signed-off-by: changfengnan <changfengnan@bytedance.com>
> > >
> > > Please use your real name.
> >
> > It's my real name.
> 
> Sorry, I meant your full name, like Fengnan Chang <changfengnan@bytedance.com>
> 
> 
> -- 
> David Marchand
>
  
Fengnan Chang Nov. 15, 2022, 11:30 a.m. UTC | #8
Olivier Matz <olivier.matz@6wind.com> wrote on Tue, Nov 15, 2022 at 16:29:
>
> Hi,
>
> On Tue, Nov 15, 2022 at 08:47:15AM +0100, David Marchand wrote:
> > On Tue, Nov 15, 2022 at 2:51 AM Fengnan Chang
> > <changfengnan@bytedance.com> wrote:
> > >
> > > David Marchand <david.marchand@redhat.com> wrote on Tue, Nov 15, 2022 at 04:44:
> > > >
> > > > On Mon, Nov 14, 2022 at 9:13 AM changfengnan <changfengnan@bytedance.com> wrote:
> > > > >
> > > > > rte_mempool_create put tailq entry into rte_mempool_tailq list before
> > > > > populate, and pool_data set when populate. So in multi process, if
> > > > > process A create mempool, and process B can get mempool through
> > > > > rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> > > > > it will cause segment fault.
> > > >
> > > > I fail to see how pool_data impacts rte_mempool_lookup.
> > > > Something is fishy about this commitlog.
> > >
> > > oh, it's my fault about this commit. correct: if B can get mempool through
> > > rte_mempool_lookup before pool_data set, and call rte_mempool_avail_count,
> > > it will cause segment fault.
> >
> > Ok, now it makes more sense :-).
> >
> > >
> > > >
> > > >
> > > > > Fix this by put tailq entry into rte_mempool_tailq after populate.
> > > >
> > > > Moving tailq manipulation to rte_mempool_create only, is probably incorrect.
> > > > An application is allowed to call rte_mempool_create_empty() and
> > > > rte_mempool_populate().
> > > >
> > > > I did not look in depth, but It is likely the reason why testpmd (as
> > > > run with devtools/test-null.sh) won't pass anymore.
> > > > The CI reported this issue in various envs.
> > > >
> > > > We can't take this patch.
> > >
> > > Yeah, this version makes CI fail.
> > > I didn't notice rte_mempool_create_empty will called directly before, maybe
> > > add a new flag bit to indicate when to put tailq entry into rte_mempool_tailq
> > > list is a better way. If no better idea, I'll send a new version.
> >
> > I don't think we need an other flag.
> > Can we "publish" the mempool at the mempool_ops_alloc_once stage?
>
> The mempool_ops_alloc_once() seems it is the proper place, yes.
>
> Alternatively, I suppose this issue can be fixed in the secondary
> application:

I found this problem in SPDK; if this is fixed in the secondary app,
SPDK needs to be modified too.
The options below would work, but I prefer to publish the mempool at
the mempool_ops_alloc_once() stage; I'll send a new version doing that.

>
> - it can wait that the flag RTE_MEMPOOL_F_POOL_CREATED is present before
>   using the mempool.
>
> - or it can wait the RTE_MEMPOOL_EVENT_READY
>
> - or it can wait that the whole initialization of the primary
>   application is finished by another mean (a sort of lock). I don't know
>   the exact use case, but to me, it looks sane to do that, it would
>   protect from other similar issues.
>
>
> Olivier
>
> >
> >
> > >
> > > >
> > > >
> > > > >
> > > > > Signed-off-by: changfengnan <changfengnan@bytedance.com>
> > > >
> > > > Please use your real name.
> > >
> > > It's my real name.
> >
> > Sorry, I meant your full name, like Fengnan Chang <changfengnan@bytedance.com>
> >
> >
> > --
> > David Marchand
> >
  

Patch

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 4c78071a34..b23d6138ff 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -798,9 +798,7 @@  rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	int socket_id, unsigned flags)
 {
 	char mz_name[RTE_MEMZONE_NAMESIZE];
-	struct rte_mempool_list *mempool_list;
 	struct rte_mempool *mp = NULL;
-	struct rte_tailq_entry *te = NULL;
 	const struct rte_memzone *mz = NULL;
 	size_t mempool_size;
 	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
@@ -820,8 +818,6 @@  rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 			  RTE_CACHE_LINE_MASK) != 0);
 #endif
 
-	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
-
 	/* asked for zero items */
 	if (n == 0) {
 		rte_errno = EINVAL;
@@ -866,14 +862,6 @@  rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	private_data_size = (private_data_size +
 			     RTE_MEMPOOL_ALIGN_MASK) & (~RTE_MEMPOOL_ALIGN_MASK);
 
-
-	/* try to allocate tailq entry */
-	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
-	if (te == NULL) {
-		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
-		goto exit_unlock;
-	}
-
 	mempool_size = RTE_MEMPOOL_HEADER_SIZE(mp, cache_size);
 	mempool_size += private_data_size;
 	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
@@ -908,7 +896,6 @@  rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	mp->private_data_size = private_data_size;
 	STAILQ_INIT(&mp->elt_list);
 	STAILQ_INIT(&mp->mem_list);
-
 	/*
 	 * local_cache pointer is set even if cache_size is zero.
 	 * The local_cache points to just past the elt_pa[] array.
@@ -922,12 +909,6 @@  rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 			mempool_cache_init(&mp->local_cache[lcore_id],
 					   cache_size);
 	}
-
-	te->data = mp;
-
-	rte_mcfg_tailq_write_lock();
-	TAILQ_INSERT_TAIL(mempool_list, te, next);
-	rte_mcfg_tailq_write_unlock();
 	rte_mcfg_mempool_write_unlock();
 
 	rte_mempool_trace_create_empty(name, n, elt_size, cache_size,
@@ -936,7 +917,6 @@  rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 
 exit_unlock:
 	rte_mcfg_mempool_write_unlock();
-	rte_free(te);
 	rte_mempool_free(mp);
 	return NULL;
 }
@@ -951,11 +931,22 @@  rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 {
 	int ret;
 	struct rte_mempool *mp;
+	struct rte_mempool_list *mempool_list;
+	struct rte_tailq_entry *te = NULL;
+
+	/* try to allocate tailq entry */
+	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
+		return NULL;
+	}
 
 	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
 		private_data_size, socket_id, flags);
-	if (mp == NULL)
+	if (mp == NULL) {
+		rte_free(te);
 		return NULL;
+	}
 
 	/*
 	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
@@ -984,12 +975,19 @@  rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	if (obj_init)
 		rte_mempool_obj_iter(mp, obj_init, obj_init_arg);
 
+	te->data = mp;
+	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
+	rte_mcfg_tailq_write_lock();
+	TAILQ_INSERT_TAIL(mempool_list, te, next);
+	rte_mcfg_tailq_write_unlock();
+
 	rte_mempool_trace_create(name, n, elt_size, cache_size,
 		private_data_size, mp_init, mp_init_arg, obj_init,
 		obj_init_arg, flags, mp);
 	return mp;
 
  fail:
+	rte_free(te);
 	rte_mempool_free(mp);
 	return NULL;
 }