mempool: fix mempool virt populate with small chunks

Message ID 20200109132720.15664-1-olivier.matz@6wind.com (mailing list archive)
State Superseded, archived
Delegated to: David Marchand
Series: mempool: fix mempool virt populate with small chunks

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-testing success Testing PASS
ci/iol-nxp-Performance success Performance Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/travis-robot warning Travis build: failed
ci/Intel-compilation success Compilation OK

Commit Message

Olivier Matz Jan. 9, 2020, 1:27 p.m. UTC
  To populate a mempool with a virtual area, the mempool code calls
rte_mempool_populate_iova() for each iova-contiguous area. It happens
(rarely) that this area is too small to store one object. In this case,
rte_mempool_populate_iova() returns an error, which is forwarded by
rte_mempool_populate_virt().

This case should not throw an error in
rte_mempool_populate_virt(). Instead, the area that is too small should
just be ignored.
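
(For illustration, a condensed sketch of the chunk walk in rte_mempool_populate_virt() -- not the exact code, which lives in rte_mempool.c, and with alignment and error-path details elided:)

  /* Condensed sketch: walk the area page by page, grow each run while
   * the pages stay IOVA-contiguous, then populate the run. A run shorter
   * than one object currently makes rte_mempool_populate_iova() fail,
   * which aborts the whole populate instead of skipping that run. */
  for (off = 0; off < len; off += phys_len) {
          iova = get_iova(addr + off);

          /* extend the run while the next page is IOVA-contiguous */
          for (phys_len = pg_sz; off + phys_len < len; phys_len += pg_sz) {
                  if (get_iova(addr + off + phys_len) != iova + phys_len)
                          break;
          }

          ret = rte_mempool_populate_iova(mp, addr + off, iova, phys_len,
                  free_cb, opaque);
          if (ret < 0)
                  goto fail; /* also hit when the run is merely too small */
  }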

To fix this issue, change the return value of
rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
so it can be ignored by the caller. As this would be an API change, add
a compat wrapper to keep the current API unchanged. The wrapper will be
removed for 20.11.

Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---

Is there a simple way to ensure that we won't forget to remove the
wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
it's not clear to me how.

 doc/guides/rel_notes/deprecation.rst |  4 ++++
 lib/librte_mempool/rte_mempool.c     | 28 +++++++++++++++++++++++-----
 lib/librte_mempool/rte_mempool.h     |  5 ++++-
 3 files changed, 31 insertions(+), 6 deletions(-)
  

Comments

David Marchand Jan. 9, 2020, 1:40 p.m. UTC | #1
On Thu, Jan 9, 2020 at 2:27 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> To populate a mempool with a virtual area, the mempool code calls
> rte_mempool_populate_iova() for each iova-contiguous area. It happens
> (rarely) that this area is too small to store one object. In this case,
> rte_mempool_populate_iova() returns an error, which is forwarded by
> rte_mempool_populate_virt().
>
> This case should not throw an error in
> rte_mempool_populate_virt(). Instead, the area that is too small should
> just be ignored.
>
> To fix this issue, change the return value of
> rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> so it can be ignored by the caller. As this would be an API change, add
> a compat wrapper to keep the current API unchanged. The wrapper will be
> removed for 20.11.
>
> Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> Cc: stable@dpdk.org
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>
> Is there a simple way to ensure that we won't forget to remove the
> wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
> it's not clear to me how.
>
>  doc/guides/rel_notes/deprecation.rst |  4 ++++
>  lib/librte_mempool/rte_mempool.c     | 28 +++++++++++++++++++++++-----
>  lib/librte_mempool/rte_mempool.h     |  5 ++++-
>  3 files changed, 31 insertions(+), 6 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index afa94b43e..b6e89d9a2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -86,3 +86,7 @@ Deprecation Notices
>    to set new power environment if power environment was already initialized.
>    In this case the function will return -1 unless the environment is unset first
>    (using ``rte_power_unset_env``). Other function usage scenarios will not change.
> +
> +* mempool: starting from v20.11, rte_mempool_populate_iova() will
> +  return -ENOBUFS instead of -EINVAL when there is not enough room to
> +  store one object.
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 78d8eb941..bda361ce6 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -297,8 +297,8 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
>   * zone. Return the number of objects added, or a negative value
>   * on error.
>   */
> -int
> -rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +static int
> +__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>         rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>         void *opaque)
>  {
> @@ -332,7 +332,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>                 off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
>
>         if (off > len) {
> -               ret = -EINVAL;
> +               ret = -ENOBUFS;
>                 goto fail;
>         }
>
> @@ -343,7 +343,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>
>         /* not enough room to store one object */
>         if (i == 0) {
> -               ret = -EINVAL;
> +               ret = -ENOBUFS;
>                 goto fail;
>         }
>
> @@ -356,6 +356,22 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>         return ret;
>  }
>
> +/* Compat wrapper, to be removed when changing the API is allowed (v20.11). */
> +int
> +rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +       rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> +       void *opaque)
> +{
> +       int ret;
> +
> +       ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
> +                                       opaque);
> +       if (ret == -ENOBUFS)
> +               ret = -EINVAL;
> +
> +       return ret;
> +}
> +
>  static rte_iova_t
>  get_iova(void *addr)
>  {
> @@ -406,8 +422,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>                                 break;
>                 }
>
> -               ret = rte_mempool_populate_iova(mp, addr + off, iova,
> +               ret = __rte_mempool_populate_iova(mp, addr + off, iova,
>                         phys_len, free_cb, opaque);
> +               if (ret == -ENOBUFS)
> +                       continue;
>                 if (ret < 0)
>                         goto fail;
>                 /* no need to call the free callback for next chunks */
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index f81152af9..c08bb444f 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1108,7 +1108,10 @@ rte_mempool_free(struct rte_mempool *mp);
>   * @return
>   *   The number of objects added on success.
>   *   On error, the chunk is not added in the memory list of the
> - *   mempool and a negative errno is returned.
> + *   mempool and a negative errno is returned:
> + *   (-ENOBUFS): not enough room in chunk for one object.
> + *   (-ENOSPC): mempool is already populated.
> + *   (-ENOMEM): allocation failure.

You can't update the doc before this function does return -ENOBUFS.
  
Olivier Matz Jan. 9, 2020, 1:46 p.m. UTC | #2
On Thu, Jan 09, 2020 at 02:40:06PM +0100, David Marchand wrote:
> On Thu, Jan 9, 2020 at 2:27 PM Olivier Matz <olivier.matz@6wind.com> wrote:
> >
> > To populate a mempool with a virtual area, the mempool code calls
> > rte_mempool_populate_iova() for each iova-contiguous area. It happens
> > (rarely) that this area is too small to store one object. In this case,
> > rte_mempool_populate_iova() returns an error, which is forwarded by
> > rte_mempool_populate_virt().
> >
> > This case should not throw an error in
> > rte_mempool_populate_virt(). Instead, the area that is too small should
> > just be ignored.
> >
> > To fix this issue, change the return value of
> > rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> > so it can be ignored by the caller. As this would be an API change, add
> > a compat wrapper to keep the current API unchanged. The wrapper will be
> > removed for 20.11.
> >
> > Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> >
> > Is there a simple way to ensure that we won't forget to remove the
> > wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
> > it's not clear to me how.
> >
> >  doc/guides/rel_notes/deprecation.rst |  4 ++++
> >  lib/librte_mempool/rte_mempool.c     | 28 +++++++++++++++++++++++-----
> >  lib/librte_mempool/rte_mempool.h     |  5 ++++-
> >  3 files changed, 31 insertions(+), 6 deletions(-)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index afa94b43e..b6e89d9a2 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -86,3 +86,7 @@ Deprecation Notices
> >    to set new power environment if power environment was already initialized.
> >    In this case the function will return -1 unless the environment is unset first
> >    (using ``rte_power_unset_env``). Other function usage scenarios will not change.
> > +
> > +* mempool: starting from v20.11, rte_mempool_populate_iova() will
> > +  return -ENOBUFS instead of -EINVAL when there is not enough room to
> > +  store one object.
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index 78d8eb941..bda361ce6 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -297,8 +297,8 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
> >   * zone. Return the number of objects added, or a negative value
> >   * on error.
> >   */
> > -int
> > -rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > +static int
> > +__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >         rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> >         void *opaque)
> >  {
> > @@ -332,7 +332,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >                 off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
> >
> >         if (off > len) {
> > -               ret = -EINVAL;
> > +               ret = -ENOBUFS;
> >                 goto fail;
> >         }
> >
> > @@ -343,7 +343,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >
> >         /* not enough room to store one object */
> >         if (i == 0) {
> > -               ret = -EINVAL;
> > +               ret = -ENOBUFS;
> >                 goto fail;
> >         }
> >
> > @@ -356,6 +356,22 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >         return ret;
> >  }
> >
> > +/* Compat wrapper, to be removed when changing the API is allowed (v20.11). */
> > +int
> > +rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > +       rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> > +       void *opaque)
> > +{
> > +       int ret;
> > +
> > +       ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
> > +                                       opaque);
> > +       if (ret == -ENOBUFS)
> > +               ret = -EINVAL;
> > +
> > +       return ret;
> > +}
> > +
> >  static rte_iova_t
> >  get_iova(void *addr)
> >  {
> > @@ -406,8 +422,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> >                                 break;
> >                 }
> >
> > -               ret = rte_mempool_populate_iova(mp, addr + off, iova,
> > +               ret = __rte_mempool_populate_iova(mp, addr + off, iova,
> >                         phys_len, free_cb, opaque);
> > +               if (ret == -ENOBUFS)
> > +                       continue;
> >                 if (ret < 0)
> >                         goto fail;
> >                 /* no need to call the free callback for next chunks */
> > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > index f81152af9..c08bb444f 100644
> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -1108,7 +1108,10 @@ rte_mempool_free(struct rte_mempool *mp);
> >   * @return
> >   *   The number of objects added on success.
> >   *   On error, the chunk is not added in the memory list of the
> > - *   mempool and a negative errno is returned.
> > + *   mempool and a negative errno is returned:
> > + *   (-ENOBUFS): not enough room in chunk for one object.
> > + *   (-ENOSPC): mempool is already populated.
> > + *   (-ENOMEM): allocation failure.
> 
> You can't update the doc before this function does return -ENOBUFS.

Indeed :)

It should be EINVAL instead of ENOBUFS above.
...and I should add a reminder to update the doc for 20.11
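
(Concretely, the corrected @return block for the current release would read something like this -- illustrative wording only, with the -EINVAL/-ENOBUFS switch deferred to 20.11:)

   * @return
   *   The number of objects added on success.
   *   On error, the chunk is not added in the memory list of the
   *   mempool and a negative errno is returned:
   *   (-EINVAL): not enough room in chunk for one object
   *     (to become -ENOBUFS starting from v20.11).
   *   (-ENOSPC): mempool is already populated.
   *   (-ENOMEM): allocation failure.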
  
Burakov, Anatoly Jan. 9, 2020, 1:52 p.m. UTC | #3
On 09-Jan-20 1:27 PM, Olivier Matz wrote:
> To populate a mempool with a virtual area, the mempool code calls
> rte_mempool_populate_iova() for each iova-contiguous area. It happens
> (rarely) that this area is too small to store one object. In this case,
> rte_mempool_populate_iova() returns an error, which is forwarded by
> rte_mempool_populate_virt().
> 
> This case should not throw an error in
> rte_mempool_populate_virt(). Instead, the area that is too small should
> just be ignored.
> 
> To fix this issue, change the return value of
> rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> so it can be ignored by the caller. As this would be an API change, add
> a compat wrapper to keep the current API unchanged. The wrapper will be
> removed for 20.11.
> 
> Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
> 

The approach fixes the issue on my end, so

Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>

> Is there a simple way to ensure that we won't forget to remove the
> wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
> it's not clear to me how.
> 

Yes, i'd like to do better than "ah shur we won't forget pinky swear".

Can't we do this with ABI versioning? E.g.

rte_populate_iova_v20() ... returns EINVAL

rte_populate_iova_v21() ... returns ENOBUFS

I'm pretty sure, even if it doesn't break, it will still be more likely 
to not be forgotten because there's almost a guarantee that someone will 
grep for symbol versioning macros across the codebase around 20.11 
timeframe.
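
(For reference, a minimal sketch of what that versioning could look like with the macros from rte_function_versioning.h -- the suffixes, version-node numbers and build details below are illustrative only; the library's version map, and for meson builds the function-versioning flag, would need matching updates:)

  /* Illustrative sketch, not a tested implementation: keep the old
   * return convention for binaries bound to the existing ABI node and
   * export the new convention as the default symbol. */
  #include <rte_function_versioning.h>

  int
  rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
          rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
          void *opaque)
  {
          int ret = __rte_mempool_populate_iova(mp, vaddr, iova, len,
                                                free_cb, opaque);
          return ret == -ENOBUFS ? -EINVAL : ret; /* legacy return value */
  }
  VERSION_SYMBOL(rte_mempool_populate_iova, _v20, 20.0);

  int
  rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
          rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
          void *opaque)
  {
          return __rte_mempool_populate_iova(mp, vaddr, iova, len,
                                             free_cb, opaque);
  }
  BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v21, 21);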
  
Olivier Matz Jan. 9, 2020, 2:23 p.m. UTC | #4
On Thu, Jan 09, 2020 at 01:52:41PM +0000, Burakov, Anatoly wrote:
> On 09-Jan-20 1:27 PM, Olivier Matz wrote:
> > To populate a mempool with a virtual area, the mempool code calls
> > rte_mempool_populate_iova() for each iova-contiguous area. It happens
> > (rarely) that this area is too small to store one object. In this case,
> > rte_mempool_populate_iova() returns an error, which is forwarded by
> > rte_mempool_populate_virt().
> > 
> > This case should not throw an error in
> > rte_mempool_populate_virt(). Instead, the area that is too small should
> > just be ignored.
> > 
> > To fix this issue, change the return value of
> > rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> > so it can be ignored by the caller. As this would be an API change, add
> > a compat wrapper to keep the current API unchanged. The wrapper will be
> > removed for 20.11.
> > 
> > Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> > 
> 
> The approach fixes the issue on my end, so
> 
> Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
> 
> > Is there a simple way to ensure that we won't forget to remove the
> > wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
> > it's not clear to me how.
> > 
> 
> Yes, i'd like to do better than "ah shur we won't forget pinky swear".
> 
> Can't we do this with ABI versioning? E.g.
> 
> rte_populate_iova_v20() ... returns EINVAL
> 
> rte_populate_iova_v21() ... returns ENOBUFS
> 
> I'm pretty sure, even if it doesn't break, it will still be more likely to
> not be forgotten because there's almost a guarantee that someone will grep
> for symbol versioning macros across the codebase around 20.11 timeframe.

Without using symbol versioning, would this be ok too?

  int
  rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
         rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
         void *opaque)
  {
         int ret;

         ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb, opaque);

  #if RTE_VERSION < RTE_VERSION_NUM(20, 11, 0, 0)
         if (ret == -ENOBUFS)
                 ret = -EINVAL;
  #endif

         return ret;
  }



> -- 
> Thanks,
> Anatoly
  
Burakov, Anatoly Jan. 9, 2020, 2:29 p.m. UTC | #5
On 09-Jan-20 2:23 PM, Olivier Matz wrote:
> On Thu, Jan 09, 2020 at 01:52:41PM +0000, Burakov, Anatoly wrote:
>> On 09-Jan-20 1:27 PM, Olivier Matz wrote:
>>> To populate a mempool with a virtual area, the mempool code calls
>>> rte_mempool_populate_iova() for each iova-contiguous area. It happens
>>> (rarely) that this area is too small to store one object. In this case,
>>> rte_mempool_populate_iova() returns an error, which is forwarded by
>>> rte_mempool_populate_virt().
>>>
>>> This case should not throw an error in
>>> rte_mempool_populate_virt(). Instead, the area that is too small should
>>> just be ignored.
>>>
>>> To fix this issue, change the return value of
>>> rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
>>> so it can be ignored by the caller. As this would be an API change, add
>>> a compat wrapper to keep the current API unchanged. The wrapper will be
>>> removed for 20.11.
>>>
>>> Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>>> ---
>>>
>>
>> The approach fixes the issue on my end, so
>>
>> Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>
>>> Is there a simple way to ensure that we won't forget to remove the
>>> wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
>>> it's not clear to me how.
>>>
>>
>> Yes, i'd like to do better than "ah shur we won't forget pinky swear".
>>
>> Can't we do this with ABI versioning? E.g.
>>
>> rte_populate_iova_v20() ... returns EINVAL
>>
>> rte_populate_iova_v21() ... returns ENOBUFS
>>
>> I'm pretty sure, even if it doesn't break, it will still be more likely to
>> not be forgotten because there's almost a guarantee that someone will grep
>> for symbol versioning macros across the codebase around 20.11 timeframe.
> 
> Without using symbol versioning, would this be ok too?
> 
>    int
>    rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>           rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>           void *opaque)
>    {
>           int ret;
> 
>           ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
> 
>    #if RTE_VERSION < RTE_VERSION_NUM(20, 11, 0, 0)
>           if (ret == -ENOBUFS)
>                   ret = -EINVAL;
>    #endif
> 
>           return ret;
>    }
> 
> 

Well it would certainly work :) it just makes it more likely that this 
will be missed.

How about, we leave your patch as is, and then you submit another patch 
marked for [20.11] and mark it as Deferred straight away? This is 
probably the best method to not forget that i can think of, if you're so 
averse to symbol versioning :D

> 
>> -- 
>> Thanks,
>> Anatoly
  
Bruce Richardson Jan. 9, 2020, 2:58 p.m. UTC | #6
On Thu, Jan 09, 2020 at 02:29:08PM +0000, Burakov, Anatoly wrote:
> On 09-Jan-20 2:23 PM, Olivier Matz wrote:
> > On Thu, Jan 09, 2020 at 01:52:41PM +0000, Burakov, Anatoly wrote:
> > > On 09-Jan-20 1:27 PM, Olivier Matz wrote:
> > > > To populate a mempool with a virtual area, the mempool code calls
> > > > rte_mempool_populate_iova() for each iova-contiguous area. It happens
> > > > (rarely) that this area is too small to store one object. In this case,
> > > > rte_mempool_populate_iova() returns an error, which is forwarded by
> > > > rte_mempool_populate_virt().
> > > > 
> > > > This case should not throw an error in
> > > > rte_mempool_populate_virt(). Instead, the area that is too small should
> > > > just be ignored.
> > > > 
> > > > To fix this issue, change the return value of
> > > > rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> > > > so it can be ignored by the caller. As this would be an API change, add
> > > > a compat wrapper to keep the current API unchanged. The wrapper will be
> > > > removed for 20.11.
> > > > 
> > > > Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> > > > Cc: stable@dpdk.org
> > > > 
> > > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > > ---
> > > > 
> > > 
> > > The approach fixes the issue on my end, so
> > > 
> > > Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > 
> > > > Is there a simple way to ensure that we won't forget to remove the
> > > > wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
> > > > it's not clear to me how.
> > > > 
> > > 
> > > Yes, i'd like to do better than "ah shur we won't forget pinky swear".
> > > 
> > > Can't we do this with ABI versioning? E.g.
> > > 
> > > rte_populate_iova_v20() ... returns EINVAL
> > > 
> > > rte_populate_iova_v21() ... returns ENOBUFS
> > > 
> > > I'm pretty sure, even if it doesn't break, it will still be more likely to
> > > not be forgotten because there's almost a guarantee that someone will grep
> > > for symbol versioning macros across the codebase around 20.11 timeframe.
> > 
> > Without using symbol versioning, would this be ok too?
> > 
> >    int
> >    rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >           rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> >           void *opaque)
> >    {
> >           int ret;
> > 
> >           ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
> > 
> >    #if RTE_VERSION < RTE_VERSION_NUM(20, 11, 0, 0)
> >           if (ret == -ENOBUFS)
> >                   ret = -EINVAL;
> >    #endif
> > 
> >           return ret;
> >    }
> > 
> > 
> 
> Well it would certainly work :) it just makes it more likely that this will
> be missed.
> 
Depends on your definition of "work". The difference vs having the ABI
versioned code is that any application linked against 20.02...20.08 will
have the old behaviour and have to change to use 20.11. Using proper
versioning, any newly linked apps immediately get the new behaviour and the
compatibility is provided only for those apps linked against 19.11.

Therefore, I think the ABI compatibility approach is still better, and
should not be that bad, given that I think all that needs to be done for
v20 is to call v21 and then convert an -ENOBUFS to -EINVAL as you do here.

__mempool_populate_iova_v20(....)
{
   ret = __mempool_populate_iova_v21(...);
   return ret == -ENOBUFS ? -EINVAL : ret;
}

Regards,
/Bruce
  

Patch

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index afa94b43e..b6e89d9a2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -86,3 +86,7 @@  Deprecation Notices
   to set new power environment if power environment was already initialized.
   In this case the function will return -1 unless the environment is unset first
   (using ``rte_power_unset_env``). Other function usage scenarios will not change.
+
+* mempool: starting from v20.11, rte_mempool_populate_iova() will
+  return -ENOBUFS instead of -EINVAL when there is not enough room to
+  store one object.
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 78d8eb941..bda361ce6 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -297,8 +297,8 @@  mempool_ops_alloc_once(struct rte_mempool *mp)
  * zone. Return the number of objects added, or a negative value
  * on error.
  */
-int
-rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+static int
+__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
@@ -332,7 +332,7 @@  rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
 
 	if (off > len) {
-		ret = -EINVAL;
+		ret = -ENOBUFS;
 		goto fail;
 	}
 
@@ -343,7 +343,7 @@  rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 
 	/* not enough room to store one object */
 	if (i == 0) {
-		ret = -EINVAL;
+		ret = -ENOBUFS;
 		goto fail;
 	}
 
@@ -356,6 +356,22 @@  rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	return ret;
 }
 
+/* Compat wrapper, to be removed when changing the API is allowed (v20.11). */
+int
+rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+	void *opaque)
+{
+	int ret;
+
+	ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
+					opaque);
+	if (ret == -ENOBUFS)
+		ret = -EINVAL;
+
+	return ret;
+}
+
 static rte_iova_t
 get_iova(void *addr)
 {
@@ -406,8 +422,10 @@  rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 				break;
 		}
 
-		ret = rte_mempool_populate_iova(mp, addr + off, iova,
+		ret = __rte_mempool_populate_iova(mp, addr + off, iova,
 			phys_len, free_cb, opaque);
+		if (ret == -ENOBUFS)
+			continue;
 		if (ret < 0)
 			goto fail;
 		/* no need to call the free callback for next chunks */
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index f81152af9..c08bb444f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1108,7 +1108,10 @@  rte_mempool_free(struct rte_mempool *mp);
  * @return
  *   The number of objects added on success.
  *   On error, the chunk is not added in the memory list of the
- *   mempool and a negative errno is returned.
+ *   mempool and a negative errno is returned:
+ *   (-ENOBUFS): not enough room in chunk for one object.
+ *   (-ENOSPC): mempool is already populated.
+ *   (-ENOMEM): allocation failure.
  */
 int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,