[v1] mempool: fix some errors in html api

Message ID 20230703061743.10576-1-rma.ma@jaguarmicro.com (mailing list archive)
State New
Delegated to: Thomas Monjalon
Series [v1] mempool: fix some errors in html api

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/github-robot: build success github build: passed
ci/intel-Functional success Functional PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-abi-testing success Testing PASS
ci/iol-aarch-unit-testing success Testing PASS
ci/iol-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS

Commit Message

Rma Ma July 3, 2023, 6:17 a.m. UTC
  This patch fixes some incorrect descriptions of
return values in the mempool API, which carry over into the HTML API documentation.

Signed-off-by: Rma Ma <rma.ma@jaguarmicro.com>
---
 lib/mempool/rte_mempool.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
  

Comments

Rma Ma Aug. 2, 2023, 2:20 a.m. UTC | #1
Hi,

I know this is a simple patch with no impact on DPDK functionality.
However, the incorrect documentation can mislead application programmers.
If a program tests the return value of these interfaces against -ENOENT,
the "pool empty" case is never detected and the program misbehaves.
It also affects the presentation of the API documentation:
https://doc.dpdk.org/api/rte__mempool_8h.html#a0d326354d53ef5068d86a8b7d9ec2d61
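
To illustrate (a minimal sketch, assuming the default ring-based mempool
driver; the helper below is hypothetical):

#include <errno.h>
#include <stdio.h>
#include <rte_mempool.h>

/* Hypothetical helper: the doxygen text promises -ENOENT on an empty
 * pool, but the default ring-based driver actually returns -ENOBUFS,
 * so the branch below is dead code and the empty-pool case is missed. */
static int
get_burst(struct rte_mempool *mp, void **objs, unsigned int n)
{
	int ret = rte_mempool_get_bulk(mp, objs, n);

	if (ret == -ENOENT) {
		/* Intended "pool empty" handling; never reached with the
		 * ring driver, which returns -ENOBUFS instead. */
		fprintf(stderr, "mempool %s is empty\n", mp->name);
		return ret;
	}
	return ret;
}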

I'm not sure if this needs to be fixed or not.



> Subject: [PATCH v1] mempool: fix some errors in html api
>
> This patch fixes some incorrect descriptions of return values in the mempool
> API, which carry over into the HTML API documentation.
>
> Signed-off-by: Rma Ma <rma.ma@jaguarmicro.com>
> ---
>  lib/mempool/rte_mempool.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)

Best wishes,
Rma
  
Morten Brørup Aug. 2, 2023, 8:58 a.m. UTC | #2
+CC various mempool driver maintainers

> From: Rma Ma [mailto:rma.ma@jaguarmicro.com]
> Sent: Monday, 3 July 2023 08.18
> 
> This patch fixes some incorrect descriptions of
> return values in the mempool API, which carry over into the HTML API documentation.
> 
> Signed-off-by: Rma Ma <rma.ma@jaguarmicro.com>
> ---
>  lib/mempool/rte_mempool.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 160975a7e7..d4d707533a 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1610,7 +1610,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   * Get several objects from the mempool.
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1624,7 +1624,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   *   A pointer to a mempool cache structure. May be NULL if not needed.
>   * @return
>   *   - 0: Success; objects taken.
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
> @@ -1646,7 +1646,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   * mempool creation time (see flags).
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1658,7 +1658,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   *   The number of objects to get from the mempool to obj_table.
>   * @return
>   *   - 0: Success; objects taken
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int
> n)
> @@ -1677,7 +1677,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void
> **obj_table, unsigned int n)
>   * mempool creation (see flags).
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1687,7 +1687,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void
> **obj_table, unsigned int n)
>   *   A pointer to a void * pointer (object) that will be filled.
>   * @return
>   *   - 0: Success; objects taken.
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_get(struct rte_mempool *mp, void **obj_p)
> --
> 2.17.1

Unfortunately, it is not that simple...

Here is the list of where mempool drivers are registered:
https://elixir.bootlin.com/dpdk/v23.07/C/ident/RTE_MEMPOOL_REGISTER_OPS

Some of the mempool drivers return -ENOBUFS, e.g. the ring driver and the stack driver:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/ring/rte_mempool_ring.c#L145
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/ring/rte_mempool_ring.c#L48
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/stack/rte_mempool_stack.c#L78
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/stack/rte_mempool_stack.c#L59

However, I found one mempool driver (Marvell cnxk) that returns -ENOENT:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/cnxk/cn10k_hwpool_ops.c#L261
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/cnxk/cn10k_hwpool_ops.c#L69

And one mempool driver (Cavium OCTEON TX) returns -ENOMEM:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/octeontx/rte_mempool_octeontx.c#L194
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/octeontx/rte_mempool_octeontx.c#L111

One mempool driver (NXP dpaa) returns -ENOBUFS, or (if requesting too many objects) -1:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L351
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L225
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L257

And one mempool driver (NXP dpaa2) returns -ENOBUFS, or in rare cases -ENOENT or -1:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L471
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L373
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L336
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L342


The root cause for this misery is the general lack of documentation of callbacks in DPDK (not just in the mempool library!), which leaves the callback implementers unaware of what to return. E.g. the mempool driver enqueue and dequeue callback types have no descriptions of their return values:
https://elixir.bootlin.com/dpdk/v23.07/source/lib/mempool/rte_mempool.h#L467
https://elixir.bootlin.com/dpdk/v23.07/source/lib/mempool/rte_mempool.h#L473

So in theory, the mempool drivers are free to return any value... Initially, I didn't think any mempool driver would return -1 and set rte_errno, but apparently some do return -1 (without actually setting rte_errno)!

On a techboard meeting not long ago, Tyler mentioned the lack of consistency in return value conventions as a general annoyance. Now, having looked at these mempool drivers, I realize that the situation is much worse than I would have imagined in my worst nightmare!


So, how do we want to fix this:

1. Specify the allowed return values in the documentation for the mempool library callback types, and require the mempool drivers to follow the updated specification?
2. Update mempool API documentation to list the many currently possible error return values (-ENOBUFS, -ENOENT, -ENOMEM, and -EPERM (=-1))?
3. Update mempool API documentation to say something along the lines of "negative return value indicates error; rte_errno is not set if -1"?
4. Switch to a general convention of returning -1 and setting rte_errno in all DPDK APIs, starting with the mempool library? However, this would still require a documented list of possible rte_errno values set by the functions - essentially the problem would remain the same: "possible return values" vs. "possible rte_errno values".

Personally, I am in strong favor of any option that tightens the API specification and treats non-conformance as bugs.
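
To make option 1 concrete, here is a rough sketch of what documented return
values for the dequeue callback type could look like; the typedef matches
rte_mempool.h, but the doxygen text is only my proposal, not existing
documentation:

/**
 * Dequeue n objects from the external pool.
 *
 * @param mp
 *   Pointer to the mempool.
 * @param obj_table
 *   Array where pointers to the dequeued objects are written.
 * @param n
 *   Number of objects to dequeue.
 * @return
 *   - 0: Success; exactly n objects were dequeued.
 *   - -ENOBUFS: Not enough entries in the pool; no object is dequeued.
 *     (Proposed as the only allowed error value; anything else,
 *     e.g. -ENOENT, -ENOMEM or -1, would then be a driver bug.)
 */
typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
		void **obj_table, unsigned int n);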
  
Rma Ma Aug. 2, 2023, 9:46 a.m. UTC | #3
> So, how do we want to fix this:
>
> 1. Specify the allowed return values in the documentation for the mempool
> library callback types, and require the mempool drivers to follow the updated
> specification?
> 2. Update mempool API documentation to list the many currently possible error
> return values (-ENOBUFS, -ENOENT, -ENOMEM, and -EPERM (=-1))?
> 3. Update mempool API documentation to say something along the lines of
> "negative return value indicates error; rte_errno is not set if -1"?
> 4. Switch to a general convention of returning -1 and setting rte_errno in all
> DPDK APIs, starting with the mempool library? However, this would still require a
> documented list of possible rte_errno values set by the functions - essentially
> the problem would remain the same: "possible return values" vs. "possible
> rte_errno values".
>
> Personally, I am in strong favor of any option that tightens the API specification
> and treats non-conformance as bugs.

Thanks for your detailed reply!

Couldn't agree more with your third option, which is the most general approach.
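
As a sketch, under that convention an application only needs to test the sign
of the return value, without matching a specific errno code (the helper name
is hypothetical):

#include <rte_mempool.h>

/* Portable under option 3: any negative return means "no objects
 * retrieved", whichever errno value the mempool driver chose. */
static int
fetch_objects(struct rte_mempool *mp, void **objs, unsigned int n)
{
	if (rte_mempool_get_bulk(mp, objs, n) < 0)
		return -1;	/* pool empty or driver error */
	return 0;
}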



Best wishes,
Rma
  

Patch

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 160975a7e7..d4d707533a 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1610,7 +1610,7 @@  rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
  * Get several objects from the mempool.
  *
  * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
  * the local cache and common pool are empty, even if cache from other
  * lcores are full.
  *
@@ -1624,7 +1624,7 @@  rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
  *   A pointer to a mempool cache structure. May be NULL if not needed.
  * @return
  *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
@@ -1646,7 +1646,7 @@  rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  * mempool creation time (see flags).
  *
  * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
  * the local cache and common pool are empty, even if cache from other
  * lcores are full.
  *
@@ -1658,7 +1658,7 @@  rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  *   The number of objects to get from the mempool to obj_table.
  * @return
  *   - 0: Success; objects taken
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
@@ -1677,7 +1677,7 @@  rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
  * mempool creation (see flags).
  *
  * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
  * the local cache and common pool are empty, even if cache from other
  * lcores are full.
  *
@@ -1687,7 +1687,7 @@  rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
  *   A pointer to a void * pointer (object) that will be filled.
  * @return
  *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_get(struct rte_mempool *mp, void **obj_p)