
[v2,0/2] mlx5/net: hint PMD not to inline packet

Message ID 1580300467-7716-1-git-send-email-viacheslavo@mellanox.com

Message

Slava Ovsiienko Jan. 29, 2020, 12:21 p.m. UTC
Some PMDs inline the mbuf data buffer directly into the device transmit
descriptor. This saves the PCIe header overhead imposed when the device
reads the data by buffer pointer via DMA. For some devices inlining is
essential to provide the full bandwidth.

However, there are cases where such inlining is inefficient, for example
when the data buffer resides in another device's memory (such as a GPU or
a storage device). An attempt to inline such a buffer results in high PCIe
overhead for reading and copying the data from the remote device to host
memory.

To support a mixed traffic pattern (some buffers from local host memory, some
buffers from other devices) with high bandwidth, a hint flag is introduced in
the mbuf.

The application hints the PMD whether or not it should try to inline the
given mbuf data buffer. The PMD makes a best effort to act upon this
request.

The hint flag RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME is dynamic: the
application registers it with rte_mbuf_dynflag_register(). The flag is
purely vendor specific and is declared in the PMD-specific header
rte_pmd_mlx5.h, which is intended for applications written against this
PMD.
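
For illustration, the registration step might look like the sketch below
(hedged: the helper name and error handling are ours; only
rte_mbuf_dynflag_register() and the name macro come from the APIs named
above):

#include <rte_errno.h>
#include <rte_mbuf_dyn.h>
#include <rte_pmd_mlx5.h>

/* Bit to set in mbuf->ol_flags, filled once at registration time. */
static uint64_t no_inline_flag;

static int
register_no_inline_hint(void)
{
        const struct rte_mbuf_dynflag desc = {
                .name = RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME,
        };
        int bitnum = rte_mbuf_dynflag_register(&desc);

        if (bitnum < 0)
                return -rte_errno; /* no free flag bit or invalid name */
        no_inline_flag = 1ULL << bitnum;
        return 0;
}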

To query the specific flags supported at runtime, a private routine is
introduced:

int rte_pmd_mlx5_get_dyn_flag_names(
        uint16_t port,
        char *names[],
        uint16_t n)

It returns, in the names array, the specific flags currently supported
(given the present hardware and configuration).
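
A hedged usage sketch follows; it assumes the routine fills the
caller-provided name buffers and returns the number of supported flags
(the helper name and the MAX_DYN_FLAGS bound are ours):

#include <stdbool.h>
#include <string.h>
#include <rte_mbuf_dyn.h>
#include <rte_pmd_mlx5.h>

#define MAX_DYN_FLAGS 8 /* arbitrary upper bound for this sketch */

static bool
port_supports_no_inline(uint16_t port)
{
        char buf[MAX_DYN_FLAGS][RTE_MBUF_DYN_NAMESIZE];
        char *names[MAX_DYN_FLAGS];
        int i, nb;

        for (i = 0; i < MAX_DYN_FLAGS; i++)
                names[i] = buf[i];
        nb = rte_pmd_mlx5_get_dyn_flag_names(port, names, MAX_DYN_FLAGS);
        for (i = 0; i < nb; i++)
                if (strcmp(names[i], RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME) == 0)
                        return true;
        return false;
}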

The "not inline hint" feature operating flow is the following one:
- application start
- probe the devices, ports are created
- query the port capabilities
- if port supporting the feature is found
  - register dynamic flag RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME
- application starts the ports
- on dev_start() PMD checks whether the feature flag is registered and
  enables the feature support in datapath
- application might set this flag in ol_flags field of mbuf in the packets
  being sent and PMD will handle ones appropriately.
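
A sketch of the per-packet use on the Tx side, assuming no_inline_flag
was obtained from the registration sketch above and that
pkt_data_is_remote() is a hypothetical application helper detecting
buffers residing in another device's memory:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
tx_burst_with_hint(uint16_t port, uint16_t queue,
                   struct rte_mbuf **pkts, uint16_t nb)
{
        uint16_t i;

        for (i = 0; i < nb; i++)
                if (pkt_data_is_remote(pkts[i])) /* hypothetical helper */
                        pkts[i]->ol_flags |= no_inline_flag;
        return rte_eth_tx_burst(port, queue, pkts, nb);
}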

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>

---
RFC: https://patches.dpdk.org/patch/61348/

This patchset combines the parts of the following:

v1/testpmd: http://patches.dpdk.org/cover/64541/
v1/mlx5: http://patches.dpdk.org/patch/64622/

---
Ori Kam (1):
  net/mlx5: add fine grain dynamic flag support

Viacheslav Ovsiienko (1):
  net/mlx5: update Tx datapath to support no inline hint

 drivers/net/mlx5/mlx5.c                   |  20 ++++++
 drivers/net/mlx5/mlx5_rxtx.c              | 106 +++++++++++++++++++++++++-----
 drivers/net/mlx5/mlx5_rxtx.h              |   3 +
 drivers/net/mlx5/mlx5_trigger.c           |   8 +++
 drivers/net/mlx5/rte_pmd_mlx5.h           |  35 ++++++++++
 drivers/net/mlx5/rte_pmd_mlx5_version.map |   7 ++
 6 files changed, 163 insertions(+), 16 deletions(-)
 create mode 100644 drivers/net/mlx5/rte_pmd_mlx5.h
  

Comments

Raslan Darawsheh Jan. 30, 2020, 1:52 p.m. UTC | #1
Hi,

> -----Original Message-----
> From: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> Sent: Wednesday, January 29, 2020 2:21 PM
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@mellanox.com>; Raslan Darawsheh
> <rasland@mellanox.com>; Ori Kam <orika@mellanox.com>; Shahaf Shuler
> <shahafs@mellanox.com>; thomas@mellanox.net;
> olivier.matz@6wind.com; ferruh.yigit@intel.com
> Subject: [PATCH v2 0/2] mlx5/net: hint PMD not to inline packet
> 
> [quoted cover letter and diffstat snipped; identical to the message above]

Series applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh