[v3,0/2] support single flow dump on MLX5 PMD

Message ID 1618485564-128533-1-git-send-email-haifeil@nvidia.com (mailing list archive)


Haifei Luo April 15, 2021, 11:19 a.m. UTC
  Dumping information for all flows is already supported; it is
also useful to dump a single flow.

Add single flow dump support on MLX5 PMD.

Modify the API mlx5_flow_dev_dump to support this. Modify mlx5_socket
accordingly, since one extra argument, flow_ptr, is added.

The data structure sent to the DPDK application from the utility
triggering the flow dumps should be packed, and its endianness must be
specified.

The native host endianness can be used, since all exchange happens
within the same host: we use sendmsg ancillary data to share the file
handle, so a remote approach is not applicable and no inter-host
communication happens.

The message structure to dump one/all flow(s):
    struct mlx5_flow_dump_req {
        uint32_t port_id;
        uint64_t flow_ptr;
    } __rte_packed;

If flow_ptr is 0, all flows for the specified port will be dumped.

Depends-on: series=16367  ("single flow dump")

V2: Rebase to fix apply patch failure.
V3: Fix comments. Modify the data structure sent to the DPDK application.

Haifei Luo (2):
  common/mlx5: add mlx5 APIs for single flow dump feature
  net/mlx5: add mlx5 APIs for single flow dump feature

 drivers/common/mlx5/linux/meson.build |  6 +++--
 drivers/common/mlx5/linux/mlx5_glue.c | 13 ++++++++++
 drivers/common/mlx5/linux/mlx5_glue.h |  1 +
 drivers/common/mlx5/mlx5_devx_cmds.c  | 14 +++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h  |  2 ++
 drivers/common/mlx5/version.map       |  1 +
 drivers/net/mlx5/linux/mlx5_os.h      |  3 +++
 drivers/net/mlx5/linux/mlx5_socket.c  | 47 +++++++++++++++++++++++++++--------
 drivers/net/mlx5/mlx5.h               | 10 ++++++++
 drivers/net/mlx5/mlx5_flow.c          | 30 ++++++++++++++++++++--
 10 files changed, 113 insertions(+), 14 deletions(-)