
[00/12] net/mlx5: add bonding configuration support

Message ID 1569398015-6027-1-git-send-email-viacheslavo@mellanox.com (mailing list archive)

Message

Slava Ovsiienko Sept. 25, 2019, 7:53 a.m. UTC
  Multiport Mellanox NICs may support bonding configurations internally.
Suppose there is a ConnectX-5 NIC with two physical ports; on the host
it presents two PCI physical functions:

- PF0, say with PCI address 0000:82:00.0 and net interface ens1f0
- PF1, say with PCI address 0000:82:00.1 and net interface ens1f1

Also, suppose the SR-IOV feature is enabled, switchdev mode is engaged,
and there is some set of virtual PCI functions with their representor interfaces.
The physical interfaces may be combined into a single bond interface,
supported directly by NIC HW/FW, with a standard script:

  modprobe bonding miimon=100 mode=4  # 100 ms link check interval, mode - LACP
  ip link set ens1f0 master bond0
  ip link set ens1f1 master bond0

The dedicated InfiniBand devices for the single ports are destroyed, a new
multiport InfiniBand device is created for the bond interface and all
representors of both PFs. A unified E-Switch is created as well,
and all representor ports belong to the same unified switch domain.

To use the created bond interface with a DPDK application, both slave
PCI devices must be specified (in the whitelist, if one is used):

  -w 82:00.0,representor=[0-4]
  -w 82:00.1,representor=[0-7]

Representor enumeration follows the VF enumeration in the same way
as for a single device. The two PCI devices will be probed, but eth ports
will be created only for one master device and for all representors.
These ports may reference different rte_pci_dev instances but share the
same switch domain ID.

The extra devargs specifying configurations must be compatible
(otherwise an error is raised on probing). For example, it is not
allowed to specify different values of the dv_flow_en parameter for
different PCI devices.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>

Viacheslav Ovsiienko (12):
  net/mlx5: move backing PCI device to private context
  net/mlx5: update PCI address retrieving routine
  net/mlx5: allocate device list explicitly
  net/mlx5: add VF LAG mode bonding device recognition
  net/mlx5: generate bonding device name
  net/mlx5: check the kernel support for VF LAG bonding
  net/mlx5: query vport index match mode and parameters
  net/mlx5: elaborate E-Switch port parameters query
  net/mlx5: update source and destination vport translations
  net/mlx5: extend switch domain searching range
  net/mlx5: update switch port ID in bonding configuration
  net/mlx5: check sibling device configurations mismatch

 drivers/net/mlx5/Makefile       |   5 +
 drivers/net/mlx5/meson.build    |   2 +
 drivers/net/mlx5/mlx5.c         | 359 +++++++++++++++++++++++++++++++++++++---
 drivers/net/mlx5/mlx5.h         |  23 ++-
 drivers/net/mlx5/mlx5_defs.h    |   4 +
 drivers/net/mlx5/mlx5_ethdev.c  | 128 +++++++-------
 drivers/net/mlx5/mlx5_flow_dv.c |  98 +++++++----
 drivers/net/mlx5/mlx5_prm.h     |   9 +-
 drivers/net/mlx5/mlx5_txq.c     |   2 +-
 9 files changed, 506 insertions(+), 124 deletions(-)
  

Comments

Matan Azrad Sept. 25, 2019, 10:29 a.m. UTC | #1
From: Viacheslav Ovsiienko
> [full cover letter quoted - snipped]
For all the series:
Acked-by: Matan Azrad <matan@mellanox.com>
  
Raslan Darawsheh Sept. 29, 2019, 11:47 a.m. UTC | #2
Hi,

> -----Original Message-----
> From: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> Sent: Wednesday, September 25, 2019 10:53 AM
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@mellanox.com>; Raslan Darawsheh
> <rasland@mellanox.com>
> Subject: [PATCH 00/12] net/mlx5: add bonding configuration support
> 
> [full cover letter quoted - snipped]


Series pushed to next-net-mlx,

Kindest regards,
Raslan Darawsheh