[v7,0/2] Add Tx queue mapping of aggregated ports

Message ID 20230217154747.12401-1-jiaweiw@nvidia.com (mailing list archive)

Message

Jiawei Wang Feb. 17, 2023, 3:47 p.m. UTC
  When multiple ports are aggregated into a single DPDK port
(for example: Linux bonding, DPDK bonding, failsafe, etc.),
we want to know which underlying port is used for Rx and Tx.

This series introduces the new ethdev API
rte_eth_dev_map_aggr_tx_affinity(), which maps a Tx queue
to an aggregated port of the DPDK port (specified with port_id).
The affinity value is the number of the aggregated port.
Value 0 means no affinity: traffic can be routed to any
aggregated port, which is the current default behavior.

The maximum affinity value is given by rte_eth_dev_count_aggr_ports().
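
For illustration only (this snippet is not part of the patches), a minimal
sketch of how an application might use the two new calls together; the
round-robin assignment, and the assumption that the mapping happens after
device/queue configuration and before rte_eth_dev_start(), are mine, based
on the v6 note below:

#include <rte_ethdev.h>

/* Sketch: spread the Tx queues of a DPDK port across its aggregated ports. */
static int
map_tx_queues_to_aggr_ports(uint16_t port_id, uint16_t nb_txq)
{
	/* Number of aggregated ports; 0 means the port is not aggregated. */
	int aggr_ports = rte_eth_dev_count_aggr_ports(port_id);
	uint16_t q;

	if (aggr_ports <= 0)
		return aggr_ports; /* 0: nothing to map, <0: error */

	for (q = 0; q < nb_txq; q++) {
		/* Valid affinity values are 1..aggr_ports; 0 means no affinity. */
		uint8_t affinity = (uint8_t)((q % aggr_ports) + 1);
		int ret = rte_eth_dev_map_aggr_tx_affinity(port_id, q, affinity);

		if (ret != 0)
			return ret;
	}
	return 0;
}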

The series also allows mapping an Rx queue to an aggregated port by using
a flow rule. The new item is called RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY.

By using the aggregated affinity as a matching item in the flow rule,
and setting the same affinity value via
rte_eth_dev_map_aggr_tx_affinity(), packets can be sent from
the same aggregated port as the one they were received on.
The affinity numbering starts from 1, so trying to match on
aggr_affinity 0 results in an error.
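
As a rough sketch of the Rx side (again not taken from the patches; the
item layout assumes the new struct rte_flow_item_aggr_affinity with a
single "affinity" field, and the queue action is just an example choice),
a rule steering traffic received on aggregated port 1 to Rx queue 0 could
look like:

#include <rte_flow.h>

/* Sketch: match packets received from aggregated port 1 and steer them to
 * Rx queue 0, pairing with a Tx queue mapped to affinity 1 as above. */
static struct rte_flow *
create_aggr_affinity_rule(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_aggr_affinity affinity_spec = { .affinity = 1 };
	struct rte_flow_item pattern[] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY,
			.spec = &affinity_spec,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

The testpmd equivalent, assuming the pattern syntax added by this series,
would be along the lines of:
flow create 0 ingress pattern aggr_affinity affinity is 1 / end actions queue index 0 / end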

RFC: http://patches.dpdk.org/project/dpdk/cover/20221221102934.13822-1-jiaweiw@nvidia.com/

v7:
* Remove the -ENOTSUP return value since no need anymore.
* Use the rte_eth_dev as argument in the internal function.

v6:
* Update the commit titles.
* Return 0 by default if dev_ops.count_aggr_ports is not defined.
* Add dev_configure and affinity value checks before calling map_aggr_tx_affinity.
* Update the rte_eth_dev_count_aggr_ports description.

v5:
* Add rte_eth_dev_map_aggr_tx_affinity() to map a Tx queue to an aggregated port.
* Add rte_eth_dev_count_aggr_ports() to get the number of aggregated ports.
* Update the flow item RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY.

v4:
* Rebase on the latest code
* Update the description of the new field
* Update the release notes
* Reword the commit log for clarity

v3:
* Update exception rule
* Update the commit log
* Add the description for PHY affinity and numbering definition
* Add the number of physical ports into device info
* Change the patch order 

v2: Update based on the comments

Jiawei Wang (2):
  ethdev: add Tx queue mapping of aggregated ports
  ethdev: add flow matching of aggregated port

 app/test-pmd/cmdline.c                      | 92 +++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 28 +++++++
 doc/guides/prog_guide/rte_flow.rst          |  8 ++
 doc/guides/rel_notes/release_23_03.rst      |  8 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 18 ++++
 lib/ethdev/ethdev_driver.h                  | 37 +++++++++
 lib/ethdev/ethdev_trace.h                   | 17 ++++
 lib/ethdev/ethdev_trace_points.c            |  6 ++
 lib/ethdev/rte_ethdev.c                     | 72 ++++++++++++++++
 lib/ethdev/rte_ethdev.h                     | 49 +++++++++++
 lib/ethdev/rte_flow.c                       |  1 +
 lib/ethdev/rte_flow.h                       | 35 ++++++++
 lib/ethdev/version.map                      |  2 +
 13 files changed, 373 insertions(+)
  

Comments

Ferruh Yigit Feb. 17, 2023, 4:45 p.m. UTC | #1
On 2/17/2023 3:47 PM, Jiawei Wang wrote:
> When multiple ports are aggregated into a single DPDK port
> (for example: Linux bonding, DPDK bonding, failsafe, etc.),
> we want to know which underlying port is used for Rx and Tx.
> 
> [...]
> 
> Jiawei Wang (2):
>   ethdev: add Tx queue mapping of aggregated ports
>   ethdev: add flow matching of aggregated port


Series applied to dpdk-next-net/main, thanks.