[v5,2/2] ethdev: add Aggregated affinity match item
Commit Message
When multiple ports are aggregated into a single DPDK port
(for example: Linux bonding, DPDK bonding, failsafe, etc.),
we want to know which underlying port is used for Rx and Tx.
This patch allows mapping an Rx queue to an aggregated port by using
a flow rule. The new item is called RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY.
When the aggregated affinity is used as a matching item in a flow rule,
and the same affinity value is set by calling
rte_eth_dev_map_aggr_tx_affinity(), packets can be sent from
the same port they were received on.
The affinity numbering starts from 1, so trying to match on
aggr_affinity 0 results in an error.
Add the testpmd command line to match the new item:
flow create 0 ingress group 0 pattern aggr_affinity affinity is 1 /
end actions queue index 0 / end
The above command creates a flow rule on a single DPDK port that
matches packets coming from the first physical port and redirects
them to Rx queue 0.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 28 +++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 9 ++++++
doc/guides/rel_notes/release_23_03.rst | 1 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 +++
lib/ethdev/rte_flow.c | 1 +
lib/ethdev/rte_flow.h | 35 +++++++++++++++++++++
6 files changed, 78 insertions(+)
Comments
For the title, I suggest
ethdev: add flow matching of aggregated port
14/02/2023 16:48, Jiawei Wang:
> When multiple ports are aggregated into a single DPDK port,
> (example: Linux bonding, DPDK bonding, failsafe, etc.),
> we want to know which port is used for Rx and Tx.
>
> This patch allows mapping an Rx queue to an aggregated port by using
> a flow rule. The new item is called RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY.
>
> When the aggregated affinity is used as a matching item in a flow rule,
> and the same affinity value is set by calling
> rte_eth_dev_map_aggr_tx_affinity(), packets can be sent from
> the same port they were received on.
> The affinity numbering starts from 1, so trying to match on
> aggr_affinity 0 results in an error.
>
> Add the testpmd command line to match the new item:
> flow create 0 ingress group 0 pattern aggr_affinity affinity is 1 /
> end actions queue index 0 / end
>
> The above command creates a flow rule on a single DPDK port that
> matches packets coming from the first physical port and redirects
> them to Rx queue 0.
>
> Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Hi,
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Friday, February 17, 2023 1:46 AM
> To: Jiawei(Jonny) Wang <jiaweiw@nvidia.com>
> Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> andrew.rybchenko@oktetlabs.ru; Aman Singh <aman.deep.singh@intel.com>;
> Yuying Zhang <yuying.zhang@intel.com>; Ferruh Yigit <ferruh.yigit@amd.com>;
> dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: Re: [PATCH v5 2/2] ethdev: add Aggregated affinity match item
>
> For the title, I suggest
> ethdev: add flow matching of aggregated port
>
> 14/02/2023 16:48, Jiawei Wang:
> > When multiple ports are aggregated into a single DPDK port,
> > (example: Linux bonding, DPDK bonding, failsafe, etc.), we want to
> > know which port is used for Rx and Tx.
> >
> > This patch allows mapping an Rx queue to an aggregated port by using
> > a flow rule. The new item is called RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY.
> >
> > When the aggregated affinity is used as a matching item in a flow
> > rule, and the same affinity value is set by calling
> > rte_eth_dev_map_aggr_tx_affinity(), packets can be sent from
> > the same port they were received on.
> > The affinity numbering starts from 1, so trying to match on
> > aggr_affinity 0 results in an error.
> >
> > Add the testpmd command line to match the new item:
> > flow create 0 ingress group 0 pattern aggr_affinity affinity is 1 /
> > end actions queue index 0 / end
> >
> > The above command creates a flow rule on a single DPDK port that
> > matches packets coming from the first physical port and redirects
> > them to Rx queue 0.
> >
> > Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>
OK, update the title next patch, thanks for Ack.
@@ -481,6 +481,8 @@ enum index {
ITEM_METER,
ITEM_METER_COLOR,
ITEM_METER_COLOR_NAME,
+ ITEM_AGGR_AFFINITY,
+ ITEM_AGGR_AFFINITY_VALUE,
/* Validate/create actions. */
ACTIONS,
@@ -1403,6 +1405,7 @@ static const enum index next_item[] = {
ITEM_L2TPV2,
ITEM_PPP,
ITEM_METER,
+ ITEM_AGGR_AFFINITY,
END_SET,
ZERO,
};
@@ -1892,6 +1895,12 @@ static const enum index item_meter[] = {
ZERO,
};
+static const enum index item_aggr_affinity[] = {
+ ITEM_AGGR_AFFINITY_VALUE,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -6694,6 +6703,22 @@ static const struct token token_list[] = {
ARGS_ENTRY(struct buffer, port)),
.call = parse_mp,
},
+ [ITEM_AGGR_AFFINITY] = {
+ .name = "aggr_affinity",
+ .help = "match on the aggregated port receiving the packets",
+ .priv = PRIV_ITEM(AGGR_AFFINITY,
+ sizeof(struct rte_flow_item_aggr_affinity)),
+ .next = NEXT(item_aggr_affinity),
+ .call = parse_vc,
+ },
+ [ITEM_AGGR_AFFINITY_VALUE] = {
+ .name = "affinity",
+ .help = "aggregated affinity value",
+ .next = NEXT(item_aggr_affinity, NEXT_ENTRY(COMMON_UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_aggr_affinity,
+ affinity)),
+ },
};
/** Remove and return last entry from argument stack. */
@@ -11424,6 +11449,9 @@ flow_item_default_mask(const struct rte_flow_item *item)
case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
mask = &ipv6_routing_ext_default_mask;
break;
+ case RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY:
+ mask = &rte_flow_item_aggr_affinity_mask;
+ break;
default:
break;
}
@@ -1536,6 +1536,15 @@ Matches IPv6 routing extension header.
- ``type``: IPv6 routing extension header type.
- ``segments_left``: How many IPv6 destination addresses carries on.
+
+Item: ``AGGR_AFFINITY``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches on the aggregated port of the received packet.
+In case of multiple aggregated ports, the affinity numbering starts from 1.
+
+- ``affinity``: Aggregated affinity.
+
Actions
~~~~~~~
@@ -74,6 +74,7 @@ New Features
to get the number of aggregated ports.
* Introduced new function ``rte_eth_dev_map_aggr_tx_affinity()``
to map a Tx queue with an aggregated port of the DPDK port.
+ * Added Rx affinity flow matching of an aggregated port.
* **Added rte_flow support for matching IPv6 routing extension header fields.**
@@ -3775,6 +3775,10 @@ This section lists supported pattern items and their attributes, if any.
- ``color {value}``: meter color value (green/yellow/red).
+- ``aggr_affinity``: match aggregated port.
+
+ - ``affinity {value}``: aggregated port (starts from 1).
+
- ``send_to_kernel``: send packets to kernel.
@@ -162,6 +162,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
MK_FLOW_ITEM(PPP, sizeof(struct rte_flow_item_ppp)),
MK_FLOW_ITEM(METER_COLOR, sizeof(struct rte_flow_item_meter_color)),
MK_FLOW_ITEM(IPV6_ROUTING_EXT, sizeof(struct rte_flow_item_ipv6_routing_ext)),
+ MK_FLOW_ITEM(AGGR_AFFINITY, sizeof(struct rte_flow_item_aggr_affinity)),
};
/** Generate flow_action[] entry. */
@@ -656,6 +656,15 @@ enum rte_flow_item_type {
* @see struct rte_flow_item_icmp6_echo.
*/
RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REPLY,
+
+ /**
+ * Matches on the aggregated port of the received packet.
+ * Used in case multiple ports are aggregated to a single DPDK port.
+ * First port is number 1.
+ *
+ * @see struct rte_flow_item_aggr_affinity.
+ */
+ RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY,
};
/**
@@ -2187,6 +2196,32 @@ static const struct rte_flow_item_meter_color rte_flow_item_meter_color_mask = {
};
#endif
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY
+ *
+ * For multiple ports aggregated to a single DPDK port,
+ * match the aggregated port receiving the packets.
+ */
+struct rte_flow_item_aggr_affinity {
+ /**
+ * An aggregated port receiving the packets.
+ * Numbering starts from 1.
+ * Number of aggregated ports is reported by rte_eth_dev_count_aggr_ports().
+ */
+ uint8_t affinity;
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY. */
+#ifndef __cplusplus
+static const struct rte_flow_item_aggr_affinity
+rte_flow_item_aggr_affinity_mask = {
+ .affinity = 0xff,
+};
+#endif
+
/**
* Action types.
*