[RFC,1/5] ethdev: add port affinity match item

Message ID 20221221102934.13822-2-jiaweiw@nvidia.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: add new port affinity item and affinity in Tx queue API

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/loongarch-compilation warning apply patch failure

Commit Message

Jiawei Wang Dec. 21, 2022, 10:29 a.m. UTC
  When multiple hardware ports are connected to a single DPDK port (mhpsdp),
there is currently no information indicating which hardware port a given
packet was received on.

This patch introduces a new port affinity item in the rte_flow API; the
port affinity value reflects the physical port affinity of the received
packets.

When the port affinity is used as a matching item in a flow and the same
affinity is set on a Tx queue, packets can be sent from the same hardware
port on which they were received.

This patch also adds testpmd command-line support for matching the new item:
	flow create 0 ingress group 0 pattern port_affinity affinity is 1 /
	end actions queue index 0 / end

The above command creates a flow on a single DPDK port, matches packets
coming from the first physical port (assuming affinity 1 stands for the
first port) and redirects them to Rx queue 0.
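
As an illustration only (not part of this patch), a minimal C sketch of how an
application could install the equivalent rule through the existing rte_flow API;
the port id, affinity value and queue index below are arbitrary placeholders:

    #include <rte_flow.h>

    /* Sketch: on DPDK port @port_id, match packets received with hardware
     * port affinity 1 and steer them to Rx queue 0.
     */
    static int
    match_affinity_to_queue(uint16_t port_id)
    {
            struct rte_flow_error error;
            struct rte_flow_attr attr = { .ingress = 1 };
            struct rte_flow_item_port_affinity spec = { .affinity = 1 };
            struct rte_flow_item_port_affinity mask = { .affinity = 0xff };
            struct rte_flow_action_queue queue = { .index = 0 };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_PORT_AFFINITY,
                      .spec = &spec, .mask = &mask },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            return rte_flow_create(port_id, &attr, pattern, actions,
                                   &error) == NULL ? -1 : 0;
    }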

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 29 +++++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst          |  7 +++++
 doc/guides/rel_notes/release_22_03.rst      |  5 ++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  4 +++
 lib/ethdev/rte_flow.c                       |  1 +
 lib/ethdev/rte_flow.h                       | 28 ++++++++++++++++++++
 6 files changed, 74 insertions(+)
  

Comments

Ori Kam Jan. 11, 2023, 4:41 p.m. UTC | #1
Hi Jiawei,

> -----Original Message-----
> From: Jiawei(Jonny) Wang <jiaweiw@nvidia.com>
> Sent: Wednesday, 21 December 2022 12:30
> 
> For the multiple hardware ports connect to a single DPDK port (mhpsdp),
> currently there is no information to indicate the packet belongs to
> which hardware port.
> 
> This patch introduces a new port affinity item in rte flow API, and
> the port affinity value reflects the physical port affinity of the
> received packets.
> 
> While uses the port affinity as a matching item in the flow, and sets the
> same affinity on the tx queue, then the packet can be sent from the same
> hardware port with received.
> 
> This patch also adds the testpmd command line to match the new item:
> 	flow create 0 ingress group 0 pattern port_affinity affinity is 1 /
> 	end actions queue index 0 / end
> 
> The above command means that creates a flow on a single DPDK port and
> matches the packet from the first physical port (assumes the affinity 1
> stands for the first port) and redirects these packets into RxQ 0.
> 
> Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori
  
Thomas Monjalon Jan. 18, 2023, 11:07 a.m. UTC | #2
21/12/2022 11:29, Jiawei Wang:
> +	/**
> +	 * Matches on the physical port affinity of the received packet.
> +	 *
> +	 * See struct rte_flow_item_port_affinity.
> +	 */
> +	RTE_FLOW_ITEM_TYPE_PORT_AFFINITY,
>  };

I'm not sure about the word "affinity".
I think you want to match on a physical port.
It could be a global physical port id or
an index in the group of physical ports connected to a single DPDK port.
In the first case, the name of the item could be RTE_FLOW_ITEM_TYPE_PHY_PORT;
in the second case, the name could be RTE_FLOW_ITEM_TYPE_MHPSDP_PHY_PORT,
"MHPSDP" meaning "Multiple Hardware Ports - Single DPDK Port".
We could replace "PHY" with "HW" as well.

Note that we cannot use the new item RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
because we are in a case where multiple hardware ports are merged
in a single software represented port.


[...]
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ITEM_TYPE_PORT_AFFINITY
> + *
> + * For the multiple hardware ports connect to a single DPDK port (mhpsdp),
> + * use this item to match the hardware port affinity of the packets.
> + */
> +struct rte_flow_item_port_affinity {
> +	uint8_t affinity; /**< port affinity value. */
> +};

We need to define how the port numbering is done.
Is it driver-dependent?
Does it start at 0? etc...
  
Jiawei Wang Jan. 18, 2023, 2:41 p.m. UTC | #3
Hi,

> 
> 21/12/2022 11:29, Jiawei Wang:
> > +	/**
> > +	 * Matches on the physical port affinity of the received packet.
> > +	 *
> > +	 * See struct rte_flow_item_port_affinity.
> > +	 */
> > +	RTE_FLOW_ITEM_TYPE_PORT_AFFINITY,
> >  };
> 
> I'm not sure about the word "affinity".
> I think you want to match on a physical port.
> It could be a global physical port id or an index in the group of physical ports
> connected to a single DPDK port.
> In first case, the name of the item could be RTE_FLOW_ITEM_TYPE_PHY_PORT,
> in the second case, the name could be
> RTE_FLOW_ITEM_TYPE_MHPSDP_PHY_PORT,
> "MHPSDP" meaning "Multiple Hardware Ports - Single DPDK Port".
> We could replace "PHY" with "HW" as well.
>

Since DPDK only probes/attaches the single DPDK port, the first case does not seem to apply here.
Here, 'affinity' stands for the packet's association with the actual physical port.
 
> Note that we cannot use the new item
> RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
> because we are in a case where multiple hardware ports are merged in a single
> software represented port.
> 
> 
> [...]
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior notice
> > + *
> > + * RTE_FLOW_ITEM_TYPE_PORT_AFFINITY
> > + *
> > + * For the multiple hardware ports connect to a single DPDK port
> > +(mhpsdp),
> > + * use this item to match the hardware port affinity of the packets.
> > + */
> > +struct rte_flow_item_port_affinity {
> > +	uint8_t affinity; /**< port affinity value. */ };
> 
> We need to define how the port numbering is done.
> Is it driver-dependent?
> Does it start at 0? etc...
> 
> 

Users can define any value they want; one use case is that a packet should be received and
sent on the same hardware port, in which case the same 'affinity' value is set in the flow rule
and in the Tx queue configuration.

The flow behavior is driver dependent.
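
A purely hypothetical sketch of that pairing (the Tx-side API is only introduced in
patch 2/5 of this series; the 'affinity' field of struct rte_eth_txconf used below is
assumed for illustration and may not match the actual patch):

    /* Hypothetical sketch: give Tx queue 0 the same affinity value (1)
     * that the Rx flow rule matches on, so traffic received from that
     * hardware port is also transmitted from it.
     * NOTE: the 'affinity' field of struct rte_eth_txconf is assumed
     * here for illustration; the real API comes from patch 2/5.
     */
    #include <rte_ethdev.h>

    struct rte_eth_dev_info dev_info;
    struct rte_eth_txconf txconf;

    rte_eth_dev_info_get(port_id, &dev_info);
    txconf = dev_info.default_txconf;
    txconf.affinity = 1; /* hypothetical field, same value as the flow rule */
    rte_eth_tx_queue_setup(port_id, 0 /* queue id */, 512 /* descriptors */,
                           rte_eth_dev_socket_id(port_id), &txconf);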

Thanks.
  
Thomas Monjalon Jan. 18, 2023, 4:26 p.m. UTC | #4
18/01/2023 15:41, Jiawei(Jonny) Wang:
> Hi,
> 
> > 
> > 21/12/2022 11:29, Jiawei Wang:
> > > +	/**
> > > +	 * Matches on the physical port affinity of the received packet.
> > > +	 *
> > > +	 * See struct rte_flow_item_port_affinity.
> > > +	 */
> > > +	RTE_FLOW_ITEM_TYPE_PORT_AFFINITY,
> > >  };
> > 
> > I'm not sure about the word "affinity".
> > I think you want to match on a physical port.
> > It could be a global physical port id or an index in the group of physical ports
> > connected to a single DPDK port.
> > In first case, the name of the item could be RTE_FLOW_ITEM_TYPE_PHY_PORT,
> > in the second case, the name could be
> > RTE_FLOW_ITEM_TYPE_MHPSDP_PHY_PORT,
> > "MHPSDP" meaning "Multiple Hardware Ports - Single DPDK Port".
> > We could replace "PHY" with "HW" as well.
> >
> 
> Since DPDK only probe/attach the single port, seems first case does not meet this case.
> Here, 'affinity' stands for the packet association with actual physical port.

I think it is more than affinity because the packet is effectively
received from this port.
And the other concern is that this name does not give any clue
that we are talking about multiple ports merged in a single one.

> > Note that we cannot use the new item
> > RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
> > because we are in a case where multiple hardware ports are merged in a single
> > software represented port.
> > 
> > 
> > [...]
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this structure may change without prior notice
> > > + *
> > > + * RTE_FLOW_ITEM_TYPE_PORT_AFFINITY
> > > + *
> > > + * For the multiple hardware ports connect to a single DPDK port
> > > +(mhpsdp),
> > > + * use this item to match the hardware port affinity of the packets.
> > > + */
> > > +struct rte_flow_item_port_affinity {
> > > +	uint8_t affinity; /**< port affinity value. */ };
> > 
> > We need to define how the port numbering is done.
> > Is it driver-dependent?
> > Does it start at 0? etc...
> 
> User can define any value they want; one use case is the packet could be received and
> sent to same port, then they can set the same 'affinity' value in flow and queue configuration.

No, it does not work.
If the ports are numbered 1 and 2 and the user thinks they are 0 and 1,
port 2 won't be matched at all.

> The flow behavior is driver dependent.
> 
> Thanks.
  
Jiawei Wang Jan. 24, 2023, 2 p.m. UTC | #5
Hi,


> > >
> > > 21/12/2022 11:29, Jiawei Wang:
> > > > +	/**
> > > > +	 * Matches on the physical port affinity of the received packet.
> > > > +	 *
> > > > +	 * See struct rte_flow_item_port_affinity.
> > > > +	 */
> > > > +	RTE_FLOW_ITEM_TYPE_PORT_AFFINITY,
> > > >  };
> > >
> > > I'm not sure about the word "affinity".
> > > I think you want to match on a physical port.
> > > It could be a global physical port id or an index in the group of
> > > physical ports connected to a single DPDK port.
> > > In first case, the name of the item could be
> > > RTE_FLOW_ITEM_TYPE_PHY_PORT, in the second case, the name could be
> > > RTE_FLOW_ITEM_TYPE_MHPSDP_PHY_PORT,
> > > "MHPSDP" meaning "Multiple Hardware Ports - Single DPDK Port".
> > > We could replace "PHY" with "HW" as well.
> > >
> >
> > Since DPDK only probe/attach the single port, seems first case does not meet
> this case.
> > Here, 'affinity' stands for the packet association with actual physical port.
> 
> I think it is more than affinity because the packet is effectively received from
> this port.
> And the other concern is that this name does not give any clue that we are
> talking about multiple ports merged in a single one.
> 

Would RTE_FLOW_ITEM_TYPE_MHPSDP_HW_PORT be better? @Ori Kam WDYT?

> > > Note that we cannot use the new item
> > > RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
> > > because we are in a case where multiple hardware ports are merged in
> > > a single software represented port.
> > >
> > >
> > > [...]
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this structure may change without prior
> > > > +notice
> > > > + *
> > > > + * RTE_FLOW_ITEM_TYPE_PORT_AFFINITY
> > > > + *
> > > > + * For the multiple hardware ports connect to a single DPDK port
> > > > +(mhpsdp),
> > > > + * use this item to match the hardware port affinity of the packets.
> > > > + */
> > > > +struct rte_flow_item_port_affinity {
> > > > +	uint8_t affinity; /**< port affinity value. */ };
> > >
> > > We need to define how the port numbering is done.
> > > Is it driver-dependent?
> > > Does it start at 0? etc...
> >
> > User can define any value they want; one use case is the packet could
> > be received and sent to same port, then they can set the same 'affinity' value
> in flow and queue configuration.
> 
> No it does not work.
> If ports are numbered 1 and 2, and user thinks it is 0 and 1, the port 2 won't be
> matched at all.
> 

OK, I can update the documentation: affinity 0 means "no affinity" on the Tx side, and
matching on affinity 0 then results in an error.
For the case above, the user should match on affinity 1 and 2.
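
With that convention (and reusing the syntax added by this patch; queue indexes are arbitrary),
the two hardware ports would be matched with e.g.:

	flow create 0 ingress group 0 pattern port_affinity affinity is 1 / end actions queue index 0 / end
	flow create 0 ingress group 0 pattern port_affinity affinity is 2 / end actions queue index 1 / end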

> > The flow behavior is driver dependent.
> >
> > Thanks.
> 
> 
> 
>
  

Patch

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 426585387f..3bc19e112a 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -514,6 +514,8 @@  enum index {
 	ITEM_QUOTA,
 	ITEM_QUOTA_STATE,
 	ITEM_QUOTA_STATE_NAME,
+	ITEM_PORT_AFFINITY,
+	ITEM_PORT_AFFINITY_VALUE,
 
 	/* Validate/create actions. */
 	ACTIONS,
@@ -1490,6 +1492,7 @@  static const enum index next_item[] = {
 	ITEM_PPP,
 	ITEM_METER,
 	ITEM_QUOTA,
+	ITEM_PORT_AFFINITY,
 	END_SET,
 	ZERO,
 };
@@ -1976,6 +1979,12 @@  static const enum index item_quota[] = {
 	ZERO,
 };
 
+static const enum index item_port_affinity[] = {
+	ITEM_PORT_AFFINITY_VALUE,
+	ITEM_NEXT,
+	ZERO,
+};
+
 static const enum index next_action[] = {
 	ACTION_END,
 	ACTION_VOID,
@@ -7239,6 +7248,23 @@  static const struct token token_list[] = {
 				ARGS_ENTRY(struct buffer, port)),
 		.call = parse_mp,
 	},
+	[ITEM_PORT_AFFINITY] = {
+		.name = "port_affinity",
+		.help = "match on the physical port affinity of the"
+			" received packet.",
+		.priv = PRIV_ITEM(PORT_AFFINITY,
+				  sizeof(struct rte_flow_item_port_affinity)),
+		.next = NEXT(item_port_affinity),
+		.call = parse_vc,
+	},
+	[ITEM_PORT_AFFINITY_VALUE] = {
+		.name = "affinity",
+		.help = "port affinity value",
+		.next = NEXT(item_port_affinity, NEXT_ENTRY(COMMON_UNSIGNED),
+			     item_param),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_port_affinity,
+					affinity)),
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -12329,6 +12355,9 @@  flow_item_default_mask(const struct rte_flow_item *item)
 	case RTE_FLOW_ITEM_TYPE_METER_COLOR:
 		mask = &rte_flow_item_meter_color_mask;
 		break;
+	case RTE_FLOW_ITEM_TYPE_PORT_AFFINITY:
+		mask = &rte_flow_item_port_affinity_mask;
+		break;
 	default:
 		break;
 	}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 59932e82a6..dbf0e9a41f 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1558,6 +1558,13 @@  Matches Color Marker set by a Meter.
 
 - ``color``: Metering color marker.
 
+Item: ``PORT_AFFINITY``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches on the physical port affinity of the received packet.
+
+- ``affinity``: Physical port affinity.
+
 Actions
 ~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 0923707cb8..8acd3174f6 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -58,6 +58,11 @@  New Features
   Added ``gre_option`` item in rte_flow to support checksum/key/sequence
   matching in GRE packets.
 
+* **Added rte_flow support for matching Port Affinity fields.**
+
+  Added ``port_affinity`` item in rte_flow to support hardware port affinity of
+  the packets.
+
 * **Added new RSS offload types for L2TPv2 in RSS flow.**
 
   Added ``RTE_ETH_RSS_L2TPV2`` macro so that he L2TPv2 session ID field can be used as
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index f497bba26d..c0ace56c1f 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3722,6 +3722,10 @@  This section lists supported pattern items and their attributes, if any.
 
   - ``color {value}``: meter color value (green/yellow/red).
 
+- ``port_affinity``: match port affinity.
+
+  - ``affinity {value}``: port affinity value.
+
 - ``send_to_kernel``: send packets to kernel.
 
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 07b9ea48a9..645f392b24 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -162,6 +162,7 @@  static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(PPP, sizeof(struct rte_flow_item_ppp)),
 	MK_FLOW_ITEM(METER_COLOR, sizeof(struct rte_flow_item_meter_color)),
 	MK_FLOW_ITEM(QUOTA, sizeof(struct rte_flow_item_quota)),
+	MK_FLOW_ITEM(PORT_AFFINITY, sizeof(struct rte_flow_item_port_affinity)),
 };
 
 /** Generate flow_action[] entry. */
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 21f7caf540..7907b7c0c2 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -667,6 +667,13 @@  enum rte_flow_item_type {
 	 * See struct rte_flow_item_sft.
 	 */
 	RTE_FLOW_ITEM_TYPE_SFT,
+
+	/**
+	 * Matches on the physical port affinity of the received packet.
+	 *
+	 * See struct rte_flow_item_port_affinity.
+	 */
+	RTE_FLOW_ITEM_TYPE_PORT_AFFINITY,
 };
 
 /**
@@ -2227,6 +2234,27 @@  static const struct rte_flow_item_meter_color rte_flow_item_meter_color_mask = {
 };
 #endif
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_PORT_AFFINITY
+ *
+ * For the multiple hardware ports connect to a single DPDK port (mhpsdp),
+ * use this item to match the hardware port affinity of the packets.
+ */
+struct rte_flow_item_port_affinity {
+	uint8_t affinity; /**< port affinity value. */
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_PORT_AFFINITY. */
+#ifndef __cplusplus
+static const struct rte_flow_item_port_affinity
+rte_flow_item_port_affinity_mask = {
+	.affinity = 0xff,
+};
+#endif
+
 /**
  * Action types.
  *