[v1] net/ice: update QinQ switch filter handling

Message ID 20210413051459.4195-1-haiyue.wang@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Qi Zhang
Series [v1] net/ice: update QinQ switch filter handling

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/travis-robot fail travis build: failed
ci/Performance-Testing fail build patch failure
ci/Intel-compilation fail Compilation issues

Commit Message

Wang, Haiyue April 13, 2021, 5:14 a.m. UTC
  The hardware outer/inner VLAN protocol types are now updated to map to
the new interface VLAN protocol types, so update the switch filter handling
to use the new VLAN protocol types when the rte_flow rule is a QinQ filter.

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
---
Depends-on: series-16315 ("base code update batch 2")
---
 drivers/net/ice/ice_switch_filter.c | 36 ++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 14 deletions(-)
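
For context, the rule itself is still expressed by the application as an
ETH / VLAN / VLAN pattern; only the driver-internal protocol type mapping
changes. Below is a minimal, hypothetical sketch of such a QinQ rule against
the rte_flow API of this DPDK era (the port id, TCI values, queue index and
helper name are illustrative, and error handling is trimmed):

#include <stdint.h>

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Hypothetical helper: install an ETH / VLAN / VLAN (QinQ) rule that steers
 * traffic with outer VLAN 100 and inner VLAN 200 to queue 4. */
static struct rte_flow *
create_qinq_rule(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_item_vlan outer_spec = { .tci = RTE_BE16(100) };
	struct rte_flow_item_vlan outer_mask = { .tci = RTE_BE16(0x0fff) };
	struct rte_flow_item_vlan inner_spec = { .tci = RTE_BE16(200) };
	struct rte_flow_item_vlan inner_mask = { .tci = RTE_BE16(0x0fff) };

	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN,	/* outer tag */
		  .spec = &outer_spec, .mask = &outer_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN,	/* inner tag */
		  .spec = &inner_spec, .mask = &inner_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	struct rte_flow_action_queue queue = { .index = 4 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	struct rte_flow_attr attr = { .ingress = 1 };

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

The equivalent testpmd command would be roughly:
flow create 0 ingress pattern eth / vlan tci is 100 / vlan tci is 200 / end actions queue index 4 / end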
  

Comments

Qi Zhang April 13, 2021, 7:24 a.m. UTC | #1
> -----Original Message-----
> From: Wang, Haiyue <haiyue.wang@intel.com>
> Sent: Tuesday, April 13, 2021 1:15 PM
> To: dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>; Yang,
> Qiming <qiming.yang@intel.com>
> Subject: [PATCH v1] net/ice: update QinQ switch filter handling
> 
> The hardware outer/inner VLAN protocol types are now updated to map to
> the new interface VLAN protocol types, so update the switch filter handling
> to use the new VLAN protocol types when the rte_flow rule is a QinQ filter.
> 
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>

Acked-by: Qi Zhang <qi.z.zhang@intel.com>

Applied to dpdk-next-net-intel.

Thanks
Qi
  

Patch

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 6f9e861d08..0bf3660677 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -389,12 +389,17 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	bool profile_rule = 0;
 	bool nvgre_valid = 0;
 	bool vxlan_valid = 0;
+	bool qinq_valid = 0;
 	bool ipv6_valid = 0;
 	bool ipv4_valid = 0;
 	bool udp_valid = 0;
 	bool tcp_valid = 0;
 	uint16_t j, t = 0;
 
+	if (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
+	    *tun_type == ICE_NON_TUN_QINQ)
+		qinq_valid = 1;
+
 	for (item = pattern; item->type !=
 			RTE_FLOW_ITEM_TYPE_END; item++) {
 		if (item->last) {
@@ -932,22 +937,25 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 
-			if (!outer_vlan_valid &&
-			    (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
-			     *tun_type == ICE_NON_TUN_QINQ))
-				outer_vlan_valid = 1;
-			else if (!inner_vlan_valid &&
-				 (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
-				  *tun_type == ICE_NON_TUN_QINQ))
-				inner_vlan_valid = 1;
-			else if (!inner_vlan_valid)
-				inner_vlan_valid = 1;
+			if (qinq_valid) {
+				if (!outer_vlan_valid)
+					outer_vlan_valid = 1;
+				else
+					inner_vlan_valid = 1;
+			}
 
 			if (vlan_spec && vlan_mask) {
-				if (outer_vlan_valid && !inner_vlan_valid) {
-					list[t].type = ICE_VLAN_EX;
-					input_set |= ICE_INSET_VLAN_OUTER;
-				} else if (inner_vlan_valid) {
+				if (qinq_valid) {
+					if (!inner_vlan_valid) {
+						list[t].type = ICE_VLAN_EX;
+						input_set |=
+							ICE_INSET_VLAN_OUTER;
+					} else {
+						list[t].type = ICE_VLAN_IN;
+						input_set |=
+							ICE_INSET_VLAN_INNER;
+					}
+				} else {
 					list[t].type = ICE_VLAN_OFOS;
 					input_set |= ICE_INSET_VLAN_INNER;
 				}