From patchwork Tue Jul 7 01:38:18 2020
X-Patchwork-Id: 73356
From: Simei Su <simei.su@intel.com>
To: qi.z.zhang@intel.com
Cc: dev@dpdk.org, wei.zhao1@intel.com, junyux.jiang@intel.com,
 nannan.lu@intel.com, Simei Su <simei.su@intel.com>
Date: Tue, 7 Jul 2020 09:38:18 +0800
Message-Id: <1594085898-138749-1-git-send-email-simei.su@intel.com>
Subject: [dpdk-dev] [PATCH] net/ice: fix GTPU/PPPoE packets with no hash value

During RSS initialization, profile overlap keeps GTPU_IPV4 packets from
hitting the GTPU_INNER_IPV4 profile, so they get no hash value. PPPoE
packets likewise get no hash value because no PPPoE profile exists. This
patch fixes both issues by pulling the GTPU_IPV4 profile into the inner
IPv4 group and creating the related PPPoE profiles at the same time.
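Note that the new rules only take effect for hash types the application
actually requests at configure time. For reference, a minimal sketch of
such a configuration (not part of this patch; the function name, port id,
and queue counts are illustrative):

#include <rte_ethdev.h>

/* Sketch only: request the RSS hash types covered by this patch so the
 * driver installs the corresponding GTPU/PPPoE rules during RSS init. */
static int
configure_rss(uint16_t port_id)
{
	struct rte_eth_conf conf = { 0 };

	conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
	/* Hash types handled below: plain IPv4/IPv6 plus UDP, TCP and
	 * SCTP over each. */
	conf.rx_adv_conf.rss_conf.rss_hf =
		ETH_RSS_IPV4 | ETH_RSS_IPV6 |
		ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
		ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
		ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP;

	/* One RX and one TX queue, illustrative only. */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}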
Fixes: 8cadc41294c6 ("net/ice: initialize and update RSS based on user config")

Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 82 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 0af9405..85b9eb4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2533,6 +2533,88 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(ERR, "%s SCTP_IPV6 rss flow fail %d",
 				    __func__, ret);
 	}
+
+	if (rss_hf & ETH_RSS_IPV4) {
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_FLOW_HASH_IPV4,
+				      ICE_FLOW_SEG_HDR_GTPU_IP, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s GTPU_IPV4 rss flow fail %d",
+				    __func__, ret);
+
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_FLOW_HASH_IPV4,
+				      ICE_FLOW_SEG_HDR_PPPOE, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s PPPoE_IPV4 rss flow fail %d",
+				    __func__, ret);
+	}
+
+	if (rss_hf & ETH_RSS_IPV6) {
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_FLOW_HASH_IPV6,
+				      ICE_FLOW_SEG_HDR_GTPU_IP, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s GTPU_IPV6 rss flow fail %d",
+				    __func__, ret);
+
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_FLOW_HASH_IPV6,
+				      ICE_FLOW_SEG_HDR_PPPOE, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s PPPoE_IPV6 rss flow fail %d",
+				    __func__, ret);
+	}
+
+	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_UDP_IPV4,
+				      ICE_FLOW_SEG_HDR_PPPOE, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s PPPoE_IPV4_UDP rss flow fail %d",
+				    __func__, ret);
+	}
+
+	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_UDP_IPV6,
+				      ICE_FLOW_SEG_HDR_PPPOE, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s PPPoE_IPV6_UDP rss flow fail %d",
+				    __func__, ret);
+	}
+
+	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_TCP_IPV4,
+				      ICE_FLOW_SEG_HDR_PPPOE, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s PPPoE_IPV4_TCP rss flow fail %d",
+				    __func__, ret);
+	}
+
+	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_TCP_IPV6,
+				      ICE_FLOW_SEG_HDR_PPPOE, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s PPPoE_IPV6_TCP rss flow fail %d",
+				    __func__, ret);
+	}
+
+	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_SCTP_IPV4,
+				      ICE_FLOW_SEG_HDR_PPPOE, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s PPPoE_IPV4_SCTP rss flow fail %d",
+				    __func__, ret);
+	}
+
+	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_SCTP_IPV6,
+				      ICE_FLOW_SEG_HDR_GTPU_IP, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s GTPU_IPV6_SCTP rss flow fail %d",
+				    __func__, ret);
+
+		ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_SCTP_IPV6,
+				      ICE_FLOW_SEG_HDR_PPPOE, 0);
+		if (ret)
+			PMD_DRV_LOG(ERR, "%s PPPoE_IPV6_SCTP rss flow fail %d",
+				    __func__, ret);
+	}
 }
 
 static int ice_init_rss(struct ice_pf *pf)
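The eight blocks added above all repeat the same add-then-log pattern. As a
side note (purely illustrative, not part of this patch: the wrapper name,
the parameter types, and the tag argument are assumptions), the duplication
could be folded into a small helper inside the driver:

/* Illustrative helper wrapping the repeated pattern from the diff:
 * add one RSS configuration on the VSI and log on failure. */
static void
ice_add_rss_cfg_wrap(struct ice_hw *hw, struct ice_vsi *vsi,
		     uint64_t hash_flds, uint32_t pkt_hdrs,
		     const char *tag)
{
	int ret = ice_add_rss_cfg(hw, vsi->idx, hash_flds, pkt_hdrs, 0);

	if (ret)
		PMD_DRV_LOG(ERR, "%s %s rss flow fail %d",
			    __func__, tag, ret);
}

Each block would then shrink to a single call such as
ice_add_rss_cfg_wrap(hw, vsi, ICE_FLOW_HASH_IPV4,
ICE_FLOW_SEG_HDR_GTPU_IP, "GTPU_IPV4").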