From patchwork Wed Sep 16 01:51:05 2020
X-Patchwork-Submitter: Junyu Jiang
X-Patchwork-Id: 77818
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Junyu Jiang
To: dev@dpdk.org
Cc: Jeff Guo, Beilei Xing, Junyu Jiang, stable@dpdk.org
Date: Wed, 16 Sep 2020 01:51:05 +0000
Message-Id: <20200916015105.39815-1-junyux.jiang@intel.com>
In-Reply-To: <20200910015426.3140-1-junyux.jiang@intel.com>
References: <20200910015426.3140-1-junyux.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v2] net/i40e: fix incorrect byte counters

This patch fixes the issue that the rx/tx byte counters overflow at the
48-bit limit of the hardware statistics registers, by extending the
counters to 64 bits in software.
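
The fix follows the usual pattern for widening a wrapping hardware counter
in software, which the hunks below apply to every rx/tx byte statistic:
remember the last 64-bit value returned, and when the new 48-bit register
reading is smaller than the low 48 bits of that value, the register must
have wrapped, so add one full 2^48 period and then restore the previously
accumulated high bits. A minimal standalone sketch of that pattern (the
helper name and BITS48_MASK constant are illustrative only, not part of
the driver):

#include <stdint.h>

#define BITS48_MASK ((1ULL << 48) - 1)  /* low 48 bits of a counter */

/*
 * Illustrative sketch: extend a 48-bit hardware byte counter to 64 bits.
 * "hw_bytes" is the current 48-bit register reading, "old_bytes" is the
 * previously returned 64-bit value.  If the new reading is smaller than
 * the low 48 bits of the old value, the register wrapped, so one full
 * 2^48 period is added before restoring the accumulated high bits.
 */
uint64_t
extend_48bit_counter(uint64_t hw_bytes, uint64_t old_bytes)
{
	uint64_t bytes = hw_bytes & BITS48_MASK;

	if ((old_bytes & BITS48_MASK) > bytes)
		bytes += 1ULL << 48;               /* wrap-around detected */
	bytes += old_bytes & ~BITS48_MASK;         /* carry earlier overflows */

	return bytes;
}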
Fixes: 4861cde46116 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Junyu Jiang
Tested-by: Yingya Han
Acked-by: Jeff Guo
---
 drivers/net/i40e/i40e_ethdev.c | 47 ++++++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_ethdev.h |  9 +++++++
 2 files changed, 56 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 563f21d9d..4d4ea9861 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3073,6 +3073,13 @@ i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	i40e_stat_update_48(hw, I40E_GLV_BPRCH(idx), I40E_GLV_BPRCL(idx),
 			    vsi->offset_loaded, &oes->rx_broadcast,
 			    &nes->rx_broadcast);
+	/* enlarge the limitation when rx_bytes overflowed */
+	if (vsi->offset_loaded) {
+		if (I40E_RXTX_BYTES_LOW(vsi->old_rx_bytes) > nes->rx_bytes)
+			nes->rx_bytes += (uint64_t)1 << I40E_48_BIT_WIDTH;
+		nes->rx_bytes += I40E_RXTX_BYTES_HIGH(vsi->old_rx_bytes);
+	}
+	vsi->old_rx_bytes = nes->rx_bytes;
 	/* exclude CRC bytes */
 	nes->rx_bytes -= (nes->rx_unicast + nes->rx_multicast +
 			  nes->rx_broadcast) * RTE_ETHER_CRC_LEN;
@@ -3099,6 +3106,13 @@ i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	/* GLV_TDPC not supported */
 	i40e_stat_update_32(hw, I40E_GLV_TEPC(idx), vsi->offset_loaded,
 			    &oes->tx_errors, &nes->tx_errors);
+	/* enlarge the limitation when tx_bytes overflowed */
+	if (vsi->offset_loaded) {
+		if (I40E_RXTX_BYTES_LOW(vsi->old_tx_bytes) > nes->tx_bytes)
+			nes->tx_bytes += (uint64_t)1 << I40E_48_BIT_WIDTH;
+		nes->tx_bytes += I40E_RXTX_BYTES_HIGH(vsi->old_tx_bytes);
+	}
+	vsi->old_tx_bytes = nes->tx_bytes;
 	vsi->offset_loaded = true;

 	PMD_DRV_LOG(DEBUG, "***************** VSI[%u] stats start *******************",
@@ -3171,6 +3185,24 @@ i40e_read_stats_registers(struct i40e_pf *pf, struct i40e_hw *hw)
 			    pf->offset_loaded,
 			    &pf->internal_stats_offset.tx_broadcast,
 			    &pf->internal_stats.tx_broadcast);
+	/* enlarge the limitation when internal rx/tx bytes overflowed */
+	if (pf->offset_loaded) {
+		if (I40E_RXTX_BYTES_LOW(pf->internal_old_rx_bytes) >
+		    pf->internal_stats.rx_bytes)
+			pf->internal_stats.rx_bytes +=
+				(uint64_t)1 << I40E_48_BIT_WIDTH;
+		pf->internal_stats.rx_bytes +=
+			I40E_RXTX_BYTES_HIGH(pf->internal_old_rx_bytes);
+
+		if (I40E_RXTX_BYTES_LOW(pf->internal_old_tx_bytes) >
+		    pf->internal_stats.tx_bytes)
+			pf->internal_stats.tx_bytes +=
+				(uint64_t)1 << I40E_48_BIT_WIDTH;
+		pf->internal_stats.tx_bytes +=
+			I40E_RXTX_BYTES_HIGH(pf->internal_old_tx_bytes);
+	}
+	pf->internal_old_rx_bytes = pf->internal_stats.rx_bytes;
+	pf->internal_old_tx_bytes = pf->internal_stats.tx_bytes;
 	/* exclude CRC size */
 	pf->internal_stats.rx_bytes -= (pf->internal_stats.rx_unicast +
@@ -3194,6 +3226,14 @@ i40e_read_stats_registers(struct i40e_pf *pf, struct i40e_hw *hw)
 			    I40E_GLPRT_BPRCL(hw->port),
 			    pf->offset_loaded, &os->eth.rx_broadcast,
 			    &ns->eth.rx_broadcast);
+	/* enlarge the limitation when rx_bytes overflowed */
+	if (pf->offset_loaded) {
+		if (I40E_RXTX_BYTES_LOW(pf->old_rx_bytes) > ns->eth.rx_bytes)
+			ns->eth.rx_bytes += (uint64_t)1 << I40E_48_BIT_WIDTH;
+		ns->eth.rx_bytes += I40E_RXTX_BYTES_HIGH(pf->old_rx_bytes);
+	}
+	pf->old_rx_bytes = ns->eth.rx_bytes;
+
 	/* Workaround: CRC size should not be included in byte statistics,
 	 * so subtract RTE_ETHER_CRC_LEN from the byte counter for each rx
 	 * packet.
@@ -3252,6 +3292,13 @@ i40e_read_stats_registers(struct i40e_pf *pf, struct i40e_hw *hw)
 			    I40E_GLPRT_BPTCL(hw->port),
 			    pf->offset_loaded, &os->eth.tx_broadcast,
 			    &ns->eth.tx_broadcast);
+	/* enlarge the limitation when tx_bytes overflowed */
+	if (pf->offset_loaded) {
+		if (I40E_RXTX_BYTES_LOW(pf->old_tx_bytes) > ns->eth.tx_bytes)
+			ns->eth.tx_bytes += (uint64_t)1 << I40E_48_BIT_WIDTH;
+		ns->eth.tx_bytes += I40E_RXTX_BYTES_HIGH(pf->old_tx_bytes);
+	}
+	pf->old_tx_bytes = ns->eth.tx_bytes;
 	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
 			     ns->eth.tx_broadcast) * RTE_ETHER_CRC_LEN;

diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 19f821829..5d17be1f0 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -282,6 +282,9 @@ struct rte_flow {
 #define I40E_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + I40E_VLAN_TAG_SIZE * 2)

+#define I40E_RXTX_BYTES_HIGH(bytes) ((bytes) & ~I40E_48_BIT_MASK)
+#define I40E_RXTX_BYTES_LOW(bytes) ((bytes) & I40E_48_BIT_MASK)
+
 struct i40e_adapter;
 struct rte_pci_driver;

@@ -399,6 +402,8 @@ struct i40e_vsi {
 	uint8_t vlan_anti_spoof_on; /* The VLAN anti-spoofing enabled */
 	uint8_t vlan_filter_on; /* The VLAN filter enabled */
 	struct i40e_bw_info bw_info; /* VSI bandwidth information */
+	uint64_t old_rx_bytes;
+	uint64_t old_tx_bytes;
 };

 struct pool_entry {
@@ -1156,6 +1161,10 @@ struct i40e_pf {
 	uint16_t switch_domain_id;

 	struct i40e_vf_msg_cfg vf_msg_cfg;
+	uint64_t old_rx_bytes;
+	uint64_t old_tx_bytes;
+	uint64_t internal_old_rx_bytes;
+	uint64_t internal_old_tx_bytes;
 };

 enum pending_msg {
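
As a usage note (not part of the patch), the new macro pair simply splits a
64-bit running total into the portion a 48-bit register can hold and the
overflow that has been accumulated in software. A small self-contained
check, with stand-in names for the driver's 48-bit width/mask constants:

#include <assert.h>
#include <stdint.h>

/* local stand-ins mirroring the driver's 48-bit width/mask constants */
#define WIDTH_48      48
#define MASK_48       ((1ULL << WIDTH_48) - 1)
#define BYTES_HIGH(b) ((b) & ~MASK_48)  /* overflow accumulated in software */
#define BYTES_LOW(b)  ((b) & MASK_48)   /* what the 48-bit register can hold */

int
main(void)
{
	/* a running total that has already wrapped the register once */
	uint64_t old_bytes = (1ULL << WIDTH_48) + 1000;

	assert(BYTES_HIGH(old_bytes) == (1ULL << WIDTH_48));
	assert(BYTES_LOW(old_bytes) == 1000);
	return 0;
}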