From patchwork Thu May 27 06:18:46 2021
X-Patchwork-Submitter: Somnath Kotur <somnath.kotur@broadcom.com>
X-Patchwork-Id: 93459
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Somnath Kotur <somnath.kotur@broadcom.com>
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Ajit Khaparde <ajit.khaparde@broadcom.com>
Date: Thu, 27 May 2021 11:48:46 +0530
Message-Id: <20210527061846.11803-1-somnath.kotur@broadcom.com>
Subject: [dpdk-dev] [PATCH 1/2] net/bnxt: detect bad opaque in Rx completion

There is a rare hardware bug that can cause a bad opaque value in the
Rx or TPA start completion. When this happens, the hardware may have
used the same buffer twice for two Rx packets. In addition, the driver
might also crash later by using the bad opaque as an index into the
ring.

The Rx opaque value is predictable and always monotonically increasing.
The workaround is to keep track of the expected next opaque value and
compare it against the one returned by hardware during Rx and TPA start
completions. If they miscompare, log the error, discard the completion,
schedule a ring reset, and move on to the next one.

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
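[For illustration only, not part of the patch: a minimal standalone
sketch of the opaque check described above. The names rx_ring_state,
rx_completion_ok and RING_SIZE are invented for the example; the real
logic lives in bnxt_rx_pkt(), bnxt_tpa_start() and
bnxt_sched_ring_reset() in the diff below.]

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 256u			/* power of two, like an Rx ring */
#define RING_IDX(x) ((x) & (RING_SIZE - 1))

struct completion {
	uint32_t opaque;		/* buffer index echoed back by HW */
};

struct rx_ring_state {
	uint32_t next_cons;		/* next opaque value we expect */
	bool in_reset;			/* ring reset already scheduled */
};

/* Returns true if the completion is safe to use as a ring index. */
static bool rx_completion_ok(struct rx_ring_state *st,
			     const struct completion *cmpl)
{
	if (st->in_reset)
		return false;		/* discard until the ring is reset */

	if (cmpl->opaque != st->next_cons) {
		/* Bad opaque: log it, discard, schedule a ring reset */
		fprintf(stderr, "RX cons %" PRIx32 " != expected cons %" PRIx32 "\n",
			cmpl->opaque, st->next_cons);
		st->in_reset = true;
		return false;
	}

	/* Opaque values advance monotonically, modulo the ring size */
	st->next_cons = RING_IDX(st->next_cons + 1);
	return true;
}

int main(void)
{
	struct rx_ring_state st = { .next_cons = 0, .in_reset = false };
	struct completion good = { .opaque = 0 };
	struct completion bad = { .opaque = 7 };  /* HW replayed a buffer */

	printf("good: %d\n", rx_completion_ok(&st, &good));	/* 1 */
	printf("bad:  %d\n", rx_completion_ok(&st, &bad));	/* 0, reset */
	return 0;
}

Because the opaque advances by exactly one per consumed descriptor, a
replayed or corrupted completion miscompares immediately and is dropped
before it can be used to index the ring.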
---
 drivers/net/bnxt/bnxt_hwrm.c |  19 +++++++
 drivers/net/bnxt/bnxt_hwrm.h |   1 +
 drivers/net/bnxt/bnxt_rxq.h  |   1 +
 drivers/net/bnxt/bnxt_rxr.c  | 101 ++++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/bnxt_rxr.h  |   1 +
 5 files changed, 121 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 931ecea77c..3d04b93d88 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -2711,6 +2711,25 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	bp->grp_info[queue_index].cp_fw_ring_id = INVALID_HW_RING_ID;
 }
 
+int bnxt_hwrm_rx_ring_reset(struct bnxt *bp, int queue_index)
+{
+	int rc;
+	struct hwrm_ring_reset_input req = {.req_type = 0 };
+	struct hwrm_ring_reset_output *resp = bp->hwrm_cmd_resp_addr;
+
+	HWRM_PREP(&req, HWRM_RING_RESET, BNXT_USE_CHIMP_MB);
+
+	req.ring_type = HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP;
+	req.ring_id = rte_cpu_to_le_16(bp->grp_info[queue_index].fw_grp_id);
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 static int
 bnxt_free_all_hwrm_rings(struct bnxt *bp)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 90aff0c2de..a7271b7876 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -301,4 +301,5 @@ int bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(struct bnxt *bp);
 int bnxt_hwrm_fw_echo_reply(struct bnxt *bp, uint32_t echo_req_data1,
 			    uint32_t echo_req_data2);
 int bnxt_hwrm_poll_ver_get(struct bnxt *bp);
+int bnxt_hwrm_rx_ring_reset(struct bnxt *bp, int queue_index);
 #endif
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index fd8c69fc80..42bd8e7ab7 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -30,6 +30,7 @@ struct bnxt_rx_queue {
 	uint8_t			rx_deferred_start; /* not in global dev start */
 	uint8_t			rx_started; /* RX queue is started */
 	uint8_t			drop_en; /* Drop when rx desc not available. */
+	uint8_t			in_reset; /* Rx ring is scheduled for reset */
 	struct bnxt		*bp;
 	int			index;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 2ef4115ef9..63c8661669 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -10,6 +10,7 @@
 #include <rte_byteorder.h>
 #include <rte_malloc.h>
 #include <rte_memory.h>
+#include <rte_alarm.h>
 
 #include "bnxt.h"
 #include "bnxt_reps.h"
@@ -17,9 +18,7 @@
 #include "bnxt_rxr.h"
 #include "bnxt_rxq.h"
 #include "hsi_struct_def_dpdk.h"
-#ifdef RTE_LIBRTE_IEEE1588
 #include "bnxt_hwrm.h"
-#endif
 
 #include <bnxt_tf_common.h>
 #include <ulp_mark_mgr.h>
@@ -134,6 +133,51 @@ struct rte_mbuf *bnxt_consume_rx_buf(struct bnxt_rx_ring_info *rxr,
 	return mbuf;
 }
 
+static void bnxt_rx_ring_reset(void *arg)
+{
+	struct bnxt *bp = arg;
+	int i, rc = 0;
+	struct bnxt_rx_queue *rxq;
+
+
+	for (i = 0; i < (int)bp->rx_nr_rings; i++) {
+		struct bnxt_rx_ring_info *rxr;
+
+		rxq = bp->rx_queues[i];
+		if (!rxq || !rxq->in_reset)
+			continue;
+
+		rxr = rxq->rx_ring;
+		/* Disable and flush TPA before resetting the RX ring */
+		if (rxr->tpa_info)
+			bnxt_hwrm_vnic_tpa_cfg(bp, rxq->vnic, false);
+		rc = bnxt_hwrm_rx_ring_reset(bp, i);
+		if (rc) {
+			PMD_DRV_LOG(ERR, "Rx ring%d reset failed\n", i);
+			continue;
+		}
+
+		bnxt_rx_queue_release_mbufs(rxq);
+		rxr->rx_raw_prod = 0;
+		rxr->ag_raw_prod = 0;
+		rxr->rx_next_cons = 0;
+		bnxt_init_one_rx_ring(rxq);
+		bnxt_db_write(&rxr->rx_db, rxr->rx_raw_prod);
+		bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod);
+		if (rxr->tpa_info)
+			bnxt_hwrm_vnic_tpa_cfg(bp, rxq->vnic, true);
+
+		rxq->in_reset = 0;
+	}
+}
+
+
+static void bnxt_sched_ring_reset(struct bnxt_rx_queue *rxq)
+{
+	rxq->in_reset = 1;
+	rte_eal_alarm_set(1, bnxt_rx_ring_reset, (void *)rxq->bp);
+}
+
 static void bnxt_tpa_get_metadata(struct bnxt *bp,
 				  struct bnxt_tpa_info *tpa_info,
 				  struct rx_tpa_start_cmpl *tpa_start,
@@ -195,6 +239,12 @@ static void bnxt_tpa_start(struct bnxt_rx_queue *rxq,
 	data_cons = tpa_start->opaque;
 	tpa_info = &rxr->tpa_info[agg_id];
+	if (unlikely(data_cons != rxr->rx_next_cons)) {
+		PMD_DRV_LOG(ERR, "TPA cons %x, expected cons %x\n",
+			    data_cons, rxr->rx_next_cons);
+		bnxt_sched_ring_reset(rxq);
+		return;
+	}
 
 	mbuf = bnxt_consume_rx_buf(rxr, data_cons);
@@ -233,6 +283,9 @@ static void bnxt_tpa_start(struct bnxt_rx_queue *rxq,
 	/* recycle next mbuf */
 	data_cons = RING_NEXT(data_cons);
 	bnxt_reuse_rx_mbuf(rxr, bnxt_consume_rx_buf(rxr, data_cons));
+
+	rxr->rx_next_cons = RING_IDX(rxr->rx_ring_struct,
+				     RING_NEXT(data_cons));
 }
 
 static int bnxt_agg_bufs_valid(struct bnxt_cp_ring_info *cpr,
@@ -330,6 +383,34 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq,
 	return 0;
 }
 
+static int bnxt_discard_rx(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+			   uint32_t *raw_cons, void *cmp)
+{
+	struct rx_pkt_cmpl *rxcmp = cmp;
+	uint32_t tmp_raw_cons = *raw_cons;
+	uint8_t cmp_type, agg_bufs = 0;
+
+	cmp_type = CMP_TYPE(rxcmp);
+
+	if (cmp_type == CMPL_BASE_TYPE_RX_L2) {
+		agg_bufs = BNXT_RX_L2_AGG_BUFS(rxcmp);
+	} else if (cmp_type == RX_TPA_END_CMPL_TYPE_RX_TPA_END) {
+		struct rx_tpa_end_cmpl *tpa_end = cmp;
+
+		if (BNXT_CHIP_P5(bp))
+			return 0;
+
+		agg_bufs = BNXT_TPA_END_AGG_BUFS(tpa_end);
+	}
+
+	if (agg_bufs) {
+		if (!bnxt_agg_bufs_valid(cpr, agg_bufs, tmp_raw_cons))
+			return -EBUSY;
+	}
+	*raw_cons = tmp_raw_cons;
+	return 0;
+}
+
 static inline struct rte_mbuf *bnxt_tpa_end(
 		struct bnxt_rx_queue *rxq,
 		uint32_t *raw_cp_cons,
@@ -344,6 +425,13 @@ static inline struct rte_mbuf *bnxt_tpa_end(
 	uint8_t payload_offset;
 	struct bnxt_tpa_info *tpa_info;
 
+	if (unlikely(rxq->in_reset)) {
+		PMD_DRV_LOG(ERR, "rxq->in_reset: raw_cp_cons:%d\n",
+			    *raw_cp_cons);
+		bnxt_discard_rx(rxq->bp, cpr, raw_cp_cons, tpa_end);
+		return NULL;
+	}
+
 	if (BNXT_CHIP_P5(rxq->bp)) {
 		struct rx_tpa_v2_end_cmpl *th_tpa_end;
 		struct rx_tpa_v2_end_cmpl_hi *th_tpa_end1;
@@ -839,6 +927,14 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	raw_prod = rxr->rx_raw_prod;
 
 	cons = rxcmp->opaque;
+	if (unlikely(cons != rxr->rx_next_cons)) {
+		bnxt_discard_rx(bp, cpr, &tmp_raw_cons, rxcmp);
+		PMD_DRV_LOG(ERR, "RX cons %x != expected cons %x\n",
+			    cons, rxr->rx_next_cons);
+		bnxt_sched_ring_reset(rxq);
+		rc = -EBUSY;
+		goto next_rx;
+	}
 	mbuf = bnxt_consume_rx_buf(rxr, cons);
 	if (mbuf == NULL)
 		return -EBUSY;
@@ -911,6 +1007,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		goto rx;
 	}
 	rxr->rx_raw_prod = raw_prod;
+	rxr->rx_next_cons = RING_IDX(rxr->rx_ring_struct, RING_NEXT(cons));
 
 	if (BNXT_TRUFLOW_EN(bp) && (BNXT_VF_IS_TRUSTED(bp) || BNXT_PF(bp)) &&
 	    vfr_flag) {
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index b43256e03e..7b7756313e 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -66,6 +66,7 @@ struct bnxt_rx_ring_info {
 	uint16_t		rx_raw_prod;
 	uint16_t		ag_raw_prod;
 	uint16_t		rx_cons; /* Needed for representor */
+	uint16_t		rx_next_cons;
 	struct bnxt_db_info	rx_db;
 	struct bnxt_db_info	ag_db;

From patchwork Thu May 27 06:19:01 2021
X-Patchwork-Submitter: Somnath Kotur <somnath.kotur@broadcom.com>
X-Patchwork-Id: 93460
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Somnath Kotur <somnath.kotur@broadcom.com>
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Ajit Khaparde <ajit.khaparde@broadcom.com>
Date: Thu, 27 May 2021 11:49:01 +0530
Message-Id: <20210527061901.11859-1-somnath.kotur@broadcom.com>
Subject: [dpdk-dev] [PATCH 2/2] net/bnxt: workaround for spurious zero counter values in Thor

There is a hardware bug that can result in certain stats being reported
as zero. Work around this by ignoring a stat that reads as zero when
the previously stored snapshot of the same stat is non-zero.

This bug mainly manifests in the output of func_qstats: firmware
aggregates each ring's stat values to produce the per-function stats,
and if one of them is zero, the per-function stat value ends up lower
than the previous snapshot, which shows up as a zero PPS value in
testpmd. Eliminate the func_qstats invocation and aggregate the
per-ring stat values in the driver itself to derive the func_qstats
output, after accounting for any spurious zero stat values.

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
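[For illustration only, not part of the patch: a minimal standalone
sketch of the snapshot filter described above. update_prev_stat()
mirrors the bnxt_update_prev_stat() helper added by this patch; the
sample counter reads are made up.]

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void update_prev_stat(uint64_t *cntr, uint64_t *prev_cntr)
{
	/* A zero read after a non-zero snapshot is treated as the HW
	 * bug: keep reporting the previous snapshot rather than letting
	 * the aggregated counter step backwards.
	 */
	if (*prev_cntr && *cntr == 0)
		*cntr = *prev_cntr;
	else
		*prev_cntr = *cntr;
}

int main(void)
{
	uint64_t prev = 0;
	uint64_t reads[] = { 100, 250, 0 /* spurious zero */, 300 };
	unsigned int i;

	for (i = 0; i < 4; i++) {
		uint64_t cur = reads[i];

		update_prev_stat(&cur, &prev);
		printf("read %" PRIu64 " -> reported %" PRIu64 "\n",
		       reads[i], cur);
	}
	return 0;	/* reports 100, 250, 250, 300 */
}

A counter that is genuinely zero is still reported as zero as long as
its previous snapshot was also zero; only a zero that follows a
non-zero snapshot is treated as spurious.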
---
 drivers/net/bnxt/bnxt.h        |  45 +++++++++++
 drivers/net/bnxt/bnxt_ethdev.c |  14 ++++
 drivers/net/bnxt/bnxt_hwrm.c   | 106 +++++++++++++++++++++----
 drivers/net/bnxt/bnxt_hwrm.h   |   2 +
 drivers/net/bnxt/bnxt_stats.c  | 137 +++++++++++++++++++++++++++++++--
 5 files changed, 280 insertions(+), 24 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index db67bff127..f3108b0c0d 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -609,6 +609,49 @@ struct bnxt_flow_stat_info {
 	struct bnxt_ctx_mem_buf_info tx_fc_out_tbl;
 };
 
+struct bnxt_ring_stats {
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets discarded in transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of packets in transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of packets discarded in receive path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets in receive path with errors */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of aggregated unicast packets */
+	uint64_t	rx_agg_pkts;
+	/* Number of aggregated unicast bytes */
+	uint64_t	rx_agg_bytes;
+	/* Number of aggregation events */
+	uint64_t	rx_agg_events;
+	/* Number of aborted aggregations */
+	uint64_t	rx_agg_aborts;
+};
+
 struct bnxt {
 	void				*bar0;
@@ -832,6 +875,7 @@ struct bnxt {
 	struct bnxt_flow_stat_info *flow_stat;
 	uint16_t		max_num_kflows;
 	uint16_t		tx_cfa_action;
+	struct bnxt_ring_stats	*prev_ring_stats[RTE_ETHDEV_QUEUE_STAT_CNTRS];
 };
 
 static
@@ -999,5 +1043,6 @@ int bnxt_flow_stats_cnt(struct bnxt *bp);
 uint32_t bnxt_get_speed_capabilities(struct bnxt *bp);
 int bnxt_flow_ops_get_op(struct rte_eth_dev *dev,
 			 const struct rte_flow_ops **ops);
+void bnxt_update_prev_stat(uint64_t *cntr, uint64_t *prev_cntr);
 
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 3778e28cca..c66c9b0d75 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1557,12 +1557,17 @@ bnxt_uninit_locks(struct bnxt *bp)
 static void bnxt_drv_uninit(struct bnxt *bp)
 {
+	int i = 0;
+
 	bnxt_free_leds_info(bp);
 	bnxt_free_cos_queues(bp);
 	bnxt_free_link_info(bp);
 	bnxt_free_parent_info(bp);
 	bnxt_uninit_locks(bp);
 
+	for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++)
+		rte_free(bp->prev_ring_stats[i]);
+
 	rte_memzone_free((const struct rte_memzone *)bp->tx_mem_zone);
 	bp->tx_mem_zone = NULL;
 	rte_memzone_free((const struct rte_memzone *)bp->rx_mem_zone);
@@ -4644,6 +4649,7 @@ static int bnxt_alloc_stats_mem(struct bnxt *bp)
 	const struct rte_memzone *mz = NULL;
 	uint32_t total_alloc_len;
 	rte_iova_t mz_phys_addr;
+	int i = 0;
 
 	if (pci_dev->id.device_id == BROADCOM_DEV_ID_NS2)
 		return 0;
@@ -4724,6 +4730,14 @@ static int bnxt_alloc_stats_mem(struct bnxt *bp)
 		bp->flags |= BNXT_FLAG_EXT_TX_PORT_STATS;
 	}
 
+	for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+		bp->prev_ring_stats[i] = rte_zmalloc("bnxt_prev_stats",
+						     sizeof(struct bnxt_ring_stats),
+						     0);
+		if (bp->prev_ring_stats[i] == NULL)
+			return -ENOMEM;
+	}
+
 	return 0;
 }
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 3d04b93d88..b495cf1dd7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -4305,12 +4305,25 @@ int bnxt_hwrm_exec_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	return rc;
 }
 
-int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
-			 struct rte_eth_stats *stats, uint8_t rx)
+void bnxt_update_prev_stat(uint64_t *cntr, uint64_t *prev_cntr)
+{
+	/* One of the HW stat values that make up this counter was zero as
+	 * returned by HW in this iteration, so use the previous
+	 * iteration's counter value
+	 */
+	if (*prev_cntr && *cntr == 0)
+		*cntr = *prev_cntr;
+	else
+		*prev_cntr = *cntr;
+}
+
+int bnxt_hwrm_ring_stats(struct bnxt *bp, uint32_t cid, int idx,
+			 struct bnxt_ring_stats *ring_stats, bool rx)
 {
 	int rc = 0;
 	struct hwrm_stat_ctx_query_input req = {.req_type = 0};
 	struct hwrm_stat_ctx_query_output *resp = bp->hwrm_cmd_resp_addr;
+	struct bnxt_ring_stats *prev_stats = bp->prev_ring_stats[idx];
 
 	HWRM_PREP(&req, HWRM_STAT_CTX_QUERY, BNXT_USE_CHIMP_MB);
 
@@ -4321,21 +4334,82 @@ int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
 	HWRM_CHECK_RESULT();
 
 	if (rx) {
-		stats->q_ipackets[idx] = rte_le_to_cpu_64(resp->rx_ucast_pkts);
-		stats->q_ipackets[idx] += rte_le_to_cpu_64(resp->rx_mcast_pkts);
-		stats->q_ipackets[idx] += rte_le_to_cpu_64(resp->rx_bcast_pkts);
-		stats->q_ibytes[idx] = rte_le_to_cpu_64(resp->rx_ucast_bytes);
-		stats->q_ibytes[idx] += rte_le_to_cpu_64(resp->rx_mcast_bytes);
-		stats->q_ibytes[idx] += rte_le_to_cpu_64(resp->rx_bcast_bytes);
-		stats->q_errors[idx] = rte_le_to_cpu_64(resp->rx_discard_pkts);
-		stats->q_errors[idx] += rte_le_to_cpu_64(resp->rx_error_pkts);
+		ring_stats->rx_ucast_pkts = rte_le_to_cpu_64(resp->rx_ucast_pkts);
+		bnxt_update_prev_stat(&ring_stats->rx_ucast_pkts,
+				      &prev_stats->rx_ucast_pkts);
+
+		ring_stats->rx_mcast_pkts = rte_le_to_cpu_64(resp->rx_mcast_pkts);
+		bnxt_update_prev_stat(&ring_stats->rx_mcast_pkts,
+				      &prev_stats->rx_mcast_pkts);
+
+		ring_stats->rx_bcast_pkts = rte_le_to_cpu_64(resp->rx_bcast_pkts);
+		bnxt_update_prev_stat(&ring_stats->rx_bcast_pkts,
+				      &prev_stats->rx_bcast_pkts);
+
+		ring_stats->rx_ucast_bytes = rte_le_to_cpu_64(resp->rx_ucast_bytes);
+		bnxt_update_prev_stat(&ring_stats->rx_ucast_bytes,
+				      &prev_stats->rx_ucast_bytes);
+
+		ring_stats->rx_mcast_bytes = rte_le_to_cpu_64(resp->rx_mcast_bytes);
+		bnxt_update_prev_stat(&ring_stats->rx_mcast_bytes,
+				      &prev_stats->rx_mcast_bytes);
+
+		ring_stats->rx_bcast_bytes = rte_le_to_cpu_64(resp->rx_bcast_bytes);
+		bnxt_update_prev_stat(&ring_stats->rx_bcast_bytes,
+				      &prev_stats->rx_bcast_bytes);
+
+		ring_stats->rx_discard_pkts = rte_le_to_cpu_64(resp->rx_discard_pkts);
+		bnxt_update_prev_stat(&ring_stats->rx_discard_pkts,
+				      &prev_stats->rx_discard_pkts);
+
+		ring_stats->rx_error_pkts = rte_le_to_cpu_64(resp->rx_error_pkts);
+		bnxt_update_prev_stat(&ring_stats->rx_error_pkts,
+				      &prev_stats->rx_error_pkts);
+
+		ring_stats->rx_agg_pkts = rte_le_to_cpu_64(resp->rx_agg_pkts);
+		bnxt_update_prev_stat(&ring_stats->rx_agg_pkts,
+				      &prev_stats->rx_agg_pkts);
+
+		ring_stats->rx_agg_bytes = rte_le_to_cpu_64(resp->rx_agg_bytes);
+		bnxt_update_prev_stat(&ring_stats->rx_agg_bytes,
+				      &prev_stats->rx_agg_bytes);
+
+		ring_stats->rx_agg_events = rte_le_to_cpu_64(resp->rx_agg_events);
+		bnxt_update_prev_stat(&ring_stats->rx_agg_events,
+				      &prev_stats->rx_agg_events);
+
+		ring_stats->rx_agg_aborts = rte_le_to_cpu_64(resp->rx_agg_aborts);
+		bnxt_update_prev_stat(&ring_stats->rx_agg_aborts,
+				      &prev_stats->rx_agg_aborts);
 	} else {
-		stats->q_opackets[idx] = rte_le_to_cpu_64(resp->tx_ucast_pkts);
-		stats->q_opackets[idx] += rte_le_to_cpu_64(resp->tx_mcast_pkts);
-		stats->q_opackets[idx] += rte_le_to_cpu_64(resp->tx_bcast_pkts);
-		stats->q_obytes[idx] = rte_le_to_cpu_64(resp->tx_ucast_bytes);
-		stats->q_obytes[idx] += rte_le_to_cpu_64(resp->tx_mcast_bytes);
-		stats->q_obytes[idx] += rte_le_to_cpu_64(resp->tx_bcast_bytes);
+		/* Tx */
+		ring_stats->tx_ucast_pkts = rte_le_to_cpu_64(resp->tx_ucast_pkts);
+		bnxt_update_prev_stat(&ring_stats->tx_ucast_pkts,
+				      &prev_stats->tx_ucast_pkts);
+
+		ring_stats->tx_mcast_pkts = rte_le_to_cpu_64(resp->tx_mcast_pkts);
+		bnxt_update_prev_stat(&ring_stats->tx_mcast_pkts,
+				      &prev_stats->tx_mcast_pkts);
+
+		ring_stats->tx_bcast_pkts = rte_le_to_cpu_64(resp->tx_bcast_pkts);
+		bnxt_update_prev_stat(&ring_stats->tx_bcast_pkts,
+				      &prev_stats->tx_bcast_pkts);
+
+		ring_stats->tx_ucast_bytes = rte_le_to_cpu_64(resp->tx_ucast_bytes);
+		bnxt_update_prev_stat(&ring_stats->tx_ucast_bytes,
+				      &prev_stats->tx_ucast_bytes);
+
+		ring_stats->tx_mcast_bytes = rte_le_to_cpu_64(resp->tx_mcast_bytes);
+		bnxt_update_prev_stat(&ring_stats->tx_mcast_bytes,
+				      &prev_stats->tx_mcast_bytes);
+
+		ring_stats->tx_bcast_bytes = rte_le_to_cpu_64(resp->tx_bcast_bytes);
+		bnxt_update_prev_stat(&ring_stats->tx_bcast_bytes,
+				      &prev_stats->tx_bcast_bytes);
+
+		ring_stats->tx_discard_pkts = rte_le_to_cpu_64(resp->tx_discard_pkts);
+		bnxt_update_prev_stat(&ring_stats->tx_discard_pkts,
+				      &prev_stats->tx_discard_pkts);
 	}
 
 	HWRM_UNLOCK();
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index a7271b7876..8fe0602689 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -302,4 +302,6 @@ int bnxt_hwrm_fw_echo_reply(struct bnxt *bp, uint32_t echo_req_data1,
 			    uint32_t echo_req_data2);
 int bnxt_hwrm_poll_ver_get(struct bnxt *bp);
 int bnxt_hwrm_rx_ring_reset(struct bnxt *bp, int queue_index);
+int bnxt_hwrm_ring_stats(struct bnxt *bp, uint32_t cid, int idx,
+			 struct bnxt_ring_stats *stats, bool rx);
 #endif
diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c
index 11767e06d0..0b49861894 100644
--- a/drivers/net/bnxt/bnxt_stats.c
+++ b/drivers/net/bnxt/bnxt_stats.c
@@ -506,8 +506,51 @@ void bnxt_free_stats(struct bnxt *bp)
 	}
 }
 
+static void bnxt_fill_rte_eth_stats(struct rte_eth_stats *stats,
+				    struct bnxt_ring_stats *ring_stats,
+				    unsigned int i, bool rx)
+{
+	if (rx) {
+		stats->q_ipackets[i] = ring_stats->rx_ucast_pkts;
+		stats->q_ipackets[i] += ring_stats->rx_mcast_pkts;
+		stats->q_ipackets[i] += ring_stats->rx_bcast_pkts;
+
+		stats->ipackets += stats->q_ipackets[i];
+
+		stats->q_ibytes[i] = ring_stats->rx_ucast_bytes;
+		stats->q_ibytes[i] += ring_stats->rx_mcast_bytes;
+		stats->q_ibytes[i] += ring_stats->rx_bcast_bytes;
+
+		stats->ibytes += stats->q_ibytes[i];
+
+		stats->q_errors[i] = ring_stats->rx_discard_pkts;
+		stats->q_errors[i] += ring_stats->rx_error_pkts;
+
+		/**< Total of RX packets dropped by the HW */
+		stats->imissed += stats->q_errors[i];
+		/**< Total number of erroneous received packets. */
+		/* stats->ierrors = TBD - rx_error_pkts are also dropped,
+		 * so need to see what to do here
+		 */
+	} else {
+		stats->q_opackets[i] = ring_stats->tx_ucast_pkts;
+		stats->q_opackets[i] += ring_stats->tx_mcast_pkts;
+		stats->q_opackets[i] += ring_stats->tx_bcast_pkts;
+
+		stats->opackets += stats->q_opackets[i];
+
+		stats->q_obytes[i] = ring_stats->tx_ucast_bytes;
+		stats->q_obytes[i] += ring_stats->tx_mcast_bytes;
+		stats->q_obytes[i] += ring_stats->tx_bcast_bytes;
+
+		stats->obytes += stats->q_obytes[i];
+
+		stats->oerrors += ring_stats->tx_discard_pkts;
+	}
+}
+
 int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
-			struct rte_eth_stats *bnxt_stats)
+		      struct rte_eth_stats *bnxt_stats)
 {
 	int rc = 0;
 	unsigned int i;
@@ -527,13 +570,17 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
 	for (i = 0; i < num_q_stats; i++) {
 		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
 		struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
+		struct bnxt_ring_stats ring_stats = {0};
 
 		if (!rxq->rx_started)
 			continue;
-		rc = bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i,
-					  bnxt_stats, 1);
+
+		rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i,
+					  &ring_stats, true);
 		if (unlikely(rc))
 			return rc;
+
+		bnxt_fill_rte_eth_stats(bnxt_stats, &ring_stats, i, true);
 		bnxt_stats->rx_nombuf +=
 				rte_atomic64_read(&rxq->rx_mbuf_alloc_fail);
 	}
@@ -544,16 +591,19 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
 	for (i = 0; i < num_q_stats; i++) {
 		struct bnxt_tx_queue *txq = bp->tx_queues[i];
 		struct bnxt_cp_ring_info *cpr = txq->cp_ring;
+		struct bnxt_ring_stats ring_stats = {0};
 
 		if (!txq->tx_started)
 			continue;
-		rc = bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i,
-					  bnxt_stats, 0);
+
+		rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i,
+					  &ring_stats, false);
 		if (unlikely(rc))
 			return rc;
+
+		bnxt_fill_rte_eth_stats(bnxt_stats, &ring_stats, i, false);
 	}
 
-	rc = bnxt_hwrm_func_qstats(bp, 0xffff, bnxt_stats, NULL);
 	return rc;
 }
@@ -582,6 +632,40 @@ int bnxt_stats_reset_op(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
+static void bnxt_fill_func_qstats(struct hwrm_func_qstats_output *func_qstats,
+				  struct bnxt_ring_stats *ring_stats,
+				  bool rx)
+{
+	if (rx) {
+		func_qstats->rx_ucast_pkts += ring_stats->rx_ucast_pkts;
+		func_qstats->rx_mcast_pkts += ring_stats->rx_mcast_pkts;
+		func_qstats->rx_bcast_pkts += ring_stats->rx_bcast_pkts;
+
+		func_qstats->rx_ucast_bytes += ring_stats->rx_ucast_bytes;
+		func_qstats->rx_mcast_bytes += ring_stats->rx_mcast_bytes;
+		func_qstats->rx_bcast_bytes += ring_stats->rx_bcast_bytes;
+
+		func_qstats->rx_discard_pkts += ring_stats->rx_discard_pkts;
+		func_qstats->rx_drop_pkts += ring_stats->rx_error_pkts;
+
+		func_qstats->rx_agg_pkts += ring_stats->rx_agg_pkts;
+		func_qstats->rx_agg_bytes += ring_stats->rx_agg_bytes;
+		func_qstats->rx_agg_events += ring_stats->rx_agg_events;
+		func_qstats->rx_agg_aborts += ring_stats->rx_agg_aborts;
+	} else {
+		func_qstats->tx_ucast_pkts += ring_stats->tx_ucast_pkts;
+		func_qstats->tx_mcast_pkts += ring_stats->tx_mcast_pkts;
+		func_qstats->tx_bcast_pkts += ring_stats->tx_bcast_pkts;
+
+		func_qstats->tx_ucast_bytes += ring_stats->tx_ucast_bytes;
+		func_qstats->tx_mcast_bytes += ring_stats->tx_mcast_bytes;
+		func_qstats->tx_bcast_bytes += ring_stats->tx_bcast_bytes;
+
+		func_qstats->tx_drop_pkts += ring_stats->tx_error_pkts;
+		func_qstats->tx_discard_pkts += ring_stats->tx_discard_pkts;
+	}
+}
+
 int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
 			   struct rte_eth_xstat *xstats, unsigned int n)
 {
@@ -591,7 +675,7 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
 	unsigned int tx_port_stats_ext_cnt;
 	unsigned int stat_size = sizeof(uint64_t);
 	struct hwrm_func_qstats_output func_qstats = {0};
-	unsigned int stat_count;
+	unsigned int stat_count, num_q_stats;
 	int rc;
 
 	rc = is_bnxt_in_error(bp);
@@ -608,7 +692,44 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
 	if (n < stat_count || xstats == NULL)
 		return stat_count;
 
-	bnxt_hwrm_func_qstats(bp, 0xffff, NULL, &func_qstats);
+	num_q_stats = RTE_MIN(bp->rx_cp_nr_rings,
+			      (unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS);
+
+	for (i = 0; i < num_q_stats; i++) {
+		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
+		struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
+		struct bnxt_ring_stats ring_stats = {0};
+
+		if (!rxq->rx_started)
+			continue;
+
+		rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i,
+					  &ring_stats, true);
+		if (unlikely(rc))
+			return rc;
+
+		bnxt_fill_func_qstats(&func_qstats, &ring_stats, true);
+	}
+
+	num_q_stats = RTE_MIN(bp->tx_cp_nr_rings,
+			      (unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS);
+
+	for (i = 0; i < num_q_stats; i++) {
+		struct bnxt_tx_queue *txq = bp->tx_queues[i];
+		struct bnxt_cp_ring_info *cpr = txq->cp_ring;
+		struct bnxt_ring_stats ring_stats = {0};
+
+		if (!txq->tx_started)
+			continue;
+
+		rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i,
+					  &ring_stats, false);
+		if (unlikely(rc))
+			return rc;
+
+		bnxt_fill_func_qstats(&func_qstats, &ring_stats, false);
+	}
+
 	bnxt_hwrm_port_qstats(bp);
 	bnxt_hwrm_ext_port_qstats(bp);
 
 	rx_port_stats_ext_cnt = RTE_MIN(RTE_DIM(bnxt_rx_ext_stats_strings),