From patchwork Wed Jul 6 07:52:19 2022
X-Patchwork-Submitter: Aman Kumar <aman.kumar@vvdntech.in>
X-Patchwork-Id: 113747
X-Patchwork-Delegate: thomas@monjalon.net
From: Aman Kumar <aman.kumar@vvdntech.in>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, david.marchand@redhat.com,
 aman.kumar@vvdntech.in
Subject: [RFC PATCH 29/29] net/qdma: add stats PMD ops for PF and VF
Date: Wed, 6 Jul 2022 13:22:19 +0530
Message-Id: <20220706075219.517046-30-aman.kumar@vvdntech.in>
In-Reply-To: <20220706075219.517046-1-aman.kumar@vvdntech.in>
References: <20220706075219.517046-1-aman.kumar@vvdntech.in>

This patch implements the PMD ops related to stats for both the PF and
VF functions.
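As context for reviewers, the counters added here surface through the
standard ethdev stats API. A minimal application-side sketch follows;
port ID 0, queue 0 and stats counter index 0 are assumptions for
illustration, not part of this patch:

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Illustrative only: exercises the stats ops added below. Assumes
     * port 0 is a configured and started qdma port with an Rx queue 0. */
    static void show_port_stats(uint16_t port_id)
    {
            struct rte_eth_stats stats;

            /* Map Rx queue 0 onto per-queue stats counter 0; this is
             * served by qdma_dev_queue_stats_mapping(). */
            if (rte_eth_dev_set_rx_queue_stats_mapping(port_id, 0, 0) != 0)
                    printf("per-queue stats mapping not applied\n");

            /* Aggregate and per-queue counters are filled in by
             * qdma_dev_stats_get(). */
            if (rte_eth_stats_get(port_id, &stats) == 0)
                    printf("rx %" PRIu64 "/%" PRIu64 " pkts/bytes, "
                           "tx %" PRIu64 "/%" PRIu64 " pkts/bytes\n",
                           stats.ipackets, stats.ibytes,
                           stats.opackets, stats.obytes);

            /* Clears the per-queue counters via qdma_dev_stats_reset(). */
            rte_eth_stats_reset(port_id);
    }

Note that this patch wires up only .stats_get in the VF ops table, so
the mapping and reset calls above apply to the PF.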
Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma.h           |   3 +-
 drivers/net/qdma/qdma_devops.c    | 114 +++++++++++++++++++++++++++---
 drivers/net/qdma/qdma_rxtx.c      |   4 ++
 drivers/net/qdma/qdma_vf_ethdev.c |   1 +
 4 files changed, 111 insertions(+), 11 deletions(-)

diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index d9239f34a7..4c86d0702a 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -149,6 +149,7 @@ struct qdma_rx_queue {
 	uint64_t mbuf_initializer; /* value to init mbufs */
 	struct qdma_q_pidx_reg_info q_pidx_info;
 	struct qdma_q_cmpt_cidx_reg_info cmpt_cidx_info;
+	struct qdma_pkt_stats stats;
 	uint16_t port_id; /* Device port identifier. */
 	uint8_t status:1;
 	uint8_t err:1;
@@ -212,9 +213,7 @@ struct qdma_tx_queue {
 	uint16_t port_id; /* Device port identifier. */
 	uint8_t func_id; /* RX queue index. */
 	int8_t ringszidx;
-
 	struct qdma_pkt_stats stats;
-
 	uint64_t ep_addr;
 	uint32_t queue_id; /* TX queue index. */
 	uint32_t num_queues; /* TX queue index. */
diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index e6803dd86f..f0b7291e8c 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -1745,9 +1745,40 @@ int qdma_dev_get_regs(struct rte_eth_dev *dev,
 		      struct rte_dev_reg_info *regs)
 {
-	(void)dev;
-	(void)regs;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	uint32_t *data = regs->data;
+	uint32_t reg_length = 0;
+	int ret = 0;
+
+	ret = qdma_acc_get_num_config_regs(dev,
+			(enum qdma_ip_type)qdma_dev->ip_type,
+			&reg_length);
+	if (ret < 0 || reg_length == 0) {
+		PMD_DRV_LOG(ERR, "%s: Failed to get number of config registers\n",
+			__func__);
+		return ret;
+	}
+	if (data == NULL) {
+		regs->length = reg_length - 1;
+		regs->width = sizeof(uint32_t);
+		return 0;
+	}
+
+	/* Support only full register dump */
+	if (regs->length == 0 || regs->length == (reg_length - 1)) {
+		regs->version = 1;
+		ret = qdma_acc_get_config_regs(dev, qdma_dev->is_vf,
+			(enum qdma_ip_type)qdma_dev->ip_type, data);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "%s: Failed to get config registers\n",
+				__func__);
+		}
+		return ret;
+	}
+
+	PMD_DRV_LOG(ERR, "%s: Unsupported length (0x%x) requested\n",
+		__func__, regs->length);
 
 	return -ENOTSUP;
 }
@@ -1773,11 +1804,30 @@ int qdma_dev_queue_stats_mapping(struct rte_eth_dev *dev,
 			uint8_t stat_idx,
 			uint8_t is_rx)
 {
-	(void)dev;
-	(void)queue_id;
-	(void)stat_idx;
-	(void)is_rx;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	if (is_rx && queue_id >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "%s: Invalid Rx qid %d\n",
+			__func__, queue_id);
+		return -EINVAL;
+	}
+
+	if (!is_rx && queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "%s: Invalid Tx qid %d\n",
+			__func__, queue_id);
+		return -EINVAL;
+	}
+	if (stat_idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+		PMD_DRV_LOG(ERR, "%s: Invalid stats index %d\n",
+			__func__, stat_idx);
+		return -EINVAL;
+	}
+
+	if (is_rx)
+		qdma_dev->rx_qid_statid_map[stat_idx] = queue_id;
+	else
+		qdma_dev->tx_qid_statid_map[stat_idx] = queue_id;
 
 	return 0;
 }
@@ -1795,9 +1845,42 @@ int qdma_dev_queue_stats_mapping(struct rte_eth_dev *dev,
 int qdma_dev_stats_get(struct rte_eth_dev *dev,
 		       struct rte_eth_stats *eth_stats)
 {
-	(void)dev;
-	(void)eth_stats;
+	uint32_t i;
+	int qid;
+	struct qdma_rx_queue *rxq;
+	struct qdma_tx_queue *txq;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	memset(eth_stats, 0, sizeof(struct rte_eth_stats));
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = (struct qdma_rx_queue *)dev->data->rx_queues[i];
+		eth_stats->ipackets += rxq->stats.pkts;
+		eth_stats->ibytes += rxq->stats.bytes;
+	}
+
+	for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+		qid = qdma_dev->rx_qid_statid_map[i];
+		if (qid >= 0) {
+			rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+			eth_stats->q_ipackets[i] = rxq->stats.pkts;
+			eth_stats->q_ibytes[i] = rxq->stats.bytes;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = (struct qdma_tx_queue *)dev->data->tx_queues[i];
+		eth_stats->opackets += txq->stats.pkts;
+		eth_stats->obytes += txq->stats.bytes;
+	}
+
+	for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+		qid = qdma_dev->tx_qid_statid_map[i];
+		if (qid >= 0) {
+			txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
+			eth_stats->q_opackets[i] = txq->stats.pkts;
+			eth_stats->q_obytes[i] = txq->stats.bytes;
+		}
+	}
 
 	return 0;
 }
@@ -1810,8 +1893,21 @@ int qdma_dev_stats_get(struct rte_eth_dev *dev,
  */
 int qdma_dev_stats_reset(struct rte_eth_dev *dev)
 {
-	(void)dev;
+	uint32_t i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct qdma_rx_queue *rxq =
+			(struct qdma_rx_queue *)dev->data->rx_queues[i];
+		rxq->stats.pkts = 0;
+		rxq->stats.bytes = 0;
+	}
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct qdma_tx_queue *txq =
+			(struct qdma_tx_queue *)dev->data->tx_queues[i];
+		txq->stats.pkts = 0;
+		txq->stats.bytes = 0;
+	}
 
 	return 0;
 }
diff --git a/drivers/net/qdma/qdma_rxtx.c b/drivers/net/qdma/qdma_rxtx.c
index 7652f35dd2..8a4caa465b 100644
--- a/drivers/net/qdma/qdma_rxtx.c
+++ b/drivers/net/qdma/qdma_rxtx.c
@@ -708,6 +708,8 @@ struct rte_mbuf *prepare_single_packet(struct qdma_rx_queue *rxq,
 	pkt_length = qdma_ul_get_cmpt_pkt_len(&rxq->cmpt_data[cmpt_idx]);
 
 	if (pkt_length) {
+		rxq->stats.pkts++;
+		rxq->stats.bytes += pkt_length;
 		if (likely(pkt_length <= rxq->rx_buff_size)) {
 			mb = rxq->sw_ring[id];
 			rxq->sw_ring[id++] = NULL;
@@ -870,6 +872,8 @@ static uint16_t prepare_packets(struct qdma_rx_queue *rxq,
 	while (count < nb_pkts) {
 		pkt_length = qdma_ul_get_cmpt_pkt_len(&rxq->cmpt_data[count]);
 		if (pkt_length) {
+			rxq->stats.pkts++;
+			rxq->stats.bytes += pkt_length;
 			mb = prepare_segmented_packet(rxq,
 					pkt_length, &rxq->rx_tail);
 			rx_pkts[count_pkts++] = mb;
diff --git a/drivers/net/qdma/qdma_vf_ethdev.c b/drivers/net/qdma/qdma_vf_ethdev.c
index cbae4c9716..50529340b5 100644
--- a/drivers/net/qdma/qdma_vf_ethdev.c
+++ b/drivers/net/qdma/qdma_vf_ethdev.c
@@ -796,6 +796,7 @@ static struct eth_dev_ops qdma_vf_eth_dev_ops = {
 	.rx_queue_stop = qdma_vf_dev_rx_queue_stop,
 	.tx_queue_start = qdma_vf_dev_tx_queue_start,
 	.tx_queue_stop = qdma_vf_dev_tx_queue_stop,
+	.stats_get = qdma_dev_stats_get,
 };
 
 /**
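The register dump op implemented above (qdma_dev_get_regs) follows the
usual two-call ethdev contract: a first rte_eth_dev_get_reg_info() call
with data == NULL reports the dump length and register width, then the
caller allocates a buffer and calls again. A sketch of that calling
sequence (hypothetical helper, error handling trimmed):

    #include <stdlib.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Illustrative caller of the get_regs op: the first call sizes the
     * dump, the second fills it. The caller owns the returned buffer. */
    static uint32_t *fetch_reg_dump(uint16_t port_id,
                                    struct rte_dev_reg_info *info)
    {
            uint32_t *data;

            memset(info, 0, sizeof(*info));
            if (rte_eth_dev_get_reg_info(port_id, info) != 0)
                    return NULL; /* op unsupported or sizing failed */

            data = calloc(info->length, info->width);
            if (data == NULL)
                    return NULL;

            /* Keep length/width as reported so the PMD treats this as
             * a full register dump. */
            info->data = data;
            if (rte_eth_dev_get_reg_info(port_id, info) != 0) {
                    free(data);
                    return NULL;
            }
            return data;
    }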