From patchwork Fri Sep 22 20:13:34 2023
X-Patchwork-Submitter: Amit Prakash Shukla <amitprakashs@marvell.com>
X-Patchwork-Id: 131840
X-Patchwork-Delegate: jerinj@marvell.com
From: Amit Prakash Shukla <amitprakashs@marvell.com>
To: Amit Prakash Shukla <amitprakashs@marvell.com>, Jerin Jacob <jerinj@marvell.com>
Subject: [PATCH v2 09/12] eventdev: add support for DMA adapter stats
Date: Sat, 23 Sep 2023 01:43:34 +0530
Message-ID: <20230922201337.3347666-10-amitprakashs@marvell.com>
In-Reply-To: <20230922201337.3347666-1-amitprakashs@marvell.com>
References: <20230919134222.2500033-1-amitprakashs@marvell.com>
 <20230922201337.3347666-1-amitprakashs@marvell.com>

Added DMA adapter statistics support: get and reset functions.
Signed-off-by: Amit Prakash Shukla <amitprakashs@marvell.com>
---
 lib/eventdev/rte_event_dma_adapter.c | 95 ++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index 20817768fc..06a09371d2 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -143,6 +143,9 @@ struct event_dma_adapter {

 	/* Loop counter to flush dma ops */
 	uint16_t transmit_loop_count;
+
+	/* Per instance stats structure */
+	struct rte_event_dma_adapter_stats dma_stats;
 } __rte_cache_aligned;

 static struct event_dma_adapter **event_dma_adapter;
@@ -472,6 +475,7 @@ rte_event_dma_adapter_free(uint8_t id)
 static inline unsigned int
 edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, unsigned int cnt)
 {
+	struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
 	struct dma_vchan_info *vchan_qinfo = NULL;
 	struct rte_event_dma_adapter_op *dma_op;
 	uint16_t vchan, nb_enqueued = 0;
@@ -481,6 +485,7 @@ edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, uns

 	ret = 0;
 	n = 0;
+	stats->event_deq_count += cnt;

 	for (i = 0; i < cnt; i++) {
 		dma_op = ev[i].event_ptr;
@@ -503,6 +508,7 @@ edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, uns
 			ret = edma_circular_buffer_flush_to_dma_dev(adapter, &vchan_qinfo->dma_buf,
 								    dma_dev_id, vchan,
 								    &nb_enqueued);
+			stats->dma_enq_count += nb_enqueued;
 			n += nb_enqueued;

 			/**
@@ -549,6 +555,7 @@ edma_adapter_dev_flush(struct event_dma_adapter *adapter, int16_t dma_dev_id,
 static unsigned int
 edma_adapter_enq_flush(struct event_dma_adapter *adapter)
 {
+	struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
 	int16_t dma_dev_id;
 	uint16_t nb_enqueued = 0;
 	uint16_t nb_ops_flushed = 0;
@@ -563,6 +570,8 @@ edma_adapter_enq_flush(struct event_dma_adapter *adapter)
 	if (!nb_ops_flushed)
 		adapter->stop_enq_to_dma_dev = false;

+	stats->dma_enq_count += nb_enqueued;
+
 	return nb_enqueued;
 }
@@ -574,6 +583,7 @@ edma_adapter_enq_flush(struct event_dma_adapter *adapter)
 static int
 edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq)
 {
+	struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
 	uint8_t event_port_id = adapter->event_port_id;
 	uint8_t event_dev_id = adapter->eventdev_id;
 	struct rte_event ev[DMA_BATCH_SIZE];
@@ -593,6 +603,7 @@ edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq)
 			break;
 		}

+		stats->event_poll_count++;
 		n = rte_event_dequeue_burst(event_dev_id, event_port_id, ev, DMA_BATCH_SIZE, 0);

 		if (!n)
@@ -613,6 +624,7 @@
 static inline uint16_t
 edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct rte_event_dma_adapter_op **ops,
 		       uint16_t num)
 {
+	struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
 	uint8_t event_port_id = adapter->event_port_id;
 	uint8_t event_dev_id = adapter->eventdev_id;
 	struct rte_event events[DMA_BATCH_SIZE];
@@ -652,6 +664,10 @@ edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct rte_event_dma_a
 	} while (retry++ < DMA_ADAPTER_MAX_EV_ENQ_RETRIES && nb_enqueued < nb_ev);

+	stats->event_enq_fail_count += nb_ev - nb_enqueued;
+	stats->event_enq_count += nb_enqueued;
+	stats->event_enq_retry_count += retry - 1;
+
 	return nb_enqueued;
 }
@@ -706,6 +722,7 @@ edma_ops_buffer_flush(struct event_dma_adapter *adapter)
 static inline unsigned int
 edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq)
 {
+	struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
 	struct dma_vchan_info *vchan_info;
 	struct dma_ops_circular_buffer *tq_buf;
 	struct rte_event_dma_adapter_op *ops;
@@ -743,6 +760,7 @@ edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq)
 				continue;

 			done = false;
+			stats->dma_deq_count += n;

 			tq_buf = &dev_info->tqmap[vchan].dma_buf;
@@ -1305,3 +1323,80 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id,

 	return 0;
 }
+
+int
+rte_event_dma_adapter_stats_get(uint8_t id, struct rte_event_dma_adapter_stats *stats)
+{
+	struct rte_event_dma_adapter_stats dev_stats_sum = {0};
+	struct rte_event_dma_adapter_stats dev_stats;
+	struct event_dma_adapter *adapter;
+	struct dma_device_info *dev_info;
+	struct rte_eventdev *dev;
+	uint16_t num_dma_dev;
+	uint32_t i;
+	int ret;
+
+	EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	adapter = edma_id_to_adapter(id);
+	if (adapter == NULL || stats == NULL)
+		return -EINVAL;
+
+	num_dma_dev = rte_dma_count_avail();
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < num_dma_dev; i++) {
+		dev_info = &adapter->dma_devs[i];
+
+		if (dev_info->internal_event_port == 0 ||
+		    dev->dev_ops->dma_adapter_stats_get == NULL)
+			continue;
+
+		ret = (*dev->dev_ops->dma_adapter_stats_get)(dev, i, &dev_stats);
+		if (ret)
+			continue;
+
+		dev_stats_sum.dma_deq_count += dev_stats.dma_deq_count;
+		dev_stats_sum.event_enq_count += dev_stats.event_enq_count;
+	}
+
+	if (adapter->service_initialized)
+		*stats = adapter->dma_stats;
+
+	stats->dma_deq_count += dev_stats_sum.dma_deq_count;
+	stats->event_enq_count += dev_stats_sum.event_enq_count;
+
+	return 0;
+}
+
+int
+rte_event_dma_adapter_stats_reset(uint8_t id)
+{
+	struct event_dma_adapter *adapter;
+	struct dma_device_info *dev_info;
+	struct rte_eventdev *dev;
+	uint16_t num_dma_dev;
+	uint32_t i;
+
+	EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	adapter = edma_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	num_dma_dev = rte_dma_count_avail();
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	for (i = 0; i < num_dma_dev; i++) {
+		dev_info = &adapter->dma_devs[i];
+
+		if (dev_info->internal_event_port == 0 ||
+		    dev->dev_ops->dma_adapter_stats_reset == NULL)
+			continue;
+
+		(*dev->dev_ops->dma_adapter_stats_reset)(dev, i);
+	}
+
+	memset(&adapter->dma_stats, 0, sizeof(adapter->dma_stats));
+
+	return 0;
+}
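
For anyone trying the series, below is a minimal usage sketch of the new API (not part of
the patch). It assumes an adapter with id 0 was already created and started elsewhere, that
the public header is rte_event_dma_adapter.h, and that the counters in
struct rte_event_dma_adapter_stats are 64-bit; only the field names updated above are used.

#include <inttypes.h>
#include <stdio.h>

#include <rte_event_dma_adapter.h>

static void
dma_adapter_stats_dump(uint8_t adapter_id)
{
	struct rte_event_dma_adapter_stats stats;

	/* Aggregates the SW service counters with any PMD-reported counters. */
	if (rte_event_dma_adapter_stats_get(adapter_id, &stats) != 0) {
		printf("stats get failed for adapter %d\n", adapter_id);
		return;
	}

	printf("event_poll_count:      %" PRIu64 "\n", stats.event_poll_count);
	printf("event_deq_count:       %" PRIu64 "\n", stats.event_deq_count);
	printf("dma_enq_count:         %" PRIu64 "\n", stats.dma_enq_count);
	printf("dma_deq_count:         %" PRIu64 "\n", stats.dma_deq_count);
	printf("event_enq_count:       %" PRIu64 "\n", stats.event_enq_count);
	printf("event_enq_retry_count: %" PRIu64 "\n", stats.event_enq_retry_count);
	printf("event_enq_fail_count:  %" PRIu64 "\n", stats.event_enq_fail_count);

	/* Clear both the adapter-level and any PMD-level counters. */
	rte_event_dma_adapter_stats_reset(adapter_id);
}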