From patchwork Fri Jun 28 07:49:52 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55548
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [dpdk-dev] [PATCH v2 13/44] event/octeontx2: add xstats support
From: Pavan Nikhilesh
Cc: Pavan Nikhilesh, Nithin Dabilpuram
Date: Fri, 28 Jun 2019 13:19:52 +0530
Message-ID: <20190628075024.404-14-pbhagavatula@marvell.com>
In-Reply-To: <20190628075024.404-1-pbhagavatula@marvell.com>
References: <20190628075024.404-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Add support for retrieving extended statistics (xstats) from the SSO
GWS (hardware work slots, exposed as event ports) and GGRP (SSO groups,
exposed as event queues). The counters are fetched through the AF
mailbox and reported relative to the snapshot taken at the last xstats
reset. A usage sketch based on the generic rte_event_dev_xstats_* API
is appended after the diff.
Signed-off-by: Pavan Nikhilesh
Signed-off-by: Jerin Jacob
Signed-off-by: Nithin Dabilpuram
---
 drivers/event/octeontx2/otx2_evdev.c       |   5 +
 drivers/event/octeontx2/otx2_evdev_stats.h | 242 +++++++++++++++++++++
 2 files changed, 247 insertions(+)
 create mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 6c37c5b5c..51220f447 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -12,6 +12,7 @@
 #include
 #include

+#include "otx2_evdev_stats.h"
 #include "otx2_evdev.h"
 #include "otx2_irq.h"

@@ -763,6 +764,10 @@ static struct rte_eventdev_ops otx2_sso_ops = {
 	.port_unlink = otx2_sso_port_unlink,
 	.timeout_ticks = otx2_sso_timeout_ticks,

+	.xstats_get = otx2_sso_xstats_get,
+	.xstats_reset = otx2_sso_xstats_reset,
+	.xstats_get_names = otx2_sso_xstats_get_names,
+
 	.dump = otx2_sso_dump,
 };

diff --git a/drivers/event/octeontx2/otx2_evdev_stats.h b/drivers/event/octeontx2/otx2_evdev_stats.h
new file mode 100644
index 000000000..df76a1333
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_evdev_stats.h
@@ -0,0 +1,242 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_EVDEV_STATS_H__
+#define __OTX2_EVDEV_STATS_H__
+
+#include "otx2_evdev.h"
+
+struct otx2_sso_xstats_name {
+	const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
+	const size_t offset;
+	const uint64_t mask;
+	const uint8_t shift;
+	uint64_t reset_snap[OTX2_SSO_MAX_VHGRP];
+};
+
+static struct otx2_sso_xstats_name sso_hws_xstats[] = {
+	{"last_grp_serviced", offsetof(struct sso_hws_stats, arbitration),
+			0x3FF, 0, {0} },
+	{"affinity_arbitration_credits",
+			offsetof(struct sso_hws_stats, arbitration),
+			0xF, 16, {0} },
+};
+
+static struct otx2_sso_xstats_name sso_grp_xstats[] = {
+	{"wrk_sched", offsetof(struct sso_grp_stats, ws_pc), ~0x0, 0,
+			{0} },
+	{"xaq_dram", offsetof(struct sso_grp_stats, ext_pc), ~0x0,
+			0, {0} },
+	{"add_wrk", offsetof(struct sso_grp_stats, wa_pc), ~0x0, 0,
+			{0} },
+	{"tag_switch_req", offsetof(struct sso_grp_stats, ts_pc), ~0x0, 0,
+			{0} },
+	{"desched_req", offsetof(struct sso_grp_stats, ds_pc), ~0x0, 0,
+			{0} },
+	{"desched_wrk", offsetof(struct sso_grp_stats, dq_pc), ~0x0, 0,
+			{0} },
+	{"xaq_cached", offsetof(struct sso_grp_stats, aw_status), 0x3,
+			0, {0} },
+	{"work_inflight", offsetof(struct sso_grp_stats, aw_status), 0x3F,
+			16, {0} },
+	{"inuse_pages", offsetof(struct sso_grp_stats, page_cnt),
+			0xFFFFFFFF, 0, {0} },
+};
+
+#define OTX2_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
+#define OTX2_SSO_NUM_GRP_XSTATS RTE_DIM(sso_grp_xstats)
+
+#define OTX2_SSO_NUM_XSTATS (OTX2_SSO_NUM_HWS_XSTATS + OTX2_SSO_NUM_GRP_XSTATS)
+
+static int
+otx2_sso_xstats_get(const struct rte_eventdev *event_dev,
+		    enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
+		    const unsigned int ids[], uint64_t values[], unsigned int n)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	struct otx2_sso_xstats_name *xstats;
+	struct otx2_sso_xstats_name *xstat;
+	struct otx2_mbox *mbox = dev->mbox;
+	uint32_t xstats_mode_count = 0;
+	uint32_t start_offset = 0;
+	unsigned int i;
+	uint64_t value;
+	void *req_rsp;
+	int rc;
+
+	switch (mode) {
+	case RTE_EVENT_DEV_XSTATS_DEVICE:
+		break;
+	case RTE_EVENT_DEV_XSTATS_PORT:
+		if (queue_port_id >= (signed int)dev->nb_event_ports)
+			goto invalid_value;
+
+		xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
+		xstats = sso_hws_xstats;
+
+		req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
+		((struct sso_info_req *)req_rsp)->hws = queue_port_id;
+		rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
+		if (rc < 0)
+			goto invalid_value;
+
+		break;
+	case RTE_EVENT_DEV_XSTATS_QUEUE:
+		if (queue_port_id >= (signed int)dev->nb_event_queues)
+			goto invalid_value;
+
+		xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
+		start_offset = OTX2_SSO_NUM_HWS_XSTATS;
+		xstats = sso_grp_xstats;
+
+		req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
+		((struct sso_info_req *)req_rsp)->grp = queue_port_id;
+		rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
+		if (rc < 0)
+			goto invalid_value;
+
+		break;
+	default:
+		otx2_err("Invalid mode received");
+		goto invalid_value;
+	};
+
+	for (i = 0; i < n && i < xstats_mode_count; i++) {
+		xstat = &xstats[ids[i] - start_offset];
+		value = *(uint64_t *)((char *)req_rsp + xstat->offset);
+		value = (value >> xstat->shift) & xstat->mask;
+
+		values[i] = value;
+		values[i] -= xstat->reset_snap[queue_port_id];
+	}
+
+	return i;
+invalid_value:
+	return -EINVAL;
+}
+
+static int
+otx2_sso_xstats_reset(struct rte_eventdev *event_dev,
+		      enum rte_event_dev_xstats_mode mode,
+		      int16_t queue_port_id, const uint32_t ids[], uint32_t n)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	struct otx2_sso_xstats_name *xstats;
+	struct otx2_sso_xstats_name *xstat;
+	struct otx2_mbox *mbox = dev->mbox;
+	uint32_t xstats_mode_count = 0;
+	uint32_t start_offset = 0;
+	unsigned int i;
+	uint64_t value;
+	void *req_rsp;
+	int rc;
+
+	switch (mode) {
+	case RTE_EVENT_DEV_XSTATS_DEVICE:
+		return 0;
+	case RTE_EVENT_DEV_XSTATS_PORT:
+		if (queue_port_id >= (signed int)dev->nb_event_ports)
+			goto invalid_value;
+
+		xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
+		xstats = sso_hws_xstats;
+
+		req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
+		((struct sso_info_req *)req_rsp)->hws = queue_port_id;
+		rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
+		if (rc < 0)
+			goto invalid_value;
+
+		break;
+	case RTE_EVENT_DEV_XSTATS_QUEUE:
+		if (queue_port_id >= (signed int)dev->nb_event_queues)
+			goto invalid_value;
+
+		xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
+		start_offset = OTX2_SSO_NUM_HWS_XSTATS;
+		xstats = sso_grp_xstats;
+
+		req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
+		((struct sso_info_req *)req_rsp)->grp = queue_port_id;
+		rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
+		if (rc < 0)
+			goto invalid_value;
+
+		break;
+	default:
+		otx2_err("Invalid mode received");
+		goto invalid_value;
+	};
+
+	for (i = 0; i < n && i < xstats_mode_count; i++) {
+		xstat = &xstats[ids[i] - start_offset];
+		value = *(uint64_t *)((char *)req_rsp + xstat->offset);
+		value = (value >> xstat->shift) & xstat->mask;
+
+		xstat->reset_snap[queue_port_id] = value;
+	}
+	return i;
+invalid_value:
+	return -EINVAL;
+}
+
+static int
+otx2_sso_xstats_get_names(const struct rte_eventdev *event_dev,
+			  enum rte_event_dev_xstats_mode mode,
+			  uint8_t queue_port_id,
+			  struct rte_event_dev_xstats_name *xstats_names,
+			  unsigned int *ids, unsigned int size)
+{
+	struct rte_event_dev_xstats_name xstats_names_copy[OTX2_SSO_NUM_XSTATS];
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	uint32_t xstats_mode_count = 0;
+	uint32_t start_offset = 0;
+	unsigned int xidx = 0;
+	unsigned int i;
+
+	for (i = 0; i < OTX2_SSO_NUM_HWS_XSTATS; i++) {
+		snprintf(xstats_names_copy[i].name,
+			 sizeof(xstats_names_copy[i].name), "%s",
+			 sso_hws_xstats[i].name);
+	}
+
+	for (; i < OTX2_SSO_NUM_XSTATS; i++) {
+		snprintf(xstats_names_copy[i].name,
+			 sizeof(xstats_names_copy[i].name), "%s",
+			 sso_grp_xstats[i - OTX2_SSO_NUM_HWS_XSTATS].name);
+	}
+
+	switch (mode) {
+	case RTE_EVENT_DEV_XSTATS_DEVICE:
+		break;
+	case RTE_EVENT_DEV_XSTATS_PORT:
+		if (queue_port_id >= (signed int)dev->nb_event_ports)
+			break;
+		xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
+		break;
+	case RTE_EVENT_DEV_XSTATS_QUEUE:
+		if (queue_port_id >= (signed int)dev->nb_event_queues)
+			break;
+		xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
+		start_offset = OTX2_SSO_NUM_HWS_XSTATS;
+		break;
+	default:
+		otx2_err("Invalid mode received");
+		return -EINVAL;
+	};
+
+	if (xstats_mode_count > size || !ids || !xstats_names)
+		return xstats_mode_count;
+
+	for (i = 0; i < xstats_mode_count; i++) {
+		xidx = i + start_offset;
+		strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
+			sizeof(xstats_names[i].name));
+		ids[i] = xidx;
+	}
+
+	return i;
+}
+
+#endif
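
For reference, below is a minimal application-side sketch (not part of this
patch) of how the counters added here can be read through the generic
eventdev xstats API. The dev_id/port values and the array size of 64 are
placeholders; error handling is trimmed to the essentials.

/* Illustrative only: read and print the per-port SSO xstats exposed by
 * this patch via the generic eventdev xstats API.
 */
#include <inttypes.h>
#include <stdio.h>

#include <rte_common.h>
#include <rte_eventdev.h>

static void
dump_port_xstats(uint8_t dev_id, uint8_t port)
{
	struct rte_event_dev_xstats_name names[64];
	unsigned int ids[64];
	uint64_t values[64];
	int n, i;

	/* Query the names and ids exposed for this event port. */
	n = rte_event_dev_xstats_names_get(dev_id, RTE_EVENT_DEV_XSTATS_PORT,
					   port, names, ids, RTE_DIM(names));
	if (n <= 0 || n > (int)RTE_DIM(names))
		return;

	/* Fetch the current values for those ids and print them. */
	n = rte_event_dev_xstats_get(dev_id, RTE_EVENT_DEV_XSTATS_PORT,
				     port, ids, values, n);
	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n", names[i].name, values[i]);
}

The same pattern with RTE_EVENT_DEV_XSTATS_QUEUE reads the per-group
counters, and rte_event_dev_xstats_reset() records the snapshot that
subsequent reads are reported against.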