From patchwork Thu Aug 29 10:27:13 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sachin Saxena
X-Patchwork-Id: 58256
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Sachin Saxena
To: dev@dpdk.org
Cc: thomas@monjalon.net, Nipun Gupta
Date: Thu, 29 Aug 2019 15:57:13 +0530
Message-Id: <20190829102737.13267-7-sachin.saxena@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190829102737.13267-1-sachin.saxena@nxp.com>
References: <20190827070730.11206-1-sachin.saxena@nxp.com>
 <20190829102737.13267-1-sachin.saxena@nxp.com>
Subject: [dpdk-dev] [PATCH v2 06/30] net/dpaa: support for Rx interrupt
 enable and disable

From: Nipun Gupta

This patch adds support for dpaa eth driver interrupt enable and
disable callback functions.

Signed-off-by: Nipun Gupta
Acked-by: Hemant Agrawal
---
 drivers/bus/dpaa/base/qbman/qman.c        | 45 +++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c |  5 +++
 drivers/bus/dpaa/base/qbman/qman_priv.h   |  2 +
 drivers/bus/dpaa/include/fsl_usd.h        |  1 +
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  3 ++
 drivers/net/dpaa/dpaa_ethdev.c            | 39 +++++++++++++++++++-
 6 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 79017f7f2..96208bc40 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -664,6 +664,12 @@ qman_free_global_portal(struct qman_portal *portal)
 	return -1;
 }
 
+void
+qman_portal_uninhibit_isr(struct qman_portal *portal)
+{
+	qm_isr_uninhibit(&portal->p);
+}
+
 struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
 					      const struct qman_cgrs *cgrs)
 {
@@ -1053,6 +1059,20 @@ int qman_irqsource_add(u32 bits)
 	dpaa_set_bits(bits, &p->irq_sources);
 	qm_isr_enable_write(&p->p, p->irq_sources);
 
+	return 0;
+}
+
+int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits)
+{
+	bits = bits & QM_PIRQ_VISIBLE;
+
+	/* Clear any previously remaining interrupt conditions in
+	 * QCSP_ISR. This prevents raising a false interrupt when
+	 * interrupt conditions are enabled in QCSP_IER.
+	 */
+	qm_isr_status_clear(&p->p, bits);
+	dpaa_set_bits(bits, &p->irq_sources);
+	qm_isr_enable_write(&p->p, p->irq_sources);
 	return 0;
 }
 
@@ -1083,6 +1103,31 @@ int qman_irqsource_remove(u32 bits)
 	return 0;
 }
 
+int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits)
+{
+	u32 ier;
+
+	/* Our interrupt handler only processes+clears status register bits that
+	 * are in p->irq_sources. As we're trimming that mask, if one of them
+	 * were to assert in the status register just before we remove it from
+	 * the enable register, there would be an interrupt-storm when we
+	 * release the IRQ lock. So we wait for the enable register update to
+	 * take effect in h/w (by reading it back) and then clear all other bits
+	 * in the status register. Ie. we clear them from ISR once it's certain
+	 * IER won't allow them to reassert.
+	 */
+
+	bits &= QM_PIRQ_VISIBLE;
+	dpaa_clear_bits(bits, &p->irq_sources);
+	qm_isr_enable_write(&p->p, p->irq_sources);
+	ier = qm_isr_enable_read(&p->p);
+	/* Using "~ier" (rather than "bits" or "~p->irq_sources") creates a
+	 * data-dependency, ie. to protect against re-ordering.
+	 */
+	qm_isr_status_clear(&p->p, ~ier);
+	return 0;
+}
+
 u16 qman_affine_channel(int cpu)
 {
 	if (cpu < 0) {
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index acd003143..69244ef70 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -121,6 +121,11 @@ void qman_thread_irq(void)
 	out_be32(qpcfg.addr_virt[DPAA_PORTAL_CI] + 0x36C0, 0);
 }
 
+void qman_fq_portal_thread_irq(struct qman_portal *qp)
+{
+	qman_portal_uninhibit_isr(qp);
+}
+
 struct qman_portal *fsl_qman_fq_portal_create(int *fd)
 {
 	struct qman_portal *portal = NULL;
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index 97d5521a8..8254729e6 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -157,6 +157,8 @@ qman_init_portal(struct qman_portal *portal,
 struct qman_portal *qman_alloc_global_portal(struct qm_portal_config *q_pcfg);
 int qman_free_global_portal(struct qman_portal *portal);
 
+void qman_portal_uninhibit_isr(struct qman_portal *portal);
+
 struct qm_portal_config *qm_get_unused_portal(void);
 struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
 
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index a407e2b22..3c26d6ccb 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -67,6 +67,7 @@ int bman_thread_fd(void);
  */
 void qman_thread_irq(void);
 void bman_thread_irq(void);
+void qman_fq_portal_thread_irq(struct qman_portal *qp);
 
 void qman_clear_irq(void);
 
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index f779469f9..962b952d3 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -127,6 +127,9 @@ DPDK_19.05 {
 DPDK_19.11 {
 	global:
 	fsl_qman_fq_portal_create;
+	qman_fq_portal_irqsource_add;
+	qman_fq_portal_irqsource_remove;
+	qman_fq_portal_thread_irq;
 
 	local: *;
 } DPDK_19.05;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1934f85ae..42ab3d05f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2017 NXP
+ *   Copyright 2017-2019 NXP
  *
  */
 /* System headers */
@@ -1013,6 +1013,40 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int dpaa_dev_queue_intr_enable(struct rte_eth_dev *dev,
+				      uint16_t queue_id)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_id];
+
+	if (!rxq->is_static)
+		return -EINVAL;
+
+	return qman_fq_portal_irqsource_add(rxq->qp, QM_PIRQ_DQRI);
+}
+
+static int dpaa_dev_queue_intr_disable(struct rte_eth_dev *dev,
+				       uint16_t queue_id)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_id];
+	uint32_t temp;
+	ssize_t temp1;
+
+	if (!rxq->is_static)
+		return -EINVAL;
+
+	qman_fq_portal_irqsource_remove(rxq->qp, ~0);
+
+	temp1 = read(rxq->q_fd, &temp, sizeof(temp));
+	if (temp1 != sizeof(temp))
+		DPAA_EVENTDEV_ERR("irq read error");
+
+	qman_fq_portal_thread_irq(rxq->qp);
+
+	return 0;
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -1050,6 +1084,9 @@ static struct eth_dev_ops dpaa_devops = {
 	.mac_addr_set		  = dpaa_dev_set_mac_addr,
 
 	.fw_version_get		  = dpaa_fw_version_get,
+
+	.rx_queue_intr_enable	  = dpaa_dev_queue_intr_enable,
+	.rx_queue_intr_disable	  = dpaa_dev_queue_intr_disable,
 };
 
 static bool
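
For context, the new .rx_queue_intr_enable/.rx_queue_intr_disable callbacks are only
reached through the generic ethdev Rx-interrupt API. The sketch below is not part of
the patch; it shows one plausible way an application could drive them in the usual
l3fwd-power style: busy-poll while traffic arrives, then arm the queue interrupt and
sleep on the portal fd via rte_epoll_wait(). It assumes a port configured with
rte_eth_conf.intr_conf.rxq = 1 and already started; the function name
rx_loop_with_interrupts and the burst size are illustrative, and error handling is
omitted.

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_interrupts.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
rx_loop_with_interrupts(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	struct rte_epoll_event event;
	uint16_t nb_rx, i;

	/* Attach the queue's Rx interrupt to this lcore's epoll instance. */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	for (;;) {
		nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
		if (nb_rx != 0) {
			for (i = 0; i < nb_rx; i++)
				rte_pktmbuf_free(pkts[i]); /* stand-in for real work */
			continue;
		}

		/* Queue idle: arm the interrupt and sleep on the portal fd. */
		rte_eth_dev_rx_intr_enable(port_id, queue_id);
		rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, 10 /* ms */);

		/* Woken up (or timed out): go back to busy polling. */
		rte_eth_dev_rx_intr_disable(port_id, queue_id);
	}
}

With this patch, the enable call above lands in dpaa_dev_queue_intr_enable(), which
arms QM_PIRQ_DQRI on the queue's portal, while the disable call lands in
dpaa_dev_queue_intr_disable(), which masks all portal interrupt sources, drains the
event count from rxq->q_fd and re-uninhibits the portal ISR through
qman_fq_portal_thread_irq().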