From patchwork Thu Apr 1 12:37:35 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90383
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:35 +0530
Message-ID: <20210401123817.14348-11-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com>
 <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 10/52] common/cnxk: add npa irq support
List-Id: DPDK patches and discussions

From: Ashwin Sekhar T K

Add support for NPA IRQs.
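The patch registers three classes of NPA LF interrupts relative to the LF's
MSI-X offset: the LF error interrupt (NPA_LF_INT_VEC_ERR_INT), the RAS/poison
interrupt (NPA_LF_INT_VEC_POISON), and one queue interrupt (QINT) per
pool/aura up to the number of QINTs supported by the LF. A simplified outline
of the registration flow (condensed from npa_register_irqs() in
roc_npa_irq.c of this patch; register writes and error handling omitted):

	/* Outline only, not the literal code: each helper masks the
	 * interrupt (W1C), hooks its handler on the MSI-X vector via
	 * dev_irq_register(), then re-enables it in hardware (W1S).
	 */
	rc = npa_register_err_irq(lf);     /* npa_msixoff + NPA_LF_INT_VEC_ERR_INT */
	rc |= npa_register_ras_irq(lf);    /* npa_msixoff + NPA_LF_INT_VEC_POISON */
	rc |= npa_register_queue_irqs(lf); /* npa_msixoff + NPA_LF_INT_VEC_QINT_START + q */

npa_lf_init() now fails and unwinds through npa_dev_fini() if IRQ
registration fails, and npa_lf_fini() unregisters the IRQs before teardown.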
Signed-off-by: Ashwin Sekhar T K
---
 drivers/common/cnxk/meson.build    |   1 +
 drivers/common/cnxk/roc_npa.c      |   7 +
 drivers/common/cnxk/roc_npa_irq.c  | 297 +++++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_npa_priv.h |   4 +
 4 files changed, 309 insertions(+)
 create mode 100644 drivers/common/cnxk/roc_npa_irq.c

diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 2aeed3e..f8b777a 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files('roc_dev.c',
 		'roc_mbox.c',
 		'roc_model.c',
 		'roc_npa.c',
+		'roc_npa_irq.c',
 		'roc_platform.c',
 		'roc_utils.c')
 includes += include_directories('../../bus/pci')
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index 2aa726b..0d4a56a 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -242,11 +242,17 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
 	idev->npa = lf;
 	plt_wmb();
 
+	rc = npa_register_irqs(lf);
+	if (rc)
+		goto npa_fini;
+
 	plt_npa_dbg("npa=%p max_pools=%d pf_func=0x%x msix=0x%x", lf,
 		    roc_idev_npa_maxpools_get(), lf->pf_func, npa_msixoff);
 
 	return 0;
 
+npa_fini:
+	npa_dev_fini(idev->npa);
 npa_detach:
 	npa_detach(dev->mbox);
 fail:
@@ -268,6 +274,7 @@ npa_lf_fini(void)
 	if (__atomic_sub_fetch(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST) != 0)
 		return 0;
 
+	npa_unregister_irqs(idev->npa);
 	rc |= npa_dev_fini(idev->npa);
 	rc |= npa_detach(idev->npa->mbox);
 	idev_set_defaults(idev);
diff --git a/drivers/common/cnxk/roc_npa_irq.c b/drivers/common/cnxk/roc_npa_irq.c
new file mode 100644
index 0000000..2d1e535
--- /dev/null
+++ b/drivers/common/cnxk/roc_npa_irq.c
@@ -0,0 +1,297 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static void
+npa_err_irq(void *param)
+{
+	struct npa_lf *lf = (struct npa_lf *)param;
+	uint64_t intr;
+
+	intr = plt_read64(lf->base + NPA_LF_ERR_INT);
+	if (intr == 0)
+		return;
+
+	plt_err("Err_intr=0x%" PRIx64 "", intr);
+
+	/* Clear interrupt */
+	plt_write64(intr, lf->base + NPA_LF_ERR_INT);
+}
+
+static int
+npa_register_err_irq(struct npa_lf *lf)
+{
+	struct plt_intr_handle *handle = lf->intr_handle;
+	int rc, vec;
+
+	vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
+
+	/* Clear err interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
+	/* Register err interrupt vector */
+	rc = dev_irq_register(handle, npa_err_irq, lf, vec);
+
+	/* Enable hw interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S);
+
+	return rc;
+}
+
+static void
+npa_unregister_err_irq(struct npa_lf *lf)
+{
+	struct plt_intr_handle *handle = lf->intr_handle;
+	int vec;
+
+	vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
+
+	/* Clear err interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
+	dev_irq_unregister(handle, npa_err_irq, lf, vec);
+}
+
+static void
+npa_ras_irq(void *param)
+{
+	struct npa_lf *lf = (struct npa_lf *)param;
+	uint64_t intr;
+
+	intr = plt_read64(lf->base + NPA_LF_RAS);
+	if (intr == 0)
+		return;
+
+	plt_err("Ras_intr=0x%" PRIx64 "", intr);
+
+	/* Clear interrupt */
+	plt_write64(intr, lf->base + NPA_LF_RAS);
+}
+
+static int
+npa_register_ras_irq(struct npa_lf *lf)
+{
+	struct plt_intr_handle *handle = lf->intr_handle;
+	int rc, vec;
+
+	vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
+
+	/* Clear err interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
+	/* Set used interrupt vectors */
+	rc = dev_irq_register(handle, npa_ras_irq, lf, vec);
+	/* Enable hw interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1S);
+
+	return rc;
+}
+
+static void
+npa_unregister_ras_irq(struct npa_lf *lf)
+{
+	int vec;
+	struct plt_intr_handle *handle = lf->intr_handle;
+
+	vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
+
+	/* Clear err interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
+	dev_irq_unregister(handle, npa_ras_irq, lf, vec);
+}
+
+static inline uint8_t
+npa_q_irq_get_and_clear(struct npa_lf *lf, uint32_t q, uint32_t off,
+			uint64_t mask)
+{
+	uint64_t reg, wdata;
+	uint8_t qint;
+
+	wdata = (uint64_t)q << 44;
+	reg = roc_atomic64_add_nosync(wdata, (int64_t *)(lf->base + off));
+
+	if (reg & BIT_ULL(42) /* OP_ERR */) {
+		plt_err("Failed execute irq get off=0x%x", off);
+		return 0;
+	}
+
+	qint = reg & 0xff;
+	wdata &= mask;
+	plt_write64(wdata | qint, lf->base + off);
+
+	return qint;
+}
+
+static inline uint8_t
+npa_pool_irq_get_and_clear(struct npa_lf *lf, uint32_t p)
+{
+	return npa_q_irq_get_and_clear(lf, p, NPA_LF_POOL_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+npa_aura_irq_get_and_clear(struct npa_lf *lf, uint32_t a)
+{
+	return npa_q_irq_get_and_clear(lf, a, NPA_LF_AURA_OP_INT, ~0xff00);
+}
+
+static void
+npa_q_irq(void *param)
+{
+	struct npa_qint *qint = (struct npa_qint *)param;
+	struct npa_lf *lf = qint->lf;
+	uint8_t irq, qintx = qint->qintx;
+	uint32_t q, pool, aura;
+	uint64_t intr;
+
+	intr = plt_read64(lf->base + NPA_LF_QINTX_INT(qintx));
+	if (intr == 0)
+		return;
+
+	plt_err("queue_intr=0x%" PRIx64 " qintx=%d", intr, qintx);
+
+	/* Handle pool queue interrupts */
+	for (q = 0; q < lf->nr_pools; q++) {
+		/* Skip disabled POOL */
+		if (plt_bitmap_get(lf->npa_bmp, q))
+			continue;
+
+		pool = q % lf->qints;
+		irq = npa_pool_irq_get_and_clear(lf, pool);
+
+		if (irq & BIT_ULL(NPA_POOL_ERR_INT_OVFLS))
+			plt_err("Pool=%d NPA_POOL_ERR_INT_OVFLS", pool);
+
+		if (irq & BIT_ULL(NPA_POOL_ERR_INT_RANGE))
+			plt_err("Pool=%d NPA_POOL_ERR_INT_RANGE", pool);
+
+		if (irq & BIT_ULL(NPA_POOL_ERR_INT_PERR))
+			plt_err("Pool=%d NPA_POOL_ERR_INT_PERR", pool);
+	}
+
+	/* Handle aura queue interrupts */
+	for (q = 0; q < lf->nr_pools; q++) {
+		/* Skip disabled AURA */
+		if (plt_bitmap_get(lf->npa_bmp, q))
+			continue;
+
+		aura = q % lf->qints;
+		irq = npa_aura_irq_get_and_clear(lf, aura);
+
+		if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_OVER))
+			plt_err("Aura=%d NPA_AURA_ERR_INT_ADD_OVER", aura);
+
+		if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_UNDER))
+			plt_err("Aura=%d NPA_AURA_ERR_INT_ADD_UNDER", aura);
+
+		if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_FREE_UNDER))
+			plt_err("Aura=%d NPA_AURA_ERR_INT_FREE_UNDER", aura);
+
+		if (irq & BIT_ULL(NPA_AURA_ERR_INT_POOL_DIS))
+			plt_err("Aura=%d NPA_AURA_ERR_POOL_DIS", aura);
+	}
+
+	/* Clear interrupt */
+	plt_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx));
+}
+
+static int
+npa_register_queue_irqs(struct npa_lf *lf)
+{
+	struct plt_intr_handle *handle = lf->intr_handle;
+	int vec, q, qs, rc = 0;
+
+	/* Figure out max qintx required */
+	qs = PLT_MIN(lf->qints, lf->nr_pools);
+
+	for (q = 0; q < qs; q++) {
+		vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
+
+		/* Clear QINT CNT */
+		plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
+
+		/* Clear interrupt */
+		plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
+
+		struct npa_qint *qintmem = lf->npa_qint_mem;
+
+		qintmem += q;
+
+		qintmem->lf = lf;
+		qintmem->qintx = q;
+
+		/* Sync qints_mem update */
+		plt_wmb();
+
+		/* Register queue irq vector */
+		rc = dev_irq_register(handle, npa_q_irq, qintmem, vec);
+		if (rc)
+			break;
+
+		plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
+		plt_write64(0, lf->base + NPA_LF_QINTX_INT(q));
+		/* Enable QINT interrupt */
+		plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1S(q));
+	}
+
+	return rc;
+}
+
+static void
+npa_unregister_queue_irqs(struct npa_lf *lf)
+{
+	struct plt_intr_handle *handle = lf->intr_handle;
+	int vec, q, qs;
+
+	/* Figure out max qintx required */
+	qs = PLT_MIN(lf->qints, lf->nr_pools);
+
+	for (q = 0; q < qs; q++) {
+		vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
+
+		/* Clear QINT CNT */
+		plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
+		plt_write64(0, lf->base + NPA_LF_QINTX_INT(q));
+
+		/* Clear interrupt */
+		plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
+
+		struct npa_qint *qintmem = lf->npa_qint_mem;
+
+		qintmem += q;
+
+		/* Unregister queue irq vector */
+		dev_irq_unregister(handle, npa_q_irq, qintmem, vec);
+
+		qintmem->lf = NULL;
+		qintmem->qintx = 0;
+	}
+}
+
+int
+npa_register_irqs(struct npa_lf *lf)
+{
+	int rc;
+
+	if (lf->npa_msixoff == MSIX_VECTOR_INVALID) {
+		plt_err("Invalid NPALF MSIX vector offset vector: 0x%x",
+			lf->npa_msixoff);
+		return NPA_ERR_PARAM;
+	}
+
+	/* Register lf err interrupt */
+	rc = npa_register_err_irq(lf);
+	/* Register RAS interrupt */
+	rc |= npa_register_ras_irq(lf);
+	/* Register queue interrupts */
+	rc |= npa_register_queue_irqs(lf);
+
+	return rc;
+}
+
+void
+npa_unregister_irqs(struct npa_lf *lf)
+{
+	npa_unregister_err_irq(lf);
+	npa_unregister_ras_irq(lf);
+	npa_unregister_queue_irqs(lf);
+}
diff --git a/drivers/common/cnxk/roc_npa_priv.h b/drivers/common/cnxk/roc_npa_priv.h
index dd6981f..5a02a61 100644
--- a/drivers/common/cnxk/roc_npa_priv.h
+++ b/drivers/common/cnxk/roc_npa_priv.h
@@ -56,4 +56,8 @@ roc_npa_to_npa_priv(struct roc_npa *roc_npa)
 int npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev);
 int npa_lf_fini(void);
 
+/* IRQ */
+int npa_register_irqs(struct npa_lf *lf);
+void npa_unregister_irqs(struct npa_lf *lf);
+
 #endif /* _ROC_NPA_PRIV_H_ */