From patchwork Thu Dec 8 19:56:40 2022
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 120600
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Thomas Monjalon, Srikanth Yalavarthi, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v1 1/1] common/cnxk: add ML headers and ROC code for cnxk
Date: Thu, 8 Dec 2022 11:56:40 -0800
Message-ID: <20221208195640.26170-1-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.17.1
List-Id: DPDK patches and discussions

Added ML cnxk headers for register, structure definitions and ROC layer. Implemented ROC functions, registered logtype for ML module with the name pmd.ml.cnxk and defined ML hardware ID.
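To illustrate how a PMD is expected to consume this ROC layer, a minimal init-time sketch follows. Only the roc_ml_* calls, ML_CFG and plt_ml_dbg come from this patch; the function name and the surrounding PMD glue are hypothetical.

#include "roc_api.h"

/* Minimal sketch, not part of this patch: the PMD-side name and flow are
 * illustrative only; roc_ml_* APIs and ML_CFG are defined in this series.
 */
static int
ml_pmd_roc_init_sketch(struct plt_pci_device *pci_dev, struct roc_ml *roc_ml)
{
	int ret;

	roc_ml->pci_dev = pci_dev;

	/* Sets up the BAR0 register base, the private struct ml and spinlocks. */
	ret = roc_ml_dev_init(roc_ml);
	if (ret != 0)
		return ret;

	/* Force MLIP into a known state before firmware load. */
	ret = roc_ml_mlip_reset(roc_ml, true);
	if (ret != 0)
		return ret;

	plt_ml_dbg("ML_CFG = 0x%016lx, MLIP enabled = %u",
		   roc_ml_reg_read64(roc_ml, ML_CFG),
		   roc_ml_mlip_is_enabled(roc_ml) ? 1U : 0U);

	return 0;
}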
Signed-off-by: Srikanth Yalavarthi --- Depends-on: series-26047 ("implementation of ML common code") MAINTAINERS | 4 + drivers/common/cnxk/hw/ml.h | 170 ++++++++ drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_api.h | 4 + drivers/common/cnxk/roc_constants.h | 2 + drivers/common/cnxk/roc_dev_priv.h | 1 + drivers/common/cnxk/roc_ml.c | 626 ++++++++++++++++++++++++++++ drivers/common/cnxk/roc_ml.h | 152 +++++++ drivers/common/cnxk/roc_ml_priv.h | 24 ++ drivers/common/cnxk/roc_platform.c | 1 + drivers/common/cnxk/roc_platform.h | 2 + drivers/common/cnxk/roc_priv.h | 3 + drivers/common/cnxk/version.map | 29 ++ 13 files changed, 1019 insertions(+) create mode 100644 drivers/common/cnxk/hw/ml.h create mode 100644 drivers/common/cnxk/roc_ml.c create mode 100644 drivers/common/cnxk/roc_ml.h create mode 100644 drivers/common/cnxk/roc_ml_priv.h -- 2.17.1 diff --git a/MAINTAINERS b/MAINTAINERS index 6412209bff..8cdb3e215d 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1438,6 +1438,10 @@ ML common code M: Srikanth Yalavarthi F: drivers/common/ml/ +Marvell ML CNXK +M: Srikanth Yalavarthi +F: drivers/common/cnxk/hw/ml.h +F: drivers/common/cnxk/roc_ml* Packet processing ----------------- diff --git a/drivers/common/cnxk/hw/ml.h b/drivers/common/cnxk/hw/ml.h new file mode 100644 index 0000000000..3ead42b807 --- /dev/null +++ b/drivers/common/cnxk/hw/ml.h @@ -0,0 +1,170 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#ifndef __ML_HW_H__ +#define __ML_HW_H__ + +#include + +/* Constants */ +#define ML_ANBX_NR 0x3 + +/* Base offsets */ +#define ML_MLAB_BLK_OFFSET 0x20000000 /* CNF10KB */ +#define ML_AXI_START_ADDR 0x800000000 + +/* MLW register offsets / ML_PF_BAR0 */ +#define ML_CFG 0x10000 +#define ML_MLR_BASE 0x10008 +#define ML_AXI_BRIDGE_CTRL(a) (0x10020 | (uint64_t)(a) << 3) +#define ML_JOB_MGR_CTRL 0x10060 +#define ML_CORE_INT_LO 0x10140 +#define ML_CORE_INT_HI 0x10160 +#define ML_JCMDQ_IN(a) (0x11000 | (uint64_t)(a) << 3) /* CN10KA */ +#define ML_JCMDQ_STATUS 0x11010 /* CN10KA */ +#define ML_STGX_STATUS(a) (0x11020 | (uint64_t)(a) << 3) /* CNF10KB */ +#define ML_STG_CONTROL 0x11100 /* CNF10KB */ +#define ML_PNB_CMD_TYPE 0x113a0 /* CNF10KB */ +#define ML_SCRATCH(a) (0x14000 | (uint64_t)(a) << 3) +#define ML_ANBX_BACKP_DISABLE(a) (0x18000 | (uint64_t)(a) << 12) /* CN10KA */ +#define ML_ANBX_NCBI_P_OVR(a) (0x18010 | (uint64_t)(a) << 12) /* CN10KA */ +#define ML_ANBX_NCBI_NP_OVR(a) (0x18020 | (uint64_t)(a) << 12) /* CN10KA */ + +/* MLIP configuration register offsets / ML_PF_BAR0 */ +#define ML_SW_RST_CTRL 0x12084000 +#define ML_A35_0_RST_VECTOR_BASE_W(a) (0x12084014 + (a) * (0x04)) +#define ML_A35_1_RST_VECTOR_BASE_W(a) (0x1208401c + (a) * (0x04)) + +/* MLW scratch register offsets */ +#define ML_SCRATCH_WORK_PTR (ML_SCRATCH(0)) +#define ML_SCRATCH_FW_CTRL (ML_SCRATCH(1)) +#define ML_SCRATCH_DBG_BUFFER_HEAD_C0 (ML_SCRATCH(2)) +#define ML_SCRATCH_DBG_BUFFER_TAIL_C0 (ML_SCRATCH(3)) +#define ML_SCRATCH_DBG_BUFFER_HEAD_C1 (ML_SCRATCH(4)) +#define ML_SCRATCH_DBG_BUFFER_TAIL_C1 (ML_SCRATCH(5)) +#define ML_SCRATCH_EXCEPTION_SP_C0 (ML_SCRATCH(6)) +#define ML_SCRATCH_EXCEPTION_SP_C1 (ML_SCRATCH(7)) + +/* ML job completion structure */ +struct ml_jce_s { + /* WORD 0 */ + union ml_jce_w0 { + struct { + uint64_t rsvd_0_3 : 4; + + /* Reserved for future architecture */ + uint64_t ggrp_h : 2; + + /* Tag type */ + uint64_t ttype : 2; + + /* Physical function number */ + uint64_t pf_func : 16; + + /* Unused [7] + Guest Group [6:0] */ + uint64_t ggrp : 8; + + /* Tag */ + 
uint64_t tag : 32; + } s; + uint64_t u64; + } w0; + + /* WORD 1 */ + union ml_jce_w1 { + struct { + /* Work queue pointer */ + uint64_t wqp : 53; + uint64_t rsvd_53_63 : 11; + + } s; + uint64_t u64; + } w1; +}; + +/* ML job command structure */ +struct ml_job_cmd_s { + /* WORD 0 */ + union ml_job_cmd_w0 { + struct { + uint64_t rsvd_0_63; + } s; + uint64_t u64; + } w0; + + /* WORD 1 */ + union ml_job_cmd_w1 { + struct { + /* Job pointer */ + uint64_t jobptr : 53; + uint64_t rsvd_53_63 : 11; + } s; + uint64_t u64; + } w1; +}; + +/* ML A35 0 RST vector base structure */ +union ml_a35_0_rst_vector_base_s { + struct { + /* Base address */ + uint64_t addr : 37; + uint64_t rsvd_37_63 : 27; + } s; + + struct { + /* WORD 0 */ + uint32_t w0; + + /* WORD 1 */ + uint32_t w1; + } w; + + uint64_t u64; +}; + +/* ML A35 1 RST vector base structure */ +union ml_a35_1_rst_vector_base_s { + struct { + /* Base address */ + uint64_t addr : 37; + uint64_t rsvd_37_63 : 27; + } s; + + struct { + /* WORD 0 */ + uint32_t w0; + + /* WORD 1 */ + uint32_t w1; + } w; + + uint64_t u64; +}; + +/* Work pointer scratch register */ +union ml_scratch_work_ptr_s { + struct { + /* Work pointer */ + uint64_t work_ptr : 37; + uint64_t rsvd_37_63 : 27; + } s; + uint64_t u64; +}; + +/* Firmware control scratch register */ +union ml_scratch_fw_ctrl_s { + struct { + uint64_t rsvd_0_15 : 16; + + /* Valid job bit */ + uint64_t valid : 1; + + /* Done status bit */ + uint64_t done : 1; + uint64_t rsvd_18_63 : 46; + } s; + uint64_t u64; +}; + +#endif /* __ML_HW_H__ */ diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 849735921c..b4aa0a050c 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -26,6 +26,7 @@ sources = files( 'roc_irq.c', 'roc_ie_ot.c', 'roc_mbox.c', + 'roc_ml.c', 'roc_model.c', 'roc_nix.c', 'roc_nix_bpf.c', diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h index 072f16d77d..fdddf8c6c7 100644 --- a/drivers/common/cnxk/roc_api.h +++ b/drivers/common/cnxk/roc_api.h @@ -34,6 +34,7 @@ /* HW structure definition */ #include "hw/cpt.h" #include "hw/dpi.h" +#include "hw/ml.h" #include "hw/nix.h" #include "hw/npa.h" #include "hw/npc.h" @@ -106,4 +107,7 @@ /* NIX Inline dev */ #include "roc_nix_inl.h" +/* ML */ +#include "roc_ml.h" + #endif /* _ROC_API_H_ */ diff --git a/drivers/common/cnxk/roc_constants.h b/drivers/common/cnxk/roc_constants.h index 0495965daa..ddaef133b8 100644 --- a/drivers/common/cnxk/roc_constants.h +++ b/drivers/common/cnxk/roc_constants.h @@ -50,6 +50,8 @@ #define PCI_DEVID_CN10K_RVU_CPT_PF 0xA0F2 #define PCI_DEVID_CN10K_RVU_CPT_VF 0xA0F3 +#define PCI_DEVID_CN10K_ML_PF 0xA092 + #define PCI_SUBSYSTEM_DEVID_CN10KA 0xB900 #define PCI_SUBSYSTEM_DEVID_CN10KAS 0xB900 #define PCI_SUBSYSTEM_DEVID_CNF10KA 0xBA00 diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h index 302dc0feb0..55700dc851 100644 --- a/drivers/common/cnxk/roc_dev_priv.h +++ b/drivers/common/cnxk/roc_dev_priv.h @@ -89,6 +89,7 @@ struct dev { struct dev_ops *ops; void *roc_nix; void *roc_cpt; + void *roc_ml; bool disable_shared_lmt; /* false(default): shared lmt mode enabled */ const struct plt_memzone *lmt_mz; } __plt_cache_aligned; diff --git a/drivers/common/cnxk/roc_ml.c b/drivers/common/cnxk/roc_ml.c new file mode 100644 index 0000000000..1950258f58 --- /dev/null +++ b/drivers/common/cnxk/roc_ml.c @@ -0,0 +1,626 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +#define TIME_SEC_IN_MS 10000 + +static int +roc_ml_reg_wait_to_clear(struct roc_ml *roc_ml, uint64_t offset, uint64_t mask) +{ + uint64_t start_cycle; + uint64_t wait_cycles; + uint64_t reg_val; + + wait_cycles = (ROC_ML_TIMEOUT_MS * plt_tsc_hz()) / TIME_SEC_IN_MS; + start_cycle = plt_tsc_cycles(); + do { + reg_val = roc_ml_reg_read64(roc_ml, offset); + + if (!(reg_val & mask)) + return 0; + } while (plt_tsc_cycles() - start_cycle < wait_cycles); + + return -ETIME; +} + +uint64_t +roc_ml_reg_read64(struct roc_ml *roc_ml, uint64_t offset) +{ + struct ml *ml = roc_ml_to_ml_priv(roc_ml); + + return plt_read64(PLT_PTR_ADD(ml->ml_reg_addr, offset)); +} + +void +roc_ml_reg_write64(struct roc_ml *roc_ml, uint64_t val, uint64_t offset) +{ + struct ml *ml = roc_ml_to_ml_priv(roc_ml); + + plt_write64(val, PLT_PTR_ADD(ml->ml_reg_addr, offset)); +} + +uint32_t +roc_ml_reg_read32(struct roc_ml *roc_ml, uint64_t offset) +{ + struct ml *ml = roc_ml_to_ml_priv(roc_ml); + + return plt_read32(PLT_PTR_ADD(ml->ml_reg_addr, offset)); +} + +void +roc_ml_reg_write32(struct roc_ml *roc_ml, uint32_t val, uint64_t offset) +{ + struct ml *ml = roc_ml_to_ml_priv(roc_ml); + + plt_write32(val, PLT_PTR_ADD(ml->ml_reg_addr, offset)); +} + +void +roc_ml_reg_save(struct roc_ml *roc_ml, uint64_t offset) +{ + struct ml *ml = roc_ml_to_ml_priv(roc_ml); + + if (offset == ML_MLR_BASE) { + ml->ml_mlr_base = + FIELD_GET(ROC_ML_MLR_BASE_BASE, roc_ml_reg_read64(roc_ml, offset)); + ml->ml_mlr_base_saved = true; + } +} + +void * +roc_ml_addr_ap2mlip(struct roc_ml *roc_ml, void *addr) +{ + struct ml *ml = roc_ml_to_ml_priv(roc_ml); + uint64_t ml_mlr_base; + + ml_mlr_base = (ml->ml_mlr_base_saved) ? ml->ml_mlr_base : + FIELD_GET(ROC_ML_MLR_BASE_BASE, + roc_ml_reg_read64(roc_ml, ML_MLR_BASE)); + return PLT_PTR_ADD(addr, ML_AXI_START_ADDR - ml_mlr_base); +} + +void * +roc_ml_addr_mlip2ap(struct roc_ml *roc_ml, void *addr) +{ + struct ml *ml = roc_ml_to_ml_priv(roc_ml); + uint64_t ml_mlr_base; + + ml_mlr_base = (ml->ml_mlr_base_saved) ? 
ml->ml_mlr_base : + FIELD_GET(ROC_ML_MLR_BASE_BASE, + roc_ml_reg_read64(roc_ml, ML_MLR_BASE)); + return PLT_PTR_ADD(addr, ml_mlr_base - ML_AXI_START_ADDR); +} + +uint64_t +roc_ml_addr_pa_to_offset(struct roc_ml *roc_ml, uint64_t phys_addr) +{ + struct ml *ml = roc_ml_to_ml_priv(roc_ml); + + if (roc_model_is_cn10ka()) + return phys_addr - ml->pci_dev->mem_resource[0].phys_addr; + else + return phys_addr - ml->pci_dev->mem_resource[0].phys_addr - ML_MLAB_BLK_OFFSET; +} + +uint64_t +roc_ml_addr_offset_to_pa(struct roc_ml *roc_ml, uint64_t offset) +{ + struct ml *ml = roc_ml_to_ml_priv(roc_ml); + + if (roc_model_is_cn10ka()) + return ml->pci_dev->mem_resource[0].phys_addr + offset; + else + return ml->pci_dev->mem_resource[0].phys_addr + ML_MLAB_BLK_OFFSET + offset; +} + +void +roc_ml_scratch_write_job(struct roc_ml *roc_ml, void *work_ptr) +{ + union ml_scratch_work_ptr_s reg_work_ptr; + union ml_scratch_fw_ctrl_s reg_fw_ctrl; + + reg_work_ptr.u64 = 0; + reg_work_ptr.s.work_ptr = PLT_U64_CAST(roc_ml_addr_ap2mlip(roc_ml, work_ptr)); + + reg_fw_ctrl.u64 = 0; + reg_fw_ctrl.s.valid = 1; + + roc_ml_reg_write64(roc_ml, reg_work_ptr.u64, ML_SCRATCH_WORK_PTR); + roc_ml_reg_write64(roc_ml, reg_fw_ctrl.u64, ML_SCRATCH_FW_CTRL); +} + +bool +roc_ml_scratch_is_valid_bit_set(struct roc_ml *roc_ml) +{ + union ml_scratch_fw_ctrl_s reg_fw_ctrl; + + reg_fw_ctrl.u64 = roc_ml_reg_read64(roc_ml, ML_SCRATCH_FW_CTRL); + + if (reg_fw_ctrl.s.valid == 1) + return true; + + return false; +} + +bool +roc_ml_scratch_is_done_bit_set(struct roc_ml *roc_ml) +{ + union ml_scratch_fw_ctrl_s reg_fw_ctrl; + + reg_fw_ctrl.u64 = roc_ml_reg_read64(roc_ml, ML_SCRATCH_FW_CTRL); + + if (reg_fw_ctrl.s.done == 1) + return true; + + return false; +} + +bool +roc_ml_scratch_enqueue(struct roc_ml *roc_ml, void *work_ptr) +{ + union ml_scratch_work_ptr_s reg_work_ptr; + union ml_scratch_fw_ctrl_s reg_fw_ctrl; + bool ret = false; + + reg_work_ptr.u64 = 0; + reg_work_ptr.s.work_ptr = PLT_U64_CAST(roc_ml_addr_ap2mlip(roc_ml, work_ptr)); + + reg_fw_ctrl.u64 = 0; + reg_fw_ctrl.s.valid = 1; + + if (plt_spinlock_trylock(&roc_ml->sp_spinlock) != 0) { + bool valid = roc_ml_scratch_is_valid_bit_set(roc_ml); + bool done = roc_ml_scratch_is_done_bit_set(roc_ml); + + if (valid == done) { + roc_ml_clk_force_on(roc_ml); + roc_ml_dma_stall_off(roc_ml); + + roc_ml_reg_write64(roc_ml, reg_work_ptr.u64, ML_SCRATCH_WORK_PTR); + roc_ml_reg_write64(roc_ml, reg_fw_ctrl.u64, ML_SCRATCH_FW_CTRL); + + ret = true; + } + plt_spinlock_unlock(&roc_ml->sp_spinlock); + } + + return ret; +} + +bool +roc_ml_scratch_dequeue(struct roc_ml *roc_ml, void *work_ptr) +{ + union ml_scratch_work_ptr_s reg_work_ptr; + bool ret = false; + + if (plt_spinlock_trylock(&roc_ml->sp_spinlock) != 0) { + bool valid = roc_ml_scratch_is_valid_bit_set(roc_ml); + bool done = roc_ml_scratch_is_done_bit_set(roc_ml); + + if (valid && done) { + reg_work_ptr.u64 = roc_ml_reg_read64(roc_ml, ML_SCRATCH_WORK_PTR); + if (work_ptr == + roc_ml_addr_mlip2ap(roc_ml, PLT_PTR_CAST(reg_work_ptr.u64))) { + roc_ml_dma_stall_on(roc_ml); + roc_ml_clk_force_off(roc_ml); + + roc_ml_reg_write64(roc_ml, 0, ML_SCRATCH_WORK_PTR); + roc_ml_reg_write64(roc_ml, 0, ML_SCRATCH_FW_CTRL); + ret = true; + } + } + plt_spinlock_unlock(&roc_ml->sp_spinlock); + } + + return ret; +} + +void +roc_ml_scratch_queue_reset(struct roc_ml *roc_ml) +{ + if (plt_spinlock_trylock(&roc_ml->sp_spinlock) != 0) { + roc_ml_dma_stall_on(roc_ml); + roc_ml_clk_force_off(roc_ml); + roc_ml_reg_write64(roc_ml, 0, ML_SCRATCH_WORK_PTR); + 
roc_ml_reg_write64(roc_ml, 0, ML_SCRATCH_FW_CTRL); + plt_spinlock_unlock(&roc_ml->sp_spinlock); + } +} + +bool +roc_ml_jcmdq_enqueue_lf(struct roc_ml *roc_ml, struct ml_job_cmd_s *job_cmd) +{ + bool ret = false; + + if (FIELD_GET(ROC_ML_JCMDQ_STATUS_AVAIL_COUNT, + roc_ml_reg_read64(roc_ml, ML_JCMDQ_STATUS)) != 0) { + roc_ml_reg_write64(roc_ml, job_cmd->w0.u64, ML_JCMDQ_IN(0)); + roc_ml_reg_write64(roc_ml, job_cmd->w1.u64, ML_JCMDQ_IN(1)); + ret = true; + } + + return ret; +} + +bool +roc_ml_jcmdq_enqueue_sl(struct roc_ml *roc_ml, struct ml_job_cmd_s *job_cmd) +{ + bool ret = false; + + if (plt_spinlock_trylock(&roc_ml->fp_spinlock) != 0) { + if (FIELD_GET(ROC_ML_JCMDQ_STATUS_AVAIL_COUNT, + roc_ml_reg_read64(roc_ml, ML_JCMDQ_STATUS)) != 0) { + roc_ml_reg_write64(roc_ml, job_cmd->w0.u64, ML_JCMDQ_IN(0)); + roc_ml_reg_write64(roc_ml, job_cmd->w1.u64, ML_JCMDQ_IN(1)); + ret = true; + } + plt_spinlock_unlock(&roc_ml->fp_spinlock); + } + + return ret; +} + +void +roc_ml_clk_force_on(struct roc_ml *roc_ml) +{ + uint64_t reg_val = 0; + + reg_val = roc_ml_reg_read64(roc_ml, ML_CFG); + reg_val |= ROC_ML_CFG_MLIP_CLK_FORCE; + roc_ml_reg_write64(roc_ml, reg_val, ML_CFG); +} + +void +roc_ml_clk_force_off(struct roc_ml *roc_ml) +{ + uint64_t reg_val = 0; + + roc_ml_reg_write64(roc_ml, 0, ML_SCRATCH_WORK_PTR); + + reg_val = roc_ml_reg_read64(roc_ml, ML_CFG); + reg_val &= ~ROC_ML_CFG_MLIP_CLK_FORCE; + roc_ml_reg_write64(roc_ml, reg_val, ML_CFG); +} + +void +roc_ml_dma_stall_on(struct roc_ml *roc_ml) +{ + uint64_t reg_val = 0; + + reg_val = roc_ml_reg_read64(roc_ml, ML_JOB_MGR_CTRL); + reg_val |= ROC_ML_JOB_MGR_CTRL_STALL_ON_IDLE; + roc_ml_reg_write64(roc_ml, reg_val, ML_JOB_MGR_CTRL); +} + +void +roc_ml_dma_stall_off(struct roc_ml *roc_ml) +{ + uint64_t reg_val = 0; + + reg_val = roc_ml_reg_read64(roc_ml, ML_JOB_MGR_CTRL); + reg_val &= ~ROC_ML_JOB_MGR_CTRL_STALL_ON_IDLE; + roc_ml_reg_write64(roc_ml, reg_val, ML_JOB_MGR_CTRL); +} + +bool +roc_ml_mlip_is_enabled(struct roc_ml *roc_ml) +{ + uint64_t reg_val; + + reg_val = roc_ml_reg_read64(roc_ml, ML_CFG); + + if ((reg_val & ROC_ML_CFG_MLIP_ENA) != 0) + return true; + + return false; +} + +int +roc_ml_mlip_reset(struct roc_ml *roc_ml, bool force) +{ + uint64_t reg_val; + + /* Force reset */ + if (force) { + /* Set ML(0)_CFG[ENA] = 0. */ + reg_val = roc_ml_reg_read64(roc_ml, ML_CFG); + reg_val &= ~ROC_ML_CFG_ENA; + roc_ml_reg_write64(roc_ml, reg_val, ML_CFG); + + /* Set ML(0)_CFG[MLIP_ENA] = 0. */ + reg_val = roc_ml_reg_read64(roc_ml, ML_CFG); + reg_val &= ~ROC_ML_CFG_MLIP_ENA; + roc_ml_reg_write64(roc_ml, reg_val, ML_CFG); + + /* Clear ML_MLR_BASE */ + roc_ml_reg_write64(roc_ml, 0, ML_MLR_BASE); + } + + if (roc_model_is_cn10ka()) { + /* Wait for all active jobs to finish. + * ML_CFG[ENA] : When set, MLW will accept job commands. This + * bit can be cleared at any time. If [BUSY] is set, software + * must wait until [BUSY] == 0 before setting this bit. + */ + roc_ml_reg_wait_to_clear(roc_ml, ML_CFG, ROC_ML_CFG_BUSY); + + /* (1) Set ML(0)_AXI_BRIDGE_CTRL(0..1)[FENCE] = 1 to instruct + * the AXI bridge not to accept any new transactions from MLIP. 
+ */ + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(0)); + reg_val |= ROC_ML_AXI_BRIDGE_CTRL_FENCE; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(0)); + + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(1)); + reg_val |= ROC_ML_AXI_BRIDGE_CTRL_FENCE; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(1)); + + /* (2) Wait until ML(0)_AXI_BRIDGE_CTRL(0..1)[BUSY] = 0 which + * indicates that there is no outstanding transactions on + * AXI-NCB paths. + */ + roc_ml_reg_wait_to_clear(roc_ml, ML_AXI_BRIDGE_CTRL(0), + ROC_ML_AXI_BRIDGE_CTRL_BUSY); + roc_ml_reg_wait_to_clear(roc_ml, ML_AXI_BRIDGE_CTRL(1), + ROC_ML_AXI_BRIDGE_CTRL_BUSY); + + /* (3) Wait until ML(0)_JOB_MGR_CTRL[BUSY] = 0 which indicates + * that there are no pending jobs in the MLW's job manager. + */ + roc_ml_reg_wait_to_clear(roc_ml, ML_JOB_MGR_CTRL, ROC_ML_JOB_MGR_CTRL_BUSY); + + /* (4) Set ML(0)_CFG[ENA] = 0. */ + reg_val = roc_ml_reg_read64(roc_ml, ML_CFG); + reg_val &= ~ROC_ML_CFG_ENA; + roc_ml_reg_write64(roc_ml, reg_val, ML_CFG); + + /* (5) Set ML(0)_CFG[MLIP_ENA] = 0. */ + reg_val = roc_ml_reg_read64(roc_ml, ML_CFG); + reg_val &= ~ROC_ML_CFG_MLIP_ENA; + roc_ml_reg_write64(roc_ml, reg_val, ML_CFG); + + /* (6) Set ML(0)_AXI_BRIDGE_CTRL(0..1)[FENCE] = 0.*/ + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(0)); + reg_val &= ~ROC_ML_AXI_BRIDGE_CTRL_FENCE; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(0)); + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(1)); + } + + if (roc_model_is_cnf10kb()) { + /* (1) Clear MLAB(0)_CFG[ENA]. Any new jobs will bypass the job + * execution stages and their completions will be returned to + * PSM. + */ + reg_val = roc_ml_reg_read64(roc_ml, ML_CFG); + reg_val &= ~ROC_ML_CFG_ENA; + roc_ml_reg_write64(roc_ml, reg_val, ML_CFG); + + /* (2) Quiesce the ACC and DMA AXI interfaces: For each of the + * two MLAB(0)_AXI_BRIDGE_CTRL(0..1) registers: + * + * (a) Set MLAB(0)_AXI_BRIDGE_CTRL(0..1)[FENCE] to block new AXI + * commands from MLIP. + * + * (b) Poll MLAB(0)_AXI_BRIDGE_CTRL(0..1)[BUSY] == 0. + */ + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(0)); + reg_val |= ROC_ML_AXI_BRIDGE_CTRL_FENCE; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(0)); + + roc_ml_reg_wait_to_clear(roc_ml, ML_AXI_BRIDGE_CTRL(0), + ROC_ML_AXI_BRIDGE_CTRL_BUSY); + + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(1)); + reg_val |= ROC_ML_AXI_BRIDGE_CTRL_FENCE; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(1)); + + roc_ml_reg_wait_to_clear(roc_ml, ML_AXI_BRIDGE_CTRL(1), + ROC_ML_AXI_BRIDGE_CTRL_BUSY); + + /* (3) Clear MLAB(0)_CFG[MLIP_ENA] to reset MLIP. + */ + reg_val = roc_ml_reg_read64(roc_ml, ML_CFG); + reg_val &= ~ROC_ML_CFG_MLIP_ENA; + roc_ml_reg_write64(roc_ml, reg_val, ML_CFG); + +cnf10kb_mlip_reset_stage_4a: + /* (4) Flush any outstanding jobs in MLAB's job execution + * stages: + * + * (a) Wait for completion stage to clear: + * - Poll MLAB(0)_STG(0..2)_STATUS[VALID] == 0. + */ + roc_ml_reg_wait_to_clear(roc_ml, ML_STGX_STATUS(0), ROC_ML_STG_STATUS_VALID); + roc_ml_reg_wait_to_clear(roc_ml, ML_STGX_STATUS(1), ROC_ML_STG_STATUS_VALID); + roc_ml_reg_wait_to_clear(roc_ml, ML_STGX_STATUS(2), ROC_ML_STG_STATUS_VALID); + +cnf10kb_mlip_reset_stage_4b: + /* (4b) Clear job run stage: Poll + * MLAB(0)_STG_CONTROL[RUN_TO_COMP] == 0. + */ + roc_ml_reg_wait_to_clear(roc_ml, ML_STG_CONTROL, ROC_ML_STG_CONTROL_RUN_TO_COMP); + + /* (4b) Clear job run stage: If MLAB(0)_STG(1)_STATUS[VALID] == + * 1: + * - Set MLAB(0)_STG_CONTROL[RUN_TO_COMP]. 
+ * - Poll MLAB(0)_STG_CONTROL[RUN_TO_COMP] == 0. + * - Repeat step (a) to clear job completion stage. + */ + reg_val = roc_ml_reg_read64(roc_ml, ML_STGX_STATUS(1)); + if (reg_val & ROC_ML_STG_STATUS_VALID) { + reg_val = roc_ml_reg_read64(roc_ml, ML_STG_CONTROL); + reg_val |= ROC_ML_STG_CONTROL_RUN_TO_COMP; + roc_ml_reg_write64(roc_ml, reg_val, ML_STG_CONTROL); + + roc_ml_reg_wait_to_clear(roc_ml, ML_STG_CONTROL, + ROC_ML_STG_CONTROL_RUN_TO_COMP); + + goto cnf10kb_mlip_reset_stage_4a; + } + + /* (4c) Clear job fetch stage: Poll + * MLAB(0)_STG_CONTROL[FETCH_TO_RUN] == 0. + */ + roc_ml_reg_wait_to_clear(roc_ml, ML_STG_CONTROL, ROC_ML_STG_CONTROL_FETCH_TO_RUN); + + /* (4c) Clear job fetch stage: If + * MLAB(0)_STG(0..2)_STATUS[VALID] == 1: + * - Set MLAB(0)_STG_CONTROL[FETCH_TO_RUN]. + * - Poll MLAB(0)_STG_CONTROL[FETCH_TO_RUN] == 0. + * - Repeat step (b) to clear job run and completion stages. + */ + reg_val = (roc_ml_reg_read64(roc_ml, ML_STGX_STATUS(0)) | + roc_ml_reg_read64(roc_ml, ML_STGX_STATUS(1)) | + roc_ml_reg_read64(roc_ml, ML_STGX_STATUS(2))); + + if (reg_val & ROC_ML_STG_STATUS_VALID) { + reg_val = roc_ml_reg_read64(roc_ml, ML_STG_CONTROL); + reg_val |= ROC_ML_STG_CONTROL_RUN_TO_COMP; + roc_ml_reg_write64(roc_ml, reg_val, ML_STG_CONTROL); + + roc_ml_reg_wait_to_clear(roc_ml, ML_STG_CONTROL, + ROC_ML_STG_CONTROL_RUN_TO_COMP); + + goto cnf10kb_mlip_reset_stage_4b; + } + + /* (5) Reset the ACC and DMA AXI interfaces: For each of the two + * MLAB(0)_AXI_BRIDGE_CTRL(0..1) registers: + * + * (5a) Set and then clear + * MLAB(0)_AXI_BRIDGE_CTRL(0..1)[FLUSH_WRITE_DATA]. + * + * (5b) Clear MLAB(0)_AXI_BRIDGE_CTRL(0..1)[FENCE]. + */ + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(0)); + reg_val |= ROC_ML_AXI_BRIDGE_CTRL_FLUSH_WRITE_DATA; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(0)); + + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(0)); + reg_val &= ~ROC_ML_AXI_BRIDGE_CTRL_FLUSH_WRITE_DATA; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(0)); + + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(0)); + reg_val &= ~ROC_ML_AXI_BRIDGE_CTRL_FENCE; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(0)); + + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(1)); + reg_val |= ROC_ML_AXI_BRIDGE_CTRL_FLUSH_WRITE_DATA; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(1)); + + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(1)); + reg_val &= ~ROC_ML_AXI_BRIDGE_CTRL_FLUSH_WRITE_DATA; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(1)); + + reg_val = roc_ml_reg_read64(roc_ml, ML_AXI_BRIDGE_CTRL(1)); + reg_val &= ~ROC_ML_AXI_BRIDGE_CTRL_FENCE; + roc_ml_reg_write64(roc_ml, reg_val, ML_AXI_BRIDGE_CTRL(1)); + } + + return 0; +} + +int +roc_ml_dev_init(struct roc_ml *roc_ml) +{ + struct plt_pci_device *pci_dev; + struct dev *dev; + struct ml *ml; + + if (roc_ml == NULL || roc_ml->pci_dev == NULL) + return -EINVAL; + + PLT_STATIC_ASSERT(sizeof(struct ml) <= ROC_ML_MEM_SZ); + + ml = roc_ml_to_ml_priv(roc_ml); + memset(ml, 0, sizeof(*ml)); + pci_dev = roc_ml->pci_dev; + dev = &ml->dev; + + ml->pci_dev = pci_dev; + dev->roc_ml = roc_ml; + + ml->ml_reg_addr = ml->pci_dev->mem_resource[0].addr; + ml->ml_mlr_base = 0; + ml->ml_mlr_base_saved = false; + + plt_ml_dbg("ML: PCI Physical Address : 0x%016lx", ml->pci_dev->mem_resource[0].phys_addr); + plt_ml_dbg("ML: PCI Virtual Address : 0x%016lx", + PLT_U64_CAST(ml->pci_dev->mem_resource[0].addr)); + + plt_spinlock_init(&roc_ml->sp_spinlock); + plt_spinlock_init(&roc_ml->fp_spinlock); + 
+ return 0; +} + +int +roc_ml_dev_fini(struct roc_ml *roc_ml) +{ + struct ml *ml = roc_ml_to_ml_priv(roc_ml); + + if (ml == NULL) + return -EINVAL; + + return 0; +} + +int +roc_ml_blk_init(struct roc_bphy *roc_bphy, struct roc_ml *roc_ml) +{ + struct dev *dev; + struct ml *ml; + + if ((roc_ml == NULL) || (roc_bphy == NULL)) + return -EINVAL; + + PLT_STATIC_ASSERT(sizeof(struct ml) <= ROC_ML_MEM_SZ); + + ml = roc_ml_to_ml_priv(roc_ml); + memset(ml, 0, sizeof(*ml)); + + dev = &ml->dev; + + ml->pci_dev = roc_bphy->pci_dev; + dev->roc_ml = roc_ml; + + plt_ml_dbg( + "MLAB: Physical Address : 0x%016lx", + PLT_PTR_ADD_U64_CAST(ml->pci_dev->mem_resource[0].phys_addr, ML_MLAB_BLK_OFFSET)); + plt_ml_dbg("MLAB: Virtual Address : 0x%016lx", + PLT_PTR_ADD_U64_CAST(ml->pci_dev->mem_resource[0].addr, ML_MLAB_BLK_OFFSET)); + + ml->ml_reg_addr = PLT_PTR_ADD(ml->pci_dev->mem_resource[0].addr, ML_MLAB_BLK_OFFSET); + ml->ml_mlr_base = 0; + ml->ml_mlr_base_saved = false; + + plt_spinlock_init(&roc_ml->sp_spinlock); + plt_spinlock_init(&roc_ml->fp_spinlock); + + return 0; +} + +int +roc_ml_blk_fini(struct roc_bphy *roc_bphy, struct roc_ml *roc_ml) +{ + struct ml *ml; + + if ((roc_ml == NULL) || (roc_bphy == NULL)) + return -EINVAL; + + ml = roc_ml_to_ml_priv(roc_ml); + + if (ml == NULL) + return -EINVAL; + + return 0; +} + +uint16_t +roc_ml_sso_pf_func_get(void) +{ + return idev_sso_pffunc_get(); +} diff --git a/drivers/common/cnxk/roc_ml.h b/drivers/common/cnxk/roc_ml.h new file mode 100644 index 0000000000..3cd82be6a6 --- /dev/null +++ b/drivers/common/cnxk/roc_ml.h @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#ifndef _ROC_ML_H_ +#define _ROC_ML_H_ + +#include "roc_api.h" + +#define ROC_ML_MEM_SZ (6 * 1024) +#define ROC_ML_TIMEOUT_MS 10000 + +/* ML_CFG */ +#define ROC_ML_CFG_JD_SIZE GENMASK_ULL(1, 0) +#define ROC_ML_CFG_MLIP_ENA BIT_ULL(2) +#define ROC_ML_CFG_BUSY BIT_ULL(3) +#define ROC_ML_CFG_WRAP_CLK_FORCE BIT_ULL(4) +#define ROC_ML_CFG_MLIP_CLK_FORCE BIT_ULL(5) +#define ROC_ML_CFG_ENA BIT_ULL(6) + +/* ML_MLR_BASE */ +#define ROC_ML_MLR_BASE_BASE GENMASK_ULL(51, 0) + +/* ML_STG_STATUS */ +#define ROC_ML_STG_STATUS_VALID BIT_ULL(0) +#define ROC_ML_STG_STATUS_ADDR_ERR BIT_ULL(1) +#define ROC_ML_STG_STATUS_DMA_ERR BIT_ULL(2) +#define ROC_ML_STG_STATUS_TIMEOUT BIT_ULL(3) +#define ROC_ML_STG_STATUS_NFAT_ERR BIT_ULL(4) +#define ROC_ML_STG_STATUS_JOB_ERR BIT_ULL(5) +#define ROC_ML_STG_STATUS_ELAPSED_TICKS GENMASK_ULL(47, 6) + +/* ML_STG_CONTROL */ +#define ROC_ML_STG_CONTROL_FETCH_TO_RUN BIT_ULL(0) +#define ROC_ML_STG_CONTROL_RUN_TO_COMP BIT_ULL(1) + +/* ML_AXI_BRIDGE */ +#define ROC_ML_AXI_BRIDGE_CTRL_AXI_RESP_CTRL BIT_ULL(0) +#define ROC_ML_AXI_BRIDGE_CTRL_BRIDGE_CTRL_MODE BIT_ULL(1) +#define ROC_ML_AXI_BRIDGE_CTRL_FORCE_AXI_ID GENMASK_ULL(11, 2) +#define ROC_ML_AXI_BRIDGE_CTRL_CSR_WR_BLK BIT_ULL(13) +#define ROC_ML_AXI_BRIDGE_CTRL_NCB_WR_BLK BIT_ULL(14) +#define ROC_ML_AXI_BRIDGE_CTRL_CSR_RD_BLK BIT_ULL(15) +#define ROC_ML_AXI_BRIDGE_CTRL_NCB_RD_BLK BIT_ULL(16) +#define ROC_ML_AXI_BRIDGE_CTRL_FENCE BIT_ULL(17) +#define ROC_ML_AXI_BRIDGE_CTRL_BUSY BIT_ULL(18) +#define ROC_ML_AXI_BRIDGE_CTRL_FORCE_WRESP_OK BIT_ULL(19) +#define ROC_ML_AXI_BRIDGE_CTRL_FORCE_RRESP_OK BIT_ULL(20) +#define ROC_ML_AXI_BRIDGE_CTRL_CSR_FORCE_CMPLT BIT_ULL(21) +#define ROC_ML_AXI_BRIDGE_CTRL_WR_CNT_GEAR GENMASK_ULL(25, 22) +#define ROC_ML_AXI_BRIDGE_CTRL_RD_GEAR GENMASK_ULL(28, 26) +#define ROC_ML_AXI_BRIDGE_CTRL_CSR_CUTTHROUGH_MODE BIT_ULL(29) +#define 
ROC_ML_AXI_BRIDGE_CTRL_GAA_WRITE_CREDITS GENMASK_ULL(33, 30) +#define ROC_ML_AXI_BRIDGE_CTRL_GAA_READ_CREDITS GENMASK_ULL(37, 34) +#define ROC_ML_AXI_BRIDGE_CTRL_GAA_LOAD_WRITE_CREDITS BIT_ULL(38) +#define ROC_ML_AXI_BRIDGE_CTRL_GAA_LOAD_READ_CREDITS BIT_ULL(39) +#define ROC_ML_AXI_BRIDGE_CTRL_FLUSH_WRITE_DATA BIT_ULL(40) + +/* ML_JOB_MGR_CTRL */ +#define ROC_ML_JOB_MGR_CTRL_STALL_ON_ERR BIT_ULL(0) +#define ROC_ML_JOB_MGR_CTRL_PF_OVERRIDE BIT_ULL(1) +#define ROC_ML_JOB_MGR_CTRL_PF_FUNC_OVERRIDE GENMASK_ULL(19, 4) +#define ROC_ML_JOB_MGR_CTRL_BUSY BIT_ULL(20) +#define ROC_ML_JOB_MGR_CTRL_STALL_ON_IDLE BIT_ULL(21) + +/* ML_JCMDQ_STATUS */ +#define ROC_ML_JCMDQ_STATUS_AVAIL_COUNT GENMASK_ULL(4, 0) + +/* ML_ANBX_BACKP_DISABLE */ +#define ROC_ML_ANBX_BACKP_DISABLE_EXTMSTR_B_BACKP_DISABLE BIT_ULL(0) +#define ROC_ML_ANBX_BACKP_DISABLE_EXTMSTR_R_BACKP_DISABLE BIT_ULL(1) + +/* ML_ANBX_NCBI_P_OVR */ +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_MSH_DST_OVR_VLD BIT_ULL(0) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_MSH_DST_OVR GENMASK_ULL(11, 1) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_NS_OVR_VLD BIT_ULL(12) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_NS_OVR BIT_ULL(13) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_PADDR_OVR_VLD BIT_ULL(14) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_PADDR_OVR BIT_ULL(15) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_RO_OVR_VLD BIT_ULL(16) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_RO_OVR BIT_ULL(17) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_MPADID_VAL_OVR_VLD BIT_ULL(18) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_MPADID_VAL_OVR BIT_ULL(19) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_MPAMDID_OVR_VLD BIT_ULL(20) +#define ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_MPAMDID_OVR BIT_ULL(21) + +/* ML_ANBX_NCBI_NP_OVR */ +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_MSH_DST_OVR_VLD BIT_ULL(0) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_MSH_DST_OVR GENMASK_ULL(11, 1) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_NS_OVR_VLD BIT_ULL(12) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_NS_OVR BIT_ULL(13) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_PADDR_OVR_VLD BIT_ULL(14) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_PADDR_OVR BIT_ULL(15) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_RO_OVR_VLD BIT_ULL(16) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_RO_OVR BIT_ULL(17) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_MPADID_VAL_OVR_VLD BIT_ULL(18) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_MPADID_VAL_OVR BIT_ULL(19) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_MPAMDID_OVR_VLD BIT_ULL(20) +#define ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_MPAMDID_OVR BIT_ULL(21) + +/* ML_SW_RST_CTRL */ +#define ROC_ML_SW_RST_CTRL_ACC_RST BIT_ULL(0) +#define ROC_ML_SW_RST_CTRL_CMPC_RST BIT_ULL(1) + +struct roc_ml { + struct plt_pci_device *pci_dev; + plt_spinlock_t sp_spinlock; + plt_spinlock_t fp_spinlock; + uint8_t reserved[ROC_ML_MEM_SZ] __plt_cache_aligned; +} __plt_cache_aligned; + +/* Register read and write functions */ +uint64_t __roc_api roc_ml_reg_read64(struct roc_ml *roc_ml, uint64_t offset); +void __roc_api roc_ml_reg_write64(struct roc_ml *roc_ml, uint64_t val, uint64_t offset); +uint32_t __roc_api roc_ml_reg_read32(struct roc_ml *roc_ml, uint64_t offset); +void __roc_api roc_ml_reg_write32(struct roc_ml *roc_ml, uint32_t val, uint64_t offset); +void __roc_api roc_ml_reg_save(struct roc_ml *roc_ml, uint64_t offset); + +/* Address translation functions */ +uint64_t __roc_api roc_ml_addr_pa_to_offset(struct roc_ml *roc_ml, uint64_t phys_addr); +uint64_t __roc_api roc_ml_addr_offset_to_pa(struct roc_ml *roc_ml, uint64_t offset); +void *__roc_api roc_ml_addr_ap2mlip(struct roc_ml *roc_ml, void *addr); +void 
*__roc_api roc_ml_addr_mlip2ap(struct roc_ml *roc_ml, void *addr); + +/* Scratch and JCMDQ functions */ +void __roc_api roc_ml_scratch_write_job(struct roc_ml *roc_ml, void *jd); +bool __roc_api roc_ml_scratch_is_valid_bit_set(struct roc_ml *roc_ml); +bool __roc_api roc_ml_scratch_is_done_bit_set(struct roc_ml *roc_ml); +bool __roc_api roc_ml_scratch_enqueue(struct roc_ml *roc_ml, void *work_ptr); +bool __roc_api roc_ml_scratch_dequeue(struct roc_ml *roc_ml, void *work_ptr); +void __roc_api roc_ml_scratch_queue_reset(struct roc_ml *roc_ml); +bool __roc_api roc_ml_jcmdq_enqueue_lf(struct roc_ml *roc_ml, struct ml_job_cmd_s *job_cmd); +bool __roc_api roc_ml_jcmdq_enqueue_sl(struct roc_ml *roc_ml, struct ml_job_cmd_s *job_cmd); + +/* Device management functions */ +void __roc_api roc_ml_clk_force_on(struct roc_ml *roc_ml); +void __roc_api roc_ml_clk_force_off(struct roc_ml *roc_ml); +void __roc_api roc_ml_dma_stall_on(struct roc_ml *roc_ml); +void __roc_api roc_ml_dma_stall_off(struct roc_ml *roc_ml); +bool __roc_api roc_ml_mlip_is_enabled(struct roc_ml *roc_ml); +int __roc_api roc_ml_mlip_reset(struct roc_ml *roc_ml, bool force); + +/* Device / block functions */ +int __roc_api roc_ml_dev_init(struct roc_ml *roc_ml); +int __roc_api roc_ml_dev_fini(struct roc_ml *roc_ml); +int __roc_api roc_ml_blk_init(struct roc_bphy *roc_bphy, struct roc_ml *roc_ml); +int __roc_api roc_ml_blk_fini(struct roc_bphy *roc_bphy, struct roc_ml *roc_ml); + +/* Utility functions */ +uint16_t __roc_api roc_ml_sso_pf_func_get(void); + +#endif /*_ROC_ML_H_*/ diff --git a/drivers/common/cnxk/roc_ml_priv.h b/drivers/common/cnxk/roc_ml_priv.h new file mode 100644 index 0000000000..ad5fe90bab --- /dev/null +++ b/drivers/common/cnxk/roc_ml_priv.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#ifndef _ROC_ML_PRIV_H_ +#define _ROC_ML_PRIV_H_ + +#include "roc_api.h" + +struct ml { + struct plt_pci_device *pci_dev; + struct dev dev; + uint8_t *ml_reg_addr; + uint64_t ml_mlr_base; + bool ml_mlr_base_saved; +} __plt_cache_aligned; + +static inline struct ml * +roc_ml_to_ml_priv(struct roc_ml *roc_ml) +{ + return (struct ml *)&roc_ml->reserved[0]; +} + +#endif /* _ROC_ML_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index ce0f9b870c..f91b95ceab 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -63,6 +63,7 @@ roc_plt_init(void) RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_cpt, pmd.crypto.cnxk, NOTICE); +RTE_LOG_REGISTER(cnxk_logtype_ml, pmd.ml.cnxk, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_nix, pmd.net.cnxk, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_npc, pmd.net.cnxk.flow, NOTICE); diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index 1a48ff3db4..a291ed1c66 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -233,6 +233,7 @@ extern int cnxk_logtype_base; extern int cnxk_logtype_mbox; extern int cnxk_logtype_cpt; +extern int cnxk_logtype_ml; extern int cnxk_logtype_npa; extern int cnxk_logtype_nix; extern int cnxk_logtype_npc; @@ -260,6 +261,7 @@ extern int cnxk_logtype_ree; #define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__) #define plt_cpt_dbg(fmt, ...) plt_dbg(cpt, fmt, ##__VA_ARGS__) #define plt_mbox_dbg(fmt, ...) 
plt_dbg(mbox, fmt, ##__VA_ARGS__) +#define plt_ml_dbg(fmt, ...) plt_dbg(ml, fmt, ##__VA_ARGS__) #define plt_npa_dbg(fmt, ...) plt_dbg(npa, fmt, ##__VA_ARGS__) #define plt_nix_dbg(fmt, ...) plt_dbg(nix, fmt, ##__VA_ARGS__) #define plt_npc_dbg(fmt, ...) plt_dbg(npc, fmt, ##__VA_ARGS__) diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h index 122d411fe7..14fe2e452a 100644 --- a/drivers/common/cnxk/roc_priv.h +++ b/drivers/common/cnxk/roc_priv.h @@ -47,4 +47,7 @@ /* REE */ #include "roc_ree_priv.h" +/* ML */ +#include "roc_ml_priv.h" + #endif /* _ROC_PRIV_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 17f0ec6b48..f7fe49e0ed 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -8,6 +8,7 @@ INTERNAL { cnxk_logtype_base; cnxk_logtype_cpt; cnxk_logtype_mbox; + cnxk_logtype_ml; cnxk_logtype_nix; cnxk_logtype_npa; cnxk_logtype_npc; @@ -96,6 +97,34 @@ INTERNAL { roc_idev_npa_nix_get; roc_idev_num_lmtlines_get; roc_idev_nix_inl_meta_aura_get; + roc_ml_reg_read64; + roc_ml_reg_write64; + roc_ml_reg_read32; + roc_ml_reg_write32; + roc_ml_reg_save; + roc_ml_addr_ap2mlip; + roc_ml_addr_mlip2ap; + roc_ml_addr_pa_to_offset; + roc_ml_addr_offset_to_pa; + roc_ml_scratch_write_job; + roc_ml_scratch_is_valid_bit_set; + roc_ml_scratch_is_done_bit_set; + roc_ml_scratch_enqueue; + roc_ml_scratch_dequeue; + roc_ml_scratch_queue_reset; + roc_ml_jcmdq_enqueue_lf; + roc_ml_jcmdq_enqueue_sl; + roc_ml_clk_force_on; + roc_ml_clk_force_off; + roc_ml_dma_stall_on; + roc_ml_dma_stall_off; + roc_ml_mlip_is_enabled; + roc_ml_mlip_reset; + roc_ml_dev_init; + roc_ml_dev_fini; + roc_ml_blk_init; + roc_ml_blk_fini; + roc_ml_sso_pf_func_get; roc_model; roc_se_auth_key_set; roc_se_ciph_key_set;
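Usage note on the scratch interface added above: roc_ml_scratch_enqueue() writes ML_SCRATCH_WORK_PTR and ML_SCRATCH_FW_CTRL only when the queue is idle (valid and done bits equal), and roc_ml_scratch_dequeue() clears both registers once firmware has set the done bit and the work pointer matches. A polling wrapper a PMD might build on top of these calls could look like the following sketch; the wrapper itself is not part of this patch, and the error codes and timeout policy are illustrative.

#include "roc_api.h"

/* Illustrative wrapper only; not part of this series. */
static int
ml_scratch_submit_and_wait_sketch(struct roc_ml *roc_ml, void *job_desc)
{
	const uint64_t deadline =
		plt_tsc_cycles() + (ROC_ML_TIMEOUT_MS * plt_tsc_hz()) / 1000;

	/* Enqueue succeeds only when valid == done in ML_SCRATCH_FW_CTRL. */
	while (!roc_ml_scratch_enqueue(roc_ml, job_desc))
		if (plt_tsc_cycles() > deadline)
			return -ETIME;

	/* Firmware sets the done bit in ML_SCRATCH_FW_CTRL on completion. */
	while (!roc_ml_scratch_is_done_bit_set(roc_ml))
		if (plt_tsc_cycles() > deadline)
			return -ETIME;

	/* Dequeue checks the work pointer and clears both scratch registers. */
	if (!roc_ml_scratch_dequeue(roc_ml, job_desc))
		return -EINVAL;

	return 0;
}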