From patchwork Thu Apr 1 12:37:26 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90374
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:26 +0530
Message-ID: <20210401123817.14348-2-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com>
 <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 01/52] doc: add Marvell CNXK platform guide

Platform specific guide for Marvell OCTEON CN9K/CN10K SoC is added.
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Jerin Jacob
---
 MAINTAINERS                                        |    9 +
 doc/guides/platform/cnxk.rst                       |  578 ++++
 .../img/cnxk_packet_flow_hw_accelerators.svg       | 2795 ++++++++++++++++++++
 .../platform/img/cnxk_resource_virtualization.svg  | 2428 +++++++++++++++++
 doc/guides/platform/index.rst                      |    1 +
 5 files changed, 5811 insertions(+)
 create mode 100644 doc/guides/platform/cnxk.rst
 create mode 100644 doc/guides/platform/img/cnxk_packet_flow_hw_accelerators.svg
 create mode 100644 doc/guides/platform/img/cnxk_resource_virtualization.svg

diff --git a/MAINTAINERS b/MAINTAINERS
index 0f5e745..c466420 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -732,6 +732,15 @@ F: drivers/net/ipn3ke/
 F: doc/guides/nics/ipn3ke.rst
 F: doc/guides/nics/features/ipn3ke.ini
 
+Marvell cnxk
+M: Nithin Dabilpuram
+M: Kiran Kumar K
+M: Sunil Kumar Kori
+M: Satha Rao
+T: git://dpdk.org/next/dpdk-next-net-mrvl
+F: drivers/common/cnxk/
+F: doc/guides/platform/cnxk.rst
+
 Marvell mvpp2
 M: Liron Himi
 T: git://dpdk.org/next/dpdk-next-net-mrvl
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
new file mode 100644
index 0000000..3b07287
--- /dev/null
+++ b/doc/guides/platform/cnxk.rst
@@ -0,0 +1,578 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(C) 2021 Marvell.
+
+Marvell CNXK Platform Guide
+===========================
+
+This document gives an overview of the **Marvell OCTEON CN9K and CN10K** RVU
+H/W blocks, the packet flow, and the procedure to build DPDK on the OCTEON
+CNXK platform.
+
+More information about the CN9K and CN10K SoCs can be found at the `Marvell Official Website `_.
+
+Supported OCTEON CNXK SoCs
+--------------------------
+
+- CN106xx
+
+CNXK Resource Virtualization Unit architecture
+----------------------------------------------
+
+The :numref:`figure_cnxk_resource_virtualization` diagram depicts the
+RVU architecture and a resource provisioning example.
+
+.. _figure_cnxk_resource_virtualization:
+
+.. figure:: img/cnxk_resource_virtualization.*
+
+   CNXK Resource virtualization architecture and provisioning example
+
+
+The Resource Virtualization Unit (RVU) on Marvell's OCTEON CN9K/CN10K SoC maps
+HW resources belonging to the network, crypto and other functional blocks onto
+PCI-compatible physical and virtual functions.
+
+Each functional block has multiple local functions (LFs) available for
+provisioning to different PCIe devices. RVU supports multiple PCIe SRIOV
+physical functions (PFs) and virtual functions (VFs).
+
+The :numref:`table_cnxk_rvu_dpdk_mapping` table shows the various local
+functions (LFs) provided by the RVU and their functional mapping to
+DPDK subsystems.
+
+.. _table_cnxk_rvu_dpdk_mapping:
+
+.. table:: RVU managed functional blocks and their mapping to DPDK subsystems
+
+   +---+-----+--------------------------------------------------------------+
+   | # | LF  | DPDK subsystem mapping                                       |
+   +===+=====+==============================================================+
+   | 1 | NIX | rte_ethdev, rte_tm, rte_event_eth_[rt]x_adapter, rte_security|
+   +---+-----+--------------------------------------------------------------+
+   | 2 | NPA | rte_mempool                                                  |
+   +---+-----+--------------------------------------------------------------+
+   | 3 | NPC | rte_flow                                                     |
+   +---+-----+--------------------------------------------------------------+
+   | 4 | CPT | rte_cryptodev, rte_event_crypto_adapter                      |
+   +---+-----+--------------------------------------------------------------+
+   | 5 | SSO | rte_eventdev                                                 |
+   +---+-----+--------------------------------------------------------------+
+   | 6 | TIM | rte_event_timer_adapter                                      |
+   +---+-----+--------------------------------------------------------------+
+   | 7 | LBK | rte_ethdev                                                   |
+   +---+-----+--------------------------------------------------------------+
+   | 8 | DPI | rte_rawdev                                                   |
+   +---+-----+--------------------------------------------------------------+
+   | 9 | SDP | rte_ethdev                                                   |
+   +---+-----+--------------------------------------------------------------+
+   | 10| REE | rte_regexdev                                                 |
+   +---+-----+--------------------------------------------------------------+
+
+PF0 is called the administrative / admin function (AF) and has exclusive
+privileges to provision the RVU functional blocks' LFs to each of the PF/VFs.
+
+PF/VFs communicate with the AF via a shared memory region (mailbox). Upon
+receiving requests from a PF/VF, the AF does resource provisioning and other
+HW configuration.
+
+The AF is always attached to the host, but PF/VFs may be used by the host
+kernel itself, or attached to VMs or to userspace applications like DPDK.
+So, the AF has to handle provisioning/configuration requests sent by any
+device from any domain.
+
+The AF driver does not receive or process any data.
+It is only a configuration driver used in the control path.
+
+The :numref:`figure_cnxk_resource_virtualization` diagram also shows a
+resource provisioning example where,
+
+1. PFx and PFx-VF0 are bound to the Linux netdev driver.
+2. The PFx-VF1 ethdev driver is bound to the first DPDK application.
+3. The PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver and
+   PFm-VF0 cryptodev driver are bound to the second DPDK application.
+
+LBK HW Access
+-------------
+
+The Loopback HW Unit (LBK) receives packets from NIX-RX and sends packets back
+to NIX-TX. The loopback block has N channels and contains data buffering that
+is shared across all channels. The LBK HW Unit is abstracted using the ethdev
+subsystem, where PF0's VFs are exposed as ethdev devices and odd-even pairs of
+VFs are tied together; that is, packets sent on an odd VF end up received on
+the even VF and vice versa. This enables a HW-accelerated means of
+communication between two domains, with the even VF bound to the first domain
+and the odd VF bound to the second domain.
+
+Typical application usage models are,
+
+#. Communication between the Linux kernel and a DPDK application.
+#. Exception path to the Linux kernel from a DPDK application as a SW ``KNI``
+   replacement.
+#. Communication between two different DPDK applications.
+
+SDP interface
+-------------
+
+The System DPI Packet Interface unit (SDP) provides PCIe endpoint support for
+a remote host to DMA packets into and out of the CNXK SoC. The SDP interface
+comes into play only when the CNXK SoC is connected in PCIe endpoint mode. It
+can be used to send/receive packets to/from the remote host machine using
+input/output queue pairs exposed to it. The SDP interface receives input
+packets from the remote host via NIX-RX and sends packets to the remote host
+via NIX-TX. The remote host machine needs to use a corresponding driver
+(kernel/user mode) to communicate with the SDP interface on the CNXK SoC. SDP
+supports a single PCIe SRIOV physical function (PF) and multiple virtual
+functions (VFs). Users can bind the PF or VFs to use the SDP interface, and
+they will be enumerated as ethdev ports.
+
+The primary use case for SDP is to enable the smart NIC use case. Typical
+usage models are,
+
+#. Communication channel between a remote host and the CNXK SoC over PCIe.
+#. Transfer of packets received from a network interface to a remote host over
+   PCIe and vice-versa.
+
+CNXK packet flow
+----------------
+
+The :numref:`figure_cnxk_packet_flow_hw_accelerators` diagram depicts the
+packet flow on the CNXK SoC in conjunction with the use of various HW
+accelerators.
+
+.. _figure_cnxk_packet_flow_hw_accelerators:
+
+.. figure:: img/cnxk_packet_flow_hw_accelerators.*
+
+   CNXK packet flow in conjunction with use of HW accelerators
+
+HW Offload Drivers
+------------------
+
+This section lists the dataplane H/W blocks available in the CNXK SoC.
+
+Procedure to Setup Platform
+---------------------------
+
+There are three main prerequisites for setting up DPDK on a CNXK
+compatible board:
+
+1. **RVU AF Linux kernel driver**
+
+   The dependent kernel drivers can be obtained from
+   `kernel.org `_.
+
+   Alternatively, the Marvell SDK also provides the required kernel drivers.
+
+   The Linux kernel should be configured with the following features enabled:
+
+..
code-block:: console
+
+      # 64K pages enabled for better performance
+      CONFIG_ARM64_64K_PAGES=y
+      CONFIG_ARM64_VA_BITS_48=y
+      # huge pages support enabled
+      CONFIG_HUGETLBFS=y
+      CONFIG_HUGETLB_PAGE=y
+      # VFIO enabled with TYPE1 IOMMU at minimum
+      CONFIG_VFIO_IOMMU_TYPE1=y
+      CONFIG_VFIO_VIRQFD=y
+      CONFIG_VFIO=y
+      CONFIG_VFIO_NOIOMMU=y
+      CONFIG_VFIO_PCI=y
+      CONFIG_VFIO_PCI_MMAP=y
+      # SMMUv3 driver
+      CONFIG_ARM_SMMU_V3=y
+      # ARMv8.1 LSE atomics
+      CONFIG_ARM64_LSE_ATOMICS=y
+      # OCTEONTX2 drivers
+      CONFIG_OCTEONTX2_MBOX=y
+      CONFIG_OCTEONTX2_AF=y
+      # Enable if netdev PF driver required
+      CONFIG_OCTEONTX2_PF=y
+      # Enable if netdev VF driver required
+      CONFIG_OCTEONTX2_VF=y
+      CONFIG_CRYPTO_DEV_OCTEONTX2_CPT=y
+      # Enable if OCTEONTX2 DMA PF driver required
+      CONFIG_OCTEONTX2_DPI_PF=n
+
+2. **ARM64 Linux Tool Chain**
+
+   For example, the *aarch64* Linaro Toolchain, which can be obtained from
+   `here `_.
+
+   Alternatively, the Marvell SDK also provides a GNU GCC toolchain, which is
+   optimized for the CNXK CPU.
+
+3. **Root file system**
+
+   Any *aarch64*-supporting filesystem may be used. For example, an
+   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland, which can be obtained
+   from ``_.
+
+   Alternatively, the Marvell SDK provides a buildroot-based root filesystem.
+   The SDK includes all the above prerequisites necessary to bring up the
+   CNXK board.
+
+- Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
+
+
+Debugging Options
+-----------------
+
+.. _table_cnxk_common_debug_options:
+
+..
table:: CNXK common debug options
+
+   +---+------------+-------------------------------------------------------+
+   | # | Component  | EAL log command                                       |
+   +===+============+=======================================================+
+   | 1 | Common     | --log-level='pmd\.cnxk\.base,8'                       |
+   +---+------------+-------------------------------------------------------+
+   | 2 | Mailbox    | --log-level='pmd\.cnxk\.mbox,8'                       |
+   +---+------------+-------------------------------------------------------+
+
+Debugfs support
+~~~~~~~~~~~~~~~
+
+The **RVU AF Linux kernel driver** provides support to dump RVU block
+contexts or stats using debugfs.
+
+Enable ``debugfs`` by:
+
+1. Compiling the kernel with debugfs enabled, i.e. ``CONFIG_DEBUG_FS=y``.
+2. Booting OCTEON CN9K/CN10K with a debugfs-enabled kernel.
+3. Verifying that ``debugfs`` is mounted by default (``mount | grep -i debugfs``)
+   or mounting it manually:
+
+.. code-block:: console
+
+   # mount -t debugfs none /sys/kernel/debug
+
+Currently ``debugfs`` supports the following RVU blocks: NIX, NPA, NPC, NDC,
+SSO and RPM.
+
+The file structure under ``/sys/kernel/debug`` is as follows:
+
+..
code-block:: console
+
+   octeontx2/
+   |
+   cn10k/
+   |-- rpm
+   |   |-- rpm0
+   |   |   '-- lmac0
+   |   |       '-- stats
+   |   |-- rpm1
+   |   |   |-- lmac0
+   |   |   |   '-- stats
+   |   |   '-- lmac1
+   |   |       '-- stats
+   |   '-- rpm2
+   |       '-- lmac0
+   |           '-- stats
+   |-- cpt
+   |   |-- cpt_engines_info
+   |   |-- cpt_engines_sts
+   |   |-- cpt_err_info
+   |   |-- cpt_lfs_info
+   |   '-- cpt_pc
+   |-- nix
+   |   |-- cq_ctx
+   |   |-- ndc_rx_cache
+   |   |-- ndc_rx_hits_miss
+   |   |-- ndc_tx_cache
+   |   |-- ndc_tx_hits_miss
+   |   |-- qsize
+   |   |-- rq_ctx
+   |   '-- sq_ctx
+   |-- npa
+   |   |-- aura_ctx
+   |   |-- ndc_cache
+   |   |-- ndc_hits_miss
+   |   |-- pool_ctx
+   |   '-- qsize
+   |-- npc
+   |   |-- mcam_info
+   |   |-- mcam_rules
+   |   '-- rx_miss_act_stats
+   |-- rsrc_alloc
+   '-- sso
+       |-- hws
+       |   '-- sso_hws_info
+       '-- hwgrp
+           |-- sso_hwgrp_aq_thresh
+           |-- sso_hwgrp_iaq_walk
+           |-- sso_hwgrp_pc
+           |-- sso_hwgrp_free_list_walk
+           |-- sso_hwgrp_ient_walk
+           '-- sso_hwgrp_taq_walk
+
+RVU block LF allocation:
+
+.. code-block:: console
+
+   cat /sys/kernel/debug/cn10k/rsrc_alloc
+
+   pcifunc    NPA    NIX    SSO GROUP    SSOWS    TIM    CPT
+   PF1        0      0
+   PF4               1
+   PF13                     0, 1         0, 1     0
+
+RPM example usage:
+
+..
code-block:: console
+
+   cat /sys/kernel/debug/cn10k/rpm/rpm0/lmac0/stats
+
+   =======Link Status======
+
+   Link is UP 25000 Mbps
+
+   =======NIX RX_STATS(rpm port level)======
+
+   rx_ucast_frames: 0
+   rx_mcast_frames: 0
+   rx_bcast_frames: 0
+   rx_frames: 0
+   rx_bytes: 0
+   rx_drops: 0
+   rx_errors: 0
+
+   =======NIX TX_STATS(rpm port level)======
+
+   tx_ucast_frames: 0
+   tx_mcast_frames: 0
+   tx_bcast_frames: 0
+   tx_frames: 0
+   tx_bytes: 0
+   tx_drops: 0
+
+   =======rpm RX_STATS======
+
+   Octets of received packets: 0
+   Octets of received packets with out error: 0
+   Received packets with alignment errors: 0
+   Control/PAUSE packets received: 0
+   Packets received with Frame too long Errors: 0
+   Packets received with a1nrange length Errors: 0
+   Received packets: 0
+   Packets received with FrameCheckSequenceErrors: 0
+   Packets received with VLAN header: 0
+   Error packets: 0
+   Packets recievd with unicast DMAC: 0
+   Packets received with multicast DMAC: 0
+   Packets received with broadcast DMAC: 0
+   Dropped packets: 0
+   Total frames received on interface: 0
+   Packets received with an octet count < 64: 0
+   Packets received with an octet count == 64: 0
+   Packets received with an octet count of 65-127: 0
+   Packets received with an octet count of 128-255: 0
+   Packets received with an octet count of 256-511: 0
+   Packets received with an octet count of 512-1023: 0
+   Packets received with an octet count of 1024-1518: 0
+   Packets received with an octet count of > 1518: 0
+   Oversized Packets: 0
+   Jabber Packets: 0
+   Fragmented Packets: 0
+   CBFC(class based flow control) pause frames received for class 0: 0
+   CBFC pause frames received for class 1: 0
+   CBFC pause frames received for class 2: 0
+   CBFC pause frames received for class 3: 0
+   CBFC pause frames received for class 4: 0
+   CBFC pause frames received for class 5: 0
+   CBFC pause frames received for class 6: 0
+   CBFC pause frames received for class 7: 0
+   CBFC pause frames received for class 8: 0
+   CBFC pause frames received for class 9: 0
+   CBFC pause frames received for class 10: 0
+   CBFC pause frames received for class 11: 0
+   CBFC pause frames received for class 12: 0
+   CBFC pause frames received for class 13: 0
+   CBFC pause frames received for class 14: 0
+   CBFC pause frames received for class 15: 0
+   MAC control packets received: 0
+
+   =======rpm TX_STATS======
+
+   Total octets sent on the interface: 0
+   Total octets transmitted OK: 0
+   Control/Pause frames sent: 0
+   Total frames transmitted OK: 0
+   Total frames sent with VLAN header: 0
+   Error Packets: 0
+   Packets sent to to unicast DMAC: 0
+   Packets sent to the multicast DMAC: 0
+   Packets sent to a broadcast DMAC: 0
+   Packets sent with an octet count == 64: 0
+   Packets sent with an octet count of 65-127: 0
+   Packets sent with an octet count of 128-255: 0
+   Packets sent with an octet count of 256-511: 0
+   Packets sent with an octet count of 512-1023: 0
+   Packets sent with an octet count of 1024-1518: 0
+   Packets sent with an octet count of > 1518: 0
+   CBFC(class based flow control) pause frames transmitted for class 0: 0
+   CBFC pause frames transmitted for class 1: 0
+   CBFC pause frames transmitted for class 2: 0
+   CBFC pause frames transmitted for class 3: 0
+   CBFC pause frames transmitted for class 4: 0
+   CBFC pause frames transmitted for class 5: 0
+   CBFC pause frames transmitted for class 6: 0
+   CBFC pause frames transmitted for class 7: 0
+   CBFC pause frames transmitted for class 8: 0
+   CBFC pause frames transmitted for class 9: 0
+   CBFC pause frames transmitted for class 10: 0
+   CBFC pause frames transmitted for class 11: 0
+   CBFC pause frames transmitted for class 12: 0
+   CBFC pause frames transmitted for class 13: 0
+   CBFC pause frames transmitted for class 14: 0
+   CBFC pause frames transmitted for class 15: 0
+   MAC control packets sent: 0
+   Total frames sent on the interface: 0
+
+CPT example usage:
+
+..
code-block:: console
+
+   cat /sys/kernel/debug/cn10k/cpt/cpt_pc
+
+   CPT instruction requests                 0
+   CPT instruction latency                  0
+   CPT NCB read requests                    0
+   CPT NCB read latency                     0
+   CPT read requests caused by UC fills     0
+   CPT active cycles pc                     1395642
+   CPT clock count pc                       5579867595493
+
+NIX example usage:
+
+.. code-block:: console
+
+   Usage: echo [cq number/all] > /sys/kernel/debug/cn10k/nix/cq_ctx
+          cat /sys/kernel/debug/cn10k/nix/cq_ctx
+   echo 0 0 > /sys/kernel/debug/cn10k/nix/cq_ctx
+   cat /sys/kernel/debug/cn10k/nix/cq_ctx
+
+   =====cq_ctx for nixlf:0 and qidx:0 is=====
+   W0: base                  158ef1a00
+
+   W1: wrptr                 0
+   W1: avg_con               0
+   W1: cint_idx              0
+   W1: cq_err                0
+   W1: qint_idx              0
+   W1: bpid                  0
+   W1: bp_ena                0
+
+   W2: update_time           31043
+   W2: avg_level             255
+   W2: head                  0
+   W2: tail                  0
+
+   W3: cq_err_int_ena        5
+   W3: cq_err_int            0
+   W3: qsize                 4
+   W3: caching               1
+   W3: substream             0x000
+   W3: ena                   1
+   W3: drop_ena              1
+   W3: drop                  64
+   W3: bp                    0
+
+NPA example usage:
+
+.. code-block:: console
+
+   Usage: echo [pool number/all] > /sys/kernel/debug/cn10k/npa/pool_ctx
+          cat /sys/kernel/debug/cn10k/npa/pool_ctx
+   echo 0 0 > /sys/kernel/debug/cn10k/npa/pool_ctx
+   cat /sys/kernel/debug/cn10k/npa/pool_ctx
+
+   ======POOL : 0=======
+   W0: Stack base            1375bff00
+   W1: ena                   1
+   W1: nat_align             1
+   W1: stack_caching         1
+   W1: stack_way_mask        0
+   W1: buf_offset            1
+   W1: buf_size              19
+   W2: stack_max_pages       24315
+   W2: stack_pages           24314
+   W3: op_pc                 267456
+   W4: stack_offset          2
+   W4: shift                 5
+   W4: avg_level             255
+   W4: avg_con               0
+   W4: fc_ena                0
+   W4: fc_stype              0
+   W4: fc_hyst_bits          0
+   W4: fc_up_crossing        0
+   W4: update_time           62993
+   W5: fc_addr               0
+   W6: ptr_start             1593adf00
+   W7: ptr_end               180000000
+   W8: err_int               0
+   W8: err_int_ena           7
+   W8: thresh_int            0
+   W8: thresh_int_ena        0
+   W8: thresh_up             0
+   W8: thresh_qint_idx       0
+   W8: err_qint_idx          0
+
+NPC example usage:
+
+..
code-block:: console
+
+   cat /sys/kernel/debug/cn10k/npc/mcam_info
+
+   NPC MCAM info:
+   RX keywidth    : 224bits
+   TX keywidth    : 224bits
+
+   MCAM entries   : 2048
+   Reserved       : 158
+   Available      : 1890
+
+   MCAM counters  : 512
+   Reserved       : 1
+   Available      : 511
+
+SSO example usage:
+
+.. code-block:: console
+
+   Usage: echo [/all] > /sys/kernel/debug/cn10k/sso/hws/sso_hws_info
+   echo 0 > /sys/kernel/debug/cn10k/sso/hws/sso_hws_info
+
+   ==================================================
+   SSOW HWS[0] Arbitration State      0x0
+   SSOW HWS[0] Guest Machine Control  0x0
+   SSOW HWS[0] SET[0] Group Mask[0]   0xffffffffffffffff
+   SSOW HWS[0] SET[0] Group Mask[1]   0xffffffffffffffff
+   SSOW HWS[0] SET[0] Group Mask[2]   0xffffffffffffffff
+   SSOW HWS[0] SET[0] Group Mask[3]   0xffffffffffffffff
+   SSOW HWS[0] SET[1] Group Mask[0]   0xffffffffffffffff
+   SSOW HWS[0] SET[1] Group Mask[1]   0xffffffffffffffff
+   SSOW HWS[0] SET[1] Group Mask[2]   0xffffffffffffffff
+   SSOW HWS[0] SET[1] Group Mask[3]   0xffffffffffffffff
+   ==================================================
+
+Compile DPDK
+------------
+
+DPDK may be compiled either natively on the OCTEON CN9K/CN10K platform or
+cross-compiled on an x86-based platform.
+
+Native Compilation
+~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+   meson build
+   ninja -C build
+
+Cross Compilation
+~~~~~~~~~~~~~~~~~
+
+Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details.
+
+.. code-block:: console
+
+   meson build --cross-file config/arm/arm64_cn10k_linux_gcc
+   ninja -C build
+
+.. note::
+
+   By default, meson cross compilation uses the ``aarch64-linux-gnu-gcc``
+   toolchain. If the Marvell toolchain is available, it can be used instead by
+   overriding the c, cpp, ar and strip ``binaries`` attributes with the
+   respective Marvell toolchain binaries in the
+   ``config/arm/arm64_cn10k_linux_gcc`` file.
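For illustration, such an override of the cross file's ``[binaries]`` section could look like the sketch below. The ``aarch64-marvell-linux-gnu-`` tool prefix is a hypothetical placeholder, not the actual SDK prefix; substitute whatever prefix the installed Marvell toolchain uses.

```ini
# Hypothetical [binaries] override in config/arm/arm64_cn10k_linux_gcc.
# The tool prefix below is an assumed example, not the real SDK prefix.
[binaries]
c = 'aarch64-marvell-linux-gnu-gcc'
cpp = 'aarch64-marvell-linux-gnu-g++'
ar = 'aarch64-marvell-linux-gnu-ar'
strip = 'aarch64-marvell-linux-gnu-strip'
```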
diff --git a/doc/guides/platform/img/cnxk_packet_flow_hw_accelerators.svg b/doc/guides/platform/img/cnxk_packet_flow_hw_accelerators.svg
new file mode 100644
index 0000000..6d4a492
--- /dev/null
+++ b/doc/guides/platform/img/cnxk_packet_flow_hw_accelerators.svg
@@ -0,0 +1,2795 @@
[SVG figure content omitted: CNXK packet flow with HW accelerators -- ethdev ports (NIX), ingress/egress classification (NPC), Rx/Tx queues, egress traffic manager (NIX), SSO scheduler (poll and/or event mode), HW loopback device, and mempool (NPA), timer (TIM), crypto (CPT) and compress (ZIP) blocks alongside ARMv8/v9 cores and SW libraries (shared memory, SW ring, hash/LPM/ACL, mbuf, de(frag))]
diff --git a/doc/guides/platform/img/cnxk_resource_virtualization.svg b/doc/guides/platform/img/cnxk_resource_virtualization.svg
new file mode 100644
index 0000000..ec89bb7
--- /dev/null
+++ b/doc/guides/platform/img/cnxk_resource_virtualization.svg
@@ -0,0 +1,2428 @@
[SVG figure content omitted: RVU resource virtualization -- Linux AF driver (octeontx2_af, PF0) holding NIX/NPA/SSO/NPC/CPT AFs, CGX/RPM interfaces with FW iface, Linux netdev PF/VF drivers (octeontx2_pf/octeontx2_vf), and DPDK ethdev/eventdev/crypto PF and VF drivers, all connected through AF-PF and PF-VF mailboxes; DPDK-APP1 with one ethdev over the Linux PF, DPDK-APP2 with two ethdevs (PF, VF), eventdev, timer adapter and cryptodev]
diff --git a/doc/guides/platform/index.rst b/doc/guides/platform/index.rst
index f454ef8..7614e1a 100644
--- a/doc/guides/platform/index.rst
+++ b/doc/guides/platform/index.rst
@@ -11,6 +11,7 @@ The following are platform specific guides and setup information.
     :numbered:
 
     bluefield
+    cnxk
     dpaa
     dpaa2
     octeontx

From patchwork Thu Apr 1 12:37:27 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90375
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:27 +0530
Message-ID: <20210401123817.14348-3-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com>
 <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 02/52] common/cnxk: add build infrastructure and
 HW definition

From: Jerin Jacob

Add meson build infrastructure along with the HW definition header files.
This patch also adds arm cross-compile configs for the CN9K and CN10K series
of Marvell SoCs.
Signed-off-by: Jerin Jacob
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Sunil Kumar Kori
Signed-off-by: Pavan Nikhilesh
Signed-off-by: Satha Rao
Signed-off-by: Kiran Kumar K
---
 drivers/common/cnxk/hw/nix.h       | 2191 ++++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/hw/npa.h       |  376 +++++++
 drivers/common/cnxk/hw/npc.h       |  525 +++++++++
 drivers/common/cnxk/hw/rvu.h       |  222 ++++
 drivers/common/cnxk/hw/sdp.h       |  182 +++
 drivers/common/cnxk/hw/sso.h       |  233 ++++
 drivers/common/cnxk/hw/ssow.h      |   70 ++
 drivers/common/cnxk/hw/tim.h       |   49 +
 drivers/common/cnxk/meson.build    |   14 +
 drivers/common/cnxk/roc_api.h      |   69 ++
 drivers/common/cnxk/roc_bitfield.h |   15 +
 drivers/common/cnxk/roc_bits.h     |   32 +
 drivers/common/cnxk/roc_platform.c |    5 +
 drivers/common/cnxk/roc_platform.h |  154 +++
 drivers/common/cnxk/version.map    |    4 +
 drivers/meson.build                |    1 +
 16 files changed, 4142 insertions(+)
 create mode 100644 drivers/common/cnxk/hw/nix.h
 create mode 100644 drivers/common/cnxk/hw/npa.h
 create mode 100644 drivers/common/cnxk/hw/npc.h
 create mode 100644 drivers/common/cnxk/hw/rvu.h
 create mode 100644 drivers/common/cnxk/hw/sdp.h
 create mode 100644 drivers/common/cnxk/hw/sso.h
 create mode 100644 drivers/common/cnxk/hw/ssow.h
 create mode 100644 drivers/common/cnxk/hw/tim.h
 create mode 100644 drivers/common/cnxk/meson.build
 create mode 100644 drivers/common/cnxk/roc_api.h
 create mode 100644 drivers/common/cnxk/roc_bitfield.h
 create mode 100644 drivers/common/cnxk/roc_bits.h
 create mode 100644 drivers/common/cnxk/roc_platform.c
 create mode 100644 drivers/common/cnxk/roc_platform.h
 create mode 100644 drivers/common/cnxk/version.map
diff --git a/drivers/common/cnxk/hw/nix.h b/drivers/common/cnxk/hw/nix.h
new file mode 100644
index 0000000..6b86002
--- /dev/null
+++ b/drivers/common/cnxk/hw/nix.h
@@ -0,0 +1,2191 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __NIX_HW_H__
+#define __NIX_HW_H__
+
+/* Register offsets */
+
+#define NIX_AF_CFG (0x0ull)
+#define NIX_AF_STATUS (0x10ull)
+#define NIX_AF_NDC_CFG (0x18ull)
+#define NIX_AF_CONST (0x20ull)
+#define NIX_AF_CONST1 (0x28ull)
+#define NIX_AF_CONST2 (0x30ull)
+#define NIX_AF_CONST3 (0x38ull)
+#define NIX_AF_SQ_CONST (0x40ull)
+#define NIX_AF_CQ_CONST (0x48ull)
+#define NIX_AF_RQ_CONST (0x50ull)
+#define NIX_AF_PL_CONST (0x58ull) /* [CN10K, .) */
+#define NIX_AF_PSE_CONST (0x60ull)
+#define NIX_AF_TL1_CONST (0x70ull)
+#define NIX_AF_TL2_CONST (0x78ull)
+#define NIX_AF_TL3_CONST (0x80ull)
+#define NIX_AF_TL4_CONST (0x88ull)
+#define NIX_AF_MDQ_CONST (0x90ull)
+#define NIX_AF_MC_MIRROR_CONST (0x98ull)
+#define NIX_AF_LSO_CFG (0xa8ull)
+#define NIX_AF_BLK_RST (0xb0ull)
+#define NIX_AF_TX_TSTMP_CFG (0xc0ull)
+#define NIX_AF_PL_TS (0xc8ull) /* [CN10K, .) */
+#define NIX_AF_RX_CFG (0xd0ull)
+#define NIX_AF_AVG_DELAY (0xe0ull)
+#define NIX_AF_CINT_DELAY (0xf0ull)
+#define NIX_AF_VWQE_TIMER (0xf8ull) /* [CN10K, .) */
+#define NIX_AF_RX_MCAST_BASE (0x100ull)
+#define NIX_AF_RX_MCAST_CFG (0x110ull)
+#define NIX_AF_RX_MCAST_BUF_BASE (0x120ull)
+#define NIX_AF_RX_MCAST_BUF_CFG (0x130ull)
+#define NIX_AF_RX_MIRROR_BUF_BASE (0x140ull)
+#define NIX_AF_RX_MIRROR_BUF_CFG (0x148ull)
+#define NIX_AF_LF_RST (0x150ull)
+#define NIX_AF_GEN_INT (0x160ull)
+#define NIX_AF_GEN_INT_W1S (0x168ull)
+#define NIX_AF_GEN_INT_ENA_W1S (0x170ull)
+#define NIX_AF_GEN_INT_ENA_W1C (0x178ull)
+#define NIX_AF_ERR_INT (0x180ull)
+#define NIX_AF_ERR_INT_W1S (0x188ull)
+#define NIX_AF_ERR_INT_ENA_W1S (0x190ull)
+#define NIX_AF_ERR_INT_ENA_W1C (0x198ull)
+#define NIX_AF_RAS (0x1a0ull)
+#define NIX_AF_RAS_W1S (0x1a8ull)
+#define NIX_AF_RAS_ENA_W1S (0x1b0ull)
+#define NIX_AF_RAS_ENA_W1C (0x1b8ull)
+#define NIX_AF_RVU_INT (0x1c0ull)
+#define NIX_AF_RVU_INT_W1S (0x1c8ull)
+#define NIX_AF_RVU_INT_ENA_W1S (0x1d0ull)
+#define NIX_AF_RVU_INT_ENA_W1C (0x1d8ull)
+#define NIX_AF_TCP_TIMER (0x1e0ull)
+/* [CN10k, .) */
+#define NIX_AF_RX_DEF_ETX(a) (0x1f0ull | (uint64_t)(a) << 3)
+#define NIX_AF_RX_DEF_OL2 (0x200ull)
+#define NIX_AF_RX_DEF_GEN0_COLOR (0x208ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_OIP4 (0x210ull)
+#define NIX_AF_RX_DEF_GEN1_COLOR (0x218ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_IIP4 (0x220ull)
+#define NIX_AF_RX_DEF_VLAN0_PCP_DEI (0x228ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_OIP6 (0x230ull)
+#define NIX_AF_RX_DEF_VLAN1_PCP_DEI (0x238ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_IIP6 (0x240ull)
+#define NIX_AF_RX_DEF_OTCP (0x250ull)
+#define NIX_AF_RX_DEF_ITCP (0x260ull)
+#define NIX_AF_RX_DEF_OUDP (0x270ull)
+#define NIX_AF_RX_DEF_IUDP (0x280ull)
+#define NIX_AF_RX_DEF_OSCTP (0x290ull)
+#define NIX_AF_RX_DEF_CST_APAD_0 (0x298ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_ISCTP (0x2a0ull)
+#define NIX_AF_RX_DEF_CST_APAD_1 (0x2a8ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_IPSECX(a) (0x2b0ull | (uint64_t)(a) << 3)
+#define NIX_AF_RX_DEF_IIP4_DSCP (0x2e0ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_OIP4_DSCP (0x2e8ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_IIP6_DSCP (0x2f0ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_OIP6_DSCP (0x2f8ull) /* [CN10K, .) */
+#define NIX_AF_RX_IPSEC_GEN_CFG (0x300ull)
+#define NIX_AF_RX_IPSEC_VWQE_GEN_CFG (0x310ull) /* [CN10K, .) */
+#define NIX_AF_RX_CPTX_INST_QSEL(a) (0x320ull | (uint64_t)(a) << 3)
+#define NIX_AF_RX_CPTX_CREDIT(a) (0x360ull | (uint64_t)(a) << 3)
+#define NIX_AF_NDC_RX_SYNC (0x3e0ull)
+#define NIX_AF_NDC_TX_SYNC (0x3f0ull)
+#define NIX_AF_AQ_CFG (0x400ull)
+#define NIX_AF_AQ_BASE (0x410ull)
+#define NIX_AF_AQ_STATUS (0x420ull)
+#define NIX_AF_AQ_DOOR (0x430ull)
+#define NIX_AF_AQ_DONE_WAIT (0x440ull)
+#define NIX_AF_AQ_DONE (0x450ull)
+#define NIX_AF_AQ_DONE_ACK (0x460ull)
+#define NIX_AF_AQ_DONE_TIMER (0x470ull)
+#define NIX_AF_AQ_DONE_ENA_W1S (0x490ull)
+#define NIX_AF_AQ_DONE_ENA_W1C (0x498ull)
+#define NIX_AF_RX_LINKX_CFG(a) (0x540ull | (uint64_t)(a) << 16)
+#define NIX_AF_RX_SW_SYNC (0x550ull)
+#define NIX_AF_RX_LINKX_WRR_CFG(a) (0x560ull | (uint64_t)(a) << 16)
+#define NIX_AF_SEB_CFG (0x5f0ull) /* [CN10K, .) */
+#define NIX_AF_EXPR_TX_FIFO_STATUS (0x640ull) /* [CN9K, CN10K) */
+#define NIX_AF_NORM_TX_FIFO_STATUS (0x648ull)
+#define NIX_AF_SDP_TX_FIFO_STATUS (0x650ull)
+#define NIX_AF_TX_NPC_CAPTURE_CONFIG (0x660ull)
+#define NIX_AF_TX_NPC_CAPTURE_INFO (0x668ull)
+#define NIX_AF_TX_NPC_CAPTURE_RESPX(a) (0x680ull | (uint64_t)(a) << 3)
+#define NIX_AF_SEB_ACTIVE_CYCLES_PCX(a) (0x6c0ull | (uint64_t)(a) << 3)
+#define NIX_AF_SMQX_CFG(a) (0x700ull | (uint64_t)(a) << 16)
+#define NIX_AF_SMQX_HEAD(a) (0x710ull | (uint64_t)(a) << 16)
+#define NIX_AF_SMQX_TAIL(a) (0x720ull | (uint64_t)(a) << 16)
+#define NIX_AF_SMQX_STATUS(a) (0x730ull | (uint64_t)(a) << 16)
+#define NIX_AF_SMQX_NXT_HEAD(a) (0x740ull | (uint64_t)(a) << 16)
+#define NIX_AF_SQM_ACTIVE_CYCLES_PC (0x770ull)
+#define NIX_AF_SQM_SCLK_CNT (0x780ull) /* [CN10K, .) */
+#define NIX_AF_DWRR_SDP_MTU (0x790ull) /* [CN10K, .) */
*/ +#define NIX_AF_DWRR_RPM_MTU (0x7a0ull) /* [CN10K, .) */ +#define NIX_AF_PSE_CHANNEL_LEVEL (0x800ull) +#define NIX_AF_PSE_SHAPER_CFG (0x810ull) +#define NIX_AF_PSE_ACTIVE_CYCLES_PC (0x8c0ull) +#define NIX_AF_MARK_FORMATX_CTL(a) (0x900ull | (uint64_t)(a) << 18) +#define NIX_AF_TX_LINKX_NORM_CREDIT(a) (0xa00ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TX_LINKX_EXPR_CREDIT(a) (0xa10ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TX_LINKX_SW_XOFF(a) (0xa20ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_TX_LINKX_NORM_CDT_ADJ(a) (0xa20ull | (uint64_t)(a) << 16) +#define NIX_AF_TX_LINKX_HW_XOFF(a) (0xa30ull | (uint64_t)(a) << 16) +#define NIX_AF_SDP_LINK_CREDIT (0xa40ull) +#define NIX_AF_SDP_LINK_CDT_ADJ (0xa50ull) /* [CN10K, .) */ +/* [CN9K, CN10K) */ +#define NIX_AF_SDP_SW_XOFFX(a) (0xa60ull | (uint64_t)(a) << 3) +#define NIX_AF_SDP_HW_XOFFX(a) (0xac0ull | (uint64_t)(a) << 3) +#define NIX_AF_TL4X_BP_STATUS(a) (0xb00ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xb10ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1_TW_ARB_CTL_DEBUG (0xbc0ull) /* [CN10K, .) */ +#define NIX_AF_TL1_TW_ARB_REQ_DEBUG (0xbc8ull) /* [CN10K, .) */ +#define NIX_AF_TL1X_SCHEDULE(a) (0xc00ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_SHAPE(a) (0xc10ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_CIR(a) (0xc20ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL1X_SHAPE_STATE(a) (0xc50ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_TL1X_SHAPE_STATE_CIR(a) (0xc50ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_SW_XOFF(a) (0xc70ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_TOPOLOGY(a) (0xc80ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_MD_DEBUG0(a) (0xcc0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_MD_DEBUG1(a) (0xcc8ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL1X_MD_DEBUG2(a) (0xcd0ull | (uint64_t)(a) << 16) +/* [CN10K, .) 
*/ +#define NIX_AF_TL2X_SHAPE_STATE_CIR(a) (0xcd0ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL1X_MD_DEBUG3(a) (0xcd8ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_DROPPED_PACKETS(a) (0xd20ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_DROPPED_BYTES(a) (0xd30ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_RED_PACKETS(a) (0xd40ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_RED_BYTES(a) (0xd50ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_YELLOW_PACKETS(a) (0xd60ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_YELLOW_BYTES(a) (0xd70ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_GREEN_PACKETS(a) (0xd80ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_GREEN_BYTES(a) (0xd90ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQ_MD_COUNT (0xda0ull) /* [CN10K, .) */ +/* [CN10K, .) */ +#define NIX_AF_MDQX_OUT_MD_COUNT(a) (0xdb0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2_TW_ARB_CTL_DEBUG (0xdc0ull) /* [CN10K, .) */ +/* [CN10K, .) */ +#define NIX_AF_TL2_TWX_ARB_REQ_DEBUG0(a) (0xdc8ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_TL2_TWX_ARB_REQ_DEBUG1(a) (0xdd0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_SCHEDULE(a) (0xe00ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_SHAPE(a) (0xe10ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_CIR(a) (0xe20ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_PIR(a) (0xe30ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_SCHED_STATE(a) (0xe40ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL2X_SHAPE_STATE(a) (0xe50ull | (uint64_t)(a) << 16) +/* [CN10K, .) 
*/ +#define NIX_AF_TL2X_SHAPE_STATE_PIR(a) (0xe50ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_SW_XOFF(a) (0xe70ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_TOPOLOGY(a) (0xe80ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_PARENT(a) (0xe88ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_MD_DEBUG0(a) (0xec0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_MD_DEBUG1(a) (0xec8ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL2X_MD_DEBUG2(a) (0xed0ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_TL3X_SHAPE_STATE_CIR(a) (0xed0ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL2X_MD_DEBUG3(a) (0xed8ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3_TW_ARB_CTL_DEBUG (0xfc0ull) /* [CN10K, .) */ +/* [CN10k, .) */ +#define NIX_AF_TL3_TWX_ARB_REQ_DEBUG0(a) (0xfc8ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_TL3_TWX_ARB_REQ_DEBUG1(a) (0xfd0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_SCHEDULE(a) (0x1000ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_SHAPE(a) (0x1010ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_CIR(a) (0x1020ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_PIR(a) (0x1030ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_SCHED_STATE(a) (0x1040ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL3X_SHAPE_STATE(a) (0x1050ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_TL3X_SHAPE_STATE_PIR(a) (0x1050ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_SW_XOFF(a) (0x1070ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_TOPOLOGY(a) (0x1080ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_PARENT(a) (0x1088ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_MD_DEBUG0(a) (0x10c0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_MD_DEBUG1(a) (0x10c8ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL3X_MD_DEBUG2(a) (0x10d0ull | (uint64_t)(a) << 16) +/* [CN10K, .) 
*/ +#define NIX_AF_TL4X_SHAPE_STATE_CIR(a) (0x10d0ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL3X_MD_DEBUG3(a) (0x10d8ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4_TW_ARB_CTL_DEBUG (0x11c0ull) /* [CN10K, .) */ +/* [CN10K, .) */ +#define NIX_AF_TL4_TWX_ARB_REQ_DEBUG0(a) (0x11c8ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_TL4_TWX_ARB_REQ_DEBUG1(a) (0x11d0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SCHEDULE(a) (0x1200ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SHAPE(a) (0x1210ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_CIR(a) (0x1220ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_PIR(a) (0x1230ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SCHED_STATE(a) (0x1240ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SHAPE_STATE(a) (0x1250ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SW_XOFF(a) (0x1270ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_TOPOLOGY(a) (0x1280ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_PARENT(a) (0x1288ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_MD_DEBUG0(a) (0x12c0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_MD_DEBUG1(a) (0x12c8ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL4X_MD_DEBUG2(a) (0x12d0ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_MDQX_SHAPE_STATE_CIR(a) (0x12d0ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL4X_MD_DEBUG3(a) (0x12d8ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQ_TW_ARB_CTL_DEBUG (0x13c0ull) /* [CN10K, .) */ +/* [CN10K, .) */ +#define NIX_AF_MDQ_TWX_ARB_REQ_DEBUG0(a) (0x13c8ull | (uint64_t)(a) << 16) +/* [CN10K, .) 
*/ +#define NIX_AF_MDQ_TWX_ARB_REQ_DEBUG1(a) (0x13d0ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_SCHEDULE(a) (0x1400ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_SHAPE(a) (0x1410ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_CIR(a) (0x1420ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_PIR(a) (0x1430ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_SCHED_STATE(a) (0x1440ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_MDQX_SHAPE_STATE(a) (0x1450ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_MDQX_SHAPE_STATE_PIR(a) (0x1450ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_SW_XOFF(a) (0x1470ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_PARENT(a) (0x1480ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_MD_DEBUG(a) (0x14c0ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_MDQX_IN_MD_COUNT(a) (0x14e0ull | (uint64_t)(a) << 16) +/* [CN9K, CN10K) */ +#define NIX_AF_TL3_TL2X_CFG(a) (0x1600ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3_TL2X_BP_STATUS(a) (0x1610ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b) \ + (0x1700ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) +#define NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(a, b) \ + (0x1800ull | (uint64_t)(a) << 18 | (uint64_t)(b) << 3) +#define NIX_AF_TX_MCASTX(a) (0x1900ull | (uint64_t)(a) << 15) +#define NIX_AF_TX_VTAG_DEFX_CTL(a) (0x1a00ull | (uint64_t)(a) << 16) +#define NIX_AF_TX_VTAG_DEFX_DATA(a) (0x1a10ull | (uint64_t)(a) << 16) +#define NIX_AF_RX_BPIDX_STATUS(a) (0x1a20ull | (uint64_t)(a) << 17) +#define NIX_AF_RX_CHANX_CFG(a) (0x1a30ull | (uint64_t)(a) << 15) +#define NIX_AF_CINT_TIMERX(a) (0x1a40ull | (uint64_t)(a) << 18) +#define NIX_AF_LSO_FORMATX_FIELDX(a, b) \ + (0x1b00ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) +#define NIX_AF_LFX_CFG(a) (0x4000ull | (uint64_t)(a) << 17) +/* [CN10K, .) 
*/ +#define NIX_AF_LINKX_CFG(a) (0x4010ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_SQS_CFG(a) (0x4020ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_TX_CFG2(a) (0x4028ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_SQS_BASE(a) (0x4030ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RQS_CFG(a) (0x4040ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RQS_BASE(a) (0x4050ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_CQS_CFG(a) (0x4060ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_CQS_BASE(a) (0x4070ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_TX_CFG(a) (0x4080ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_TX_PARSE_CFG(a) (0x4090ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_CFG(a) (0x40a0ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RSS_CFG(a) (0x40c0ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RSS_BASE(a) (0x40d0ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_QINTS_CFG(a) (0x4100ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_QINTS_BASE(a) (0x4110ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_CINTS_CFG(a) (0x4120ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_CINTS_BASE(a) (0x4130ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_IPSEC_CFG0(a) (0x4140ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_IPSEC_CFG1(a) (0x4148ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_IPSEC_DYNO_CFG(a) (0x4150ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_IPSEC_DYNO_BASE(a) (0x4158ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_IPSEC_SA_BASE(a) (0x4170ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_TX_STATUS(a) (0x4180ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_VTAG_TYPEX(a, b) \ + (0x4200ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3) +#define NIX_AF_LFX_LOCKX(a, b) \ + (0x4300ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3) +#define NIX_AF_LFX_TX_STATX(a, b) \ + (0x4400ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3) +#define NIX_AF_LFX_RX_STATX(a, b) \ + (0x4500ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3) +#define NIX_AF_LFX_RSS_GRPX(a, b) \ + (0x4600ull | 
(uint64_t)(a) << 17 | (uint64_t)(b) << 3) +#define NIX_AF_RX_NPC_MC_RCV (0x4700ull) +#define NIX_AF_RX_NPC_MC_DROP (0x4710ull) +#define NIX_AF_RX_NPC_MIRROR_RCV (0x4720ull) +#define NIX_AF_RX_NPC_MIRROR_DROP (0x4730ull) +/* [CN10K, .) */ +#define NIX_AF_LFX_VWQE_NORM_COMPL(a) (0x4740ull | (uint64_t)(a) << 17) +/* [CN10K, .) */ +#define NIX_AF_LFX_VWQE_RLS_TIMEOUT(a) (0x4750ull | (uint64_t)(a) << 17) +/* [CN10K, .) */ +#define NIX_AF_LFX_VWQE_HASH_FULL(a) (0x4760ull | (uint64_t)(a) << 17) +/* [CN10K, .) */ +#define NIX_AF_LFX_VWQE_SA_FULL(a) (0x4770ull | (uint64_t)(a) << 17) +#define NIX_AF_VWQE_HASH_FUNC_MASK (0x47a0ull) /* [CN10K, .) */ +#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) (0x4800ull | (uint64_t)(a) << 16) +/* [CN10K, .) */ +#define NIX_AF_RX_LINKX_WRR_OUT_CFG(a) (0x4a00ull | (uint64_t)(a) << 16) +#define NIX_PRIV_AF_INT_CFG (0x8000000ull) +#define NIX_PRIV_LFX_CFG(a) (0x8000010ull | (uint64_t)(a) << 8) +#define NIX_PRIV_LFX_INT_CFG(a) (0x8000020ull | (uint64_t)(a) << 8) +#define NIX_AF_RVU_LF_CFG_DEBUG (0x8000030ull) + +#define NIX_LF_RX_SECRETX(a) (0x0ull | (uint64_t)(a) << 3) +#define NIX_LF_CFG (0x100ull) +#define NIX_LF_GINT (0x200ull) +#define NIX_LF_GINT_W1S (0x208ull) +#define NIX_LF_GINT_ENA_W1C (0x210ull) +#define NIX_LF_GINT_ENA_W1S (0x218ull) +#define NIX_LF_ERR_INT (0x220ull) +#define NIX_LF_ERR_INT_W1S (0x228ull) +#define NIX_LF_ERR_INT_ENA_W1C (0x230ull) +#define NIX_LF_ERR_INT_ENA_W1S (0x238ull) +#define NIX_LF_RAS (0x240ull) +#define NIX_LF_RAS_W1S (0x248ull) +#define NIX_LF_RAS_ENA_W1C (0x250ull) +#define NIX_LF_RAS_ENA_W1S (0x258ull) +#define NIX_LF_SQ_OP_ERR_DBG (0x260ull) +#define NIX_LF_MNQ_ERR_DBG (0x270ull) +#define NIX_LF_SEND_ERR_DBG (0x280ull) +#define NIX_LF_TX_STATX(a) (0x300ull | (uint64_t)(a) << 3) +#define NIX_LF_RX_STATX(a) (0x400ull | (uint64_t)(a) << 3) +#define NIX_LF_OP_SENDX(a) (0x800ull | (uint64_t)(a) << 3) +#define NIX_LF_RQ_OP_INT (0x900ull) +#define NIX_LF_RQ_OP_OCTS (0x910ull) +#define NIX_LF_RQ_OP_PKTS (0x920ull) 
+#define NIX_LF_RQ_OP_DROP_OCTS (0x930ull) +#define NIX_LF_RQ_OP_DROP_PKTS (0x940ull) +#define NIX_LF_RQ_OP_RE_PKTS (0x950ull) +#define NIX_LF_OP_IPSEC_DYNO_CNT (0x980ull) +#define NIX_LF_OP_VWQE_FLUSH (0x9a0ull) /* [CN10K, .) */ +#define NIX_LF_PL_OP_BAND_PROF (0x9c0ull) /* [CN10K, .) */ +#define NIX_LF_SQ_OP_INT (0xa00ull) +#define NIX_LF_SQ_OP_OCTS (0xa10ull) +#define NIX_LF_SQ_OP_PKTS (0xa20ull) +#define NIX_LF_SQ_OP_STATUS (0xa30ull) +#define NIX_LF_SQ_OP_DROP_OCTS (0xa40ull) +#define NIX_LF_SQ_OP_DROP_PKTS (0xa50ull) +#define NIX_LF_CQ_OP_INT (0xb00ull) +#define NIX_LF_CQ_OP_DOOR (0xb30ull) +#define NIX_LF_CQ_OP_STATUS (0xb40ull) +#define NIX_LF_QINTX_CNT(a) (0xc00ull | (uint64_t)(a) << 12) +#define NIX_LF_QINTX_INT(a) (0xc10ull | (uint64_t)(a) << 12) +#define NIX_LF_QINTX_ENA_W1S(a) (0xc20ull | (uint64_t)(a) << 12) +#define NIX_LF_QINTX_ENA_W1C(a) (0xc30ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_CNT(a) (0xd00ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_WAIT(a) (0xd10ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_INT(a) (0xd20ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_INT_W1S(a) (0xd30ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_ENA_W1S(a) (0xd40ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_ENA_W1C(a) (0xd50ull | (uint64_t)(a) << 12) +/* [CN10K, .) */ +#define NIX_LF_RX_GEN_COLOR_CONVX(a) (0x4740ull | (uint64_t)(a) << 3) +#define NIX_LF_RX_VLAN0_COLOR_CONV (0x4760ull) /* [CN10K, .) */ +#define NIX_LF_RX_VLAN1_COLOR_CONV (0x4768ull) /* [CN10K, .) */ +#define NIX_LF_RX_IIP_COLOR_CONV_LO (0x4770ull) /* [CN10K, .) */ +#define NIX_LF_RX_IIP_COLOR_CONV_HI (0x4778ull) /* [CN10K, .) */ +#define NIX_LF_RX_OIP_COLOR_CONV_LO (0x4780ull) /* [CN10K, .) */ +#define NIX_LF_RX_OIP_COLOR_CONV_HI (0x4788ull) /* [CN10K, .) 
*/ + +/* Enum offsets */ + +#define NIX_STAT_LF_TX_TX_UCAST (0x0ull) +#define NIX_STAT_LF_TX_TX_BCAST (0x1ull) +#define NIX_STAT_LF_TX_TX_MCAST (0x2ull) +#define NIX_STAT_LF_TX_TX_DROP (0x3ull) +#define NIX_STAT_LF_TX_TX_OCTS (0x4ull) + +#define NIX_STAT_LF_RX_RX_OCTS (0x0ull) +#define NIX_STAT_LF_RX_RX_UCAST (0x1ull) +#define NIX_STAT_LF_RX_RX_BCAST (0x2ull) +#define NIX_STAT_LF_RX_RX_MCAST (0x3ull) +#define NIX_STAT_LF_RX_RX_DROP (0x4ull) +#define NIX_STAT_LF_RX_RX_DROP_OCTS (0x5ull) +#define NIX_STAT_LF_RX_RX_FCS (0x6ull) +#define NIX_STAT_LF_RX_RX_ERR (0x7ull) +#define NIX_STAT_LF_RX_RX_DRP_BCAST (0x8ull) +#define NIX_STAT_LF_RX_RX_DRP_MCAST (0x9ull) +#define NIX_STAT_LF_RX_RX_DRP_L3BCAST (0xaull) +#define NIX_STAT_LF_RX_RX_DRP_L3MCAST (0xbull) + +#define NIX_STAT_LF_RX_RX_GC_OCTS_PASSED (0xcull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_GC_PKTS_PASSED (0xdull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_YC_OCTS_PASSED (0xeull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_YC_PKTS_PASSED (0xfull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_RC_OCTS_PASSED (0x10ull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_RC_PKTS_PASSED (0x11ull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_GC_OCTS_DROP (0x12ull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_GC_PKTS_DROP (0x13ull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_YC_OCTS_DROP (0x14ull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_YC_PKTS_DROP (0x15ull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_RC_OCTS_DROP (0x16ull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_RC_PKTS_DROP (0x17ull) /* [CN10K, .) */ +#define NIX_STAT_LF_RX_RX_CPT_DROP_PKTS (0x18ull) /* [CN10K, .) 
*/ + +#define CGX_RX_PKT_CNT (0x0ull) /* [CN9K, CN10K) */ +#define CGX_RX_OCT_CNT (0x1ull) /* [CN9K, CN10K) */ +#define CGX_RX_PAUSE_PKT_CNT (0x2ull) /* [CN9K, CN10K) */ +#define CGX_RX_PAUSE_OCT_CNT (0x3ull) /* [CN9K, CN10K) */ +#define CGX_RX_DMAC_FILT_PKT_CNT (0x4ull) /* [CN9K, CN10K) */ +#define CGX_RX_DMAC_FILT_OCT_CNT (0x5ull) /* [CN9K, CN10K) */ +#define CGX_RX_FIFO_DROP_PKT_CNT (0x6ull) /* [CN9K, CN10K) */ +#define CGX_RX_FIFO_DROP_OCT_CNT (0x7ull) /* [CN9K, CN10K) */ +#define CGX_RX_ERR_CNT (0x8ull) /* [CN9K, CN10K) */ + +#define CGX_TX_COLLISION_DROP (0x0ull) /* [CN9K, CN10K) */ +#define CGX_TX_FRAME_DEFER_CNT (0x1ull) /* [CN9K, CN10K) */ +#define CGX_TX_MULTIPLE_COLLISION (0x2ull) /* [CN9K, CN10K) */ +#define CGX_TX_SINGLE_COLLISION (0x3ull) /* [CN9K, CN10K) */ +#define CGX_TX_OCT_CNT (0x4ull) /* [CN9K, CN10K) */ +#define CGX_TX_PKT_CNT (0x5ull) /* [CN9K, CN10K) */ +#define CGX_TX_1_63_PKT_CNT (0x6ull) /* [CN9K, CN10K) */ +#define CGX_TX_64_PKT_CNT (0x7ull) /* [CN9K, CN10K) */ +#define CGX_TX_65_127_PKT_CNT (0x8ull) /* [CN9K, CN10K) */ +#define CGX_TX_128_255_PKT_CNT (0x9ull) /* [CN9K, CN10K) */ +#define CGX_TX_256_511_PKT_CNT (0xaull) /* [CN9K, CN10K) */ +#define CGX_TX_512_1023_PKT_CNT (0xbull) /* [CN9K, CN10K) */ +#define CGX_TX_1024_1518_PKT_CNT (0xcull) /* [CN9K, CN10K) */ +#define CGX_TX_1519_MAX_PKT_CNT (0xdull) /* [CN9K, CN10K) */ +#define CGX_TX_BCAST_PKTS (0xeull) /* [CN9K, CN10K) */ +#define CGX_TX_MCAST_PKTS (0xfull) /* [CN9K, CN10K) */ +#define CGX_TX_UFLOW_PKTS (0x10ull) /* [CN9K, CN10K) */ +#define CGX_TX_PAUSE_PKTS (0x11ull) /* [CN9K, CN10K) */ + +#define RPM_MTI_STAT_RX_OCT_CNT (0x0ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_OCT_RECV_OK (0x1ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_ALIG_ERR (0x2ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CTRL_FRM_RECV (0x3ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_FRM_LONG (0x4ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_LEN_ERR (0x5ull) /* [CN10K, .) 
*/ +#define RPM_MTI_STAT_RX_FRM_RECV (0x6ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_FRM_SEQ_ERR (0x7ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_VLAN_OK (0x8ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_IN_ERR (0x9ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_IN_UCAST_PKT (0xaull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_IN_MCAST_PKT (0xbull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_IN_BCAST_PKT (0xcull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_DRP_EVENTS (0xdull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_PKT (0xeull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_UNDER_SIZE (0xfull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_1_64_PKT_CNT (0x10ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_65_127_PKT_CNT (0x11ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_128_255_PKT_CNT (0x12ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_256_511_PKT_CNT (0x13ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_512_1023_PKT_CNT (0x14ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_1024_1518_PKT_CNT (0x15ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_1519_MAX_PKT_CNT (0x16ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_OVER_SIZE (0x17ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_JABBER (0x18ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_ETH_FRAGS (0x19ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_0 (0x1aull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_1 (0x1bull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_2 (0x1cull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_3 (0x1dull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_4 (0x1eull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_5 (0x1full) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_6 (0x20ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_7 (0x21ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_8 (0x22ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_9 (0x23ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_10 (0x24ull) /* [CN10K, .) 
*/ +#define RPM_MTI_STAT_RX_CBFC_CLASS_11 (0x25ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_12 (0x26ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_13 (0x27ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_14 (0x28ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_CBFC_CLASS_15 (0x29ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_RX_MAC_CONTROL (0x2aull) /* [CN10K, .) */ + +#define RPM_MTI_STAT_TX_OCT_CNT (0x0ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_OCT_TX_OK (0x1ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_PAUSE_MAC_CTRL (0x2ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_FRAMES_OK (0x3ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_VLAN_OK (0x4ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_OUT_ERR (0x5ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_UCAST_PKT_CNT (0x6ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_MCAST_PKT_CNT (0x7ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_BCAST_PKT_CNT (0x8ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_1_64_PKT_CNT (0x9ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_65_127_PKT_CNT (0xaull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_128_255_PKT_CNT (0xbull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_256_511_PKT_CNT (0xcull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_512_1023_PKT_CNT (0xdull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_1024_1518_PKT_CNT (0xeull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_1519_MAX_PKT_CNT (0xfull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_0 (0x10ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_1 (0x11ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_2 (0x12ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_3 (0x13ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_4 (0x14ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_5 (0x15ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_6 (0x16ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_7 (0x17ull) /* [CN10K, .) 
*/ +#define RPM_MTI_STAT_TX_CBFC_CLASS_8 (0x18ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_9 (0x19ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_10 (0x1aull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_11 (0x1bull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_12 (0x1cull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_13 (0x1dull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_14 (0x1eull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_CBFC_CLASS_15 (0x1full) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_MAC_CONTROL_FRAMES (0x20ull) /* [CN10K, .) */ +#define RPM_MTI_STAT_TX_PKT_CNT (0x21ull) /* [CN10K, .) */ + +#define NIX_SQOPERR_SQ_OOR (0x0ull) +#define NIX_SQOPERR_SQ_CTX_FAULT (0x1ull) +#define NIX_SQOPERR_SQ_CTX_POISON (0x2ull) +#define NIX_SQOPERR_SQ_DISABLED (0x3ull) +#define NIX_SQOPERR_MAX_SQE_SIZE_ERR (0x4ull) +#define NIX_SQOPERR_SQE_OFLOW (0x5ull) +#define NIX_SQOPERR_SQB_NULL (0x6ull) +#define NIX_SQOPERR_SQB_FAULT (0x7ull) +#define NIX_SQOPERR_SQE_SIZEM1_ZERO (0x8ull) /* [CN10K, .) 
*/ + +#define NIX_SQINT_LMT_ERR (0x0ull) +#define NIX_SQINT_MNQ_ERR (0x1ull) +#define NIX_SQINT_SEND_ERR (0x2ull) +#define NIX_SQINT_SQB_ALLOC_FAIL (0x3ull) + +#define NIX_SEND_STATUS_GOOD (0x0ull) +#define NIX_SEND_STATUS_SQ_CTX_FAULT (0x1ull) +#define NIX_SEND_STATUS_SQ_CTX_POISON (0x2ull) +#define NIX_SEND_STATUS_SQB_FAULT (0x3ull) +#define NIX_SEND_STATUS_SQB_POISON (0x4ull) +#define NIX_SEND_STATUS_SEND_HDR_ERR (0x5ull) +#define NIX_SEND_STATUS_SEND_EXT_ERR (0x6ull) +#define NIX_SEND_STATUS_JUMP_FAULT (0x7ull) +#define NIX_SEND_STATUS_JUMP_POISON (0x8ull) +#define NIX_SEND_STATUS_SEND_CRC_ERR (0x10ull) +#define NIX_SEND_STATUS_SEND_IMM_ERR (0x11ull) +#define NIX_SEND_STATUS_SEND_SG_ERR (0x12ull) +#define NIX_SEND_STATUS_SEND_MEM_ERR (0x13ull) +#define NIX_SEND_STATUS_INVALID_SUBDC (0x14ull) +#define NIX_SEND_STATUS_SUBDC_ORDER_ERR (0x15ull) +#define NIX_SEND_STATUS_DATA_FAULT (0x16ull) +#define NIX_SEND_STATUS_DATA_POISON (0x17ull) +#define NIX_SEND_STATUS_NPC_DROP_ACTION (0x20ull) +#define NIX_SEND_STATUS_LOCK_VIOL (0x21ull) +#define NIX_SEND_STATUS_NPC_UCAST_CHAN_ERR (0x22ull) +#define NIX_SEND_STATUS_NPC_MCAST_CHAN_ERR (0x23ull) +#define NIX_SEND_STATUS_NPC_MCAST_ABORT (0x24ull) +#define NIX_SEND_STATUS_NPC_VTAG_PTR_ERR (0x25ull) +#define NIX_SEND_STATUS_NPC_VTAG_SIZE_ERR (0x26ull) +#define NIX_SEND_STATUS_SEND_MEM_FAULT (0x27ull) +#define NIX_SEND_STATUS_SEND_STATS_ERR (0x28ull) + +#define NIX_SENDSTATSALG_NOP (0x0ull) +#define NIX_SENDSTATSALG_ADD_PKT_CNT (0x1ull) +#define NIX_SENDSTATSALG_ADD_BYTE_CNT (0x2ull) +#define NIX_SENDSTATSALG_ADD_PKT_BYTE_CNT (0x3ull) +#define NIX_SENDSTATSALG_UPDATE_PKT_CNT_ON_DROP (0x4ull) +#define NIX_SENDSTATSALG_UPDATE_BYTE_CNT_ON_DROP (0x5ull) +#define NIX_SENDSTATSALG_UPDATE_PKT_BYTE_CNT_ON_DROP (0x6ull) + +#define NIX_SENDMEMDSZ_B64 (0x0ull) +#define NIX_SENDMEMDSZ_B32 (0x1ull) +#define NIX_SENDMEMDSZ_B16 (0x2ull) +#define NIX_SENDMEMDSZ_B8 (0x3ull) + +#define NIX_SENDMEMALG_SET (0x0ull) +#define NIX_SENDMEMALG_SETTSTMP 
(0x1ull) +#define NIX_SENDMEMALG_SETRSLT (0x2ull) +#define NIX_SENDMEMALG_ADD (0x8ull) +#define NIX_SENDMEMALG_SUB (0x9ull) +#define NIX_SENDMEMALG_ADDLEN (0xaull) +#define NIX_SENDMEMALG_SUBLEN (0xbull) +#define NIX_SENDMEMALG_ADDMBUF (0xcull) +#define NIX_SENDMEMALG_SUBMBUF (0xdull) + +#define NIX_SUBDC_NOP (0x0ull) +#define NIX_SUBDC_EXT (0x1ull) +#define NIX_SUBDC_CRC (0x2ull) +#define NIX_SUBDC_IMM (0x3ull) +#define NIX_SUBDC_SG (0x4ull) +#define NIX_SUBDC_MEM (0x5ull) +#define NIX_SUBDC_JUMP (0x6ull) +#define NIX_SUBDC_WORK (0x7ull) +#define NIX_SUBDC_SG2 (0x8ull) /* [CN10K, .) */ +#define NIX_SUBDC_AGE_AND_STATS (0x9ull) /* [CN10K, .) */ +#define NIX_SUBDC_SOD (0xfull) + +#define NIX_STYPE_STF (0x0ull) +#define NIX_STYPE_STT (0x1ull) +#define NIX_STYPE_STP (0x2ull) + +#define NIX_RX_ACTIONOP_DROP (0x0ull) +#define NIX_RX_ACTIONOP_UCAST (0x1ull) +#define NIX_RX_ACTIONOP_UCAST_IPSEC (0x2ull) +#define NIX_RX_ACTIONOP_MCAST (0x3ull) +#define NIX_RX_ACTIONOP_RSS (0x4ull) +#define NIX_RX_ACTIONOP_PF_FUNC_DROP (0x5ull) +#define NIX_RX_ACTIONOP_MIRROR (0x6ull) + +#define NIX_RX_VTAGACTION_VTAG0_RELPTR (0x0ull) +#define NIX_RX_VTAGACTION_VTAG1_RELPTR (0x4ull) +#define NIX_RX_VTAGACTION_VTAG_VALID (0x1ull) +#define NIX_TX_VTAGACTION_VTAG0_RELPTR (sizeof(struct nix_inst_hdr_s) + 2 * 6) +#define NIX_TX_VTAGACTION_VTAG1_RELPTR \ + (sizeof(struct nix_inst_hdr_s) + 2 * 6 + 4) +#define NIX_RQINT_DROP (0x0ull) +#define NIX_RQINT_RED (0x1ull) +#define NIX_RQINT_R2 (0x2ull) +#define NIX_RQINT_R3 (0x3ull) +#define NIX_RQINT_R4 (0x4ull) +#define NIX_RQINT_R5 (0x5ull) +#define NIX_RQINT_R6 (0x6ull) +#define NIX_RQINT_R7 (0x7ull) + +#define NIX_MAXSQESZ_W16 (0x0ull) +#define NIX_MAXSQESZ_W8 (0x1ull) + +#define NIX_LSOALG_NOP (0x0ull) +#define NIX_LSOALG_ADD_SEGNUM (0x1ull) +#define NIX_LSOALG_ADD_PAYLEN (0x2ull) +#define NIX_LSOALG_ADD_OFFSET (0x3ull) +#define NIX_LSOALG_TCP_FLAGS (0x4ull) + +#define NIX_MNQERR_SQ_CTX_FAULT (0x0ull) +#define NIX_MNQERR_SQ_CTX_POISON (0x1ull) 
+#define NIX_MNQERR_SQB_FAULT (0x2ull)
+#define NIX_MNQERR_SQB_POISON (0x3ull)
+#define NIX_MNQERR_TOTAL_ERR (0x4ull)
+#define NIX_MNQERR_LSO_ERR (0x5ull)
+#define NIX_MNQERR_CQ_QUERY_ERR (0x6ull)
+#define NIX_MNQERR_MAX_SQE_SIZE_ERR (0x7ull)
+#define NIX_MNQERR_MAXLEN_ERR (0x8ull)
+#define NIX_MNQERR_SQE_SIZEM1_ZERO (0x9ull)
+
+#define NIX_MDTYPE_RSVD (0x0ull)
+#define NIX_MDTYPE_FLUSH (0x1ull)
+#define NIX_MDTYPE_PMD (0x2ull)
+
+#define NIX_NDC_TX_PORT_LMT (0x0ull)
+#define NIX_NDC_TX_PORT_ENQ (0x1ull)
+#define NIX_NDC_TX_PORT_MNQ (0x2ull)
+#define NIX_NDC_TX_PORT_DEQ (0x3ull)
+#define NIX_NDC_TX_PORT_DMA (0x4ull)
+#define NIX_NDC_TX_PORT_XQE (0x5ull)
+
+#define NIX_NDC_RX_PORT_AQ (0x0ull)
+#define NIX_NDC_RX_PORT_CQ (0x1ull)
+#define NIX_NDC_RX_PORT_CINT (0x2ull)
+#define NIX_NDC_RX_PORT_MC (0x3ull)
+#define NIX_NDC_RX_PORT_PKT (0x4ull)
+#define NIX_NDC_RX_PORT_RQ (0x5ull)
+
+#define NIX_RE_OPCODE_RE_NONE (0x0ull)
+#define NIX_RE_OPCODE_RE_PARTIAL (0x1ull)
+#define NIX_RE_OPCODE_RE_JABBER (0x2ull)
+#define NIX_RE_OPCODE_RE_FCS (0x7ull)
+#define NIX_RE_OPCODE_RE_FCS_RCV (0x8ull)
+#define NIX_RE_OPCODE_RE_TERMINATE (0x9ull)
+#define NIX_RE_OPCODE_RE_RX_CTL (0xbull)
+#define NIX_RE_OPCODE_RE_SKIP (0xcull)
+#define NIX_RE_OPCODE_RE_DMAPKT (0xfull)
+#define NIX_RE_OPCODE_UNDERSIZE (0x10ull)
+#define NIX_RE_OPCODE_OVERSIZE (0x11ull)
+#define NIX_RE_OPCODE_OL2_LENMISM (0x12ull)
+
+#define NIX_REDALG_STD (0x0ull)
+#define NIX_REDALG_SEND (0x1ull)
+#define NIX_REDALG_STALL (0x2ull)
+#define NIX_REDALG_DISCARD (0x3ull)
+
+#define NIX_RX_BAND_PROF_ACTIONRESULT_PASS (0x0ull) /* [CN10K, .) */
+#define NIX_RX_BAND_PROF_ACTIONRESULT_DROP (0x1ull) /* [CN10K, .) */
+#define NIX_RX_BAND_PROF_ACTIONRESULT_RED (0x2ull) /* [CN10K, .) */
+
+#define NIX_RX_BAND_PROF_LAYER_LEAF (0x0ull) /* [CN10K, .) */
+#define NIX_RX_BAND_PROF_LAYER_MIDDLE (0x1ull) /* [CN10K, .) */
+#define NIX_RX_BAND_PROF_LAYER_TOP (0x2ull) /* [CN10K, .) */
+
+#define NIX_RX_COLORRESULT_GREEN (0x0ull) /* [CN10K, .) */
+#define NIX_RX_COLORRESULT_YELLOW (0x1ull) /* [CN10K, .) */
+#define NIX_RX_COLORRESULT_RED (0x2ull) /* [CN10K, .) */
+
+#define NIX_RX_MCOP_RQ (0x0ull)
+#define NIX_RX_MCOP_RSS (0x1ull)
+
+#define NIX_RX_PERRCODE_NPC_RESULT_ERR (0x2ull)
+#define NIX_RX_PERRCODE_MCAST_FAULT (0x4ull)
+#define NIX_RX_PERRCODE_MIRROR_FAULT (0x5ull)
+#define NIX_RX_PERRCODE_MCAST_POISON (0x6ull)
+#define NIX_RX_PERRCODE_MIRROR_POISON (0x7ull)
+#define NIX_RX_PERRCODE_DATA_FAULT (0x8ull)
+#define NIX_RX_PERRCODE_MEMOUT (0x9ull)
+#define NIX_RX_PERRCODE_BUFS_OFLOW (0xaull)
+#define NIX_RX_PERRCODE_OL3_LEN (0x10ull)
+#define NIX_RX_PERRCODE_OL4_LEN (0x11ull)
+#define NIX_RX_PERRCODE_OL4_CHK (0x12ull)
+#define NIX_RX_PERRCODE_OL4_PORT (0x13ull)
+#define NIX_RX_PERRCODE_IL3_LEN (0x20ull)
+#define NIX_RX_PERRCODE_IL4_LEN (0x21ull)
+#define NIX_RX_PERRCODE_IL4_CHK (0x22ull)
+#define NIX_RX_PERRCODE_IL4_PORT (0x23ull)
+
+#define NIX_SA_ALG_NON_MS (0x0ull) /* [CN10K, .) */
+#define NIX_SA_ALG_MS_CISCO (0x1ull) /* [CN10K, .) */
+#define NIX_SA_ALG_MS_VIPTELA (0x2ull) /* [CN10K, .) */
+
+#define NIX_SENDCRCALG_CRC32 (0x0ull)
+#define NIX_SENDCRCALG_CRC32C (0x1ull)
+#define NIX_SENDCRCALG_ONES16 (0x2ull)
+
+#define NIX_SENDL3TYPE_NONE (0x0ull)
+#define NIX_SENDL3TYPE_IP4 (0x2ull)
+#define NIX_SENDL3TYPE_IP4_CKSUM (0x3ull)
+#define NIX_SENDL3TYPE_IP6 (0x4ull)
+
+#define NIX_SENDL4TYPE_NONE (0x0ull)
+#define NIX_SENDL4TYPE_TCP_CKSUM (0x1ull)
+#define NIX_SENDL4TYPE_SCTP_CKSUM (0x2ull)
+#define NIX_SENDL4TYPE_UDP_CKSUM (0x3ull)
+
+#define NIX_SENDLDTYPE_LDD (0x0ull)
+#define NIX_SENDLDTYPE_LDT (0x1ull)
+#define NIX_SENDLDTYPE_LDWB (0x2ull)
+
+#define NIX_XQESZ_W64 (0x0ull)
+#define NIX_XQESZ_W16 (0x1ull)
+
+#define NIX_XQE_TYPE_INVALID (0x0ull)
+#define NIX_XQE_TYPE_RX (0x1ull)
+#define NIX_XQE_TYPE_RX_IPSECS (0x2ull)
+#define NIX_XQE_TYPE_RX_IPSECH (0x3ull)
+#define NIX_XQE_TYPE_RX_IPSECD (0x4ull)
+#define NIX_XQE_TYPE_RX_VWQE (0x5ull) /* [CN10K, .) */
+#define NIX_XQE_TYPE_RES_6 (0x6ull)
+#define NIX_XQE_TYPE_RES_7 (0x7ull)
+#define NIX_XQE_TYPE_SEND (0x8ull)
+#define NIX_XQE_TYPE_RES_9 (0x9ull)
+#define NIX_XQE_TYPE_RES_A (0xAull)
+#define NIX_XQE_TYPE_RES_B (0xBull)
+#define NIX_XQE_TYPE_RES_C (0xCull)
+#define NIX_XQE_TYPE_RES_D (0xDull)
+#define NIX_XQE_TYPE_RES_E (0xEull)
+#define NIX_XQE_TYPE_RES_F (0xFull)
+
+#define NIX_TX_VTAGOP_NOP (0x0ull)
+#define NIX_TX_VTAGOP_INSERT (0x1ull)
+#define NIX_TX_VTAGOP_REPLACE (0x2ull)
+
+#define NIX_VTAGSIZE_T4 (0x0ull)
+#define NIX_VTAGSIZE_T8 (0x1ull)
+
+#define NIX_TXLAYER_OL3 (0x0ull)
+#define NIX_TXLAYER_OL4 (0x1ull)
+#define NIX_TXLAYER_IL3 (0x2ull)
+#define NIX_TXLAYER_IL4 (0x3ull)
+
+#define NIX_TX_ACTIONOP_DROP (0x0ull)
+#define NIX_TX_ACTIONOP_UCAST_DEFAULT (0x1ull)
+#define NIX_TX_ACTIONOP_UCAST_CHAN (0x2ull)
+#define NIX_TX_ACTIONOP_MCAST (0x3ull)
+#define NIX_TX_ACTIONOP_DROP_VIOL (0x5ull)
+
+#define NIX_AQ_COMP_NOTDONE (0x0ull)
+#define NIX_AQ_COMP_GOOD (0x1ull)
+#define NIX_AQ_COMP_SWERR (0x2ull)
+#define NIX_AQ_COMP_CTX_POISON (0x3ull)
+#define NIX_AQ_COMP_CTX_FAULT (0x4ull)
+#define NIX_AQ_COMP_LOCKERR (0x5ull)
+#define NIX_AQ_COMP_SQB_ALLOC_FAIL (0x6ull)
+
+#define NIX_AF_INT_VEC_RVU (0x0ull)
+#define NIX_AF_INT_VEC_GEN (0x1ull)
+#define NIX_AF_INT_VEC_AQ_DONE (0x2ull)
+#define NIX_AF_INT_VEC_AF_ERR (0x3ull)
+#define NIX_AF_INT_VEC_POISON (0x4ull)
+
+#define NIX_AQINT_GEN_RX_MCAST_DROP (0x0ull)
+#define NIX_AQINT_GEN_RX_MIRROR_DROP (0x1ull)
+#define NIX_AQINT_GEN_TL1_DRAIN (0x3ull)
+#define NIX_AQINT_GEN_SMQ_FLUSH_DONE (0x4ull)
+
+#define NIX_AQ_INSTOP_NOP (0x0ull)
+#define NIX_AQ_INSTOP_INIT (0x1ull)
+#define NIX_AQ_INSTOP_WRITE (0x2ull)
+#define NIX_AQ_INSTOP_READ (0x3ull)
+#define NIX_AQ_INSTOP_LOCK (0x4ull)
+#define NIX_AQ_INSTOP_UNLOCK (0x5ull)
+
+#define NIX_AQ_CTYPE_RQ (0x0ull)
+#define NIX_AQ_CTYPE_SQ (0x1ull)
+#define NIX_AQ_CTYPE_CQ (0x2ull)
+#define NIX_AQ_CTYPE_MCE (0x3ull)
+#define NIX_AQ_CTYPE_RSS (0x4ull)
+#define NIX_AQ_CTYPE_DYNO (0x5ull)
+#define NIX_AQ_CTYPE_BAND_PROF (0x6ull) /* [CN10K, .) */
+
+#define NIX_COLORRESULT_GREEN (0x0ull)
+#define NIX_COLORRESULT_YELLOW (0x1ull)
+#define NIX_COLORRESULT_RED_SEND (0x2ull)
+#define NIX_COLORRESULT_RED_DROP (0x3ull)
+
+#define NIX_CHAN_LBKX_CHX(a, b) \
+	(0x000ull | ((uint64_t)(a) << 8) | (uint64_t)(b))
+#define NIX_CHAN_CPT_CH_END (0x4ffull) /* [CN10K, .) */
+#define NIX_CHAN_CPT_CH_START (0x400ull) /* [CN10K, .) */
+#define NIX_CHAN_R4 (0x400ull) /* [CN9K, CN10K) */
+#define NIX_CHAN_R5 (0x500ull)
+#define NIX_CHAN_R6 (0x600ull)
+#define NIX_CHAN_SDP_CH_END (0x7ffull)
+#define NIX_CHAN_SDP_CH_START (0x700ull)
+/* [CN9K, CN10K) */
+#define NIX_CHAN_CGXX_LMACX_CHX(a, b, c) \
+	(0x800ull | ((uint64_t)(a) << 8) | ((uint64_t)(b) << 4) | (uint64_t)(c))
+/* [CN10K, .) */
+#define NIX_CHAN_RPMX_LMACX_CHX(a, b, c) \
+	(0x800ull | ((uint64_t)(a) << 8) | ((uint64_t)(b) << 4) | (uint64_t)(c))
+
+#define NIX_INTF_SDP (0x4ull)
+#define NIX_INTF_CGX0 (0x0ull) /* [CN9K, CN10K) */
+#define NIX_INTF_CGX1 (0x1ull) /* [CN9K, CN10K) */
+#define NIX_INTF_CGX2 (0x2ull) /* [CN9K, CN10K) */
+#define NIX_INTF_RPM0 (0x0ull) /* [CN10K, .) */
+#define NIX_INTF_RPM1 (0x1ull) /* [CN10K, .) */
+#define NIX_INTF_RPM2 (0x2ull) /* [CN10K, .) */
+#define NIX_INTF_LBK0 (0x3ull)
+#define NIX_INTF_CPT0 (0x5ull) /* [CN10K, .) */
+
+#define NIX_CQERRINT_DOOR_ERR (0x0ull)
+#define NIX_CQERRINT_WR_FULL (0x1ull)
+#define NIX_CQERRINT_CQE_FAULT (0x2ull)
+
+#define NIX_LINK_SDP (0xdull) /* [CN10K, .) */
+#define NIX_LINK_CPT (0xeull) /* [CN10K, .) */
+#define NIX_LINK_MC (0xfull) /* [CN10K, .) */
+/* [CN10K, .) */
+#define NIX_LINK_RPMX_LMACX(a, b) \
+	(0x00ull | ((uint64_t)(a) << 2) | (uint64_t)(b))
+#define NIX_LINK_LBK0 (0xcull)
+
+#define NIX_LF_INT_VEC_GINT (0x80ull)
+#define NIX_LF_INT_VEC_ERR_INT (0x81ull)
+#define NIX_LF_INT_VEC_POISON (0x82ull)
+#define NIX_LF_INT_VEC_QINT_END (0x3full)
+#define NIX_LF_INT_VEC_QINT_START (0x0ull)
+#define NIX_LF_INT_VEC_CINT_END (0x7full)
+#define NIX_LF_INT_VEC_CINT_START (0x40ull)
+
+#define NIX_INTF_RX (0x0ull)
+#define NIX_INTF_TX (0x1ull)
+
+/* Enums definitions */
+
+/* Structures definitions */
+
+/* NIX aging and send stats subdescriptor structure */
+struct nix_age_and_send_stats_s {
+	uint64_t threshold : 29;
+	uint64_t latency_drop : 1;
+	uint64_t aging : 1;
+	uint64_t wmem : 1;
+	uint64_t ooffset : 12;
+	uint64_t ioffset : 12;
+	uint64_t sel : 1;
+	uint64_t alg : 3;
+	uint64_t subdc : 4;
+	uint64_t addr : 64; /* W1 */
+};
+
+/* NIX admin queue instruction structure */
+struct nix_aq_inst_s {
+	uint64_t op : 4;
+	uint64_t ctype : 4;
+	uint64_t lf : 7;
+	uint64_t rsvd_23_15 : 9;
+	uint64_t cindex : 20;
+	uint64_t rsvd_62_44 : 19;
+	uint64_t doneint : 1;
+	uint64_t res_addr : 64; /* W1 */
+};
+
+/* NIX admin queue result structure */
+struct nix_aq_res_s {
+	uint64_t op : 4;
+	uint64_t ctype : 4;
+	uint64_t compcode : 8;
+	uint64_t doneint : 1;
+	uint64_t rsvd_63_17 : 47;
+	uint64_t rsvd_127_64 : 64; /* W1 */
+};
+
+/* NIX bandwidth profile structure */
+struct nix_band_prof_s {
+	uint64_t pc_mode : 2;
+	uint64_t icolor : 2;
+	uint64_t tnl_ena : 1;
+	uint64_t rsvd_7_5 : 3;
+	uint64_t peir_exponent : 5;
+	uint64_t rsvd_15_13 : 3;
+	uint64_t pebs_exponent : 5;
+	uint64_t rsvd_23_21 : 3;
+	uint64_t cir_exponent : 5;
+	uint64_t rsvd_31_29 : 3;
+	uint64_t cbs_exponent : 5;
+	uint64_t rsvd_39_37 : 3;
+	uint64_t peir_mantissa : 8;
+	uint64_t pebs_mantissa : 8;
+	uint64_t cir_mantissa : 8;
+	uint64_t cbs_mantissa : 8;
+	uint64_t lmode : 1;
+	uint64_t l_sellect : 3;
+	uint64_t rdiv : 4;
+	uint64_t adjust_exponent : 5;
+	uint64_t rsvd_86_85 : 2;
+	uint64_t adjust_mantissa : 9;
+	uint64_t gc_action : 2;
+	uint64_t yc_action : 2;
+	uint64_t rc_action : 2;
+	uint64_t meter_algo : 2;
+	uint64_t band_prof_id : 7;
+	uint64_t rsvd_118_111 : 8;
+	uint64_t hl_en : 1;
+	uint64_t rsvd_127_120 : 8;
+	uint64_t ts : 48;
+	uint64_t rsvd_191_176 : 16;
+	uint64_t pe_accum : 32;
+	uint64_t c_accum : 32;
+	uint64_t green_pkt_pass : 48;
+	uint64_t rsvd_319_304 : 16;
+	uint64_t yellow_pkt_pass : 48;
+	uint64_t rsvd_383_368 : 16;
+	uint64_t red_pkt_pass : 48;
+	uint64_t rsvd_447_432 : 16;
+	uint64_t green_octs_pass : 48;
+	uint64_t rsvd_511_496 : 16;
+	uint64_t yellow_octs_pass : 48;
+	uint64_t rsvd_575_560 : 16;
+	uint64_t red_octs_pass : 48;
+	uint64_t rsvd_639_624 : 16;
+	uint64_t green_pkt_drop : 48;
+	uint64_t rsvd_703_688 : 16;
+	uint64_t yellow_pkt_drop : 48;
+	uint64_t rsvd_767_752 : 16;
+	uint64_t red_pkt_drop : 48;
+	uint64_t rsvd_831_816 : 16;
+	uint64_t green_octs_drop : 48;
+	uint64_t rsvd_895_880 : 16;
+	uint64_t yellow_octs_drop : 48;
+	uint64_t rsvd_959_944 : 16;
+	uint64_t red_octs_drop : 48;
+	uint64_t rsvd_1023_1008 : 16;
+};
+
+/* NIX completion interrupt context hardware structure */
+struct nix_cint_hw_s {
+	uint64_t ecount : 32;
+	uint64_t qcount : 16;
+	uint64_t intr : 1;
+	uint64_t ena : 1;
+	uint64_t timer_idx : 8;
+	uint64_t rsvd_63_58 : 6;
+	uint64_t ecount_wait : 32;
+	uint64_t qcount_wait : 16;
+	uint64_t time_wait : 8;
+	uint64_t rsvd_127_120 : 8;
+};
+
+/* NIX completion queue entry header structure */
+struct nix_cqe_hdr_s {
+	uint64_t tag : 32;
+	uint64_t q : 20;
+	uint64_t rsvd_57_52 : 6;
+	uint64_t node : 2;
+	uint64_t cqe_type : 4;
+};
+
+/* NIX completion queue context structure */
+struct nix_cq_ctx_s {
+	uint64_t base : 64; /* W0 */
+	uint64_t rsvd_67_64 : 4;
+	uint64_t bp_ena : 1;
+	uint64_t rsvd_71_69 : 3;
+	uint64_t bpid : 9;
+	uint64_t rsvd_83_81 : 3;
+	uint64_t qint_idx : 7;
+	uint64_t cq_err : 1;
+	uint64_t cint_idx : 7;
+	uint64_t avg_con : 9;
+	uint64_t wrptr : 20;
+	uint64_t tail : 20;
+	uint64_t head : 20;
+	uint64_t avg_level : 8;
+	uint64_t update_time : 16;
+	uint64_t bp : 8;
+	uint64_t drop : 8;
+	uint64_t drop_ena : 1;
+	uint64_t ena : 1;
+	uint64_t rsvd_211_210 : 2;
+	uint64_t substream : 20;
+	uint64_t caching : 1;
+	uint64_t rsvd_235_233 : 3;
+	uint64_t qsize : 4;
+	uint64_t cq_err_int : 8;
+	uint64_t cq_err_int_ena : 8;
+};
+
+/* NIX instruction header structure */
+struct nix_inst_hdr_s {
+	uint64_t pf_func : 16;
+	uint64_t sq : 20;
+	uint64_t rsvd_63_36 : 28;
+};
+
+/* NIX i/o virtual address structure */
+struct nix_iova_s {
+	uint64_t addr : 64; /* W0 */
+};
+
+/* NIX IPsec dynamic ordering counter structure */
+struct nix_ipsec_dyno_s {
+	uint32_t count : 32; /* W0 */
+};
+
+/* NIX memory value structure */
+struct nix_mem_result_s {
+	uint64_t v : 1;
+	uint64_t color : 2;
+	uint64_t rsvd_63_3 : 61;
+};
+
+/* NIX statistics operation write data structure */
+struct nix_op_q_wdata_s {
+	uint64_t rsvd_31_0 : 32;
+	uint64_t q : 20;
+	uint64_t rsvd_63_52 : 12;
+};
+
+/* NIX queue interrupt context hardware structure */
+struct nix_qint_hw_s {
+	uint32_t count : 22;
+	uint32_t rsvd_30_22 : 9;
+	uint32_t ena : 1;
+};
+
+/* [CN10K, .) NIX receive queue context structure */
+struct nix_cn10k_rq_ctx_hw_s {
+	uint64_t ena : 1;
+	uint64_t sso_ena : 1;
+	uint64_t ipsech_ena : 1;
+	uint64_t ena_wqwd : 1;
+	uint64_t cq : 20;
+	uint64_t rsvd_36_24 : 13;
+	uint64_t lenerr_dis : 1;
+	uint64_t csum_il4_dis : 1;
+	uint64_t csum_ol4_dis : 1;
+	uint64_t len_il4_dis : 1;
+	uint64_t len_il3_dis : 1;
+	uint64_t len_ol4_dis : 1;
+	uint64_t len_ol3_dis : 1;
+	uint64_t wqe_aura : 20;
+	uint64_t spb_aura : 20;
+	uint64_t lpb_aura : 20;
+	uint64_t sso_grp : 10;
+	uint64_t sso_tt : 2;
+	uint64_t pb_caching : 2;
+	uint64_t wqe_caching : 1;
+	uint64_t xqe_drop_ena : 1;
+	uint64_t spb_drop_ena : 1;
+	uint64_t lpb_drop_ena : 1;
+	uint64_t pb_stashing : 1;
+	uint64_t ipsecd_drop_en : 1;
+	uint64_t chi_ena : 1;
+	uint64_t rsvd_127_125 : 3;
+	uint64_t band_prof_id : 10;
+	uint64_t rsvd_138 : 1;
+	uint64_t policer_ena : 1;
+	uint64_t spb_sizem1 : 6;
+	uint64_t wqe_skip : 2;
+	uint64_t spb_high_sizem1 : 3;
+	uint64_t spb_ena : 1;
+	uint64_t lpb_sizem1 : 12;
+	uint64_t first_skip : 7;
+	uint64_t rsvd_171 : 1;
+	uint64_t later_skip : 6;
+	uint64_t xqe_imm_size : 6;
+	uint64_t rsvd_189_184 : 6;
+	uint64_t xqe_imm_copy : 1;
+	uint64_t xqe_hdr_split : 1;
+	uint64_t xqe_drop : 8;
+	uint64_t xqe_pass : 8;
+	uint64_t wqe_pool_drop : 8;
+	uint64_t wqe_pool_pass : 8;
+	uint64_t spb_aura_drop : 8;
+	uint64_t spb_aura_pass : 8;
+	uint64_t spb_pool_drop : 8;
+	uint64_t spb_pool_pass : 8;
+	uint64_t lpb_aura_drop : 8;
+	uint64_t lpb_aura_pass : 8;
+	uint64_t lpb_pool_drop : 8;
+	uint64_t lpb_pool_pass : 8;
+	uint64_t rsvd_319_288 : 32;
+	uint64_t ltag : 24;
+	uint64_t good_utag : 8;
+	uint64_t bad_utag : 8;
+	uint64_t flow_tagw : 6;
+	uint64_t ipsec_vwqe : 1;
+	uint64_t vwqe_ena : 1;
+	uint64_t vtime_wait : 8;
+	uint64_t max_vsize_exp : 4;
+	uint64_t vwqe_skip : 2;
+	uint64_t rsvd_383_382 : 2;
+	uint64_t octs : 48;
+	uint64_t rsvd_447_432 : 16;
+	uint64_t pkts : 48;
+	uint64_t rsvd_511_496 : 16;
+	uint64_t drop_octs : 48;
+	uint64_t rsvd_575_560 : 16;
+	uint64_t drop_pkts : 48;
+	uint64_t rsvd_639_624 : 16;
+	uint64_t re_pkts : 48;
+	uint64_t rsvd_702_688 : 15;
+	uint64_t ena_copy : 1;
+	uint64_t rsvd_739_704 : 36;
+	uint64_t rq_int : 8;
+	uint64_t rq_int_ena : 8;
+	uint64_t qint_idx : 7;
+	uint64_t rsvd_767_763 : 5;
+	uint64_t rsvd_831_768 : 64; /* W12 */
+	uint64_t rsvd_895_832 : 64; /* W13 */
+	uint64_t rsvd_959_896 : 64; /* W14 */
+	uint64_t rsvd_1023_960 : 64; /* W15 */
+};
+
+/* NIX receive queue context structure */
+struct nix_rq_ctx_hw_s {
+	uint64_t ena : 1;
+	uint64_t sso_ena : 1;
+	uint64_t ipsech_ena : 1;
+	uint64_t ena_wqwd : 1;
+	uint64_t cq : 20;
+	uint64_t substream : 20;
+	uint64_t wqe_aura : 20;
+	uint64_t spb_aura : 20;
+	uint64_t lpb_aura : 20;
+	uint64_t sso_grp : 10;
+	uint64_t sso_tt : 2;
+	uint64_t pb_caching : 2;
+	uint64_t wqe_caching : 1;
+	uint64_t xqe_drop_ena : 1;
+	uint64_t spb_drop_ena : 1;
+	uint64_t lpb_drop_ena : 1;
+	uint64_t wqe_skip : 2;
+	uint64_t rsvd_127_124 : 4;
+	uint64_t rsvd_139_128 : 12;
+	uint64_t spb_sizem1 : 6;
+	uint64_t rsvd_150_146 : 5;
+	uint64_t spb_ena : 1;
+	uint64_t lpb_sizem1 : 12;
+	uint64_t first_skip : 7;
+	uint64_t rsvd_171 : 1;
+	uint64_t later_skip : 6;
+	uint64_t xqe_imm_size : 6;
+	uint64_t rsvd_189_184 : 6;
+	uint64_t xqe_imm_copy : 1;
+	uint64_t xqe_hdr_split : 1;
+	uint64_t xqe_drop : 8;
+	uint64_t xqe_pass : 8;
+	uint64_t wqe_pool_drop : 8;
+	uint64_t wqe_pool_pass : 8;
+	uint64_t spb_aura_drop : 8;
+	uint64_t spb_aura_pass : 8;
+	uint64_t spb_pool_drop : 8;
+	uint64_t spb_pool_pass : 8;
+	uint64_t lpb_aura_drop : 8;
+	uint64_t lpb_aura_pass : 8;
+	uint64_t lpb_pool_drop : 8;
+	uint64_t lpb_pool_pass : 8;
+	uint64_t rsvd_319_288 : 32;
+	uint64_t ltag : 24;
+	uint64_t good_utag : 8;
+	uint64_t bad_utag : 8;
+	uint64_t flow_tagw : 6;
+	uint64_t rsvd_383_366 : 18;
+	uint64_t octs : 48;
+	uint64_t rsvd_447_432 : 16;
+	uint64_t pkts : 48;
+	uint64_t rsvd_511_496 : 16;
+	uint64_t drop_octs : 48;
+	uint64_t rsvd_575_560 : 16;
+	uint64_t drop_pkts : 48;
+	uint64_t rsvd_639_624 : 16;
+	uint64_t re_pkts : 48;
+	uint64_t rsvd_702_688 : 15;
+	uint64_t ena_copy : 1;
+	uint64_t rsvd_739_704 : 36;
+	uint64_t rq_int : 8;
+	uint64_t rq_int_ena : 8;
+	uint64_t qint_idx : 7;
+	uint64_t rsvd_767_763 : 5;
+	uint64_t rsvd_831_768 : 64; /* W12 */
+	uint64_t rsvd_895_832 : 64; /* W13 */
+	uint64_t rsvd_959_896 : 64; /* W14 */
+	uint64_t rsvd_1023_960 : 64; /* W15 */
+};
+
+/* [CN10K, .) NIX Receive queue context structure */
+struct nix_cn10k_rq_ctx_s {
+	uint64_t ena : 1;
+	uint64_t sso_ena : 1;
+	uint64_t ipsech_ena : 1;
+	uint64_t ena_wqwd : 1;
+	uint64_t cq : 20;
+	uint64_t rsvd_36_24 : 13;
+	uint64_t lenerr_dis : 1;
+	uint64_t csum_il4_dis : 1;
+	uint64_t csum_ol4_dis : 1;
+	uint64_t len_il4_dis : 1;
+	uint64_t len_il3_dis : 1;
+	uint64_t len_ol4_dis : 1;
+	uint64_t len_ol3_dis : 1;
+	uint64_t wqe_aura : 20;
+	uint64_t spb_aura : 20;
+	uint64_t lpb_aura : 20;
+	uint64_t sso_grp : 10;
+	uint64_t sso_tt : 2;
+	uint64_t pb_caching : 2;
+	uint64_t wqe_caching : 1;
+	uint64_t xqe_drop_ena : 1;
+	uint64_t spb_drop_ena : 1;
+	uint64_t lpb_drop_ena : 1;
+	uint64_t pb_stashing : 1;
+	uint64_t ipsecd_drop_en : 1;
+	uint64_t chi_ena : 1;
+	uint64_t rsvd_127_125 : 3;
+	uint64_t band_prof_id : 10;
+	uint64_t rsvd_138 : 1;
+	uint64_t policer_ena : 1;
+	uint64_t spb_sizem1 : 6;
+	uint64_t wqe_skip : 2;
+	uint64_t spb_high_sizem1 : 3;
+	uint64_t spb_ena : 1;
+	uint64_t lpb_sizem1 : 12;
+	uint64_t first_skip : 7;
+	uint64_t rsvd_171 : 1;
+	uint64_t later_skip : 6;
+	uint64_t xqe_imm_size : 6;
+	uint64_t rsvd_189_184 : 6;
+	uint64_t xqe_imm_copy : 1;
+	uint64_t xqe_hdr_split : 1;
+	uint64_t xqe_drop : 8;
+	uint64_t xqe_pass : 8;
+	uint64_t wqe_pool_drop : 8;
+	uint64_t wqe_pool_pass : 8;
+	uint64_t spb_aura_drop : 8;
+	uint64_t spb_aura_pass : 8;
+	uint64_t spb_pool_drop : 8;
+	uint64_t spb_pool_pass : 8;
+	uint64_t lpb_aura_drop : 8;
+	uint64_t lpb_aura_pass : 8;
+	uint64_t lpb_pool_drop : 8;
+	uint64_t lpb_pool_pass : 8;
+	uint64_t rsvd_291_288 : 4;
+	uint64_t rq_int : 8;
+	uint64_t rq_int_ena : 8;
+	uint64_t qint_idx : 7;
+	uint64_t rsvd_319_315 : 5;
+	uint64_t ltag : 24;
+	uint64_t good_utag : 8;
+	uint64_t bad_utag : 8;
+	uint64_t flow_tagw : 6;
+	uint64_t ipsec_vwqe : 1;
+	uint64_t vwqe_ena : 1;
+	uint64_t vtime_wait : 8;
+	uint64_t max_vsize_exp : 4;
+	uint64_t vwqe_skip : 2;
+	uint64_t rsvd_383_382 : 2;
+	uint64_t octs : 48;
+	uint64_t rsvd_447_432 : 16;
+	uint64_t pkts : 48;
+	uint64_t rsvd_511_496 : 16;
+	uint64_t drop_octs : 48;
+	uint64_t rsvd_575_560 : 16;
+	uint64_t drop_pkts : 48;
+	uint64_t rsvd_639_624 : 16;
+	uint64_t re_pkts : 48;
+	uint64_t rsvd_703_688 : 16;
+	uint64_t rsvd_767_704 : 64; /* W11 */
+	uint64_t rsvd_831_768 : 64; /* W12 */
+	uint64_t rsvd_895_832 : 64; /* W13 */
+	uint64_t rsvd_959_896 : 64; /* W14 */
+	uint64_t rsvd_1023_960 : 64; /* W15 */
+};
+
+/* NIX receive queue context structure */
+struct nix_rq_ctx_s {
+	uint64_t ena : 1;
+	uint64_t sso_ena : 1;
+	uint64_t ipsech_ena : 1;
+	uint64_t ena_wqwd : 1;
+	uint64_t cq : 20;
+	uint64_t substream : 20;
+	uint64_t wqe_aura : 20;
+	uint64_t spb_aura : 20;
+	uint64_t lpb_aura : 20;
+	uint64_t sso_grp : 10;
+	uint64_t sso_tt : 2;
+	uint64_t pb_caching : 2;
+	uint64_t wqe_caching : 1;
+	uint64_t xqe_drop_ena : 1;
+	uint64_t spb_drop_ena : 1;
+	uint64_t lpb_drop_ena : 1;
+	uint64_t rsvd_127_122 : 6;
+	uint64_t rsvd_139_128 : 12;
+	uint64_t spb_sizem1 : 6;
+	uint64_t wqe_skip : 2;
+	uint64_t rsvd_150_148 : 3;
+	uint64_t spb_ena : 1;
+	uint64_t lpb_sizem1 : 12;
+	uint64_t first_skip : 7;
+	uint64_t rsvd_171 : 1;
+	uint64_t later_skip : 6;
+	uint64_t xqe_imm_size : 6;
+	uint64_t rsvd_189_184 : 6;
+	uint64_t xqe_imm_copy : 1;
+	uint64_t xqe_hdr_split : 1;
+	uint64_t xqe_drop : 8;
+	uint64_t xqe_pass : 8;
+	uint64_t wqe_pool_drop : 8;
+	uint64_t wqe_pool_pass : 8;
+	uint64_t spb_aura_drop : 8;
+	uint64_t spb_aura_pass : 8;
+	uint64_t spb_pool_drop : 8;
+	uint64_t spb_pool_pass : 8;
+	uint64_t lpb_aura_drop : 8;
+	uint64_t lpb_aura_pass : 8;
+	uint64_t lpb_pool_drop : 8;
+	uint64_t lpb_pool_pass : 8;
+	uint64_t rsvd_291_288 : 4;
+	uint64_t rq_int : 8;
+	uint64_t rq_int_ena : 8;
+	uint64_t qint_idx : 7;
+	uint64_t rsvd_319_315 : 5;
+	uint64_t ltag : 24;
+	uint64_t good_utag : 8;
+	uint64_t bad_utag : 8;
+	uint64_t flow_tagw : 6;
+	uint64_t rsvd_383_366 : 18;
+	uint64_t octs : 48;
+	uint64_t rsvd_447_432 : 16;
+	uint64_t pkts : 48;
+	uint64_t rsvd_511_496 : 16;
+	uint64_t drop_octs : 48;
+	uint64_t rsvd_575_560 : 16;
+	uint64_t drop_pkts : 48;
+	uint64_t rsvd_639_624 : 16;
+	uint64_t re_pkts : 48;
+	uint64_t rsvd_703_688 : 16;
+	uint64_t rsvd_767_704 : 64; /* W11 */
+	uint64_t rsvd_831_768 : 64; /* W12 */
+	uint64_t rsvd_895_832 : 64; /* W13 */
+	uint64_t rsvd_959_896 : 64; /* W14 */
+	uint64_t rsvd_1023_960 : 64; /* W15 */
+};
+
+/* NIX receive side scaling entry structure */
+struct nix_rsse_s {
+	uint32_t rq : 20;
+	uint32_t rsvd_31_20 : 12;
+};
+
+/* NIX receive action structure */
+struct nix_rx_action_s {
+	uint64_t op : 4;
+	uint64_t pf_func : 16;
+	uint64_t index : 20;
+	uint64_t match_id : 16;
+	uint64_t flow_key_alg : 5;
+	uint64_t rsvd_63_61 : 3;
+};
+
+/* NIX receive immediate sub descriptor structure */
+struct nix_rx_imm_s {
+	uint64_t size : 16;
+	uint64_t apad : 3;
+	uint64_t rsvd_59_19 : 41;
+	uint64_t subdc : 4;
+};
+
+/* NIX receive multicast/mirror entry structure */
+struct nix_rx_mce_s {
+	uint64_t op : 2;
+	uint64_t rsvd_2 : 1;
+	uint64_t eol : 1;
+	uint64_t index : 20;
+	uint64_t rsvd_31_24 : 8;
+	uint64_t pf_func : 16;
+	uint64_t next : 16;
+};
+
+/* NIX receive parse structure */
+union nix_rx_parse_u {
+	struct {
+		uint64_t chan : 12;
+		uint64_t desc_sizem1 : 5;
+		uint64_t imm_copy : 1;
+		uint64_t express : 1;
+		uint64_t wqwd : 1;
+		uint64_t errlev : 4;
+		uint64_t errcode : 8;
+		uint64_t latype : 4;
+		uint64_t lbtype : 4;
+		uint64_t lctype : 4;
+		uint64_t ldtype : 4;
+		uint64_t letype : 4;
+		uint64_t lftype : 4;
+		uint64_t lgtype : 4;
+		uint64_t lhtype : 4;
+		uint64_t pkt_lenm1 : 16;
+		uint64_t l2m : 1;
+		uint64_t l2b : 1;
+		uint64_t l3m : 1;
+		uint64_t l3b : 1;
+		uint64_t vtag0_valid : 1;
+		uint64_t vtag0_gone : 1;
+		uint64_t vtag1_valid : 1;
+		uint64_t vtag1_gone : 1;
+		uint64_t pkind : 6;
+		uint64_t nix_idx : 2;
+		uint64_t vtag0_tci : 16;
+		uint64_t vtag1_tci : 16;
+		uint64_t laflags : 8;
+		uint64_t lbflags : 8;
+		uint64_t lcflags : 8;
+		uint64_t ldflags : 8;
+		uint64_t leflags : 8;
+		uint64_t lfflags : 8;
+		uint64_t lgflags : 8;
+		uint64_t lhflags : 8;
+		uint64_t eoh_ptr : 8;
+		uint64_t wqe_aura : 20;
+		uint64_t pb_aura : 20;
+		uint64_t match_id : 16;
+		uint64_t laptr : 8;
+		uint64_t lbptr : 8;
+		uint64_t lcptr : 8;
+		uint64_t ldptr : 8;
+		uint64_t leptr : 8;
+		uint64_t lfptr : 8;
+		uint64_t lgptr : 8;
+		uint64_t lhptr : 8;
+		uint64_t vtag0_ptr : 8;
+		uint64_t vtag1_ptr : 8;
+		uint64_t flow_key_alg : 5;
+		uint64_t rsvd_341 : 1;
+		uint64_t rsvd_349_342 : 8;
+		uint64_t rsvd_353_350 : 4;
+		uint64_t rsvd_359_354 : 6;
+		uint64_t color : 2;
+		uint64_t rsvd_381_362 : 20;
+		uint64_t rsvd_382 : 1;
+		uint64_t rsvd_383 : 1;
+		uint64_t rsvd_447_384 : 64; /* W6 */
+	};
+	struct {
+		uint64_t chan : 12;
+		uint64_t desc_sizem1 : 5;
+		uint64_t imm_copy : 1;
+		uint64_t express : 1;
+		uint64_t wqwd : 1;
+		uint64_t errlev : 4;
+		uint64_t errcode : 8;
+		uint64_t latype : 4;
+		uint64_t lbtype : 4;
+		uint64_t lctype : 4;
+		uint64_t ldtype : 4;
+		uint64_t letype : 4;
+		uint64_t lftype : 4;
+		uint64_t lgtype : 4;
+		uint64_t lhtype : 4;
+		uint64_t pkt_lenm1 : 16;
+		uint64_t l2m : 1;
+		uint64_t l2b : 1;
+		uint64_t l3m : 1;
+		uint64_t l3b : 1;
+		uint64_t vtag0_valid : 1;
+		uint64_t vtag0_gone : 1;
+		uint64_t vtag1_valid : 1;
+		uint64_t vtag1_gone : 1;
+		uint64_t pkind : 6;
+		uint64_t rsvd_95_94 : 2;
+		uint64_t vtag0_tci : 16;
+		uint64_t vtag1_tci : 16;
+		uint64_t laflags : 8;
+		uint64_t lbflags : 8;
+		uint64_t lcflags : 8;
+		uint64_t ldflags : 8;
+		uint64_t leflags : 8;
+		uint64_t lfflags : 8;
+		uint64_t lgflags : 8;
+		uint64_t lhflags : 8;
+		uint64_t eoh_ptr : 8;
+		uint64_t wqe_aura : 20;
+		uint64_t pb_aura : 20;
+		uint64_t match_id : 16;
+		uint64_t laptr : 8;
+		uint64_t lbptr : 8;
+		uint64_t lcptr : 8;
+		uint64_t ldptr : 8;
+		uint64_t leptr : 8;
+		uint64_t lfptr : 8;
+		uint64_t lgptr : 8;
+		uint64_t lhptr : 8;
+		uint64_t vtag0_ptr : 8;
+		uint64_t vtag1_ptr : 8;
+		uint64_t flow_key_alg : 5;
+		uint64_t rsvd_383_341 : 43;
+		uint64_t rsvd_447_384 : 64; /* W6 */
+	} cn9k;
+};
+
+/* NIX receive scatter/gather sub descriptor structure */
+struct nix_rx_sg_s {
+	uint64_t seg1_size : 16;
+	uint64_t seg2_size : 16;
+	uint64_t seg3_size : 16;
+	uint64_t segs : 2;
+	uint64_t rsvd_59_50 : 10;
+	uint64_t subdc : 4;
+};
+
+/* NIX receive vtag action structure */
+union nix_rx_vtag_action_u {
+	struct {
+		uint64_t vtag0_relptr : 8;
+		uint64_t vtag0_lid : 3;
+		uint64_t sa_xor : 1;
+		uint64_t vtag0_type : 3;
+		uint64_t vtag0_valid : 1;
+		uint64_t sa_lo : 16;
+		uint64_t vtag1_relptr : 8;
+		uint64_t vtag1_lid : 3;
+		uint64_t rsvd_43 : 1;
+		uint64_t vtag1_type : 3;
+		uint64_t vtag1_valid : 1;
+		uint64_t sa_hi : 16;
+	};
+	struct {
+		uint64_t vtag0_relptr : 8;
+		uint64_t vtag0_lid : 3;
+		uint64_t rsvd_11 : 1;
+		uint64_t vtag0_type : 3;
+		uint64_t vtag0_valid : 1;
+		uint64_t rsvd_31_16 : 16;
+		uint64_t vtag1_relptr : 8;
+		uint64_t vtag1_lid : 3;
+		uint64_t rsvd_43 : 1;
+		uint64_t vtag1_type : 3;
+		uint64_t vtag1_valid : 1;
+		uint64_t rsvd_63_48 : 16;
+	} cn9k;
+};
+
+/* NIX send completion structure */
+struct nix_send_comp_s {
+	uint64_t status : 8;
+	uint64_t sqe_id : 16;
+	uint64_t rsvd_63_24 : 40;
+};
+
+/* NIX send CRC sub descriptor structure */
+struct nix_send_crc_s {
+	uint64_t size : 16;
+	uint64_t start : 16;
+	uint64_t insert : 16;
+	uint64_t rsvd_57_48 : 10;
+	uint64_t alg : 2;
+	uint64_t subdc : 4;
+	uint64_t iv : 32;
+	uint64_t rsvd_127_96 : 32;
+};
+
+/* NIX send extended header sub descriptor structure */
+PLT_STD_C11
+union nix_send_ext_w0_u {
+	uint64_t u;
+	struct {
+		uint64_t lso_mps : 14;
+		uint64_t lso : 1;
+		uint64_t tstmp : 1;
+		uint64_t lso_sb : 8;
+		uint64_t lso_format : 5;
+		uint64_t rsvd_31_29 : 3;
+		uint64_t shp_chg : 9;
+		uint64_t shp_dis : 1;
+		uint64_t shp_ra : 2;
+		uint64_t markptr : 8;
+		uint64_t markform : 7;
+		uint64_t mark_en : 1;
+		uint64_t subdc : 4;
+	};
+};
+
+PLT_STD_C11
+union nix_send_ext_w1_u {
+	uint64_t u;
+	struct {
+		uint64_t vlan0_ins_ptr : 8;
+		uint64_t vlan0_ins_tci : 16;
+		uint64_t vlan1_ins_ptr : 8;
+		uint64_t vlan1_ins_tci : 16;
+		uint64_t vlan0_ins_ena : 1;
+		uint64_t vlan1_ins_ena : 1;
+		uint64_t init_color : 2;
+		uint64_t rsvd_127_116 : 12;
+	};
+	struct {
+		uint64_t vlan0_ins_ptr : 8;
+		uint64_t vlan0_ins_tci : 16;
+		uint64_t vlan1_ins_ptr : 8;
+		uint64_t vlan1_ins_tci : 16;
+		uint64_t vlan0_ins_ena : 1;
+		uint64_t vlan1_ins_ena : 1;
+		uint64_t rsvd_127_114 : 14;
+	} cn9k;
+};
+
+struct nix_send_ext_s {
+	union nix_send_ext_w0_u w0;
+	union nix_send_ext_w1_u w1;
+};
+
+/* NIX send header sub descriptor structure */
+PLT_STD_C11
+union nix_send_hdr_w0_u {
+	uint64_t u;
+	struct {
+		uint64_t total : 18;
+		uint64_t rsvd_18 : 1;
+		uint64_t df : 1;
+		uint64_t aura : 20;
+		uint64_t sizem1 : 3;
+		uint64_t pnc : 1;
+		uint64_t sq : 20;
+	};
+};
+
+PLT_STD_C11
+union nix_send_hdr_w1_u {
+	uint64_t u;
+	struct {
+		uint64_t ol3ptr : 8;
+		uint64_t ol4ptr : 8;
+		uint64_t il3ptr : 8;
+		uint64_t il4ptr : 8;
+		uint64_t ol3type : 4;
+		uint64_t ol4type : 4;
+		uint64_t il3type : 4;
+		uint64_t il4type : 4;
+		uint64_t sqe_id : 16;
+	};
+};
+
+struct nix_send_hdr_s {
+	union nix_send_hdr_w0_u w0;
+	union nix_send_hdr_w1_u w1;
+};
+
+/* NIX send immediate sub descriptor structure */
+struct nix_send_imm_s {
+	uint64_t size : 16;
+	uint64_t apad : 3;
+	uint64_t rsvd_59_19 : 41;
+	uint64_t subdc : 4;
+};
+
+/* NIX send jump sub descriptor structure */
+struct nix_send_jump_s {
+	uint64_t sizem1 : 7;
+	uint64_t rsvd_13_7 : 7;
+	uint64_t ld_type : 2;
+	uint64_t aura : 20;
+	uint64_t rsvd_58_36 : 23;
+	uint64_t f : 1;
+	uint64_t subdc : 4;
+	uint64_t addr : 64; /* W1 */
+};
+
+/* NIX send memory sub descriptor structure */
+PLT_STD_C11
+union nix_send_mem_w0_u {
+	uint64_t u;
+	struct {
+		uint64_t offset : 16;
+		uint64_t rsvd_51_16 : 36;
+		uint64_t per_lso_seg : 1;
+		uint64_t wmem : 1;
+		uint64_t dsz : 2;
+		uint64_t alg : 4;
+		uint64_t subdc : 4;
+	};
+	struct {
+		uint64_t offset : 16;
+		uint64_t rsvd_52_16 : 37;
+		uint64_t wmem : 1;
+		uint64_t dsz : 2;
+		uint64_t alg : 4;
+		uint64_t subdc : 4;
+	} cn9k;
+};
+
+struct nix_send_mem_s {
+	union nix_send_mem_w0_u w0;
+	uint64_t addr : 64; /* W1 */
+};
+
+/* NIX send scatter/gather sub descriptor structure */
+PLT_STD_C11
+union nix_send_sg2_s {
+	uint64_t u;
+	struct {
+		uint64_t seg1_size : 16;
+		uint64_t aura : 20;
+		uint64_t i1 : 1;
+		uint64_t fabs : 1;
+		uint64_t foff : 8;
+		uint64_t rsvd_57_46 : 12;
+		uint64_t ld_type : 2;
+		uint64_t subdc : 4;
+	};
+};
+
+PLT_STD_C11
+union nix_send_sg_s {
+	uint64_t u;
+	struct {
+		uint64_t seg1_size : 16;
+		uint64_t seg2_size : 16;
+		uint64_t seg3_size : 16;
+		uint64_t segs : 2;
+		uint64_t rsvd_54_50 : 5;
+		uint64_t i1 : 1;
+		uint64_t i2 : 1;
+		uint64_t i3 : 1;
+		uint64_t ld_type : 2;
+		uint64_t subdc : 4;
+	};
+};
+
+/* NIX send work sub descriptor structure */
+struct nix_send_work_s {
+	uint64_t tag : 32;
+	uint64_t tt : 2;
+	uint64_t grp : 10;
+	uint64_t rsvd_59_44 : 16;
+	uint64_t subdc : 4;
+	uint64_t addr : 64; /* W1 */
+};
+
+/* [CN10K, .) NIX sq context hardware structure */
+struct nix_cn10k_sq_ctx_hw_s {
+	uint64_t ena : 1;
+	uint64_t substream : 20;
+	uint64_t max_sqe_size : 2;
+	uint64_t sqe_way_mask : 16;
+	uint64_t sqb_aura : 20;
+	uint64_t gbl_rsvd1 : 5;
+	uint64_t cq_id : 20;
+	uint64_t cq_ena : 1;
+	uint64_t qint_idx : 6;
+	uint64_t gbl_rsvd2 : 1;
+	uint64_t sq_int : 8;
+	uint64_t sq_int_ena : 8;
+	uint64_t xoff : 1;
+	uint64_t sqe_stype : 2;
+	uint64_t gbl_rsvd : 17;
+	uint64_t head_sqb : 64; /* W2 */
+	uint64_t head_offset : 6;
+	uint64_t sqb_dequeue_count : 16;
+	uint64_t default_chan : 12;
+	uint64_t sdp_mcast : 1;
+	uint64_t sso_ena : 1;
+	uint64_t dse_rsvd1 : 28;
+	uint64_t sqb_enqueue_count : 16;
+	uint64_t tail_offset : 6;
+	uint64_t lmt_dis : 1;
+	uint64_t smq_rr_weight : 14;
+	uint64_t dnq_rsvd1 : 27;
+	uint64_t tail_sqb : 64; /* W5 */
+	uint64_t next_sqb : 64; /* W6 */
+	uint64_t smq : 10;
+	uint64_t smq_pend : 1;
+	uint64_t smq_next_sq : 20;
+	uint64_t smq_next_sq_vld : 1;
+	uint64_t mnq_dis : 1;
+	uint64_t scm1_rsvd2 : 31;
+	uint64_t smenq_sqb : 64; /* W8 */
+	uint64_t smenq_offset : 6;
+	uint64_t cq_limit : 8;
+	uint64_t smq_rr_count : 32;
+	uint64_t scm_lso_rem : 18;
+	uint64_t smq_lso_segnum : 8;
+	uint64_t vfi_lso_total : 18;
+	uint64_t vfi_lso_sizem1 : 3;
+	uint64_t vfi_lso_sb : 8;
+	uint64_t vfi_lso_mps : 14;
+	uint64_t vfi_lso_vlan0_ins_ena : 1;
+	uint64_t vfi_lso_vlan1_ins_ena : 1;
+	uint64_t vfi_lso_vld : 1;
+	uint64_t smenq_next_sqb_vld : 1;
+	uint64_t scm_dq_rsvd1 : 9;
+	uint64_t smenq_next_sqb : 64; /* W11 */
+	uint64_t age_drop_octs : 32;
+	uint64_t age_drop_pkts : 32;
+	uint64_t drop_pkts : 48;
+	uint64_t drop_octs_lsw : 16;
+	uint64_t drop_octs_msw : 32;
+	uint64_t pkts_lsw : 32;
+	uint64_t pkts_msw : 16;
+	uint64_t octs : 48;
+};
+
+/* NIX sq context hardware structure */
+struct nix_sq_ctx_hw_s {
+	uint64_t ena : 1;
+	uint64_t substream : 20;
+	uint64_t max_sqe_size : 2;
+	uint64_t sqe_way_mask : 16;
+	uint64_t sqb_aura : 20;
+	uint64_t gbl_rsvd1 : 5;
+	uint64_t cq_id : 20;
+	uint64_t cq_ena : 1;
+	uint64_t qint_idx : 6;
+	uint64_t gbl_rsvd2 : 1;
+	uint64_t sq_int : 8;
+	uint64_t sq_int_ena : 8;
+	uint64_t xoff : 1;
+	uint64_t sqe_stype : 2;
+	uint64_t gbl_rsvd : 17;
+	uint64_t head_sqb : 64; /* W2 */
+	uint64_t head_offset : 6;
+	uint64_t sqb_dequeue_count : 16;
+	uint64_t default_chan : 12;
+	uint64_t sdp_mcast : 1;
+	uint64_t sso_ena : 1;
+	uint64_t dse_rsvd1 : 28;
+	uint64_t sqb_enqueue_count : 16;
+	uint64_t tail_offset : 6;
+	uint64_t lmt_dis : 1;
+	uint64_t smq_rr_quantum : 24;
+	uint64_t dnq_rsvd1 : 17;
+	uint64_t tail_sqb : 64; /* W5 */
+	uint64_t next_sqb : 64; /* W6 */
+	uint64_t mnq_dis : 1;
+	uint64_t smq : 9;
+	uint64_t smq_pend : 1;
+	uint64_t smq_next_sq : 20;
+	uint64_t smq_next_sq_vld : 1;
+	uint64_t scm1_rsvd2 : 32;
+	uint64_t smenq_sqb : 64; /* W8 */
+	uint64_t smenq_offset : 6;
+	uint64_t cq_limit : 8;
+	uint64_t smq_rr_count : 25;
+	uint64_t scm_lso_rem : 18;
+	uint64_t scm_dq_rsvd0 : 7;
+	uint64_t smq_lso_segnum : 8;
+	uint64_t vfi_lso_total : 18;
+	uint64_t vfi_lso_sizem1 : 3;
+	uint64_t vfi_lso_sb : 8;
+	uint64_t vfi_lso_mps : 14;
+	uint64_t vfi_lso_vlan0_ins_ena : 1;
+	uint64_t vfi_lso_vlan1_ins_ena : 1;
+	uint64_t vfi_lso_vld : 1;
+	uint64_t smenq_next_sqb_vld : 1;
+	uint64_t scm_dq_rsvd1 : 9;
+	uint64_t smenq_next_sqb : 64; /* W11 */
+	uint64_t seb_rsvd1 : 64; /* W12 */
+	uint64_t drop_pkts : 48;
+	uint64_t drop_octs_lsw : 16;
+	uint64_t drop_octs_msw : 32;
+	uint64_t pkts_lsw : 32;
+	uint64_t pkts_msw : 16;
+	uint64_t octs : 48;
+};
+
+/* [CN10K, .) NIX Send queue context structure */
+struct nix_cn10k_sq_ctx_s {
+	uint64_t ena : 1;
+	uint64_t qint_idx : 6;
+	uint64_t substream : 20;
+	uint64_t sdp_mcast : 1;
+	uint64_t cq : 20;
+	uint64_t sqe_way_mask : 16;
+	uint64_t smq : 10;
+	uint64_t cq_ena : 1;
+	uint64_t xoff : 1;
+	uint64_t sso_ena : 1;
+	uint64_t smq_rr_weight : 14;
+	uint64_t default_chan : 12;
+	uint64_t sqb_count : 16;
+	uint64_t rsvd_120_119 : 2;
+	uint64_t smq_rr_count_lb : 7;
+	uint64_t smq_rr_count_ub : 25;
+	uint64_t sqb_aura : 20;
+	uint64_t sq_int : 8;
+	uint64_t sq_int_ena : 8;
+	uint64_t sqe_stype : 2;
+	uint64_t rsvd_191 : 1;
+	uint64_t max_sqe_size : 2;
+	uint64_t cq_limit : 8;
+	uint64_t lmt_dis : 1;
+	uint64_t mnq_dis : 1;
+	uint64_t smq_next_sq : 20;
+	uint64_t smq_lso_segnum : 8;
+	uint64_t tail_offset : 6;
+	uint64_t smenq_offset : 6;
+	uint64_t head_offset : 6;
+	uint64_t smenq_next_sqb_vld : 1;
+	uint64_t smq_pend : 1;
+	uint64_t smq_next_sq_vld : 1;
+	uint64_t rsvd_255_253 : 3;
+	uint64_t next_sqb : 64; /* W4 */
+	uint64_t tail_sqb : 64; /* W5 */
+	uint64_t smenq_sqb : 64; /* W6 */
+	uint64_t smenq_next_sqb : 64; /* W7 */
+	uint64_t head_sqb : 64; /* W8 */
+	uint64_t rsvd_583_576 : 8;
+	uint64_t vfi_lso_total : 18;
+	uint64_t vfi_lso_sizem1 : 3;
+	uint64_t vfi_lso_sb : 8;
+	uint64_t vfi_lso_mps : 14;
+	uint64_t vfi_lso_vlan0_ins_ena : 1;
+	uint64_t vfi_lso_vlan1_ins_ena : 1;
+	uint64_t vfi_lso_vld : 1;
+	uint64_t rsvd_639_630 : 10;
+	uint64_t scm_lso_rem : 18;
+	uint64_t rsvd_703_658 : 46;
+	uint64_t octs : 48;
+	uint64_t rsvd_767_752 : 16;
+	uint64_t pkts : 48;
+	uint64_t rsvd_831_816 : 16;
+	uint64_t aged_drop_octs : 32;
+	uint64_t aged_drop_pkts : 32;
+	uint64_t drop_octs : 48;
+	uint64_t rsvd_959_944 : 16;
+	uint64_t drop_pkts : 48;
+	uint64_t rsvd_1023_1008 : 16;
+};
+
+/* NIX send queue context structure */
+struct nix_sq_ctx_s {
+	uint64_t ena : 1;
+	uint64_t qint_idx : 6;
+	uint64_t substream : 20;
+	uint64_t sdp_mcast : 1;
+	uint64_t cq : 20;
+	uint64_t sqe_way_mask : 16;
+	uint64_t smq : 9;
+	uint64_t cq_ena : 1;
+	uint64_t xoff : 1;
+	uint64_t sso_ena : 1;
+	uint64_t smq_rr_quantum : 24;
+	uint64_t default_chan : 12;
+	uint64_t sqb_count : 16;
+	uint64_t smq_rr_count : 25;
+	uint64_t sqb_aura : 20;
+	uint64_t sq_int : 8;
+	uint64_t sq_int_ena : 8;
+	uint64_t sqe_stype : 2;
+	uint64_t rsvd_191 : 1;
+	uint64_t max_sqe_size : 2;
+	uint64_t cq_limit : 8;
+	uint64_t lmt_dis : 1;
+	uint64_t mnq_dis : 1;
+	uint64_t smq_next_sq : 20;
+	uint64_t smq_lso_segnum : 8;
+	uint64_t tail_offset : 6;
+	uint64_t smenq_offset : 6;
+	uint64_t head_offset : 6;
+	uint64_t smenq_next_sqb_vld : 1;
+	uint64_t smq_pend : 1;
+	uint64_t smq_next_sq_vld : 1;
+	uint64_t rsvd_255_253 : 3;
+	uint64_t next_sqb : 64; /* W4 */
+	uint64_t tail_sqb : 64; /* W5 */
+	uint64_t smenq_sqb : 64; /* W6 */
+	uint64_t smenq_next_sqb : 64; /* W7 */
+	uint64_t head_sqb : 64; /* W8 */
+	uint64_t rsvd_583_576 : 8;
+	uint64_t vfi_lso_total : 18;
+	uint64_t vfi_lso_sizem1 : 3;
+	uint64_t vfi_lso_sb : 8;
+	uint64_t vfi_lso_mps : 14;
+	uint64_t vfi_lso_vlan0_ins_ena : 1;
+	uint64_t vfi_lso_vlan1_ins_ena : 1;
+	uint64_t vfi_lso_vld : 1;
+	uint64_t rsvd_639_630 : 10;
+	uint64_t scm_lso_rem : 18;
+	uint64_t rsvd_703_658 : 46;
+	uint64_t octs : 48;
+	uint64_t rsvd_767_752 : 16;
+	uint64_t pkts : 48;
+	uint64_t rsvd_831_816 : 16;
+	uint64_t rsvd_895_832 : 64; /* W13 */
+	uint64_t drop_octs : 48;
+	uint64_t rsvd_959_944 : 16;
+	uint64_t drop_pkts : 48;
+	uint64_t rsvd_1023_1008 : 16;
+};
+
+/* NIX transmit action structure */
+struct nix_tx_action_s {
+	uint64_t op : 4;
+	uint64_t rsvd_11_4 : 8;
+	uint64_t index : 20;
+	uint64_t match_id : 16;
+	uint64_t rsvd_63_48 : 16;
+};
+
+/* NIX transmit vtag action structure */
+struct nix_tx_vtag_action_s {
+	uint64_t vtag0_relptr : 8;
+	uint64_t vtag0_lid : 3;
+	uint64_t rsvd_11 : 1;
+	uint64_t vtag0_op : 2;
+	uint64_t rsvd_15_14 : 2;
+	uint64_t vtag0_def : 10;
+	uint64_t rsvd_31_26 : 6;
+	uint64_t vtag1_relptr : 8;
+	uint64_t vtag1_lid : 3;
uint64_t rsvd_43 : 1; + uint64_t vtag1_op : 2; + uint64_t rsvd_47_46 : 2; + uint64_t vtag1_def : 10; + uint64_t rsvd_63_58 : 6; +}; + +/* NIX work queue entry header structure */ +struct nix_wqe_hdr_s { + uint64_t tag : 32; + uint64_t tt : 2; + uint64_t grp : 10; + uint64_t node : 2; + uint64_t q : 14; + uint64_t wqe_type : 4; +}; + +/* NIX Rx flow key algorithm field structure */ +struct nix_rx_flowkey_alg { + uint64_t key_offset : 6; + uint64_t ln_mask : 1; + uint64_t fn_mask : 1; + uint64_t hdr_offset : 8; + uint64_t bytesm1 : 5; + uint64_t lid : 3; + uint64_t reserved_24_24 : 1; + uint64_t ena : 1; + uint64_t sel_chan : 1; + uint64_t ltype_mask : 4; + uint64_t ltype_match : 4; + uint64_t reserved_35_63 : 29; +}; + +/* NIX LSO format field structure */ +struct nix_lso_format { + uint64_t offset : 8; + uint64_t layer : 2; + uint64_t rsvd_10_11 : 2; + uint64_t sizem1 : 2; + uint64_t rsvd_14_15 : 2; + uint64_t alg : 3; + uint64_t rsvd_19_63 : 45; +}; + +#define NIX_LSO_FIELD_MAX (8) +#define NIX_LSO_FIELD_ALG_MASK GENMASK(18, 16) +#define NIX_LSO_FIELD_SZ_MASK GENMASK(13, 12) +#define NIX_LSO_FIELD_LY_MASK GENMASK(9, 8) +#define NIX_LSO_FIELD_OFF_MASK GENMASK(7, 0) + +#define NIX_LSO_FIELD_MASK \ + (NIX_LSO_FIELD_OFF_MASK | NIX_LSO_FIELD_LY_MASK | \ + NIX_LSO_FIELD_SZ_MASK | NIX_LSO_FIELD_ALG_MASK) + +#define NIX_CN9K_MAX_HW_FRS 9212UL +#define NIX_LBK_MAX_HW_FRS 65535UL +#define NIX_RPM_MAX_HW_FRS 16380UL +#define NIX_MIN_HW_FRS 60UL + +/* NIX rate limits */ +#define NIX_TM_MAX_RATE_DIV_EXP 12 +#define NIX_TM_MAX_RATE_EXPONENT 0xf +#define NIX_TM_MAX_RATE_MANTISSA 0xff + +#define NIX_TM_SHAPER_RATE_CONST ((uint64_t)2E6) + +/* NIX rate calculation in Bits/Sec + * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA]) + * << NIX_*_PIR[RATE_EXPONENT]) / 256 + * PIR = (2E6 * PIR_ADD / (1 << NIX_*_PIR[RATE_DIVIDER_EXPONENT])) + * + * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA]) + * << NIX_*_CIR[RATE_EXPONENT]) / 256 + * CIR = (2E6 * CIR_ADD / (CCLK_TICKS << 
NIX_*_CIR[RATE_DIVIDER_EXPONENT])) + */ +#define NIX_TM_SHAPER_RATE(exponent, mantissa, div_exp) \ + ((NIX_TM_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent))) / \ + (((1ull << (div_exp)) * 256))) + +/* Rate limit in Bits/Sec */ +#define NIX_TM_MIN_SHAPER_RATE NIX_TM_SHAPER_RATE(0, 0, NIX_TM_MAX_RATE_DIV_EXP) + +#define NIX_TM_MAX_SHAPER_RATE \ + NIX_TM_SHAPER_RATE(NIX_TM_MAX_RATE_EXPONENT, NIX_TM_MAX_RATE_MANTISSA, \ + 0) + +/* NIX burst limits */ +#define NIX_TM_MAX_BURST_EXPONENT 0xf +#define NIX_TM_MAX_BURST_MANTISSA 0xff + +/* NIX burst calculation + * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA]) + * << (NIX_*_PIR[BURST_EXPONENT] + 1)) + * / 256 + * + * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA]) + * << (NIX_*_CIR[BURST_EXPONENT] + 1)) + * / 256 + */ +#define NIX_TM_SHAPER_BURST(exponent, mantissa) \ + (((256 + (mantissa)) << ((exponent) + 1)) / 256) + +/* Burst limit in Bytes */ +#define NIX_TM_MIN_SHAPER_BURST NIX_TM_SHAPER_BURST(0, 0) + +#define NIX_TM_MAX_SHAPER_BURST \ + NIX_TM_SHAPER_BURST(NIX_TM_MAX_BURST_EXPONENT, \ + NIX_TM_MAX_BURST_MANTISSA) + +/* Min is limited so that NIX_AF_SMQX_CFG[MINLEN]+ADJUST is not -ve */ +#define NIX_TM_LENGTH_ADJUST_MIN ((int)-NIX_MIN_HW_FRS + 1) +#define NIX_TM_LENGTH_ADJUST_MAX 255 + +#define NIX_TM_TLX_SP_PRIO_MAX 10 +#define NIX_CN9K_TM_RR_QUANTUM_MAX (BIT_ULL(24) - 1) +#define NIX_TM_RR_QUANTUM_MAX (BIT_ULL(14) - 1) + +/* [CN9K, CN10K) */ +#define NIX_CN9K_TXSCH_LVL_SMQ_MAX 512 + +/* [CN10K, .) */ +#define NIX_TXSCH_LVL_SMQ_MAX 832 + +/* [CN9K, .) */ +#define NIX_TXSCH_LVL_TL4_MAX 512 +#define NIX_TXSCH_LVL_TL3_MAX 256 +#define NIX_TXSCH_LVL_TL2_MAX 256 +#define NIX_TXSCH_LVL_TL1_MAX 28 + +#define NIX_CQ_OP_STAT_OP_ERR 63 +#define NIX_CQ_OP_STAT_CQ_ERR 46 + +#define NIX_RQ_CN10K_SPB_MAX_SIZE 4096 + +/* [CN9K, .) 
*/ +#define NIX_LSO_SEG_MAX 256 +#define NIX_LSO_MPS_MAX (BIT_ULL(14) - 1) + +/* Software defined LSO base format IDX */ +#define NIX_LSO_FORMAT_IDX_TSOV4 0 +#define NIX_LSO_FORMAT_IDX_TSOV6 1 + +#endif /* __NIX_HW_H__ */ diff --git a/drivers/common/cnxk/hw/npa.h b/drivers/common/cnxk/hw/npa.h new file mode 100644 index 0000000..891a1b2 --- /dev/null +++ b/drivers/common/cnxk/hw/npa.h @@ -0,0 +1,376 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef __NPA_HW_H__ +#define __NPA_HW_H__ + +/* Register offsets */ + +#define NPA_AF_BLK_RST (0x0ull) +#define NPA_AF_CONST (0x10ull) +#define NPA_AF_CONST1 (0x18ull) +#define NPA_AF_LF_RST (0x20ull) +#define NPA_AF_GEN_CFG (0x30ull) +#define NPA_AF_NDC_CFG (0x40ull) +#define NPA_AF_NDC_SYNC (0x50ull) +#define NPA_AF_INP_CTL (0xd0ull) +#define NPA_AF_ACTIVE_CYCLES_PC (0xf0ull) +#define NPA_AF_AVG_DELAY (0x100ull) +#define NPA_AF_GEN_INT (0x140ull) +#define NPA_AF_GEN_INT_W1S (0x148ull) +#define NPA_AF_GEN_INT_ENA_W1S (0x150ull) +#define NPA_AF_GEN_INT_ENA_W1C (0x158ull) +#define NPA_AF_RVU_INT (0x160ull) +#define NPA_AF_RVU_INT_W1S (0x168ull) +#define NPA_AF_RVU_INT_ENA_W1S (0x170ull) +#define NPA_AF_RVU_INT_ENA_W1C (0x178ull) +#define NPA_AF_ERR_INT (0x180ull) +#define NPA_AF_ERR_INT_W1S (0x188ull) +#define NPA_AF_ERR_INT_ENA_W1S (0x190ull) +#define NPA_AF_ERR_INT_ENA_W1C (0x198ull) +#define NPA_AF_RAS (0x1a0ull) +#define NPA_AF_RAS_W1S (0x1a8ull) +#define NPA_AF_RAS_ENA_W1S (0x1b0ull) +#define NPA_AF_RAS_ENA_W1C (0x1b8ull) +#define NPA_AF_AQ_CFG (0x600ull) +#define NPA_AF_AQ_BASE (0x610ull) +#define NPA_AF_AQ_STATUS (0x620ull) +#define NPA_AF_AQ_DOOR (0x630ull) +#define NPA_AF_AQ_DONE_WAIT (0x640ull) +#define NPA_AF_AQ_DONE (0x650ull) +#define NPA_AF_AQ_DONE_ACK (0x660ull) +#define NPA_AF_AQ_DONE_TIMER (0x670ull) +#define NPA_AF_AQ_DONE_INT (0x680ull) +#define NPA_AF_AQ_DONE_ENA_W1S (0x690ull) +#define NPA_AF_AQ_DONE_ENA_W1C (0x698ull) +#define NPA_AF_BATCH_CTL (0x6a0ull) /* [CN10K, 
.) */ +#define NPA_AF_BATCH_ACCEPT_CTL (0x6a8ull) /* [CN10K, .) */ +#define NPA_AF_BATCH_ERR_DATA0 (0x6c0ull) /* [CN10K, .) */ +#define NPA_AF_BATCH_ERR_DATA1 (0x6c8ull) /* [CN10K, .) */ +#define NPA_AF_LFX_AURAS_CFG(a) (0x4000ull | (uint64_t)(a) << 18) +#define NPA_AF_LFX_LOC_AURAS_BASE(a) (0x4010ull | (uint64_t)(a) << 18) +#define NPA_AF_LFX_QINTS_CFG(a) (0x4100ull | (uint64_t)(a) << 18) +#define NPA_AF_LFX_QINTS_BASE(a) (0x4110ull | (uint64_t)(a) << 18) +#define NPA_PRIV_AF_INT_CFG (0x10000ull) +#define NPA_PRIV_LFX_CFG(a) (0x10010ull | (uint64_t)(a) << 8) +#define NPA_PRIV_LFX_INT_CFG(a) (0x10020ull | (uint64_t)(a) << 8) +#define NPA_AF_RVU_LF_CFG_DEBUG (0x10030ull) +#define NPA_AF_DTX_FILTER_CTL (0x10040ull) + +#define NPA_LF_AURA_OP_ALLOCX(a) (0x10ull | (uint64_t)(a) << 3) +#define NPA_LF_AURA_OP_FREE0 (0x20ull) +#define NPA_LF_AURA_OP_FREE1 (0x28ull) +#define NPA_LF_AURA_OP_CNT (0x30ull) +#define NPA_LF_AURA_OP_LIMIT (0x50ull) +#define NPA_LF_AURA_OP_INT (0x60ull) +#define NPA_LF_AURA_OP_THRESH (0x70ull) +#define NPA_LF_POOL_OP_PC (0x100ull) +#define NPA_LF_POOL_OP_AVAILABLE (0x110ull) +#define NPA_LF_POOL_OP_PTR_START0 (0x120ull) +#define NPA_LF_POOL_OP_PTR_START1 (0x128ull) +#define NPA_LF_POOL_OP_PTR_END0 (0x130ull) +#define NPA_LF_POOL_OP_PTR_END1 (0x138ull) +#define NPA_LF_POOL_OP_INT (0x160ull) +#define NPA_LF_POOL_OP_THRESH (0x170ull) +#define NPA_LF_ERR_INT (0x200ull) +#define NPA_LF_ERR_INT_W1S (0x208ull) +#define NPA_LF_ERR_INT_ENA_W1C (0x210ull) +#define NPA_LF_ERR_INT_ENA_W1S (0x218ull) +#define NPA_LF_RAS (0x220ull) +#define NPA_LF_RAS_W1S (0x228ull) +#define NPA_LF_RAS_ENA_W1C (0x230ull) +#define NPA_LF_RAS_ENA_W1S (0x238ull) +#define NPA_LF_QINTX_CNT(a) (0x300ull | (uint64_t)(a) << 12) +#define NPA_LF_QINTX_INT(a) (0x310ull | (uint64_t)(a) << 12) +#define NPA_LF_QINTX_ENA_W1S(a) (0x320ull | (uint64_t)(a) << 12) +#define NPA_LF_QINTX_ENA_W1C(a) (0x330ull | (uint64_t)(a) << 12) +#define NPA_LF_AURA_BATCH_ALLOC (0x340ull) /* [CN10K, .) 
*/ +#define NPA_LF_AURA_BATCH_FREE0 (0x400ull) /* [CN10K, .) */ +#define NPA_LF_AURA_BATCH_FREEX(a) \ + (0x400ull | (uint64_t)(a) << 3) /* [CN10K, .) */ + +/* Enum offsets */ + +#define NPA_AF_BATCH_FAIL_BATCH_PASS (0x0ull) /* [CN10K, .) */ +#define NPA_AF_BATCH_FAIL_BATCH_CNT_OOR (0x1ull) /* [CN10K, .) */ +#define NPA_AF_BATCH_FAIL_BATCH_STORE_FAIL (0x2ull) /* [CN10K, .) */ + +#define NPA_AQ_COMP_NOTDONE (0x0ull) +#define NPA_AQ_COMP_GOOD (0x1ull) +#define NPA_AQ_COMP_SWERR (0x2ull) +#define NPA_AQ_COMP_CTX_POISON (0x3ull) +#define NPA_AQ_COMP_CTX_FAULT (0x4ull) +#define NPA_AQ_COMP_LOCKERR (0x5ull) + +#define NPA_AF_INT_VEC_RVU (0x0ull) +#define NPA_AF_INT_VEC_GEN (0x1ull) +#define NPA_AF_INT_VEC_AQ_DONE (0x2ull) +#define NPA_AF_INT_VEC_AF_ERR (0x3ull) +#define NPA_AF_INT_VEC_POISON (0x4ull) + +#define NPA_AQ_INSTOP_NOP (0x0ull) +#define NPA_AQ_INSTOP_INIT (0x1ull) +#define NPA_AQ_INSTOP_WRITE (0x2ull) +#define NPA_AQ_INSTOP_READ (0x3ull) +#define NPA_AQ_INSTOP_LOCK (0x4ull) +#define NPA_AQ_INSTOP_UNLOCK (0x5ull) + +#define NPA_AQ_CTYPE_AURA (0x0ull) +#define NPA_AQ_CTYPE_POOL (0x1ull) + +#define NPA_BPINTF_NIX0_RX (0x0ull) +#define NPA_BPINTF_NIX1_RX (0x1ull) + +#define NPA_AURA_ERR_INT_AURA_FREE_UNDER (0x0ull) +#define NPA_AURA_ERR_INT_AURA_ADD_OVER (0x1ull) +#define NPA_AURA_ERR_INT_AURA_ADD_UNDER (0x2ull) +#define NPA_AURA_ERR_INT_POOL_DIS (0x3ull) +#define NPA_AURA_ERR_INT_R4 (0x4ull) +#define NPA_AURA_ERR_INT_R5 (0x5ull) +#define NPA_AURA_ERR_INT_R6 (0x6ull) +#define NPA_AURA_ERR_INT_R7 (0x7ull) + +#define NPA_LF_INT_VEC_ERR_INT (0x40ull) +#define NPA_LF_INT_VEC_POISON (0x41ull) +#define NPA_LF_INT_VEC_QINT_END (0x3full) +#define NPA_LF_INT_VEC_QINT_START (0x0ull) + +#define NPA_INPQ_SSO (0x4ull) +#define NPA_INPQ_TIM (0x5ull) +#define NPA_INPQ_DPI (0x6ull) +#define NPA_INPQ_AURA_OP (0xeull) +#define NPA_INPQ_INTERNAL_RSV (0xfull) +#define NPA_INPQ_NIX0_RX (0x0ull) +#define NPA_INPQ_NIX1_RX (0x2ull) +#define NPA_INPQ_NIX0_TX (0x1ull) +#define 
NPA_INPQ_NIX1_TX (0x3ull) +#define NPA_INPQ_R_END (0xdull) +#define NPA_INPQ_R_START (0x7ull) + +#define NPA_POOL_ERR_INT_OVFLS (0x0ull) +#define NPA_POOL_ERR_INT_RANGE (0x1ull) +#define NPA_POOL_ERR_INT_PERR (0x2ull) +#define NPA_POOL_ERR_INT_R3 (0x3ull) +#define NPA_POOL_ERR_INT_R4 (0x4ull) +#define NPA_POOL_ERR_INT_R5 (0x5ull) +#define NPA_POOL_ERR_INT_R6 (0x6ull) +#define NPA_POOL_ERR_INT_R7 (0x7ull) + +#define NPA_NDC0_PORT_AURA0 (0x0ull) +#define NPA_NDC0_PORT_AURA1 (0x1ull) +#define NPA_NDC0_PORT_POOL0 (0x2ull) +#define NPA_NDC0_PORT_POOL1 (0x3ull) +#define NPA_NDC0_PORT_STACK0 (0x4ull) +#define NPA_NDC0_PORT_STACK1 (0x5ull) + +#define NPA_LF_ERR_INT_AURA_DIS (0x0ull) +#define NPA_LF_ERR_INT_AURA_OOR (0x1ull) +#define NPA_LF_ERR_INT_AURA_FAULT (0xcull) +#define NPA_LF_ERR_INT_POOL_FAULT (0xdull) +#define NPA_LF_ERR_INT_STACK_FAULT (0xeull) +#define NPA_LF_ERR_INT_QINT_FAULT (0xfull) + +#define NPA_BATCH_ALLOC_RESULT_ACCEPTED (0x0ull) /* [CN10K, .) */ +#define NPA_BATCH_ALLOC_RESULT_WAIT (0x1ull) /* [CN10K, .) */ +#define NPA_BATCH_ALLOC_RESULT_ERR (0x2ull) /* [CN10K, .) */ +#define NPA_BATCH_ALLOC_RESULT_NOCORE (0x3ull) /* [CN10K, .) */ +#define NPA_BATCH_ALLOC_RESULT_NOCORE_WAIT (0x4ull) /* [CN10K, .) */ + +#define NPA_BATCH_ALLOC_CCODE_INVAL (0x0ull) /* [CN10K, .) */ +#define NPA_BATCH_ALLOC_CCODE_VAL (0x1ull) /* [CN10K, .) */ +#define NPA_BATCH_ALLOC_CCODE_VAL_NULL (0x2ull) /* [CN10K, .) */ + +#define NPA_INPQ_ENAS_REMOTE_PORT (0x0ull) /* [CN10K, .) */ +#define NPA_INPQ_ENAS_RESP_DISABLE (0x702ull) /* [CN10K, .) */ +#define NPA_INPQ_ENAS_NOTIF_DISABLE (0x7cfull) /* [CN10K, .) 
*/ + +/* Structures definitions */ + +/* NPA admin queue instruction structure */ +struct npa_aq_inst_s { + uint64_t op : 4; + uint64_t ctype : 4; + uint64_t lf : 9; + uint64_t rsvd_23_17 : 7; + uint64_t cindex : 20; + uint64_t rsvd_62_44 : 19; + uint64_t doneint : 1; + uint64_t res_addr : 64; /* W1 */ +}; + +/* NPA admin queue result structure */ +struct npa_aq_res_s { + uint64_t op : 4; + uint64_t ctype : 4; + uint64_t compcode : 8; + uint64_t doneint : 1; + uint64_t rsvd_63_17 : 47; + uint64_t rsvd_127_64 : 64; /* W1 */ +}; + +/* NPA aura operation write data structure */ +struct npa_aura_op_wdata_s { + uint64_t aura : 20; + uint64_t rsvd_62_20 : 43; + uint64_t drop : 1; +}; + +/* NPA aura context structure */ +struct npa_aura_s { + uint64_t pool_addr : 64; /* W0 */ + uint64_t ena : 1; + uint64_t rsvd_66_65 : 2; + uint64_t pool_caching : 1; + uint64_t pool_way_mask : 16; + uint64_t avg_con : 9; + uint64_t rsvd_93 : 1; + uint64_t pool_drop_ena : 1; + uint64_t aura_drop_ena : 1; + uint64_t bp_ena : 2; + uint64_t rsvd_103_98 : 6; + uint64_t aura_drop : 8; + uint64_t shift : 6; + uint64_t rsvd_119_118 : 2; + uint64_t avg_level : 8; + uint64_t count : 36; + uint64_t rsvd_167_164 : 4; + uint64_t nix0_bpid : 9; + uint64_t rsvd_179_177 : 3; + uint64_t nix1_bpid : 9; + uint64_t rsvd_191_189 : 3; + uint64_t limit : 36; + uint64_t rsvd_231_228 : 4; + uint64_t bp : 8; + uint64_t rsvd_242_240 : 3; + uint64_t fc_be : 1; /* [CN10K, .) 
*/ + uint64_t fc_ena : 1; + uint64_t fc_up_crossing : 1; + uint64_t fc_stype : 2; + uint64_t fc_hyst_bits : 4; + uint64_t rsvd_255_252 : 4; + uint64_t fc_addr : 64; /* W4 */ + uint64_t pool_drop : 8; + uint64_t update_time : 16; + uint64_t err_int : 8; + uint64_t err_int_ena : 8; + uint64_t thresh_int : 1; + uint64_t thresh_int_ena : 1; + uint64_t thresh_up : 1; + uint64_t rsvd_363 : 1; + uint64_t thresh_qint_idx : 7; + uint64_t rsvd_371 : 1; + uint64_t err_qint_idx : 7; + uint64_t rsvd_383_379 : 5; + uint64_t thresh : 36; + uint64_t rsvd_423_420 : 4; + uint64_t fc_msh_dst : 11; /* [CN10K, .) */ + uint64_t rsvd_447_435 : 13; + uint64_t rsvd_511_448 : 64; /* W7 */ +}; + +/* NPA pool context structure */ +struct npa_pool_s { + uint64_t stack_base : 64; /* W0 */ + uint64_t ena : 1; + uint64_t nat_align : 1; + uint64_t rsvd_67_66 : 2; + uint64_t stack_caching : 1; + uint64_t rsvd_71_69 : 3; + uint64_t stack_way_mask : 16; + uint64_t buf_offset : 12; + uint64_t rsvd_103_100 : 4; + uint64_t buf_size : 11; + uint64_t rsvd_127_115 : 13; + uint64_t stack_max_pages : 32; + uint64_t stack_pages : 32; + uint64_t op_pc : 48; + uint64_t rsvd_255_240 : 16; + uint64_t stack_offset : 4; + uint64_t rsvd_263_260 : 4; + uint64_t shift : 6; + uint64_t rsvd_271_270 : 2; + uint64_t avg_level : 8; + uint64_t avg_con : 9; + uint64_t fc_ena : 1; + uint64_t fc_stype : 2; + uint64_t fc_hyst_bits : 4; + uint64_t fc_up_crossing : 1; + uint64_t fc_be : 1; /* [CN10K, .) 
*/ + uint64_t rsvd_299_298 : 2; + uint64_t update_time : 16; + uint64_t rsvd_319_316 : 4; + uint64_t fc_addr : 64; /* W5 */ + uint64_t ptr_start : 64; /* W6 */ + uint64_t ptr_end : 64; /* W7 */ + uint64_t rsvd_535_512 : 24; + uint64_t err_int : 8; + uint64_t err_int_ena : 8; + uint64_t thresh_int : 1; + uint64_t thresh_int_ena : 1; + uint64_t thresh_up : 1; + uint64_t rsvd_555 : 1; + uint64_t thresh_qint_idx : 7; + uint64_t rsvd_563 : 1; + uint64_t err_qint_idx : 7; + uint64_t rsvd_575_571 : 5; + uint64_t thresh : 36; + uint64_t rsvd_615_612 : 4; + uint64_t fc_msh_dst : 11; /* [CN10K, .) */ + uint64_t rsvd_639_627 : 13; + uint64_t rsvd_703_640 : 64; /* W10 */ + uint64_t rsvd_767_704 : 64; /* W11 */ + uint64_t rsvd_831_768 : 64; /* W12 */ + uint64_t rsvd_895_832 : 64; /* W13 */ + uint64_t rsvd_959_896 : 64; /* W14 */ + uint64_t rsvd_1023_960 : 64; /* W15 */ +}; + +/* NPA queue interrupt context hardware structure */ +struct npa_qint_hw_s { + uint32_t count : 22; + uint32_t rsvd_30_22 : 9; + uint32_t ena : 1; +}; + +/* NPA batch allocate compare hardware structure */ +struct npa_batch_alloc_compare_s { + uint64_t aura : 20; + uint64_t rsvd_31_20 : 12; + uint64_t count : 10; + uint64_t rsvd_47_42 : 6; + uint64_t stype : 2; + uint64_t rsvd_61_50 : 12; + uint64_t dis_wait : 1; + uint64_t drop : 1; +}; + +/* NPA batch alloc dma write status structure */ +struct npa_batch_alloc_status_s { + uint8_t count : 5; + uint8_t ccode : 2; + uint8_t rsvd_7_7 : 1; +}; + +typedef enum { + ALLOC_RESULT_ACCEPTED = 0, + ALLOC_RESULT_WAIT = 1, + ALLOC_RESULT_ERR = 2, + ALLOC_RESULT_NOCORE = 3, + ALLOC_RESULT_NOCORE_WAIT = 4, +} npa_batch_alloc_result_e; + +typedef enum { + ALLOC_CCODE_INVAL = 0, + ALLOC_CCODE_VAL = 1, + ALLOC_CCODE_VAL_NULL = 2, +} npa_batch_alloc_ccode_e; + +typedef enum { + ALLOC_STYPE_STF = 0, + ALLOC_STYPE_STT = 1, + ALLOC_STYPE_STP = 2, + ALLOC_STYPE_STSTP = 3, +} npa_batch_alloc_stype_e; + +#endif /* __NPA_HW_H__ */ diff --git a/drivers/common/cnxk/hw/npc.h 
b/drivers/common/cnxk/hw/npc.h new file mode 100644 index 0000000..e0f06bf --- /dev/null +++ b/drivers/common/cnxk/hw/npc.h @@ -0,0 +1,525 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef __NPC_HW_H__ +#define __NPC_HW_H__ + +/* Register offsets */ + +#define NPC_AF_CFG (0x0ull) +#define NPC_AF_ACTIVE_PC (0x10ull) +#define NPC_AF_CONST (0x20ull) +#define NPC_AF_CONST1 (0x30ull) +#define NPC_AF_BLK_RST (0x40ull) +#define NPC_AF_MCAM_SCRUB_CTL (0xa0ull) +#define NPC_AF_KCAM_SCRUB_CTL (0xb0ull) +#define NPC_AF_KPUX_CFG(a) (0x500ull | (uint64_t)(a) << 3) +#define NPC_AF_PCK_CFG (0x600ull) +#define NPC_AF_PCK_DEF_OL2 (0x610ull) +#define NPC_AF_PCK_DEF_OIP4 (0x620ull) +#define NPC_AF_PCK_DEF_OIP6 (0x630ull) +#define NPC_AF_PCK_DEF_IIP4 (0x640ull) +#define NPC_AF_KEX_LDATAX_FLAGS_CFG(a) (0x800ull | (uint64_t)(a) << 3) +#define NPC_AF_INTFX_KEX_CFG(a) (0x1010ull | (uint64_t)(a) << 8) +#define NPC_AF_PKINDX_ACTION0(a) (0x80000ull | (uint64_t)(a) << 6) +#define NPC_AF_PKINDX_ACTION1(a) (0x80008ull | (uint64_t)(a) << 6) +#define NPC_AF_PKINDX_CPI_DEFX(a, b) \ + (0x80020ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3) +#define NPC_AF_CHLEN90B_PKIND (0x3bull) +#define NPC_AF_KPUX_ENTRYX_CAMX(a, b, c) \ + (0x100000ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6 | \ + (uint64_t)(c) << 3) +#define NPC_AF_KPUX_ENTRYX_ACTION0(a, b) \ + (0x100020ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6) +#define NPC_AF_KPUX_ENTRYX_ACTION1(a, b) \ + (0x100028ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6) +#define NPC_AF_KPUX_ENTRY_DISX(a, b) \ + (0x180000ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3) +#define NPC_AF_CPIX_CFG(a) (0x200000ull | (uint64_t)(a) << 3) +#define NPC_AF_INTFX_LIDX_LTX_LDX_CFG(a, b, c, d) \ + (0x900000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \ + (uint64_t)(c) << 5 | (uint64_t)(d) << 3) +#define NPC_AF_INTFX_LDATAX_FLAGSX_CFG(a, b, c) \ + (0x980000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \ + (uint64_t)(c) << 3) 
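The indexed register macros in these headers (e.g. `NPC_AF_KPUX_CFG(a)` or the three-index `NPC_AF_KPUX_ENTRYX_CAMX(a, b, c)` above) all follow the same pattern: a block base offset OR'ed with each index shifted into its bit position. A minimal sketch of what the preprocessor produces, as plain functions (the lowercase helper names are illustrative only, not part of the headers):

```c
#include <stdint.h>

/* Equivalent of NPC_AF_KPUX_CFG(a): base 0x500, KPU index at bit 3. */
static inline uint64_t
npc_af_kpux_cfg(uint64_t kpu)
{
	return 0x500ull | kpu << 3;
}

/* Equivalent of NPC_AF_KPUX_ENTRYX_CAMX(a, b, c): base 0x100000,
 * KPU index at bit 14, entry index at bit 6, CAM index at bit 3.
 */
static inline uint64_t
npc_af_kpux_entryx_camx(uint64_t kpu, uint64_t entry, uint64_t cam)
{
	return 0x100000ull | kpu << 14 | entry << 6 | cam << 3;
}
```

Because the index fields do not overlap the base offset bits, OR and addition are interchangeable here; e.g. KPU 1, entry 2, CAM word 3 resolves to `0x100000 | 0x4000 | 0x80 | 0x18 = 0x104098`.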
+#define NPC_AF_MCAMEX_BANKX_CAMX_INTF(a, b, c) \ + (0x1000000ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \ + (uint64_t)(c) << 3) +#define NPC_AF_MCAMEX_BANKX_CAMX_W0(a, b, c) \ + (0x1000010ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \ + (uint64_t)(c) << 3) +#define NPC_AF_MCAMEX_BANKX_CAMX_W1(a, b, c) \ + (0x1000020ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \ + (uint64_t)(c) << 3) +#define NPC_AF_MCAMEX_BANKX_CFG(a, b) \ + (0x1800000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define NPC_AF_MCAMEX_BANKX_STAT_ACT(a, b) \ + (0x1880000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define NPC_AF_MATCH_STATX(a) (0x1880008ull | (uint64_t)(a) << 8) +#define NPC_AF_INTFX_MISS_STAT_ACT(a) (0x1880040ull + 0x8 * (uint64_t)(a)) +#define NPC_AF_MCAMEX_BANKX_ACTION(a, b) \ + (0x1900000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define NPC_AF_MCAMEX_BANKX_TAG_ACT(a, b) \ + (0x1900008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define NPC_AF_INTFX_MISS_ACT(a) (0x1a00000ull | (uint64_t)(a) << 4) +#define NPC_AF_INTFX_MISS_TAG_ACT(a) (0x1b00008ull | (uint64_t)(a) << 4) +#define NPC_AF_MCAM_BANKX_HITX(a, b) \ + (0x1c80000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define NPC_AF_LKUP_CTL (0x2000000ull) +#define NPC_AF_LKUP_DATAX(a) (0x2000200ull | (uint64_t)(a) << 4) +#define NPC_AF_LKUP_RESULTX(a) (0x2000400ull | (uint64_t)(a) << 4) +#define NPC_AF_INTFX_STAT(a) (0x2000800ull | (uint64_t)(a) << 4) +#define NPC_AF_DBG_CTL (0x3000000ull) +#define NPC_AF_DBG_STATUS (0x3000010ull) +#define NPC_AF_KPUX_DBG(a) (0x3000020ull | (uint64_t)(a) << 8) +#define NPC_AF_IKPU_ERR_CTL (0x3000080ull) +#define NPC_AF_KPUX_ERR_CTL(a) (0x30000a0ull | (uint64_t)(a) << 8) +#define NPC_AF_MCAM_DBG (0x3001000ull) +#define NPC_AF_DBG_DATAX(a) (0x3001400ull | (uint64_t)(a) << 4) +#define NPC_AF_DBG_RESULTX(a) (0x3001800ull | (uint64_t)(a) << 4) + +/* Enum offsets */ + +#define NPC_INTF_NIX0_RX (0x0ull) +#define NPC_INTF_NIX0_TX (0x1ull) + +#define NPC_LKUPOP_PKT (0x0ull) 
+#define NPC_LKUPOP_KEY (0x1ull) + +#define NPC_MCAM_KEY_X1 (0x0ull) +#define NPC_MCAM_KEY_X2 (0x1ull) +#define NPC_MCAM_KEY_X4 (0x2ull) + +#ifndef __NPC_ERRLEVELS__ +#define __NPC_ERRLEVELS__ + +enum NPC_ERRLEV_E { + NPC_ERRLEV_RE = 0, + NPC_ERRLEV_LA = 1, + NPC_ERRLEV_LB = 2, + NPC_ERRLEV_LC = 3, + NPC_ERRLEV_LD = 4, + NPC_ERRLEV_LE = 5, + NPC_ERRLEV_LF = 6, + NPC_ERRLEV_LG = 7, + NPC_ERRLEV_LH = 8, + NPC_ERRLEV_R9 = 9, + NPC_ERRLEV_R10 = 10, + NPC_ERRLEV_R11 = 11, + NPC_ERRLEV_R12 = 12, + NPC_ERRLEV_R13 = 13, + NPC_ERRLEV_R14 = 14, + NPC_ERRLEV_NIX = 15, + NPC_ERRLEV_ENUM_LAST = 16, +}; + +#endif + +enum npc_kpu_err_code { + NPC_EC_NOERR = 0, /* has to be zero */ + NPC_EC_UNK, + NPC_EC_IH_LENGTH, + NPC_EC_EDSA_UNK, + NPC_EC_L2_K1, + NPC_EC_L2_K2, + NPC_EC_L2_K3, + NPC_EC_L2_K3_ETYPE_UNK, + NPC_EC_L2_K4, + NPC_EC_MPLS_2MANY, + NPC_EC_MPLS_UNK, + NPC_EC_NSH_UNK, + NPC_EC_IP_TTL_0, + NPC_EC_IP_FRAG_OFFSET_1, + NPC_EC_IP_VER, + NPC_EC_IP6_HOP_0, + NPC_EC_IP6_VER, + NPC_EC_TCP_FLAGS_FIN_ONLY, + NPC_EC_TCP_FLAGS_ZERO, + NPC_EC_TCP_FLAGS_RST_FIN, + NPC_EC_TCP_FLAGS_URG_SYN, + NPC_EC_TCP_FLAGS_RST_SYN, + NPC_EC_TCP_FLAGS_SYN_FIN, + NPC_EC_VXLAN, + NPC_EC_NVGRE, + NPC_EC_GRE, + NPC_EC_GRE_VER1, + NPC_EC_L4, + NPC_EC_OIP4_CSUM, + NPC_EC_IIP4_CSUM, + NPC_EC_LAST /* has to be the last item */ +}; + +enum NPC_LID_E { + NPC_LID_LA = 0, + NPC_LID_LB, + NPC_LID_LC, + NPC_LID_LD, + NPC_LID_LE, + NPC_LID_LF, + NPC_LID_LG, + NPC_LID_LH, +}; + +#ifndef __NPC_LT_TYPES__ +#define __NPC_LT_TYPES__ +#define NPC_LT_NA 0 + +enum npc_kpu_la_ltype { + NPC_LT_LA_8023 = 1, + NPC_LT_LA_ETHER, + NPC_LT_LA_IH_NIX_ETHER, + NPC_LT_LA_IH_8_ETHER, + NPC_LT_LA_IH_4_ETHER, + NPC_LT_LA_IH_2_ETHER, + NPC_LT_LA_HIGIG2_ETHER, + NPC_LT_LA_IH_NIX_HIGIG2_ETHER, + NPC_LT_LA_CH_LEN_90B_ETHER, + NPC_LT_LA_CPT_HDR, + NPC_LT_LA_CUSTOM0 = 0xE, + NPC_LT_LA_CUSTOM1 = 0xF, +}; + +enum npc_kpu_lb_ltype { + NPC_LT_LB_ETAG = 1, + NPC_LT_LB_CTAG, + NPC_LT_LB_STAG_QINQ, + NPC_LT_LB_BTAG, + NPC_LT_LB_ITAG, + 
NPC_LT_LB_DSA, + NPC_LT_LB_DSA_VLAN, + NPC_LT_LB_EDSA, + NPC_LT_LB_EDSA_VLAN, + NPC_LT_LB_EXDSA, + NPC_LT_LB_EXDSA_VLAN, + NPC_LT_LB_FDSA, + NPC_LT_LB_CUSTOM0 = 0xE, + NPC_LT_LB_CUSTOM1 = 0xF, +}; + +enum npc_kpu_lc_ltype { + NPC_LT_LC_PTP = 1, + NPC_LT_LC_IP, + NPC_LT_LC_IP_OPT, + NPC_LT_LC_IP6, + NPC_LT_LC_IP6_EXT, + NPC_LT_LC_ARP, + NPC_LT_LC_RARP, + NPC_LT_LC_MPLS, + NPC_LT_LC_NSH, + NPC_LT_LC_FCOE, + NPC_LT_LC_CUSTOM0 = 0xE, + NPC_LT_LC_CUSTOM1 = 0xF, +}; + +/* Don't modify Ltypes up to SCTP, otherwise it will + * affect flow tag calculation and thus RSS. + */ +enum npc_kpu_ld_ltype { + NPC_LT_LD_TCP = 1, + NPC_LT_LD_UDP, + NPC_LT_LD_ICMP, + NPC_LT_LD_SCTP, + NPC_LT_LD_ICMP6, + NPC_LT_LD_CUSTOM0, + NPC_LT_LD_CUSTOM1, + NPC_LT_LD_IGMP = 8, + NPC_LT_LD_AH, + NPC_LT_LD_GRE, + NPC_LT_LD_NVGRE, + NPC_LT_LD_NSH, + NPC_LT_LD_TU_MPLS_IN_NSH, + NPC_LT_LD_TU_MPLS_IN_IP, +}; + +enum npc_kpu_le_ltype { + NPC_LT_LE_VXLAN = 1, + NPC_LT_LE_GENEVE, + NPC_LT_LE_ESP, + NPC_LT_LE_GTPU = 4, + NPC_LT_LE_VXLANGPE, + NPC_LT_LE_GTPC, + NPC_LT_LE_NSH, + NPC_LT_LE_TU_MPLS_IN_GRE, + NPC_LT_LE_TU_NSH_IN_GRE, + NPC_LT_LE_TU_MPLS_IN_UDP, + NPC_LT_LE_CUSTOM0 = 0xE, + NPC_LT_LE_CUSTOM1 = 0xF, +}; + +#endif + +enum npc_kpu_lf_ltype { + NPC_LT_LF_TU_ETHER = 1, + NPC_LT_LF_TU_PPP, + NPC_LT_LF_TU_MPLS_IN_VXLANGPE, + NPC_LT_LF_TU_NSH_IN_VXLANGPE, + NPC_LT_LF_TU_MPLS_IN_NSH, + NPC_LT_LF_TU_3RD_NSH, + NPC_LT_LF_CUSTOM0 = 0xE, + NPC_LT_LF_CUSTOM1 = 0xF, +}; + +enum npc_kpu_lg_ltype { + NPC_LT_LG_TU_IP = 1, + NPC_LT_LG_TU_IP6, + NPC_LT_LG_TU_ARP, + NPC_LT_LG_TU_ETHER_IN_NSH, + NPC_LT_LG_CUSTOM0 = 0xE, + NPC_LT_LG_CUSTOM1 = 0xF, +}; + +/* Don't modify Ltypes up to SCTP, otherwise it will + * affect flow tag calculation and thus RSS.
+ */ +enum npc_kpu_lh_ltype { + NPC_LT_LH_TU_TCP = 1, + NPC_LT_LH_TU_UDP, + NPC_LT_LH_TU_ICMP, + NPC_LT_LH_TU_SCTP, + NPC_LT_LH_TU_ICMP6, + NPC_LT_LH_TU_IGMP = 8, + NPC_LT_LH_TU_ESP, + NPC_LT_LH_TU_AH, + NPC_LT_LH_CUSTOM0 = 0xE, + NPC_LT_LH_CUSTOM1 = 0xF, +}; + +enum npc_kpu_lb_uflag { + NPC_F_LB_U_UNK_ETYPE = 0x80, + NPC_F_LB_U_MORE_TAG = 0x40, +}; + +enum npc_kpu_lb_lflag { + NPC_F_LB_L_WITH_CTAG = 1, + NPC_F_LB_L_WITH_CTAG_UNK, + NPC_F_LB_L_WITH_STAG_CTAG, + NPC_F_LB_L_WITH_STAG_STAG, + NPC_F_LB_L_WITH_QINQ_CTAG, + NPC_F_LB_L_WITH_QINQ_QINQ, + NPC_F_LB_L_WITH_ITAG, + NPC_F_LB_L_WITH_ITAG_STAG, + NPC_F_LB_L_WITH_ITAG_CTAG, + NPC_F_LB_L_WITH_ITAG_UNK, + NPC_F_LB_L_WITH_BTAG_ITAG, + NPC_F_LB_L_WITH_STAG, + NPC_F_LB_L_WITH_QINQ, + NPC_F_LB_L_DSA, + NPC_F_LB_L_DSA_VLAN, + NPC_F_LB_L_EDSA, + NPC_F_LB_L_EDSA_VLAN, + NPC_F_LB_L_EXDSA, + NPC_F_LB_L_EXDSA_VLAN, + NPC_F_LB_L_FDSA, +}; + +enum npc_kpu_lc_uflag { + NPC_F_LC_U_UNK_PROTO = 0x10, + NPC_F_LC_U_IP_FRAG = 0x20, + NPC_F_LC_U_IP6_FRAG = 0x40, +}; + +/* Structures definitions */ +struct npc_kpu_profile_cam { + uint8_t state; + uint8_t state_mask; + uint16_t dp0; + uint16_t dp0_mask; + uint16_t dp1; + uint16_t dp1_mask; + uint16_t dp2; + uint16_t dp2_mask; +}; + +struct npc_kpu_profile_action { + uint8_t errlev; + uint8_t errcode; + uint8_t dp0_offset; + uint8_t dp1_offset; + uint8_t dp2_offset; + uint8_t bypass_count; + uint8_t parse_done; + uint8_t next_state; + uint8_t ptr_advance; + uint8_t cap_ena; + uint8_t lid; + uint8_t ltype; + uint8_t flags; + uint8_t offset; + uint8_t mask; + uint8_t right; + uint8_t shift; +}; + +struct npc_kpu_profile { + int cam_entries; + int action_entries; + struct npc_kpu_profile_cam *cam; + struct npc_kpu_profile_action *action; +}; + +/* NPC KPU register formats */ +struct npc_kpu_cam { + uint64_t dp0_data : 16; + uint64_t dp1_data : 16; + uint64_t dp2_data : 16; + uint64_t state : 8; + uint64_t rsvd_63_56 : 8; +}; + +struct npc_kpu_action0 { + uint64_t var_len_shift : 3; + 
uint64_t var_len_right : 1; + uint64_t var_len_mask : 8; + uint64_t var_len_offset : 8; + uint64_t ptr_advance : 8; + uint64_t capture_flags : 8; + uint64_t capture_ltype : 4; + uint64_t capture_lid : 3; + uint64_t rsvd_43 : 1; + uint64_t next_state : 8; + uint64_t parse_done : 1; + uint64_t capture_ena : 1; + uint64_t byp_count : 3; + uint64_t rsvd_63_57 : 7; +}; + +struct npc_kpu_action1 { + uint64_t dp0_offset : 8; + uint64_t dp1_offset : 8; + uint64_t dp2_offset : 8; + uint64_t errcode : 8; + uint64_t errlev : 4; + uint64_t rsvd_63_36 : 28; +}; + +struct npc_kpu_pkind_cpi_def { + uint64_t cpi_base : 10; + uint64_t rsvd_11_10 : 2; + uint64_t add_shift : 3; + uint64_t rsvd_15 : 1; + uint64_t add_mask : 8; + uint64_t add_offset : 8; + uint64_t flags_mask : 8; + uint64_t flags_match : 8; + uint64_t ltype_mask : 4; + uint64_t ltype_match : 4; + uint64_t lid : 3; + uint64_t rsvd_62_59 : 4; + uint64_t ena : 1; +}; + +struct nix_rx_action { + uint64_t op : 4; + uint64_t pf_func : 16; + uint64_t index : 20; + uint64_t match_id : 16; + uint64_t flow_key_alg : 5; + uint64_t rsvd_63_61 : 3; +}; + +struct nix_tx_action { + uint64_t op : 4; + uint64_t rsvd_11_4 : 8; + uint64_t index : 20; + uint64_t match_id : 16; + uint64_t rsvd_63_48 : 16; +}; + +/* NPC layer parse information structure */ +struct npc_layer_info_s { + uint32_t lptr : 8; + uint32_t flags : 8; + uint32_t ltype : 4; + uint32_t rsvd_31_20 : 12; +}; + +/* NPC layer mcam search key extract structure */ +struct npc_layer_kex_s { + uint16_t flags : 8; + uint16_t ltype : 4; + uint16_t rsvd_15_12 : 4; +}; + +/* NPC mcam search key x1 structure */ +struct npc_mcam_key_x1_s { + uint64_t intf : 2; + uint64_t rsvd_63_2 : 62; + uint64_t kw0 : 64; /* W1 */ + uint64_t kw1 : 48; + uint64_t rsvd_191_176 : 16; +}; + +/* NPC mcam search key x2 structure */ +struct npc_mcam_key_x2_s { + uint64_t intf : 2; + uint64_t rsvd_63_2 : 62; + uint64_t kw0 : 64; /* W1 */ + uint64_t kw1 : 64; /* W2 */ + uint64_t kw2 : 64; /* W3 */ + 
uint64_t kw3 : 32; + uint64_t rsvd_319_288 : 32; +}; + +/* NPC mcam search key x4 structure */ +struct npc_mcam_key_x4_s { + uint64_t intf : 2; + uint64_t rsvd_63_2 : 62; + uint64_t kw0 : 64; /* W1 */ + uint64_t kw1 : 64; /* W2 */ + uint64_t kw2 : 64; /* W3 */ + uint64_t kw3 : 64; /* W4 */ + uint64_t kw4 : 64; /* W5 */ + uint64_t kw5 : 64; /* W6 */ + uint64_t kw6 : 64; /* W7 */ +}; + +/* NPC parse key extract structure */ +struct npc_parse_kex_s { + uint64_t chan : 12; + uint64_t errlev : 4; + uint64_t errcode : 8; + uint64_t l2m : 1; + uint64_t l2b : 1; + uint64_t l3m : 1; + uint64_t l3b : 1; + uint64_t la : 12; + uint64_t lb : 12; + uint64_t lc : 12; + uint64_t ld : 12; + uint64_t le : 12; + uint64_t lf : 12; + uint64_t lg : 12; + uint64_t lh : 12; + uint64_t rsvd_127_124 : 4; +}; + +/* NPC result structure */ +struct npc_result_s { + uint64_t intf : 2; + uint64_t pkind : 6; + uint64_t chan : 12; + uint64_t errlev : 4; + uint64_t errcode : 8; + uint64_t l2m : 1; + uint64_t l2b : 1; + uint64_t l3m : 1; + uint64_t l3b : 1; + uint64_t eoh_ptr : 8; + uint64_t rsvd_63_44 : 20; + uint64_t action : 64; /* W1 */ + uint64_t vtag_action : 64; /* W2 */ + uint64_t la : 20; + uint64_t lb : 20; + uint64_t lc : 20; + uint64_t rsvd_255_252 : 4; + uint64_t ld : 20; + uint64_t le : 20; + uint64_t lf : 20; + uint64_t rsvd_319_316 : 4; + uint64_t lg : 20; + uint64_t lh : 20; + uint64_t rsvd_383_360 : 24; +}; + +#endif /* __NPC_HW_H__ */ diff --git a/drivers/common/cnxk/hw/rvu.h b/drivers/common/cnxk/hw/rvu.h new file mode 100644 index 0000000..632d949 --- /dev/null +++ b/drivers/common/cnxk/hw/rvu.h @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#ifndef __RVU_HW_H__ +#define __RVU_HW_H__ + +/* Register offsets */ + +#define RVU_AF_MSIXTR_BASE (0x10ull) +#define RVU_AF_BLK_RST (0x30ull) +#define RVU_AF_PF_BAR4_ADDR (0x40ull) +#define RVU_AF_RAS (0x100ull) +#define RVU_AF_RAS_W1S (0x108ull) +#define RVU_AF_RAS_ENA_W1S (0x110ull) +#define RVU_AF_RAS_ENA_W1C (0x118ull) +#define RVU_AF_GEN_INT (0x120ull) +#define RVU_AF_GEN_INT_W1S (0x128ull) +#define RVU_AF_GEN_INT_ENA_W1S (0x130ull) +#define RVU_AF_GEN_INT_ENA_W1C (0x138ull) +#define RVU_AF_AFPFX_MBOXX(a, b) \ + (0x2000ull | (uint64_t)(a) << 4 | (uint64_t)(b) << 3) +#define RVU_AF_PFME_STATUS (0x2800ull) +#define RVU_AF_PFTRPEND (0x2810ull) +#define RVU_AF_PFTRPEND_W1S (0x2820ull) +#define RVU_AF_PF_RST (0x2840ull) +#define RVU_AF_HWVF_RST (0x2850ull) +#define RVU_AF_PFAF_MBOX_INT (0x2880ull) +#define RVU_AF_PFAF_MBOX_INT_W1S (0x2888ull) +#define RVU_AF_PFAF_MBOX_INT_ENA_W1S (0x2890ull) +#define RVU_AF_PFAF_MBOX_INT_ENA_W1C (0x2898ull) +#define RVU_AF_PFFLR_INT (0x28a0ull) +#define RVU_AF_PFFLR_INT_W1S (0x28a8ull) +#define RVU_AF_PFFLR_INT_ENA_W1S (0x28b0ull) +#define RVU_AF_PFFLR_INT_ENA_W1C (0x28b8ull) +#define RVU_AF_PFME_INT (0x28c0ull) +#define RVU_AF_PFME_INT_W1S (0x28c8ull) +#define RVU_AF_PFME_INT_ENA_W1S (0x28d0ull) +#define RVU_AF_PFME_INT_ENA_W1C (0x28d8ull) +#define RVU_PRIV_CONST (0x8000000ull) +#define RVU_PRIV_GEN_CFG (0x8000010ull) +#define RVU_PRIV_CLK_CFG (0x8000020ull) +#define RVU_PRIV_ACTIVE_PC (0x8000030ull) +#define RVU_PRIV_PFX_CFG(a) (0x8000100ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_MSIX_CFG(a) (0x8000110ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_ID_CFG(a) (0x8000120ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_INT_CFG(a) (0x8000200ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_NIXX_CFG(a, b) \ + (0x8000300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) +#define RVU_PRIV_PFX_NPA_CFG(a) (0x8000310ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_SSO_CFG(a) (0x8000320ull | (uint64_t)(a) << 16) +#define 
RVU_PRIV_PFX_SSOW_CFG(a) (0x8000330ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_TIM_CFG(a) (0x8000340ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_CPTX_CFG(a, b) \ + (0x8000350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) +#define RVU_PRIV_BLOCK_TYPEX_REV(a) (0x8000400ull | (uint64_t)(a) << 3) +#define RVU_PRIV_HWVFX_INT_CFG(a) (0x8001280ull | (uint64_t)(a) << 16) +#define RVU_PRIV_HWVFX_NIXX_CFG(a, b) \ + (0x8001300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) +#define RVU_PRIV_HWVFX_NPA_CFG(a) (0x8001310ull | (uint64_t)(a) << 16) +#define RVU_PRIV_HWVFX_SSO_CFG(a) (0x8001320ull | (uint64_t)(a) << 16) +#define RVU_PRIV_HWVFX_SSOW_CFG(a) (0x8001330ull | (uint64_t)(a) << 16) +#define RVU_PRIV_HWVFX_TIM_CFG(a) (0x8001340ull | (uint64_t)(a) << 16) +#define RVU_PRIV_HWVFX_CPTX_CFG(a, b) \ + (0x8001350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) + +#define RVU_PF_VFX_PFVF_MBOXX(a, b) \ + (0x0ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 3) +#define RVU_PF_VF_BAR4_ADDR (0x10ull) +#define RVU_PF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3) +#define RVU_PF_VFME_STATUSX(a) (0x800ull | (uint64_t)(a) << 3) +#define RVU_PF_VFTRPENDX(a) (0x820ull | (uint64_t)(a) << 3) +#define RVU_PF_VFTRPEND_W1SX(a) (0x840ull | (uint64_t)(a) << 3) +#define RVU_PF_VFPF_MBOX_INTX(a) (0x880ull | (uint64_t)(a) << 3) +#define RVU_PF_VFPF_MBOX_INT_W1SX(a) (0x8a0ull | (uint64_t)(a) << 3) +#define RVU_PF_VFPF_MBOX_INT_ENA_W1SX(a) (0x8c0ull | (uint64_t)(a) << 3) +#define RVU_PF_VFPF_MBOX_INT_ENA_W1CX(a) (0x8e0ull | (uint64_t)(a) << 3) +#define RVU_PF_VFFLR_INTX(a) (0x900ull | (uint64_t)(a) << 3) +#define RVU_PF_VFFLR_INT_W1SX(a) (0x920ull | (uint64_t)(a) << 3) +#define RVU_PF_VFFLR_INT_ENA_W1SX(a) (0x940ull | (uint64_t)(a) << 3) +#define RVU_PF_VFFLR_INT_ENA_W1CX(a) (0x960ull | (uint64_t)(a) << 3) +#define RVU_PF_VFME_INTX(a) (0x980ull | (uint64_t)(a) << 3) +#define RVU_PF_VFME_INT_W1SX(a) (0x9a0ull | (uint64_t)(a) << 3) +#define RVU_PF_VFME_INT_ENA_W1SX(a) (0x9c0ull | (uint64_t)(a) 
<< 3) +#define RVU_PF_VFME_INT_ENA_W1CX(a) (0x9e0ull | (uint64_t)(a) << 3) +#define RVU_PF_PFAF_MBOXX(a) (0xc00ull | (uint64_t)(a) << 3) +#define RVU_PF_INT (0xc20ull) +#define RVU_PF_INT_W1S (0xc28ull) +#define RVU_PF_INT_ENA_W1S (0xc30ull) +#define RVU_PF_INT_ENA_W1C (0xc38ull) +#define RVU_PF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4) +#define RVU_PF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4) +#define RVU_PF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3) +#define RVU_VF_VFPF_MBOXX(a) (0x0ull | (uint64_t)(a) << 3) +#define RVU_VF_INT (0x20ull) +#define RVU_VF_INT_W1S (0x28ull) +#define RVU_VF_INT_ENA_W1S (0x30ull) +#define RVU_VF_INT_ENA_W1C (0x38ull) +#define RVU_VF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3) +#define RVU_VF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4) +#define RVU_VF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4) +#define RVU_VF_MBOX_REGION (0xc0000ull) /* [CN10K, .) */ +#define RVU_VF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3) + +/* Enum offsets */ + +#define RVU_BAR_RVU_PF_END_BAR0 (0x84f000000000ull) +#define RVU_BAR_RVU_PF_START_BAR0 (0x840000000000ull) +#define RVU_BAR_RVU_PFX_FUNCX_BAR2(a, b) \ + (0x840200000000ull | ((uint64_t)(a) << 36) | ((uint64_t)(b) << 25)) + +#define RVU_AF_INT_VEC_POISON (0x0ull) +#define RVU_AF_INT_VEC_PFFLR (0x1ull) +#define RVU_AF_INT_VEC_PFME (0x2ull) +#define RVU_AF_INT_VEC_GEN (0x3ull) +#define RVU_AF_INT_VEC_MBOX (0x4ull) + +#define RVU_BLOCK_TYPE_RVUM (0x0ull) +#define RVU_BLOCK_TYPE_LMT (0x2ull) +#define RVU_BLOCK_TYPE_NIX (0x3ull) +#define RVU_BLOCK_TYPE_NPA (0x4ull) +#define RVU_BLOCK_TYPE_NPC (0x5ull) +#define RVU_BLOCK_TYPE_SSO (0x6ull) +#define RVU_BLOCK_TYPE_SSOW (0x7ull) +#define RVU_BLOCK_TYPE_TIM (0x8ull) +#define RVU_BLOCK_TYPE_CPT (0x9ull) +#define RVU_BLOCK_TYPE_NDC (0xaull) +#define RVU_BLOCK_TYPE_DDF (0xbull) +#define RVU_BLOCK_TYPE_ZIP (0xcull) +#define RVU_BLOCK_TYPE_RAD (0xdull) +#define RVU_BLOCK_TYPE_DFA (0xeull) +#define RVU_BLOCK_TYPE_HNA 
(0xfull) + +#define RVU_BLOCK_ADDR_RVUM (0x0ull) +#define RVU_BLOCK_ADDR_LMT (0x1ull) +#define RVU_BLOCK_ADDR_NPA (0x3ull) +#define RVU_BLOCK_ADDR_NIX0 (0x4ull) +#define RVU_BLOCK_ADDR_NIX1 (0x5ull) +#define RVU_BLOCK_ADDR_NPC (0x6ull) +#define RVU_BLOCK_ADDR_SSO (0x7ull) +#define RVU_BLOCK_ADDR_SSOW (0x8ull) +#define RVU_BLOCK_ADDR_TIM (0x9ull) +#define RVU_BLOCK_ADDR_CPT0 (0xaull) +#define RVU_BLOCK_ADDR_CPT1 (0xbull) +#define RVU_BLOCK_ADDR_NDC0 (0xcull) +#define RVU_BLOCK_ADDR_NDC1 (0xdull) +#define RVU_BLOCK_ADDR_NDC2 (0xeull) +#define RVU_BLOCK_ADDR_R_END (0x1full) +#define RVU_BLOCK_ADDR_R_START (0x14ull) + +#define RVU_VF_INT_VEC_MBOX (0x0ull) + +#define RVU_PF_INT_VEC_AFPF_MBOX (0x6ull) +#define RVU_PF_INT_VEC_VFFLR0 (0x0ull) +#define RVU_PF_INT_VEC_VFFLR1 (0x1ull) +#define RVU_PF_INT_VEC_VFME0 (0x2ull) +#define RVU_PF_INT_VEC_VFME1 (0x3ull) +#define RVU_PF_INT_VEC_VFPF_MBOX0 (0x4ull) +#define RVU_PF_INT_VEC_VFPF_MBOX1 (0x5ull) + +#define AF_BAR2_ALIASX_SIZE (0x100000ull) + +#define TIM_AF_BAR2_SEL (0x9000000ull) +#define SSO_AF_BAR2_SEL (0x9000000ull) +#define NIX_AF_BAR2_SEL (0x9000000ull) +#define SSOW_AF_BAR2_SEL (0x9000000ull) +#define NPA_AF_BAR2_SEL (0x9000000ull) +#define CPT_AF_BAR2_SEL (0x9000000ull) +#define RVU_AF_BAR2_SEL (0x9000000ull) + +#define AF_BAR2_ALIASX(a, b) \ + (0x9100000ull | (uint64_t)(a) << 12 | (uint64_t)(b)) +#define TIM_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) +#define SSO_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) +#define NIX_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b) +#define SSOW_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) +#define NPA_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b) +#define CPT_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) +#define RVU_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) + +/* Structures definitions */ + +/* RVU admin function register address structure */ +struct rvu_af_addr_s { + uint64_t addr : 28; + uint64_t block : 5; + uint64_t rsvd_63_33 : 31; +}; + +/* RVU function-unique address structure */ +struct 
rvu_func_addr_s { + uint32_t addr : 12; + uint32_t lf_slot : 8; + uint32_t block : 5; + uint32_t rsvd_31_25 : 7; +}; + +/* RVU msi-x vector structure */ +struct rvu_msix_vec_s { + uint64_t addr : 64; /* W0 */ + uint64_t data : 32; + uint64_t mask : 1; + uint64_t pend : 1; + uint64_t rsvd_127_98 : 30; +}; + +/* RVU pf function identification structure */ +struct rvu_pf_func_s { + uint16_t func : 10; + uint16_t pf : 6; +}; + +#define RVU_CN9K_LMT_SLOT_MAX 256ULL +#define RVU_CN9K_LMT_SLOT_MASK (RVU_CN9K_LMT_SLOT_MAX - 1) + +#define RVU_LMT_SZ 128ULL + +/* 2048 LMT lines in BAR4 [CN10k, .) */ +#define RVU_LMT_LINE_MAX 2048 +#define RVU_LMT_LINE_BURST_MAX (uint16_t)32 /* [CN10K, .) */ + +#endif /* __RVU_HW_H__ */ diff --git a/drivers/common/cnxk/hw/sdp.h b/drivers/common/cnxk/hw/sdp.h new file mode 100644 index 0000000..686f516 --- /dev/null +++ b/drivers/common/cnxk/hw/sdp.h @@ -0,0 +1,182 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef __SDP_HW_H_ +#define __SDP_HW_H_ + +/* SDP VF IOQs */ +#define SDP_MIN_RINGS_PER_VF (1) +#define SDP_MAX_RINGS_PER_VF (8) + +/* SDP VF IQ configuration */ +#define SDP_VF_MAX_IQ_DESCRIPTORS (512) +#define SDP_VF_MIN_IQ_DESCRIPTORS (128) + +#define SDP_VF_DB_MIN (1) +#define SDP_VF_DB_TIMEOUT (1) +#define SDP_VF_INTR_THRESHOLD (0xFFFFFFFF) + +#define SDP_VF_64BYTE_INSTR (64) +#define SDP_VF_32BYTE_INSTR (32) + +/* SDP VF OQ configuration */ +#define SDP_VF_MAX_OQ_DESCRIPTORS (512) +#define SDP_VF_MIN_OQ_DESCRIPTORS (128) +#define SDP_VF_OQ_BUF_SIZE (2048) +#define SDP_VF_OQ_REFIL_THRESHOLD (16) + +#define SDP_VF_OQ_INFOPTR_MODE (1) +#define SDP_VF_OQ_BUFPTR_MODE (0) + +#define SDP_VF_OQ_INTR_PKT (1) +#define SDP_VF_OQ_INTR_TIME (10) +#define SDP_VF_CFG_IO_QUEUES SDP_MAX_RINGS_PER_VF + +/* Wait time in milliseconds for FLR */ +#define SDP_VF_PCI_FLR_WAIT (100) +#define SDP_VF_BUSY_LOOP_COUNT (10000) + +#define SDP_VF_MAX_IO_QUEUES SDP_MAX_RINGS_PER_VF +#define SDP_VF_MIN_IO_QUEUES 
SDP_MIN_RINGS_PER_VF + +/* SDP VF IOQs per rawdev */ +#define SDP_VF_MAX_IOQS_PER_RAWDEV SDP_VF_MAX_IO_QUEUES +#define SDP_VF_DEFAULT_IOQS_PER_RAWDEV SDP_VF_MIN_IO_QUEUES + +/* SDP VF Register definitions */ +#define SDP_VF_RING_OFFSET (0x1ull << 17) + +/* SDP VF IQ Registers */ +#define SDP_VF_R_IN_CONTROL_START (0x10000) +#define SDP_VF_R_IN_ENABLE_START (0x10010) +#define SDP_VF_R_IN_INSTR_BADDR_START (0x10020) +#define SDP_VF_R_IN_INSTR_RSIZE_START (0x10030) +#define SDP_VF_R_IN_INSTR_DBELL_START (0x10040) +#define SDP_VF_R_IN_CNTS_START (0x10050) +#define SDP_VF_R_IN_INT_LEVELS_START (0x10060) +#define SDP_VF_R_IN_PKT_CNT_START (0x10080) +#define SDP_VF_R_IN_BYTE_CNT_START (0x10090) + +#define SDP_VF_R_IN_CONTROL(ring) \ + (SDP_VF_R_IN_CONTROL_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_IN_ENABLE(ring) \ + (SDP_VF_R_IN_ENABLE_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_IN_INSTR_BADDR(ring) \ + (SDP_VF_R_IN_INSTR_BADDR_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_IN_INSTR_RSIZE(ring) \ + (SDP_VF_R_IN_INSTR_RSIZE_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_IN_INSTR_DBELL(ring) \ + (SDP_VF_R_IN_INSTR_DBELL_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_IN_CNTS(ring) \ + (SDP_VF_R_IN_CNTS_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_IN_INT_LEVELS(ring) \ + (SDP_VF_R_IN_INT_LEVELS_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_IN_PKT_CNT(ring) \ + (SDP_VF_R_IN_PKT_CNT_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_IN_BYTE_CNT(ring) \ + (SDP_VF_R_IN_BYTE_CNT_START + (SDP_VF_RING_OFFSET * (ring))) + +/* SDP VF IQ Masks */ +#define SDP_VF_R_IN_CTL_RPVF_MASK (0xF) +#define SDP_VF_R_IN_CTL_RPVF_POS (48) + +#define SDP_VF_R_IN_CTL_IDLE (0x1ull << 28) +#define SDP_VF_R_IN_CTL_RDSIZE (0x3ull << 25) /* Setting to max(4) */ +#define SDP_VF_R_IN_CTL_IS_64B (0x1ull << 24) +#define SDP_VF_R_IN_CTL_D_NSR (0x1ull << 8) +#define SDP_VF_R_IN_CTL_D_ESR (0x1ull << 6) +#define 
SDP_VF_R_IN_CTL_D_ROR (0x1ull << 5) +#define SDP_VF_R_IN_CTL_NSR (0x1ull << 3) +#define SDP_VF_R_IN_CTL_ESR (0x1ull << 1) +#define SDP_VF_R_IN_CTL_ROR (0x1ull << 0) + +#define SDP_VF_R_IN_CTL_MASK (SDP_VF_R_IN_CTL_RDSIZE | SDP_VF_R_IN_CTL_IS_64B) + +/* SDP VF OQ Registers */ +#define SDP_VF_R_OUT_CNTS_START (0x10100) +#define SDP_VF_R_OUT_INT_LEVELS_START (0x10110) +#define SDP_VF_R_OUT_SLIST_BADDR_START (0x10120) +#define SDP_VF_R_OUT_SLIST_RSIZE_START (0x10130) +#define SDP_VF_R_OUT_SLIST_DBELL_START (0x10140) +#define SDP_VF_R_OUT_CONTROL_START (0x10150) +#define SDP_VF_R_OUT_ENABLE_START (0x10160) +#define SDP_VF_R_OUT_PKT_CNT_START (0x10180) +#define SDP_VF_R_OUT_BYTE_CNT_START (0x10190) + +#define SDP_VF_R_OUT_CONTROL(ring) \ + (SDP_VF_R_OUT_CONTROL_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_OUT_ENABLE(ring) \ + (SDP_VF_R_OUT_ENABLE_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_OUT_SLIST_BADDR(ring) \ + (SDP_VF_R_OUT_SLIST_BADDR_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_OUT_SLIST_RSIZE(ring) \ + (SDP_VF_R_OUT_SLIST_RSIZE_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_OUT_SLIST_DBELL(ring) \ + (SDP_VF_R_OUT_SLIST_DBELL_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_OUT_CNTS(ring) \ + (SDP_VF_R_OUT_CNTS_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_OUT_INT_LEVELS(ring) \ + (SDP_VF_R_OUT_INT_LEVELS_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_OUT_PKT_CNT(ring) \ + (SDP_VF_R_OUT_PKT_CNT_START + (SDP_VF_RING_OFFSET * (ring))) + +#define SDP_VF_R_OUT_BYTE_CNT(ring) \ + (SDP_VF_R_OUT_BYTE_CNT_START + (SDP_VF_RING_OFFSET * (ring))) + +/* SDP VF OQ Masks */ +#define SDP_VF_R_OUT_CTL_IDLE (1ull << 40) +#define SDP_VF_R_OUT_CTL_ES_I (1ull << 34) +#define SDP_VF_R_OUT_CTL_NSR_I (1ull << 33) +#define SDP_VF_R_OUT_CTL_ROR_I (1ull << 32) +#define SDP_VF_R_OUT_CTL_ES_D (1ull << 30) +#define SDP_VF_R_OUT_CTL_NSR_D (1ull << 29) +#define SDP_VF_R_OUT_CTL_ROR_D (1ull << 28) +#define 
SDP_VF_R_OUT_CTL_ES_P (1ull << 26) +#define SDP_VF_R_OUT_CTL_NSR_P (1ull << 25) +#define SDP_VF_R_OUT_CTL_ROR_P (1ull << 24) +#define SDP_VF_R_OUT_CTL_IMODE (1ull << 23) + +#define SDP_VF_R_OUT_INT_LEVELS_BMODE (1ull << 63) +#define SDP_VF_R_OUT_INT_LEVELS_TIMET (32) + +/* SDP Instruction Header */ +struct sdp_instr_ih { + /* Data Len */ + uint64_t tlen : 16; + + /* Reserved1 */ + uint64_t rsvd1 : 20; + + /* PKIND for SDP */ + uint64_t pkind : 6; + + /* Front Data size */ + uint64_t fsz : 6; + + /* No. of entries in gather list */ + uint64_t gsz : 14; + + /* Gather indicator */ + uint64_t gather : 1; + + /* Reserved2 */ + uint64_t rsvd2 : 1; +} __plt_packed; + +#endif /* __SDP_HW_H_ */ diff --git a/drivers/common/cnxk/hw/sso.h b/drivers/common/cnxk/hw/sso.h new file mode 100644 index 0000000..25deaa4 --- /dev/null +++ b/drivers/common/cnxk/hw/sso.h @@ -0,0 +1,233 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef __SSO_HW_H__ +#define __SSO_HW_H__ + +/* Register offsets */ + +#define SSO_AF_CONST (0x1000ull) +#define SSO_AF_CONST1 (0x1008ull) +#define SSO_AF_WQ_INT_PC (0x1020ull) +#define SSO_AF_NOS_CNT (0x1050ull) /* [CN9K, CN10K) */ +#define SSO_AF_GWS_INV (0x1060ull) /* [CN10K, .) 
*/ +#define SSO_AF_AW_WE (0x1080ull) +#define SSO_AF_WS_CFG (0x1088ull) +#define SSO_AF_GWE_CFG (0x1098ull) +#define SSO_AF_GWE_RANDOM (0x10b0ull) +#define SSO_AF_LF_HWGRP_RST (0x10e0ull) +#define SSO_AF_AW_CFG (0x10f0ull) +#define SSO_AF_BLK_RST (0x10f8ull) +#define SSO_AF_ACTIVE_CYCLES0 (0x1100ull) +#define SSO_AF_ACTIVE_CYCLES1 (0x1108ull) +#define SSO_AF_ACTIVE_CYCLES2 (0x1110ull) +#define SSO_AF_ERR0 (0x1220ull) +#define SSO_AF_ERR0_W1S (0x1228ull) +#define SSO_AF_ERR0_ENA_W1C (0x1230ull) +#define SSO_AF_ERR0_ENA_W1S (0x1238ull) +#define SSO_AF_ERR2 (0x1260ull) +#define SSO_AF_ERR2_W1S (0x1268ull) +#define SSO_AF_ERR2_ENA_W1C (0x1270ull) +#define SSO_AF_ERR2_ENA_W1S (0x1278ull) +#define SSO_AF_UNMAP_INFO (0x12f0ull) +#define SSO_AF_UNMAP_INFO2 (0x1300ull) +#define SSO_AF_UNMAP_INFO3 (0x1310ull) +#define SSO_AF_RAS (0x1420ull) +#define SSO_AF_RAS_W1S (0x1430ull) +#define SSO_AF_RAS_ENA_W1C (0x1460ull) +#define SSO_AF_RAS_ENA_W1S (0x1470ull) +#define SSO_AF_AW_INP_CTL (0x2070ull) +#define SSO_AF_AW_ADD (0x2080ull) +#define SSO_AF_AW_READ_ARB (0x2090ull) +#define SSO_AF_XAQ_REQ_PC (0x20b0ull) +#define SSO_AF_XAQ_LATENCY_PC (0x20b8ull) +#define SSO_AF_TAQ_CNT (0x20c0ull) +#define SSO_AF_TAQ_ADD (0x20e0ull) +#define SSO_AF_POISONX(a) (0x2100ull | (uint64_t)(a) << 3) +#define SSO_AF_POISONX_W1S(a) (0x2200ull | (uint64_t)(a) << 3) +#define SSO_PRIV_AF_INT_CFG (0x3000ull) +#define SSO_AF_RVU_LF_CFG_DEBUG (0x3800ull) +#define SSO_PRIV_LFX_HWGRP_CFG(a) (0x10000ull | (uint64_t)(a) << 3) +#define SSO_PRIV_LFX_HWGRP_INT_CFG(a) (0x20000ull | (uint64_t)(a) << 3) +#define SSO_AF_IU_ACCNTX_CFG(a) (0x50000ull | (uint64_t)(a) << 3) +#define SSO_AF_IU_ACCNTX_RST(a) (0x60000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQX_HEAD_PTR(a) (0x80000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQX_TAIL_PTR(a) (0x90000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQX_HEAD_NEXT(a) (0xa0000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQX_TAIL_NEXT(a) (0xb0000ull | (uint64_t)(a) << 3) +#define 
SSO_AF_TIAQX_STATUS(a) (0xc0000ull | (uint64_t)(a) << 3) +#define SSO_AF_TOAQX_STATUS(a) (0xd0000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQX_GMCTL(a) (0xe0000ull | (uint64_t)(a) << 3) +#define SSO_AF_HWGRPX_IAQ_THR(a) (0x200000ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_TAQ_THR(a) (0x200010ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_PRI(a) (0x200020ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_AW_FWD(a) \ + (0x200030ull | (uint64_t)(a) << 12) /* [CN10K, .) */ +#define SSO_AF_HWGRPX_WS_PC(a) (0x200050ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_EXT_PC(a) (0x200060ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_WA_PC(a) (0x200070ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_TS_PC(a) (0x200080ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_DS_PC(a) (0x200090ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_DQ_PC(a) (0x2000A0ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_LS_PC(a) \ + (0x2000c0ull | (uint64_t)(a) << 12) /* [CN10K, .) */ +#define SSO_AF_HWGRPX_PAGE_CNT(a) (0x200100ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_AW_STATUS(a) (0x200110ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_AW_CFG(a) (0x200120ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_AW_TAGSPACE(a) (0x200130ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_XAQ_AURA(a) (0x200140ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_XAQ_LIMIT(a) (0x200220ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_IU_ACCNT(a) (0x200230ull | (uint64_t)(a) << 12) +#define SSO_AF_HWSX_ARB(a) (0x400100ull | (uint64_t)(a) << 12) +#define SSO_AF_HWSX_INV(a) (0x400180ull | (uint64_t)(a) << 12) +#define SSO_AF_HWSX_GMCTL(a) (0x400200ull | (uint64_t)(a) << 12) +#define SSO_AF_HWSX_LSW_CFG(a) \ + (0x400300ull | (uint64_t)(a) << 12) /* [CN10K, .) */ +#define SSO_AF_HWSX_SX_GRPMSKX(a, b, c) \ + (0x400400ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 5 | \ + (uint64_t)(c) << 3) +#define SSO_AF_TILEMAPX(a) \ + (0x400600ull | (uint64_t)(a) << 12) /* [CN10K, .) 
\ + */ +#define SSO_AF_IPL_FREEX(a) (0x800000ull | (uint64_t)(a) << 3) +#define SSO_AF_IPL_IAQX(a) (0x840000ull | (uint64_t)(a) << 3) +#define SSO_AF_IPL_DESCHEDX(a) (0x860000ull | (uint64_t)(a) << 3) +#define SSO_AF_IPL_CONFX(a) (0x880000ull | (uint64_t)(a) << 3) +#define SSO_AF_NPA_DIGESTX(a) (0x900000ull | (uint64_t)(a) << 3) +#define SSO_AF_NPA_DIGESTX_W1S(a) (0x900100ull | (uint64_t)(a) << 3) +#define SSO_AF_BFP_DIGESTX(a) (0x900200ull | (uint64_t)(a) << 3) +#define SSO_AF_BFP_DIGESTX_W1S(a) (0x900300ull | (uint64_t)(a) << 3) +#define SSO_AF_BFPN_DIGESTX(a) (0x900400ull | (uint64_t)(a) << 3) +#define SSO_AF_BFPN_DIGESTX_W1S(a) (0x900500ull | (uint64_t)(a) << 3) +#define SSO_AF_GRPDIS_DIGESTX(a) (0x900600ull | (uint64_t)(a) << 3) +#define SSO_AF_GRPDIS_DIGESTX_W1S(a) (0x900700ull | (uint64_t)(a) << 3) +#define SSO_AF_AWEMPTY_DIGESTX(a) (0x900800ull | (uint64_t)(a) << 3) +#define SSO_AF_AWEMPTY_DIGESTX_W1S(a) (0x900900ull | (uint64_t)(a) << 3) +#define SSO_AF_WQP0_DIGESTX(a) (0x900a00ull | (uint64_t)(a) << 3) +#define SSO_AF_WQP0_DIGESTX_W1S(a) (0x900b00ull | (uint64_t)(a) << 3) +#define SSO_AF_AW_DROPPED_DIGESTX(a) (0x900c00ull | (uint64_t)(a) << 3) +#define SSO_AF_AW_DROPPED_DIGESTX_W1S(a) (0x900d00ull | (uint64_t)(a) << 3) +#define SSO_AF_QCTLDIS_DIGESTX(a) (0x900e00ull | (uint64_t)(a) << 3) +#define SSO_AF_QCTLDIS_DIGESTX_W1S(a) (0x900f00ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQDIS_DIGESTX(a) (0x901000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQDIS_DIGESTX_W1S(a) (0x901100ull | (uint64_t)(a) << 3) +#define SSO_AF_FLR_AQ_DIGESTX(a) (0x901200ull | (uint64_t)(a) << 3) +#define SSO_AF_FLR_AQ_DIGESTX_W1S(a) (0x901300ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_GMULTI_DIGESTX(a) (0x902000ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_GMULTI_DIGESTX_W1S(a) (0x902100ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_GUNMAP_DIGESTX(a) (0x902200ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_GUNMAP_DIGESTX_W1S(a) (0x902300ull | (uint64_t)(a) << 3) +#define 
SSO_AF_WS_AWE_DIGESTX(a) \ + (0x902400ull | (uint64_t)(a) << 3) /* [CN9K, CN10K) */ +#define SSO_AF_WS_AWE_DIGESTX_W1S(a) \ + (0x902500ull | (uint64_t)(a) << 3) /* [CN9K, CN10K) */ +#define SSO_AF_WS_GWI_DIGESTX(a) \ + (0x902600ull | (uint64_t)(a) << 3) /* [CN9K, CN10K) */ +#define SSO_AF_WS_GWI_DIGESTX_W1S(a) \ + (0x902700ull | (uint64_t)(a) << 3) /* [CN9K, CN10K) */ +#define SSO_AF_WS_NE_DIGESTX(a) (0x902800ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_NE_DIGESTX_W1S(a) (0x902900ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_TAG(a) (0xa00000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_GRP(a) (0xa20000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_PENDTAG(a) (0xa40000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_LINKS(a) (0xa60000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_QLINKS(a) (0xa80000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_WQP(a) (0xaa0000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_LSW(a) \ + (0xac0000ull | (uint64_t)(a) << 3) /* [CN10K, .) */ + +#define SSO_AF_TAQX_LINK(a) (0xc00000ull | (uint64_t)(a) << 3) +#define SSO_AF_TAQX_WAEX_TAG(a, b) \ + (0xe00000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define SSO_AF_TAQX_WAEX_WQP(a, b) \ + (0xe00008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) + +#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull) +#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull) +#define SSO_LF_GGRP_QCTL (0x20ull) +#define SSO_LF_GGRP_EXE_DIS (0x80ull) +#define SSO_LF_GGRP_INT (0x100ull) +#define SSO_LF_GGRP_INT_W1S (0x108ull) +#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull) +#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull) +#define SSO_LF_GGRP_INT_THR (0x140ull) +#define SSO_LF_GGRP_INT_CNT (0x180ull) +#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull) +#define SSO_LF_GGRP_AQ_CNT (0x1c0ull) +#define SSO_LF_GGRP_AQ_THR (0x1e0ull) +#define SSO_LF_GGRP_MISC_CNT (0x200ull) + +#define SSO_AF_IAQ_FREE_CNT_MASK 0x3FFFull +#define SSO_AF_IAQ_RSVD_FREE_MASK 0x3FFFull +#define SSO_AF_IAQ_RSVD_FREE_SHIFT 16 +#define SSO_AF_IAQ_FREE_CNT_MAX 
SSO_AF_IAQ_FREE_CNT_MASK +#define SSO_AF_AW_ADD_RSVD_FREE_MASK 0x3FFFull +#define SSO_AF_AW_ADD_RSVD_FREE_SHIFT 16 +#define SSO_HWGRP_IAQ_MAX_THR_MASK 0x3FFFull +#define SSO_HWGRP_IAQ_RSVD_THR_MASK 0x3FFFull +#define SSO_HWGRP_IAQ_MAX_THR_SHIFT 32 +#define SSO_HWGRP_IAQ_RSVD_THR 0x2 + +#define SSO_AF_TAQ_FREE_CNT_MASK 0x7FFull +#define SSO_AF_TAQ_RSVD_FREE_MASK 0x7FFull +#define SSO_AF_TAQ_RSVD_FREE_SHIFT 16 +#define SSO_AF_TAQ_FREE_CNT_MAX SSO_AF_TAQ_FREE_CNT_MASK +#define SSO_AF_TAQ_ADD_RSVD_FREE_MASK 0x1FFFull +#define SSO_AF_TAQ_ADD_RSVD_FREE_SHIFT 16 +#define SSO_HWGRP_TAQ_MAX_THR_MASK 0x7FFull +#define SSO_HWGRP_TAQ_RSVD_THR_MASK 0x7FFull +#define SSO_HWGRP_TAQ_MAX_THR_SHIFT 32 +#define SSO_HWGRP_TAQ_RSVD_THR 0x3 + +#define SSO_HWGRP_PRI_AFF_MASK 0xFull +#define SSO_HWGRP_PRI_AFF_SHIFT 8 +#define SSO_HWGRP_PRI_WGT_MASK 0x3Full +#define SSO_HWGRP_PRI_WGT_SHIFT 16 +#define SSO_HWGRP_PRI_WGT_LEFT_MASK 0x3Full +#define SSO_HWGRP_PRI_WGT_LEFT_SHIFT 24 + +#define SSO_HWGRP_AW_CFG_RWEN BIT_ULL(0) +#define SSO_HWGRP_AW_CFG_LDWB BIT_ULL(1) +#define SSO_HWGRP_AW_CFG_LDT BIT_ULL(2) +#define SSO_HWGRP_AW_CFG_STT BIT_ULL(3) +#define SSO_HWGRP_AW_CFG_XAQ_BYP_DIS BIT_ULL(4) + +#define SSO_HWGRP_AW_STS_TPTR_VLD BIT_ULL(8) +#define SSO_HWGRP_AW_STS_NPA_FETCH BIT_ULL(9) +#define SSO_HWGRP_AW_STS_XAQ_BUFSC_MASK 0x7ull +#define SSO_HWGRP_AW_STS_INIT_STS 0x18ull + +/* Enum offsets */ + +#define SSO_LF_INT_VEC_GRP (0x0ull) + +#define SSO_AF_INT_VEC_ERR0 (0x0ull) +#define SSO_AF_INT_VEC_ERR2 (0x1ull) +#define SSO_AF_INT_VEC_RAS (0x2ull) + +#define SSO_LSW_MODE_NO_LSW (0x0ull) /* [CN10K, .) */ +#define SSO_LSW_MODE_WAITW (0x1ull) /* [CN10K, .) */ +#define SSO_LSW_MODE_IMMED (0x2ull) /* [CN10K, .) */ + +#define SSO_WA_IOBN (0x0ull) +#define SSO_WA_ADDWQ (0x3ull) +#define SSO_WA_DPI (0x4ull) +#define SSO_WA_TIM (0x6ull) +#define SSO_WA_ZIP (0x7ull) /* [CN9K, CN10K) */ +#define SSO_WA_PSM (0x7ull) /* [CN10K, .) 
*/ +#define SSO_WA_NIXRX0 (0x1ull) +#define SSO_WA_NIXRX1 (0x8ull) /* [CN10K, .) */ +#define SSO_WA_CPT0 (0x2ull) +#define SSO_WA_CPT1 (0x9ull) /* [CN10K, .) */ +#define SSO_WA_NIXTX0 (0x5ull) +#define SSO_WA_NIXTX1 (0xbull) /* [CN10K, .) */ +#define SSO_WA_ML0 (0xaull) /* [CN10K, .) */ +#define SSO_WA_ML1 (0xcull) /* [CN10K, .) */ + +#define SSO_TT_ORDERED (0x0ull) +#define SSO_TT_ATOMIC (0x1ull) +#define SSO_TT_UNTAGGED (0x2ull) +#define SSO_TT_EMPTY (0x3ull) + +#endif /* __SSO_HW_H__ */ diff --git a/drivers/common/cnxk/hw/ssow.h b/drivers/common/cnxk/hw/ssow.h new file mode 100644 index 0000000..618ab79 --- /dev/null +++ b/drivers/common/cnxk/hw/ssow.h @@ -0,0 +1,70 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef __SSOW_HW_H__ +#define __SSOW_HW_H__ + +/* Register offsets */ + +#define SSOW_AF_RVU_LF_HWS_CFG_DEBUG (0x10ull) +#define SSOW_AF_LF_HWS_RST (0x30ull) +#define SSOW_PRIV_LFX_HWS_CFG(a) (0x1000ull | (uint64_t)(a) << 3) +#define SSOW_PRIV_LFX_HWS_INT_CFG(a) (0x2000ull | (uint64_t)(a) << 3) +#define SSOW_AF_SCRATCH_WS (0x100000ull) +#define SSOW_AF_SCRATCH_GW (0x200000ull) +#define SSOW_AF_SCRATCH_AW (0x300000ull) + +#define SSOW_LF_GWS_LINKS (0x10ull) +#define SSOW_LF_GWS_PENDWQP (0x40ull) /* [CN9K, CN10K) */ +#define SSOW_LF_GWS_PENDSTATE (0x50ull) +#define SSOW_LF_GWS_NW_TIM (0x70ull) +#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull) +#define SSOW_LF_GWS_INT (0x100ull) +#define SSOW_LF_GWS_INT_W1S (0x108ull) +#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull) +#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull) +#define SSOW_LF_GWS_TAG (0x200ull) +#define SSOW_LF_GWS_WQP (0x210ull) +#define SSOW_LF_GWS_SWTP (0x220ull) +#define SSOW_LF_GWS_PENDTAG (0x230ull) +#define SSOW_LF_GWS_WQE0 (0x240ull) /* [CN10K, .) */ +#define SSOW_LF_GWS_WQE1 (0x248ull) /* [CN10K, .) */ +#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull) /* [CN9K, CN10K) */ +#define SSOW_LF_GWS_PRF_TAG (0x400ull) /* [CN10K, .) */ +#define SSOW_LF_GWS_PRF_WQP (0x410ull) /* [CN10K, .) 
*/ +#define SSOW_LF_GWS_PRF_WQE0 (0x440ull) /* [CN10K, .) */ +#define SSOW_LF_GWS_PRF_WQE1 (0x448ull) /* [CN10K, .) */ +#define SSOW_LF_GWS_OP_GET_WORK0 (0x600ull) +#define SSOW_LF_GWS_OP_GET_WORK1 (0x608ull) /* [CN10K, .) */ +#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull) +#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull) +#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull) +#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull) +#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull) +#define SSOW_LF_GWS_OP_DESCHED (0x880ull) +#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull) /* [CN9K, CN10K) */ +#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull) +#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull) /* [CN9K, CN10K) */ +#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull) /* [CN9K, CN10K) */ +#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull) /* [CN9K, CN10K) */ +#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull) +#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull) +#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull) +#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull) +#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull) + +/* Enum offsets */ + +#define SSOW_LF_INT_VEC_IOP (0x0ull) + +#define SSOW_GW_RESULT_GW_WORK (0x0ull) /* [CN10K, .) */ +#define SSOW_GW_RESULT_GW_NO_WORK (0x1ull) /* [CN10K, .) */ +#define SSOW_GW_RESULT_GW_ERROR (0x2ull) /* [CN10K, .) */ + +#define SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT 63 +#define SSOW_LF_GWS_TAG_PEND_SWITCH_BIT 62 +#define SSOW_LF_GWS_TAG_PEND_DESCHED_BIT 58 +#define SSOW_LF_GWS_TAG_HEAD_BIT 35 + +#endif /* __SSOW_HW_H__ */ diff --git a/drivers/common/cnxk/hw/tim.h b/drivers/common/cnxk/hw/tim.h new file mode 100644 index 0000000..a0fe29d --- /dev/null +++ b/drivers/common/cnxk/hw/tim.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#ifndef __TIM_HW_H__ +#define __TIM_HW_H__ + +/* TIM */ +#define TIM_AF_CONST (0x90) +#define TIM_PRIV_LFX_CFG(a) (0x20000 | (a) << 3) +#define TIM_PRIV_LFX_INT_CFG(a) (0x24000 | (a) << 3) +#define TIM_AF_RVU_LF_CFG_DEBUG (0x30000) +#define TIM_AF_BLK_RST (0x10) +#define TIM_AF_LF_RST (0x20) +#define TIM_AF_RINGX_GMCTL(a) (0x2000 | (a) << 3) +#define TIM_AF_RINGX_CTL0(a) (0x4000 | (a) << 3) +#define TIM_AF_RINGX_CTL1(a) (0x6000 | (a) << 3) +#define TIM_AF_RINGX_CTL2(a) (0x8000 | (a) << 3) +#define TIM_AF_FLAGS_REG (0x80) +#define TIM_AF_FLAGS_REG_ENA_TIM BIT_ULL(0) +#define TIM_AF_RINGX_CTL1_ENA BIT_ULL(47) +#define TIM_AF_RINGX_CTL1_RCF_BUSY BIT_ULL(50) +#define TIM_AF_RINGX_CLT1_CLK_10NS (0) +#define TIM_AF_RINGX_CLT1_CLK_GPIO (1) +#define TIM_AF_RINGX_CLT1_CLK_GTI (2) +#define TIM_AF_RINGX_CLT1_CLK_PTP (3) + +/* ENUMS */ + +#define TIM_LF_INT_VEC_NRSPERR_INT (0x0ull) +#define TIM_LF_INT_VEC_RAS_INT (0x1ull) +#define TIM_LF_RING_AURA (0x0) +#define TIM_LF_RING_BASE (0x130) +#define TIM_LF_NRSPERR_INT (0x200) +#define TIM_LF_NRSPERR_INT_W1S (0x208) +#define TIM_LF_NRSPERR_INT_ENA_W1S (0x210) +#define TIM_LF_NRSPERR_INT_ENA_W1C (0x218) +#define TIM_LF_RAS_INT (0x300) +#define TIM_LF_RAS_INT_W1S (0x308) +#define TIM_LF_RAS_INT_ENA_W1S (0x310) +#define TIM_LF_RAS_INT_ENA_W1C (0x318) +#define TIM_LF_RING_REL (0x400) + +#define TIM_MAX_INTERVAL_TICKS ((1ULL << 32) - 1) +#define TIM_MAX_BUCKET_SIZE ((1ULL << 20) - 1) +#define TIM_MIN_BUCKET_SIZE 3 + +#endif /* __TIM_HW_H__ */ diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build new file mode 100644 index 0000000..d1160e6 --- /dev/null +++ b/drivers/common/cnxk/meson.build @@ -0,0 +1,14 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2021 Marvell, Inc +# + +if not is_linux or not dpdk_conf.get('RTE_ARCH_64') + build = false + reason = 'only supported on 64-bit Linux' + subdir_done() +endif + +config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON' +deps =
['eal', 'pci', 'bus_pci', 'mbuf'] +sources = files('roc_platform.c') +includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h new file mode 100644 index 0000000..ec4ea24 --- /dev/null +++ b/drivers/common/cnxk/roc_api.h @@ -0,0 +1,69 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_API_H_ +#define _ROC_API_H_ + +#include <stdint.h> +#include <stdio.h> +#include <string.h> + +/* Alignment */ +#define ROC_ALIGN 128 + +/* Bits manipulation */ +#include "roc_bits.h" + +/* Bitfields manipulation */ +#include "roc_bitfield.h" + +/* Constants */ +#define PLT_ETHER_ADDR_LEN 6 + +/* Platform definition */ +#include "roc_platform.h" + +#define ROC_LMT_LINES_PER_CORE_LOG2 5 +#define ROC_LMT_LINE_SIZE_LOG2 7 +#define ROC_LMT_BASE_PER_CORE_LOG2 \ + (ROC_LMT_LINES_PER_CORE_LOG2 + ROC_LMT_LINE_SIZE_LOG2) + +/* PCI IDs */ +#define PCI_VENDOR_ID_CAVIUM 0x177D +#define PCI_DEVID_CNXK_RVU_PF 0xA063 +#define PCI_DEVID_CNXK_RVU_VF 0xA064 +#define PCI_DEVID_CNXK_RVU_AF 0xA065 +#define PCI_DEVID_CNXK_RVU_SSO_TIM_PF 0xA0F9 +#define PCI_DEVID_CNXK_RVU_SSO_TIM_VF 0xA0FA +#define PCI_DEVID_CNXK_RVU_NPA_PF 0xA0FB +#define PCI_DEVID_CNXK_RVU_NPA_VF 0xA0FC +#define PCI_DEVID_CNXK_RVU_AF_VF 0xA0f8 +#define PCI_DEVID_CNXK_DPI_VF 0xA081 +#define PCI_DEVID_CNXK_EP_VF 0xB203 +#define PCI_DEVID_CNXK_RVU_SDP_PF 0xA0f6 +#define PCI_DEVID_CNXK_RVU_SDP_VF 0xA0f7 + +#define PCI_DEVID_CN9K_CGX 0xA059 +#define PCI_DEVID_CN10K_RPM 0xA060 + +#define PCI_SUBSYSTEM_DEVID_CN10KA 0xB900 +#define PCI_SUBSYSTEM_DEVID_CN10KAS 0xB900 + +#define PCI_SUBSYSTEM_DEVID_CN9KA 0x0000 +#define PCI_SUBSYSTEM_DEVID_CN9KB 0xb400 +#define PCI_SUBSYSTEM_DEVID_CN9KC 0x0200 +#define PCI_SUBSYSTEM_DEVID_CN9KD 0xB200 +#define PCI_SUBSYSTEM_DEVID_CN9KE 0xB100 + +/* HW structure definition */ +#include "hw/nix.h" +#include "hw/npa.h" +#include "hw/npc.h" +#include "hw/rvu.h" +#include "hw/sdp.h" +#include "hw/sso.h" +#include "hw/ssow.h" +#include
"hw/tim.h" + +#endif /* _ROC_API_H_ */ diff --git a/drivers/common/cnxk/roc_bitfield.h b/drivers/common/cnxk/roc_bitfield.h new file mode 100644 index 0000000..d64cce2 --- /dev/null +++ b/drivers/common/cnxk/roc_bitfield.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_BITFIELD_H_ +#define _ROC_BITFIELD_H_ + +#define __bf_shf(x) (__builtin_ffsll(x) - 1) + +#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) + +#define FIELD_GET(mask, reg) \ + ((typeof(mask))(((reg) & (mask)) >> __bf_shf(mask))) + +#endif /* _ROC_BITFIELD_H_ */ diff --git a/drivers/common/cnxk/roc_bits.h b/drivers/common/cnxk/roc_bits.h new file mode 100644 index 0000000..11216d9 --- /dev/null +++ b/drivers/common/cnxk/roc_bits.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_BITS_H_ +#define _ROC_BITS_H_ + +#ifndef BIT_ULL +#define BIT_ULL(nr) (1ULL << (nr)) +#endif + +#ifndef BIT +#define BIT(nr) (1UL << (nr)) +#endif + +#ifndef BITS_PER_LONG +#define BITS_PER_LONG (__SIZEOF_LONG__ * 8) +#endif +#ifndef BITS_PER_LONG_LONG +#define BITS_PER_LONG_LONG (__SIZEOF_LONG_LONG__ * 8) +#endif + +#ifndef GENMASK +#define GENMASK(h, l) (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h)))) +#endif +#ifndef GENMASK_ULL +#define GENMASK_ULL(h, l) \ + (((~0ULL) - (1ULL << (l)) + 1) & \ + (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h)))) +#endif + +#endif /* _ROC_BITS_H_ */ diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c new file mode 100644 index 0000000..6ddbc3b --- /dev/null +++ b/drivers/common/cnxk/roc_platform.c @@ -0,0 +1,5 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h new file mode 100644 index 0000000..6d498b2 --- /dev/null +++ b/drivers/common/cnxk/roc_platform.h @@ -0,0 +1,154 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_PLATFORM_H_ +#define _ROC_PLATFORM_H_ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "roc_bits.h" + +#if defined(__ARM_FEATURE_SVE) +#define PLT_CPU_FEATURE_PREAMBLE ".cpu generic+crc+lse+sve\n" +#else +#define PLT_CPU_FEATURE_PREAMBLE ".cpu generic+crc+lse\n" +#endif + +#define PLT_ASSERT RTE_ASSERT +#define PLT_MEMZONE_NAMESIZE RTE_MEMZONE_NAMESIZE +#define PLT_STD_C11 RTE_STD_C11 +#define PLT_PTR_ADD RTE_PTR_ADD +#define PLT_MAX_RXTX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID +#define PLT_INTR_VEC_RXTX_OFFSET RTE_INTR_VEC_RXTX_OFFSET +#define PLT_MIN RTE_MIN +#define PLT_MAX RTE_MAX +#define PLT_DIM RTE_DIM +#define PLT_SET_USED RTE_SET_USED +#define PLT_STATIC_ASSERT(s) _Static_assert(s, #s) +#define PLT_ALIGN RTE_ALIGN +#define PLT_ALIGN_MUL_CEIL RTE_ALIGN_MUL_CEIL +#define PLT_MODEL_MZ_NAME "roc_model_mz" +#define PLT_CACHE_LINE_SIZE RTE_CACHE_LINE_SIZE +#define BITMASK_ULL GENMASK_ULL + +/** Divide ceil */ +#define PLT_DIV_CEIL(x, y) \ + ({ \ + __typeof(x) __x = x; \ + __typeof(y) __y = y; \ + (__x + __y - 1) / __y; \ + }) + +#define __plt_cache_aligned __rte_cache_aligned +#define __plt_always_inline __rte_always_inline +#define __plt_packed __rte_packed +#define __roc_api __rte_internal +#define plt_iova_t rte_iova_t + +#define plt_pci_device rte_pci_device +#define plt_pci_read_config rte_pci_read_config +#define plt_pci_find_ext_capability rte_pci_find_ext_capability + +#define plt_log2_u32 rte_log2_u32 +#define plt_cpu_to_be_16 rte_cpu_to_be_16 +#define plt_be_to_cpu_16 rte_be_to_cpu_16 +#define plt_cpu_to_be_32 rte_cpu_to_be_32 
+#define plt_be_to_cpu_32 rte_be_to_cpu_32 +#define plt_cpu_to_be_64 rte_cpu_to_be_64 +#define plt_be_to_cpu_64 rte_be_to_cpu_64 + +#define plt_align32prevpow2 rte_align32prevpow2 + +#define plt_bitmap rte_bitmap +#define plt_bitmap_init rte_bitmap_init +#define plt_bitmap_reset rte_bitmap_reset +#define plt_bitmap_free rte_bitmap_free +#define plt_bitmap_clear rte_bitmap_clear +#define plt_bitmap_set rte_bitmap_set +#define plt_bitmap_get rte_bitmap_get +#define plt_bitmap_scan_init __rte_bitmap_scan_init +#define plt_bitmap_scan rte_bitmap_scan +#define plt_bitmap_get_memory_footprint rte_bitmap_get_memory_footprint + +#define plt_spinlock_t rte_spinlock_t +#define plt_spinlock_init rte_spinlock_init +#define plt_spinlock_lock rte_spinlock_lock +#define plt_spinlock_unlock rte_spinlock_unlock + +#define plt_intr_callback_register rte_intr_callback_register +#define plt_intr_callback_unregister rte_intr_callback_unregister +#define plt_intr_disable rte_intr_disable +#define plt_thread_is_intr rte_thread_is_intr +#define plt_intr_callback_fn rte_intr_callback_fn + +#define plt_alarm_set rte_eal_alarm_set +#define plt_alarm_cancel rte_eal_alarm_cancel + +#define plt_intr_handle rte_intr_handle + +#define plt_zmalloc(sz, align) rte_zmalloc("cnxk", sz, align) +#define plt_free rte_free + +#define plt_read64(addr) rte_read64_relaxed((volatile void *)(addr)) +#define plt_write64(val, addr) \ + rte_write64_relaxed((val), (volatile void *)(addr)) + +#define plt_wmb() rte_wmb() +#define plt_rmb() rte_rmb() +#define plt_io_wmb() rte_io_wmb() +#define plt_io_rmb() rte_io_rmb() + +#define plt_mmap mmap +#define PLT_PROT_READ PROT_READ +#define PLT_PROT_WRITE PROT_WRITE +#define PLT_MAP_SHARED MAP_SHARED + +#define plt_memzone rte_memzone +#define plt_memzone_lookup rte_memzone_lookup +#define plt_memzone_reserve_cache_align(name, sz) \ + rte_memzone_reserve_aligned(name, sz, 0, 0, RTE_CACHE_LINE_SIZE) +#define plt_memzone_free rte_memzone_free + +#define plt_tsc_hz 
rte_get_tsc_hz +#define plt_delay_ms rte_delay_ms +#define plt_delay_us rte_delay_us + +#define plt_lcore_id rte_lcore_id + +#define plt_strlcpy rte_strlcpy + +#ifdef __cplusplus +#define CNXK_PCI_ID(subsystem_dev, dev) \ + { \ + RTE_CLASS_ANY_ID, \ + PCI_VENDOR_ID_CAVIUM, \ + (dev), \ + PCI_ANY_ID, \ + (subsystem_dev), \ + } +#else +#define CNXK_PCI_ID(subsystem_dev, dev) \ + { \ + .class_id = RTE_CLASS_ANY_ID, \ + .vendor_id = PCI_VENDOR_ID_CAVIUM, \ + .device_id = (dev), \ + .subsystem_vendor_id = PCI_ANY_ID, \ + .subsystem_device_id = (subsystem_dev), \ + } +#endif + +#endif /* _ROC_PLATFORM_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map new file mode 100644 index 0000000..dc012a1 --- /dev/null +++ b/drivers/common/cnxk/version.map @@ -0,0 +1,4 @@ +INTERNAL { + + local: *; +}; diff --git a/drivers/meson.build b/drivers/meson.build index 9c8eded..6572cfb 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -5,6 +5,7 @@ subdirs = [ 'common', 'bus', + 'common/cnxk', #depends on bus. 'common/mlx5', # depends on bus. 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 
From patchwork Thu Apr 1 12:37:28 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90376
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:28 +0530
Message-ID: <20210401123817.14348-4-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 03/52] common/cnxk: add model init and IO handling API

From: Jerin Jacob

Add routines for SoC model identification and HW IO handling specific to the CN9K and CN10K Marvell SoCs. These are based on the arm64 ISA and on behaviour specific to Marvell SoCs.
Signed-off-by: Jerin Jacob --- drivers/common/cnxk/meson.build | 4 +- drivers/common/cnxk/roc_api.h | 13 +++ drivers/common/cnxk/roc_io.h | 187 +++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_io_generic.h | 122 +++++++++++++++++++++++ drivers/common/cnxk/roc_model.c | 150 ++++++++++++++++++++++++++++ drivers/common/cnxk/roc_model.h | 104 +++++++++++++++++++ drivers/common/cnxk/roc_platform.c | 21 ++++ drivers/common/cnxk/roc_platform.h | 10 ++ drivers/common/cnxk/roc_priv.h | 11 +++ drivers/common/cnxk/roc_util_priv.h | 14 +++ drivers/common/cnxk/roc_utils.c | 35 +++++++ drivers/common/cnxk/roc_utils.h | 13 +++ drivers/common/cnxk/version.map | 5 + 13 files changed, 688 insertions(+), 1 deletion(-) create mode 100644 drivers/common/cnxk/roc_io.h create mode 100644 drivers/common/cnxk/roc_io_generic.h create mode 100644 drivers/common/cnxk/roc_model.c create mode 100644 drivers/common/cnxk/roc_model.h create mode 100644 drivers/common/cnxk/roc_priv.h create mode 100644 drivers/common/cnxk/roc_util_priv.h create mode 100644 drivers/common/cnxk/roc_utils.c create mode 100644 drivers/common/cnxk/roc_utils.h diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index d1160e6..b0c02ce 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -10,5 +10,7 @@ endif config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON' deps = ['eal', 'pci', 'bus_pci', 'mbuf'] -sources = files('roc_platform.c') +sources = files('roc_model.c', + 'roc_platform.c', + 'roc_utils.c') includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h index ec4ea24..70d9c4a 100644 --- a/drivers/common/cnxk/roc_api.h +++ b/drivers/common/cnxk/roc_api.h @@ -29,6 +29,13 @@ #define ROC_LMT_BASE_PER_CORE_LOG2 \ (ROC_LMT_LINES_PER_CORE_LOG2 + ROC_LMT_LINE_SIZE_LOG2) +/* IO */ +#if defined(__aarch64__) +#include "roc_io.h" +#else +#include "roc_io_generic.h" +#endif + /* PCI IDs */ #define 
PCI_VENDOR_ID_CAVIUM 0x177D #define PCI_DEVID_CNXK_RVU_PF 0xA063 @@ -66,4 +73,10 @@ #include "hw/ssow.h" #include "hw/tim.h" +/* Model */ +#include "roc_model.h" + +/* Utils */ +#include "roc_utils.h" + #endif /* _ROC_API_H_ */ diff --git a/drivers/common/cnxk/roc_io.h b/drivers/common/cnxk/roc_io.h new file mode 100644 index 0000000..fb3d9c5 --- /dev/null +++ b/drivers/common/cnxk/roc_io.h @@ -0,0 +1,187 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_IO_H_ +#define _ROC_IO_H_ + +#define ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id) \ + do { \ + /* 32 Lines per core */ \ + lmt_id = plt_lcore_id() << ROC_LMT_LINES_PER_CORE_LOG2; \ + /* Each line is of 128B */ \ + (lmt_addr) += ((uint64_t)lmt_id << ROC_LMT_LINE_SIZE_LOG2); \ + } while (0) + +#define roc_load_pair(val0, val1, addr) \ + ({ \ + asm volatile("ldp %x[x0], %x[x1], [%x[p1]]" \ + : [x0] "=r"(val0), [x1] "=r"(val1) \ + : [p1] "r"(addr)); \ + }) + +#define roc_store_pair(val0, val1, addr) \ + ({ \ + asm volatile( \ + "stp %x[x0], %x[x1], [%x[p1], #0]!" 
::[x0] "r"(val0), \ + [x1] "r"(val1), [p1] "r"(addr)); \ + }) + +#define roc_prefetch_store_keep(ptr) \ + ({ asm volatile("prfm pstl1keep, [%x0]\n" : : "r"(ptr)); }) + +#if defined(__clang__) +static __plt_always_inline void +roc_atomic128_cas_noreturn(uint64_t swap0, uint64_t swap1, int64_t *ptr) +{ + register uint64_t x0 __asm("x0") = swap0; + register uint64_t x1 __asm("x1") = swap1; + + asm volatile(PLT_CPU_FEATURE_PREAMBLE + "casp %[x0], %[x1], %[x0], %[x1], [%[ptr]]\n" + : [x0] "+r"(x0), [x1] "+r"(x1) + : [ptr] "r"(ptr) + : "memory"); +} +#else +static __plt_always_inline void +roc_atomic128_cas_noreturn(uint64_t swap0, uint64_t swap1, uint64_t ptr) +{ + __uint128_t wdata = swap0 | ((__uint128_t)swap1 << 64); + + asm volatile(PLT_CPU_FEATURE_PREAMBLE + "casp %[wdata], %H[wdata], %[wdata], %H[wdata], [%[ptr]]\n" + : [wdata] "+r"(wdata) + : [ptr] "r"(ptr) + : "memory"); +} +#endif + +static __plt_always_inline uint64_t +roc_atomic64_cas(uint64_t compare, uint64_t swap, int64_t *ptr) +{ + asm volatile(PLT_CPU_FEATURE_PREAMBLE + "cas %[compare], %[swap], [%[ptr]]\n" + : [compare] "+r"(compare) + : [swap] "r"(swap), [ptr] "r"(ptr) + : "memory"); + + return compare; +} + +static __plt_always_inline uint64_t +roc_atomic64_add_nosync(int64_t incr, int64_t *ptr) +{ + uint64_t result; + + /* Atomic add with no ordering */ + asm volatile(PLT_CPU_FEATURE_PREAMBLE "ldadd %x[i], %x[r], [%[b]]" + : [r] "=r"(result), "+m"(*ptr) + : [i] "r"(incr), [b] "r"(ptr) + : "memory"); + return result; +} + +static __plt_always_inline uint64_t +roc_atomic64_add_sync(int64_t incr, int64_t *ptr) +{ + uint64_t result; + + /* Atomic add with ordering */ + asm volatile(PLT_CPU_FEATURE_PREAMBLE "ldadda %x[i], %x[r], [%[b]]" + : [r] "=r"(result), "+m"(*ptr) + : [i] "r"(incr), [b] "r"(ptr) + : "memory"); + return result; +} + +static __plt_always_inline uint64_t +roc_lmt_submit_ldeor(plt_iova_t io_address) +{ + uint64_t result; + + asm volatile(PLT_CPU_FEATURE_PREAMBLE "ldeor xzr, %x[rf], 
[%[rs]]" + : [rf] "=r"(result) + : [rs] "r"(io_address)); + return result; +} + +static __plt_always_inline uint64_t +roc_lmt_submit_ldeorl(plt_iova_t io_address) +{ + uint64_t result; + + asm volatile(PLT_CPU_FEATURE_PREAMBLE "ldeorl xzr,%x[rf],[%[rs]]" + : [rf] "=r"(result) + : [rs] "r"(io_address)); + return result; +} + +static __plt_always_inline void +roc_lmt_submit_steor(uint64_t data, plt_iova_t io_address) +{ + asm volatile(PLT_CPU_FEATURE_PREAMBLE + "steor %x[d], [%[rs]]" ::[d] "r"(data), + [rs] "r"(io_address)); +} + +static __plt_always_inline void +roc_lmt_submit_steorl(uint64_t data, plt_iova_t io_address) +{ + asm volatile(PLT_CPU_FEATURE_PREAMBLE + "steorl %x[d], [%[rs]]" ::[d] "r"(data), + [rs] "r"(io_address)); +} + +static __plt_always_inline void +roc_lmt_mov(void *out, const void *in, const uint32_t lmtext) +{ + volatile const __uint128_t *src128 = (const __uint128_t *)in; + volatile __uint128_t *dst128 = (__uint128_t *)out; + + dst128[0] = src128[0]; + dst128[1] = src128[1]; + /* lmtext receives following value: + * 1: NIX_SUBDC_EXT needed i.e. tx vlan case + * 2: NIX_SUBDC_EXT + NIX_SUBDC_MEM i.e. 
tstamp case + */ + if (lmtext) { + dst128[2] = src128[2]; + if (lmtext > 1) + dst128[3] = src128[3]; + } +} + +static __plt_always_inline void +roc_lmt_mov_seg(void *out, const void *in, const uint16_t segdw) +{ + volatile const __uint128_t *src128 = (const __uint128_t *)in; + volatile __uint128_t *dst128 = (__uint128_t *)out; + uint8_t i; + + for (i = 0; i < segdw; i++) + dst128[i] = src128[i]; +} + +static __plt_always_inline void +roc_lmt_mov_one(void *out, const void *in) +{ + volatile const __uint128_t *src128 = (const __uint128_t *)in; + volatile __uint128_t *dst128 = (__uint128_t *)out; + + *dst128 = *src128; +} + +/* Non volatile version of roc_lmt_mov_seg() */ +static __plt_always_inline void +roc_lmt_mov_seg_nv(void *out, const void *in, const uint16_t segdw) +{ + const __uint128_t *src128 = (const __uint128_t *)in; + __uint128_t *dst128 = (__uint128_t *)out; + uint8_t i; + + for (i = 0; i < segdw; i++) + dst128[i] = src128[i]; +} + +#endif /* _ROC_IO_H_ */ diff --git a/drivers/common/cnxk/roc_io_generic.h b/drivers/common/cnxk/roc_io_generic.h new file mode 100644 index 0000000..c1689b6 --- /dev/null +++ b/drivers/common/cnxk/roc_io_generic.h @@ -0,0 +1,122 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#ifndef _ROC_IO_GENERIC_H_ +#define _ROC_IO_GENERIC_H_ + +#define ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id) (lmt_id = 0) + +#define roc_load_pair(val0, val1, addr) \ + do { \ + val0 = plt_read64((void *)(addr)); \ + val1 = plt_read64((uint8_t *)(addr) + 8); \ + } while (0) + +#define roc_store_pair(val0, val1, addr) \ + do { \ + plt_write64(val0, (void *)(addr)); \ + plt_write64(val1, (((uint8_t *)(addr)) + 8)); \ + } while (0) + +#define roc_prefetch_store_keep(ptr) \ + do { \ + } while (0) + +static __plt_always_inline void +roc_atomic128_cas_noreturn(uint64_t swap0, uint64_t swap1, uint64_t ptr) +{ + PLT_SET_USED(swap0); + PLT_SET_USED(swap1); + PLT_SET_USED(ptr); +} + +static __plt_always_inline uint64_t +roc_atomic64_cas(uint64_t compare, uint64_t swap, int64_t *ptr) +{ + PLT_SET_USED(swap); + PLT_SET_USED(ptr); + + return compare; +} + +static inline uint64_t +roc_atomic64_add_nosync(int64_t incr, int64_t *ptr) +{ + PLT_SET_USED(ptr); + PLT_SET_USED(incr); + + return 0; +} + +static inline uint64_t +roc_atomic64_add_sync(int64_t incr, int64_t *ptr) +{ + PLT_SET_USED(ptr); + PLT_SET_USED(incr); + + return 0; +} + +static inline uint64_t +roc_lmt_submit_ldeor(plt_iova_t io_address) +{ + PLT_SET_USED(io_address); + + return 0; +} + +static __plt_always_inline uint64_t +roc_lmt_submit_ldeorl(plt_iova_t io_address) +{ + PLT_SET_USED(io_address); + + return 0; +} + +static inline void +roc_lmt_submit_steor(uint64_t data, plt_iova_t io_address) +{ + PLT_SET_USED(data); + PLT_SET_USED(io_address); +} + +static inline void +roc_lmt_submit_steorl(uint64_t data, plt_iova_t io_address) +{ + PLT_SET_USED(data); + PLT_SET_USED(io_address); +} + +static __plt_always_inline void +roc_lmt_mov(void *out, const void *in, const uint32_t lmtext) +{ + PLT_SET_USED(in); + PLT_SET_USED(lmtext); + memset(out, 0, sizeof(__uint128_t) * (lmtext ? lmtext > 1 ? 
4 : 3 : 2)); +} + +static __plt_always_inline void +roc_lmt_mov_seg(void *out, const void *in, const uint16_t segdw) +{ + PLT_SET_USED(out); + PLT_SET_USED(in); + PLT_SET_USED(segdw); +} + +static __plt_always_inline void +roc_lmt_mov_one(void *out, const void *in) +{ + PLT_SET_USED(out); + PLT_SET_USED(in); +} + +static __plt_always_inline void +roc_lmt_mov_seg_nv(void *out, const void *in, const uint16_t segdw) +{ + PLT_SET_USED(out); + PLT_SET_USED(in); + PLT_SET_USED(segdw); +} + +#endif /* _ROC_IO_GENERIC_H_ */ diff --git a/drivers/common/cnxk/roc_model.c b/drivers/common/cnxk/roc_model.c new file mode 100644 index 0000000..b76e863 --- /dev/null +++ b/drivers/common/cnxk/roc_model.c @@ -0,0 +1,150 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +struct roc_model *roc_model; + +/* RoC and CPU IDs and revisions */ +#define VENDOR_ARM 0x41 /* 'A' */ +#define VENDOR_CAVIUM 0x43 /* 'C' */ + +#define PART_106XX 0xD49 +#define PART_98XX 0xB1 +#define PART_96XX 0xB2 +#define PART_95XX 0xB3 +#define PART_95XXN 0xB4 +#define PART_95XXMM 0xB5 +#define PART_95O 0xB6 + +#define MODEL_IMPL_BITS 8 +#define MODEL_IMPL_SHIFT 24 +#define MODEL_IMPL_MASK ((1 << MODEL_IMPL_BITS) - 1) +#define MODEL_PART_BITS 12 +#define MODEL_PART_SHIFT 4 +#define MODEL_PART_MASK ((1 << MODEL_PART_BITS) - 1) +#define MODEL_MAJOR_BITS 4 +#define MODEL_MAJOR_SHIFT 20 +#define MODEL_MAJOR_MASK ((1 << MODEL_MAJOR_BITS) - 1) +#define MODEL_MINOR_BITS 4 +#define MODEL_MINOR_SHIFT 0 +#define MODEL_MINOR_MASK ((1 << MODEL_MINOR_BITS) - 1) + +static const struct model_db { + uint32_t impl; + uint32_t part; + uint32_t major; + uint32_t minor; + uint64_t flag; + char name[ROC_MODEL_STR_LEN_MAX]; +} model_db[] = { + {VENDOR_ARM, PART_106XX, 0, 0, ROC_MODEL_CN10K, "cn10k"}, + {VENDOR_CAVIUM, PART_98XX, 0, 0, ROC_MODEL_CN98xx_A0, "cn98xx_a0"}, + {VENDOR_CAVIUM, PART_96XX, 0, 0, ROC_MODEL_CN96xx_A0, "cn96xx_a0"}, + {VENDOR_CAVIUM, 
PART_96XX, 0, 1, ROC_MODEL_CN96xx_B0, "cn96xx_b0"}, + {VENDOR_CAVIUM, PART_96XX, 2, 0, ROC_MODEL_CN96xx_C0, "cn96xx_c0"}, + {VENDOR_CAVIUM, PART_95XX, 0, 0, ROC_MODEL_CNF95xx_A0, "cnf95xx_a0"}, + {VENDOR_CAVIUM, PART_95XX, 1, 0, ROC_MODEL_CNF95xx_B0, "cnf95xx_b0"}, + {VENDOR_CAVIUM, PART_95XXN, 0, 0, ROC_MODEL_CNF95XXN_A0, "cnf95xxn_a0"}, + {VENDOR_CAVIUM, PART_95O, 0, 0, ROC_MODEL_CNF95XXO_A0, "cnf95O_a0"}, + {VENDOR_CAVIUM, PART_95XXMM, 0, 0, ROC_MODEL_CNF95XXMM_A0, + "cnf95xxmm_a0"} +}; + +static bool +populate_model(struct roc_model *model, uint32_t midr) +{ + uint32_t impl, major, part, minor; + bool found = false; + size_t i; + + impl = (midr >> MODEL_IMPL_SHIFT) & MODEL_IMPL_MASK; + part = (midr >> MODEL_PART_SHIFT) & MODEL_PART_MASK; + major = (midr >> MODEL_MAJOR_SHIFT) & MODEL_MAJOR_MASK; + minor = (midr >> MODEL_MINOR_SHIFT) & MODEL_MINOR_MASK; + + for (i = 0; i < PLT_DIM(model_db); i++) + if (model_db[i].impl == impl && model_db[i].part == part && + model_db[i].major == major && model_db[i].minor == minor) { + model->flag = model_db[i].flag; + strncpy(model->name, model_db[i].name, + ROC_MODEL_STR_LEN_MAX - 1); + found = true; + break; + } + if (!found) { + model->flag = 0; + strncpy(model->name, "unknown", ROC_MODEL_STR_LEN_MAX - 1); + plt_err("Invalid RoC model (impl=0x%x, part=0x%x)", impl, part); + } + + return found; +} + +static int +midr_get(unsigned long *val) +{ + const char *file = + "/sys/devices/system/cpu/cpu0/regs/identification/midr_el1"; + int rc = UTIL_ERR_FS; + char buf[BUFSIZ]; + char *end = NULL; + FILE *f; + + if (val == NULL) + goto err; + f = fopen(file, "r"); + if (f == NULL) + goto err; + + if (fgets(buf, sizeof(buf), f) == NULL) + goto fclose; + + *val = strtoul(buf, &end, 0); + if ((buf[0] == '\0') || (end == NULL) || (*end != '\n')) + goto fclose; + + rc = 0; +fclose: + fclose(f); +err: + return rc; +} + +static void +detect_invalid_config(void) +{ +#ifdef ROC_PLATFORM_CN9K +#ifdef ROC_PLATFORM_CN10K + PLT_STATIC_ASSERT(0); 
+#endif +#endif +} + +int +roc_model_init(struct roc_model *model) +{ + int rc = UTIL_ERR_PARAM; + unsigned long midr; + + detect_invalid_config(); + + if (!model) + goto err; + + rc = midr_get(&midr); + if (rc) + goto err; + + rc = UTIL_ERR_INVALID_MODEL; + if (!populate_model(model, midr)) + goto err; + + rc = 0; + plt_info("RoC Model: %s", model->name); + roc_model = model; +err: + return rc; +} diff --git a/drivers/common/cnxk/roc_model.h b/drivers/common/cnxk/roc_model.h new file mode 100644 index 0000000..63121f6 --- /dev/null +++ b/drivers/common/cnxk/roc_model.h @@ -0,0 +1,104 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_MODEL_H_ +#define _ROC_MODEL_H_ + +#include + +extern struct roc_model *roc_model; + +struct roc_model { +#define ROC_MODEL_CN96xx_A0 BIT_ULL(0) +#define ROC_MODEL_CN96xx_B0 BIT_ULL(1) +#define ROC_MODEL_CN96xx_C0 BIT_ULL(2) +#define ROC_MODEL_CNF95xx_A0 BIT_ULL(4) +#define ROC_MODEL_CNF95xx_B0 BIT_ULL(6) +#define ROC_MODEL_CNF95XXMM_A0 BIT_ULL(8) +#define ROC_MODEL_CNF95XXN_A0 BIT_ULL(12) +#define ROC_MODEL_CNF95XXO_A0 BIT_ULL(13) +#define ROC_MODEL_CN98xx_A0 BIT_ULL(16) +#define ROC_MODEL_CN10K BIT_ULL(20) + uint64_t flag; +#define ROC_MODEL_STR_LEN_MAX 128 + char name[ROC_MODEL_STR_LEN_MAX]; +} __plt_cache_aligned; + +#define ROC_MODEL_CN96xx_Ax (ROC_MODEL_CN96xx_A0 | ROC_MODEL_CN96xx_B0) +#define ROC_MODEL_CN9K \ + (ROC_MODEL_CN96xx_Ax | ROC_MODEL_CN96xx_C0 | ROC_MODEL_CNF95xx_A0 | \ + ROC_MODEL_CNF95xx_B0 | ROC_MODEL_CNF95XXMM_A0 | \ + ROC_MODEL_CNF95XXO_A0 | ROC_MODEL_CNF95XXN_A0 | ROC_MODEL_CN98xx_A0) + +/* Runtime variants */ +static inline uint64_t +roc_model_runtime_is_cn9k(void) +{ + return (roc_model->flag & (ROC_MODEL_CN9K)); +} + +static inline uint64_t +roc_model_runtime_is_cn10k(void) +{ + return (roc_model->flag & (ROC_MODEL_CN10K)); +} + +/* Compile time variants */ +#ifdef ROC_PLATFORM_CN9K +#define roc_model_constant_is_cn9k() 1 +#define roc_model_constant_is_cn10k() 0 
+#else +#define roc_model_constant_is_cn9k() 0 +#define roc_model_constant_is_cn10k() 1 +#endif + +/* + * Compile time variants to enable optimized version check when the library + * configured for specific platform version else to fallback to runtime. + */ +static inline uint64_t +roc_model_is_cn9k(void) +{ +#ifdef ROC_PLATFORM_CN9K + return 1; +#endif +#ifdef ROC_PLATFORM_CN10K + return 0; +#endif + return roc_model_runtime_is_cn9k(); +} + +static inline uint64_t +roc_model_is_cn10k(void) +{ +#ifdef ROC_PLATFORM_CN10K + return 1; +#endif +#ifdef ROC_PLATFORM_CN9K + return 0; +#endif + return roc_model_runtime_is_cn10k(); +} + +static inline uint64_t +roc_model_is_cn96_A0(void) +{ + return roc_model->flag & ROC_MODEL_CN96xx_A0; +} + +static inline uint64_t +roc_model_is_cn96_Ax(void) +{ + return (roc_model->flag & ROC_MODEL_CN96xx_Ax); +} + +static inline uint64_t +roc_model_is_cn95_A0(void) +{ + return roc_model->flag & ROC_MODEL_CNF95xx_A0; +} + +int roc_model_init(struct roc_model *model); + +#endif diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index 6ddbc3b..9f0c53e 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -3,3 +3,24 @@ */ #include "roc_api.h" + +int +roc_plt_init(void) +{ + const struct rte_memzone *mz; + + mz = rte_memzone_lookup(PLT_MODEL_MZ_NAME); + if (mz == NULL) + mz = rte_memzone_reserve(PLT_MODEL_MZ_NAME, + sizeof(struct roc_model), + SOCKET_ID_ANY, 0); + else + return 0; + + if (mz == NULL) { + plt_err("Failed to allocate memory for roc_model"); + return -ENOMEM; + } + roc_model_init(mz->addr); + return 0; +} diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index 6d498b2..1d06435 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -131,6 +131,13 @@ #define plt_strlcpy rte_strlcpy +/* Log */ +#define plt_err(fmt, args...) 
\ + RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args) +#define plt_info(fmt, args...) RTE_LOG(INFO, PMD, fmt "\n", ##args) +#define plt_warn(fmt, args...) RTE_LOG(WARNING, PMD, fmt "\n", ##args) +#define plt_print(fmt, args...) RTE_LOG(INFO, PMD, fmt "\n", ##args) + #ifdef __cplusplus #define CNXK_PCI_ID(subsystem_dev, dev) \ { \ @@ -151,4 +158,7 @@ } #endif +__rte_internal +int roc_plt_init(void); + #endif /* _ROC_PLATFORM_H_ */ diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h new file mode 100644 index 0000000..9c905d4 --- /dev/null +++ b/drivers/common/cnxk/roc_priv.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_PRIV_H_ +#define _ROC_PRIV_H_ + +/* Utils */ +#include "roc_util_priv.h" + +#endif /* _ROC_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_util_priv.h b/drivers/common/cnxk/roc_util_priv.h new file mode 100644 index 0000000..abad5c4 --- /dev/null +++ b/drivers/common/cnxk/roc_util_priv.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_UTIL_PRIV_H_ +#define _ROC_UTIL_PRIV_H_ + +enum util_err_status { + UTIL_ERR_PARAM = -6000, + UTIL_ERR_FS, + UTIL_ERR_INVALID_MODEL, +}; + +#endif /* _ROC_UTIL_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c new file mode 100644 index 0000000..c48f027 --- /dev/null +++ b/drivers/common/cnxk/roc_utils.c @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +const char * +roc_error_msg_get(int errorcode) +{ + const char *err_msg; + + switch (errorcode) { + case UTIL_ERR_PARAM: + err_msg = "Invalid parameter"; + break; + case UTIL_ERR_FS: + err_msg = "file operation failed"; + break; + case UTIL_ERR_INVALID_MODEL: + err_msg = "Invalid RoC model"; + break; + default: + /** + * Handle general error (as defined in linux errno.h) + */ + if (abs(errorcode) < 300) + err_msg = strerror(abs(errorcode)); + else + err_msg = "Unknown error code"; + break; + } + + return err_msg; +} diff --git a/drivers/common/cnxk/roc_utils.h b/drivers/common/cnxk/roc_utils.h new file mode 100644 index 0000000..634810e --- /dev/null +++ b/drivers/common/cnxk/roc_utils.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_UTILS_H_ +#define _ROC_UTILS_H_ + +#include "roc_platform.h" + +/* Utils */ +const char *__roc_api roc_error_msg_get(int errorcode); + +#endif /* _ROC_UTILS_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index dc012a1..1798b48 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -1,4 +1,9 @@ INTERNAL { + global: + + roc_error_msg_get; + roc_model; + roc_plt_init; local: *; };
From patchwork Thu Apr 1 12:37:29 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90377
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:29 +0530
Message-ID: <20210401123817.14348-5-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 04/52] common/cnxk: add interrupt helper API

From: Jerin Jacob

Add interrupt helper APIs in common code to register and unregister handlers for specific interrupt vectors. These APIs will be used by all cnxk drivers.

Signed-off-by: Jerin Jacob --- drivers/common/cnxk/meson.build | 3 +- drivers/common/cnxk/roc_dev_priv.h | 14 +++ drivers/common/cnxk/roc_irq.c | 249 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_platform.c | 4 + drivers/common/cnxk/roc_platform.h | 11 ++ drivers/common/cnxk/roc_priv.h | 3 + drivers/common/cnxk/version.map | 1 + 7 files changed, 284 insertions(+), 1 deletion(-) create mode 100644 drivers/common/cnxk/roc_dev_priv.h create mode 100644 drivers/common/cnxk/roc_irq.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index b0c02ce..3e0678d 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -10,7 +10,8 @@ endif config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON' deps = ['eal', 'pci', 'bus_pci', 'mbuf'] -sources = files('roc_model.c', +sources = files('roc_irq.c', + 'roc_model.c', 'roc_platform.c', 'roc_utils.c') includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h new file mode 100644 index 0000000..2254677 --- /dev/null +++ b/drivers/common/cnxk/roc_dev_priv.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell.
+ */ + +#ifndef _ROC_DEV_PRIV_H +#define _ROC_DEV_PRIV_H + +int dev_irq_register(struct plt_intr_handle *intr_handle, + plt_intr_callback_fn cb, void *data, unsigned int vec); +void dev_irq_unregister(struct plt_intr_handle *intr_handle, + plt_intr_callback_fn cb, void *data, unsigned int vec); +int dev_irqs_disable(struct plt_intr_handle *intr_handle); + +#endif /* _ROC_DEV_PRIV_H */ diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c new file mode 100644 index 0000000..4c2b4c3 --- /dev/null +++ b/drivers/common/cnxk/roc_irq.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +#if defined(__linux__) + +#include +#include +#include +#include +#include + +#define MSIX_IRQ_SET_BUF_LEN \ + (sizeof(struct vfio_irq_set) + sizeof(int) * (PLT_MAX_RXTX_INTR_VEC_ID)) + +static int +irq_get_info(struct plt_intr_handle *intr_handle) +{ + struct vfio_irq_info irq = {.argsz = sizeof(irq)}; + int rc; + + irq.index = VFIO_PCI_MSIX_IRQ_INDEX; + + rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq); + if (rc < 0) { + plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno); + return rc; + } + + plt_base_dbg("Flags=0x%x index=0x%x count=0x%x max_intr_vec_id=0x%x", + irq.flags, irq.index, irq.count, PLT_MAX_RXTX_INTR_VEC_ID); + + if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) { + plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count, + PLT_MAX_RXTX_INTR_VEC_ID); + intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID; + } else { + intr_handle->max_intr = irq.count; + } + + return 0; +} + +static int +irq_config(struct plt_intr_handle *intr_handle, unsigned int vec) +{ + char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; + struct vfio_irq_set *irq_set; + int32_t *fd_ptr; + int len, rc; + + if (vec > intr_handle->max_intr) { + plt_err("vector=%d greater than max_intr=%d", vec, + intr_handle->max_intr); + return -EINVAL; + } + + len = sizeof(struct vfio_irq_set) + 
sizeof(int32_t); + + irq_set = (struct vfio_irq_set *)irq_set_buf; + irq_set->argsz = len; + + irq_set->start = vec; + irq_set->count = 1; + irq_set->flags = + VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER; + irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; + + /* Use vec fd to set interrupt vectors */ + fd_ptr = (int32_t *)&irq_set->data[0]; + fd_ptr[0] = intr_handle->efds[vec]; + + rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + if (rc) + plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc); + + return rc; +} + +static int +irq_init(struct plt_intr_handle *intr_handle) +{ + char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; + struct vfio_irq_set *irq_set; + int32_t *fd_ptr; + int len, rc; + uint32_t i; + + if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) { + plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d", + intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID); + return -ERANGE; + } + + len = sizeof(struct vfio_irq_set) + + sizeof(int32_t) * intr_handle->max_intr; + + irq_set = (struct vfio_irq_set *)irq_set_buf; + irq_set->argsz = len; + irq_set->start = 0; + irq_set->count = intr_handle->max_intr; + irq_set->flags = + VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER; + irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; + + fd_ptr = (int32_t *)&irq_set->data[0]; + for (i = 0; i < irq_set->count; i++) + fd_ptr[i] = -1; + + rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + if (rc) + plt_err("Failed to set irqs vector rc=%d", rc); + + return rc; +} + +int +dev_irqs_disable(struct plt_intr_handle *intr_handle) +{ + /* Clear max_intr to indicate re-init next time */ + intr_handle->max_intr = 0; + return plt_intr_disable(intr_handle); +} + +int +dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb, + void *data, unsigned int vec) +{ + struct plt_intr_handle tmp_handle; + int rc; + + /* If no max_intr read from VFIO */ + if (intr_handle->max_intr == 0) { + irq_get_info(intr_handle); + 
irq_init(intr_handle); + } + + if (vec > intr_handle->max_intr) { + plt_err("Vector=%d greater than max_intr=%d", vec, + intr_handle->max_intr); + return -EINVAL; + } + + tmp_handle = *intr_handle; + /* Create new eventfd for interrupt vector */ + tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); + if (tmp_handle.fd == -1) + return -ENODEV; + + /* Register vector interrupt callback */ + rc = plt_intr_callback_register(&tmp_handle, cb, data); + if (rc) { + plt_err("Failed to register vector:0x%x irq callback.", vec); + return rc; + } + + intr_handle->efds[vec] = tmp_handle.fd; + intr_handle->nb_efd = + (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd; + if ((intr_handle->nb_efd + 1) > intr_handle->max_intr) + intr_handle->max_intr = intr_handle->nb_efd + 1; + + plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec, + intr_handle->nb_efd, intr_handle->max_intr); + + /* Enable MSIX vectors to VFIO */ + return irq_config(intr_handle, vec); +} + +void +dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb, + void *data, unsigned int vec) +{ + struct plt_intr_handle tmp_handle; + uint8_t retries = 5; /* 5 ms */ + int rc; + + if (vec > intr_handle->max_intr) { + plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec, + intr_handle->max_intr); + return; + } + + tmp_handle = *intr_handle; + tmp_handle.fd = intr_handle->efds[vec]; + if (tmp_handle.fd == -1) + return; + + do { + /* Un-register callback func from platform lib */ + rc = plt_intr_callback_unregister(&tmp_handle, cb, data); + /* Retry only if -EAGAIN */ + if (rc != -EAGAIN) + break; + plt_delay_ms(1); + retries--; + } while (retries); + + if (rc < 0) { + plt_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc); + return; + } + + plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec, + intr_handle->nb_efd, intr_handle->max_intr); + + if (intr_handle->efds[vec] != -1) + close(intr_handle->efds[vec]); + /* Disable MSIX vectors from VFIO 
*/ + intr_handle->efds[vec] = -1; + irq_config(intr_handle, vec); +} + +#else + +int +dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb, + void *data, unsigned int vec) +{ + PLT_SET_USED(intr_handle); + PLT_SET_USED(cb); + PLT_SET_USED(data); + PLT_SET_USED(vec); + + return -ENOTSUP; +} + +void +dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb, + void *data, unsigned int vec) +{ + PLT_SET_USED(intr_handle); + PLT_SET_USED(cb); + PLT_SET_USED(data); + PLT_SET_USED(vec); +} + +int +dev_irqs_disable(struct plt_intr_handle *intr_handle) +{ + PLT_SET_USED(intr_handle); + + return -ENOTSUP; +} + +#endif /* __linux__ */ diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index 9f0c53e..ee1a28b 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -2,6 +2,8 @@ * Copyright(C) 2021 Marvell. */ +#include + #include "roc_api.h" int @@ -24,3 +26,5 @@ roc_plt_init(void) roc_model_init(mz->addr); return 0; } + +RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE); diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index 1d06435..0fe0c18 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -132,12 +132,23 @@ #define plt_strlcpy rte_strlcpy /* Log */ +extern int cnxk_logtype_base; #define plt_err(fmt, args...) \ RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args) #define plt_info(fmt, args...) RTE_LOG(INFO, PMD, fmt "\n", ##args) #define plt_warn(fmt, args...) RTE_LOG(WARNING, PMD, fmt "\n", ##args) #define plt_print(fmt, args...) RTE_LOG(INFO, PMD, fmt "\n", ##args) +/** + * Log debug message if given subsystem logging is enabled. + */ +#define plt_dbg(subsystem, fmt, args...) \ + rte_log(RTE_LOG_DEBUG, cnxk_logtype_##subsystem, \ + "[%s] %s():%u " fmt "\n", #subsystem, __func__, __LINE__, \ + ##args) + +#define plt_base_dbg(fmt, ...) 
plt_dbg(base, fmt, ##__VA_ARGS__) + #ifdef __cplusplus #define CNXK_PCI_ID(subsystem_dev, dev) \ { \ diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h index 9c905d4..cd87035 100644 --- a/drivers/common/cnxk/roc_priv.h +++ b/drivers/common/cnxk/roc_priv.h @@ -8,4 +8,7 @@ /* Utils */ #include "roc_util_priv.h" +/* Dev */ +#include "roc_dev_priv.h" + #endif /* _ROC_PRIV_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 1798b48..7102704 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -1,6 +1,7 @@ INTERNAL { global: + cnxk_logtype_base; roc_error_msg_get; roc_model; roc_plt_init;

From patchwork Thu Apr 1 12:37:30 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90378
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
CC: , , , , , , , Nithin Dabilpuram , Harman Kalra , Shijith Thotton
Date: Thu, 1 Apr 2021 18:07:30 +0530
Message-ID: <20210401123817.14348-6-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v3 05/52] common/cnxk: add mbox request and response definitions
List-Id: DPDK patches and discussions
From: Jerin Jacob

The admin function (AF) driver sits in the Linux kernel as the mailbox server. The DPDK AF mailbox client sends messages to the mailbox server to complete administrative tasks such as getting the MAC address. This patch adds the mailbox request and response definitions for the existing mailbox shared between the AF driver and the DPDK driver.

Signed-off-by: Jerin Jacob
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Pavan Nikhilesh
Signed-off-by: Kiran Kumar K
Signed-off-by: Harman Kalra
Signed-off-by: Sunil Kumar Kori
Signed-off-by: Satha Rao
Signed-off-by: Shijith Thotton
---
drivers/common/cnxk/roc_api.h | 3 + drivers/common/cnxk/roc_mbox.h | 1735 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 1738 insertions(+) create mode 100644 drivers/common/cnxk/roc_mbox.h diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h index 70d9c4a..f2f1f5e 100644 --- a/drivers/common/cnxk/roc_api.h +++ b/drivers/common/cnxk/roc_api.h @@ -76,6 +76,9 @@ /* Model */ #include "roc_model.h" +/* Mbox */ +#include "roc_mbox.h" + /* Utils */ #include "roc_utils.h" diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h new file mode 100644 index 0000000..4864c76 --- /dev/null +++ b/drivers/common/cnxk/roc_mbox.h @@ -0,0 +1,1735 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef __ROC_MBOX_H__ +#define __ROC_MBOX_H__ + +#include +#include +#include + +/* Device memory does not support unaligned access, instruct compiler to + * not optimize the memory access when working with mailbox memory.
+ */ +#define __io volatile + +/* Header which precedes all mbox messages */ +struct mbox_hdr { + uint64_t __io msg_size; /* Total msgs size embedded */ + uint16_t __io num_msgs; /* No of msgs embedded */ +}; + +/* Header which precedes every msg and is also part of it */ +struct mbox_msghdr { + uint16_t __io pcifunc; /* Who's sending this msg */ + uint16_t __io id; /* Mbox message ID */ +#define MBOX_REQ_SIG (0xdead) +#define MBOX_RSP_SIG (0xbeef) + /* Signature, for validating corrupted msgs */ + uint16_t __io sig; +#define MBOX_VERSION (0x000a) + /* Version of msg's structure for this ID */ + uint16_t __io ver; + /* Offset of next msg within mailbox region */ + uint16_t __io next_msgoff; + int __io rc; /* Msg processed response code */ +}; + +/* Mailbox message types */ +#define MBOX_MSG_MASK 0xFFFF +#define MBOX_MSG_INVALID 0xFFFE +#define MBOX_MSG_MAX 0xFFFF + +#define MBOX_MESSAGES \ + /* Generic mbox IDs (range 0x000 - 0x1FF) */ \ + M(READY, 0x001, ready, msg_req, ready_msg_rsp) \ + M(ATTACH_RESOURCES, 0x002, attach_resources, rsrc_attach_req, msg_rsp) \ + M(DETACH_RESOURCES, 0x003, detach_resources, rsrc_detach_req, msg_rsp) \ + M(FREE_RSRC_CNT, 0x004, free_rsrc_cnt, msg_req, free_rsrcs_rsp) \ + M(MSIX_OFFSET, 0x005, msix_offset, msg_req, msix_offset_rsp) \ + M(VF_FLR, 0x006, vf_flr, msg_req, msg_rsp) \ + M(PTP_OP, 0x007, ptp_op, ptp_req, ptp_rsp) \ + M(GET_HW_CAP, 0x008, get_hw_cap, msg_req, get_hw_cap_rsp) \ + M(NDC_SYNC_OP, 0x009, ndc_sync_op, ndc_sync_op, msg_rsp) \ + M(LMTST_TBL_SETUP, 0x00a, lmtst_tbl_setup, lmtst_tbl_setup_req, \ + msg_rsp) \ + /* CGX mbox IDs (range 0x200 - 0x3FF) */ \ + M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \ + M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \ + M(CGX_STATS, 0x202, cgx_stats, msg_req, cgx_stats_rsp) \ + M(CGX_MAC_ADDR_SET, 0x203, cgx_mac_addr_set, cgx_mac_addr_set_or_get, \ + cgx_mac_addr_set_or_get) \ + M(CGX_MAC_ADDR_GET, 0x204, cgx_mac_addr_get, cgx_mac_addr_set_or_get, \ + 
cgx_mac_addr_set_or_get) \ + M(CGX_PROMISC_ENABLE, 0x205, cgx_promisc_enable, msg_req, msg_rsp) \ + M(CGX_PROMISC_DISABLE, 0x206, cgx_promisc_disable, msg_req, msg_rsp) \ + M(CGX_START_LINKEVENTS, 0x207, cgx_start_linkevents, msg_req, msg_rsp) \ + M(CGX_STOP_LINKEVENTS, 0x208, cgx_stop_linkevents, msg_req, msg_rsp) \ + M(CGX_GET_LINKINFO, 0x209, cgx_get_linkinfo, msg_req, \ + cgx_link_info_msg) \ + M(CGX_INTLBK_ENABLE, 0x20A, cgx_intlbk_enable, msg_req, msg_rsp) \ + M(CGX_INTLBK_DISABLE, 0x20B, cgx_intlbk_disable, msg_req, msg_rsp) \ + M(CGX_PTP_RX_ENABLE, 0x20C, cgx_ptp_rx_enable, msg_req, msg_rsp) \ + M(CGX_PTP_RX_DISABLE, 0x20D, cgx_ptp_rx_disable, msg_req, msg_rsp) \ + M(CGX_CFG_PAUSE_FRM, 0x20E, cgx_cfg_pause_frm, cgx_pause_frm_cfg, \ + cgx_pause_frm_cfg) \ + M(CGX_FW_DATA_GET, 0x20F, cgx_get_aux_link_info, msg_req, cgx_fw_data) \ + M(CGX_FEC_SET, 0x210, cgx_set_fec_param, fec_mode, fec_mode) \ + M(CGX_MAC_ADDR_ADD, 0x211, cgx_mac_addr_add, cgx_mac_addr_add_req, \ + cgx_mac_addr_add_rsp) \ + M(CGX_MAC_ADDR_DEL, 0x212, cgx_mac_addr_del, cgx_mac_addr_del_req, \ + msg_rsp) \ + M(CGX_MAC_MAX_ENTRIES_GET, 0x213, cgx_mac_max_entries_get, msg_req, \ + cgx_max_dmac_entries_get_rsp) \ + M(CGX_SET_LINK_STATE, 0x214, cgx_set_link_state, \ + cgx_set_link_state_msg, msg_rsp) \ + M(CGX_GET_PHY_MOD_TYPE, 0x215, cgx_get_phy_mod_type, msg_req, \ + cgx_phy_mod_type) \ + M(CGX_SET_PHY_MOD_TYPE, 0x216, cgx_set_phy_mod_type, cgx_phy_mod_type, \ + msg_rsp) \ + M(CGX_FEC_STATS, 0x217, cgx_fec_stats, msg_req, cgx_fec_stats_rsp) \ + M(CGX_SET_LINK_MODE, 0x218, cgx_set_link_mode, cgx_set_link_mode_req, \ + cgx_set_link_mode_rsp) \ + M(CGX_GET_PHY_FEC_STATS, 0x219, cgx_get_phy_fec_stats, msg_req, \ + msg_rsp) \ + M(CGX_STATS_RST, 0x21A, cgx_stats_rst, msg_req, msg_rsp) \ + M(RPM_STATS, 0x21C, rpm_stats, msg_req, rpm_stats_rsp) \ + /* NPA mbox IDs (range 0x400 - 0x5FF) */ \ + M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, npa_lf_alloc_req, \ + npa_lf_alloc_rsp) \ + M(NPA_LF_FREE, 0x401, 
npa_lf_free, msg_req, msg_rsp) \ + M(NPA_AQ_ENQ, 0x402, npa_aq_enq, npa_aq_enq_req, npa_aq_enq_rsp) \ + M(NPA_HWCTX_DISABLE, 0x403, npa_hwctx_disable, hwctx_disable_req, \ + msg_rsp) \ + /* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \ + M(SSO_LF_ALLOC, 0x600, sso_lf_alloc, sso_lf_alloc_req, \ + sso_lf_alloc_rsp) \ + M(SSO_LF_FREE, 0x601, sso_lf_free, sso_lf_free_req, msg_rsp) \ + M(SSOW_LF_ALLOC, 0x602, ssow_lf_alloc, ssow_lf_alloc_req, msg_rsp) \ + M(SSOW_LF_FREE, 0x603, ssow_lf_free, ssow_lf_free_req, msg_rsp) \ + M(SSO_HW_SETCONFIG, 0x604, sso_hw_setconfig, sso_hw_setconfig, \ + msg_rsp) \ + M(SSO_GRP_SET_PRIORITY, 0x605, sso_grp_set_priority, sso_grp_priority, \ + msg_rsp) \ + M(SSO_GRP_GET_PRIORITY, 0x606, sso_grp_get_priority, sso_info_req, \ + sso_grp_priority) \ + M(SSO_WS_CACHE_INV, 0x607, sso_ws_cache_inv, msg_req, msg_rsp) \ + M(SSO_GRP_QOS_CONFIG, 0x608, sso_grp_qos_config, sso_grp_qos_cfg, \ + msg_rsp) \ + M(SSO_GRP_GET_STATS, 0x609, sso_grp_get_stats, sso_info_req, \ + sso_grp_stats) \ + M(SSO_HWS_GET_STATS, 0x610, sso_hws_get_stats, sso_info_req, \ + sso_hws_stats) \ + M(SSO_HW_RELEASE_XAQ, 0x611, sso_hw_release_xaq_aura, \ + sso_hw_xaq_release, msg_rsp) \ + /* TIM mbox IDs (range 0x800 - 0x9FF) */ \ + M(TIM_LF_ALLOC, 0x800, tim_lf_alloc, tim_lf_alloc_req, \ + tim_lf_alloc_rsp) \ + M(TIM_LF_FREE, 0x801, tim_lf_free, tim_ring_req, msg_rsp) \ + M(TIM_CONFIG_RING, 0x802, tim_config_ring, tim_config_req, msg_rsp) \ + M(TIM_ENABLE_RING, 0x803, tim_enable_ring, tim_ring_req, \ + tim_enable_rsp) \ + M(TIM_DISABLE_RING, 0x804, tim_disable_ring, tim_ring_req, msg_rsp) \ + /* CPT mbox IDs (range 0xA00 - 0xBFF) */ \ + M(CPT_LF_ALLOC, 0xA00, cpt_lf_alloc, cpt_lf_alloc_req_msg, msg_rsp) \ + M(CPT_LF_FREE, 0xA01, cpt_lf_free, msg_req, msg_rsp) \ + M(CPT_RD_WR_REGISTER, 0xA02, cpt_rd_wr_register, cpt_rd_wr_reg_msg, \ + cpt_rd_wr_reg_msg) \ + M(CPT_SET_CRYPTO_GRP, 0xA03, cpt_set_crypto_grp, \ + cpt_set_crypto_grp_req_msg, msg_rsp) \ + M(CPT_INLINE_IPSEC_CFG, 0xA04, 
cpt_inline_ipsec_cfg, \ + cpt_inline_ipsec_cfg_msg, msg_rsp) \ + M(CPT_STATS, 0xA05, cpt_sts_get, cpt_sts_req, cpt_sts_rsp) \ + M(CPT_RXC_TIME_CFG, 0xA06, cpt_rxc_time_cfg, cpt_rxc_time_cfg_req, \ + msg_rsp) \ + M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, \ + cpt_rx_inline_lf_cfg_msg, msg_rsp) \ + M(CPT_GET_CAPS, 0xBFD, cpt_caps_get, msg_req, cpt_caps_rsp_msg) \ + M(CPT_GET_ENG_GRP, 0xBFF, cpt_eng_grp_get, cpt_eng_grp_req, \ + cpt_eng_grp_rsp) \ + /* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \ + M(NPC_MCAM_ALLOC_ENTRY, 0x6000, npc_mcam_alloc_entry, \ + npc_mcam_alloc_entry_req, npc_mcam_alloc_entry_rsp) \ + M(NPC_MCAM_FREE_ENTRY, 0x6001, npc_mcam_free_entry, \ + npc_mcam_free_entry_req, msg_rsp) \ + M(NPC_MCAM_WRITE_ENTRY, 0x6002, npc_mcam_write_entry, \ + npc_mcam_write_entry_req, msg_rsp) \ + M(NPC_MCAM_ENA_ENTRY, 0x6003, npc_mcam_ena_entry, \ + npc_mcam_ena_dis_entry_req, msg_rsp) \ + M(NPC_MCAM_DIS_ENTRY, 0x6004, npc_mcam_dis_entry, \ + npc_mcam_ena_dis_entry_req, msg_rsp) \ + M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry, \ + npc_mcam_shift_entry_req, npc_mcam_shift_entry_rsp) \ + M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter, \ + npc_mcam_alloc_counter_req, npc_mcam_alloc_counter_rsp) \ + M(NPC_MCAM_FREE_COUNTER, 0x6007, npc_mcam_free_counter, \ + npc_mcam_oper_counter_req, msg_rsp) \ + M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter, \ + npc_mcam_unmap_counter_req, msg_rsp) \ + M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_clear_counter, \ + npc_mcam_oper_counter_req, msg_rsp) \ + M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_counter_stats, \ + npc_mcam_oper_counter_req, npc_mcam_oper_counter_rsp) \ + M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, \ + npc_mcam_alloc_and_write_entry, npc_mcam_alloc_and_write_entry_req, \ + npc_mcam_alloc_and_write_entry_rsp) \ + M(NPC_GET_KEX_CFG, 0x600c, npc_get_kex_cfg, msg_req, \ + npc_get_kex_cfg_rsp) \ + M(NPC_INSTALL_FLOW, 0x600d, npc_install_flow, npc_install_flow_req, \ + npc_install_flow_rsp) \ 
+ M(NPC_DELETE_FLOW, 0x600e, npc_delete_flow, npc_delete_flow_req, \ + msg_rsp) \ + M(NPC_MCAM_READ_ENTRY, 0x600f, npc_mcam_read_entry, \ + npc_mcam_read_entry_req, npc_mcam_read_entry_rsp) \ + M(NPC_SET_PKIND, 0x6010, npc_set_pkind, npc_set_pkind, msg_rsp) \ + M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule, msg_req, \ + npc_mcam_read_base_rule_rsp) \ + /* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \ + M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc, nix_lf_alloc_req, \ + nix_lf_alloc_rsp) \ + M(NIX_LF_FREE, 0x8001, nix_lf_free, nix_lf_free_req, msg_rsp) \ + M(NIX_AQ_ENQ, 0x8002, nix_aq_enq, nix_aq_enq_req, nix_aq_enq_rsp) \ + M(NIX_HWCTX_DISABLE, 0x8003, nix_hwctx_disable, hwctx_disable_req, \ + msg_rsp) \ + M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc, nix_txsch_alloc_req, \ + nix_txsch_alloc_rsp) \ + M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free, nix_txsch_free_req, msg_rsp) \ + M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_cfg, nix_txschq_config, \ + nix_txschq_config) \ + M(NIX_STATS_RST, 0x8007, nix_stats_rst, msg_req, msg_rsp) \ + M(NIX_VTAG_CFG, 0x8008, nix_vtag_cfg, nix_vtag_config, msg_rsp) \ + M(NIX_RSS_FLOWKEY_CFG, 0x8009, nix_rss_flowkey_cfg, \ + nix_rss_flowkey_cfg, nix_rss_flowkey_cfg_rsp) \ + M(NIX_SET_MAC_ADDR, 0x800a, nix_set_mac_addr, nix_set_mac_addr, \ + msg_rsp) \ + M(NIX_SET_RX_MODE, 0x800b, nix_set_rx_mode, nix_rx_mode, msg_rsp) \ + M(NIX_SET_HW_FRS, 0x800c, nix_set_hw_frs, nix_frs_cfg, msg_rsp) \ + M(NIX_LF_START_RX, 0x800d, nix_lf_start_rx, msg_req, msg_rsp) \ + M(NIX_LF_STOP_RX, 0x800e, nix_lf_stop_rx, msg_req, msg_rsp) \ + M(NIX_MARK_FORMAT_CFG, 0x800f, nix_mark_format_cfg, \ + nix_mark_format_cfg, nix_mark_format_cfg_rsp) \ + M(NIX_SET_RX_CFG, 0x8010, nix_set_rx_cfg, nix_rx_cfg, msg_rsp) \ + M(NIX_LSO_FORMAT_CFG, 0x8011, nix_lso_format_cfg, nix_lso_format_cfg, \ + nix_lso_format_cfg_rsp) \ + M(NIX_LF_PTP_TX_ENABLE, 0x8013, nix_lf_ptp_tx_enable, msg_req, \ + msg_rsp) \ + M(NIX_LF_PTP_TX_DISABLE, 0x8014, nix_lf_ptp_tx_disable, msg_req, \ + msg_rsp) \ + 
M(NIX_SET_VLAN_TPID, 0x8015, nix_set_vlan_tpid, nix_set_vlan_tpid, \ + msg_rsp) \ + M(NIX_BP_ENABLE, 0x8016, nix_bp_enable, nix_bp_cfg_req, \ + nix_bp_cfg_rsp) \ + M(NIX_BP_DISABLE, 0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp) \ + M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, \ + nix_get_mac_addr_rsp) \ + M(NIX_INLINE_IPSEC_CFG, 0x8019, nix_inline_ipsec_cfg, \ + nix_inline_ipsec_cfg, msg_rsp) \ + M(NIX_INLINE_IPSEC_LF_CFG, 0x801a, nix_inline_ipsec_lf_cfg, \ + nix_inline_ipsec_lf_cfg, msg_rsp) \ + M(NIX_CN10K_AQ_ENQ, 0x801b, nix_cn10k_aq_enq, nix_cn10k_aq_enq_req, \ + nix_cn10k_aq_enq_rsp) \ + M(NIX_GET_HW_INFO, 0x801c, nix_get_hw_info, msg_req, nix_hw_info) + +/* Messages initiated by AF (range 0xC00 - 0xDFF) */ +#define MBOX_UP_CGX_MESSAGES \ + M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, msg_rsp) \ + M(CGX_PTP_RX_INFO, 0xC01, cgx_ptp_rx_info, cgx_ptp_rx_info_msg, msg_rsp) + +enum { +#define M(_name, _id, _1, _2, _3) MBOX_MSG_##_name = _id, + MBOX_MESSAGES MBOX_UP_CGX_MESSAGES +#undef M +}; + +/* Mailbox message formats */ + +#define RVU_DEFAULT_PF_FUNC 0xFFFF + +/* Generic request msg used for those mbox messages which + * don't send any data in the request. + */ +struct msg_req { + struct mbox_msghdr hdr; +}; + +/* Generic response msg used a ack or response for those mbox + * messages which does not have a specific rsp msg format. + */ +struct msg_rsp { + struct mbox_msghdr hdr; +}; + +/* RVU mailbox error codes + * Range 256 - 300. 
+ */ +enum rvu_af_status { + RVU_INVALID_VF_ID = -256, +}; + +struct ready_msg_rsp { + struct mbox_msghdr hdr; + uint16_t __io sclk_freq; /* SCLK frequency */ + uint16_t __io rclk_freq; /* RCLK frequency */ +}; + +/* Struct to set pkind */ +struct npc_set_pkind { + struct mbox_msghdr hdr; +#define ROC_PRIV_FLAGS_DEFAULT BIT_ULL(0) +#define ROC_PRIV_FLAGS_EDSA BIT_ULL(1) +#define ROC_PRIV_FLAGS_HIGIG BIT_ULL(2) +#define ROC_PRIV_FLAGS_LEN_90B BIT_ULL(3) +#define ROC_PRIV_FLAGS_CUSTOM BIT_ULL(63) + uint64_t __io mode; +#define PKIND_TX BIT_ULL(0) +#define PKIND_RX BIT_ULL(1) + uint8_t __io dir; + uint8_t __io pkind; /* valid only in case custom flag */ +}; + +/* Structure for requesting resource provisioning. + * 'modify' flag to be used when either requesting more + * or to detach partial of a certain resource type. + * Rest of the fields specify how many of what type to + * be attached. + * To request LFs from two blocks of same type this mailbox + * can be sent twice as below: + * struct rsrc_attach *attach; + * .. Allocate memory for message .. + * attach->cptlfs = 3; <3 LFs from CPT0> + * .. Send message .. + * .. Allocate memory for message .. + * attach->modify = 1; + * attach->cpt_blkaddr = BLKADDR_CPT1; + * attach->cptlfs = 2; <2 LFs from CPT1> + * .. Send message .. + */ +struct rsrc_attach_req { + struct mbox_msghdr hdr; + uint8_t __io modify : 1; + uint8_t __io npalf : 1; + uint8_t __io nixlf : 1; + uint16_t __io sso; + uint16_t __io ssow; + uint16_t __io timlfs; + uint16_t __io cptlfs; + uint16_t __io reelfs; + /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */ + int __io cpt_blkaddr; + /* BLKADDR_REE0/BLKADDR_REE1 or 0 for BLKADDR_REE0 */ + int __io ree_blkaddr; +}; + +/* Structure for relinquishing resources. + * 'partial' flag to be used when relinquishing all resources + * but only of a certain type. If not set, all resources of all + * types provisioned to the RVU function will be detached. 
+ */ +struct rsrc_detach_req { + struct mbox_msghdr hdr; + uint8_t __io partial : 1; + uint8_t __io npalf : 1; + uint8_t __io nixlf : 1; + uint8_t __io sso : 1; + uint8_t __io ssow : 1; + uint8_t __io timlfs : 1; + uint8_t __io cptlfs : 1; + uint8_t __io reelfs : 1; +}; + +/* NIX Transmit schedulers */ +#define NIX_TXSCH_LVL_SMQ 0x0 +#define NIX_TXSCH_LVL_MDQ 0x0 +#define NIX_TXSCH_LVL_TL4 0x1 +#define NIX_TXSCH_LVL_TL3 0x2 +#define NIX_TXSCH_LVL_TL2 0x3 +#define NIX_TXSCH_LVL_TL1 0x4 +#define NIX_TXSCH_LVL_CNT 0x5 + +/* + * Number of resources available to the caller. + * In reply to MBOX_MSG_FREE_RSRC_CNT. + */ +struct free_rsrcs_rsp { + struct mbox_msghdr hdr; + uint16_t __io schq[NIX_TXSCH_LVL_CNT]; + uint16_t __io sso; + uint16_t __io tim; + uint16_t __io ssow; + uint16_t __io cpt; + uint8_t __io npa; + uint8_t __io nix; + uint16_t __io schq_nix1[NIX_TXSCH_LVL_CNT]; + uint8_t __io nix1; + uint8_t __io cpt1; + uint8_t __io ree0; + uint8_t __io ree1; +}; + +#define MSIX_VECTOR_INVALID 0xFFFF +#define MAX_RVU_BLKLF_CNT 256 + +struct msix_offset_rsp { + struct mbox_msghdr hdr; + uint16_t __io npa_msixoff; + uint16_t __io nix_msixoff; + uint16_t __io sso; + uint16_t __io ssow; + uint16_t __io timlfs; + uint16_t __io cptlfs; + uint16_t __io sso_msixoff[MAX_RVU_BLKLF_CNT]; + uint16_t __io ssow_msixoff[MAX_RVU_BLKLF_CNT]; + uint16_t __io timlf_msixoff[MAX_RVU_BLKLF_CNT]; + uint16_t __io cptlf_msixoff[MAX_RVU_BLKLF_CNT]; + uint16_t __io cpt1_lfs; + uint16_t __io ree0_lfs; + uint16_t __io ree1_lfs; + uint16_t __io cpt1_lf_msixoff[MAX_RVU_BLKLF_CNT]; + uint16_t __io ree0_lf_msixoff[MAX_RVU_BLKLF_CNT]; + uint16_t __io ree1_lf_msixoff[MAX_RVU_BLKLF_CNT]; +}; + +struct lmtst_tbl_setup_req { + struct mbox_msghdr hdr; + + uint64_t __io dis_sched_early_comp : 1; + uint64_t __io sched_ena : 1; + uint64_t __io dis_line_pref : 1; + uint64_t __io ssow_pf_func : 13; + uint16_t __io pcifunc; +}; + +/* CGX mbox message formats */ + +struct cgx_stats_rsp { + struct mbox_msghdr hdr; 
+#define CGX_RX_STATS_COUNT 13 +#define CGX_TX_STATS_COUNT 18 + uint64_t __io rx_stats[CGX_RX_STATS_COUNT]; + uint64_t __io tx_stats[CGX_TX_STATS_COUNT]; +}; + +struct rpm_stats_rsp { + struct mbox_msghdr hdr; +#define RPM_RX_STATS_COUNT 43 +#define RPM_TX_STATS_COUNT 34 + uint64_t __io rx_stats[RPM_RX_STATS_COUNT]; + uint64_t __io tx_stats[RPM_TX_STATS_COUNT]; +}; + +struct cgx_fec_stats_rsp { + struct mbox_msghdr hdr; + uint64_t __io fec_corr_blks; + uint64_t __io fec_uncorr_blks; +}; + +/* Structure for requesting the operation for + * setting/getting mac address in the CGX interface + */ +struct cgx_mac_addr_set_or_get { + struct mbox_msghdr hdr; + uint8_t __io mac_addr[PLT_ETHER_ADDR_LEN]; +}; + +/* Structure for requesting the operation to + * add DMAC filter entry into CGX interface + */ +struct cgx_mac_addr_add_req { + struct mbox_msghdr hdr; + uint8_t __io mac_addr[PLT_ETHER_ADDR_LEN]; +}; + +/* Structure for response against the operation to + * add DMAC filter entry into CGX interface + */ +struct cgx_mac_addr_add_rsp { + struct mbox_msghdr hdr; + uint8_t __io index; +}; + +/* Structure for requesting the operation to + * delete DMAC filter entry from CGX interface + */ +struct cgx_mac_addr_del_req { + struct mbox_msghdr hdr; + uint8_t __io index; +}; + +/* Structure for response against the operation to + * get maximum supported DMAC filter entries + */ +struct cgx_max_dmac_entries_get_rsp { + struct mbox_msghdr hdr; + uint8_t __io max_dmac_filters; +}; + +struct cgx_link_user_info { + uint64_t __io link_up : 1; + uint64_t __io full_duplex : 1; + uint64_t __io lmac_type_id : 4; + uint64_t __io speed : 20; /* speed in Mbps */ + uint64_t __io an : 1; /* AN supported or not */ + uint64_t __io fec : 2; /* FEC type if enabled else 0 */ + uint64_t __io port : 8; +#define LMACTYPE_STR_LEN 16 + char lmac_type[LMACTYPE_STR_LEN]; +}; + +struct cgx_link_info_msg { + struct mbox_msghdr hdr; + struct cgx_link_user_info link_info; +}; + +struct cgx_ptp_rx_info_msg { 
+ struct mbox_msghdr hdr; + uint8_t __io ptp_en; +}; + +struct cgx_pause_frm_cfg { + struct mbox_msghdr hdr; + uint8_t __io set; + /* set = 1 if the request is to config pause frames */ + /* set = 0 if the request is to fetch pause frames config */ + uint8_t __io rx_pause; + uint8_t __io tx_pause; +}; + +struct sfp_eeprom_s { +#define SFP_EEPROM_SIZE 256 + uint16_t __io sff_id; + uint8_t __io buf[SFP_EEPROM_SIZE]; + uint64_t __io reserved; +}; + +enum fec_type { + ROC_FEC_NONE, + ROC_FEC_BASER, + ROC_FEC_RS, +}; + +struct phy_s { + uint64_t __io can_change_mod_type : 1; + uint64_t __io mod_type : 1; +}; + +struct cgx_lmac_fwdata_s { + uint16_t __io rw_valid; + uint64_t __io supported_fec; + uint64_t __io supported_an; + uint64_t __io supported_link_modes; + /* Only applicable if AN is supported */ + uint64_t __io advertised_fec; + uint64_t __io advertised_link_modes; + /* Only applicable if SFP/QSFP slot is present */ + struct sfp_eeprom_s sfp_eeprom; + struct phy_s phy; +#define LMAC_FWDATA_RESERVED_MEM 1023 + uint64_t __io reserved[LMAC_FWDATA_RESERVED_MEM]; +}; + +struct cgx_fw_data { + struct mbox_msghdr hdr; + struct cgx_lmac_fwdata_s fwdata; +}; + +struct fec_mode { + struct mbox_msghdr hdr; + int __io fec; +}; + +struct cgx_set_link_state_msg { + struct mbox_msghdr hdr; + uint8_t __io enable; +}; + +struct cgx_phy_mod_type { + struct mbox_msghdr hdr; + int __io mod; +}; + +struct cgx_set_link_mode_args { + uint32_t __io speed; + uint8_t __io duplex; + uint8_t __io an; + uint8_t __io ports; + uint64_t __io mode; +}; + +struct cgx_set_link_mode_req { + struct mbox_msghdr hdr; + struct cgx_set_link_mode_args args; +}; + +struct cgx_set_link_mode_rsp { + struct mbox_msghdr hdr; + int __io status; +}; + +/* NPA mbox message formats */ + +/* NPA mailbox error codes + * Range 301 - 400. 
+ */ +enum npa_af_status { + NPA_AF_ERR_PARAM = -301, + NPA_AF_ERR_AQ_FULL = -302, + NPA_AF_ERR_AQ_ENQUEUE = -303, + NPA_AF_ERR_AF_LF_INVALID = -304, + NPA_AF_ERR_AF_LF_ALLOC = -305, + NPA_AF_ERR_LF_RESET = -306, +}; + +#define NPA_AURA_SZ_0 0 +#define NPA_AURA_SZ_128 1 +#define NPA_AURA_SZ_256 2 +#define NPA_AURA_SZ_512 3 +#define NPA_AURA_SZ_1K 4 +#define NPA_AURA_SZ_2K 5 +#define NPA_AURA_SZ_4K 6 +#define NPA_AURA_SZ_8K 7 +#define NPA_AURA_SZ_16K 8 +#define NPA_AURA_SZ_32K 9 +#define NPA_AURA_SZ_64K 10 +#define NPA_AURA_SZ_128K 11 +#define NPA_AURA_SZ_256K 12 +#define NPA_AURA_SZ_512K 13 +#define NPA_AURA_SZ_1M 14 +#define NPA_AURA_SZ_MAX 15 + +/* For NPA LF context alloc and init */ +struct npa_lf_alloc_req { + struct mbox_msghdr hdr; + int __io node; + int __io aura_sz; /* No of auras. See NPA_AURA_SZ_* */ + uint32_t __io nr_pools; /* No of pools */ + uint64_t __io way_mask; +}; + +struct npa_lf_alloc_rsp { + struct mbox_msghdr hdr; + uint32_t __io stack_pg_ptrs; /* No of ptrs per stack page */ + uint32_t __io stack_pg_bytes; /* Size of stack page */ + uint16_t __io qints; /* NPA_AF_CONST::QINTS */ + uint8_t __io cache_lines; /* Batch Alloc DMA */ +}; + +/* NPA AQ enqueue msg */ +struct npa_aq_enq_req { + struct mbox_msghdr hdr; + uint32_t __io aura_id; + uint8_t __io ctype; + uint8_t __io op; + union { + /* Valid when op == WRITE/INIT and ctype == AURA. + * LF fills the pool_id in aura.pool_addr. AF will translate + * the pool_id to pool context pointer. 
+ */ + __io struct npa_aura_s aura; + /* Valid when op == WRITE/INIT and ctype == POOL */ + __io struct npa_pool_s pool; + }; + /* Mask data when op == WRITE (1=write, 0=don't write) */ + union { + /* Valid when op == WRITE and ctype == AURA */ + __io struct npa_aura_s aura_mask; + /* Valid when op == WRITE and ctype == POOL */ + __io struct npa_pool_s pool_mask; + }; +}; + +struct npa_aq_enq_rsp { + struct mbox_msghdr hdr; + union { + /* Valid when op == READ and ctype == AURA */ + __io struct npa_aura_s aura; + /* Valid when op == READ and ctype == POOL */ + __io struct npa_pool_s pool; + }; +}; + +/* Disable all contexts of type 'ctype' */ +struct hwctx_disable_req { + struct mbox_msghdr hdr; + uint8_t __io ctype; +}; + +/* NIX mbox message formats */ + +/* NIX mailbox error codes + * Range 401 - 500. + */ +enum nix_af_status { + NIX_AF_ERR_PARAM = -401, + NIX_AF_ERR_AQ_FULL = -402, + NIX_AF_ERR_AQ_ENQUEUE = -403, + NIX_AF_ERR_AF_LF_INVALID = -404, + NIX_AF_ERR_AF_LF_ALLOC = -405, + NIX_AF_ERR_TLX_ALLOC_FAIL = -406, + NIX_AF_ERR_TLX_INVALID = -407, + NIX_AF_ERR_RSS_SIZE_INVALID = -408, + NIX_AF_ERR_RSS_GRPS_INVALID = -409, + NIX_AF_ERR_FRS_INVALID = -410, + NIX_AF_ERR_RX_LINK_INVALID = -411, + NIX_AF_INVAL_TXSCHQ_CFG = -412, + NIX_AF_SMQ_FLUSH_FAILED = -413, + NIX_AF_ERR_LF_RESET = -414, + NIX_AF_ERR_RSS_NOSPC_FIELD = -415, + NIX_AF_ERR_RSS_NOSPC_ALGO = -416, + NIX_AF_ERR_MARK_CFG_FAIL = -417, + NIX_AF_ERR_LSO_CFG_FAIL = -418, + NIX_AF_INVAL_NPA_PF_FUNC = -419, + NIX_AF_INVAL_SSO_PF_FUNC = -420, + NIX_AF_ERR_TX_VTAG_NOSPC = -421, + NIX_AF_ERR_RX_VTAG_INUSE = -422, + NIX_AF_ERR_PTP_CONFIG_FAIL = -423, +}; + +/* For NIX LF context alloc and init */ +struct nix_lf_alloc_req { + struct mbox_msghdr hdr; + int __io node; + uint32_t __io rq_cnt; /* No of receive queues */ + uint32_t __io sq_cnt; /* No of send queues */ + uint32_t __io cq_cnt; /* No of completion queues */ + uint8_t __io xqe_sz; + uint16_t __io rss_sz; + uint8_t __io rss_grps; + uint16_t __io npa_func; 
+ /* RVU_DEFAULT_PF_FUNC == default pf_func associated with lf */ + uint16_t __io sso_func; + uint64_t __io rx_cfg; /* See NIX_AF_LF(0..127)_RX_CFG */ + uint64_t __io way_mask; +#define NIX_LF_RSS_TAG_LSB_AS_ADDER BIT_ULL(0) + uint64_t flags; +}; + +struct nix_lf_alloc_rsp { + struct mbox_msghdr hdr; + uint16_t __io sqb_size; + uint16_t __io rx_chan_base; + uint16_t __io tx_chan_base; + uint8_t __io rx_chan_cnt; /* Total number of RX channels */ + uint8_t __io tx_chan_cnt; /* Total number of TX channels */ + uint8_t __io lso_tsov4_idx; + uint8_t __io lso_tsov6_idx; + uint8_t __io mac_addr[PLT_ETHER_ADDR_LEN]; + uint8_t __io lf_rx_stats; /* NIX_AF_CONST1::LF_RX_STATS */ + uint8_t __io lf_tx_stats; /* NIX_AF_CONST1::LF_TX_STATS */ + uint16_t __io cints; /* NIX_AF_CONST2::CINTS */ + uint16_t __io qints; /* NIX_AF_CONST2::QINTS */ + uint8_t __io hw_rx_tstamp_en; /*set if rx timestamping enabled */ + uint8_t __io cgx_links; /* No. of CGX links present in HW */ + uint8_t __io lbk_links; /* No. of LBK links present in HW */ + uint8_t __io sdp_links; /* No. 
of SDP links present in HW */ + uint8_t tx_link; /* Transmit channel link number */ +}; + +struct nix_lf_free_req { + struct mbox_msghdr hdr; +#define NIX_LF_DISABLE_FLOWS BIT_ULL(0) +#define NIX_LF_DONT_FREE_TX_VTAG BIT_ULL(1) + uint64_t __io flags; +}; + +/* CN10x NIX AQ enqueue msg */ +struct nix_cn10k_aq_enq_req { + struct mbox_msghdr hdr; + uint32_t __io qidx; + uint8_t __io ctype; + uint8_t __io op; + union { + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */ + __io struct nix_cn10k_rq_ctx_s rq; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_SQ */ + __io struct nix_cn10k_sq_ctx_s sq; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_CQ */ + __io struct nix_cq_ctx_s cq; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RSS */ + __io struct nix_rsse_s rss; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_MCE */ + __io struct nix_rx_mce_s mce; + }; + /* Mask data when op == WRITE (1=write, 0=don't write) */ + union { + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */ + __io struct nix_cn10k_rq_ctx_s rq_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */ + __io struct nix_cn10k_sq_ctx_s sq_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_CQ */ + __io struct nix_cq_ctx_s cq_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RSS */ + __io struct nix_rsse_s rss_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_MCE */ + __io struct nix_rx_mce_s mce_mask; + }; +}; + +struct nix_cn10k_aq_enq_rsp { + struct mbox_msghdr hdr; + union { + struct nix_cn10k_rq_ctx_s rq; + struct nix_cn10k_sq_ctx_s sq; + struct nix_cq_ctx_s cq; + struct nix_rsse_s rss; + struct nix_rx_mce_s mce; + }; +}; + +/* NIX AQ enqueue msg */ +struct nix_aq_enq_req { + struct mbox_msghdr hdr; + uint32_t __io qidx; + uint8_t __io ctype; + uint8_t __io op; + union { + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */ + __io struct nix_rq_ctx_s rq; + /* Valid when op == WRITE/INIT and 
ctype == NIX_AQ_CTYPE_SQ */ + __io struct nix_sq_ctx_s sq; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_CQ */ + __io struct nix_cq_ctx_s cq; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RSS */ + __io struct nix_rsse_s rss; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_MCE */ + __io struct nix_rx_mce_s mce; + }; + /* Mask data when op == WRITE (1=write, 0=don't write) */ + union { + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */ + __io struct nix_rq_ctx_s rq_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */ + __io struct nix_sq_ctx_s sq_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_CQ */ + __io struct nix_cq_ctx_s cq_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RSS */ + __io struct nix_rsse_s rss_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_MCE */ + __io struct nix_rx_mce_s mce_mask; + }; +}; + +struct nix_aq_enq_rsp { + struct mbox_msghdr hdr; + union { + __io struct nix_rq_ctx_s rq; + __io struct nix_sq_ctx_s sq; + __io struct nix_cq_ctx_s cq; + __io struct nix_rsse_s rss; + __io struct nix_rx_mce_s mce; + }; +}; + +/* Tx scheduler/shaper mailbox messages */ + +#define MAX_TXSCHQ_PER_FUNC 128 + +struct nix_txsch_alloc_req { + struct mbox_msghdr hdr; + /* Scheduler queue count request at each level */ + uint16_t __io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */ + uint16_t __io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */ +}; + +struct nix_txsch_alloc_rsp { + struct mbox_msghdr hdr; + /* Scheduler queue count allocated at each level */ + uint16_t __io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */ + uint16_t __io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. 
queues */ + /* Scheduler queue list allocated at each level */ + uint16_t __io schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; + uint16_t __io schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; + /* Traffic aggregation scheduler level */ + uint8_t __io aggr_level; + /* Aggregation lvl's RR_PRIO config */ + uint8_t __io aggr_lvl_rr_prio; + /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */ + uint8_t __io link_cfg_lvl; +}; + +struct nix_txsch_free_req { + struct mbox_msghdr hdr; +#define TXSCHQ_FREE_ALL BIT_ULL(0) + uint16_t __io flags; + /* Scheduler queue level to be freed */ + uint16_t __io schq_lvl; + /* List of scheduler queues to be freed */ + uint16_t __io schq; +}; + +struct nix_txschq_config { + struct mbox_msghdr hdr; + uint8_t __io lvl; /* SMQ/MDQ/TL4/TL3/TL2/TL1 */ + uint8_t __io read; +#define TXSCHQ_IDX_SHIFT 16 +#define TXSCHQ_IDX_MASK (BIT_ULL(10) - 1) +#define TXSCHQ_IDX(reg, shift) (((reg) >> (shift)) & TXSCHQ_IDX_MASK) + uint8_t __io num_regs; +#define MAX_REGS_PER_MBOX_MSG 20 + uint64_t __io reg[MAX_REGS_PER_MBOX_MSG]; + uint64_t __io regval[MAX_REGS_PER_MBOX_MSG]; + /* All 0's => overwrite with new value */ + uint64_t __io regval_mask[MAX_REGS_PER_MBOX_MSG]; +}; + +struct nix_vtag_config { + struct mbox_msghdr hdr; + /* '0' for 4 octet VTAG, '1' for 8 octet VTAG */ + uint8_t __io vtag_size; + /* cfg_type is '0' for tx vlan cfg + * cfg_type is '1' for rx vlan cfg + */ + uint8_t __io cfg_type; + union { + /* Valid when cfg_type is '0' */ + struct { + uint64_t __io vtag0; + uint64_t __io vtag1; + + /* cfg_vtag0 & cfg_vtag1 fields are valid + * when free_vtag0 & free_vtag1 are '0's. + */ + /* cfg_vtag0 = 1 to configure vtag0 */ + uint8_t __io cfg_vtag0 : 1; + /* cfg_vtag1 = 1 to configure vtag1 */ + uint8_t __io cfg_vtag1 : 1; + + /* vtag0_idx & vtag1_idx are only valid when + * both cfg_vtag0 & cfg_vtag1 are '0's, + * these fields are used along with free_vtag0 + * & free_vtag1 to free the nix lf's tx_vlan + * configuration. 
+ * + * Denotes the indices of tx_vtag def registers + * that needs to be cleared and freed. + */ + int __io vtag0_idx; + int __io vtag1_idx; + + /* Free_vtag0 & free_vtag1 fields are valid + * when cfg_vtag0 & cfg_vtag1 are '0's. + */ + /* Free_vtag0 = 1 clears vtag0 configuration + * vtag0_idx denotes the index to be cleared. + */ + uint8_t __io free_vtag0 : 1; + /* Free_vtag1 = 1 clears vtag1 configuration + * vtag1_idx denotes the index to be cleared. + */ + uint8_t __io free_vtag1 : 1; + } tx; + + /* Valid when cfg_type is '1' */ + struct { + /* Rx vtag type index, valid values are in 0..7 range */ + uint8_t __io vtag_type; + /* Rx vtag strip */ + uint8_t __io strip_vtag : 1; + /* Rx vtag capture */ + uint8_t __io capture_vtag : 1; + } rx; + }; +}; + +struct nix_vtag_config_rsp { + struct mbox_msghdr hdr; + /* Indices of tx_vtag def registers used to configure + * tx vtag0 & vtag1 headers, these indices are valid + * when nix_vtag_config mbox requested for vtag0 and/ + * or vtag1 configuration. 
+ */ + int __io vtag0_idx; + int __io vtag1_idx; +}; + +struct nix_rss_flowkey_cfg { + struct mbox_msghdr hdr; + int __io mcam_index; /* MCAM entry index to modify */ + uint32_t __io flowkey_cfg; /* Flowkey types selected */ +#define FLOW_KEY_TYPE_PORT BIT(0) +#define FLOW_KEY_TYPE_IPV4 BIT(1) +#define FLOW_KEY_TYPE_IPV6 BIT(2) +#define FLOW_KEY_TYPE_TCP BIT(3) +#define FLOW_KEY_TYPE_UDP BIT(4) +#define FLOW_KEY_TYPE_SCTP BIT(5) +#define FLOW_KEY_TYPE_NVGRE BIT(6) +#define FLOW_KEY_TYPE_VXLAN BIT(7) +#define FLOW_KEY_TYPE_GENEVE BIT(8) +#define FLOW_KEY_TYPE_ETH_DMAC BIT(9) +#define FLOW_KEY_TYPE_IPV6_EXT BIT(10) +#define FLOW_KEY_TYPE_GTPU BIT(11) +#define FLOW_KEY_TYPE_INNR_IPV4 BIT(12) +#define FLOW_KEY_TYPE_INNR_IPV6 BIT(13) +#define FLOW_KEY_TYPE_INNR_TCP BIT(14) +#define FLOW_KEY_TYPE_INNR_UDP BIT(15) +#define FLOW_KEY_TYPE_INNR_SCTP BIT(16) +#define FLOW_KEY_TYPE_INNR_ETH_DMAC BIT(17) +#define FLOW_KEY_TYPE_CH_LEN_90B BIT(18) +#define FLOW_KEY_TYPE_CUSTOM0 BIT(19) +#define FLOW_KEY_TYPE_VLAN BIT(20) +#define FLOW_KEY_TYPE_L4_DST BIT(28) +#define FLOW_KEY_TYPE_L4_SRC BIT(29) +#define FLOW_KEY_TYPE_L3_DST BIT(30) +#define FLOW_KEY_TYPE_L3_SRC BIT(31) + uint8_t __io group; /* RSS context or group */ +}; + +struct nix_rss_flowkey_cfg_rsp { + struct mbox_msghdr hdr; + uint8_t __io alg_idx; /* Selected algo index */ +}; + +struct nix_set_mac_addr { + struct mbox_msghdr hdr; + uint8_t __io mac_addr[PLT_ETHER_ADDR_LEN]; +}; + +struct nix_get_mac_addr_rsp { + struct mbox_msghdr hdr; + uint8_t __io mac_addr[PLT_ETHER_ADDR_LEN]; +}; + +struct nix_mark_format_cfg { + struct mbox_msghdr hdr; + uint8_t __io offset; + uint8_t __io y_mask; + uint8_t __io y_val; + uint8_t __io r_mask; + uint8_t __io r_val; +}; + +struct nix_mark_format_cfg_rsp { + struct mbox_msghdr hdr; + uint8_t __io mark_format_idx; +}; + +struct nix_lso_format_cfg { + struct mbox_msghdr hdr; + uint64_t __io field_mask; + uint64_t __io fields[NIX_LSO_FIELD_MAX]; +}; + +struct nix_lso_format_cfg_rsp { + 
struct mbox_msghdr hdr; + uint8_t __io lso_format_idx; +}; + +struct nix_rx_mode { + struct mbox_msghdr hdr; +#define NIX_RX_MODE_UCAST BIT(0) +#define NIX_RX_MODE_PROMISC BIT(1) +#define NIX_RX_MODE_ALLMULTI BIT(2) + uint16_t __io mode; +}; + +struct nix_rx_cfg { + struct mbox_msghdr hdr; +#define NIX_RX_OL3_VERIFY BIT(0) +#define NIX_RX_OL4_VERIFY BIT(1) + uint8_t __io len_verify; /* Outer L3/L4 len check */ +#define NIX_RX_CSUM_OL4_VERIFY BIT(0) + uint8_t __io csum_verify; /* Outer L4 checksum verification */ +}; + +struct nix_frs_cfg { + struct mbox_msghdr hdr; + uint8_t __io update_smq; /* Update SMQ's min/max lens */ + uint8_t __io update_minlen; /* Set minlen also */ + uint8_t __io sdp_link; /* Set SDP RX link */ + uint16_t __io maxlen; + uint16_t __io minlen; +}; + +struct nix_set_vlan_tpid { + struct mbox_msghdr hdr; +#define NIX_VLAN_TYPE_INNER 0 +#define NIX_VLAN_TYPE_OUTER 1 + uint8_t __io vlan_type; + uint16_t __io tpid; +}; + +struct nix_bp_cfg_req { + struct mbox_msghdr hdr; + uint16_t __io chan_base; /* Starting channel number */ + uint8_t __io chan_cnt; /* Number of channels */ + uint8_t __io bpid_per_chan; + /* bpid_per_chan = 0 assigns single bp id for range of channels */ + /* bpid_per_chan = 1 assigns separate bp id for each channel */ +}; + +/* PF can be mapped to either CGX or LBK interface, + * so maximum 64 channels are possible. 
+ */ +#define NIX_MAX_CHAN 64 +struct nix_bp_cfg_rsp { + struct mbox_msghdr hdr; + /* Channel and bpid mapping */ + uint16_t __io chan_bpid[NIX_MAX_CHAN]; + /* Number of channel for which bpids are assigned */ + uint8_t __io chan_cnt; +}; + +/* Global NIX inline IPSec configuration */ +struct nix_inline_ipsec_cfg { + struct mbox_msghdr hdr; + uint32_t __io cpt_credit; + struct { + uint8_t __io egrp; + uint8_t __io opcode; + } gen_cfg; + struct { + uint16_t __io cpt_pf_func; + uint8_t __io cpt_slot; + } inst_qsel; + uint8_t __io enable; +}; + +/* Per NIX LF inline IPSec configuration */ +struct nix_inline_ipsec_lf_cfg { + struct mbox_msghdr hdr; + uint64_t __io sa_base_addr; + struct { + uint32_t __io tag_const; + uint16_t __io lenm1_max; + uint8_t __io sa_pow2_size; + uint8_t __io tt; + } ipsec_cfg0; + struct { + uint32_t __io sa_idx_max; + uint8_t __io sa_idx_w; + } ipsec_cfg1; + uint8_t __io enable; +}; + +struct nix_hw_info { + struct mbox_msghdr hdr; + uint16_t __io vwqe_delay; + uint16_t __io rsvd[15]; +}; + +/* SSO mailbox error codes + * Range 501 - 600. + */ +enum sso_af_status { + SSO_AF_ERR_PARAM = -501, + SSO_AF_ERR_LF_INVALID = -502, + SSO_AF_ERR_AF_LF_ALLOC = -503, + SSO_AF_ERR_GRP_EBUSY = -504, + SSO_AF_INVAL_NPA_PF_FUNC = -505, +}; + +struct sso_lf_alloc_req { + struct mbox_msghdr hdr; + int __io node; + uint16_t __io hwgrps; +}; + +struct sso_lf_alloc_rsp { + struct mbox_msghdr hdr; + uint32_t __io xaq_buf_size; + uint32_t __io xaq_wq_entries; + uint32_t __io in_unit_entries; + uint16_t __io hwgrps; +}; + +struct sso_lf_free_req { + struct mbox_msghdr hdr; + int __io node; + uint16_t __io hwgrps; +}; + +/* SSOW mailbox error codes + * Range 601 - 700. 
+ */ +enum ssow_af_status { + SSOW_AF_ERR_PARAM = -601, + SSOW_AF_ERR_LF_INVALID = -602, + SSOW_AF_ERR_AF_LF_ALLOC = -603, +}; + +struct ssow_lf_alloc_req { + struct mbox_msghdr hdr; + int __io node; + uint16_t __io hws; +}; + +struct ssow_lf_free_req { + struct mbox_msghdr hdr; + int __io node; + uint16_t __io hws; +}; + +struct sso_hw_setconfig { + struct mbox_msghdr hdr; + uint32_t __io npa_aura_id; + uint16_t __io npa_pf_func; + uint16_t __io hwgrps; +}; + +struct sso_hw_xaq_release { + struct mbox_msghdr hdr; + uint16_t __io hwgrps; +}; + +struct sso_info_req { + struct mbox_msghdr hdr; + union { + uint16_t __io grp; + uint16_t __io hws; + }; +}; + +struct sso_grp_priority { + struct mbox_msghdr hdr; + uint16_t __io grp; + uint8_t __io priority; + uint8_t __io affinity; + uint8_t __io weight; +}; + +struct sso_grp_qos_cfg { + struct mbox_msghdr hdr; + uint16_t __io grp; + uint32_t __io xaq_limit; + uint16_t __io taq_thr; + uint16_t __io iaq_thr; +}; + +struct sso_grp_stats { + struct mbox_msghdr hdr; + uint16_t __io grp; + uint64_t __io ws_pc; + uint64_t __io ext_pc; + uint64_t __io wa_pc; + uint64_t __io ts_pc; + uint64_t __io ds_pc; + uint64_t __io dq_pc; + uint64_t __io aw_status; + uint64_t __io page_cnt; +}; + +struct sso_hws_stats { + struct mbox_msghdr hdr; + uint16_t __io hws; + uint64_t __io arbitration; +}; + +/* CPT mailbox error codes + * Range 901 - 1000. 
+ */ +enum cpt_af_status { + CPT_AF_ERR_PARAM = -901, + CPT_AF_ERR_GRP_INVALID = -902, + CPT_AF_ERR_LF_INVALID = -903, + CPT_AF_ERR_ACCESS_DENIED = -904, + CPT_AF_ERR_SSO_PF_FUNC_INVALID = -905, + CPT_AF_ERR_NIX_PF_FUNC_INVALID = -906, + CPT_AF_ERR_INLINE_IPSEC_INB_ENA = -907, + CPT_AF_ERR_INLINE_IPSEC_OUT_ENA = -908 +}; + +/* CPT mbox message formats */ + +struct cpt_rd_wr_reg_msg { + struct mbox_msghdr hdr; + uint64_t __io reg_offset; + uint64_t __io *ret_val; + uint64_t __io val; + uint8_t __io is_write; +}; + +struct cpt_set_crypto_grp_req_msg { + struct mbox_msghdr hdr; + uint8_t __io crypto_eng_grp; +}; + +struct cpt_lf_alloc_req_msg { + struct mbox_msghdr hdr; + uint16_t __io nix_pf_func; + uint16_t __io sso_pf_func; + uint16_t __io eng_grpmsk; + uint8_t __io blkaddr; +}; + +#define CPT_INLINE_INBOUND 0 +#define CPT_INLINE_OUTBOUND 1 + +struct cpt_inline_ipsec_cfg_msg { + struct mbox_msghdr hdr; + uint8_t __io enable; + uint8_t __io slot; + uint8_t __io dir; + uint8_t __io sso_pf_func_ovrd; + uint16_t __io sso_pf_func; /* Inbound path SSO_PF_FUNC */ + uint16_t __io nix_pf_func; /* Outbound path NIX_PF_FUNC */ +}; + +struct cpt_sts_req { + struct mbox_msghdr hdr; + uint8_t __io blkaddr; +}; + +struct cpt_sts_rsp { + struct mbox_msghdr hdr; + uint64_t __io inst_req_pc; + uint64_t __io inst_lat_pc; + uint64_t __io rd_req_pc; + uint64_t __io rd_lat_pc; + uint64_t __io rd_uc_pc; + uint64_t __io active_cycles_pc; + uint64_t __io ctx_mis_pc; + uint64_t __io ctx_hit_pc; + uint64_t __io ctx_aop_pc; + uint64_t __io ctx_aop_lat_pc; + uint64_t __io ctx_ifetch_pc; + uint64_t __io ctx_ifetch_lat_pc; + uint64_t __io ctx_ffetch_pc; + uint64_t __io ctx_ffetch_lat_pc; + uint64_t __io ctx_wback_pc; + uint64_t __io ctx_wback_lat_pc; + uint64_t __io ctx_psh_pc; + uint64_t __io ctx_psh_lat_pc; + uint64_t __io ctx_err; + uint64_t __io ctx_enc_id; + uint64_t __io ctx_flush_timer; + uint64_t __io rxc_time; + uint64_t __io rxc_time_cfg; + uint64_t __io rxc_active_sts; + uint64_t __io 
rxc_zombie_sts; + uint64_t __io busy_sts_ae; + uint64_t __io free_sts_ae; + uint64_t __io busy_sts_se; + uint64_t __io free_sts_se; + uint64_t __io busy_sts_ie; + uint64_t __io free_sts_ie; + uint64_t __io exe_err_info; + uint64_t __io cptclk_cnt; + uint64_t __io diag; + uint64_t __io rxc_dfrg; + uint64_t __io x2p_link_cfg0; + uint64_t __io x2p_link_cfg1; +}; + +struct cpt_rxc_time_cfg_req { + struct mbox_msghdr hdr; + int blkaddr; + uint32_t step; + uint16_t zombie_thres; + uint16_t zombie_limit; + uint16_t active_thres; + uint16_t active_limit; +}; + +struct cpt_rx_inline_lf_cfg_msg { + struct mbox_msghdr hdr; + uint16_t __io sso_pf_func; +}; + +enum cpt_eng_type { + CPT_ENG_TYPE_AE = 1, + CPT_ENG_TYPE_SE = 2, + CPT_ENG_TYPE_IE = 3, + CPT_MAX_ENG_TYPES, +}; + +/* CPT HW capabilities */ +union cpt_eng_caps { + uint64_t __io u; + struct { + uint64_t __io reserved_0_4 : 5; + uint64_t __io mul : 1; + uint64_t __io sha1_sha2 : 1; + uint64_t __io chacha20 : 1; + uint64_t __io zuc_snow3g : 1; + uint64_t __io sha3 : 1; + uint64_t __io aes : 1; + uint64_t __io kasumi : 1; + uint64_t __io des : 1; + uint64_t __io crc : 1; + uint64_t __io reserved_14_63 : 50; + }; +}; + +struct cpt_caps_rsp_msg { + struct mbox_msghdr hdr; + uint16_t __io cpt_pf_drv_version; + uint8_t __io cpt_revision; + union cpt_eng_caps eng_caps[CPT_MAX_ENG_TYPES]; +}; + +struct cpt_eng_grp_req { + struct mbox_msghdr hdr; + uint8_t __io eng_type; +}; + +struct cpt_eng_grp_rsp { + struct mbox_msghdr hdr; + uint8_t __io eng_type; + uint8_t __io eng_grp_num; +}; + +/* NPC mbox message structs */ + +#define NPC_MCAM_ENTRY_INVALID 0xFFFF +#define NPC_MCAM_INVALID_MAP 0xFFFF + +/* NPC mailbox error codes + * Range 701 - 800. 
+ */ +enum npc_af_status { + NPC_MCAM_INVALID_REQ = -701, + NPC_MCAM_ALLOC_DENIED = -702, + NPC_MCAM_ALLOC_FAILED = -703, + NPC_MCAM_PERM_DENIED = -704, + NPC_AF_ERR_HIGIG_CONFIG_FAIL = -705, +}; + +struct npc_mcam_alloc_entry_req { + struct mbox_msghdr hdr; +#define NPC_MAX_NONCONTIG_ENTRIES 256 + uint8_t __io contig; /* Contiguous entries ? */ +#define NPC_MCAM_ANY_PRIO 0 +#define NPC_MCAM_LOWER_PRIO 1 +#define NPC_MCAM_HIGHER_PRIO 2 + uint8_t __io priority; /* Lower or higher w.r.t ref_entry */ + uint16_t __io ref_entry; + uint16_t __io count; /* Number of entries requested */ +}; + +struct npc_mcam_alloc_entry_rsp { + struct mbox_msghdr hdr; + /* Entry alloc'ed or start index if contiguous. + * Invalid in case of non-contiguous. + */ + uint16_t __io entry; + uint16_t __io count; /* Number of entries allocated */ + uint16_t __io free_count; /* Number of entries available */ + uint16_t __io entry_list[NPC_MAX_NONCONTIG_ENTRIES]; +}; + +struct npc_mcam_free_entry_req { + struct mbox_msghdr hdr; + uint16_t __io entry; /* Entry index to be freed */ + uint8_t __io all; /* Free all entries alloc'ed to this PFVF */ +}; + +struct mcam_entry { +#define NPC_MAX_KWS_IN_KEY 7 /* Number of keywords in max key width */ + uint64_t __io kw[NPC_MAX_KWS_IN_KEY]; + uint64_t __io kw_mask[NPC_MAX_KWS_IN_KEY]; + uint64_t __io action; + uint64_t __io vtag_action; +}; + +struct npc_mcam_write_entry_req { + struct mbox_msghdr hdr; + struct mcam_entry entry_data; + uint16_t __io entry; /* MCAM entry to write this match key */ + uint16_t __io cntr; /* Counter for this MCAM entry */ + uint8_t __io intf; /* Rx or Tx interface */ + uint8_t __io enable_entry; /* Enable this MCAM entry ? */ + uint8_t __io set_cntr; /* Set counter for this entry ? 
*/ +}; + +/* Enable/Disable a given entry */ +struct npc_mcam_ena_dis_entry_req { + struct mbox_msghdr hdr; + uint16_t __io entry; +}; + +struct npc_mcam_shift_entry_req { + struct mbox_msghdr hdr; +#define NPC_MCAM_MAX_SHIFTS 64 + uint16_t __io curr_entry[NPC_MCAM_MAX_SHIFTS]; + uint16_t __io new_entry[NPC_MCAM_MAX_SHIFTS]; + uint16_t __io shift_count; /* Number of entries to shift */ +}; + +struct npc_mcam_shift_entry_rsp { + struct mbox_msghdr hdr; + /* Index in 'curr_entry', not entry itself */ + uint16_t __io failed_entry_idx; +}; + +struct npc_mcam_alloc_counter_req { + struct mbox_msghdr hdr; + uint8_t __io contig; /* Contiguous counters ? */ +#define NPC_MAX_NONCONTIG_COUNTERS 64 + uint16_t __io count; /* Number of counters requested */ +}; + +struct npc_mcam_alloc_counter_rsp { + struct mbox_msghdr hdr; + /* Counter alloc'ed or start idx if contiguous. + * Invalid in case of non-contiguous. + */ + uint16_t __io cntr; + uint16_t __io count; /* Number of counters allocated */ + uint16_t __io cntr_list[NPC_MAX_NONCONTIG_COUNTERS]; +}; + +struct npc_mcam_oper_counter_req { + struct mbox_msghdr hdr; + uint16_t __io cntr; /* Free a counter or clear/fetch its stats */ +}; + +struct npc_mcam_oper_counter_rsp { + struct mbox_msghdr hdr; + /* valid only while fetching counter's stats */ + uint64_t __io stat; +}; + +struct npc_mcam_unmap_counter_req { + struct mbox_msghdr hdr; + uint16_t __io cntr; + uint16_t __io entry; /* Entry and counter to be unmapped */ + uint8_t __io all; /* Unmap all entries using this counter ? */ +}; + +struct npc_mcam_alloc_and_write_entry_req { + struct mbox_msghdr hdr; + struct mcam_entry entry_data; + uint16_t __io ref_entry; + uint8_t __io priority; /* Lower or higher w.r.t ref_entry */ + uint8_t __io intf; /* Rx or Tx interface */ + uint8_t __io enable_entry; /* Enable this MCAM entry ? */ + uint8_t __io alloc_cntr; /* Allocate counter and map ?
*/ +}; + +struct npc_mcam_alloc_and_write_entry_rsp { + struct mbox_msghdr hdr; + uint16_t __io entry; + uint16_t __io cntr; +}; + +struct npc_get_kex_cfg_rsp { + struct mbox_msghdr hdr; + uint64_t __io rx_keyx_cfg; /* NPC_AF_INTF(0)_KEX_CFG */ + uint64_t __io tx_keyx_cfg; /* NPC_AF_INTF(1)_KEX_CFG */ +#define NPC_MAX_INTF 2 +#define NPC_MAX_LID 8 +#define NPC_MAX_LT 16 +#define NPC_MAX_LD 2 +#define NPC_MAX_LFL 16 + /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */ + uint64_t __io kex_ld_flags[NPC_MAX_LD]; + /* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */ + uint64_t __io intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT] + [NPC_MAX_LD]; + /* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */ + uint64_t __io intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL]; +#define MKEX_NAME_LEN 128 + uint8_t __io mkex_pfl_name[MKEX_NAME_LEN]; +}; + +enum header_fields { + NPC_DMAC, + NPC_SMAC, + NPC_ETYPE, + NPC_OUTER_VID, + NPC_TOS, + NPC_SIP_IPV4, + NPC_DIP_IPV4, + NPC_SIP_IPV6, + NPC_DIP_IPV6, + NPC_SPORT_TCP, + NPC_DPORT_TCP, + NPC_SPORT_UDP, + NPC_DPORT_UDP, + NPC_FDSA_VAL, + NPC_HEADER_FIELDS_MAX, +}; + +struct flow_msg { + unsigned char __io dmac[6]; + unsigned char __io smac[6]; + uint16_t __io etype; + uint16_t __io vlan_etype; + uint16_t __io vlan_tci; + union { + uint32_t __io ip4src; + uint32_t __io ip6src[4]; + }; + union { + uint32_t __io ip4dst; + uint32_t __io ip6dst[4]; + }; + uint8_t __io tos; + uint8_t __io ip_ver; + uint8_t __io ip_proto; + uint8_t __io tc; + uint16_t __io sport; + uint16_t __io dport; +}; + +struct npc_install_flow_req { + struct mbox_msghdr hdr; + struct flow_msg packet; + struct flow_msg mask; + uint64_t __io features; + uint16_t __io entry; + uint16_t __io channel; + uint8_t __io intf; + uint8_t __io set_cntr; + uint8_t __io default_rule; + /* Overwrite(0) or append(1) flow to default rule? 
*/ + uint8_t __io append; + uint16_t __io vf; + /* action */ + uint32_t __io index; + uint16_t __io match_id; + uint8_t __io flow_key_alg; + uint8_t __io op; + /* vtag action */ + uint8_t __io vtag0_type; + uint8_t __io vtag0_valid; + uint8_t __io vtag1_type; + uint8_t __io vtag1_valid; + + /* vtag tx action */ + uint16_t __io vtag0_def; + uint8_t __io vtag0_op; + uint16_t __io vtag1_def; + uint8_t __io vtag1_op; +}; + +struct npc_install_flow_rsp { + struct mbox_msghdr hdr; + /* Negative if no counter else counter number */ + int __io counter; +}; + +struct npc_delete_flow_req { + struct mbox_msghdr hdr; + uint16_t __io entry; + uint16_t __io start; /*Disable range of entries */ + uint16_t __io end; + uint8_t __io all; /* PF + VFs */ +}; + +struct npc_mcam_read_entry_req { + struct mbox_msghdr hdr; + /* MCAM entry to read */ + uint16_t __io entry; +}; + +struct npc_mcam_read_entry_rsp { + struct mbox_msghdr hdr; + struct mcam_entry entry_data; + uint8_t __io intf; + uint8_t __io enable; +}; + +struct npc_mcam_read_base_rule_rsp { + struct mbox_msghdr hdr; + struct mcam_entry entry_data; +}; + +/* TIM mailbox error codes + * Range 801 - 900. 
+ */ +enum tim_af_status { + TIM_AF_NO_RINGS_LEFT = -801, + TIM_AF_INVALID_NPA_PF_FUNC = -802, + TIM_AF_INVALID_SSO_PF_FUNC = -803, + TIM_AF_RING_STILL_RUNNING = -804, + TIM_AF_LF_INVALID = -805, + TIM_AF_CSIZE_NOT_ALIGNED = -806, + TIM_AF_CSIZE_TOO_SMALL = -807, + TIM_AF_CSIZE_TOO_BIG = -808, + TIM_AF_INTERVAL_TOO_SMALL = -809, + TIM_AF_INVALID_BIG_ENDIAN_VALUE = -810, + TIM_AF_INVALID_CLOCK_SOURCE = -811, + TIM_AF_GPIO_CLK_SRC_NOT_ENABLED = -812, + TIM_AF_INVALID_BSIZE = -813, + TIM_AF_INVALID_ENABLE_PERIODIC = -814, + TIM_AF_INVALID_ENABLE_DONTFREE = -815, + TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816, + TIM_AF_RING_ALREADY_DISABLED = -817, +}; + +enum tim_clk_srcs { + TIM_CLK_SRCS_TENNS = 0, + TIM_CLK_SRCS_GPIO = 1, + TIM_CLK_SRCS_GTI = 2, + TIM_CLK_SRCS_PTP = 3, + TIM_CLK_SRSC_INVALID, +}; + +enum tim_gpio_edge { + TIM_GPIO_NO_EDGE = 0, + TIM_GPIO_LTOH_TRANS = 1, + TIM_GPIO_HTOL_TRANS = 2, + TIM_GPIO_BOTH_TRANS = 3, + TIM_GPIO_INVALID, +}; + +enum ptp_op { + PTP_OP_ADJFINE = 0, /* adjfine(req.scaled_ppm); */ + PTP_OP_GET_CLOCK = 1, /* rsp.clk = get_clock() */ +}; + +struct ptp_req { + struct mbox_msghdr hdr; + uint8_t __io op; + int64_t __io scaled_ppm; + uint8_t __io is_pmu; +}; + +struct ptp_rsp { + struct mbox_msghdr hdr; + uint64_t __io clk; + uint64_t __io tsc; +}; + +struct get_hw_cap_rsp { + struct mbox_msghdr hdr; + /* Schq mapping fixed or flexible */ + uint8_t __io nix_fixed_txschq_mapping; + uint8_t __io nix_shaping; /* Is shaping and coloring supported */ +}; + +struct ndc_sync_op { + struct mbox_msghdr hdr; + uint8_t __io nix_lf_tx_sync; + uint8_t __io nix_lf_rx_sync; + uint8_t __io npa_lf_sync; +}; + +struct tim_lf_alloc_req { + struct mbox_msghdr hdr; + uint16_t __io ring; + uint16_t __io npa_pf_func; + uint16_t __io sso_pf_func; +}; + +struct tim_ring_req { + struct mbox_msghdr hdr; + uint16_t __io ring; +}; + +struct tim_config_req { + struct mbox_msghdr hdr; + uint16_t __io ring; + uint8_t __io bigendian; + uint8_t __io clocksource; + uint8_t 
__io enableperiodic; + uint8_t __io enabledontfreebuffer; + uint32_t __io bucketsize; + uint32_t __io chunksize; + uint32_t __io interval; + uint8_t __io gpioedge; +}; + +struct tim_lf_alloc_rsp { + struct mbox_msghdr hdr; + uint64_t __io tenns_clk; +}; + +struct tim_enable_rsp { + struct mbox_msghdr hdr; + uint64_t __io timestarted; + uint32_t __io currentbucket; +}; + +#endif /* __ROC_MBOX_H__ */

From patchwork Thu Apr 1 12:37:31 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90379
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:31 +0530
Message-ID: <20210401123817.14348-7-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 06/52] common/cnxk: add mailbox base infra

From: Jerin Jacob

This patch adds the mailbox infra APIs used to communicate with the kernel AF driver. These APIs will be used by all the other cnxk drivers for mbox init/fini and send/recv functionality.
Signed-off-by: Jerin Jacob --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_dev_priv.h | 3 + drivers/common/cnxk/roc_mbox.c | 483 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_mbox_priv.h | 215 ++++++++++++++++ drivers/common/cnxk/roc_platform.c | 2 + drivers/common/cnxk/roc_platform.h | 3 + drivers/common/cnxk/roc_priv.h | 3 + drivers/common/cnxk/roc_utils.c | 7 + drivers/common/cnxk/roc_utils.h | 2 + drivers/common/cnxk/version.map | 2 + 10 files changed, 721 insertions(+) create mode 100644 drivers/common/cnxk/roc_mbox.c create mode 100644 drivers/common/cnxk/roc_mbox_priv.h diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 3e0678d..a7e7968 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -11,6 +11,7 @@ endif config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON' deps = ['eal', 'pci', 'bus_pci', 'mbuf'] sources = files('roc_irq.c', + 'roc_mbox.c', 'roc_model.c', 'roc_platform.c', 'roc_utils.c') diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h index 2254677..c7f79f7 100644 --- a/drivers/common/cnxk/roc_dev_priv.h +++ b/drivers/common/cnxk/roc_dev_priv.h @@ -5,6 +5,9 @@ #ifndef _ROC_DEV_PRIV_H #define _ROC_DEV_PRIV_H +extern uint16_t dev_rclk_freq; +extern uint16_t dev_sclk_freq; + int dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb, void *data, unsigned int vec); void dev_irq_unregister(struct plt_intr_handle *intr_handle, diff --git a/drivers/common/cnxk/roc_mbox.c b/drivers/common/cnxk/roc_mbox.c new file mode 100644 index 0000000..6f4ee68 --- /dev/null +++ b/drivers/common/cnxk/roc_mbox.c @@ -0,0 +1,483 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include +#include +#include +#include + +#include "roc_api.h" +#include "roc_priv.h" + +#define RVU_AF_AFPF_MBOX0 (0x02000) +#define RVU_AF_AFPF_MBOX1 (0x02008) + +#define RVU_PF_PFAF_MBOX0 (0xC00) +#define RVU_PF_PFAF_MBOX1 (0xC08) + +#define RVU_PF_VFX_PFVF_MBOX0 (0x0000) +#define RVU_PF_VFX_PFVF_MBOX1 (0x0008) + +#define RVU_VF_VFPF_MBOX0 (0x0000) +#define RVU_VF_VFPF_MBOX1 (0x0008) + +/* RCLK, SCLK in MHz */ +uint16_t dev_rclk_freq; +uint16_t dev_sclk_freq; + +static inline uint16_t +msgs_offset(void) +{ + return PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); +} + +void +mbox_fini(struct mbox *mbox) +{ + mbox->reg_base = 0; + mbox->hwbase = 0; + plt_free(mbox->dev); + mbox->dev = NULL; +} + +void +mbox_reset(struct mbox *mbox, int devid) +{ + struct mbox_dev *mdev = &mbox->dev[devid]; + struct mbox_hdr *tx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start); + struct mbox_hdr *rx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + + plt_spinlock_lock(&mdev->mbox_lock); + mdev->msg_size = 0; + mdev->rsp_size = 0; + tx_hdr->msg_size = 0; + tx_hdr->num_msgs = 0; + rx_hdr->msg_size = 0; + rx_hdr->num_msgs = 0; + plt_spinlock_unlock(&mdev->mbox_lock); +} + +int +mbox_init(struct mbox *mbox, uintptr_t hwbase, uintptr_t reg_base, + int direction, int ndevs, uint64_t intr_offset) +{ + struct mbox_dev *mdev; + char *var, *var_to; + int devid; + + mbox->intr_offset = intr_offset; + mbox->reg_base = reg_base; + mbox->hwbase = hwbase; + + switch (direction) { + case MBOX_DIR_AFPF: + case MBOX_DIR_PFVF: + mbox->tx_start = MBOX_DOWN_TX_START; + mbox->rx_start = MBOX_DOWN_RX_START; + mbox->tx_size = MBOX_DOWN_TX_SIZE; + mbox->rx_size = MBOX_DOWN_RX_SIZE; + break; + case MBOX_DIR_PFAF: + case MBOX_DIR_VFPF: + mbox->tx_start = MBOX_DOWN_RX_START; + mbox->rx_start = MBOX_DOWN_TX_START; + mbox->tx_size = MBOX_DOWN_RX_SIZE; + mbox->rx_size = MBOX_DOWN_TX_SIZE; + break; + case MBOX_DIR_AFPF_UP: + case MBOX_DIR_PFVF_UP: + 
mbox->tx_start = MBOX_UP_TX_START; + mbox->rx_start = MBOX_UP_RX_START; + mbox->tx_size = MBOX_UP_TX_SIZE; + mbox->rx_size = MBOX_UP_RX_SIZE; + break; + case MBOX_DIR_PFAF_UP: + case MBOX_DIR_VFPF_UP: + mbox->tx_start = MBOX_UP_RX_START; + mbox->rx_start = MBOX_UP_TX_START; + mbox->tx_size = MBOX_UP_RX_SIZE; + mbox->rx_size = MBOX_UP_TX_SIZE; + break; + default: + return -ENODEV; + } + + switch (direction) { + case MBOX_DIR_AFPF: + case MBOX_DIR_AFPF_UP: + mbox->trigger = RVU_AF_AFPF_MBOX0; + mbox->tr_shift = 4; + break; + case MBOX_DIR_PFAF: + case MBOX_DIR_PFAF_UP: + mbox->trigger = RVU_PF_PFAF_MBOX1; + mbox->tr_shift = 0; + break; + case MBOX_DIR_PFVF: + case MBOX_DIR_PFVF_UP: + mbox->trigger = RVU_PF_VFX_PFVF_MBOX0; + mbox->tr_shift = 12; + break; + case MBOX_DIR_VFPF: + case MBOX_DIR_VFPF_UP: + mbox->trigger = RVU_VF_VFPF_MBOX1; + mbox->tr_shift = 0; + break; + default: + return -ENODEV; + } + + mbox->dev = plt_zmalloc(ndevs * sizeof(struct mbox_dev), ROC_ALIGN); + if (!mbox->dev) { + mbox_fini(mbox); + return -ENOMEM; + } + mbox->ndevs = ndevs; + for (devid = 0; devid < ndevs; devid++) { + mdev = &mbox->dev[devid]; + mdev->mbase = (void *)(mbox->hwbase + (devid * MBOX_SIZE)); + plt_spinlock_init(&mdev->mbox_lock); + /* Init header to reset value */ + mbox_reset(mbox, devid); + } + + var = getenv("ROC_CN10K_MBOX_TIMEOUT"); + var_to = getenv("ROC_MBOX_TIMEOUT"); + + if (var) + mbox->rsp_tmo = atoi(var); + else if (var_to) + mbox->rsp_tmo = atoi(var_to); + else + mbox->rsp_tmo = MBOX_RSP_TIMEOUT; + + return 0; +} + +/** + * @internal + * Allocate a message response + */ +struct mbox_msghdr * +mbox_alloc_msg_rsp(struct mbox *mbox, int devid, int size, int size_rsp) +{ + struct mbox_dev *mdev = &mbox->dev[devid]; + struct mbox_msghdr *msghdr = NULL; + + plt_spinlock_lock(&mdev->mbox_lock); + size = PLT_ALIGN(size, MBOX_MSG_ALIGN); + size_rsp = PLT_ALIGN(size_rsp, MBOX_MSG_ALIGN); + /* Check if there is space in mailbox */ + if ((mdev->msg_size + size) > 
mbox->tx_size - msgs_offset()) + goto exit; + if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset()) + goto exit; + if (mdev->msg_size == 0) + mdev->num_msgs = 0; + mdev->num_msgs++; + + msghdr = (struct mbox_msghdr *)(((uintptr_t)mdev->mbase + + mbox->tx_start + msgs_offset() + + mdev->msg_size)); + + /* Clear the whole msg region */ + mbox_memset(msghdr, 0, sizeof(*msghdr) + size); + /* Init message header with reset values */ + msghdr->ver = MBOX_VERSION; + mdev->msg_size += size; + mdev->rsp_size += size_rsp; + msghdr->next_msgoff = mdev->msg_size + msgs_offset(); +exit: + plt_spinlock_unlock(&mdev->mbox_lock); + + return msghdr; +} + +/** + * @internal + * Send a mailbox message + */ +void +mbox_msg_send(struct mbox *mbox, int devid) +{ + struct mbox_dev *mdev = &mbox->dev[devid]; + struct mbox_hdr *tx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start); + struct mbox_hdr *rx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + + /* Reset header for next messages */ + tx_hdr->msg_size = mdev->msg_size; + mdev->msg_size = 0; + mdev->rsp_size = 0; + mdev->msgs_acked = 0; + + /* num_msgs != 0 signals to the peer that the buffer has a number of + * messages. 
So this should be written after copying txmem + */ + tx_hdr->num_msgs = mdev->num_msgs; + rx_hdr->num_msgs = 0; + + /* Sync mbox data into memory */ + plt_wmb(); + + /* The interrupt should be fired after num_msgs is written + * to the shared memory + */ + plt_write64(1, (volatile void *)(mbox->reg_base + + (mbox->trigger | + (devid << mbox->tr_shift)))); +} + +/** + * @internal + * Wait and get mailbox response + */ +int +mbox_get_rsp(struct mbox *mbox, int devid, void **msg) +{ + struct mbox_dev *mdev = &mbox->dev[devid]; + struct mbox_msghdr *msghdr; + uint64_t offset; + int rc; + + rc = mbox_wait_for_rsp(mbox, devid); + if (rc < 0) + return -EIO; + + plt_rmb(); + + offset = mbox->rx_start + + PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + if (msg != NULL) + *msg = msghdr; + + return msghdr->rc; +} + +/** + * Polling for given wait time to get mailbox response + */ +static int +mbox_poll(struct mbox *mbox, uint32_t wait) +{ + uint32_t timeout = 0, sleep = 1; + uint32_t wait_us = wait * 1000; + uint64_t rsp_reg = 0; + uintptr_t reg_addr; + + reg_addr = mbox->reg_base + mbox->intr_offset; + do { + rsp_reg = plt_read64(reg_addr); + + if (timeout >= wait_us) + return -ETIMEDOUT; + + plt_delay_us(sleep); + timeout += sleep; + } while (!rsp_reg); + + plt_rmb(); + + /* Clear interrupt */ + plt_write64(rsp_reg, reg_addr); + + /* Reset mbox */ + mbox_reset(mbox, 0); + + return 0; +} + +/** + * @internal + * Wait and get mailbox response with timeout + */ +int +mbox_get_rsp_tmo(struct mbox *mbox, int devid, void **msg, uint32_t tmo) +{ + struct mbox_dev *mdev = &mbox->dev[devid]; + struct mbox_msghdr *msghdr; + uint64_t offset; + int rc; + + rc = mbox_wait_for_rsp_tmo(mbox, devid, tmo); + if (rc != 1) + return -EIO; + + plt_rmb(); + + offset = mbox->rx_start + + PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + if (msg != NULL) 
+ *msg = msghdr; + + return msghdr->rc; +} + +static int +mbox_wait(struct mbox *mbox, int devid, uint32_t rst_timo) +{ + volatile struct mbox_dev *mdev = &mbox->dev[devid]; + uint32_t timeout = 0, sleep = 1; + + rst_timo = rst_timo * 1000; /* Milli seconds to micro seconds */ + while (mdev->num_msgs > mdev->msgs_acked) { + plt_delay_us(sleep); + timeout += sleep; + if (timeout >= rst_timo) { + struct mbox_hdr *tx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + + mbox->tx_start); + struct mbox_hdr *rx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + + mbox->rx_start); + + plt_err("MBOX[devid: %d] message wait timeout %d, " + "num_msgs: %d, msgs_acked: %d " + "(tx/rx num_msgs: %d/%d), msg_size: %d, " + "rsp_size: %d", + devid, timeout, mdev->num_msgs, + mdev->msgs_acked, tx_hdr->num_msgs, + rx_hdr->num_msgs, mdev->msg_size, + mdev->rsp_size); + + return -EIO; + } + plt_rmb(); + } + return 0; +} + +int +mbox_wait_for_rsp_tmo(struct mbox *mbox, int devid, uint32_t tmo) +{ + struct mbox_dev *mdev = &mbox->dev[devid]; + int rc = 0; + + /* Sync with mbox region */ + plt_rmb(); + + if (mbox->trigger == RVU_PF_VFX_PFVF_MBOX1 || + mbox->trigger == RVU_PF_VFX_PFVF_MBOX0) { + /* In case of VF, Wait a bit more to account round trip delay */ + tmo = tmo * 2; + } + + /* Wait message */ + if (plt_thread_is_intr()) + rc = mbox_poll(mbox, tmo); + else + rc = mbox_wait(mbox, devid, tmo); + + if (!rc) + rc = mdev->num_msgs; + + return rc; +} + +/** + * @internal + * Wait for the mailbox response + */ +int +mbox_wait_for_rsp(struct mbox *mbox, int devid) +{ + return mbox_wait_for_rsp_tmo(mbox, devid, mbox->rsp_tmo); +} + +int +mbox_get_availmem(struct mbox *mbox, int devid) +{ + struct mbox_dev *mdev = &mbox->dev[devid]; + int avail; + + plt_spinlock_lock(&mdev->mbox_lock); + avail = mbox->tx_size - mdev->msg_size - msgs_offset(); + plt_spinlock_unlock(&mdev->mbox_lock); + + return avail; +} + +int +send_ready_msg(struct mbox *mbox, uint16_t *pcifunc) +{ + struct 
ready_msg_rsp *rsp; + int rc; + + mbox_alloc_msg_ready(mbox); + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (rsp->hdr.ver != MBOX_VERSION) { + plt_err("Incompatible MBox versions(AF: 0x%04x Client: 0x%04x)", + rsp->hdr.ver, MBOX_VERSION); + return -EPIPE; + } + + if (pcifunc) + *pcifunc = rsp->hdr.pcifunc; + + /* Save rclk & sclk freq */ + if (!dev_rclk_freq || !dev_sclk_freq) { + dev_rclk_freq = rsp->rclk_freq; + dev_sclk_freq = rsp->sclk_freq; + } + return 0; +} + +int +reply_invalid_msg(struct mbox *mbox, int devid, uint16_t pcifunc, uint16_t id) +{ + struct msg_rsp *rsp; + + rsp = (struct msg_rsp *)mbox_alloc_msg(mbox, devid, sizeof(*rsp)); + if (!rsp) + return -ENOMEM; + rsp->hdr.id = id; + rsp->hdr.sig = MBOX_RSP_SIG; + rsp->hdr.rc = MBOX_MSG_INVALID; + rsp->hdr.pcifunc = pcifunc; + + return 0; +} + +/** + * @internal + * Convert mail box ID to name + */ +const char * +mbox_id2name(uint16_t id) +{ + switch (id) { + default: + return "INVALID ID"; +#define M(_name, _id, _1, _2, _3) \ + case _id: \ + return #_name; + MBOX_MESSAGES + MBOX_UP_CGX_MESSAGES +#undef M + } +} + +int +mbox_id2size(uint16_t id) +{ + switch (id) { + default: + return 0; +#define M(_1, _id, _2, _req_type, _3) \ + case _id: \ + return sizeof(struct _req_type); + MBOX_MESSAGES + MBOX_UP_CGX_MESSAGES +#undef M + } +} diff --git a/drivers/common/cnxk/roc_mbox_priv.h b/drivers/common/cnxk/roc_mbox_priv.h new file mode 100644 index 0000000..84516fb --- /dev/null +++ b/drivers/common/cnxk/roc_mbox_priv.h @@ -0,0 +1,215 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#ifndef __ROC_MBOX_PRIV_H__ +#define __ROC_MBOX_PRIV_H__ + +#include +#include +#include + +#define SZ_64K (64ULL * 1024ULL) +#define SZ_1K (1ULL * 1024ULL) +#define MBOX_SIZE SZ_64K + +/* AF/PF: PF initiated, PF/VF VF initiated */ +#define MBOX_DOWN_RX_START 0 +#define MBOX_DOWN_RX_SIZE (46 * SZ_1K) +#define MBOX_DOWN_TX_START (MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE) +#define MBOX_DOWN_TX_SIZE (16 * SZ_1K) +/* AF/PF: AF initiated, PF/VF PF initiated */ +#define MBOX_UP_RX_START (MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE) +#define MBOX_UP_RX_SIZE SZ_1K +#define MBOX_UP_TX_START (MBOX_UP_RX_START + MBOX_UP_RX_SIZE) +#define MBOX_UP_TX_SIZE SZ_1K + +#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE +#error "Incorrect mailbox area sizes" +#endif + +#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull)) + +#define MBOX_RSP_TIMEOUT 3000 /* Time to wait for mbox response in ms */ + +#define MBOX_MSG_ALIGN 16 /* Align mbox msg start to 16bytes */ + +/* Mailbox directions */ +#define MBOX_DIR_AFPF 0 /* AF replies to PF */ +#define MBOX_DIR_PFAF 1 /* PF sends messages to AF */ +#define MBOX_DIR_PFVF 2 /* PF replies to VF */ +#define MBOX_DIR_VFPF 3 /* VF sends messages to PF */ +#define MBOX_DIR_AFPF_UP 4 /* AF sends messages to PF */ +#define MBOX_DIR_PFAF_UP 5 /* PF replies to AF */ +#define MBOX_DIR_PFVF_UP 6 /* PF sends messages to VF */ +#define MBOX_DIR_VFPF_UP 7 /* VF replies to PF */ + +struct mbox_dev { + void *mbase; /* This dev's mbox region */ + plt_spinlock_t mbox_lock; + uint16_t msg_size; /* Total msg size to be sent */ + uint16_t rsp_size; /* Total rsp size to be sure the reply is ok */ + uint16_t num_msgs; /* No of msgs sent or waiting for response */ + uint16_t msgs_acked; /* No of msgs for which response is received */ +}; + +struct mbox { + uintptr_t hwbase; /* Mbox region advertised by HW */ + uintptr_t reg_base; /* CSR base for this dev */ + uint64_t trigger; /* Trigger mbox notification */ + uint16_t tr_shift; /* Mbox trigger 
shift */ + uint64_t rx_start; /* Offset of Rx region in mbox memory */ + uint64_t tx_start; /* Offset of Tx region in mbox memory */ + uint16_t rx_size; /* Size of Rx region */ + uint16_t tx_size; /* Size of Tx region */ + uint16_t ndevs; /* The number of peers */ + struct mbox_dev *dev; + uint64_t intr_offset; /* Offset to interrupt register */ + uint32_t rsp_tmo; +}; + +const char *mbox_id2name(uint16_t id); +int mbox_id2size(uint16_t id); +void mbox_reset(struct mbox *mbox, int devid); +int mbox_init(struct mbox *mbox, uintptr_t hwbase, uintptr_t reg_base, + int direction, int ndevsi, uint64_t intr_offset); +void mbox_fini(struct mbox *mbox); +void mbox_msg_send(struct mbox *mbox, int devid); +int mbox_wait_for_rsp(struct mbox *mbox, int devid); +int mbox_wait_for_rsp_tmo(struct mbox *mbox, int devid, uint32_t tmo); +int mbox_get_rsp(struct mbox *mbox, int devid, void **msg); +int mbox_get_rsp_tmo(struct mbox *mbox, int devid, void **msg, uint32_t tmo); +int mbox_get_availmem(struct mbox *mbox, int devid); +struct mbox_msghdr *mbox_alloc_msg_rsp(struct mbox *mbox, int devid, int size, + int size_rsp); + +static inline struct mbox_msghdr * +mbox_alloc_msg(struct mbox *mbox, int devid, int size) +{ + return mbox_alloc_msg_rsp(mbox, devid, size, 0); +} + +static inline void +mbox_req_init(uint16_t mbox_id, void *msghdr) +{ + struct mbox_msghdr *hdr = msghdr; + + hdr->sig = MBOX_REQ_SIG; + hdr->ver = MBOX_VERSION; + hdr->id = mbox_id; + hdr->pcifunc = 0; +} + +static inline void +mbox_rsp_init(uint16_t mbox_id, void *msghdr) +{ + struct mbox_msghdr *hdr = msghdr; + + hdr->sig = MBOX_RSP_SIG; + hdr->rc = -ETIMEDOUT; + hdr->id = mbox_id; +} + +static inline bool +mbox_nonempty(struct mbox *mbox, int devid) +{ + struct mbox_dev *mdev = &mbox->dev[devid]; + bool ret; + + plt_spinlock_lock(&mdev->mbox_lock); + ret = mdev->num_msgs != 0; + plt_spinlock_unlock(&mdev->mbox_lock); + + return ret; +} + +static inline int +mbox_process(struct mbox *mbox) +{ + 
mbox_msg_send(mbox, 0); + return mbox_get_rsp(mbox, 0, NULL); +} + +static inline int +mbox_process_msg(struct mbox *mbox, void **msg) +{ + mbox_msg_send(mbox, 0); + return mbox_get_rsp(mbox, 0, msg); +} + +static inline int +mbox_process_tmo(struct mbox *mbox, uint32_t tmo) +{ + mbox_msg_send(mbox, 0); + return mbox_get_rsp_tmo(mbox, 0, NULL, tmo); +} + +static inline int +mbox_process_msg_tmo(struct mbox *mbox, void **msg, uint32_t tmo) +{ + mbox_msg_send(mbox, 0); + return mbox_get_rsp_tmo(mbox, 0, msg, tmo); +} + +int send_ready_msg(struct mbox *mbox, uint16_t *pf_func /* out */); +int reply_invalid_msg(struct mbox *mbox, int devid, uint16_t pf_func, + uint16_t id); + +#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ + static inline struct _req_type *mbox_alloc_msg_##_fn_name( \ + struct mbox *mbox) \ + { \ + struct _req_type *req; \ + req = (struct _req_type *)mbox_alloc_msg_rsp( \ + mbox, 0, sizeof(struct _req_type), \ + sizeof(struct _rsp_type)); \ + if (!req) \ + return NULL; \ + req->hdr.sig = MBOX_REQ_SIG; \ + req->hdr.id = _id; \ + plt_mbox_dbg("id=0x%x (%s)", req->hdr.id, \ + mbox_id2name(req->hdr.id)); \ + return req; \ + } + +MBOX_MESSAGES +#undef M + +/* This is required for copy operations from device memory which do not work on + * addresses which are unaligned to 16B. This is because of specific + * optimizations to libc memcpy. + */ +static inline volatile void * +mbox_memcpy(volatile void *d, const volatile void *s, size_t l) +{ + const volatile uint8_t *sb; + volatile uint8_t *db; + size_t i; + + if (!d || !s) + return NULL; + db = (volatile uint8_t *)d; + sb = (const volatile uint8_t *)s; + for (i = 0; i < l; i++) + db[i] = sb[i]; + return d; +} + +/* This is required for memory operations from device memory which do not + * work on addresses which are unaligned to 16B. This is because of specific + * optimizations to libc memset. 
+ */ +static inline void +mbox_memset(volatile void *d, uint8_t val, size_t l) +{ + volatile uint8_t *db; + size_t i = 0; + + if (!d || !l) + return; + db = (volatile uint8_t *)d; + for (i = 0; i < l; i++) + db[i] = val; +} + +#endif /* __ROC_MBOX_PRIV_H__ */ diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index ee1a28b..2cbabea 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -24,7 +24,9 @@ roc_plt_init(void) return -ENOMEM; } roc_model_init(mz->addr); + return 0; } RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE); +RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE); diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index 0fe0c18..0165f85 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -133,6 +133,8 @@ /* Log */ extern int cnxk_logtype_base; +extern int cnxk_logtype_mbox; + #define plt_err(fmt, args...) \ RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args) #define plt_info(fmt, args...) RTE_LOG(INFO, PMD, fmt "\n", ##args) @@ -148,6 +150,7 @@ extern int cnxk_logtype_base; ##args) #define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__) +#define plt_mbox_dbg(fmt, ...) 
plt_dbg(mbox, fmt, ##__VA_ARGS__) #ifdef __cplusplus #define CNXK_PCI_ID(subsystem_dev, dev) \ diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h index cd87035..c385f11 100644 --- a/drivers/common/cnxk/roc_priv.h +++ b/drivers/common/cnxk/roc_priv.h @@ -8,6 +8,9 @@ /* Utils */ #include "roc_util_priv.h" +/* Mbox */ +#include "roc_mbox_priv.h" + /* Dev */ #include "roc_dev_priv.h" diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c index c48f027..b21064a 100644 --- a/drivers/common/cnxk/roc_utils.c +++ b/drivers/common/cnxk/roc_utils.c @@ -33,3 +33,10 @@ roc_error_msg_get(int errorcode) return err_msg; } + +void +roc_clk_freq_get(uint16_t *rclk_freq, uint16_t *sclk_freq) +{ + *rclk_freq = dev_rclk_freq; + *sclk_freq = dev_sclk_freq; +} diff --git a/drivers/common/cnxk/roc_utils.h b/drivers/common/cnxk/roc_utils.h index 634810e..32d44ae 100644 --- a/drivers/common/cnxk/roc_utils.h +++ b/drivers/common/cnxk/roc_utils.h @@ -10,4 +10,6 @@ /* Utils */ const char *__roc_api roc_error_msg_get(int errorcode); +void __roc_api roc_clk_freq_get(uint16_t *rclk_freq, uint16_t *sclk_freq); + #endif /* _ROC_UTILS_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 7102704..242ba87 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -2,6 +2,8 @@ INTERNAL { global: cnxk_logtype_base; + cnxk_logtype_mbox; + roc_clk_freq_get; roc_error_msg_get; roc_model; roc_plt_init; From patchwork Thu Apr 1 12:37:32 2021 X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90380 X-Patchwork-Delegate: jerinj@marvell.com From: Nithin Dabilpuram Date: Thu, 1 Apr 2021 18:07:32 +0530 Message-ID: <20210401123817.14348-8-ndabilpuram@marvell.com> In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com> References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com> Subject: [dpdk-dev] [PATCH v3 07/52] common/cnxk: add base device class From: Jerin Jacob Introduce a 'dev' class to hold cnxk PCIe device specific information and operations. All cnxk PCIe drivers (ethdev, mempool, cryptodev and eventdev) inherit this base object to access common functionality such as mailbox creation, interrupt registration, LMT setup and VF mbox message forwarding.
Signed-off-by: Jerin Jacob --- drivers/common/cnxk/meson.build | 4 +- drivers/common/cnxk/roc_api.h | 3 + drivers/common/cnxk/roc_dev.c | 362 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_dev_priv.h | 42 +++++ drivers/common/cnxk/roc_idev.c | 77 ++++++++ drivers/common/cnxk/roc_idev.h | 12 ++ drivers/common/cnxk/roc_idev_priv.h | 22 +++ drivers/common/cnxk/roc_priv.h | 3 + drivers/common/cnxk/version.map | 2 + 9 files changed, 526 insertions(+), 1 deletion(-) create mode 100644 drivers/common/cnxk/roc_dev.c create mode 100644 drivers/common/cnxk/roc_idev.c create mode 100644 drivers/common/cnxk/roc_idev.h create mode 100644 drivers/common/cnxk/roc_idev_priv.h diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index a7e7968..17cbc36 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -10,7 +10,9 @@ endif config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON' deps = ['eal', 'pci', 'bus_pci', 'mbuf'] -sources = files('roc_irq.c', +sources = files('roc_dev.c', + 'roc_idev.c', + 'roc_irq.c', 'roc_mbox.c', 'roc_model.c', 'roc_platform.c', diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h index f2f1f5e..27ddc3a 100644 --- a/drivers/common/cnxk/roc_api.h +++ b/drivers/common/cnxk/roc_api.h @@ -82,4 +82,7 @@ /* Utils */ #include "roc_utils.h" +/* Idev */ +#include "roc_idev.h" + #endif /* _ROC_API_H_ */ diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c new file mode 100644 index 0000000..380c71b --- /dev/null +++ b/drivers/common/cnxk/roc_dev.c @@ -0,0 +1,362 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include +#include +#include +#include +#include + +#include "roc_api.h" +#include "roc_priv.h" + +/* PCI Extended capability ID */ +#define ROC_PCI_EXT_CAP_ID_SRIOV 0x10 /* SRIOV cap */ + +/* Single Root I/O Virtualization */ +#define ROC_PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */ + +static void +process_msgs(struct dev *dev, struct mbox *mbox) +{ + struct mbox_dev *mdev = &mbox->dev[0]; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + int msgs_acked = 0; + int offset; + uint16_t i; + + req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + if (req_hdr->num_msgs == 0) + return; + + offset = mbox->rx_start + PLT_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); + for (i = 0; i < req_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + + msgs_acked++; + plt_base_dbg("Message 0x%x (%s) pf:%d/vf:%d", msg->id, + mbox_id2name(msg->id), dev_get_pf(msg->pcifunc), + dev_get_vf(msg->pcifunc)); + + switch (msg->id) { + /* Add message id's that are handled here */ + case MBOX_MSG_READY: + /* Get our identity */ + dev->pf_func = msg->pcifunc; + break; + + default: + if (msg->rc) + plt_err("Message (%s) response has err=%d", + mbox_id2name(msg->id), msg->rc); + break; + } + offset = mbox->rx_start + msg->next_msgoff; + } + + mbox_reset(mbox, 0); + /* Update acked if someone is waiting a message */ + mdev->msgs_acked = msgs_acked; + plt_wmb(); +} + +static int +mbox_process_msgs_up(struct dev *dev, struct mbox_msghdr *req) +{ + /* Check if valid, if not reply with a invalid msg */ + if (req->sig != MBOX_REQ_SIG) + return -EIO; + + switch (req->id) { + default: + reply_invalid_msg(&dev->mbox_up, 0, 0, req->id); + break; + } + + return -ENODEV; +} + +static void +process_msgs_up(struct dev *dev, struct mbox *mbox) +{ + struct mbox_dev *mdev = &mbox->dev[0]; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + int i, err, offset; + + req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + if 
(req_hdr->num_msgs == 0) + return; + + offset = mbox->rx_start + PLT_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); + for (i = 0; i < req_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + + plt_base_dbg("Message 0x%x (%s) pf:%d/vf:%d", msg->id, + mbox_id2name(msg->id), dev_get_pf(msg->pcifunc), + dev_get_vf(msg->pcifunc)); + err = mbox_process_msgs_up(dev, msg); + if (err) + plt_err("Error %d handling 0x%x (%s)", err, msg->id, + mbox_id2name(msg->id)); + offset = mbox->rx_start + msg->next_msgoff; + } + /* Send mbox responses */ + if (mdev->num_msgs) { + plt_base_dbg("Reply num_msgs:%d", mdev->num_msgs); + mbox_msg_send(mbox, 0); + } +} + +static void +roc_af_pf_mbox_irq(void *param) +{ + struct dev *dev = param; + uint64_t intr; + + intr = plt_read64(dev->bar2 + RVU_PF_INT); + if (intr == 0) + plt_base_dbg("Proceeding to check mbox UP messages if any"); + + plt_write64(intr, dev->bar2 + RVU_PF_INT); + plt_base_dbg("Irq 0x%" PRIx64 "(pf:%d)", intr, dev->pf); + + /* First process all configuration messages */ + process_msgs(dev, dev->mbox); + + /* Process Uplink messages */ + process_msgs_up(dev, &dev->mbox_up); +} + +static int +mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev) +{ + struct plt_intr_handle *intr_handle = &pci_dev->intr_handle; + int rc; + + plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); + + /* MBOX interrupt AF <-> PF */ + rc = dev_irq_register(intr_handle, roc_af_pf_mbox_irq, dev, + RVU_PF_INT_VEC_AFPF_MBOX); + if (rc) { + plt_err("Fail to register AF<->PF mbox irq"); + return rc; + } + + plt_write64(~0ull, dev->bar2 + RVU_PF_INT); + plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); + + return rc; +} + +static int +mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev) +{ + return mbox_register_pf_irq(pci_dev, dev); +} + +static void +mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev) +{ + struct plt_intr_handle *intr_handle = &pci_dev->intr_handle; + + 
plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); + + /* MBOX interrupt AF <-> PF */ + dev_irq_unregister(intr_handle, roc_af_pf_mbox_irq, dev, + RVU_PF_INT_VEC_AFPF_MBOX); +} + +static void +mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev) +{ + mbox_unregister_pf_irq(pci_dev, dev); +} + +static uint16_t +dev_pf_total_vfs(struct plt_pci_device *pci_dev) +{ + uint16_t total_vfs = 0; + int sriov_pos, rc; + + sriov_pos = + plt_pci_find_ext_capability(pci_dev, ROC_PCI_EXT_CAP_ID_SRIOV); + if (sriov_pos <= 0) { + plt_warn("Unable to find SRIOV cap, rc=%d", sriov_pos); + return 0; + } + + rc = plt_pci_read_config(pci_dev, &total_vfs, 2, + sriov_pos + ROC_PCI_SRIOV_TOTAL_VF); + if (rc < 0) { + plt_warn("Unable to read SRIOV cap, rc=%d", rc); + return 0; + } + + return total_vfs; +} + +static int +dev_setup_shared_lmt_region(struct mbox *mbox) +{ + struct lmtst_tbl_setup_req *req; + + req = mbox_alloc_msg_lmtst_tbl_setup(mbox); + req->pcifunc = idev_lmt_pffunc_get(); + + return mbox_process(mbox); +} + +static int +dev_lmt_setup(struct plt_pci_device *pci_dev, struct dev *dev) +{ + uint64_t bar4_mbox_sz = MBOX_SIZE; + struct idev_cfg *idev; + int rc; + + if (roc_model_is_cn9k()) { + dev->lmt_base = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20); + return 0; + } + + /* [CN10K, .) */ + + /* Set common lmt region from second pf_func onwards. */ + if (!dev->disable_shared_lmt && idev_lmt_pffunc_get() && + dev->pf_func != idev_lmt_pffunc_get()) { + rc = dev_setup_shared_lmt_region(dev->mbox); + if (!rc) { + dev->lmt_base = roc_idev_lmt_base_addr_get(); + return rc; + } + plt_err("Failed to setup shared lmt region, pf_func %d err %d " + "Using respective LMT region per pf func", + dev->pf_func, rc); + } + + /* PF BAR4 should always be sufficient enough to + * hold PF-AF MBOX + PF-VF MBOX + LMT lines. 
+ */ + if (pci_dev->mem_resource[4].len < + (bar4_mbox_sz + (RVU_LMT_LINE_MAX * RVU_LMT_SZ))) { + plt_err("Not enough bar4 space for lmt lines and mbox"); + return -EFAULT; + } + + /* LMT base is just after total VF MBOX area */ + bar4_mbox_sz += (MBOX_SIZE * dev_pf_total_vfs(pci_dev)); + dev->lmt_base = dev->bar4 + bar4_mbox_sz; + + /* Base LMT address should be chosen from only those pci funcs which + * participate in LMT shared mode. + */ + if (!dev->disable_shared_lmt) { + idev = idev_get_cfg(); + if (!__atomic_load_n(&idev->lmt_pf_func, __ATOMIC_ACQUIRE)) { + idev->lmt_base_addr = dev->lmt_base; + idev->lmt_pf_func = dev->pf_func; + idev->num_lmtlines = RVU_LMT_LINE_MAX; + } + } + + return 0; +} + +int +dev_init(struct dev *dev, struct plt_pci_device *pci_dev) +{ + int direction, up_direction, rc; + uintptr_t bar2, bar4, mbox; + uint64_t intr_offset; + + bar2 = (uintptr_t)pci_dev->mem_resource[2].addr; + bar4 = (uintptr_t)pci_dev->mem_resource[4].addr; + if (bar2 == 0 || bar4 == 0) { + plt_err("Failed to get PCI bars"); + rc = -ENODEV; + goto error; + } + + /* Trigger fault on bar2 and bar4 regions + * to avoid BUG_ON in remap_pfn_range() + * in latest kernel. 
+ */ + *(volatile uint64_t *)bar2; + *(volatile uint64_t *)bar4; + + /* Check ROC model supported */ + if (roc_model->flag == 0) { + rc = UTIL_ERR_INVALID_MODEL; + goto error; + } + + dev->bar2 = bar2; + dev->bar4 = bar4; + + mbox = bar4; + direction = MBOX_DIR_PFAF; + up_direction = MBOX_DIR_PFAF_UP; + intr_offset = RVU_PF_INT; + + /* Initialize the local mbox */ + rc = mbox_init(&dev->mbox_local, mbox, bar2, direction, 1, intr_offset); + if (rc) + goto error; + dev->mbox = &dev->mbox_local; + + rc = mbox_init(&dev->mbox_up, mbox, bar2, up_direction, 1, intr_offset); + if (rc) + goto mbox_fini; + + /* Register mbox interrupts */ + rc = mbox_register_irq(pci_dev, dev); + if (rc) + goto mbox_fini; + + /* Check the readiness of PF/VF */ + rc = send_ready_msg(dev->mbox, &dev->pf_func); + if (rc) + goto mbox_unregister; + + dev->pf = dev_get_pf(dev->pf_func); + + dev->mbox_active = 1; + + /* Setup LMT line base */ + rc = dev_lmt_setup(pci_dev, dev); + if (rc) + goto iounmap; + + return rc; +iounmap: +mbox_unregister: + mbox_unregister_irq(pci_dev, dev); +mbox_fini: + mbox_fini(dev->mbox); + mbox_fini(&dev->mbox_up); +error: + return rc; +} + +int +dev_fini(struct dev *dev, struct plt_pci_device *pci_dev) +{ + struct plt_intr_handle *intr_handle = &pci_dev->intr_handle; + struct mbox *mbox; + + mbox_unregister_irq(pci_dev, dev); + + /* Release PF - AF */ + mbox = dev->mbox; + mbox_fini(mbox); + mbox = &dev->mbox_up; + mbox_fini(mbox); + dev->mbox_active = 0; + + /* Disable MSIX vectors */ + dev_irqs_disable(intr_handle); + return 0; +} diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h index c7f79f7..c0308e7 100644 --- a/drivers/common/cnxk/roc_dev_priv.h +++ b/drivers/common/cnxk/roc_dev_priv.h @@ -5,9 +5,51 @@ #ifndef _ROC_DEV_PRIV_H #define _ROC_DEV_PRIV_H +#define RVU_PFVF_PF_SHIFT 10 +#define RVU_PFVF_PF_MASK 0x3F +#define RVU_PFVF_FUNC_SHIFT 0 +#define RVU_PFVF_FUNC_MASK 0x3FF +#define RVU_MAX_INT_RETRY 3 + +static inline int 
+dev_get_vf(uint16_t pf_func) +{ + return (((pf_func >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK) - 1); +} + +static inline int +dev_get_pf(uint16_t pf_func) +{ + return (pf_func >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK; +} + +static inline int +dev_pf_func(int pf, int vf) +{ + return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1); +} + +struct dev { + uint16_t pf; + uint16_t pf_func; + uint8_t mbox_active; + bool drv_inited; + uintptr_t bar2; + uintptr_t bar4; + uintptr_t lmt_base; + struct mbox mbox_local; + struct mbox mbox_up; + uint64_t hwcap; + struct mbox *mbox; + bool disable_shared_lmt; /* false(default): shared lmt mode enabled */ +} __plt_cache_aligned; + extern uint16_t dev_rclk_freq; extern uint16_t dev_sclk_freq; +int dev_init(struct dev *dev, struct plt_pci_device *pci_dev); +int dev_fini(struct dev *dev, struct plt_pci_device *pci_dev); + int dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb, void *data, unsigned int vec); void dev_irq_unregister(struct plt_intr_handle *intr_handle, diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c new file mode 100644 index 0000000..7fbbbdc --- /dev/null +++ b/drivers/common/cnxk/roc_idev.c @@ -0,0 +1,77 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +struct idev_cfg * +idev_get_cfg(void) +{ + static const char name[] = "roc_cn10k_intra_device_conf"; + const struct plt_memzone *mz; + struct idev_cfg *idev; + + mz = plt_memzone_lookup(name); + if (mz != NULL) + return mz->addr; + + /* Request for the first time */ + mz = plt_memzone_reserve_cache_align(name, sizeof(struct idev_cfg)); + if (mz != NULL) { + idev = mz->addr; + idev_set_defaults(idev); + return idev; + } + return NULL; +} + +void +idev_set_defaults(struct idev_cfg *idev) +{ + idev->lmt_pf_func = 0; + idev->lmt_base_addr = 0; + idev->num_lmtlines = 0; +} + +uint16_t +idev_lmt_pffunc_get(void) +{ + struct idev_cfg *idev; + uint16_t lmt_pf_func; + + idev = idev_get_cfg(); + lmt_pf_func = 0; + if (idev != NULL) + lmt_pf_func = idev->lmt_pf_func; + + return lmt_pf_func; +} + +uint64_t +roc_idev_lmt_base_addr_get(void) +{ + uint64_t lmt_base_addr; + struct idev_cfg *idev; + + idev = idev_get_cfg(); + lmt_base_addr = 0; + if (idev != NULL) + lmt_base_addr = idev->lmt_base_addr; + + return lmt_base_addr; +} + +uint16_t +roc_idev_num_lmtlines_get(void) +{ + struct idev_cfg *idev; + uint16_t num_lmtlines; + + idev = idev_get_cfg(); + num_lmtlines = 0; + if (idev != NULL) + num_lmtlines = idev->num_lmtlines; + + return num_lmtlines; +} diff --git a/drivers/common/cnxk/roc_idev.h b/drivers/common/cnxk/roc_idev.h new file mode 100644 index 0000000..dff0741 --- /dev/null +++ b/drivers/common/cnxk/roc_idev.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#ifndef _ROC_IDEV_H_ +#define _ROC_IDEV_H_ + +/* LMT */ +uint64_t __roc_api roc_idev_lmt_base_addr_get(void); +uint16_t __roc_api roc_idev_num_lmtlines_get(void); + +#endif /* _ROC_IDEV_H_ */ diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h new file mode 100644 index 0000000..a096288 --- /dev/null +++ b/drivers/common/cnxk/roc_idev_priv.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_IDEV_PRIV_H_ +#define _ROC_IDEV_PRIV_H_ + +/* Intra device related functions */ +struct idev_cfg { + uint16_t lmt_pf_func; + uint16_t num_lmtlines; + uint64_t lmt_base_addr; +}; + +/* Generic */ +struct idev_cfg *idev_get_cfg(void); +void idev_set_defaults(struct idev_cfg *idev); + +/* idev lmt */ +uint16_t idev_lmt_pffunc_get(void); + +#endif /* _ROC_IDEV_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h index c385f11..2df2d66 100644 --- a/drivers/common/cnxk/roc_priv.h +++ b/drivers/common/cnxk/roc_priv.h @@ -14,4 +14,7 @@ /* Dev */ #include "roc_dev_priv.h" +/* idev */ +#include "roc_idev_priv.h" + #endif /* _ROC_PRIV_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 242ba87..9279277 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -5,6 +5,8 @@ INTERNAL { cnxk_logtype_mbox; roc_clk_freq_get; roc_error_msg_get; + roc_idev_lmt_base_addr_get; + roc_idev_num_lmtlines_get; roc_model; roc_plt_init; From patchwork Thu Apr 1 12:37:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90381 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5F775A0548; Thu, 1 Apr 2021 14:40:11 +0200 
(CEST)
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:33 +0530
Message-ID: <20210401123817.14348-9-ndabilpuram@marvell.com>
X-Mailer:
git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 08/52] common/cnxk: add VF support to base device class

From: Jerin Jacob

Add VF specific handling such as BAR4 setup, forwarding VF mbox messages to AF and vice-versa, VF FLR handling etc.

Signed-off-by: Jerin Jacob
---
 drivers/common/cnxk/roc_dev.c | 857 ++++++++++++++++++++++++++++++++++++-
 drivers/common/cnxk/roc_dev_priv.h | 42 ++
 2 files changed, 879 insertions(+), 20 deletions(-)

diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c index 380c71b..4cd5978 100644 --- a/drivers/common/cnxk/roc_dev.c +++ b/drivers/common/cnxk/roc_dev.c @@ -17,6 +17,337 @@ /* Single Root I/O Virtualization */ #define ROC_PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */ +static void * +mbox_mem_map(off_t off, size_t size) +{ + void *va = MAP_FAILED; + int mem_fd; + + if (size <= 0 || !off) { + plt_err("Invalid mbox area off 0x%lx size %lu", off, size); + goto error; + } + + mem_fd = open("/dev/mem", O_RDWR); + if (mem_fd < 0) + goto error; + + va = plt_mmap(NULL, size, PLT_PROT_READ | PLT_PROT_WRITE, + PLT_MAP_SHARED, mem_fd, off); + close(mem_fd); + + if (va == MAP_FAILED) + plt_err("Failed to mmap sz=0x%zx, fd=%d, off=%jd", size, mem_fd, + (intmax_t)off); +error: + return va; +} + +static void +mbox_mem_unmap(void *va, size_t size) +{ + if (va) + munmap(va,
size); +} + +static int +pf_af_sync_msg(struct dev *dev, struct mbox_msghdr **rsp) +{ + uint32_t timeout = 0, sleep = 1; + struct mbox *mbox = dev->mbox; + struct mbox_dev *mdev = &mbox->dev[0]; + + volatile uint64_t int_status; + struct mbox_msghdr *msghdr; + uint64_t off; + int rc = 0; + + /* We need to disable PF interrupts. We are in timer interrupt */ + plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); + + /* Send message */ + mbox_msg_send(mbox, 0); + + do { + plt_delay_ms(sleep); + timeout += sleep; + if (timeout >= mbox->rsp_tmo) { + plt_err("Message timeout: %dms", mbox->rsp_tmo); + rc = -EIO; + break; + } + int_status = plt_read64(dev->bar2 + RVU_PF_INT); + } while ((int_status & 0x1) != 0x1); + + /* Clear */ + plt_write64(int_status, dev->bar2 + RVU_PF_INT); + + /* Enable interrupts */ + plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); + + if (rc == 0) { + /* Get message */ + off = mbox->rx_start + + PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + off); + if (rsp) + *rsp = msghdr; + rc = msghdr->rc; + } + + return rc; +} + +static int +af_pf_wait_msg(struct dev *dev, uint16_t vf, int num_msg) +{ + uint32_t timeout = 0, sleep = 1; + struct mbox *mbox = dev->mbox; + struct mbox_dev *mdev = &mbox->dev[0]; + volatile uint64_t int_status; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + struct mbox_msghdr *rsp; + uint64_t offset; + size_t size; + int i; + + /* We need to disable PF interrupts. 
We are in timer interrupt */ + plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); + + /* Send message */ + mbox_msg_send(mbox, 0); + + do { + plt_delay_ms(sleep); + timeout++; + if (timeout >= mbox->rsp_tmo) { + plt_err("Routed messages %d timeout: %dms", num_msg, + mbox->rsp_tmo); + break; + } + int_status = plt_read64(dev->bar2 + RVU_PF_INT); + } while ((int_status & 0x1) != 0x1); + + /* Clear */ + plt_write64(~0ull, dev->bar2 + RVU_PF_INT); + + /* Enable interrupts */ + plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); + + plt_spinlock_lock(&mdev->mbox_lock); + + req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + if (req_hdr->num_msgs != num_msg) + plt_err("Routed messages: %d received: %d", num_msg, + req_hdr->num_msgs); + + /* Get messages from mbox */ + offset = mbox->rx_start + + PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + for (i = 0; i < req_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + size = mbox->rx_start + msg->next_msgoff - offset; + + /* Reserve PF/VF mbox message */ + size = PLT_ALIGN(size, MBOX_MSG_ALIGN); + rsp = mbox_alloc_msg(&dev->mbox_vfpf, vf, size); + mbox_rsp_init(msg->id, rsp); + + /* Copy message from AF<->PF mbox to PF<->VF mbox */ + mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr), + (uint8_t *)msg + sizeof(struct mbox_msghdr), + size - sizeof(struct mbox_msghdr)); + + /* Set status and sender pf_func data */ + rsp->rc = msg->rc; + rsp->pcifunc = msg->pcifunc; + + offset = mbox->rx_start + msg->next_msgoff; + } + plt_spinlock_unlock(&mdev->mbox_lock); + + return req_hdr->num_msgs; +} + +static int +vf_pf_process_msgs(struct dev *dev, uint16_t vf) +{ + struct mbox *mbox = &dev->mbox_vfpf; + struct mbox_dev *mdev = &mbox->dev[vf]; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + int offset, routed = 0; + size_t size; + uint16_t i; + + req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + if (!req_hdr->num_msgs) + return 0; + + 
offset = mbox->rx_start + PLT_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); + + for (i = 0; i < req_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + size = mbox->rx_start + msg->next_msgoff - offset; + + /* RVU_PF_FUNC_S */ + msg->pcifunc = dev_pf_func(dev->pf, vf); + + if (msg->id == MBOX_MSG_READY) { + struct ready_msg_rsp *rsp; + uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8; + + /* Handle READY message in PF */ + dev->active_vfs[vf / max_bits] |= + BIT_ULL(vf % max_bits); + rsp = (struct ready_msg_rsp *)mbox_alloc_msg( + mbox, vf, sizeof(*rsp)); + mbox_rsp_init(msg->id, rsp); + + /* PF/VF function ID */ + rsp->hdr.pcifunc = msg->pcifunc; + rsp->hdr.rc = 0; + } else { + struct mbox_msghdr *af_req; + /* Reserve AF/PF mbox message */ + size = PLT_ALIGN(size, MBOX_MSG_ALIGN); + af_req = mbox_alloc_msg(dev->mbox, 0, size); + if (af_req == NULL) + return -ENOSPC; + mbox_req_init(msg->id, af_req); + + /* Copy message from VF<->PF mbox to PF<->AF mbox */ + mbox_memcpy((uint8_t *)af_req + + sizeof(struct mbox_msghdr), + (uint8_t *)msg + sizeof(struct mbox_msghdr), + size - sizeof(struct mbox_msghdr)); + af_req->pcifunc = msg->pcifunc; + routed++; + } + offset = mbox->rx_start + msg->next_msgoff; + } + + if (routed > 0) { + plt_base_dbg("pf:%d routed %d messages from vf:%d to AF", + dev->pf, routed, vf); + af_pf_wait_msg(dev, vf, routed); + mbox_reset(dev->mbox, 0); + } + + /* Send mbox responses to VF */ + if (mdev->num_msgs) { + plt_base_dbg("pf:%d reply %d messages to vf:%d", dev->pf, + mdev->num_msgs, vf); + mbox_msg_send(mbox, vf); + } + + return i; +} + +static int +vf_pf_process_up_msgs(struct dev *dev, uint16_t vf) +{ + struct mbox *mbox = &dev->mbox_vfpf_up; + struct mbox_dev *mdev = &mbox->dev[vf]; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + int msgs_acked = 0; + int offset; + uint16_t i; + + req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + if (req_hdr->num_msgs == 0) + return 0; + + 
offset = mbox->rx_start + PLT_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); + + for (i = 0; i < req_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + + msgs_acked++; + /* RVU_PF_FUNC_S */ + msg->pcifunc = dev_pf_func(dev->pf, vf); + + switch (msg->id) { + case MBOX_MSG_CGX_LINK_EVENT: + plt_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)", + msg->id, mbox_id2name(msg->id), + msg->pcifunc, dev_get_pf(msg->pcifunc), + dev_get_vf(msg->pcifunc)); + break; + case MBOX_MSG_CGX_PTP_RX_INFO: + plt_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)", + msg->id, mbox_id2name(msg->id), + msg->pcifunc, dev_get_pf(msg->pcifunc), + dev_get_vf(msg->pcifunc)); + break; + default: + plt_err("Not handled UP msg 0x%x (%s) func:0x%x", + msg->id, mbox_id2name(msg->id), msg->pcifunc); + } + offset = mbox->rx_start + msg->next_msgoff; + } + mbox_reset(mbox, vf); + mdev->msgs_acked = msgs_acked; + plt_wmb(); + + return i; +} + +static void +roc_vf_pf_mbox_handle_msg(void *param) +{ + uint16_t vf, max_vf, max_bits; + struct dev *dev = param; + + max_bits = sizeof(dev->intr.bits[0]) * sizeof(uint64_t); + max_vf = max_bits * MAX_VFPF_DWORD_BITS; + + for (vf = 0; vf < max_vf; vf++) { + if (dev->intr.bits[vf / max_bits] & BIT_ULL(vf % max_bits)) { + plt_base_dbg("Process vf:%d request (pf:%d, vf:%d)", vf, + dev->pf, dev->vf); + vf_pf_process_msgs(dev, vf); + /* UP messages */ + vf_pf_process_up_msgs(dev, vf); + dev->intr.bits[vf / max_bits] &= + ~(BIT_ULL(vf % max_bits)); + } + } + dev->timer_set = 0; +} + +static void +roc_vf_pf_mbox_irq(void *param) +{ + struct dev *dev = param; + bool alarm_set = false; + uint64_t intr; + int vfpf; + + for (vfpf = 0; vfpf < MAX_VFPF_DWORD_BITS; ++vfpf) { + intr = plt_read64(dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf)); + if (!intr) + continue; + + plt_base_dbg("vfpf: %d intr: 0x%" PRIx64 " (pf:%d, vf:%d)", + vfpf, intr, dev->pf, dev->vf); + + /* Save and clear intr bits */ + dev->intr.bits[vfpf] |= intr; + plt_write64(intr, 
dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf)); + alarm_set = true; + } + + if (!dev->timer_set && alarm_set) { + dev->timer_set = 1; + /* Start timer to handle messages */ + plt_alarm_set(VF_PF_MBOX_TIMER_MS, roc_vf_pf_mbox_handle_msg, + dev); + } +} + static void process_msgs(struct dev *dev, struct mbox *mbox) { @@ -62,6 +393,112 @@ process_msgs(struct dev *dev, struct mbox *mbox) plt_wmb(); } +/* Copies the message received from AF and sends it to VF */ +static void +pf_vf_mbox_send_up_msg(struct dev *dev, void *rec_msg) +{ + uint16_t max_bits = sizeof(dev->active_vfs[0]) * sizeof(uint64_t); + struct mbox *vf_mbox = &dev->mbox_vfpf_up; + struct msg_req *msg = rec_msg; + struct mbox_msghdr *vf_msg; + uint16_t vf; + size_t size; + + size = PLT_ALIGN(mbox_id2size(msg->hdr.id), MBOX_MSG_ALIGN); + /* Send UP message to all VF's */ + for (vf = 0; vf < vf_mbox->ndevs; vf++) { + /* VF active */ + if (!(dev->active_vfs[vf / max_bits] & (BIT_ULL(vf)))) + continue; + + plt_base_dbg("(%s) size: %zx to VF: %d", + mbox_id2name(msg->hdr.id), size, vf); + + /* Reserve PF/VF mbox message */ + vf_msg = mbox_alloc_msg(vf_mbox, vf, size); + if (!vf_msg) { + plt_err("Failed to alloc VF%d UP message", vf); + continue; + } + mbox_req_init(msg->hdr.id, vf_msg); + + /* + * Copy message from AF<->PF UP mbox + * to PF<->VF UP mbox + */ + mbox_memcpy((uint8_t *)vf_msg + sizeof(struct mbox_msghdr), + (uint8_t *)msg + sizeof(struct mbox_msghdr), + size - sizeof(struct mbox_msghdr)); + + vf_msg->rc = msg->hdr.rc; + /* Set PF to be a sender */ + vf_msg->pcifunc = dev->pf_func; + + /* Send to VF */ + mbox_msg_send(vf_mbox, vf); + } +} + +static int +mbox_up_handler_cgx_link_event(struct dev *dev, struct cgx_link_info_msg *msg, + struct msg_rsp *rsp) +{ + struct cgx_link_user_info *linfo = &msg->link_info; + void *roc_nix = dev->roc_nix; + + plt_base_dbg("pf:%d/vf:%d NIC Link %s --> 0x%x (%s) from: pf:%d/vf:%d", + dev_get_pf(dev->pf_func), dev_get_vf(dev->pf_func), + linfo->link_up ? 
"UP" : "DOWN", msg->hdr.id, + mbox_id2name(msg->hdr.id), dev_get_pf(msg->hdr.pcifunc), + dev_get_vf(msg->hdr.pcifunc)); + + /* PF gets link notification from AF */ + if (dev_get_pf(msg->hdr.pcifunc) == 0) { + if (dev->ops && dev->ops->link_status_update) + dev->ops->link_status_update(roc_nix, linfo); + + /* Forward the same message as received from AF to VF */ + pf_vf_mbox_send_up_msg(dev, msg); + } else { + /* VF gets link up notification */ + if (dev->ops && dev->ops->link_status_update) + dev->ops->link_status_update(roc_nix, linfo); + } + + rsp->hdr.rc = 0; + return 0; +} + +static int +mbox_up_handler_cgx_ptp_rx_info(struct dev *dev, + struct cgx_ptp_rx_info_msg *msg, + struct msg_rsp *rsp) +{ + void *roc_nix = dev->roc_nix; + + plt_base_dbg("pf:%d/vf:%d PTP mode %s --> 0x%x (%s) from: pf:%d/vf:%d", + dev_get_pf(dev->pf_func), dev_get_vf(dev->pf_func), + msg->ptp_en ? "ENABLED" : "DISABLED", msg->hdr.id, + mbox_id2name(msg->hdr.id), dev_get_pf(msg->hdr.pcifunc), + dev_get_vf(msg->hdr.pcifunc)); + + /* PF gets PTP notification from AF */ + if (dev_get_pf(msg->hdr.pcifunc) == 0) { + if (dev->ops && dev->ops->ptp_info_update) + dev->ops->ptp_info_update(roc_nix, msg->ptp_en); + + /* Forward the same message as received from AF to VF */ + pf_vf_mbox_send_up_msg(dev, msg); + } else { + /* VF gets PTP notification */ + if (dev->ops && dev->ops->ptp_info_update) + dev->ops->ptp_info_update(roc_nix, msg->ptp_en); + } + + rsp->hdr.rc = 0; + return 0; +} + static int mbox_process_msgs_up(struct dev *dev, struct mbox_msghdr *req) { @@ -73,6 +510,24 @@ mbox_process_msgs_up(struct dev *dev, struct mbox_msghdr *req) default: reply_invalid_msg(&dev->mbox_up, 0, 0, req->id); break; +#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ + case _id: { \ + struct _rsp_type *rsp; \ + int err; \ + rsp = (struct _rsp_type *)mbox_alloc_msg( \ + &dev->mbox_up, 0, sizeof(struct _rsp_type)); \ + if (!rsp) \ + return -ENOMEM; \ + rsp->hdr.id = _id; \ + rsp->hdr.sig = MBOX_RSP_SIG; \ 
+ rsp->hdr.pcifunc = dev->pf_func; \ + rsp->hdr.rc = 0; \ + err = mbox_up_handler_##_fn_name(dev, (struct _req_type *)req, \ + rsp); \ + return err; \ + } + MBOX_UP_CGX_MESSAGES +#undef M } return -ENODEV; @@ -111,6 +566,26 @@ process_msgs_up(struct dev *dev, struct mbox *mbox) } static void +roc_pf_vf_mbox_irq(void *param) +{ + struct dev *dev = param; + uint64_t intr; + + intr = plt_read64(dev->bar2 + RVU_VF_INT); + if (intr == 0) + plt_base_dbg("Proceeding to check mbox UP messages if any"); + + plt_write64(intr, dev->bar2 + RVU_VF_INT); + plt_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf); + + /* First process all configuration messages */ + process_msgs(dev, dev->mbox); + + /* Process Uplink messages */ + process_msgs_up(dev, &dev->mbox_up); +} + +static void roc_af_pf_mbox_irq(void *param) { struct dev *dev = param; @@ -121,7 +596,7 @@ roc_af_pf_mbox_irq(void *param) plt_base_dbg("Proceeding to check mbox UP messages if any"); plt_write64(intr, dev->bar2 + RVU_PF_INT); - plt_base_dbg("Irq 0x%" PRIx64 "(pf:%d)", intr, dev->pf); + plt_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf); /* First process all configuration messages */ process_msgs(dev, dev->mbox); @@ -134,10 +609,33 @@ static int mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev) { struct plt_intr_handle *intr_handle = &pci_dev->intr_handle; - int rc; + int i, rc; + + /* HW clear irq */ + for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) + plt_write64(~0ull, + dev->bar2 + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i)); plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); + dev->timer_set = 0; + + /* MBOX interrupt for VF(0...63) <-> PF */ + rc = dev_irq_register(intr_handle, roc_vf_pf_mbox_irq, dev, + RVU_PF_INT_VEC_VFPF_MBOX0); + + if (rc) { + plt_err("Fail to register PF(VF0-63) mbox irq"); + return rc; + } + /* MBOX interrupt for VF(64...128) <-> PF */ + rc = dev_irq_register(intr_handle, roc_vf_pf_mbox_irq, dev, + RVU_PF_INT_VEC_VFPF_MBOX1); + + if (rc) { + 
plt_err("Fail to register PF(VF64-128) mbox irq"); + return rc; + } /* MBOX interrupt AF <-> PF */ rc = dev_irq_register(intr_handle, roc_af_pf_mbox_irq, dev, RVU_PF_INT_VEC_AFPF_MBOX); @@ -146,6 +644,11 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev) return rc; } + /* HW enable intr */ + for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) + plt_write64(~0ull, + dev->bar2 + RVU_PF_VFPF_MBOX_INT_ENA_W1SX(i)); + plt_write64(~0ull, dev->bar2 + RVU_PF_INT); plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); @@ -153,27 +656,263 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev) } static int +mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev) +{ + struct plt_intr_handle *intr_handle = &pci_dev->intr_handle; + int rc; + + /* Clear irq */ + plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C); + + /* MBOX interrupt PF <-> VF */ + rc = dev_irq_register(intr_handle, roc_pf_vf_mbox_irq, dev, + RVU_VF_INT_VEC_MBOX); + if (rc) { + plt_err("Fail to register PF<->VF mbox irq"); + return rc; + } + + /* HW enable intr */ + plt_write64(~0ull, dev->bar2 + RVU_VF_INT); + plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1S); + + return rc; +} + +static int mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev) { - return mbox_register_pf_irq(pci_dev, dev); + if (dev_is_vf(dev)) + return mbox_register_vf_irq(pci_dev, dev); + else + return mbox_register_pf_irq(pci_dev, dev); } static void mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev) { struct plt_intr_handle *intr_handle = &pci_dev->intr_handle; + int i; + + /* HW clear irq */ + for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) + plt_write64(~0ull, + dev->bar2 + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i)); plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); + dev->timer_set = 0; + + plt_alarm_cancel(roc_vf_pf_mbox_handle_msg, dev); + + /* Unregister the interrupt handler for each vectors */ + /* MBOX interrupt for VF(0...63) <-> PF */ + 
dev_irq_unregister(intr_handle, roc_vf_pf_mbox_irq, dev, + RVU_PF_INT_VEC_VFPF_MBOX0); + + /* MBOX interrupt for VF(64...128) <-> PF */ + dev_irq_unregister(intr_handle, roc_vf_pf_mbox_irq, dev, + RVU_PF_INT_VEC_VFPF_MBOX1); + /* MBOX interrupt AF <-> PF */ dev_irq_unregister(intr_handle, roc_af_pf_mbox_irq, dev, RVU_PF_INT_VEC_AFPF_MBOX); } static void +mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev) +{ + struct plt_intr_handle *intr_handle = &pci_dev->intr_handle; + + /* Clear irq */ + plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C); + + /* Unregister the interrupt handler */ + dev_irq_unregister(intr_handle, roc_pf_vf_mbox_irq, dev, + RVU_VF_INT_VEC_MBOX); +} + +static void mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev) { - mbox_unregister_pf_irq(pci_dev, dev); + if (dev_is_vf(dev)) + mbox_unregister_vf_irq(pci_dev, dev); + else + mbox_unregister_pf_irq(pci_dev, dev); +} + +static int +vf_flr_send_msg(struct dev *dev, uint16_t vf) +{ + struct mbox *mbox = dev->mbox; + struct msg_req *req; + int rc; + + req = mbox_alloc_msg_vf_flr(mbox); + if (req == NULL) + return -ENOSPC; + /* Overwrite pcifunc to indicate VF */ + req->hdr.pcifunc = dev_pf_func(dev->pf, vf); + + /* Sync message in interrupt context */ + rc = pf_af_sync_msg(dev, NULL); + if (rc) + plt_err("Failed to send VF FLR mbox msg, rc=%d", rc); + + return rc; +} + +static void +roc_pf_vf_flr_irq(void *param) +{ + struct dev *dev = (struct dev *)param; + uint16_t max_vf = 64, vf; + uintptr_t bar2; + uint64_t intr; + int i; + + max_vf = (dev->maxvf > 0) ? 
dev->maxvf : 64; + bar2 = dev->bar2; + + plt_base_dbg("FLR VF interrupt: max_vf: %d", max_vf); + + for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) { + intr = plt_read64(bar2 + RVU_PF_VFFLR_INTX(i)); + if (!intr) + continue; + + for (vf = 0; vf < max_vf; vf++) { + if (!(intr & (1ULL << vf))) + continue; + + plt_base_dbg("FLR: i :%d intr: 0x%" PRIx64 ", vf-%d", i, + intr, (64 * i + vf)); + /* Clear interrupt */ + plt_write64(BIT_ULL(vf), bar2 + RVU_PF_VFFLR_INTX(i)); + /* Disable the interrupt */ + plt_write64(BIT_ULL(vf), + bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i)); + /* Inform AF about VF reset */ + vf_flr_send_msg(dev, vf); + + /* Signal FLR finish */ + plt_write64(BIT_ULL(vf), bar2 + RVU_PF_VFTRPENDX(i)); + /* Enable interrupt */ + plt_write64(~0ull, bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i)); + } + } +} + +static int +vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev) +{ + struct plt_intr_handle *intr_handle = &pci_dev->intr_handle; + int i; + + plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name); + + /* HW clear irq */ + for (i = 0; i < MAX_VFPF_DWORD_BITS; i++) + plt_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i)); + + dev_irq_unregister(intr_handle, roc_pf_vf_flr_irq, dev, + RVU_PF_INT_VEC_VFFLR0); + + dev_irq_unregister(intr_handle, roc_pf_vf_flr_irq, dev, + RVU_PF_INT_VEC_VFFLR1); + + return 0; +} + +static int +vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev) +{ + struct plt_intr_handle *handle = &pci_dev->intr_handle; + int i, rc; + + plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name); + + rc = dev_irq_register(handle, roc_pf_vf_flr_irq, dev, + RVU_PF_INT_VEC_VFFLR0); + if (rc) + plt_err("Failed to init RVU_PF_INT_VEC_VFFLR0 rc=%d", rc); + + rc = dev_irq_register(handle, roc_pf_vf_flr_irq, dev, + RVU_PF_INT_VEC_VFFLR1); + if (rc) + plt_err("Failed to init RVU_PF_INT_VEC_VFFLR1 rc=%d", rc); + + /* Enable HW interrupt */ + for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) { + plt_write64(~0ull, 
dev->bar2 + RVU_PF_VFFLR_INTX(i)); + plt_write64(~0ull, dev->bar2 + RVU_PF_VFTRPENDX(i)); + plt_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i)); + } + return 0; +} + +int +dev_active_vfs(struct dev *dev) +{ + int i, count = 0; + + for (i = 0; i < MAX_VFPF_DWORD_BITS; i++) + count += __builtin_popcount(dev->active_vfs[i]); + + return count; +} + +static void +dev_vf_hwcap_update(struct plt_pci_device *pci_dev, struct dev *dev) +{ + switch (pci_dev->id.device_id) { + case PCI_DEVID_CNXK_RVU_PF: + break; + case PCI_DEVID_CNXK_RVU_SSO_TIM_VF: + case PCI_DEVID_CNXK_RVU_NPA_VF: + case PCI_DEVID_CNXK_RVU_AF_VF: + case PCI_DEVID_CNXK_RVU_VF: + case PCI_DEVID_CNXK_RVU_SDP_VF: + dev->hwcap |= DEV_HWCAP_F_VF; + break; + } +} + +static uintptr_t +dev_vf_mbase_get(struct plt_pci_device *pci_dev, struct dev *dev) +{ + void *vf_mbase = NULL; + uintptr_t pa; + + if (dev_is_vf(dev)) + return 0; + + /* For CN10K onwards, it is just after PF MBOX */ + if (!roc_model_is_cn9k()) + return dev->bar4 + MBOX_SIZE; + + pa = plt_read64(dev->bar2 + RVU_PF_VF_BAR4_ADDR); + if (!pa) { + plt_err("Invalid VF mbox base pa"); + return pa; + } + + vf_mbase = mbox_mem_map(pa, MBOX_SIZE * pci_dev->max_vfs); + if (vf_mbase == MAP_FAILED) { + plt_err("Failed to mmap vf mbase at pa 0x%lx, rc=%d", pa, + errno); + return 0; + } + return (uintptr_t)vf_mbase; +} + +static void +dev_vf_mbase_put(struct plt_pci_device *pci_dev, uintptr_t vf_mbase) +{ + if (!vf_mbase || !pci_dev->max_vfs || !roc_model_is_cn9k()) + return; + + mbox_mem_unmap((void *)vf_mbase, MBOX_SIZE * pci_dev->max_vfs); } static uint16_t @@ -213,7 +952,6 @@ dev_setup_shared_lmt_region(struct mbox *mbox) static int dev_lmt_setup(struct plt_pci_device *pci_dev, struct dev *dev) { - uint64_t bar4_mbox_sz = MBOX_SIZE; struct idev_cfg *idev; int rc; @@ -237,19 +975,34 @@ dev_lmt_setup(struct plt_pci_device *pci_dev, struct dev *dev) dev->pf_func, rc); } - /* PF BAR4 should always be sufficient enough to - * hold PF-AF MBOX + PF-VF MBOX + 
LMT lines. - */ - if (pci_dev->mem_resource[4].len < - (bar4_mbox_sz + (RVU_LMT_LINE_MAX * RVU_LMT_SZ))) { - plt_err("Not enough bar4 space for lmt lines and mbox"); - return -EFAULT; + if (dev_is_vf(dev)) { + /* VF BAR4 should always be sufficient enough to + * hold LMT lines. + */ + if (pci_dev->mem_resource[4].len < + (RVU_LMT_LINE_MAX * RVU_LMT_SZ)) { + plt_err("Not enough bar4 space for lmt lines"); + return -EFAULT; + } + + dev->lmt_base = dev->bar4; + } else { + uint64_t bar4_mbox_sz = MBOX_SIZE; + + /* PF BAR4 should always be sufficient enough to + * hold PF-AF MBOX + PF-VF MBOX + LMT lines. + */ + if (pci_dev->mem_resource[4].len < + (bar4_mbox_sz + (RVU_LMT_LINE_MAX * RVU_LMT_SZ))) { + plt_err("Not enough bar4 space for lmt lines and mbox"); + return -EFAULT; + } + + /* LMT base is just after total VF MBOX area */ + bar4_mbox_sz += (MBOX_SIZE * dev_pf_total_vfs(pci_dev)); + dev->lmt_base = dev->bar4 + bar4_mbox_sz; } - /* LMT base is just after total VF MBOX area */ - bar4_mbox_sz += (MBOX_SIZE * dev_pf_total_vfs(pci_dev)); - dev->lmt_base = dev->bar4 + bar4_mbox_sz; - /* Base LMT address should be chosen from only those pci funcs which * participate in LMT shared mode. */ @@ -270,6 +1023,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev) { int direction, up_direction, rc; uintptr_t bar2, bar4, mbox; + uintptr_t vf_mbase = 0; uint64_t intr_offset; bar2 = (uintptr_t)pci_dev->mem_resource[2].addr; @@ -293,13 +1047,23 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev) goto error; } + dev->maxvf = pci_dev->max_vfs; dev->bar2 = bar2; dev->bar4 = bar4; + dev_vf_hwcap_update(pci_dev, dev); - mbox = bar4; - direction = MBOX_DIR_PFAF; - up_direction = MBOX_DIR_PFAF_UP; - intr_offset = RVU_PF_INT; + if (dev_is_vf(dev)) { + mbox = (roc_model_is_cn9k() ? 
+ bar4 : (bar2 + RVU_VF_MBOX_REGION)); + direction = MBOX_DIR_VFPF; + up_direction = MBOX_DIR_VFPF_UP; + intr_offset = RVU_VF_INT; + } else { + mbox = bar4; + direction = MBOX_DIR_PFAF; + up_direction = MBOX_DIR_PFAF_UP; + intr_offset = RVU_PF_INT; + } /* Initialize the local mbox */ rc = mbox_init(&dev->mbox_local, mbox, bar2, direction, 1, intr_offset); @@ -322,7 +1086,43 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev) goto mbox_unregister; dev->pf = dev_get_pf(dev->pf_func); + dev->vf = dev_get_vf(dev->pf_func); + memset(&dev->active_vfs, 0, sizeof(dev->active_vfs)); + /* Allocate memory for device ops */ + dev->ops = plt_zmalloc(sizeof(struct dev_ops), 0); + if (dev->ops == NULL) { + rc = -ENOMEM; + goto mbox_unregister; + } + + /* Found VF devices in a PF device */ + if (pci_dev->max_vfs > 0) { + /* Remap mbox area for all vf's */ + vf_mbase = dev_vf_mbase_get(pci_dev, dev); + if (!vf_mbase) { + rc = -ENODEV; + goto mbox_unregister; + } + /* Init mbox object */ + rc = mbox_init(&dev->mbox_vfpf, vf_mbase, bar2, MBOX_DIR_PFVF, + pci_dev->max_vfs, intr_offset); + if (rc) + goto iounmap; + + /* PF -> VF UP messages */ + rc = mbox_init(&dev->mbox_vfpf_up, vf_mbase, bar2, + MBOX_DIR_PFVF_UP, pci_dev->max_vfs, intr_offset); + if (rc) + goto iounmap; + } + + /* Register VF-FLR irq handlers */ + if (!dev_is_vf(dev)) { + rc = vf_flr_register_irqs(pci_dev, dev); + if (rc) + goto iounmap; + } dev->mbox_active = 1; /* Setup LMT line base */ @@ -332,8 +1132,11 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev) return rc; iounmap: + dev_vf_mbase_put(pci_dev, vf_mbase); mbox_unregister: mbox_unregister_irq(pci_dev, dev); + if (dev->ops) + plt_free(dev->ops); mbox_fini: mbox_fini(dev->mbox); mbox_fini(&dev->mbox_up); @@ -349,6 +1152,20 @@ dev_fini(struct dev *dev, struct plt_pci_device *pci_dev) mbox_unregister_irq(pci_dev, dev); + if (!dev_is_vf(dev)) + vf_flr_unregister_irqs(pci_dev, dev); + /* Release PF - VF */ + mbox = &dev->mbox_vfpf; + if 
(mbox->hwbase && mbox->dev) + dev_vf_mbase_put(pci_dev, mbox->hwbase); + + if (dev->ops) + plt_free(dev->ops); + + mbox_fini(mbox); + mbox = &dev->mbox_vfpf_up; + mbox_fini(mbox); + /* Release PF - AF */ mbox = dev->mbox; mbox_fini(mbox); diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h index c0308e7..d20b089 100644 --- a/drivers/common/cnxk/roc_dev_priv.h +++ b/drivers/common/cnxk/roc_dev_priv.h @@ -5,12 +5,38 @@ #ifndef _ROC_DEV_PRIV_H #define _ROC_DEV_PRIV_H +#define DEV_HWCAP_F_VF BIT_ULL(0) /* VF device */ + #define RVU_PFVF_PF_SHIFT 10 #define RVU_PFVF_PF_MASK 0x3F #define RVU_PFVF_FUNC_SHIFT 0 #define RVU_PFVF_FUNC_MASK 0x3FF +#define RVU_MAX_VF 64 /* RVU_PF_VFPF_MBOX_INT(0..1) */ #define RVU_MAX_INT_RETRY 3 +/* PF/VF message handling timer */ +#define VF_PF_MBOX_TIMER_MS (20 * 1000) + +typedef struct { +/* 128 devices translate to two 64 bits dwords */ +#define MAX_VFPF_DWORD_BITS 2 + uint64_t bits[MAX_VFPF_DWORD_BITS]; +} dev_intr_t; + +/* Link status update callback */ +typedef void (*link_info_t)(void *roc_nix, + struct cgx_link_user_info *link); + +/* PTP info callback */ +typedef int (*ptp_info_t)(void *roc_nix, bool enable); + +struct dev_ops { + link_info_t link_status_update; + ptp_info_t ptp_info_update; +}; + +#define dev_is_vf(dev) ((dev)->hwcap & DEV_HWCAP_F_VF) + static inline int dev_get_vf(uint16_t pf_func) { @@ -29,18 +55,33 @@ dev_pf_func(int pf, int vf) return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1); } +static inline int +dev_is_afvf(uint16_t pf_func) +{ + return !(pf_func & ~RVU_PFVF_FUNC_MASK); +} + struct dev { uint16_t pf; + int16_t vf; uint16_t pf_func; uint8_t mbox_active; bool drv_inited; + uint64_t active_vfs[MAX_VFPF_DWORD_BITS]; uintptr_t bar2; uintptr_t bar4; uintptr_t lmt_base; struct mbox mbox_local; struct mbox mbox_up; + struct mbox mbox_vfpf; + struct mbox mbox_vfpf_up; + dev_intr_t intr; + int timer_set; /* ~0 : no alarm handling */ uint64_t hwcap; struct mbox 
*mbox; + uint16_t maxvf; + struct dev_ops *ops; + void *roc_nix; bool disable_shared_lmt; /* false(default): shared lmt mode enabled */ } __plt_cache_aligned; @@ -49,6 +90,7 @@ extern uint16_t dev_sclk_freq; int dev_init(struct dev *dev, struct plt_pci_device *pci_dev); int dev_fini(struct dev *dev, struct plt_pci_device *pci_dev); +int dev_active_vfs(struct dev *dev); int dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb, void *data, unsigned int vec);

From patchwork Thu Apr 1 12:37:34 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90382
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:34 +0530
Message-ID: <20210401123817.14348-10-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 09/52] common/cnxk: add base npa device support

From: Ashwin Sekhar T K

Add base NPA device support. NPA, i.e. the Network Pool Allocator, is a HW block that provides HW mempool functionality on Marvell CN9K and CN10K SoCs.
By providing HW mempool support, NPA also facilitates Rx and Tx packet buffer alloc and free by HW without SW intervention. Signed-off-by: Ashwin Sekhar T K --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_api.h | 3 + drivers/common/cnxk/roc_dev.c | 11 ++ drivers/common/cnxk/roc_dev_priv.h | 6 + drivers/common/cnxk/roc_idev.c | 67 ++++++++ drivers/common/cnxk/roc_idev.h | 3 + drivers/common/cnxk/roc_idev_priv.h | 12 ++ drivers/common/cnxk/roc_npa.c | 318 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_npa.h | 20 +++ drivers/common/cnxk/roc_npa_priv.h | 59 +++++++ drivers/common/cnxk/roc_platform.c | 1 + drivers/common/cnxk/roc_platform.h | 2 + drivers/common/cnxk/roc_priv.h | 3 + drivers/common/cnxk/roc_utils.c | 22 +++ drivers/common/cnxk/version.map | 5 + 15 files changed, 533 insertions(+) create mode 100644 drivers/common/cnxk/roc_npa.c create mode 100644 drivers/common/cnxk/roc_npa.h create mode 100644 drivers/common/cnxk/roc_npa_priv.h diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 17cbc36..2aeed3e 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -15,6 +15,7 @@ sources = files('roc_dev.c', 'roc_irq.c', 'roc_mbox.c', 'roc_model.c', + 'roc_npa.c', 'roc_platform.c', 'roc_utils.c') includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h index 27ddc3a..9289c68 100644 --- a/drivers/common/cnxk/roc_api.h +++ b/drivers/common/cnxk/roc_api.h @@ -79,6 +79,9 @@ /* Mbox */ #include "roc_mbox.h" +/* NPA */ +#include "roc_npa.h" + /* Utils */ #include "roc_utils.h" diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c index 4cd5978..a39acc9 100644 --- a/drivers/common/cnxk/roc_dev.c +++ b/drivers/common/cnxk/roc_dev.c @@ -1125,6 +1125,10 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev) } dev->mbox_active = 1; + rc = npa_lf_init(dev, pci_dev); + if (rc) + goto iounmap; + 
/* Setup LMT line base */ rc = dev_lmt_setup(pci_dev, dev); if (rc) @@ -1150,6 +1154,13 @@ dev_fini(struct dev *dev, struct plt_pci_device *pci_dev) struct plt_intr_handle *intr_handle = &pci_dev->intr_handle; struct mbox *mbox; + /* Check if this dev hosts npalf and has 1+ refs */ + if (idev_npa_lf_active(dev) > 1) + return -EAGAIN; + + /* Clear references to this pci dev */ + npa_lf_fini(); + mbox_unregister_irq(pci_dev, dev); if (!dev_is_vf(dev)) diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h index d20b089..910cfb6 100644 --- a/drivers/common/cnxk/roc_dev_priv.h +++ b/drivers/common/cnxk/roc_dev_priv.h @@ -78,6 +78,7 @@ struct dev { dev_intr_t intr; int timer_set; /* ~0 : no alarm handling */ uint64_t hwcap; + struct npa_lf npa; struct mbox *mbox; uint16_t maxvf; struct dev_ops *ops; @@ -85,6 +86,11 @@ struct dev { bool disable_shared_lmt; /* false(default): shared lmt mode enabled */ } __plt_cache_aligned; +struct npa { + struct plt_pci_device *pci_dev; + struct dev dev; +} __plt_cache_aligned; + extern uint16_t dev_rclk_freq; extern uint16_t dev_sclk_freq; diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c index 7fbbbdc..bf9cce8 100644 --- a/drivers/common/cnxk/roc_idev.c +++ b/drivers/common/cnxk/roc_idev.c @@ -29,9 +29,76 @@ idev_get_cfg(void) void idev_set_defaults(struct idev_cfg *idev) { + idev->npa = NULL; + idev->npa_pf_func = 0; + idev->max_pools = 128; idev->lmt_pf_func = 0; idev->lmt_base_addr = 0; idev->num_lmtlines = 0; + __atomic_store_n(&idev->npa_refcnt, 0, __ATOMIC_RELEASE); +} + +uint16_t +idev_npa_pffunc_get(void) +{ + struct idev_cfg *idev; + uint16_t npa_pf_func; + + idev = idev_get_cfg(); + npa_pf_func = 0; + if (idev != NULL) + npa_pf_func = idev->npa_pf_func; + + return npa_pf_func; +} + +struct npa_lf * +idev_npa_obj_get(void) +{ + struct idev_cfg *idev; + + idev = idev_get_cfg(); + if (idev && __atomic_load_n(&idev->npa_refcnt, __ATOMIC_ACQUIRE)) + return idev->npa; + + 
return NULL; +} + +uint32_t +roc_idev_npa_maxpools_get(void) +{ + struct idev_cfg *idev; + uint32_t max_pools; + + idev = idev_get_cfg(); + max_pools = 0; + if (idev != NULL) + max_pools = idev->max_pools; + + return max_pools; +} + +void +roc_idev_npa_maxpools_set(uint32_t max_pools) +{ + struct idev_cfg *idev; + + idev = idev_get_cfg(); + if (idev != NULL) + __atomic_store_n(&idev->max_pools, max_pools, __ATOMIC_RELEASE); +} + +uint16_t +idev_npa_lf_active(struct dev *dev) +{ + struct idev_cfg *idev; + + /* Check if npalf is actively used on this dev */ + idev = idev_get_cfg(); + if (!idev || !idev->npa || idev->npa->mbox != dev->mbox) + return 0; + + return __atomic_load_n(&idev->npa_refcnt, __ATOMIC_ACQUIRE); } uint16_t diff --git a/drivers/common/cnxk/roc_idev.h b/drivers/common/cnxk/roc_idev.h index dff0741..f267865 100644 --- a/drivers/common/cnxk/roc_idev.h +++ b/drivers/common/cnxk/roc_idev.h @@ -5,6 +5,9 @@ #ifndef _ROC_IDEV_H_ #define _ROC_IDEV_H_ +uint32_t __roc_api roc_idev_npa_maxpools_get(void); +void __roc_api roc_idev_npa_maxpools_set(uint32_t max_pools); + /* LMT */ uint64_t __roc_api roc_idev_lmt_base_addr_get(void); uint16_t __roc_api roc_idev_num_lmtlines_get(void); diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h index a096288..36cdb33 100644 --- a/drivers/common/cnxk/roc_idev_priv.h +++ b/drivers/common/cnxk/roc_idev_priv.h @@ -6,7 +6,12 @@ #define _ROC_IDEV_PRIV_H_ /* Intra device related functions */ +struct npa_lf; struct idev_cfg { + uint16_t npa_pf_func; + struct npa_lf *npa; + uint16_t npa_refcnt; + uint32_t max_pools; uint16_t lmt_pf_func; uint16_t num_lmtlines; uint64_t lmt_base_addr; @@ -16,6 +21,13 @@ struct idev_cfg { struct idev_cfg *idev_get_cfg(void); void idev_set_defaults(struct idev_cfg *idev); +/* idev npa */ +uint16_t idev_npa_pffunc_get(void); +struct npa_lf *idev_npa_obj_get(void); +uint32_t idev_npa_maxpools_get(void); +void idev_npa_maxpools_set(uint32_t max_pools); +uint16_t 
idev_npa_lf_active(struct dev *dev); + /* idev lmt */ uint16_t idev_lmt_pffunc_get(void); diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c new file mode 100644 index 0000000..2aa726b --- /dev/null +++ b/drivers/common/cnxk/roc_npa.c @@ -0,0 +1,318 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +static inline int +npa_attach(struct mbox *mbox) +{ + struct rsrc_attach_req *req; + + req = mbox_alloc_msg_attach_resources(mbox); + if (req == NULL) + return -ENOSPC; + req->modify = true; + req->npalf = true; + + return mbox_process(mbox); +} + +static inline int +npa_detach(struct mbox *mbox) +{ + struct rsrc_detach_req *req; + + req = mbox_alloc_msg_detach_resources(mbox); + if (req == NULL) + return -ENOSPC; + req->partial = true; + req->npalf = true; + + return mbox_process(mbox); +} + +static inline int +npa_get_msix_offset(struct mbox *mbox, uint16_t *npa_msixoff) +{ + struct msix_offset_rsp *msix_rsp; + int rc; + + /* Get NPA MSIX vector offsets */ + mbox_alloc_msg_msix_offset(mbox); + rc = mbox_process_msg(mbox, (void *)&msix_rsp); + if (rc == 0) + *npa_msixoff = msix_rsp->npa_msixoff; + + return rc; +} + +static inline int +npa_lf_alloc(struct npa_lf *lf) +{ + struct mbox *mbox = lf->mbox; + struct npa_lf_alloc_req *req; + struct npa_lf_alloc_rsp *rsp; + int rc; + + req = mbox_alloc_msg_npa_lf_alloc(mbox); + if (req == NULL) + return -ENOSPC; + req->aura_sz = lf->aura_sz; + req->nr_pools = lf->nr_pools; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return NPA_ERR_ALLOC; + + lf->stack_pg_ptrs = rsp->stack_pg_ptrs; + lf->stack_pg_bytes = rsp->stack_pg_bytes; + lf->qints = rsp->qints; + + return 0; +} + +static int +npa_lf_free(struct mbox *mbox) +{ + mbox_alloc_msg_npa_lf_free(mbox); + return mbox_process(mbox); +} + +static inline uint32_t +aura_size_to_u32(uint8_t val) +{ + if (val == NPA_AURA_SZ_0) + return 128; + if (val >= NPA_AURA_SZ_MAX) 
+ return BIT_ULL(20); + + return 1 << (val + 6); +} + +static inline void +pool_count_aura_sz_get(uint32_t *nr_pools, uint8_t *aura_sz) +{ + uint32_t val; + + val = roc_idev_npa_maxpools_get(); + if (val < aura_size_to_u32(NPA_AURA_SZ_128)) + val = 128; + if (val > aura_size_to_u32(NPA_AURA_SZ_1M)) + val = BIT_ULL(20); + + roc_idev_npa_maxpools_set(val); + *nr_pools = val; + *aura_sz = plt_log2_u32(val) - 6; +} + +static int +npa_dev_init(struct npa_lf *lf, uintptr_t base, struct mbox *mbox) +{ + uint32_t i, bmp_sz, nr_pools; + uint8_t aura_sz; + int rc; + + /* Sanity checks */ + if (!lf || !base || !mbox) + return NPA_ERR_PARAM; + + if (base & ROC_AURA_ID_MASK) + return NPA_ERR_BASE_INVALID; + + pool_count_aura_sz_get(&nr_pools, &aura_sz); + if (aura_sz == NPA_AURA_SZ_0 || aura_sz >= NPA_AURA_SZ_MAX) + return NPA_ERR_PARAM; + + memset(lf, 0x0, sizeof(*lf)); + lf->base = base; + lf->aura_sz = aura_sz; + lf->nr_pools = nr_pools; + lf->mbox = mbox; + + rc = npa_lf_alloc(lf); + if (rc) + goto exit; + + bmp_sz = plt_bitmap_get_memory_footprint(nr_pools); + + /* Allocate memory for bitmap */ + lf->npa_bmp_mem = plt_zmalloc(bmp_sz, ROC_ALIGN); + if (lf->npa_bmp_mem == NULL) { + rc = NPA_ERR_ALLOC; + goto lf_free; + } + + /* Initialize pool resource bitmap array */ + lf->npa_bmp = plt_bitmap_init(nr_pools, lf->npa_bmp_mem, bmp_sz); + if (lf->npa_bmp == NULL) { + rc = NPA_ERR_PARAM; + goto bmap_mem_free; + } + + /* Mark all pools available */ + for (i = 0; i < nr_pools; i++) + plt_bitmap_set(lf->npa_bmp, i); + + /* Allocate memory for qint context */ + lf->npa_qint_mem = plt_zmalloc(sizeof(struct npa_qint) * nr_pools, 0); + if (lf->npa_qint_mem == NULL) { + rc = NPA_ERR_ALLOC; + goto bmap_free; + } + + /* Allocate memory for nap_aura_lim memory */ + lf->aura_lim = plt_zmalloc(sizeof(struct npa_aura_lim) * nr_pools, 0); + if (lf->aura_lim == NULL) { + rc = NPA_ERR_ALLOC; + goto qint_free; + } + + /* Init aura start & end limits */ + for (i = 0; i < nr_pools; i++) { + 
lf->aura_lim[i].ptr_start = UINT64_MAX; + lf->aura_lim[i].ptr_end = 0x0ull; + } + + return 0; + +qint_free: + plt_free(lf->npa_qint_mem); +bmap_free: + plt_bitmap_free(lf->npa_bmp); +bmap_mem_free: + plt_free(lf->npa_bmp_mem); +lf_free: + npa_lf_free(lf->mbox); +exit: + return rc; +} + +static int +npa_dev_fini(struct npa_lf *lf) +{ + if (!lf) + return NPA_ERR_PARAM; + + plt_free(lf->aura_lim); + plt_free(lf->npa_qint_mem); + plt_bitmap_free(lf->npa_bmp); + plt_free(lf->npa_bmp_mem); + + return npa_lf_free(lf->mbox); +} + +int +npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev) +{ + struct idev_cfg *idev; + uint16_t npa_msixoff; + struct npa_lf *lf; + int rc; + + idev = idev_get_cfg(); + if (idev == NULL) + return NPA_ERR_ALLOC; + + /* Not the first PCI device */ + if (__atomic_fetch_add(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST) != 0) + return 0; + + rc = npa_attach(dev->mbox); + if (rc) + goto fail; + + rc = npa_get_msix_offset(dev->mbox, &npa_msixoff); + if (rc) + goto npa_detach; + + lf = &dev->npa; + rc = npa_dev_init(lf, dev->bar2 + (RVU_BLOCK_ADDR_NPA << 20), + dev->mbox); + if (rc) + goto npa_detach; + + lf->pf_func = dev->pf_func; + lf->npa_msixoff = npa_msixoff; + lf->intr_handle = &pci_dev->intr_handle; + lf->pci_dev = pci_dev; + + idev->npa_pf_func = dev->pf_func; + idev->npa = lf; + plt_wmb(); + + plt_npa_dbg("npa=%p max_pools=%d pf_func=0x%x msix=0x%x", lf, + roc_idev_npa_maxpools_get(), lf->pf_func, npa_msixoff); + + return 0; + +npa_detach: + npa_detach(dev->mbox); +fail: + __atomic_fetch_sub(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST); + return rc; +} + +int +npa_lf_fini(void) +{ + struct idev_cfg *idev; + int rc = 0; + + idev = idev_get_cfg(); + if (idev == NULL) + return NPA_ERR_ALLOC; + + /* Not the last PCI device */ + if (__atomic_sub_fetch(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST) != 0) + return 0; + + rc |= npa_dev_fini(idev->npa); + rc |= npa_detach(idev->npa->mbox); + idev_set_defaults(idev); + + return rc; +} + +int 
+roc_npa_dev_init(struct roc_npa *roc_npa) +{ + struct plt_pci_device *pci_dev; + struct npa *npa; + struct dev *dev; + int rc; + + if (roc_npa == NULL || roc_npa->pci_dev == NULL) + return NPA_ERR_PARAM; + + PLT_STATIC_ASSERT(sizeof(struct npa) <= ROC_NPA_MEM_SZ); + npa = roc_npa_to_npa_priv(roc_npa); + memset(npa, 0, sizeof(*npa)); + pci_dev = roc_npa->pci_dev; + dev = &npa->dev; + + /* Initialize device */ + rc = dev_init(dev, pci_dev); + if (rc) { + plt_err("Failed to init roc device"); + goto fail; + } + + npa->pci_dev = pci_dev; + dev->drv_inited = true; +fail: + return rc; +} + +int +roc_npa_dev_fini(struct roc_npa *roc_npa) +{ + struct npa *npa = roc_npa_to_npa_priv(roc_npa); + + if (npa == NULL) + return NPA_ERR_PARAM; + + npa->dev.drv_inited = false; + return dev_fini(&npa->dev, npa->pci_dev); +} diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h new file mode 100644 index 0000000..a708725 --- /dev/null +++ b/drivers/common/cnxk/roc_npa.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_NPA_H_ +#define _ROC_NPA_H_ + +#define ROC_AURA_ID_MASK (BIT_ULL(16) - 1) + +struct roc_npa { + struct plt_pci_device *pci_dev; + +#define ROC_NPA_MEM_SZ (1 * 1024) + uint8_t reserved[ROC_NPA_MEM_SZ] __plt_cache_aligned; +} __plt_cache_aligned; + +int __roc_api roc_npa_dev_init(struct roc_npa *roc_npa); +int __roc_api roc_npa_dev_fini(struct roc_npa *roc_npa); + +#endif /* _ROC_NPA_H_ */ diff --git a/drivers/common/cnxk/roc_npa_priv.h b/drivers/common/cnxk/roc_npa_priv.h new file mode 100644 index 0000000..dd6981f --- /dev/null +++ b/drivers/common/cnxk/roc_npa_priv.h @@ -0,0 +1,59 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#ifndef _ROC_NPA_PRIV_H_ +#define _ROC_NPA_PRIV_H_ + +enum npa_error_status { + NPA_ERR_PARAM = -512, + NPA_ERR_ALLOC = -513, + NPA_ERR_INVALID_BLOCK_SZ = -514, + NPA_ERR_AURA_ID_ALLOC = -515, + NPA_ERR_AURA_POOL_INIT = -516, + NPA_ERR_AURA_POOL_FINI = -517, + NPA_ERR_BASE_INVALID = -518, + NPA_ERR_DEVICE_NOT_BOUNDED = -519, +}; + +struct npa_lf { + struct plt_intr_handle *intr_handle; + struct npa_aura_lim *aura_lim; + struct plt_pci_device *pci_dev; + struct plt_bitmap *npa_bmp; + struct mbox *mbox; + uint32_t stack_pg_ptrs; + uint32_t stack_pg_bytes; + uint16_t npa_msixoff; + void *npa_qint_mem; + void *npa_bmp_mem; + uint32_t nr_pools; + uint16_t pf_func; + uint8_t aura_sz; + uint32_t qints; + uintptr_t base; +}; + +struct npa_qint { + struct npa_lf *lf; + uint8_t qintx; +}; + +struct npa_aura_lim { + uint64_t ptr_start; + uint64_t ptr_end; +}; + +struct dev; + +static inline struct npa * +roc_npa_to_npa_priv(struct roc_npa *roc_npa) +{ + return (struct npa *)&roc_npa->reserved[0]; +} + +/* NPA lf */ +int npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev); +int npa_lf_fini(void); + +#endif /* _ROC_NPA_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index 2cbabea..7dce0bd 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -30,3 +30,4 @@ roc_plt_init(void) RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE); +RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE); diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index 0165f85..7ffaca6 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -134,6 +134,7 @@ /* Log */ extern int cnxk_logtype_base; extern int cnxk_logtype_mbox; +extern int cnxk_logtype_npa; #define plt_err(fmt, args...) 
\ RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args) @@ -151,6 +152,7 @@ extern int cnxk_logtype_mbox; #define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__) #define plt_mbox_dbg(fmt, ...) plt_dbg(mbox, fmt, ##__VA_ARGS__) +#define plt_npa_dbg(fmt, ...) plt_dbg(npa, fmt, ##__VA_ARGS__) #ifdef __cplusplus #define CNXK_PCI_ID(subsystem_dev, dev) \ diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h index 2df2d66..21599dc 100644 --- a/drivers/common/cnxk/roc_priv.h +++ b/drivers/common/cnxk/roc_priv.h @@ -11,6 +11,9 @@ /* Mbox */ #include "roc_mbox_priv.h" +/* NPA */ +#include "roc_npa_priv.h" + /* Dev */ #include "roc_dev_priv.h" diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c index b21064a..b5d8f0b 100644 --- a/drivers/common/cnxk/roc_utils.c +++ b/drivers/common/cnxk/roc_utils.c @@ -11,9 +11,31 @@ roc_error_msg_get(int errorcode) const char *err_msg; switch (errorcode) { + case NPA_ERR_PARAM: case UTIL_ERR_PARAM: err_msg = "Invalid parameter"; break; + case NPA_ERR_ALLOC: + err_msg = "NPA alloc failed"; + break; + case NPA_ERR_INVALID_BLOCK_SZ: + err_msg = "NPA invalid block size"; + break; + case NPA_ERR_AURA_ID_ALLOC: + err_msg = "NPA aura id alloc failed"; + break; + case NPA_ERR_AURA_POOL_INIT: + err_msg = "NPA aura pool init failed"; + break; + case NPA_ERR_AURA_POOL_FINI: + err_msg = "NPA aura pool fini failed"; + break; + case NPA_ERR_BASE_INVALID: + err_msg = "NPA invalid base"; + break; + case NPA_ERR_DEVICE_NOT_BOUNDED: + err_msg = "NPA device is not bounded"; + break; case UTIL_ERR_FS: err_msg = "file operation failed"; break; diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 9279277..4797db2 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -3,11 +3,16 @@ INTERNAL { cnxk_logtype_base; cnxk_logtype_mbox; + cnxk_logtype_npa; roc_clk_freq_get; roc_error_msg_get; roc_idev_lmt_base_addr_get; + 
roc_idev_npa_maxpools_get; + roc_idev_npa_maxpools_set; roc_idev_num_lmtlines_get; roc_model; + roc_npa_dev_fini; + roc_npa_dev_init; roc_plt_init; local: *;

From patchwork Thu Apr 1 12:37:35 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90383
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:35 +0530
Message-ID: <20210401123817.14348-11-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 10/52] common/cnxk: add npa irq support

From: Ashwin Sekhar T K

Add support for NPA IRQs.
Signed-off-by: Ashwin Sekhar T K --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_npa.c | 7 + drivers/common/cnxk/roc_npa_irq.c | 297 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_npa_priv.h | 4 + 4 files changed, 309 insertions(+) create mode 100644 drivers/common/cnxk/roc_npa_irq.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 2aeed3e..f8b777a 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -16,6 +16,7 @@ sources = files('roc_dev.c', 'roc_mbox.c', 'roc_model.c', 'roc_npa.c', + 'roc_npa_irq.c', 'roc_platform.c', 'roc_utils.c') includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c index 2aa726b..0d4a56a 100644 --- a/drivers/common/cnxk/roc_npa.c +++ b/drivers/common/cnxk/roc_npa.c @@ -242,11 +242,17 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev) idev->npa = lf; plt_wmb(); + rc = npa_register_irqs(lf); + if (rc) + goto npa_fini; + plt_npa_dbg("npa=%p max_pools=%d pf_func=0x%x msix=0x%x", lf, roc_idev_npa_maxpools_get(), lf->pf_func, npa_msixoff); return 0; +npa_fini: + npa_dev_fini(idev->npa); npa_detach: npa_detach(dev->mbox); fail: @@ -268,6 +274,7 @@ npa_lf_fini(void) if (__atomic_sub_fetch(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST) != 0) return 0; + npa_unregister_irqs(idev->npa); rc |= npa_dev_fini(idev->npa); rc |= npa_detach(idev->npa->mbox); idev_set_defaults(idev); diff --git a/drivers/common/cnxk/roc_npa_irq.c b/drivers/common/cnxk/roc_npa_irq.c new file mode 100644 index 0000000..2d1e535 --- /dev/null +++ b/drivers/common/cnxk/roc_npa_irq.c @@ -0,0 +1,297 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +static void +npa_err_irq(void *param) +{ + struct npa_lf *lf = (struct npa_lf *)param; + uint64_t intr; + + intr = plt_read64(lf->base + NPA_LF_ERR_INT); + if (intr == 0) + return; + + plt_err("Err_intr=0x%" PRIx64 "", intr); + + /* Clear interrupt */ + plt_write64(intr, lf->base + NPA_LF_ERR_INT); +} + +static int +npa_register_err_irq(struct npa_lf *lf) +{ + struct plt_intr_handle *handle = lf->intr_handle; + int rc, vec; + + vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT; + + /* Clear err interrupt */ + plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C); + /* Register err interrupt vector */ + rc = dev_irq_register(handle, npa_err_irq, lf, vec); + + /* Enable hw interrupt */ + plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S); + + return rc; +} + +static void +npa_unregister_err_irq(struct npa_lf *lf) +{ + struct plt_intr_handle *handle = lf->intr_handle; + int vec; + + vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT; + + /* Clear err interrupt */ + plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C); + dev_irq_unregister(handle, npa_err_irq, lf, vec); +} + +static void +npa_ras_irq(void *param) +{ + struct npa_lf *lf = (struct npa_lf *)param; + uint64_t intr; + + intr = plt_read64(lf->base + NPA_LF_RAS); + if (intr == 0) + return; + + plt_err("Ras_intr=0x%" PRIx64 "", intr); + + /* Clear interrupt */ + plt_write64(intr, lf->base + NPA_LF_RAS); +} + +static int +npa_register_ras_irq(struct npa_lf *lf) +{ + struct plt_intr_handle *handle = lf->intr_handle; + int rc, vec; + + vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON; + + /* Clear err interrupt */ + plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C); + /* Set used interrupt vectors */ + rc = dev_irq_register(handle, npa_ras_irq, lf, vec); + /* Enable hw interrupt */ + plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1S); + + return rc; +} + +static void +npa_unregister_ras_irq(struct npa_lf *lf) +{ + int vec; + struct plt_intr_handle *handle = 
lf->intr_handle; + + vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON; + + /* Clear err interrupt */ + plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C); + dev_irq_unregister(handle, npa_ras_irq, lf, vec); +} + +static inline uint8_t +npa_q_irq_get_and_clear(struct npa_lf *lf, uint32_t q, uint32_t off, + uint64_t mask) +{ + uint64_t reg, wdata; + uint8_t qint; + + wdata = (uint64_t)q << 44; + reg = roc_atomic64_add_nosync(wdata, (int64_t *)(lf->base + off)); + + if (reg & BIT_ULL(42) /* OP_ERR */) { + plt_err("Failed execute irq get off=0x%x", off); + return 0; + } + + qint = reg & 0xff; + wdata &= mask; + plt_write64(wdata | qint, lf->base + off); + + return qint; +} + +static inline uint8_t +npa_pool_irq_get_and_clear(struct npa_lf *lf, uint32_t p) +{ + return npa_q_irq_get_and_clear(lf, p, NPA_LF_POOL_OP_INT, ~0xff00); +} + +static inline uint8_t +npa_aura_irq_get_and_clear(struct npa_lf *lf, uint32_t a) +{ + return npa_q_irq_get_and_clear(lf, a, NPA_LF_AURA_OP_INT, ~0xff00); +} + +static void +npa_q_irq(void *param) +{ + struct npa_qint *qint = (struct npa_qint *)param; + struct npa_lf *lf = qint->lf; + uint8_t irq, qintx = qint->qintx; + uint32_t q, pool, aura; + uint64_t intr; + + intr = plt_read64(lf->base + NPA_LF_QINTX_INT(qintx)); + if (intr == 0) + return; + + plt_err("queue_intr=0x%" PRIx64 " qintx=%d", intr, qintx); + + /* Handle pool queue interrupts */ + for (q = 0; q < lf->nr_pools; q++) { + /* Skip disabled POOL */ + if (plt_bitmap_get(lf->npa_bmp, q)) + continue; + + pool = q % lf->qints; + irq = npa_pool_irq_get_and_clear(lf, pool); + + if (irq & BIT_ULL(NPA_POOL_ERR_INT_OVFLS)) + plt_err("Pool=%d NPA_POOL_ERR_INT_OVFLS", pool); + + if (irq & BIT_ULL(NPA_POOL_ERR_INT_RANGE)) + plt_err("Pool=%d NPA_POOL_ERR_INT_RANGE", pool); + + if (irq & BIT_ULL(NPA_POOL_ERR_INT_PERR)) + plt_err("Pool=%d NPA_POOL_ERR_INT_PERR", pool); + } + + /* Handle aura queue interrupts */ + for (q = 0; q < lf->nr_pools; q++) { + /* Skip disabled AURA */ + if 
(plt_bitmap_get(lf->npa_bmp, q)) + continue; + + aura = q % lf->qints; + irq = npa_aura_irq_get_and_clear(lf, aura); + + if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_OVER)) + plt_err("Aura=%d NPA_AURA_ERR_INT_ADD_OVER", aura); + + if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_UNDER)) + plt_err("Aura=%d NPA_AURA_ERR_INT_ADD_UNDER", aura); + + if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_FREE_UNDER)) + plt_err("Aura=%d NPA_AURA_ERR_INT_FREE_UNDER", aura); + + if (irq & BIT_ULL(NPA_AURA_ERR_INT_POOL_DIS)) + plt_err("Aura=%d NPA_AURA_ERR_POOL_DIS", aura); + } + + /* Clear interrupt */ + plt_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx)); +} + +static int +npa_register_queue_irqs(struct npa_lf *lf) +{ + struct plt_intr_handle *handle = lf->intr_handle; + int vec, q, qs, rc = 0; + + /* Figure out max qintx required */ + qs = PLT_MIN(lf->qints, lf->nr_pools); + + for (q = 0; q < qs; q++) { + vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q; + + /* Clear QINT CNT */ + plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q)); + + /* Clear interrupt */ + plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q)); + + struct npa_qint *qintmem = lf->npa_qint_mem; + + qintmem += q; + + qintmem->lf = lf; + qintmem->qintx = q; + + /* Sync qints_mem update */ + plt_wmb(); + + /* Register queue irq vector */ + rc = dev_irq_register(handle, npa_q_irq, qintmem, vec); + if (rc) + break; + + plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q)); + plt_write64(0, lf->base + NPA_LF_QINTX_INT(q)); + /* Enable QINT interrupt */ + plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1S(q)); + } + + return rc; +} + +static void +npa_unregister_queue_irqs(struct npa_lf *lf) +{ + struct plt_intr_handle *handle = lf->intr_handle; + int vec, q, qs; + + /* Figure out max qintx required */ + qs = PLT_MIN(lf->qints, lf->nr_pools); + + for (q = 0; q < qs; q++) { + vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q; + + /* Clear QINT CNT */ + plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q)); + plt_write64(0, lf->base + 
NPA_LF_QINTX_INT(q)); + + /* Clear interrupt */ + plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q)); + + struct npa_qint *qintmem = lf->npa_qint_mem; + + qintmem += q; + + /* Unregister queue irq vector */ + dev_irq_unregister(handle, npa_q_irq, qintmem, vec); + + qintmem->lf = NULL; + qintmem->qintx = 0; + } +} + +int +npa_register_irqs(struct npa_lf *lf) +{ + int rc; + + if (lf->npa_msixoff == MSIX_VECTOR_INVALID) { + plt_err("Invalid NPALF MSIX vector offset vector: 0x%x", + lf->npa_msixoff); + return NPA_ERR_PARAM; + } + + /* Register lf err interrupt */ + rc = npa_register_err_irq(lf); + /* Register RAS interrupt */ + rc |= npa_register_ras_irq(lf); + /* Register queue interrupts */ + rc |= npa_register_queue_irqs(lf); + + return rc; +} + +void +npa_unregister_irqs(struct npa_lf *lf) +{ + npa_unregister_err_irq(lf); + npa_unregister_ras_irq(lf); + npa_unregister_queue_irqs(lf); +} diff --git a/drivers/common/cnxk/roc_npa_priv.h b/drivers/common/cnxk/roc_npa_priv.h index dd6981f..5a02a61 100644 --- a/drivers/common/cnxk/roc_npa_priv.h +++ b/drivers/common/cnxk/roc_npa_priv.h @@ -56,4 +56,8 @@ roc_npa_to_npa_priv(struct roc_npa *roc_npa) int npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev); int npa_lf_fini(void); +/* IRQ */ +int npa_register_irqs(struct npa_lf *lf); +void npa_unregister_irqs(struct npa_lf *lf); + #endif /* _ROC_NPA_PRIV_H_ */ From patchwork Thu Apr 1 12:37:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90384 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 48210A0548; Thu, 1 Apr 2021 14:40:39 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 076221411D9; Thu, 1 Apr 2021 
14:39:12 +0200 (CEST)
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:36 +0530
Message-ID: <20210401123817.14348-12-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References:
<20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 11/52] common/cnxk: add npa debug support
List-Id: DPDK patches and discussions

From: Ashwin Sekhar T K

Add NPA debug APIs.

Signed-off-by: Ashwin Sekhar T K --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_npa.h | 4 + drivers/common/cnxk/roc_npa_debug.c | 184 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_npa_irq.c | 1 + drivers/common/cnxk/version.map | 2 + 5 files changed, 192 insertions(+) create mode 100644 drivers/common/cnxk/roc_npa_debug.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index f8b777a..01a8f80 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -16,6 +16,7 @@ sources = files('roc_dev.c', 'roc_mbox.c', 'roc_model.c', 'roc_npa.c', + 'roc_npa_debug.c', 'roc_npa_irq.c', 'roc_platform.c', 'roc_utils.c') diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h index a708725..029f966 100644 --- a/drivers/common/cnxk/roc_npa.h +++ b/drivers/common/cnxk/roc_npa.h @@ -17,4 +17,8 @@ struct roc_npa { int __roc_api roc_npa_dev_init(struct roc_npa *roc_npa); int __roc_api roc_npa_dev_fini(struct roc_npa *roc_npa); +/* Debug */ +int __roc_api roc_npa_ctx_dump(void); +int __roc_api roc_npa_dump(void); + #endif /* _ROC_NPA_H_ */ diff --git a/drivers/common/cnxk/roc_npa_debug.c b/drivers/common/cnxk/roc_npa_debug.c new file mode 100644 index 0000000..421c2af --- /dev/null +++
b/drivers/common/cnxk/roc_npa_debug.c @@ -0,0 +1,184 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +#define npa_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__) + +static inline void +npa_pool_dump(__io struct npa_pool_s *pool) +{ + npa_dump("W0: Stack base\t\t0x%" PRIx64 "", pool->stack_base); + npa_dump("W1: ena \t\t%d\nW1: nat_align \t\t%d\nW1: stack_caching \t%d", + pool->ena, pool->nat_align, pool->stack_caching); + npa_dump("W1: stack_way_mask\t%d\nW1: buf_offset\t\t%d", + pool->stack_way_mask, pool->buf_offset); + npa_dump("W1: buf_size \t\t%d", pool->buf_size); + + npa_dump("W2: stack_max_pages \t%d\nW2: stack_pages\t\t%d", + pool->stack_max_pages, pool->stack_pages); + + npa_dump("W3: op_pc \t\t0x%" PRIx64 "", (uint64_t)pool->op_pc); + + npa_dump("W4: stack_offset\t%d\nW4: shift\t\t%d\nW4: avg_level\t\t%d", + pool->stack_offset, pool->shift, pool->avg_level); + npa_dump("W4: avg_con \t\t%d\nW4: fc_ena\t\t%d\nW4: fc_stype\t\t%d", + pool->avg_con, pool->fc_ena, pool->fc_stype); + npa_dump("W4: fc_hyst_bits\t%d\nW4: fc_up_crossing\t%d", + pool->fc_hyst_bits, pool->fc_up_crossing); + npa_dump("W4: update_time\t\t%d\n", pool->update_time); + + npa_dump("W5: fc_addr\t\t0x%" PRIx64 "\n", pool->fc_addr); + + npa_dump("W6: ptr_start\t\t0x%" PRIx64 "\n", pool->ptr_start); + + npa_dump("W7: ptr_end\t\t0x%" PRIx64 "\n", pool->ptr_end); + npa_dump("W8: err_int\t\t%d\nW8: err_int_ena\t\t%d", pool->err_int, + pool->err_int_ena); + npa_dump("W8: thresh_int\t\t%d", pool->thresh_int); + + npa_dump("W8: thresh_int_ena\t%d\nW8: thresh_up\t\t%d", + pool->thresh_int_ena, pool->thresh_up); + npa_dump("W8: thresh_qint_idx\t%d\nW8: err_qint_idx\t%d", + pool->thresh_qint_idx, pool->err_qint_idx); +} + +static inline void +npa_aura_dump(__io struct npa_aura_s *aura) +{ + npa_dump("W0: Pool addr\t\t0x%" PRIx64 "\n", aura->pool_addr); + + npa_dump("W1: ena\t\t\t%d\nW1: pool caching\t%d\nW1: pool 
way mask\t%d", + aura->ena, aura->pool_caching, aura->pool_way_mask); + npa_dump("W1: avg con\t\t%d\nW1: pool drop ena\t%d", aura->avg_con, + aura->pool_drop_ena); + npa_dump("W1: aura drop ena\t%d", aura->aura_drop_ena); + npa_dump("W1: bp_ena\t\t%d\nW1: aura drop\t\t%d\nW1: aura shift\t\t%d", + aura->bp_ena, aura->aura_drop, aura->shift); + npa_dump("W1: avg_level\t\t%d\n", aura->avg_level); + + npa_dump("W2: count\t\t%" PRIx64 "\nW2: nix0_bpid\t\t%d", + (uint64_t)aura->count, aura->nix0_bpid); + npa_dump("W2: nix1_bpid\t\t%d", aura->nix1_bpid); + + npa_dump("W3: limit\t\t%" PRIx64 "\nW3: bp\t\t\t%d\nW3: fc_ena\t\t%d\n", + (uint64_t)aura->limit, aura->bp, aura->fc_ena); + npa_dump("W3: fc_up_crossing\t%d\nW3: fc_stype\t\t%d", + aura->fc_up_crossing, aura->fc_stype); + + npa_dump("W3: fc_hyst_bits\t%d", aura->fc_hyst_bits); + + npa_dump("W4: fc_addr\t\t0x%" PRIx64 "\n", aura->fc_addr); + + npa_dump("W5: pool_drop\t\t%d\nW5: update_time\t\t%d", aura->pool_drop, + aura->update_time); + npa_dump("W5: err_int\t\t%d", aura->err_int); + npa_dump("W5: err_int_ena\t\t%d\nW5: thresh_int\t\t%d", + aura->err_int_ena, aura->thresh_int); + npa_dump("W5: thresh_int_ena\t%d", aura->thresh_int_ena); + + npa_dump("W5: thresh_up\t\t%d\nW5: thresh_qint_idx\t%d", + aura->thresh_up, aura->thresh_qint_idx); + npa_dump("W5: err_qint_idx\t%d", aura->err_qint_idx); + + npa_dump("W6: thresh\t\t%" PRIx64 "\n", (uint64_t)aura->thresh); +} + +int +roc_npa_ctx_dump(void) +{ + struct npa_aq_enq_req *aq; + struct npa_aq_enq_rsp *rsp; + struct npa_lf *lf; + uint32_t q; + int rc = 0; + + lf = idev_npa_obj_get(); + if (lf == NULL) + return NPA_ERR_DEVICE_NOT_BOUNDED; + + for (q = 0; q < lf->nr_pools; q++) { + /* Skip disabled POOL */ + if (plt_bitmap_get(lf->npa_bmp, q)) + continue; + + aq = mbox_alloc_msg_npa_aq_enq(lf->mbox); + if (aq == NULL) + return -ENOSPC; + aq->aura_id = q; + aq->ctype = NPA_AQ_CTYPE_POOL; + aq->op = NPA_AQ_INSTOP_READ; + + rc = mbox_process_msg(lf->mbox, (void *)&rsp); + 
if (rc) { + plt_err("Failed to get pool(%d) context", q); + return rc; + } + npa_dump("============== pool=%d ===============\n", q); + npa_pool_dump(&rsp->pool); + } + + for (q = 0; q < lf->nr_pools; q++) { + /* Skip disabled AURA */ + if (plt_bitmap_get(lf->npa_bmp, q)) + continue; + + aq = mbox_alloc_msg_npa_aq_enq(lf->mbox); + if (aq == NULL) + return -ENOSPC; + aq->aura_id = q; + aq->ctype = NPA_AQ_CTYPE_AURA; + aq->op = NPA_AQ_INSTOP_READ; + + rc = mbox_process_msg(lf->mbox, (void *)&rsp); + if (rc) { + plt_err("Failed to get aura(%d) context", q); + return rc; + } + npa_dump("============== aura=%d ===============\n", q); + npa_aura_dump(&rsp->aura); + } + + return rc; +} + +int +roc_npa_dump(void) +{ + struct npa_lf *lf; + int aura_cnt = 0; + uint32_t i; + + lf = idev_npa_obj_get(); + if (lf == NULL) + return NPA_ERR_DEVICE_NOT_BOUNDED; + + for (i = 0; i < lf->nr_pools; i++) { + if (plt_bitmap_get(lf->npa_bmp, i)) + continue; + aura_cnt++; + } + + npa_dump("npa@%p", lf); + npa_dump(" pf = %d", dev_get_pf(lf->pf_func)); + npa_dump(" vf = %d", dev_get_vf(lf->pf_func)); + npa_dump(" aura_cnt = %d", aura_cnt); + npa_dump(" \tpci_dev = %p", lf->pci_dev); + npa_dump(" \tnpa_bmp = %p", lf->npa_bmp); + npa_dump(" \tnpa_bmp_mem = %p", lf->npa_bmp_mem); + npa_dump(" \tnpa_qint_mem = %p", lf->npa_qint_mem); + npa_dump(" \tintr_handle = %p", lf->intr_handle); + npa_dump(" \tmbox = %p", lf->mbox); + npa_dump(" \tbase = 0x%" PRIx64 "", lf->base); + npa_dump(" \tstack_pg_ptrs = %d", lf->stack_pg_ptrs); + npa_dump(" \tstack_pg_bytes = %d", lf->stack_pg_bytes); + npa_dump(" \tnpa_msixoff = 0x%x", lf->npa_msixoff); + npa_dump(" \tnr_pools = %d", lf->nr_pools); + npa_dump(" \tpf_func = 0x%x", lf->pf_func); + npa_dump(" \taura_sz = %d", lf->aura_sz); + npa_dump(" \tqints = %d", lf->qints); + + return 0; +} diff --git a/drivers/common/cnxk/roc_npa_irq.c b/drivers/common/cnxk/roc_npa_irq.c index 2d1e535..0a19319 100644 --- a/drivers/common/cnxk/roc_npa_irq.c +++ 
b/drivers/common/cnxk/roc_npa_irq.c @@ -192,6 +192,7 @@ npa_q_irq(void *param) /* Clear interrupt */ plt_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx)); + roc_npa_ctx_dump(); } static int diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 4797db2..3571db3 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -11,8 +11,10 @@ INTERNAL { roc_idev_npa_maxpools_set; roc_idev_num_lmtlines_get; roc_model; + roc_npa_ctx_dump; roc_npa_dev_fini; roc_npa_dev_init; + roc_npa_dump; roc_plt_init; local: *;

From patchwork Thu Apr 1 12:37:37 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90385
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:37 +0530
Message-ID: <20210401123817.14348-13-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 12/52] common/cnxk: add npa pool HW ops
List-Id: DPDK patches and discussions

From: Ashwin Sekhar T K

Add APIs for creating, destroying, modifying NPA pools.
Signed-off-by: Ashwin Sekhar T K --- drivers/common/cnxk/roc_npa.c | 421 ++++++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_npa.h | 146 ++++++++++++++ drivers/common/cnxk/version.map | 5 + 3 files changed, 572 insertions(+) diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c index 0d4a56a..80f5a78 100644 --- a/drivers/common/cnxk/roc_npa.c +++ b/drivers/common/cnxk/roc_npa.c @@ -5,6 +5,427 @@ #include "roc_api.h" #include "roc_priv.h" +void +roc_npa_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova, + uint64_t end_iova) +{ + const uint64_t start = roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_POOL_OP_PTR_START0; + const uint64_t end = roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_POOL_OP_PTR_END0; + uint64_t reg = roc_npa_aura_handle_to_aura(aura_handle); + struct npa_lf *lf = idev_npa_obj_get(); + struct npa_aura_lim *lim; + + PLT_ASSERT(lf); + lim = lf->aura_lim; + + lim[reg].ptr_start = PLT_MIN(lim[reg].ptr_start, start_iova); + lim[reg].ptr_end = PLT_MAX(lim[reg].ptr_end, end_iova); + + roc_store_pair(lim[reg].ptr_start, reg, start); + roc_store_pair(lim[reg].ptr_end, reg, end); +} + +static int +npa_aura_pool_init(struct mbox *mbox, uint32_t aura_id, struct npa_aura_s *aura, + struct npa_pool_s *pool) +{ + struct npa_aq_enq_req *aura_init_req, *pool_init_req; + struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp; + struct mbox_dev *mdev = &mbox->dev[0]; + int rc = -ENOSPC, off; + + aura_init_req = mbox_alloc_msg_npa_aq_enq(mbox); + if (aura_init_req == NULL) + return rc; + aura_init_req->aura_id = aura_id; + aura_init_req->ctype = NPA_AQ_CTYPE_AURA; + aura_init_req->op = NPA_AQ_INSTOP_INIT; + mbox_memcpy(&aura_init_req->aura, aura, sizeof(*aura)); + + pool_init_req = mbox_alloc_msg_npa_aq_enq(mbox); + if (pool_init_req == NULL) + return rc; + pool_init_req->aura_id = aura_id; + pool_init_req->ctype = NPA_AQ_CTYPE_POOL; + pool_init_req->op = NPA_AQ_INSTOP_INIT; + mbox_memcpy(&pool_init_req->pool, pool, 
sizeof(*pool)); + + rc = mbox_process(mbox); + if (rc < 0) + return rc; + + off = mbox->rx_start + + PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + aura_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); + off = mbox->rx_start + aura_init_rsp->hdr.next_msgoff; + pool_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); + + if (aura_init_rsp->hdr.rc == 0 && pool_init_rsp->hdr.rc == 0) + return 0; + else + return NPA_ERR_AURA_POOL_INIT; +} + +static int +npa_aura_pool_fini(struct mbox *mbox, uint32_t aura_id, uint64_t aura_handle) +{ + struct npa_aq_enq_req *aura_req, *pool_req; + struct npa_aq_enq_rsp *aura_rsp, *pool_rsp; + struct mbox_dev *mdev = &mbox->dev[0]; + struct ndc_sync_op *ndc_req; + int rc = -ENOSPC, off; + uint64_t ptr; + + /* Procedure for disabling an aura/pool */ + plt_delay_us(10); + + /* Clear all the pointers from the aura */ + do { + ptr = roc_npa_aura_op_alloc(aura_handle, 0); + } while (ptr); + + pool_req = mbox_alloc_msg_npa_aq_enq(mbox); + if (pool_req == NULL) + return rc; + pool_req->aura_id = aura_id; + pool_req->ctype = NPA_AQ_CTYPE_POOL; + pool_req->op = NPA_AQ_INSTOP_WRITE; + pool_req->pool.ena = 0; + pool_req->pool_mask.ena = ~pool_req->pool_mask.ena; + + aura_req = mbox_alloc_msg_npa_aq_enq(mbox); + if (aura_req == NULL) + return rc; + aura_req->aura_id = aura_id; + aura_req->ctype = NPA_AQ_CTYPE_AURA; + aura_req->op = NPA_AQ_INSTOP_WRITE; + aura_req->aura.ena = 0; + aura_req->aura_mask.ena = ~aura_req->aura_mask.ena; + + rc = mbox_process(mbox); + if (rc < 0) + return rc; + + off = mbox->rx_start + + PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); + + off = mbox->rx_start + pool_rsp->hdr.next_msgoff; + aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); + + if (aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0) + return NPA_ERR_AURA_POOL_FINI; + + /* Sync NDC-NPA for LF */ + ndc_req = 
mbox_alloc_msg_ndc_sync_op(mbox); + if (ndc_req == NULL) + return -ENOSPC; + ndc_req->npa_lf_sync = 1; + rc = mbox_process(mbox); + if (rc) { + plt_err("Error on NDC-NPA LF sync, rc %d", rc); + return NPA_ERR_AURA_POOL_FINI; + } + return 0; +} + +static inline char * +npa_stack_memzone_name(struct npa_lf *lf, int pool_id, char *name) +{ + snprintf(name, PLT_MEMZONE_NAMESIZE, "roc_npa_stack_%x_%d", lf->pf_func, + pool_id); + return name; +} + +static inline const struct plt_memzone * +npa_stack_dma_alloc(struct npa_lf *lf, char *name, int pool_id, size_t size) +{ + const char *mz_name = npa_stack_memzone_name(lf, pool_id, name); + + return plt_memzone_reserve_cache_align(mz_name, size); +} + +static inline int +npa_stack_dma_free(struct npa_lf *lf, char *name, int pool_id) +{ + const struct plt_memzone *mz; + + mz = plt_memzone_lookup(npa_stack_memzone_name(lf, pool_id, name)); + if (mz == NULL) + return NPA_ERR_PARAM; + + return plt_memzone_free(mz); +} + +static inline int +bitmap_ctzll(uint64_t slab) +{ + if (slab == 0) + return 0; + + return __builtin_ctzll(slab); +} + +static int +npa_aura_pool_pair_alloc(struct npa_lf *lf, const uint32_t block_size, + const uint32_t block_count, struct npa_aura_s *aura, + struct npa_pool_s *pool, uint64_t *aura_handle) +{ + int rc, aura_id, pool_id, stack_size, alloc_size; + char name[PLT_MEMZONE_NAMESIZE]; + const struct plt_memzone *mz; + uint64_t slab; + uint32_t pos; + + /* Sanity check */ + if (!lf || !block_size || !block_count || !pool || !aura || + !aura_handle) + return NPA_ERR_PARAM; + + /* Block size should be cache line aligned and in range of 128B-128KB */ + if (block_size % ROC_ALIGN || block_size < 128 || + block_size > 128 * 1024) + return NPA_ERR_INVALID_BLOCK_SZ; + + pos = 0; + slab = 0; + /* Scan from the beginning */ + plt_bitmap_scan_init(lf->npa_bmp); + /* Scan bitmap to get the free pool */ + rc = plt_bitmap_scan(lf->npa_bmp, &pos, &slab); + /* Empty bitmap */ + if (rc == 0) { + plt_err("Mempools 
exhausted"); + return NPA_ERR_AURA_ID_ALLOC; + } + + /* Get aura_id from resource bitmap */ + aura_id = pos + bitmap_ctzll(slab); + /* Mark pool as reserved */ + plt_bitmap_clear(lf->npa_bmp, aura_id); + + /* Configuration based on each aura has separate pool(aura-pool pair) */ + pool_id = aura_id; + rc = (aura_id < 0 || pool_id >= (int)lf->nr_pools || + aura_id >= (int)BIT_ULL(6 + lf->aura_sz)) ? + NPA_ERR_AURA_ID_ALLOC : + 0; + if (rc) + goto exit; + + /* Allocate stack memory */ + stack_size = (block_count + lf->stack_pg_ptrs - 1) / lf->stack_pg_ptrs; + alloc_size = stack_size * lf->stack_pg_bytes; + + mz = npa_stack_dma_alloc(lf, name, pool_id, alloc_size); + if (mz == NULL) { + rc = NPA_ERR_ALLOC; + goto aura_res_put; + } + + /* Update aura fields */ + aura->pool_addr = pool_id; /* AF will translate to associated poolctx */ + aura->ena = 1; + aura->shift = __builtin_clz(block_count) - 8; + aura->limit = block_count; + aura->pool_caching = 1; + aura->err_int_ena = BIT(NPA_AURA_ERR_INT_AURA_ADD_OVER); + aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_ADD_UNDER); + aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_FREE_UNDER); + aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_POOL_DIS); + /* Many to one reduction */ + aura->err_qint_idx = aura_id % lf->qints; + + /* Update pool fields */ + pool->stack_base = mz->iova; + pool->ena = 1; + pool->buf_size = block_size / ROC_ALIGN; + pool->stack_max_pages = stack_size; + pool->shift = __builtin_clz(block_count) - 8; + pool->ptr_start = 0; + pool->ptr_end = ~0; + pool->stack_caching = 1; + pool->err_int_ena = BIT(NPA_POOL_ERR_INT_OVFLS); + pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_RANGE); + pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_PERR); + + /* Many to one reduction */ + pool->err_qint_idx = pool_id % lf->qints; + + /* Issue AURA_INIT and POOL_INIT op */ + rc = npa_aura_pool_init(lf->mbox, aura_id, aura, pool); + if (rc) + goto stack_mem_free; + + *aura_handle = roc_npa_aura_handle_gen(aura_id, lf->base); + /* Update aura count 
*/ + roc_npa_aura_op_cnt_set(*aura_handle, 0, block_count); + /* Read it back to make sure aura count is updated */ + roc_npa_aura_op_cnt_get(*aura_handle); + + return 0; + +stack_mem_free: + plt_memzone_free(mz); +aura_res_put: + plt_bitmap_set(lf->npa_bmp, aura_id); +exit: + return rc; +} + +int +roc_npa_pool_create(uint64_t *aura_handle, uint32_t block_size, + uint32_t block_count, struct npa_aura_s *aura, + struct npa_pool_s *pool) +{ + struct npa_aura_s defaura; + struct npa_pool_s defpool; + struct idev_cfg *idev; + struct npa_lf *lf; + int rc; + + lf = idev_npa_obj_get(); + if (lf == NULL) { + rc = NPA_ERR_DEVICE_NOT_BOUNDED; + goto error; + } + + idev = idev_get_cfg(); + if (idev == NULL) { + rc = NPA_ERR_ALLOC; + goto error; + } + + if (aura == NULL) { + memset(&defaura, 0, sizeof(struct npa_aura_s)); + aura = &defaura; + } + if (pool == NULL) { + memset(&defpool, 0, sizeof(struct npa_pool_s)); + defpool.nat_align = 1; + defpool.buf_offset = 1; + pool = &defpool; + } + + rc = npa_aura_pool_pair_alloc(lf, block_size, block_count, aura, pool, + aura_handle); + if (rc) { + plt_err("Failed to alloc pool or aura rc=%d", rc); + goto error; + } + + plt_npa_dbg("lf=%p block_sz=%d block_count=%d aura_handle=0x%" PRIx64, + lf, block_size, block_count, *aura_handle); + + /* Just hold the reference of the object */ + __atomic_fetch_add(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST); +error: + return rc; +} + +int +roc_npa_aura_limit_modify(uint64_t aura_handle, uint16_t aura_limit) +{ + struct npa_aq_enq_req *aura_req; + struct npa_lf *lf; + int rc; + + lf = idev_npa_obj_get(); + if (lf == NULL) + return NPA_ERR_DEVICE_NOT_BOUNDED; + + aura_req = mbox_alloc_msg_npa_aq_enq(lf->mbox); + if (aura_req == NULL) + return -ENOMEM; + aura_req->aura_id = roc_npa_aura_handle_to_aura(aura_handle); + aura_req->ctype = NPA_AQ_CTYPE_AURA; + aura_req->op = NPA_AQ_INSTOP_WRITE; + + aura_req->aura.limit = aura_limit; + aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit); + rc = 
mbox_process(lf->mbox); + + return rc; +} + +static int +npa_aura_pool_pair_free(struct npa_lf *lf, uint64_t aura_handle) +{ + char name[PLT_MEMZONE_NAMESIZE]; + int aura_id, pool_id, rc; + + if (!lf || !aura_handle) + return NPA_ERR_PARAM; + + aura_id = roc_npa_aura_handle_to_aura(aura_handle); + pool_id = aura_id; + rc = npa_aura_pool_fini(lf->mbox, aura_id, aura_handle); + rc |= npa_stack_dma_free(lf, name, pool_id); + + plt_bitmap_set(lf->npa_bmp, aura_id); + + return rc; +} + +int +roc_npa_pool_destroy(uint64_t aura_handle) +{ + struct npa_lf *lf = idev_npa_obj_get(); + int rc = 0; + + plt_npa_dbg("lf=%p aura_handle=0x%" PRIx64, lf, aura_handle); + rc = npa_aura_pool_pair_free(lf, aura_handle); + if (rc) + plt_err("Failed to destroy pool or aura rc=%d", rc); + + /* Release the reference of npa */ + rc |= npa_lf_fini(); + return rc; +} + +int +roc_npa_pool_range_update_check(uint64_t aura_handle) +{ + uint64_t aura_id = roc_npa_aura_handle_to_aura(aura_handle); + struct npa_lf *lf; + struct npa_aura_lim *lim; + __io struct npa_pool_s *pool; + struct npa_aq_enq_req *req; + struct npa_aq_enq_rsp *rsp; + int rc; + + lf = idev_npa_obj_get(); + if (lf == NULL) + return NPA_ERR_PARAM; + + lim = lf->aura_lim; + + req = mbox_alloc_msg_npa_aq_enq(lf->mbox); + if (req == NULL) + return -ENOSPC; + + req->aura_id = aura_id; + req->ctype = NPA_AQ_CTYPE_POOL; + req->op = NPA_AQ_INSTOP_READ; + + rc = mbox_process_msg(lf->mbox, (void *)&rsp); + if (rc) { + plt_err("Failed to get pool(0x%" PRIx64 ") context", aura_id); + return rc; + } + + pool = &rsp->pool; + if (lim[aura_id].ptr_start != pool->ptr_start || + lim[aura_id].ptr_end != pool->ptr_end) { + plt_err("Range update failed on pool(0x%" PRIx64 ")", aura_id); + return NPA_ERR_PARAM; + } + + return 0; +} + static inline int npa_attach(struct mbox *mbox) { diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h index 029f966..6983849 100644 --- a/drivers/common/cnxk/roc_npa.h +++ 
b/drivers/common/cnxk/roc_npa.h @@ -6,6 +6,140 @@ #define _ROC_NPA_H_ #define ROC_AURA_ID_MASK (BIT_ULL(16) - 1) +#define ROC_AURA_OP_LIMIT_MASK (BIT_ULL(36) - 1) + +/* + * Generate 64bit handle to have optimized alloc and free aura operation. + * 0 - ROC_AURA_ID_MASK for storing the aura_id. + * [ROC_AURA_ID_MASK+1, (2^64 - 1)] for storing the lf base address. + * This scheme is valid when OS can give ROC_AURA_ID_MASK + * aligned address for lf base address. + */ +static inline uint64_t +roc_npa_aura_handle_gen(uint32_t aura_id, uintptr_t addr) +{ + uint64_t val; + + val = aura_id & ROC_AURA_ID_MASK; + return (uint64_t)addr | val; +} + +static inline uint64_t +roc_npa_aura_handle_to_aura(uint64_t aura_handle) +{ + return aura_handle & ROC_AURA_ID_MASK; +} + +static inline uintptr_t +roc_npa_aura_handle_to_base(uint64_t aura_handle) +{ + return (uintptr_t)(aura_handle & ~ROC_AURA_ID_MASK); +} + +static inline uint64_t +roc_npa_aura_op_alloc(uint64_t aura_handle, const int drop) +{ + uint64_t wdata = roc_npa_aura_handle_to_aura(aura_handle); + int64_t *addr; + + if (drop) + wdata |= BIT_ULL(63); /* DROP */ + + addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_OP_ALLOCX(0)); + return roc_atomic64_add_nosync(wdata, addr); +} + +static inline void +roc_npa_aura_op_free(uint64_t aura_handle, const int fabs, uint64_t iova) +{ + uint64_t reg = roc_npa_aura_handle_to_aura(aura_handle); + const uint64_t addr = + roc_npa_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_FREE0; + if (fabs) + reg |= BIT_ULL(63); /* FABS */ + + roc_store_pair(iova, reg, addr); +} + +static inline uint64_t +roc_npa_aura_op_cnt_get(uint64_t aura_handle) +{ + uint64_t wdata; + int64_t *addr; + uint64_t reg; + + wdata = roc_npa_aura_handle_to_aura(aura_handle) << 44; + addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_OP_CNT); + reg = roc_atomic64_add_nosync(wdata, addr); + + if (reg & BIT_ULL(42) /* OP_ERR */) + return 0; + else + return reg & 
0xFFFFFFFFF; +} + +static inline void +roc_npa_aura_op_cnt_set(uint64_t aura_handle, const int sign, uint64_t count) +{ + uint64_t reg = count & (BIT_ULL(36) - 1); + + if (sign) + reg |= BIT_ULL(43); /* CNT_ADD */ + + reg |= (roc_npa_aura_handle_to_aura(aura_handle) << 44); + + plt_write64(reg, roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_OP_CNT); +} + +static inline uint64_t +roc_npa_aura_op_limit_get(uint64_t aura_handle) +{ + uint64_t wdata; + int64_t *addr; + uint64_t reg; + + wdata = roc_npa_aura_handle_to_aura(aura_handle) << 44; + addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_OP_LIMIT); + reg = roc_atomic64_add_nosync(wdata, addr); + + if (reg & BIT_ULL(42) /* OP_ERR */) + return 0; + else + return reg & ROC_AURA_OP_LIMIT_MASK; +} + +static inline void +roc_npa_aura_op_limit_set(uint64_t aura_handle, uint64_t limit) +{ + uint64_t reg = limit & ROC_AURA_OP_LIMIT_MASK; + + reg |= (roc_npa_aura_handle_to_aura(aura_handle) << 44); + + plt_write64(reg, roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_OP_LIMIT); +} + +static inline uint64_t +roc_npa_aura_op_available(uint64_t aura_handle) +{ + uint64_t wdata; + uint64_t reg; + int64_t *addr; + + wdata = roc_npa_aura_handle_to_aura(aura_handle) << 44; + addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_POOL_OP_AVAILABLE); + reg = roc_atomic64_add_nosync(wdata, addr); + + if (reg & BIT_ULL(42) /* OP_ERR */) + return 0; + else + return reg & 0xFFFFFFFFF; +} struct roc_npa { struct plt_pci_device *pci_dev; @@ -17,6 +151,18 @@ struct roc_npa { int __roc_api roc_npa_dev_init(struct roc_npa *roc_npa); int __roc_api roc_npa_dev_fini(struct roc_npa *roc_npa); +/* NPA pool */ +int __roc_api roc_npa_pool_create(uint64_t *aura_handle, uint32_t block_size, + uint32_t block_count, struct npa_aura_s *aura, + struct npa_pool_s *pool); +int __roc_api roc_npa_aura_limit_modify(uint64_t aura_handle, + uint16_t aura_limit); +int __roc_api 
roc_npa_pool_destroy(uint64_t aura_handle);
+int __roc_api roc_npa_pool_range_update_check(uint64_t aura_handle);
+void __roc_api roc_npa_aura_op_range_set(uint64_t aura_handle,
+					 uint64_t start_iova,
+					 uint64_t end_iova);
+
 /* Debug */
 int __roc_api roc_npa_ctx_dump(void);
 int __roc_api roc_npa_dump(void);
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 3571db3..e2c0de9 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -11,10 +11,15 @@ INTERNAL {
 	roc_idev_npa_maxpools_set;
 	roc_idev_num_lmtlines_get;
 	roc_model;
+	roc_npa_aura_limit_modify;
+	roc_npa_aura_op_range_set;
 	roc_npa_ctx_dump;
 	roc_npa_dev_fini;
 	roc_npa_dev_init;
 	roc_npa_dump;
+	roc_npa_pool_create;
+	roc_npa_pool_destroy;
+	roc_npa_pool_range_update_check;
 	roc_plt_init;

 	local: *;

From patchwork Thu Apr 1 12:37:38 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90386
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:38 +0530
Message-ID: <20210401123817.14348-14-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 13/52] common/cnxk: add npa bulk alloc/free support
List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Ashwin Sekhar T K Add APIs to alloc/free in bulk from NPA pool. Signed-off-by: Ashwin Sekhar T K --- drivers/common/cnxk/roc_npa.h | 229 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 229 insertions(+) diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h index 6983849..b829b23 100644 --- a/drivers/common/cnxk/roc_npa.h +++ b/drivers/common/cnxk/roc_npa.h @@ -8,6 +8,11 @@ #define ROC_AURA_ID_MASK (BIT_ULL(16) - 1) #define ROC_AURA_OP_LIMIT_MASK (BIT_ULL(36) - 1) +/* 16 CASP instructions can be outstanding in CN9k, but we use only 15 + * outstanding CASPs as we run out of registers. + */ +#define ROC_CN9K_NPA_BULK_ALLOC_MAX_PTRS 30 + /* * Generate 64bit handle to have optimized alloc and free aura operation. * 0 - ROC_AURA_ID_MASK for storing the aura_id. @@ -141,6 +146,230 @@ roc_npa_aura_op_available(uint64_t aura_handle) return reg & 0xFFFFFFFFF; } +static inline void +roc_npa_aura_op_bulk_free(uint64_t aura_handle, uint64_t const *buf, + unsigned int num, const int fabs) +{ + unsigned int i; + + for (i = 0; i < num; i++) { + const uint64_t inbuf = buf[i]; + + roc_npa_aura_op_free(aura_handle, fabs, inbuf); + } +} + +static inline unsigned int +roc_npa_aura_bulk_alloc(uint64_t aura_handle, uint64_t *buf, unsigned int num, + const int drop) +{ +#if defined(__aarch64__) + uint64_t wdata = roc_npa_aura_handle_to_aura(aura_handle); + unsigned int i, count; + uint64_t addr; + + if (drop) + wdata |= BIT_ULL(63); /* DROP */ + + addr = roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_OP_ALLOCX(0); + + switch (num) { + case 30: + asm volatile( + ".cpu generic+lse\n" + "mov v18.d[0], %[dst]\n" + "mov v18.d[1], %[loc]\n" + "mov v19.d[0], %[wdata]\n" + "mov v19.d[1], x30\n" + "mov v20.d[0], x24\n" + "mov v20.d[1], x25\n" + "mov v21.d[0], x26\n" + "mov v21.d[1], x27\n" + "mov v22.d[0], x28\n" + "mov v22.d[1], x29\n" + "mov x28, v19.d[0]\n" + "mov x29, 
v19.d[0]\n" + "mov x30, v18.d[1]\n" + "casp x0, x1, x28, x29, [x30]\n" + "casp x2, x3, x28, x29, [x30]\n" + "casp x4, x5, x28, x29, [x30]\n" + "casp x6, x7, x28, x29, [x30]\n" + "casp x8, x9, x28, x29, [x30]\n" + "casp x10, x11, x28, x29, [x30]\n" + "casp x12, x13, x28, x29, [x30]\n" + "casp x14, x15, x28, x29, [x30]\n" + "casp x16, x17, x28, x29, [x30]\n" + "casp x18, x19, x28, x29, [x30]\n" + "casp x20, x21, x28, x29, [x30]\n" + "casp x22, x23, x28, x29, [x30]\n" + "casp x24, x25, x28, x29, [x30]\n" + "casp x26, x27, x28, x29, [x30]\n" + "casp x28, x29, x28, x29, [x30]\n" + "mov x30, v18.d[0]\n" + "stp x0, x1, [x30]\n" + "stp x2, x3, [x30, #16]\n" + "stp x4, x5, [x30, #32]\n" + "stp x6, x7, [x30, #48]\n" + "stp x8, x9, [x30, #64]\n" + "stp x10, x11, [x30, #80]\n" + "stp x12, x13, [x30, #96]\n" + "stp x14, x15, [x30, #112]\n" + "stp x16, x17, [x30, #128]\n" + "stp x18, x19, [x30, #144]\n" + "stp x20, x21, [x30, #160]\n" + "stp x22, x23, [x30, #176]\n" + "stp x24, x25, [x30, #192]\n" + "stp x26, x27, [x30, #208]\n" + "stp x28, x29, [x30, #224]\n" + "mov %[dst], v18.d[0]\n" + "mov %[loc], v18.d[1]\n" + "mov %[wdata], v19.d[0]\n" + "mov x30, v19.d[1]\n" + "mov x24, v20.d[0]\n" + "mov x25, v20.d[1]\n" + "mov x26, v21.d[0]\n" + "mov x27, v21.d[1]\n" + "mov x28, v22.d[0]\n" + "mov x29, v22.d[1]\n" + : + : [wdata] "r"(wdata), [loc] "r"(addr), [dst] "r"(buf) + : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", + "x7", "x8", "x9", "x10", "x11", "x12", "x13", "x14", + "x15", "x16", "x17", "x18", "x19", "x20", "x21", + "x22", "x23", "v18", "v19", "v20", "v21", "v22"); + break; + case 16: + asm volatile( + ".cpu generic+lse\n" + "mov x16, %[wdata]\n" + "mov x17, %[wdata]\n" + "casp x0, x1, x16, x17, [%[loc]]\n" + "casp x2, x3, x16, x17, [%[loc]]\n" + "casp x4, x5, x16, x17, [%[loc]]\n" + "casp x6, x7, x16, x17, [%[loc]]\n" + "casp x8, x9, x16, x17, [%[loc]]\n" + "casp x10, x11, x16, x17, [%[loc]]\n" + "casp x12, x13, x16, x17, [%[loc]]\n" + "casp x14, x15, x16, x17, 
[%[loc]]\n" + "stp x0, x1, [%[dst]]\n" + "stp x2, x3, [%[dst], #16]\n" + "stp x4, x5, [%[dst], #32]\n" + "stp x6, x7, [%[dst], #48]\n" + "stp x8, x9, [%[dst], #64]\n" + "stp x10, x11, [%[dst], #80]\n" + "stp x12, x13, [%[dst], #96]\n" + "stp x14, x15, [%[dst], #112]\n" + : + : [wdata] "r" (wdata), [dst] "r" (buf), [loc] "r" (addr) + : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", + "x7", "x8", "x9", "x10", "x11", "x12", "x13", "x14", + "x15", "x16", "x17" + ); + break; + case 8: + asm volatile( + ".cpu generic+lse\n" + "mov x16, %[wdata]\n" + "mov x17, %[wdata]\n" + "casp x0, x1, x16, x17, [%[loc]]\n" + "casp x2, x3, x16, x17, [%[loc]]\n" + "casp x4, x5, x16, x17, [%[loc]]\n" + "casp x6, x7, x16, x17, [%[loc]]\n" + "stp x0, x1, [%[dst]]\n" + "stp x2, x3, [%[dst], #16]\n" + "stp x4, x5, [%[dst], #32]\n" + "stp x6, x7, [%[dst], #48]\n" + : + : [wdata] "r" (wdata), [dst] "r" (buf), [loc] "r" (addr) + : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", + "x7", "x16", "x17" + ); + break; + case 4: + asm volatile( + ".cpu generic+lse\n" + "mov x16, %[wdata]\n" + "mov x17, %[wdata]\n" + "casp x0, x1, x16, x17, [%[loc]]\n" + "casp x2, x3, x16, x17, [%[loc]]\n" + "stp x0, x1, [%[dst]]\n" + "stp x2, x3, [%[dst], #16]\n" + : + : [wdata] "r" (wdata), [dst] "r" (buf), [loc] "r" (addr) + : "memory", "x0", "x1", "x2", "x3", "x16", "x17" + ); + break; + case 2: + asm volatile( + ".cpu generic+lse\n" + "mov x16, %[wdata]\n" + "mov x17, %[wdata]\n" + "casp x0, x1, x16, x17, [%[loc]]\n" + "stp x0, x1, [%[dst]]\n" + : + : [wdata] "r" (wdata), [dst] "r" (buf), [loc] "r" (addr) + : "memory", "x0", "x1", "x16", "x17" + ); + break; + case 1: + buf[0] = roc_npa_aura_op_alloc(aura_handle, drop); + return !!buf[0]; + } + + /* Pack the pointers */ + for (i = 0, count = 0; i < num; i++) + if (buf[i]) + buf[count++] = buf[i]; + + return count; +#else + unsigned int i, count; + + for (i = 0, count = 0; i < num; i++) { + buf[count] = roc_npa_aura_op_alloc(aura_handle, drop); + if 
(buf[count])
+			count++;
+	}
+
+	return count;
+#endif
+}
+
+static inline unsigned int
+roc_npa_aura_op_bulk_alloc(uint64_t aura_handle, uint64_t *buf,
+			   unsigned int num, const int drop, const int partial)
+{
+	unsigned int chunk, count, num_alloc;
+
+	count = 0;
+	while (num) {
+		chunk = (num >= ROC_CN9K_NPA_BULK_ALLOC_MAX_PTRS) ?
+				ROC_CN9K_NPA_BULK_ALLOC_MAX_PTRS :
+				plt_align32prevpow2(num);
+
+		num_alloc =
+			roc_npa_aura_bulk_alloc(aura_handle, buf, chunk, drop);
+
+		count += num_alloc;
+		buf += num_alloc;
+		num -= num_alloc;
+
+		if (unlikely(num_alloc != chunk))
+			break;
+	}
+
+	/* If the requested number of pointers was not allocated and if partial
+	 * alloc is not desired, then free allocated pointers.
+	 */
+	if (unlikely(num != 0 && !partial)) {
+		roc_npa_aura_op_bulk_free(aura_handle, buf - count, count, 1);
+		count = 0;
+	}
+
+	return count;
+}
+
 struct roc_npa {
 	struct plt_pci_device *pci_dev;

From patchwork Thu Apr 1 12:37:39 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90387
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:39 +0530
Message-ID: <20210401123817.14348-15-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 14/52] common/cnxk: add npa performance counter support

From: Ashwin Sekhar T K

Add APIs to read NPA performance counters.

Signed-off-by: Ashwin Sekhar T K
---
 drivers/common/cnxk/roc_npa.c   | 50 +++++++++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_npa.h   | 37 ++++++++++++++++++++++++++++++
 drivers/common/cnxk/version.map |  1 +
 3 files changed, 88 insertions(+)

diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index 80f5a78..f1e03b7 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -131,6 +131,56 @@ npa_aura_pool_fini(struct mbox *mbox, uint32_t aura_id, uint64_t aura_handle)
 	return 0;
 }

+int
+roc_npa_pool_op_pc_reset(uint64_t aura_handle)
+{
+	struct npa_lf *lf = idev_npa_obj_get();
+	struct npa_aq_enq_req *pool_req;
+	struct npa_aq_enq_rsp *pool_rsp;
+	struct ndc_sync_op *ndc_req;
+	struct mbox_dev *mdev;
+	int rc = -ENOSPC, off;
+	struct mbox *mbox;
+
+	if (lf == NULL)
+		return NPA_ERR_PARAM;
+
+	mbox = lf->mbox;
+	mdev = &mbox->dev[0];
+	plt_npa_dbg("lf=%p aura_handle=0x%" PRIx64, lf, aura_handle);
+
+	pool_req = mbox_alloc_msg_npa_aq_enq(mbox);
+	if (pool_req == NULL)
+		return rc;
+	pool_req->aura_id = roc_npa_aura_handle_to_aura(aura_handle);
+	pool_req->ctype = NPA_AQ_CTYPE_POOL;
+	pool_req->op = NPA_AQ_INSTOP_WRITE;
+	pool_req->pool.op_pc = 0;
+	pool_req->pool_mask.op_pc = ~pool_req->pool_mask.op_pc;
+
+	rc = mbox_process(mbox);
+	if (rc < 0)
+		return rc;
+
+	off = mbox->rx_start +
+	      PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+	pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+	if (pool_rsp->hdr.rc != 0)
+		return NPA_ERR_AURA_POOL_FINI;
+
+	/* Sync NDC-NPA for LF */
+	ndc_req = mbox_alloc_msg_ndc_sync_op(mbox);
+	if (ndc_req == NULL)
+		return -ENOSPC;
+	ndc_req->npa_lf_sync = 1;
+	rc = mbox_process(mbox);
+	if (rc) {
+		plt_err("Error on NDC-NPA
LF sync, rc %d", rc);
+		return NPA_ERR_AURA_POOL_FINI;
+	}
+	return 0;
+}
+
 static inline char *
 npa_stack_memzone_name(struct npa_lf *lf, int pool_id, char *name)
 {
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index b829b23..7c6f78d 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -146,6 +146,40 @@ roc_npa_aura_op_available(uint64_t aura_handle)
 	return reg & 0xFFFFFFFFF;
 }

+static inline uint64_t
+roc_npa_pool_op_performance_counter(uint64_t aura_handle, const int drop)
+{
+	union {
+		uint64_t u;
+		struct npa_aura_op_wdata_s s;
+	} op_wdata;
+	int64_t *addr;
+	uint64_t reg;
+
+	op_wdata.u = 0;
+	op_wdata.s.aura = roc_npa_aura_handle_to_aura(aura_handle);
+	if (drop)
+		op_wdata.s.drop = 1; /* DROP; s.drop is a narrow bitfield, so
+				      * OR-ing BIT_ULL(63) into it would
+				      * truncate to zero and never set it.
+				      */
+
+	addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) +
+			   NPA_LF_POOL_OP_PC);
+
+	reg = roc_atomic64_add_nosync(op_wdata.u, addr);
+	/*
+	 * NPA_LF_POOL_OP_PC Read Data
+	 *
+	 *  63     49 48    48 47     0
+	 *  -----------------------------
+	 *  | Reserved | OP_ERR | OP_PC |
+	 *  -----------------------------
+	 */
+	if (reg & BIT_ULL(48) /* OP_ERR */)
+		return 0;
+	else
+		return reg & 0xFFFFFFFFFFFF;
+}
+
 static inline void
 roc_npa_aura_op_bulk_free(uint64_t aura_handle, uint64_t const *buf,
 			  unsigned int num, const int fabs)
@@ -396,4 +430,7 @@ void __roc_api roc_npa_aura_op_range_set(uint64_t aura_handle,
 int __roc_api roc_npa_ctx_dump(void);
 int __roc_api roc_npa_dump(void);

+/* Reset operation performance counter.
*/
+int __roc_api roc_npa_pool_op_pc_reset(uint64_t aura_handle);
+
 #endif /* _ROC_NPA_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index e2c0de9..78e9686 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -19,6 +19,7 @@ INTERNAL {
 	roc_npa_dump;
 	roc_npa_pool_create;
 	roc_npa_pool_destroy;
+	roc_npa_pool_op_pc_reset;
 	roc_npa_pool_range_update_check;
 	roc_plt_init;

From patchwork Thu Apr 1 12:37:40 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90388
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:40 +0530
Message-ID: <20210401123817.14348-16-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 15/52] common/cnxk: add npa batch alloc/free support

From: Ashwin Sekhar T K

Add APIs to do allocations/frees in batch from NPA pool.
Signed-off-by: Ashwin Sekhar T K --- drivers/common/cnxk/roc_npa.h | 217 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 217 insertions(+) diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h index 7c6f78d..89f5c6f 100644 --- a/drivers/common/cnxk/roc_npa.h +++ b/drivers/common/cnxk/roc_npa.h @@ -8,6 +8,9 @@ #define ROC_AURA_ID_MASK (BIT_ULL(16) - 1) #define ROC_AURA_OP_LIMIT_MASK (BIT_ULL(36) - 1) +#define ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS 512 +#define ROC_CN10K_NPA_BATCH_FREE_MAX_PTRS 15 + /* 16 CASP instructions can be outstanding in CN9k, but we use only 15 * outstanding CASPs as we run out of registers. */ @@ -180,6 +183,114 @@ roc_npa_pool_op_performance_counter(uint64_t aura_handle, const int drop) return reg & 0xFFFFFFFFFFFF; } +static inline int +roc_npa_aura_batch_alloc_issue(uint64_t aura_handle, uint64_t *buf, + unsigned int num, const int dis_wait, + const int drop) +{ + unsigned int i; + int64_t *addr; + uint64_t res; + union { + uint64_t u; + struct npa_batch_alloc_compare_s compare_s; + } cmp; + + if (num > ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS) + return -1; + + /* Zero first word of every cache line */ + for (i = 0; i < num; i += (ROC_ALIGN / sizeof(uint64_t))) + buf[i] = 0; + + addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_BATCH_ALLOC); + cmp.u = 0; + cmp.compare_s.aura = roc_npa_aura_handle_to_aura(aura_handle); + cmp.compare_s.drop = drop; + cmp.compare_s.stype = ALLOC_STYPE_STSTP; + cmp.compare_s.dis_wait = dis_wait; + cmp.compare_s.count = num; + + res = roc_atomic64_cas(cmp.u, (uint64_t)buf, addr); + if (res != ALLOC_RESULT_ACCEPTED && res != ALLOC_RESULT_NOCORE) + return -1; + + return 0; +} + +static inline unsigned int +roc_npa_aura_batch_alloc_count(uint64_t *aligned_buf, unsigned int num) +{ + unsigned int count, i; + + if (num > ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS) + return 0; + + count = 0; + /* Check each ROC cache line one by one */ + for (i = 0; i < num; i += (ROC_ALIGN >> 
3)) { + struct npa_batch_alloc_status_s *status; + int ccode; + + status = (struct npa_batch_alloc_status_s *)&aligned_buf[i]; + + /* Status is updated in first 7 bits of each 128 byte cache + * line. Wait until the status gets updated. + */ + do { + ccode = (volatile int)status->ccode; + } while (ccode == ALLOC_CCODE_INVAL); + + count += status->count; + } + + return count; +} + +static inline unsigned int +roc_npa_aura_batch_alloc_extract(uint64_t *buf, uint64_t *aligned_buf, + unsigned int num) +{ + unsigned int count, i; + + if (num > ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS) + return 0; + + count = 0; + /* Check each ROC cache line one by one */ + for (i = 0; i < num; i += (ROC_ALIGN >> 3)) { + struct npa_batch_alloc_status_s *status; + int line_count, ccode; + + status = (struct npa_batch_alloc_status_s *)&aligned_buf[i]; + + /* Status is updated in first 7 bits of each 128 byte cache + * line. Wait until the status gets updated. + */ + do { + ccode = (volatile int)status->ccode; + } while (ccode == ALLOC_CCODE_INVAL); + + line_count = status->count; + + /* Clear the status from the cache line */ + status->ccode = 0; + status->count = 0; + + /* 'Compress' the allocated buffers as there can + * be 'holes' at the end of the 128 byte cache + * lines. 
+ */ + memmove(&buf[count], &aligned_buf[i], + line_count * sizeof(uint64_t)); + + count += line_count; + } + + return count; +} + static inline void roc_npa_aura_op_bulk_free(uint64_t aura_handle, uint64_t const *buf, unsigned int num, const int fabs) @@ -194,6 +305,112 @@ roc_npa_aura_op_bulk_free(uint64_t aura_handle, uint64_t const *buf, } static inline unsigned int +roc_npa_aura_op_batch_alloc(uint64_t aura_handle, uint64_t *buf, + uint64_t *aligned_buf, unsigned int num, + const int dis_wait, const int drop, + const int partial) +{ + unsigned int count, chunk, num_alloc; + + /* The buffer should be 128 byte cache line aligned */ + if (((uint64_t)aligned_buf & (ROC_ALIGN - 1)) != 0) + return 0; + + count = 0; + while (num) { + chunk = (num > ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS) ? + ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS : + num; + + if (roc_npa_aura_batch_alloc_issue(aura_handle, aligned_buf, + chunk, dis_wait, drop)) + break; + + num_alloc = roc_npa_aura_batch_alloc_extract(buf, aligned_buf, + chunk); + + count += num_alloc; + buf += num_alloc; + num -= num_alloc; + + if (num_alloc != chunk) + break; + } + + /* If the requested number of pointers was not allocated and if partial + * alloc is not desired, then free allocated pointers. 
+ */ + if (unlikely(num != 0 && !partial)) { + roc_npa_aura_op_bulk_free(aura_handle, buf - count, count, 1); + count = 0; + } + + return count; +} + +static inline void +roc_npa_aura_batch_free(uint64_t aura_handle, uint64_t const *buf, + unsigned int num, const int fabs, uint64_t lmt_addr, + uint64_t lmt_id) +{ + uint64_t addr, tar_addr, free0; + volatile uint64_t *lmt_data; + unsigned int i; + + if (num > ROC_CN10K_NPA_BATCH_FREE_MAX_PTRS) + return; + + lmt_data = (uint64_t *)lmt_addr; + + addr = roc_npa_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_BATCH_FREE0; + + /* + * NPA_LF_AURA_BATCH_FREE0 + * + * 63 63 62 33 32 32 31 20 19 0 + * ----------------------------------------- + * | FABS | Rsvd | COUNT_EOT | Rsvd | AURA | + * ----------------------------------------- + */ + free0 = roc_npa_aura_handle_to_aura(aura_handle); + if (fabs) + free0 |= (0x1UL << 63); + if (num & 0x1) + free0 |= (0x1UL << 32); + + /* tar_addr[4:6] is LMTST size-1 in units of 128b */ + tar_addr = addr | ((num >> 1) << 4); + + lmt_data[0] = free0; + for (i = 0; i < num; i++) + lmt_data[i + 1] = buf[i]; + + roc_lmt_submit_steorl(lmt_id, tar_addr); + plt_io_wmb(); +} + +static inline void +roc_npa_aura_op_batch_free(uint64_t aura_handle, uint64_t const *buf, + unsigned int num, const int fabs, uint64_t lmt_addr, + uint64_t lmt_id) +{ + unsigned int chunk; + + while (num) { + chunk = (num >= ROC_CN10K_NPA_BATCH_FREE_MAX_PTRS) ? 
+				ROC_CN10K_NPA_BATCH_FREE_MAX_PTRS :
+				num;
+
+		roc_npa_aura_batch_free(aura_handle, buf, chunk, fabs, lmt_addr,
+					lmt_id);
+
+		buf += chunk;
+		num -= chunk;
+	}
+}
+
+static inline unsigned int
 roc_npa_aura_bulk_alloc(uint64_t aura_handle, uint64_t *buf, unsigned int num,
			 const int drop)
 {

From patchwork Thu Apr 1 12:37:41 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90389
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:41 +0530
Message-ID: <20210401123817.14348-17-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 16/52] common/cnxk: add npa lf init/fini callback support

From: Ashwin Sekhar T K

Add support for NPA LF init/fini callbacks.
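The hook mechanism this patch introduces, a single registered callback that the LF init/fini paths invoke when non-NULL, can be sketched standalone as below (all names here are illustrative stand-ins, not the roc_npa API):

```c
#include <stddef.h>

typedef int (*lf_init_cb_t)(void);
typedef void (*lf_fini_cb_t)(void);

/* One registration slot each, as in roc_npa: the last registered
 * callback wins; a NULL slot means "no hook installed". */
static lf_init_cb_t lf_init_cb;
static lf_fini_cb_t lf_fini_cb;

static void lf_init_cb_register(lf_init_cb_t cb) { lf_init_cb = cb; }
static void lf_fini_cb_register(lf_fini_cb_t cb) { lf_fini_cb = cb; }

static int events;

static int my_init(void) { events += 1; return 0; }
static void my_fini(void) { events += 10; }

/* Mirrors the shape of npa_lf_init()/npa_lf_fini(): run a hook only
 * if one is registered, and propagate the init hook's error so the
 * caller can unwind (as the patch does via goto npa_irq_unregister). */
static int lf_init(void) { return lf_init_cb ? lf_init_cb() : 0; }
static void lf_fini(void) { if (lf_fini_cb) lf_fini_cb(); }

int hook_demo(void)
{
	lf_init_cb_register(my_init);
	lf_fini_cb_register(my_fini);

	if (lf_init() != 0)
		return -1;
	lf_fini();
	return events; /* init ran once (+1), fini ran once (+10) */
}
```

The single-slot design keeps the common code free of any dependency on the mempool driver: the driver registers itself at load time and the LF lifecycle calls it back.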
Signed-off-by: Ashwin Sekhar T K
---
 drivers/common/cnxk/roc_npa.c   | 27 +++++++++++++++++++++++++++
 drivers/common/cnxk/roc_npa.h   |  8 ++++++++
 drivers/common/cnxk/version.map |  2 ++
 3 files changed, 37 insertions(+)

diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index f1e03b7..9528c85 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -5,6 +5,21 @@
 #include "roc_api.h"
 #include "roc_priv.h"
 
+static roc_npa_lf_init_cb_t npa_lf_init_cb;
+static roc_npa_lf_fini_cb_t npa_lf_fini_cb;
+
+void
+roc_npa_lf_init_cb_register(roc_npa_lf_init_cb_t cb)
+{
+	npa_lf_init_cb = cb;
+}
+
+void
+roc_npa_lf_fini_cb_register(roc_npa_lf_fini_cb_t cb)
+{
+	npa_lf_fini_cb = cb;
+}
+
 void
 roc_npa_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
			   uint64_t end_iova)
@@ -717,11 +732,20 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
	if (rc)
		goto npa_fini;
 
+	if (npa_lf_init_cb) {
+		rc = npa_lf_init_cb();
+		if (rc)
+			goto npa_irq_unregister;
+	}
+
	plt_npa_dbg("npa=%p max_pools=%d pf_func=0x%x msix=0x%x", lf,
		    roc_idev_npa_maxpools_get(), lf->pf_func, npa_msixoff);
 
	return 0;
 
+npa_irq_unregister:
+	npa_unregister_irqs(idev->npa);
+
 npa_fini:
	npa_dev_fini(idev->npa);
 npa_detach:
@@ -750,6 +774,9 @@ npa_lf_fini(void)
	rc |= npa_detach(idev->npa->mbox);
	idev_set_defaults(idev);
 
+	if (npa_lf_fini_cb)
+		npa_lf_fini_cb();
+
	return rc;
 }
 
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index 89f5c6f..c857789 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -16,6 +16,10 @@
  */
 #define ROC_CN9K_NPA_BULK_ALLOC_MAX_PTRS 30
 
+/* Callbacks that are called after NPA lf init/fini respectively */
+typedef int (*roc_npa_lf_init_cb_t)(void);
+typedef void (*roc_npa_lf_fini_cb_t)(void);
+
 /*
  * Generate 64bit handle to have optimized alloc and free aura operation.
  * 0 - ROC_AURA_ID_MASK for storing the aura_id.
@@ -650,4 +654,8 @@ int __roc_api roc_npa_dump(void);
 /* Reset operation performance counter. */
 int __roc_api roc_npa_pool_op_pc_reset(uint64_t aura_handle);
 
+/* Callback registration */
+void __roc_api roc_npa_lf_init_cb_register(roc_npa_lf_init_cb_t cb);
+void __roc_api roc_npa_lf_fini_cb_register(roc_npa_lf_fini_cb_t cb);
+
 #endif /* _ROC_NPA_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 78e9686..c0f282d 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -17,6 +17,8 @@ INTERNAL {
	roc_npa_dev_fini;
	roc_npa_dev_init;
	roc_npa_dump;
+	roc_npa_lf_fini_cb_register;
+	roc_npa_lf_init_cb_register;
	roc_npa_pool_create;
	roc_npa_pool_destroy;
	roc_npa_pool_op_pc_reset;

From patchwork Thu Apr 1 12:37:42 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90390
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:42 +0530
Message-ID: <20210401123817.14348-18-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 17/52] common/cnxk: add base nix support
Errors-To: dev-bounces@dpdk.org Sender:
"dev" From: Jerin Jacob Add base nix support as ROC(Rest of Chip) API which will be used by generic ETHDEV PMD(net/cnxk). This patch adds support to device init, fini, resource alloc and free API which sets up a ETHDEV PCI device of either CN9K or CN10K Marvell SoC. Signed-off-by: Jerin Jacob Signed-off-by: Sunil Kumar Kori Signed-off-by: Satha Rao --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_api.h | 3 + drivers/common/cnxk/roc_idev.c | 13 ++ drivers/common/cnxk/roc_idev.h | 2 + drivers/common/cnxk/roc_nix.c | 396 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix.h | 84 ++++++++ drivers/common/cnxk/roc_nix_priv.h | 100 ++++++++++ drivers/common/cnxk/roc_platform.c | 1 + drivers/common/cnxk/roc_platform.h | 2 + drivers/common/cnxk/roc_priv.h | 3 + drivers/common/cnxk/roc_utils.c | 42 ++++ drivers/common/cnxk/version.map | 16 ++ 12 files changed, 663 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix.c create mode 100644 drivers/common/cnxk/roc_nix.h create mode 100644 drivers/common/cnxk/roc_nix_priv.h diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 01a8f80..1efb601 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -15,6 +15,7 @@ sources = files('roc_dev.c', 'roc_irq.c', 'roc_mbox.c', 'roc_model.c', + 'roc_nix.c', 'roc_npa.c', 'roc_npa_debug.c', 'roc_npa_irq.c', diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h index 9289c68..718916d 100644 --- a/drivers/common/cnxk/roc_api.h +++ b/drivers/common/cnxk/roc_api.h @@ -82,6 +82,9 @@ /* NPA */ #include "roc_npa.h" +/* NIX */ +#include "roc_nix.h" + /* Utils */ #include "roc_utils.h" diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c index bf9cce8..a92ac6a 100644 --- a/drivers/common/cnxk/roc_idev.c +++ b/drivers/common/cnxk/roc_idev.c @@ -142,3 +142,16 @@ roc_idev_num_lmtlines_get(void) return num_lmtlines; } + +struct roc_nix * 
+roc_idev_npa_nix_get(void) +{ + struct npa_lf *npa_lf = idev_npa_obj_get(); + struct dev *dev; + + if (!npa_lf) + return NULL; + + dev = container_of(npa_lf, struct dev, npa); + return dev->roc_nix; +} diff --git a/drivers/common/cnxk/roc_idev.h b/drivers/common/cnxk/roc_idev.h index f267865..043e8af 100644 --- a/drivers/common/cnxk/roc_idev.h +++ b/drivers/common/cnxk/roc_idev.h @@ -12,4 +12,6 @@ void __roc_api roc_idev_npa_maxpools_set(uint32_t max_pools); uint64_t __roc_api roc_idev_lmt_base_addr_get(void); uint16_t __roc_api roc_idev_num_lmtlines_get(void); +struct roc_nix *__roc_api roc_idev_npa_nix_get(void); + #endif /* _ROC_IDEV_H_ */ diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c new file mode 100644 index 0000000..040f78c --- /dev/null +++ b/drivers/common/cnxk/roc_nix.c @@ -0,0 +1,396 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +bool +roc_nix_is_lbk(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return nix->lbk_link; +} + +int +roc_nix_get_base_chan(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return nix->rx_chan_base; +} + +uint16_t +roc_nix_get_vwqe_interval(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return nix->vwqe_interval; +} + +bool +roc_nix_is_sdp(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return nix->sdp_link; +} + +bool +roc_nix_is_pf(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return !dev_is_vf(&nix->dev); +} + +int +roc_nix_get_pf(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + return dev_get_pf(dev->pf_func); +} + +int +roc_nix_get_vf(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + return dev_get_vf(dev->pf_func); +} 
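As a standalone illustration of the pointer walk `roc_idev_npa_nix_get()` performs, recovering the enclosing `struct dev` from a pointer to its embedded NPA member, here is a minimal `container_of` sketch (the structures are toy stand-ins, not the cnxk definitions):

```c
#include <stddef.h>

/* Generic container_of: recover a pointer to the enclosing structure
 * from a pointer to one of its members. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct npa_lf { int qints; };

struct dev {
	int pf_func;
	struct npa_lf npa;	/* embedded by value, not a pointer */
	void *roc_nix;
};

int container_demo(void)
{
	struct dev d = { .pf_func = 42 };
	struct npa_lf *lf = &d.npa;

	/* Same direction as roc_idev_npa_nix_get(): npa_lf -> dev */
	struct dev *back = container_of(lf, struct dev, npa);
	return back->pf_func;
}
```

This only works because `npa` is embedded in `struct dev` by value; the subtraction of `offsetof` lands exactly on the start of the enclosing object.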
+ +bool +roc_nix_is_vf_or_sdp(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return (dev_is_vf(&nix->dev) != 0) || roc_nix_is_sdp(roc_nix); +} + +uint16_t +roc_nix_get_pf_func(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + return dev->pf_func; +} + +int +roc_nix_max_pkt_len(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + if (roc_model_is_cn9k()) + return NIX_CN9K_MAX_HW_FRS; + + if (nix->lbk_link || roc_nix_is_sdp(roc_nix)) + return NIX_LBK_MAX_HW_FRS; + + return NIX_RPM_MAX_HW_FRS; +} + +int +roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, uint32_t nb_txq, + uint64_t rx_cfg) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_lf_alloc_req *req; + struct nix_lf_alloc_rsp *rsp; + int rc = -ENOSPC; + + req = mbox_alloc_msg_nix_lf_alloc(mbox); + if (req == NULL) + return rc; + req->rq_cnt = nb_rxq; + req->sq_cnt = nb_txq; + req->cq_cnt = nb_rxq; + /* XQESZ can be W64 or W16 */ + req->xqe_sz = NIX_XQESZ_W16; + req->rss_sz = nix->reta_sz; + req->rss_grps = ROC_NIX_RSS_GRPS; + req->npa_func = idev_npa_pffunc_get(); + req->rx_cfg = rx_cfg; + + if (!roc_nix->rss_tag_as_xor) + req->flags = NIX_LF_RSS_TAG_LSB_AS_ADDER; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto fail; + + nix->sqb_size = rsp->sqb_size; + nix->tx_chan_base = rsp->tx_chan_base; + nix->rx_chan_base = rsp->rx_chan_base; + if (roc_nix_is_lbk(roc_nix) && roc_nix->enable_loop) + nix->tx_chan_base = rsp->rx_chan_base; + nix->rx_chan_cnt = rsp->rx_chan_cnt; + nix->tx_chan_cnt = rsp->tx_chan_cnt; + nix->lso_tsov4_idx = rsp->lso_tsov4_idx; + nix->lso_tsov6_idx = rsp->lso_tsov6_idx; + nix->lf_tx_stats = rsp->lf_tx_stats; + nix->lf_rx_stats = rsp->lf_rx_stats; + nix->cints = rsp->cints; + roc_nix->cints = rsp->cints; + nix->qints = rsp->qints; + nix->ptp_en = rsp->hw_rx_tstamp_en; + roc_nix->rx_ptp_ena = 
rsp->hw_rx_tstamp_en; + nix->cgx_links = rsp->cgx_links; + nix->lbk_links = rsp->lbk_links; + nix->sdp_links = rsp->sdp_links; + nix->tx_link = rsp->tx_link; + nix->nb_rx_queues = nb_rxq; + nix->nb_tx_queues = nb_txq; + nix->sqs = plt_zmalloc(sizeof(struct roc_nix_sq *) * nb_txq, 0); + if (!nix->sqs) + return -ENOMEM; +fail: + return rc; +} + +int +roc_nix_lf_free(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_lf_free_req *req; + struct ndc_sync_op *ndc_req; + int rc = -ENOSPC; + + plt_free(nix->sqs); + nix->sqs = NULL; + + /* Sync NDC-NIX for LF */ + ndc_req = mbox_alloc_msg_ndc_sync_op(mbox); + if (ndc_req == NULL) + return rc; + ndc_req->nix_lf_tx_sync = 1; + ndc_req->nix_lf_rx_sync = 1; + rc = mbox_process(mbox); + if (rc) + plt_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc); + + req = mbox_alloc_msg_nix_lf_free(mbox); + if (req == NULL) + return -ENOSPC; + /* Let AF driver free all this nix lf's + * NPC entries allocated using NPC MBOX. 
+ */ + req->flags = 0; + + return mbox_process(mbox); +} + +static inline int +nix_lf_attach(struct dev *dev) +{ + struct mbox *mbox = dev->mbox; + struct rsrc_attach_req *req; + int rc = -ENOSPC; + + /* Attach NIX(lf) */ + req = mbox_alloc_msg_attach_resources(mbox); + if (req == NULL) + return rc; + req->modify = true; + req->nixlf = true; + + return mbox_process(mbox); +} + +static inline int +nix_lf_get_msix_offset(struct dev *dev, struct nix *nix) +{ + struct msix_offset_rsp *msix_rsp; + struct mbox *mbox = dev->mbox; + int rc; + + /* Get MSIX vector offsets */ + mbox_alloc_msg_msix_offset(mbox); + rc = mbox_process_msg(mbox, (void *)&msix_rsp); + if (rc == 0) + nix->msixoff = msix_rsp->nix_msixoff; + + return rc; +} + +static inline int +nix_lf_detach(struct nix *nix) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct rsrc_detach_req *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_detach_resources(mbox); + if (req == NULL) + return rc; + req->partial = true; + req->nixlf = true; + + return mbox_process(mbox); +} + +static int +roc_nix_get_hw_info(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_hw_info *hw_info; + int rc; + + mbox_alloc_msg_nix_get_hw_info(mbox); + rc = mbox_process_msg(mbox, (void *)&hw_info); + if (rc == 0) + nix->vwqe_interval = hw_info->vwqe_delay; + + return rc; +} + +static void +sdp_lbk_id_update(struct plt_pci_device *pci_dev, struct nix *nix) +{ + nix->sdp_link = false; + nix->lbk_link = false; + + /* Update SDP/LBK link based on PCI device id */ + switch (pci_dev->id.device_id) { + case PCI_DEVID_CNXK_RVU_SDP_PF: + case PCI_DEVID_CNXK_RVU_SDP_VF: + nix->sdp_link = true; + break; + case PCI_DEVID_CNXK_RVU_AF_VF: + nix->lbk_link = true; + break; + default: + break; + } +} + +static inline uint64_t +nix_get_blkaddr(struct dev *dev) +{ + uint64_t reg; + + /* Reading the discovery register to know which NIX is the LF + * attached to. 
+ */ + reg = plt_read64(dev->bar2 + + RVU_PF_BLOCK_ADDRX_DISC(RVU_BLOCK_ADDR_NIX0)); + + return reg & 0x1FFULL ? RVU_BLOCK_ADDR_NIX0 : RVU_BLOCK_ADDR_NIX1; +} + +int +roc_nix_dev_init(struct roc_nix *roc_nix) +{ + enum roc_nix_rss_reta_sz reta_sz; + struct plt_pci_device *pci_dev; + uint16_t max_sqb_count; + uint64_t blkaddr; + struct dev *dev; + struct nix *nix; + int rc; + + if (roc_nix == NULL || roc_nix->pci_dev == NULL) + return NIX_ERR_PARAM; + + reta_sz = roc_nix->reta_sz; + if (reta_sz != 0 && reta_sz != 64 && reta_sz != 128 && reta_sz != 256) + return NIX_ERR_PARAM; + + if (reta_sz == 0) + reta_sz = ROC_NIX_RSS_RETA_SZ_64; + + max_sqb_count = roc_nix->max_sqb_count; + max_sqb_count = PLT_MIN(max_sqb_count, NIX_MAX_SQB); + max_sqb_count = PLT_MAX(max_sqb_count, NIX_MIN_SQB); + roc_nix->max_sqb_count = max_sqb_count; + + PLT_STATIC_ASSERT(sizeof(struct nix) <= ROC_NIX_MEM_SZ); + nix = roc_nix_to_nix_priv(roc_nix); + pci_dev = roc_nix->pci_dev; + dev = &nix->dev; + + if (nix->dev.drv_inited) + return 0; + + if (dev->mbox_active) + goto skip_dev_init; + + memset(nix, 0, sizeof(*nix)); + /* Initialize device */ + rc = dev_init(dev, pci_dev); + if (rc) { + plt_err("Failed to init roc device"); + goto fail; + } + +skip_dev_init: + dev->roc_nix = roc_nix; + + nix->lmt_base = dev->lmt_base; + /* Expose base LMT line address for + * "Per Core LMT line" mode. + */ + roc_nix->lmt_base = dev->lmt_base; + + /* Attach NIX LF */ + rc = nix_lf_attach(dev); + if (rc) + goto dev_fini; + + blkaddr = nix_get_blkaddr(dev); + nix->is_nix1 = (blkaddr == RVU_BLOCK_ADDR_NIX1); + + /* Calculating base address based on which NIX block LF + * is attached to. 
+ */ + nix->base = dev->bar2 + (blkaddr << 20); + + /* Get NIX MSIX offset */ + rc = nix_lf_get_msix_offset(dev, nix); + if (rc) + goto lf_detach; + + /* Update nix context */ + sdp_lbk_id_update(pci_dev, nix); + nix->pci_dev = pci_dev; + nix->reta_sz = reta_sz; + nix->mtu = ROC_NIX_DEFAULT_HW_FRS; + + /* Get NIX HW info */ + roc_nix_get_hw_info(roc_nix); + nix->dev.drv_inited = true; + + return 0; +lf_detach: + nix_lf_detach(nix); +dev_fini: + rc |= dev_fini(dev, pci_dev); +fail: + return rc; +} + +int +roc_nix_dev_fini(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + int rc = 0; + + if (nix == NULL) + return NIX_ERR_PARAM; + + if (!nix->dev.drv_inited) + goto fini; + + rc = nix_lf_detach(nix); + nix->dev.drv_inited = false; +fini: + rc |= dev_fini(&nix->dev, nix->pci_dev); + return rc; +} diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h new file mode 100644 index 0000000..fc078f8 --- /dev/null +++ b/drivers/common/cnxk/roc_nix.h @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_NIX_H_ +#define _ROC_NIX_H_ + +/* Constants */ +enum roc_nix_rss_reta_sz { + ROC_NIX_RSS_RETA_SZ_64 = 64, + ROC_NIX_RSS_RETA_SZ_128 = 128, + ROC_NIX_RSS_RETA_SZ_256 = 256, +}; + +enum roc_nix_sq_max_sqe_sz { + roc_nix_maxsqesz_w16 = NIX_MAXSQESZ_W16, + roc_nix_maxsqesz_w8 = NIX_MAXSQESZ_W8, +}; + +/* NIX LF RX offload configuration flags. 
+ * These are input flags to roc_nix_lf_alloc:rx_cfg + */ +#define ROC_NIX_LF_RX_CFG_DROP_RE BIT_ULL(32) +#define ROC_NIX_LF_RX_CFG_L2_LEN_ERR BIT_ULL(33) +#define ROC_NIX_LF_RX_CFG_IP6_UDP_OPT BIT_ULL(34) +#define ROC_NIX_LF_RX_CFG_DIS_APAD BIT_ULL(35) +#define ROC_NIX_LF_RX_CFG_CSUM_IL4 BIT_ULL(36) +#define ROC_NIX_LF_RX_CFG_CSUM_OL4 BIT_ULL(37) +#define ROC_NIX_LF_RX_CFG_LEN_IL4 BIT_ULL(38) +#define ROC_NIX_LF_RX_CFG_LEN_IL3 BIT_ULL(39) +#define ROC_NIX_LF_RX_CFG_LEN_OL4 BIT_ULL(40) +#define ROC_NIX_LF_RX_CFG_LEN_OL3 BIT_ULL(41) + +/* Group 0 will be used for RSS, 1 -7 will be used for npc_flow RSS action*/ +#define ROC_NIX_RSS_GROUP_DEFAULT 0 +#define ROC_NIX_RSS_GRPS 8 +#define ROC_NIX_RSS_RETA_MAX ROC_NIX_RSS_RETA_SZ_256 +#define ROC_NIX_RSS_KEY_LEN 48 /* 352 Bits */ + +#define ROC_NIX_DEFAULT_HW_FRS 1514 + +#define ROC_NIX_VWQE_MAX_SIZE_LOG2 11 +#define ROC_NIX_VWQE_MIN_SIZE_LOG2 2 +struct roc_nix { + /* Input parameters */ + struct plt_pci_device *pci_dev; + uint16_t port_id; + bool rss_tag_as_xor; + uint16_t max_sqb_count; + enum roc_nix_rss_reta_sz reta_sz; + bool enable_loop; + /* End of input parameters */ + /* LMT line base for "Per Core Tx LMT line" mode*/ + uintptr_t lmt_base; + bool io_enabled; + bool rx_ptp_ena; + uint16_t cints; + +#define ROC_NIX_MEM_SZ (6 * 1024) + uint8_t reserved[ROC_NIX_MEM_SZ] __plt_cache_aligned; +} __plt_cache_aligned; + +/* Dev */ +int __roc_api roc_nix_dev_init(struct roc_nix *roc_nix); +int __roc_api roc_nix_dev_fini(struct roc_nix *roc_nix); + +/* Type */ +bool __roc_api roc_nix_is_lbk(struct roc_nix *roc_nix); +bool __roc_api roc_nix_is_sdp(struct roc_nix *roc_nix); +bool __roc_api roc_nix_is_pf(struct roc_nix *roc_nix); +bool __roc_api roc_nix_is_vf_or_sdp(struct roc_nix *roc_nix); +int __roc_api roc_nix_get_base_chan(struct roc_nix *roc_nix); +int __roc_api roc_nix_get_pf(struct roc_nix *roc_nix); +int __roc_api roc_nix_get_vf(struct roc_nix *roc_nix); +uint16_t __roc_api roc_nix_get_pf_func(struct roc_nix 
*roc_nix); +uint16_t __roc_api roc_nix_get_vwqe_interval(struct roc_nix *roc_nix); +int __roc_api roc_nix_max_pkt_len(struct roc_nix *roc_nix); + +/* LF ops */ +int __roc_api roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, + uint32_t nb_txq, uint64_t rx_cfg); +int __roc_api roc_nix_lf_free(struct roc_nix *roc_nix); + +#endif /* _ROC_NIX_H_ */ diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h new file mode 100644 index 0000000..92a0d2e --- /dev/null +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -0,0 +1,100 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_NIX_PRIV_H_ +#define _ROC_NIX_PRIV_H_ + +/* Constants */ +#define NIX_CQ_ENTRY_SZ 128 +#define NIX_CQ_ENTRY64_SZ 512 +#define NIX_CQ_ALIGN (uint16_t)512 +#define NIX_MAX_SQB (uint16_t)512 +#define NIX_DEF_SQB (uint16_t)16 +#define NIX_MIN_SQB (uint16_t)8 +#define NIX_SQB_LIST_SPACE (uint16_t)2 +#define NIX_SQB_LOWER_THRESH (uint16_t)70 + +/* Apply BP/DROP when CQ is 95% full */ +#define NIX_CQ_THRESH_LEVEL (5 * 256 / 100) + +struct nix { + uint16_t reta[ROC_NIX_RSS_GRPS][ROC_NIX_RSS_RETA_MAX]; + enum roc_nix_rss_reta_sz reta_sz; + struct plt_pci_device *pci_dev; + uint16_t bpid[NIX_MAX_CHAN]; + struct roc_nix_sq **sqs; + uint16_t vwqe_interval; + uint16_t tx_chan_base; + uint16_t rx_chan_base; + uint16_t nb_rx_queues; + uint16_t nb_tx_queues; + uint8_t lso_tsov6_idx; + uint8_t lso_tsov4_idx; + uint8_t lf_rx_stats; + uint8_t lf_tx_stats; + uint8_t rx_chan_cnt; + uint8_t rss_alg_idx; + uint8_t tx_chan_cnt; + uintptr_t lmt_base; + uint8_t cgx_links; + uint8_t lbk_links; + uint8_t sdp_links; + uint8_t tx_link; + uint16_t sqb_size; + /* Without FCS, with L2 overhead */ + uint16_t mtu; + uint16_t chan_cnt; + uint16_t msixoff; + uint8_t rx_pause; + uint8_t tx_pause; + struct dev dev; + uint16_t cints; + uint16_t qints; + uintptr_t base; + bool sdp_link; + bool lbk_link; + bool ptp_en; + bool is_nix1; + +} __plt_cache_aligned; + 
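The private `struct nix` above is not allocated separately: it is carved out of the opaque `reserved[ROC_NIX_MEM_SZ]` tail of the public `struct roc_nix`, guarded by a static assert, with `roc_nix_to_nix_priv()`/`nix_priv_to_roc_nix()` converting in both directions. A minimal sketch of that embedding trick, using toy names rather than the cnxk types:

```c
#include <stddef.h>

/* Public, ABI-stable handle: all private state lives in an opaque,
 * fixed-size reserved[] tail (ROC_NIX_MEM_SZ plays this role above). */
#define MEM_SZ 64

struct pub {
	int port_id;			/* input parameters ... */
	unsigned char reserved[MEM_SZ];	/* opaque storage for struct priv */
};

struct priv {
	int rx_chan_base;
};

/* Compile-time guard, the counterpart of
 * PLT_STATIC_ASSERT(sizeof(struct nix) <= ROC_NIX_MEM_SZ). */
_Static_assert(sizeof(struct priv) <= MEM_SZ, "priv must fit in reserved[]");

/* Counterpart of roc_nix_to_nix_priv() */
static struct priv *pub_to_priv(struct pub *p)
{
	return (struct priv *)&p->reserved[0];
}

/* Counterpart of nix_priv_to_roc_nix(): walk back from the embedded
 * private area to the enclosing public handle. */
static struct pub *priv_to_pub(struct priv *v)
{
	return (struct pub *)((char *)v - offsetof(struct pub, reserved));
}

int embed_demo(void)
{
	struct pub p = { .port_id = 7 };
	struct priv *v = pub_to_priv(&p);

	v->rx_chan_base = 0x800;
	/* Round trip: the private pointer leads back to the same handle */
	return priv_to_pub(v)->port_id;
}
```

Keeping the private layout inside a reserved blob lets the driver change `struct nix` freely without touching the size or fields of the exported `struct roc_nix`.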
+enum nix_err_status { + NIX_ERR_PARAM = -2048, + NIX_ERR_NO_MEM, + NIX_ERR_INVALID_RANGE, + NIX_ERR_INTERNAL, + NIX_ERR_OP_NOTSUP, + NIX_ERR_QUEUE_INVALID_RANGE, + NIX_ERR_AQ_READ_FAILED, + NIX_ERR_AQ_WRITE_FAILED, + NIX_ERR_NDC_SYNC, +}; + +enum nix_q_size { + nix_q_size_16, /* 16 entries */ + nix_q_size_64, /* 64 entries */ + nix_q_size_256, + nix_q_size_1K, + nix_q_size_4K, + nix_q_size_16K, + nix_q_size_64K, + nix_q_size_256K, + nix_q_size_1M, /* Million entries */ + nix_q_size_max +}; + +static inline struct nix * +roc_nix_to_nix_priv(struct roc_nix *roc_nix) +{ + return (struct nix *)&roc_nix->reserved[0]; +} + +static inline struct roc_nix * +nix_priv_to_roc_nix(struct nix *nix) +{ + return (struct roc_nix *)((char *)nix - + offsetof(struct roc_nix, reserved)); +} + +#endif /* _ROC_NIX_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index 7dce0bd..3ac81ce 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -31,3 +31,4 @@ roc_plt_init(void) RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE); +RTE_LOG_REGISTER(cnxk_logtype_nix, pmd.net.cnxk, NOTICE); diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index 7ffaca6..5e4976b 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -135,6 +135,7 @@ extern int cnxk_logtype_base; extern int cnxk_logtype_mbox; extern int cnxk_logtype_npa; +extern int cnxk_logtype_nix; #define plt_err(fmt, args...) \ RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args) @@ -153,6 +154,7 @@ extern int cnxk_logtype_npa; #define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__) #define plt_mbox_dbg(fmt, ...) plt_dbg(mbox, fmt, ##__VA_ARGS__) #define plt_npa_dbg(fmt, ...) plt_dbg(npa, fmt, ##__VA_ARGS__) +#define plt_nix_dbg(fmt, ...) 
plt_dbg(nix, fmt, ##__VA_ARGS__) #ifdef __cplusplus #define CNXK_PCI_ID(subsystem_dev, dev) \ diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h index 21599dc..7371785 100644 --- a/drivers/common/cnxk/roc_priv.h +++ b/drivers/common/cnxk/roc_priv.h @@ -20,4 +20,7 @@ /* idev */ #include "roc_idev_priv.h" +/* NIX */ +#include "roc_nix_priv.h" + #endif /* _ROC_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c index b5d8f0b..2b157a3 100644 --- a/drivers/common/cnxk/roc_utils.c +++ b/drivers/common/cnxk/roc_utils.c @@ -11,10 +11,37 @@ roc_error_msg_get(int errorcode) const char *err_msg; switch (errorcode) { + case NIX_AF_ERR_PARAM: + case NIX_ERR_PARAM: case NPA_ERR_PARAM: case UTIL_ERR_PARAM: err_msg = "Invalid parameter"; break; + case NIX_ERR_NO_MEM: + err_msg = "Out of memory"; + break; + case NIX_ERR_INVALID_RANGE: + err_msg = "Range is not supported"; + break; + case NIX_ERR_INTERNAL: + err_msg = "Internal error"; + break; + case NIX_ERR_OP_NOTSUP: + err_msg = "Operation not supported"; + break; + case NIX_ERR_QUEUE_INVALID_RANGE: + err_msg = "Invalid Queue range"; + break; + case NIX_ERR_AQ_READ_FAILED: + err_msg = "AQ read failed"; + break; + case NIX_ERR_AQ_WRITE_FAILED: + err_msg = "AQ write failed"; + break; + case NIX_ERR_NDC_SYNC: + err_msg = "NDC Sync failed"; + break; + break; case NPA_ERR_ALLOC: err_msg = "NPA alloc failed"; break; @@ -36,6 +63,21 @@ roc_error_msg_get(int errorcode) case NPA_ERR_DEVICE_NOT_BOUNDED: err_msg = "NPA device is not bounded"; break; + case NIX_AF_ERR_AQ_FULL: + err_msg = "AQ full"; + break; + case NIX_AF_ERR_AQ_ENQUEUE: + err_msg = "AQ enqueue failed"; + break; + case NIX_AF_ERR_AF_LF_INVALID: + err_msg = "Invalid NIX LF"; + break; + case NIX_AF_ERR_AF_LF_ALLOC: + err_msg = "NIX LF alloc failed"; + break; + case NIX_AF_ERR_LF_RESET: + err_msg = "NIX LF reset failed"; + break; case UTIL_ERR_FS: err_msg = "file operation failed"; break; diff --git 
a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index c0f282d..62aa2ba 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -3,14 +3,30 @@ INTERNAL {
	cnxk_logtype_base;
	cnxk_logtype_mbox;
+	cnxk_logtype_nix;
	cnxk_logtype_npa;
	roc_clk_freq_get;
	roc_error_msg_get;
	roc_idev_lmt_base_addr_get;
	roc_idev_npa_maxpools_get;
	roc_idev_npa_maxpools_set;
+	roc_idev_npa_nix_get;
	roc_idev_num_lmtlines_get;
	roc_model;
+	roc_nix_dev_fini;
+	roc_nix_dev_init;
+	roc_nix_get_base_chan;
+	roc_nix_get_pf;
+	roc_nix_get_pf_func;
+	roc_nix_get_vf;
+	roc_nix_get_vwqe_interval;
+	roc_nix_is_lbk;
+	roc_nix_is_pf;
+	roc_nix_is_sdp;
+	roc_nix_is_vf_or_sdp;
+	roc_nix_lf_alloc;
+	roc_nix_lf_free;
+	roc_nix_max_pkt_len;
	roc_npa_aura_limit_modify;
	roc_npa_aura_op_range_set;
	roc_npa_ctx_dump;

From patchwork Thu Apr 1 12:37:43 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90391
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
CC: Harman Kalra
Date: Thu, 1 Apr 2021 18:07:43 +0530
Message-ID: <20210401123817.14348-19-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 18/52] common/cnxk: add nix irq support
List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add support to register NIX error and completion queue IRQ's using base device class IRQ helper API's. Signed-off-by: Jerin Jacob Signed-off-by: Sunil Kumar Kori Signed-off-by: Harman Kalra --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_nix.c | 7 + drivers/common/cnxk/roc_nix.h | 12 + drivers/common/cnxk/roc_nix_irq.c | 484 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix_priv.h | 18 ++ drivers/common/cnxk/version.map | 8 + 6 files changed, 530 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_irq.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 1efb601..19619c3 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -16,6 +16,7 @@ sources = files('roc_dev.c', 'roc_mbox.c', 'roc_model.c', 'roc_nix.c', + 'roc_nix_irq.c', 'roc_npa.c', 'roc_npa_debug.c', 'roc_npa_irq.c', diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c index 040f78c..e64936e 100644 --- a/drivers/common/cnxk/roc_nix.c +++ b/drivers/common/cnxk/roc_nix.c @@ -363,6 +363,11 @@ roc_nix_dev_init(struct roc_nix *roc_nix) nix->reta_sz = reta_sz; nix->mtu = ROC_NIX_DEFAULT_HW_FRS; + /* Register error and ras interrupts */ + rc = nix_register_irqs(nix); + if (rc) + goto lf_detach; + /* Get NIX HW info */ roc_nix_get_hw_info(roc_nix); nix->dev.drv_inited = true; @@ -388,6 +393,8 @@ roc_nix_dev_fini(struct roc_nix *roc_nix) if (!nix->dev.drv_inited) goto fini; + nix_unregister_irqs(nix); + rc = nix_lf_detach(nix); nix->dev.drv_inited = false; fini: diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index fc078f8..ca96de7 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -81,4 +81,16 @@ int __roc_api roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, uint32_t nb_txq, uint64_t rx_cfg); int __roc_api 
roc_nix_lf_free(struct roc_nix *roc_nix); +/* IRQ */ +void __roc_api roc_nix_rx_queue_intr_enable(struct roc_nix *roc_nix, + uint16_t rxq_id); +void __roc_api roc_nix_rx_queue_intr_disable(struct roc_nix *roc_nix, + uint16_t rxq_id); +void __roc_api roc_nix_err_intr_ena_dis(struct roc_nix *roc_nix, bool enb); +void __roc_api roc_nix_ras_intr_ena_dis(struct roc_nix *roc_nix, bool enb); +int __roc_api roc_nix_register_queue_irqs(struct roc_nix *roc_nix); +void __roc_api roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix); +int __roc_api roc_nix_register_cq_irqs(struct roc_nix *roc_nix); +void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix); + #endif /* _ROC_NIX_H_ */ diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c new file mode 100644 index 0000000..79f25b0 --- /dev/null +++ b/drivers/common/cnxk/roc_nix_irq.c @@ -0,0 +1,484 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +static void +nix_err_intr_enb_dis(struct nix *nix, bool enb) +{ + /* Enable all nix lf error irqs except RQ_DISABLED and CQ_DISABLED */ + if (enb) + plt_write64(~(BIT_ULL(11) | BIT_ULL(24)), + nix->base + NIX_LF_ERR_INT_ENA_W1S); + else + plt_write64(~0ull, nix->base + NIX_LF_ERR_INT_ENA_W1C); +} + +static void +nix_ras_intr_enb_dis(struct nix *nix, bool enb) +{ + if (enb) + plt_write64(~0ull, nix->base + NIX_LF_RAS_ENA_W1S); + else + plt_write64(~0ull, nix->base + NIX_LF_RAS_ENA_W1C); +} + +void +roc_nix_rx_queue_intr_enable(struct roc_nix *roc_nix, uint16_t rx_queue_id) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + /* Enable CINT interrupt */ + plt_write64(BIT_ULL(0), nix->base + NIX_LF_CINTX_ENA_W1S(rx_queue_id)); +} + +void +roc_nix_rx_queue_intr_disable(struct roc_nix *roc_nix, uint16_t rx_queue_id) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + /* Clear and disable CINT interrupt */ + plt_write64(BIT_ULL(0), nix->base + 
NIX_LF_CINTX_ENA_W1C(rx_queue_id)); +} + +void +roc_nix_err_intr_ena_dis(struct roc_nix *roc_nix, bool enb) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return nix_err_intr_enb_dis(nix, enb); +} + +void +roc_nix_ras_intr_ena_dis(struct roc_nix *roc_nix, bool enb) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return nix_ras_intr_enb_dis(nix, enb); +} + +static void +nix_lf_err_irq(void *param) +{ + struct nix *nix = (struct nix *)param; + struct dev *dev = &nix->dev; + uint64_t intr; + + intr = plt_read64(nix->base + NIX_LF_ERR_INT); + if (intr == 0) + return; + + plt_err("Err_irq=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf); + + /* Clear interrupt */ + plt_write64(intr, nix->base + NIX_LF_ERR_INT); +} + +static int +nix_lf_register_err_irq(struct nix *nix) +{ + struct plt_intr_handle *handle = &nix->pci_dev->intr_handle; + int rc, vec; + + vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT; + /* Clear err interrupt */ + nix_err_intr_enb_dis(nix, false); + /* Set used interrupt vectors */ + rc = dev_irq_register(handle, nix_lf_err_irq, nix, vec); + /* Enable all dev interrupt except for RQ_DISABLED */ + nix_err_intr_enb_dis(nix, true); + + return rc; +} + +static void +nix_lf_unregister_err_irq(struct nix *nix) +{ + struct plt_intr_handle *handle = &nix->pci_dev->intr_handle; + int vec; + + vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT; + /* Clear err interrupt */ + nix_err_intr_enb_dis(nix, false); + dev_irq_unregister(handle, nix_lf_err_irq, nix, vec); +} + +static void +nix_lf_ras_irq(void *param) +{ + struct nix *nix = (struct nix *)param; + struct dev *dev = &nix->dev; + uint64_t intr; + + intr = plt_read64(nix->base + NIX_LF_RAS); + if (intr == 0) + return; + + plt_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf); + /* Clear interrupt */ + plt_write64(intr, nix->base + NIX_LF_RAS); +} + +static int +nix_lf_register_ras_irq(struct nix *nix) +{ + struct plt_intr_handle *handle = &nix->pci_dev->intr_handle; + int rc, vec; + 
+ vec = nix->msixoff + NIX_LF_INT_VEC_POISON; + /* Clear RAS interrupt */ + nix_ras_intr_enb_dis(nix, false); + /* Set used interrupt vectors */ + rc = dev_irq_register(handle, nix_lf_ras_irq, nix, vec); + /* Enable dev interrupt */ + nix_ras_intr_enb_dis(nix, true); + + return rc; +} + +static void +nix_lf_unregister_ras_irq(struct nix *nix) +{ + struct plt_intr_handle *handle = &nix->pci_dev->intr_handle; + int vec; + + vec = nix->msixoff + NIX_LF_INT_VEC_POISON; + /* Clear RAS interrupt */ + nix_ras_intr_enb_dis(nix, false); + dev_irq_unregister(handle, nix_lf_ras_irq, nix, vec); +} + +static inline uint8_t +nix_lf_q_irq_get_and_clear(struct nix *nix, uint16_t q, uint32_t off, + uint64_t mask) +{ + uint64_t reg, wdata; + uint8_t qint; + + wdata = (uint64_t)q << 44; + reg = roc_atomic64_add_nosync(wdata, (int64_t *)(nix->base + off)); + + if (reg & BIT_ULL(42) /* OP_ERR */) { + plt_err("Failed to execute irq get off=0x%x", off); + return 0; + } + qint = reg & 0xff; + wdata &= mask; + plt_write64(wdata | qint, nix->base + off); + + return qint; +} + +static inline uint8_t +nix_lf_rq_irq_get_and_clear(struct nix *nix, uint16_t rq) +{ + return nix_lf_q_irq_get_and_clear(nix, rq, NIX_LF_RQ_OP_INT, ~0xff00); +} + +static inline uint8_t +nix_lf_cq_irq_get_and_clear(struct nix *nix, uint16_t cq) +{ + return nix_lf_q_irq_get_and_clear(nix, cq, NIX_LF_CQ_OP_INT, ~0xff00); +} + +static inline uint8_t +nix_lf_sq_irq_get_and_clear(struct nix *nix, uint16_t sq) +{ + return nix_lf_q_irq_get_and_clear(nix, sq, NIX_LF_SQ_OP_INT, ~0x1ff00); +} + +static inline void +nix_lf_sq_debug_reg(struct nix *nix, uint32_t off) +{ + uint64_t reg; + + reg = plt_read64(nix->base + off); + if (reg & BIT_ULL(44)) + plt_err("SQ=%d err_code=0x%x", (int)((reg >> 8) & 0xfffff), + (uint8_t)(reg & 0xff)); +} + +static void +nix_lf_cq_irq(void *param) +{ + struct nix_qint *cint = (struct nix_qint *)param; + struct nix *nix = cint->nix; + + /* Clear interrupt */ + plt_write64(BIT_ULL(0), nix->base +
NIX_LF_CINTX_INT(cint->qintx)); +} + +static void +nix_lf_q_irq(void *param) +{ + struct nix_qint *qint = (struct nix_qint *)param; + uint8_t irq, qintx = qint->qintx; + struct nix *nix = qint->nix; + struct dev *dev = &nix->dev; + int q, cq, rq, sq; + uint64_t intr; + + intr = plt_read64(nix->base + NIX_LF_QINTX_INT(qintx)); + if (intr == 0) + return; + + plt_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d", intr, qintx, + dev->pf, dev->vf); + + /* Handle RQ interrupts */ + for (q = 0; q < nix->nb_rx_queues; q++) { + rq = q % nix->qints; + irq = nix_lf_rq_irq_get_and_clear(nix, rq); + + if (irq & BIT_ULL(NIX_RQINT_DROP)) + plt_err("RQ=%d NIX_RQINT_DROP", rq); + + if (irq & BIT_ULL(NIX_RQINT_RED)) + plt_err("RQ=%d NIX_RQINT_RED", rq); + } + + /* Handle CQ interrupts */ + for (q = 0; q < nix->nb_rx_queues; q++) { + cq = q % nix->qints; + irq = nix_lf_cq_irq_get_and_clear(nix, cq); + + if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR)) + plt_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq); + + if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL)) + plt_err("CQ=%d NIX_CQERRINT_WR_FULL", cq); + + if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT)) + plt_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq); + } + + /* Handle SQ interrupts */ + for (q = 0; q < nix->nb_tx_queues; q++) { + sq = q % nix->qints; + irq = nix_lf_sq_irq_get_and_clear(nix, sq); + + if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) { + plt_err("SQ=%d NIX_SQINT_LMT_ERR", sq); + nix_lf_sq_debug_reg(nix, NIX_LF_SQ_OP_ERR_DBG); + } + if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) { + plt_err("SQ=%d NIX_SQINT_MNQ_ERR", sq); + nix_lf_sq_debug_reg(nix, NIX_LF_MNQ_ERR_DBG); + } + if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) { + plt_err("SQ=%d NIX_SQINT_SEND_ERR", sq); + nix_lf_sq_debug_reg(nix, NIX_LF_SEND_ERR_DBG); + } + if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) { + plt_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq); + nix_lf_sq_debug_reg(nix, NIX_LF_SEND_ERR_DBG); + } + } + + /* Clear interrupt */ + plt_write64(intr, nix->base + NIX_LF_QINTX_INT(qintx)); +} + +int 
+roc_nix_register_queue_irqs(struct roc_nix *roc_nix) +{ + int vec, q, sqs, rqs, qs, rc = 0; + struct plt_intr_handle *handle; + struct nix *nix; + + nix = roc_nix_to_nix_priv(roc_nix); + handle = &nix->pci_dev->intr_handle; + + /* Figure out max qintx required */ + rqs = PLT_MIN(nix->qints, nix->nb_rx_queues); + sqs = PLT_MIN(nix->qints, nix->nb_tx_queues); + qs = PLT_MAX(rqs, sqs); + + nix->configured_qints = qs; + + nix->qints_mem = + plt_zmalloc(nix->configured_qints * sizeof(struct nix_qint), 0); + if (nix->qints_mem == NULL) + return -ENOMEM; + + for (q = 0; q < qs; q++) { + vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q; + + /* Clear QINT CNT */ + plt_write64(0, nix->base + NIX_LF_QINTX_CNT(q)); + + /* Clear interrupt */ + plt_write64(~0ull, nix->base + NIX_LF_QINTX_ENA_W1C(q)); + + nix->qints_mem[q].nix = nix; + nix->qints_mem[q].qintx = q; + + /* Sync qints_mem update */ + plt_wmb(); + + /* Register queue irq vector */ + rc = dev_irq_register(handle, nix_lf_q_irq, &nix->qints_mem[q], + vec); + if (rc) + break; + + plt_write64(0, nix->base + NIX_LF_QINTX_CNT(q)); + plt_write64(0, nix->base + NIX_LF_QINTX_INT(q)); + /* Enable QINT interrupt */ + plt_write64(~0ull, nix->base + NIX_LF_QINTX_ENA_W1S(q)); + } + + return rc; +} + +void +roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix) +{ + struct plt_intr_handle *handle; + struct nix *nix; + int vec, q; + + nix = roc_nix_to_nix_priv(roc_nix); + handle = &nix->pci_dev->intr_handle; + + for (q = 0; q < nix->configured_qints; q++) { + vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q; + + /* Clear QINT CNT */ + plt_write64(0, nix->base + NIX_LF_QINTX_CNT(q)); + plt_write64(0, nix->base + NIX_LF_QINTX_INT(q)); + + /* Clear interrupt */ + plt_write64(~0ull, nix->base + NIX_LF_QINTX_ENA_W1C(q)); + + /* Unregister queue irq vector */ + dev_irq_unregister(handle, nix_lf_q_irq, &nix->qints_mem[q], + vec); + } + nix->configured_qints = 0; + + plt_free(nix->qints_mem); + nix->qints_mem = NULL; +} + +int 
+roc_nix_register_cq_irqs(struct roc_nix *roc_nix) +{ + struct plt_intr_handle *handle; + uint8_t rc = 0, vec, q; + struct nix *nix; + + nix = roc_nix_to_nix_priv(roc_nix); + handle = &nix->pci_dev->intr_handle; + + nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues); + + nix->cints_mem = + plt_zmalloc(nix->configured_cints * sizeof(struct nix_qint), 0); + if (nix->cints_mem == NULL) + return -ENOMEM; + + for (q = 0; q < nix->configured_cints; q++) { + vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q; + + /* Clear CINT CNT */ + plt_write64(0, nix->base + NIX_LF_CINTX_CNT(q)); + + /* Clear interrupt */ + plt_write64(BIT_ULL(0), nix->base + NIX_LF_CINTX_ENA_W1C(q)); + + nix->cints_mem[q].nix = nix; + nix->cints_mem[q].qintx = q; + + /* Sync cints_mem update */ + plt_wmb(); + + /* Register queue irq vector */ + rc = dev_irq_register(handle, nix_lf_cq_irq, &nix->cints_mem[q], + vec); + if (rc) { + plt_err("Failed to register CQ irq, rc=%d", rc); + return rc; + } + + if (!handle->intr_vec) { + handle->intr_vec = plt_zmalloc( + nix->configured_cints * sizeof(int), 0); + if (!handle->intr_vec) { + plt_err("Failed to allocate %d rx intr_vec", + nix->configured_cints); + return -ENOMEM; + } + } + /* VFIO vector zero is reserved for misc interrupt so + * doing required adjustment. (b13bfab4cd) + */ + handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec; + + /* Configure CQE interrupt coalescing parameters */ + plt_write64(((CQ_CQE_THRESH_DEFAULT) | + (CQ_CQE_THRESH_DEFAULT << 32) | + (CQ_TIMER_THRESH_DEFAULT << 48)), + nix->base + NIX_LF_CINTX_WAIT((q))); + + /* Keeping the CQ interrupt disabled as the rx interrupt + * feature needs to be enabled/disabled on demand.
+ */ + } + + return rc; +} + +void +roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix) +{ + struct plt_intr_handle *handle; + struct nix *nix; + int vec, q; + + nix = roc_nix_to_nix_priv(roc_nix); + handle = &nix->pci_dev->intr_handle; + + for (q = 0; q < nix->configured_cints; q++) { + vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q; + + /* Clear CINT CNT */ + plt_write64(0, nix->base + NIX_LF_CINTX_CNT(q)); + + /* Clear interrupt */ + plt_write64(BIT_ULL(0), nix->base + NIX_LF_CINTX_ENA_W1C(q)); + + /* Unregister queue irq vector */ + dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q], + vec); + } + plt_free(nix->cints_mem); +} + +int +nix_register_irqs(struct nix *nix) +{ + int rc; + + if (nix->msixoff == MSIX_VECTOR_INVALID) { + plt_err("Invalid NIXLF MSIX vector offset vector: 0x%x", + nix->msixoff); + return NIX_ERR_PARAM; + } + + /* Register lf err interrupt */ + rc = nix_lf_register_err_irq(nix); + /* Register RAS interrupt */ + rc |= nix_lf_register_ras_irq(nix); + + return rc; +} + +void +nix_unregister_irqs(struct nix *nix) +{ + nix_lf_unregister_err_irq(nix); + nix_lf_unregister_ras_irq(nix); +} diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index 92a0d2e..1457696 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -18,11 +18,25 @@ /* Apply BP/DROP when CQ is 95% full */ #define NIX_CQ_THRESH_LEVEL (5 * 256 / 100) +/* IRQ triggered when NIX_LF_CINTX_CNT[QCOUNT] crosses this value */ +#define CQ_CQE_THRESH_DEFAULT 0x1ULL +#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */ +#define CQ_TIMER_THRESH_MAX 255 + +struct nix_qint { + struct nix *nix; + uint8_t qintx; +}; + struct nix { uint16_t reta[ROC_NIX_RSS_GRPS][ROC_NIX_RSS_RETA_MAX]; enum roc_nix_rss_reta_sz reta_sz; struct plt_pci_device *pci_dev; uint16_t bpid[NIX_MAX_CHAN]; + struct nix_qint *qints_mem; + struct nix_qint *cints_mem; + uint8_t configured_qints; + uint8_t configured_cints; 
struct roc_nix_sq **sqs; uint16_t vwqe_interval; uint16_t tx_chan_base; @@ -97,4 +111,8 @@ nix_priv_to_roc_nix(struct nix *nix) offsetof(struct roc_nix, reserved)); } +/* IRQ */ +int nix_register_irqs(struct nix *nix); +void nix_unregister_irqs(struct nix *nix); + #endif /* _ROC_NIX_PRIV_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 62aa2ba..b86e318 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -15,6 +15,7 @@ INTERNAL { roc_model; roc_nix_dev_fini; roc_nix_dev_init; + roc_nix_err_intr_ena_dis; roc_nix_get_base_chan; roc_nix_get_pf; roc_nix_get_pf_func; @@ -27,6 +28,13 @@ INTERNAL { roc_nix_lf_alloc; roc_nix_lf_free; roc_nix_max_pkt_len; + roc_nix_ras_intr_ena_dis; + roc_nix_register_cq_irqs; + roc_nix_register_queue_irqs; + roc_nix_rx_queue_intr_disable; + roc_nix_rx_queue_intr_enable; + roc_nix_unregister_cq_irqs; + roc_nix_unregister_queue_irqs; roc_npa_aura_limit_modify; roc_npa_aura_op_range_set; roc_npa_ctx_dump;
From patchwork Thu Apr 1 12:37:44 2021
X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90399 X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram Date: Thu, 1 Apr 2021 18:07:44 +0530 Message-ID: <20210401123817.14348-20-ndabilpuram@marvell.com> In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com> References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 19/52] common/cnxk: add nix Rx queue management API
From: Jerin Jacob
Add NIX Rx queue management APIs to init/modify/fini the RQ context and also set up the CQ (completion queue) context. Both CN9K and CN10K devices are supported.
Signed-off-by: Jerin Jacob
--- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_nix.h | 52 ++++ drivers/common/cnxk/roc_nix_queue.c | 496 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 6 + 4 files changed, 555 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_queue.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 19619c3..47e7c43 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -17,6 +17,7 @@ sources = files('roc_dev.c', 'roc_model.c', 'roc_nix.c', 'roc_nix_irq.c', + 'roc_nix_queue.c', 'roc_npa.c', 'roc_npa_debug.c', 'roc_npa_irq.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index ca96de7..227167e 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -41,6 +41,48 @@ enum roc_nix_sq_max_sqe_sz { #define ROC_NIX_VWQE_MAX_SIZE_LOG2 11 #define ROC_NIX_VWQE_MIN_SIZE_LOG2 2 + +struct roc_nix_rq { + /* Input parameters */ + uint16_t qid; + uint64_t aura_handle; + bool ipsech_ena; + uint16_t first_skip; + uint16_t later_skip; + uint16_t wqe_skip; + uint16_t lpb_size; + uint32_t tag_mask; + uint32_t flow_tag_width; + uint8_t tt; /* Valid when SSO is enabled */ + uint16_t hwgrp; /* Valid when SSO is enabled */ + bool sso_ena; + bool vwqe_ena; + uint64_t spb_aura_handle; /* Valid when SPB is enabled */ + uint16_t spb_size; /* Valid when SPB is enabled */ + bool spb_ena; + uint8_t vwqe_first_skip; + uint32_t vwqe_max_sz_exp; + uint64_t vwqe_wait_tmo; +
uint64_t vwqe_aura_handle; + /* End of Input parameters */ + struct roc_nix *roc_nix; +}; + +struct roc_nix_cq { + /* Input parameters */ + uint16_t qid; + uint16_t nb_desc; + /* End of Input parameters */ + uint16_t drop_thresh; + struct roc_nix *roc_nix; + uintptr_t door; + int64_t *status; + uint64_t wdata; + void *desc_base; + uint32_t qmask; + uint32_t head; +}; + struct roc_nix { /* Input parameters */ struct plt_pci_device *pci_dev; @@ -93,4 +135,14 @@ void __roc_api roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix); int __roc_api roc_nix_register_cq_irqs(struct roc_nix *roc_nix); void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix); +/* Queue */ +int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, + bool ena); +int __roc_api roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, + bool ena); +int __roc_api roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable); +int __roc_api roc_nix_rq_fini(struct roc_nix_rq *rq); +int __roc_api roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq); +int __roc_api roc_nix_cq_fini(struct roc_nix_cq *cq); + #endif /* _ROC_NIX_H_ */ diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c new file mode 100644 index 0000000..524b3bc --- /dev/null +++ b/drivers/common/cnxk/roc_nix_queue.c @@ -0,0 +1,496 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +static inline uint32_t +nix_qsize_to_val(enum nix_q_size qsize) +{ + return (16UL << (qsize * 2)); +} + +static inline enum nix_q_size +nix_qsize_clampup(uint32_t val) +{ + int i = nix_q_size_16; + + for (; i < nix_q_size_max; i++) + if (val <= nix_qsize_to_val(i)) + break; + + if (i >= nix_q_size_max) + i = nix_q_size_max - 1; + + return i; +} + +int +roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable) +{ + struct nix *nix = roc_nix_to_nix_priv(rq->roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + int rc; + + /* Pkts will be dropped silently if RQ is disabled */ + if (roc_model_is_cn9k()) { + struct nix_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = rq->qid; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + aq->rq.ena = enable; + aq->rq_mask.ena = ~(aq->rq_mask.ena); + } else { + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = rq->qid; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + aq->rq.ena = enable; + aq->rq_mask.ena = ~(aq->rq_mask.ena); + } + + rc = mbox_process(mbox); + + if (roc_model_is_cn10k()) + plt_write64(rq->qid, nix->base + NIX_LF_OP_VWQE_FLUSH); + return rc; +} + +static int +rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = rq->qid; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = cfg ? 
NIX_AQ_INSTOP_WRITE : NIX_AQ_INSTOP_INIT; + + if (rq->sso_ena) { + /* SSO mode */ + aq->rq.sso_ena = 1; + aq->rq.sso_tt = rq->tt; + aq->rq.sso_grp = rq->hwgrp; + aq->rq.ena_wqwd = 1; + aq->rq.wqe_skip = rq->wqe_skip; + aq->rq.wqe_caching = 1; + + aq->rq.good_utag = rq->tag_mask >> 24; + aq->rq.bad_utag = rq->tag_mask >> 24; + aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0); + } else { + /* CQ mode */ + aq->rq.sso_ena = 0; + aq->rq.good_utag = rq->tag_mask >> 24; + aq->rq.bad_utag = rq->tag_mask >> 24; + aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0); + aq->rq.cq = rq->qid; + } + + if (rq->ipsech_ena) + aq->rq.ipsech_ena = 1; + + aq->rq.spb_ena = 0; + aq->rq.lpb_aura = roc_npa_aura_handle_to_aura(rq->aura_handle); + + /* Sizes must be aligned to 8 bytes */ + if (rq->first_skip & 0x7 || rq->later_skip & 0x7 || rq->lpb_size & 0x7) + return -EINVAL; + + /* Expressed in number of dwords */ + aq->rq.first_skip = rq->first_skip / 8; + aq->rq.later_skip = rq->later_skip / 8; + aq->rq.flow_tagw = rq->flow_tag_width; /* 32-bits */ + aq->rq.lpb_sizem1 = rq->lpb_size / 8; + aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */ + aq->rq.ena = ena; + aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */ + aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */ + aq->rq.rq_int_ena = 0; + /* Many to one reduction */ + aq->rq.qint_idx = rq->qid % nix->qints; + aq->rq.xqe_drop_ena = 1; + + if (cfg) { + if (rq->sso_ena) { + /* SSO mode */ + aq->rq_mask.sso_ena = ~aq->rq_mask.sso_ena; + aq->rq_mask.sso_tt = ~aq->rq_mask.sso_tt; + aq->rq_mask.sso_grp = ~aq->rq_mask.sso_grp; + aq->rq_mask.ena_wqwd = ~aq->rq_mask.ena_wqwd; + aq->rq_mask.wqe_skip = ~aq->rq_mask.wqe_skip; + aq->rq_mask.wqe_caching = ~aq->rq_mask.wqe_caching; + aq->rq_mask.good_utag = ~aq->rq_mask.good_utag; + aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag; + aq->rq_mask.ltag = ~aq->rq_mask.ltag; + } else { + /* CQ mode */ + aq->rq_mask.sso_ena = ~aq->rq_mask.sso_ena; + aq->rq_mask.good_utag = 
~aq->rq_mask.good_utag; + aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag; + aq->rq_mask.ltag = ~aq->rq_mask.ltag; + aq->rq_mask.cq = ~aq->rq_mask.cq; + } + + if (rq->ipsech_ena) + aq->rq_mask.ipsech_ena = ~aq->rq_mask.ipsech_ena; + + aq->rq_mask.spb_ena = ~aq->rq_mask.spb_ena; + aq->rq_mask.lpb_aura = ~aq->rq_mask.lpb_aura; + aq->rq_mask.first_skip = ~aq->rq_mask.first_skip; + aq->rq_mask.later_skip = ~aq->rq_mask.later_skip; + aq->rq_mask.flow_tagw = ~aq->rq_mask.flow_tagw; + aq->rq_mask.lpb_sizem1 = ~aq->rq_mask.lpb_sizem1; + aq->rq_mask.ena = ~aq->rq_mask.ena; + aq->rq_mask.pb_caching = ~aq->rq_mask.pb_caching; + aq->rq_mask.xqe_imm_size = ~aq->rq_mask.xqe_imm_size; + aq->rq_mask.rq_int_ena = ~aq->rq_mask.rq_int_ena; + aq->rq_mask.qint_idx = ~aq->rq_mask.qint_idx; + aq->rq_mask.xqe_drop_ena = ~aq->rq_mask.xqe_drop_ena; + } + + return 0; +} + +static int +rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = rq->qid; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = cfg ? 
NIX_AQ_INSTOP_WRITE : NIX_AQ_INSTOP_INIT; + + if (rq->sso_ena) { + /* SSO mode */ + aq->rq.sso_ena = 1; + aq->rq.sso_tt = rq->tt; + aq->rq.sso_grp = rq->hwgrp; + aq->rq.ena_wqwd = 1; + aq->rq.wqe_skip = rq->wqe_skip; + aq->rq.wqe_caching = 1; + + aq->rq.good_utag = rq->tag_mask >> 24; + aq->rq.bad_utag = rq->tag_mask >> 24; + aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0); + + if (rq->vwqe_ena) { + aq->rq.vwqe_ena = true; + aq->rq.vwqe_skip = rq->vwqe_first_skip; + /* Maximal Vector size is (2^(MAX_VSIZE_EXP+2)) */ + aq->rq.max_vsize_exp = rq->vwqe_max_sz_exp - 2; + aq->rq.vtime_wait = rq->vwqe_wait_tmo; + aq->rq.wqe_aura = rq->vwqe_aura_handle; + } + } else { + /* CQ mode */ + aq->rq.sso_ena = 0; + aq->rq.good_utag = rq->tag_mask >> 24; + aq->rq.bad_utag = rq->tag_mask >> 24; + aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0); + aq->rq.cq = rq->qid; + } + + if (rq->ipsech_ena) + aq->rq.ipsech_ena = 1; + + aq->rq.lpb_aura = roc_npa_aura_handle_to_aura(rq->aura_handle); + + /* Sizes must be aligned to 8 bytes */ + if (rq->first_skip & 0x7 || rq->later_skip & 0x7 || rq->lpb_size & 0x7) + return -EINVAL; + + /* Expressed in number of dwords */ + aq->rq.first_skip = rq->first_skip / 8; + aq->rq.later_skip = rq->later_skip / 8; + aq->rq.flow_tagw = rq->flow_tag_width; /* 32-bits */ + aq->rq.lpb_sizem1 = rq->lpb_size / 8; + aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */ + aq->rq.ena = ena; + + if (rq->spb_ena) { + uint32_t spb_sizem1; + + aq->rq.spb_ena = 1; + aq->rq.spb_aura = + roc_npa_aura_handle_to_aura(rq->spb_aura_handle); + + if (rq->spb_size & 0x7 || + rq->spb_size > NIX_RQ_CN10K_SPB_MAX_SIZE) + return -EINVAL; + + spb_sizem1 = rq->spb_size / 8; /* Expressed in no. 
of dwords */ + spb_sizem1 -= 1; /* Expressed in size minus one */ + aq->rq.spb_sizem1 = spb_sizem1 & 0x3F; + aq->rq.spb_high_sizem1 = (spb_sizem1 >> 6) & 0x7; + } else { + aq->rq.spb_ena = 0; + } + + aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */ + aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */ + aq->rq.rq_int_ena = 0; + /* Many to one reduction */ + aq->rq.qint_idx = rq->qid % nix->qints; + aq->rq.xqe_drop_ena = 1; + + if (cfg) { + if (rq->sso_ena) { + /* SSO mode */ + aq->rq_mask.sso_ena = ~aq->rq_mask.sso_ena; + aq->rq_mask.sso_tt = ~aq->rq_mask.sso_tt; + aq->rq_mask.sso_grp = ~aq->rq_mask.sso_grp; + aq->rq_mask.ena_wqwd = ~aq->rq_mask.ena_wqwd; + aq->rq_mask.wqe_skip = ~aq->rq_mask.wqe_skip; + aq->rq_mask.wqe_caching = ~aq->rq_mask.wqe_caching; + aq->rq_mask.good_utag = ~aq->rq_mask.good_utag; + aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag; + aq->rq_mask.ltag = ~aq->rq_mask.ltag; + if (rq->vwqe_ena) { + aq->rq_mask.vwqe_ena = ~aq->rq_mask.vwqe_ena; + aq->rq_mask.vwqe_skip = ~aq->rq_mask.vwqe_skip; + aq->rq_mask.max_vsize_exp = + ~aq->rq_mask.max_vsize_exp; + aq->rq_mask.vtime_wait = + ~aq->rq_mask.vtime_wait; + aq->rq_mask.wqe_aura = ~aq->rq_mask.wqe_aura; + } + } else { + /* CQ mode */ + aq->rq_mask.sso_ena = ~aq->rq_mask.sso_ena; + aq->rq_mask.good_utag = ~aq->rq_mask.good_utag; + aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag; + aq->rq_mask.ltag = ~aq->rq_mask.ltag; + aq->rq_mask.cq = ~aq->rq_mask.cq; + } + + if (rq->ipsech_ena) + aq->rq_mask.ipsech_ena = ~aq->rq_mask.ipsech_ena; + + if (rq->spb_ena) { + aq->rq_mask.spb_aura = ~aq->rq_mask.spb_aura; + aq->rq_mask.spb_sizem1 = ~aq->rq_mask.spb_sizem1; + aq->rq_mask.spb_high_sizem1 = + ~aq->rq_mask.spb_high_sizem1; + } + + aq->rq_mask.spb_ena = ~aq->rq_mask.spb_ena; + aq->rq_mask.lpb_aura = ~aq->rq_mask.lpb_aura; + aq->rq_mask.first_skip = ~aq->rq_mask.first_skip; + aq->rq_mask.later_skip = ~aq->rq_mask.later_skip; + aq->rq_mask.flow_tagw = ~aq->rq_mask.flow_tagw; + 
aq->rq_mask.lpb_sizem1 = ~aq->rq_mask.lpb_sizem1; + aq->rq_mask.ena = ~aq->rq_mask.ena; + aq->rq_mask.pb_caching = ~aq->rq_mask.pb_caching; + aq->rq_mask.xqe_imm_size = ~aq->rq_mask.xqe_imm_size; + aq->rq_mask.rq_int_ena = ~aq->rq_mask.rq_int_ena; + aq->rq_mask.qint_idx = ~aq->rq_mask.qint_idx; + aq->rq_mask.xqe_drop_ena = ~aq->rq_mask.xqe_drop_ena; + } + + return 0; +} + +int +roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + bool is_cn9k = roc_model_is_cn9k(); + int rc; + + if (roc_nix == NULL || rq == NULL) + return NIX_ERR_PARAM; + + if (rq->qid >= nix->nb_rx_queues) + return NIX_ERR_QUEUE_INVALID_RANGE; + + rq->roc_nix = roc_nix; + + if (is_cn9k) + rc = rq_cn9k_cfg(nix, rq, false, ena); + else + rc = rq_cfg(nix, rq, false, ena); + + if (rc) + return rc; + + return mbox_process(mbox); +} + +int +roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + bool is_cn9k = roc_model_is_cn9k(); + int rc; + + if (roc_nix == NULL || rq == NULL) + return NIX_ERR_PARAM; + + if (rq->qid >= nix->nb_rx_queues) + return NIX_ERR_QUEUE_INVALID_RANGE; + + rq->roc_nix = roc_nix; + + if (is_cn9k) + rc = rq_cn9k_cfg(nix, rq, true, ena); + else + rc = rq_cfg(nix, rq, true, ena); + + if (rc) + return rc; + + return mbox_process(mbox); +} + +int +roc_nix_rq_fini(struct roc_nix_rq *rq) +{ + /* Disabling RQ is sufficient */ + return roc_nix_rq_ena_dis(rq, false); +} + +int +roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + volatile struct nix_cq_ctx_s *cq_ctx; + enum nix_q_size qsize; + size_t desc_sz; + int rc; + + if (cq == NULL) + return NIX_ERR_PARAM; + + if (cq->qid >= nix->nb_rx_queues) + return NIX_ERR_QUEUE_INVALID_RANGE; + + qsize = 
nix_qsize_clampup(cq->nb_desc); + cq->nb_desc = nix_qsize_to_val(qsize); + cq->qmask = cq->nb_desc - 1; + cq->door = nix->base + NIX_LF_CQ_OP_DOOR; + cq->status = (int64_t *)(nix->base + NIX_LF_CQ_OP_STATUS); + cq->wdata = (uint64_t)cq->qid << 32; + cq->roc_nix = roc_nix; + cq->drop_thresh = NIX_CQ_THRESH_LEVEL; + + /* CQE of W16 */ + desc_sz = cq->nb_desc * NIX_CQ_ENTRY_SZ; + cq->desc_base = plt_zmalloc(desc_sz, NIX_CQ_ALIGN); + if (cq->desc_base == NULL) { + rc = NIX_ERR_NO_MEM; + goto fail; + } + + if (roc_model_is_cn9k()) { + struct nix_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = cq->qid; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_INIT; + cq_ctx = &aq->cq; + } else { + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = cq->qid; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_INIT; + cq_ctx = &aq->cq; + } + + cq_ctx->ena = 1; + cq_ctx->caching = 1; + cq_ctx->qsize = qsize; + cq_ctx->base = (uint64_t)cq->desc_base; + cq_ctx->avg_level = 0xff; + cq_ctx->cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT); + cq_ctx->cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR); + + /* Many to one reduction */ + cq_ctx->qint_idx = cq->qid % nix->qints; + /* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */ + cq_ctx->cint_idx = cq->qid; + + cq_ctx->drop = cq->drop_thresh; + cq_ctx->drop_ena = 1; + + /* TX pause frames enable flow ctrl on RX side */ + if (nix->tx_pause) { + /* Single BPID is allocated for all rx channels for now */ + cq_ctx->bpid = nix->bpid[0]; + cq_ctx->bp = cq_ctx->drop; + cq_ctx->bp_ena = 1; + } + + rc = mbox_process(mbox); + if (rc) + goto free_mem; + + return 0; + +free_mem: + plt_free(cq->desc_base); +fail: + return rc; +} + +int +roc_nix_cq_fini(struct roc_nix_cq *cq) +{ + struct mbox *mbox; + struct nix *nix; + int rc; + + if (cq == NULL) + return NIX_ERR_PARAM; + + nix = roc_nix_to_nix_priv(cq->roc_nix); + mbox = (&nix->dev)->mbox; + + /* Disable CQ */ + if 
(roc_model_is_cn9k()) { + struct nix_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = cq->qid; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_WRITE; + aq->cq.ena = 0; + aq->cq.bp_ena = 0; + aq->cq_mask.ena = ~aq->cq_mask.ena; + aq->cq_mask.bp_ena = ~aq->cq_mask.bp_ena; + } else { + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = cq->qid; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_WRITE; + aq->cq.ena = 0; + aq->cq.bp_ena = 0; + aq->cq_mask.ena = ~aq->cq_mask.ena; + aq->cq_mask.bp_ena = ~aq->cq_mask.bp_ena; + } + + rc = mbox_process(mbox); + if (rc) + return rc; + + plt_free(cq->desc_base); + return 0; +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index b86e318..257825a 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -13,6 +13,8 @@ INTERNAL { roc_idev_npa_nix_get; roc_idev_num_lmtlines_get; roc_model; + roc_nix_cq_fini; + roc_nix_cq_init; roc_nix_dev_fini; roc_nix_dev_init; roc_nix_err_intr_ena_dis; @@ -31,6 +33,10 @@ INTERNAL { roc_nix_ras_intr_ena_dis; roc_nix_register_cq_irqs; roc_nix_register_queue_irqs; + roc_nix_rq_ena_dis; + roc_nix_rq_fini; + roc_nix_rq_init; + roc_nix_rq_modify; roc_nix_rx_queue_intr_disable; roc_nix_rx_queue_intr_enable; roc_nix_unregister_cq_irqs;

From patchwork Thu Apr 1 12:37:45 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90392
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:45 +0530
Message-ID: <20210401123817.14348-21-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com>
<20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 20/52] common/cnxk: add nix Tx queue management API

From: Jerin Jacob

This patch adds support to init/modify/fini the NIX SQ (send queue) for both CN9K and CN10K platforms.

Signed-off-by: Jerin Jacob
---
 drivers/common/cnxk/roc_nix.h       |  19 ++
 drivers/common/cnxk/roc_nix_queue.c | 358 ++++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/version.map     |   2 +
 3 files changed, 379 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 227167e..8027e6d 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -83,6 +83,23 @@ struct roc_nix_cq { uint32_t head; }; +struct roc_nix_sq { + /* Input parameters */ + enum roc_nix_sq_max_sqe_sz max_sqe_sz; + uint32_t nb_desc; + uint16_t qid; + /* End of Input parameters */ + uint16_t sqes_per_sqb_log2; + struct roc_nix *roc_nix; + uint64_t aura_handle; + int16_t nb_sqb_bufs_adj; + uint16_t nb_sqb_bufs; + plt_iova_t io_addr; + void *lmt_addr; + void *sqe_mem; + void *fc; +}; + struct roc_nix { /* Input parameters */ struct plt_pci_device *pci_dev; @@ -144,5 +161,7 @@ int __roc_api roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable); int __roc_api roc_nix_rq_fini(struct roc_nix_rq *rq); int __roc_api roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq); int __roc_api roc_nix_cq_fini(struct roc_nix_cq *cq); +int __roc_api roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq); +int
__roc_api roc_nix_sq_fini(struct roc_nix_sq *sq); #endif /* _ROC_NIX_H_ */ diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c index 524b3bc..c5287a9 100644 --- a/drivers/common/cnxk/roc_nix_queue.c +++ b/drivers/common/cnxk/roc_nix_queue.c @@ -494,3 +494,361 @@ roc_nix_cq_fini(struct roc_nix_cq *cq) plt_free(cq->desc_base); return 0; } + +static int +sqb_pool_populate(struct roc_nix *roc_nix, struct roc_nix_sq *sq) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint16_t sqes_per_sqb, count, nb_sqb_bufs; + struct npa_pool_s pool; + struct npa_aura_s aura; + uint64_t blk_sz; + uint64_t iova; + int rc; + + blk_sz = nix->sqb_size; + if (sq->max_sqe_sz == roc_nix_maxsqesz_w16) + sqes_per_sqb = (blk_sz / 8) / 16; + else + sqes_per_sqb = (blk_sz / 8) / 8; + + sq->nb_desc = PLT_MAX(256U, sq->nb_desc); + nb_sqb_bufs = sq->nb_desc / sqes_per_sqb; + nb_sqb_bufs += NIX_SQB_LIST_SPACE; + /* Clamp up the SQB count */ + nb_sqb_bufs = PLT_MIN(roc_nix->max_sqb_count, + (uint16_t)PLT_MAX(NIX_DEF_SQB, nb_sqb_bufs)); + + sq->nb_sqb_bufs = nb_sqb_bufs; + sq->sqes_per_sqb_log2 = (uint16_t)plt_log2_u32(sqes_per_sqb); + sq->nb_sqb_bufs_adj = + nb_sqb_bufs - + (PLT_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb); + sq->nb_sqb_bufs_adj = + (sq->nb_sqb_bufs_adj * NIX_SQB_LOWER_THRESH) / 100; + + /* Explicitly set nat_align alone as by default pool is with both + * nat_align and buf_offset = 1 which we don't want for SQB. 
+ */ + memset(&pool, 0, sizeof(struct npa_pool_s)); + pool.nat_align = 1; + + memset(&aura, 0, sizeof(aura)); + aura.fc_ena = 1; + aura.fc_addr = (uint64_t)sq->fc; + aura.fc_hyst_bits = 0; /* Store count on all updates */ + rc = roc_npa_pool_create(&sq->aura_handle, blk_sz, nb_sqb_bufs, &aura, + &pool); + if (rc) + goto fail; + + sq->sqe_mem = plt_zmalloc(blk_sz * nb_sqb_bufs, blk_sz); + if (sq->sqe_mem == NULL) { + rc = NIX_ERR_NO_MEM; + goto nomem; + } + + /* Fill the initial buffers */ + iova = (uint64_t)sq->sqe_mem; + for (count = 0; count < nb_sqb_bufs; count++) { + roc_npa_aura_op_free(sq->aura_handle, 0, iova); + iova += blk_sz; + } + roc_npa_aura_op_range_set(sq->aura_handle, (uint64_t)sq->sqe_mem, iova); + + return rc; +nomem: + roc_npa_pool_destroy(sq->aura_handle); +fail: + return rc; +} + +static void +sq_cn9k_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum, + uint16_t smq) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = sq->qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_INIT; + aq->sq.max_sqe_size = sq->max_sqe_sz; + + aq->sq.max_sqe_size = sq->max_sqe_sz; + aq->sq.smq = smq; + aq->sq.smq_rr_quantum = rr_quantum; + aq->sq.default_chan = nix->tx_chan_base; + aq->sq.sqe_stype = NIX_STYPE_STF; + aq->sq.ena = 1; + if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8) + aq->sq.sqe_stype = NIX_STYPE_STP; + aq->sq.sqb_aura = roc_npa_aura_handle_to_aura(sq->aura_handle); + aq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR); + aq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL); + aq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR); + aq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR); + + /* Many to one reduction */ + aq->sq.qint_idx = sq->qid % nix->qints; +} + +static int +sq_cn9k_fini(struct nix *nix, struct roc_nix_sq *sq) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_aq_enq_rsp *rsp; + struct nix_aq_enq_req *aq; + uint16_t sqes_per_sqb; + void *sqb_buf; + int rc, 
count; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = sq->qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + /* Check if sq is already cleaned up */ + if (!rsp->sq.ena) + return 0; + + /* Disable sq */ + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = sq->qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_WRITE; + aq->sq_mask.ena = ~aq->sq_mask.ena; + aq->sq.ena = 0; + rc = mbox_process(mbox); + if (rc) + return rc; + + /* Read SQ and free sqb's */ + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = sq->qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (aq->sq.smq_pend) + plt_err("SQ has pending SQE's"); + + count = aq->sq.sqb_count; + sqes_per_sqb = 1 << sq->sqes_per_sqb_log2; + /* Free SQB's that are used */ + sqb_buf = (void *)rsp->sq.head_sqb; + while (count) { + void *next_sqb; + + next_sqb = *(void **)((uintptr_t)sqb_buf + + (uint32_t)((sqes_per_sqb - 1) * + sq->max_sqe_sz)); + roc_npa_aura_op_free(sq->aura_handle, 1, (uint64_t)sqb_buf); + sqb_buf = next_sqb; + count--; + } + + /* Free next to use sqb */ + if (rsp->sq.next_sqb) + roc_npa_aura_op_free(sq->aura_handle, 1, rsp->sq.next_sqb); + return 0; +} + +static void +sq_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum, + uint16_t smq) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = sq->qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_INIT; + aq->sq.max_sqe_size = sq->max_sqe_sz; + + aq->sq.max_sqe_size = sq->max_sqe_sz; + aq->sq.smq = smq; + aq->sq.smq_rr_weight = rr_quantum; + aq->sq.default_chan = nix->tx_chan_base; + aq->sq.sqe_stype = NIX_STYPE_STF; + aq->sq.ena = 1; + if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8) + aq->sq.sqe_stype = NIX_STYPE_STP; + aq->sq.sqb_aura = 
roc_npa_aura_handle_to_aura(sq->aura_handle); + aq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR); + aq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL); + aq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR); + aq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR); + + /* Many to one reduction */ + aq->sq.qint_idx = sq->qid % nix->qints; +} + +static int +sq_fini(struct nix *nix, struct roc_nix_sq *sq) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_cn10k_aq_enq_rsp *rsp; + struct nix_cn10k_aq_enq_req *aq; + uint16_t sqes_per_sqb; + void *sqb_buf; + int rc, count; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = sq->qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + /* Check if sq is already cleaned up */ + if (!rsp->sq.ena) + return 0; + + /* Disable sq */ + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = sq->qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_WRITE; + aq->sq_mask.ena = ~aq->sq_mask.ena; + aq->sq.ena = 0; + rc = mbox_process(mbox); + if (rc) + return rc; + + /* Read SQ and free sqb's */ + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = sq->qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (aq->sq.smq_pend) + plt_err("SQ has pending SQE's"); + + count = aq->sq.sqb_count; + sqes_per_sqb = 1 << sq->sqes_per_sqb_log2; + /* Free SQB's that are used */ + sqb_buf = (void *)rsp->sq.head_sqb; + while (count) { + void *next_sqb; + + next_sqb = *(void **)((uintptr_t)sqb_buf + + (uint32_t)((sqes_per_sqb - 1) * + sq->max_sqe_sz)); + roc_npa_aura_op_free(sq->aura_handle, 1, (uint64_t)sqb_buf); + sqb_buf = next_sqb; + count--; + } + + /* Free next to use sqb */ + if (rsp->sq.next_sqb) + roc_npa_aura_op_free(sq->aura_handle, 1, rsp->sq.next_sqb); + return 0; +} + +int +roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq) +{ + struct nix *nix = 
roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + uint16_t qid, smq = UINT16_MAX; + uint32_t rr_quantum = 0; + int rc; + + if (sq == NULL) + return NIX_ERR_PARAM; + + qid = sq->qid; + if (qid >= nix->nb_tx_queues) + return NIX_ERR_QUEUE_INVALID_RANGE; + + sq->roc_nix = roc_nix; + /* + * Allocate memory for flow control updates from HW. + * Alloc one cache line, so that fits all FC_STYPE modes. + */ + sq->fc = plt_zmalloc(ROC_ALIGN, ROC_ALIGN); + if (sq->fc == NULL) { + rc = NIX_ERR_NO_MEM; + goto fail; + } + + rc = sqb_pool_populate(roc_nix, sq); + if (rc) + goto nomem; + + /* Init SQ context */ + if (roc_model_is_cn9k()) + sq_cn9k_init(nix, sq, rr_quantum, smq); + else + sq_init(nix, sq, rr_quantum, smq); + + rc = mbox_process(mbox); + if (rc) + goto nomem; + + nix->sqs[qid] = sq; + sq->io_addr = nix->base + NIX_LF_OP_SENDX(0); + /* Evenly distribute LMT slot for each sq */ + if (roc_model_is_cn9k()) { + /* Multiple cores/SQ's can use same LMTLINE safely in CN9K */ + sq->lmt_addr = (void *)(nix->lmt_base + + ((qid & RVU_CN9K_LMT_SLOT_MASK) << 12)); + } + + return rc; +nomem: + plt_free(sq->fc); +fail: + return rc; +} + +int +roc_nix_sq_fini(struct roc_nix_sq *sq) +{ + struct nix *nix; + struct mbox *mbox; + struct ndc_sync_op *ndc_req; + uint16_t qid; + int rc = 0; + + if (sq == NULL) + return NIX_ERR_PARAM; + + nix = roc_nix_to_nix_priv(sq->roc_nix); + mbox = (&nix->dev)->mbox; + + qid = sq->qid; + + /* Release SQ context */ + if (roc_model_is_cn9k()) + rc |= sq_cn9k_fini(roc_nix_to_nix_priv(sq->roc_nix), sq); + else + rc |= sq_fini(roc_nix_to_nix_priv(sq->roc_nix), sq); + + /* Sync NDC-NIX-TX for LF */ + ndc_req = mbox_alloc_msg_ndc_sync_op(mbox); + if (ndc_req == NULL) + return -ENOSPC; + ndc_req->nix_lf_tx_sync = 1; + if (mbox_process(mbox)) + rc |= NIX_ERR_NDC_SYNC; + + rc |= roc_npa_pool_destroy(sq->aura_handle); + plt_free(sq->fc); + plt_free(sq->sqe_mem); + nix->sqs[qid] = NULL; + + return rc; +} diff --git 
a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 257825a..4998635 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -39,6 +39,8 @@ INTERNAL { roc_nix_rq_modify; roc_nix_rx_queue_intr_disable; roc_nix_rx_queue_intr_enable; + roc_nix_sq_fini; + roc_nix_sq_init; roc_nix_unregister_cq_irqs; roc_nix_unregister_queue_irqs; roc_npa_aura_limit_modify;

From patchwork Thu Apr 1 12:37:46 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90393
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:46 +0530
Message-ID: <20210401123817.14348-22-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 21/52] common/cnxk: add nix MAC operations support

From: Sunil Kumar Kori

Add support for different MAC-related operations such as MAC address set/get, link set/get, link status callback, etc.
Signed-off-by: Sunil Kumar Kori --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_nix.h | 41 ++++++ drivers/common/cnxk/roc_nix_mac.c | 298 ++++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 15 ++ 4 files changed, 355 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_mac.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 47e7c43..b9da55b 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -17,6 +17,7 @@ sources = files('roc_dev.c', 'roc_model.c', 'roc_nix.c', 'roc_nix_irq.c', + 'roc_nix_mac.c', 'roc_nix_queue.c', 'roc_npa.c', 'roc_npa_debug.c', diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 8027e6d..66e0bfa 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -100,6 +100,23 @@ struct roc_nix_sq { void *fc; }; +struct roc_nix_link_info { + uint64_t status : 1; + uint64_t full_duplex : 1; + uint64_t lmac_type_id : 4; + uint64_t speed : 20; + uint64_t autoneg : 1; + uint64_t fec : 2; + uint64_t port : 8; +}; + +/* Link status update callback */ +typedef void (*link_status_t)(struct roc_nix *roc_nix, + struct roc_nix_link_info *link); + +/* PTP info update callback */ +typedef int (*ptp_info_update_t)(struct roc_nix *roc_nix, bool enable); + struct roc_nix { /* Input parameters */ struct plt_pci_device *pci_dev; @@ -152,6 +169,30 @@ void __roc_api roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix); int __roc_api roc_nix_register_cq_irqs(struct roc_nix *roc_nix); void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix); +/* MAC */ +int __roc_api roc_nix_mac_rxtx_start_stop(struct roc_nix *roc_nix, bool start); +int __roc_api roc_nix_mac_link_event_start_stop(struct roc_nix *roc_nix, + bool start); +int __roc_api roc_nix_mac_loopback_enable(struct roc_nix *roc_nix, bool enable); +int __roc_api roc_nix_mac_addr_set(struct roc_nix *roc_nix, + const uint8_t addr[]); +int __roc_api 
roc_nix_mac_max_entries_get(struct roc_nix *roc_nix); +int __roc_api roc_nix_mac_addr_add(struct roc_nix *roc_nix, uint8_t addr[]); +int __roc_api roc_nix_mac_addr_del(struct roc_nix *roc_nix, uint32_t index); +int __roc_api roc_nix_mac_promisc_mode_enable(struct roc_nix *roc_nix, + int enable); +int __roc_api roc_nix_mac_link_state_set(struct roc_nix *roc_nix, uint8_t up); +int __roc_api roc_nix_mac_link_info_set(struct roc_nix *roc_nix, + struct roc_nix_link_info *link_info); +int __roc_api roc_nix_mac_link_info_get(struct roc_nix *roc_nix, + struct roc_nix_link_info *link_info); +int __roc_api roc_nix_mac_mtu_set(struct roc_nix *roc_nix, uint16_t mtu); +int __roc_api roc_nix_mac_max_rx_len_set(struct roc_nix *roc_nix, + uint16_t maxlen); +int __roc_api roc_nix_mac_link_cb_register(struct roc_nix *roc_nix, + link_status_t link_update); +void __roc_api roc_nix_mac_link_cb_unregister(struct roc_nix *roc_nix); + /* Queue */ int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena); diff --git a/drivers/common/cnxk/roc_nix_mac.c b/drivers/common/cnxk/roc_nix_mac.c new file mode 100644 index 0000000..682d5a7 --- /dev/null +++ b/drivers/common/cnxk/roc_nix_mac.c @@ -0,0 +1,298 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +static inline struct mbox * +nix_to_mbox(struct nix *nix) +{ + struct dev *dev = &nix->dev; + + return dev->mbox; +} + +int +roc_nix_mac_rxtx_start_stop(struct roc_nix *roc_nix, bool start) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + + if (roc_nix_is_vf_or_sdp(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + if (start) + mbox_alloc_msg_cgx_start_rxtx(mbox); + else + mbox_alloc_msg_cgx_stop_rxtx(mbox); + + return mbox_process(mbox); +} + +int +roc_nix_mac_link_event_start_stop(struct roc_nix *roc_nix, bool start) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + + if (roc_nix_is_vf_or_sdp(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + if (start) + mbox_alloc_msg_cgx_start_linkevents(mbox); + else + mbox_alloc_msg_cgx_stop_linkevents(mbox); + + return mbox_process(mbox); +} + +int +roc_nix_mac_loopback_enable(struct roc_nix *roc_nix, bool enable) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + + if (enable && roc_nix_is_vf_or_sdp(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + if (enable) + mbox_alloc_msg_cgx_intlbk_enable(mbox); + else + mbox_alloc_msg_cgx_intlbk_disable(mbox); + + return mbox_process(mbox); +} + +int +roc_nix_mac_addr_set(struct roc_nix *roc_nix, const uint8_t addr[]) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + struct cgx_mac_addr_set_or_get *req; + + if (roc_nix_is_vf_or_sdp(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + if (dev_active_vfs(&nix->dev)) + return NIX_ERR_OP_NOTSUP; + + req = mbox_alloc_msg_cgx_mac_addr_set(mbox); + mbox_memcpy(req->mac_addr, addr, PLT_ETHER_ADDR_LEN); + + return mbox_process(mbox); +} + +int +roc_nix_mac_max_entries_get(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct cgx_max_dmac_entries_get_rsp *rsp; + struct mbox *mbox = nix_to_mbox(nix); + int rc; + + if 
(roc_nix_is_vf_or_sdp(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + mbox_alloc_msg_cgx_mac_max_entries_get(mbox); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + return rsp->max_dmac_filters ? rsp->max_dmac_filters : 1; +} + +int +roc_nix_mac_addr_add(struct roc_nix *roc_nix, uint8_t addr[]) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + struct cgx_mac_addr_add_req *req; + struct cgx_mac_addr_add_rsp *rsp; + int rc; + + if (roc_nix_is_vf_or_sdp(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + if (dev_active_vfs(&nix->dev)) + return NIX_ERR_OP_NOTSUP; + + req = mbox_alloc_msg_cgx_mac_addr_add(mbox); + mbox_memcpy(req->mac_addr, addr, PLT_ETHER_ADDR_LEN); + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc < 0) + return rc; + + return rsp->index; +} + +int +roc_nix_mac_addr_del(struct roc_nix *roc_nix, uint32_t index) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + struct cgx_mac_addr_del_req *req; + int rc = -ENOSPC; + + if (roc_nix_is_vf_or_sdp(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + req = mbox_alloc_msg_cgx_mac_addr_del(mbox); + if (req == NULL) + return rc; + req->index = index; + + return mbox_process(mbox); +} + +int +roc_nix_mac_promisc_mode_enable(struct roc_nix *roc_nix, int enable) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + + if (roc_nix_is_vf_or_sdp(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + if (enable) + mbox_alloc_msg_cgx_promisc_enable(mbox); + else + mbox_alloc_msg_cgx_promisc_disable(mbox); + + return mbox_process(mbox); +} + +int +roc_nix_mac_link_info_get(struct roc_nix *roc_nix, + struct roc_nix_link_info *link_info) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + struct cgx_link_info_msg *rsp; + int rc; + + mbox_alloc_msg_cgx_get_linkinfo(mbox); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + link_info->status = 
rsp->link_info.link_up; + link_info->full_duplex = rsp->link_info.full_duplex; + link_info->lmac_type_id = rsp->link_info.lmac_type_id; + link_info->speed = rsp->link_info.speed; + link_info->autoneg = rsp->link_info.an; + link_info->fec = rsp->link_info.fec; + link_info->port = rsp->link_info.port; + + return 0; +} + +int +roc_nix_mac_link_state_set(struct roc_nix *roc_nix, uint8_t up) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + struct cgx_set_link_state_msg *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_cgx_set_link_state(mbox); + if (req == NULL) + return rc; + req->enable = up; + return mbox_process(mbox); +} + +int +roc_nix_mac_link_info_set(struct roc_nix *roc_nix, + struct roc_nix_link_info *link_info) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + struct cgx_set_link_mode_req *req; + int rc; + + rc = roc_nix_mac_link_state_set(roc_nix, link_info->status); + if (rc) + return rc; + + req = mbox_alloc_msg_cgx_set_link_mode(mbox); + if (req == NULL) + return -ENOSPC; + req->args.speed = link_info->speed; + req->args.duplex = link_info->full_duplex; + req->args.an = link_info->autoneg; + + return mbox_process(mbox); +} + +int +roc_nix_mac_mtu_set(struct roc_nix *roc_nix, uint16_t mtu) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + struct nix_frs_cfg *req; + bool sdp_link = false; + int rc = -ENOSPC; + + if (roc_nix_is_sdp(roc_nix)) + sdp_link = true; + + req = mbox_alloc_msg_nix_set_hw_frs(mbox); + if (req == NULL) + return rc; + req->maxlen = mtu; + req->update_smq = true; + req->sdp_link = sdp_link; + + rc = mbox_process(mbox); + if (rc) + return rc; + + /* Save MTU for later use */ + nix->mtu = mtu; + return 0; +} + +int +roc_nix_mac_max_rx_len_set(struct roc_nix *roc_nix, uint16_t maxlen) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = nix_to_mbox(nix); + struct nix_frs_cfg *req; + 
bool sdp_link = false; + int rc = -ENOSPC; + + if (roc_nix_is_sdp(roc_nix)) + sdp_link = true; + + req = mbox_alloc_msg_nix_set_hw_frs(mbox); + if (req == NULL) + return rc; + req->sdp_link = sdp_link; + req->maxlen = maxlen; + + return mbox_process(mbox); +} + +int +roc_nix_mac_link_cb_register(struct roc_nix *roc_nix, link_status_t link_update) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + if (link_update == NULL) + return NIX_ERR_PARAM; + + dev->ops->link_status_update = (link_info_t)link_update; + return 0; +} + +void +roc_nix_mac_link_cb_unregister(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + dev->ops->link_status_update = NULL; +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 4998635..79500cc 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -29,6 +29,21 @@ INTERNAL { roc_nix_is_vf_or_sdp; roc_nix_lf_alloc; roc_nix_lf_free; + roc_nix_mac_addr_add; + roc_nix_mac_addr_del; + roc_nix_mac_addr_set; + roc_nix_mac_link_cb_register; + roc_nix_mac_link_cb_unregister; + roc_nix_mac_link_event_start_stop; + roc_nix_mac_link_info_get; + roc_nix_mac_link_info_set; + roc_nix_mac_link_state_set; + roc_nix_mac_loopback_enable; + roc_nix_mac_max_entries_get; + roc_nix_mac_max_rx_len_set; + roc_nix_mac_mtu_set; + roc_nix_mac_promisc_mode_enable; + roc_nix_mac_rxtx_start_stop; roc_nix_max_pkt_len; roc_nix_ras_intr_ena_dis; roc_nix_register_cq_irqs; From patchwork Thu Apr 1 12:37:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90394 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7BB6BA0548; Thu, 1 Apr 
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:47 +0530
Message-ID: 
<20210401123817.14348-23-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 22/52] common/cnxk: add nix specific npc operations

From: Sunil Kumar Kori

Add NIX-specific NPC operations such as NPC MAC address get/set, mcast entry add/delete, and promiscuous mode enable/disable.

Signed-off-by: Sunil Kumar Kori
---
drivers/common/cnxk/meson.build | 2 + drivers/common/cnxk/roc_nix.h | 25 +++++++++ drivers/common/cnxk/roc_nix_mcast.c | 98 ++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix_npc.c | 103 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 9 ++++ 5 files changed, 237 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_mcast.c create mode 100644 drivers/common/cnxk/roc_nix_npc.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index b9da55b..f216d4a 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -18,6 +18,8 @@ sources = files('roc_dev.c', 'roc_nix.c', 'roc_nix_irq.c', 'roc_nix_mac.c', + 'roc_nix_mcast.c', + 'roc_nix_npc.c', 'roc_nix_queue.c', 'roc_npa.c', 'roc_npa_debug.c', diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 66e0bfa..638f827 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -193,6 +193,18
@@ int __roc_api roc_nix_mac_link_cb_register(struct roc_nix *roc_nix, link_status_t link_update); void __roc_api roc_nix_mac_link_cb_unregister(struct roc_nix *roc_nix); +/* NPC */ +int __roc_api roc_nix_npc_promisc_ena_dis(struct roc_nix *roc_nix, int enable); + +int __roc_api roc_nix_npc_mac_addr_set(struct roc_nix *roc_nix, uint8_t addr[]); + +int __roc_api roc_nix_npc_mac_addr_get(struct roc_nix *roc_nix, uint8_t *addr); + +int __roc_api roc_nix_npc_rx_ena_dis(struct roc_nix *roc_nix, bool enable); + +int __roc_api roc_nix_npc_mcast_config(struct roc_nix *roc_nix, + bool mcast_enable, bool prom_enable); + /* Queue */ int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena); @@ -205,4 +217,17 @@ int __roc_api roc_nix_cq_fini(struct roc_nix_cq *cq); int __roc_api roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq); int __roc_api roc_nix_sq_fini(struct roc_nix_sq *sq); +/* MCAST*/ +int __roc_api roc_nix_mcast_mcam_entry_alloc(struct roc_nix *roc_nix, + uint16_t nb_entries, + uint8_t priority, + uint16_t index[]); +int __roc_api roc_nix_mcast_mcam_entry_free(struct roc_nix *roc_nix, + uint32_t index); +int __roc_api roc_nix_mcast_mcam_entry_write(struct roc_nix *roc_nix, + struct mcam_entry *entry, + uint32_t index, uint8_t intf, + uint64_t action); +int __roc_api roc_nix_mcast_mcam_entry_ena_dis(struct roc_nix *roc_nix, + uint32_t index, bool enable); #endif /* _ROC_NIX_H_ */ diff --git a/drivers/common/cnxk/roc_nix_mcast.c b/drivers/common/cnxk/roc_nix_mcast.c new file mode 100644 index 0000000..87d083e --- /dev/null +++ b/drivers/common/cnxk/roc_nix_mcast.c @@ -0,0 +1,98 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +static inline struct mbox * +get_mbox(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + return dev->mbox; +} + +int +roc_nix_mcast_mcam_entry_alloc(struct roc_nix *roc_nix, uint16_t nb_entries, + uint8_t priority, uint16_t index[]) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct npc_mcam_alloc_entry_req *req; + struct npc_mcam_alloc_entry_rsp *rsp; + int rc = -ENOSPC, i; + + req = mbox_alloc_msg_npc_mcam_alloc_entry(mbox); + if (req == NULL) + return rc; + req->priority = priority; + req->count = nb_entries; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + for (i = 0; i < rsp->count; i++) + index[i] = rsp->entry_list[i]; + + return rsp->count; +} + +int +roc_nix_mcast_mcam_entry_free(struct roc_nix *roc_nix, uint32_t index) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct npc_mcam_free_entry_req *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_free_entry(mbox); + if (req == NULL) + return rc; + req->entry = index; + + return mbox_process_msg(mbox, NULL); +} + +int +roc_nix_mcast_mcam_entry_write(struct roc_nix *roc_nix, + struct mcam_entry *entry, uint32_t index, + uint8_t intf, uint64_t action) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct npc_mcam_write_entry_req *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_write_entry(mbox); + if (req == NULL) + return rc; + req->entry = index; + req->intf = intf; + req->enable_entry = true; + mbox_memcpy(&req->entry_data, entry, sizeof(struct mcam_entry)); + req->entry_data.action = action; + + return mbox_process(mbox); +} + +int +roc_nix_mcast_mcam_entry_ena_dis(struct roc_nix *roc_nix, uint32_t index, + bool enable) +{ + struct npc_mcam_ena_dis_entry_req *req; + struct mbox *mbox = get_mbox(roc_nix); + int rc = -ENOSPC; + + if (enable) { + req = mbox_alloc_msg_npc_mcam_ena_entry(mbox); + if (req == NULL) + return rc; + } else { + req = 
mbox_alloc_msg_npc_mcam_dis_entry(mbox); + if (req == NULL) + return rc; + } + + req->entry = index; + return mbox_process(mbox); +} diff --git a/drivers/common/cnxk/roc_nix_npc.c b/drivers/common/cnxk/roc_nix_npc.c new file mode 100644 index 0000000..c0666c8 --- /dev/null +++ b/drivers/common/cnxk/roc_nix_npc.c @@ -0,0 +1,103 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +static inline struct mbox * +get_mbox(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + return dev->mbox; +} + +int +roc_nix_npc_promisc_ena_dis(struct roc_nix *roc_nix, int enable) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct nix_rx_mode *req; + int rc = -ENOSPC; + + if (roc_nix_is_vf_or_sdp(roc_nix)) + return NIX_ERR_PARAM; + + req = mbox_alloc_msg_nix_set_rx_mode(mbox); + if (req == NULL) + return rc; + + if (enable) + req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC; + + return mbox_process(mbox); +} + +int +roc_nix_npc_mac_addr_set(struct roc_nix *roc_nix, uint8_t addr[]) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct nix_set_mac_addr *req; + + req = mbox_alloc_msg_nix_set_mac_addr(mbox); + mbox_memcpy(req->mac_addr, addr, PLT_ETHER_ADDR_LEN); + return mbox_process(mbox); +} + +int +roc_nix_npc_mac_addr_get(struct roc_nix *roc_nix, uint8_t *addr) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct nix_get_mac_addr_rsp *rsp; + int rc; + + mbox_alloc_msg_nix_get_mac_addr(mbox); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + mbox_memcpy(addr, rsp->mac_addr, PLT_ETHER_ADDR_LEN); + return 0; +} + +int +roc_nix_npc_rx_ena_dis(struct roc_nix *roc_nix, bool enable) +{ + struct mbox *mbox = get_mbox(roc_nix); + int rc; + + if (enable) + mbox_alloc_msg_nix_lf_start_rx(mbox); + else + mbox_alloc_msg_nix_lf_stop_rx(mbox); + + rc = mbox_process(mbox); + if (!rc) + roc_nix->io_enabled = enable; + return rc; +} + +int 
+roc_nix_npc_mcast_config(struct roc_nix *roc_nix, bool mcast_enable, + bool prom_enable) + +{ + struct mbox *mbox = get_mbox(roc_nix); + struct nix_rx_mode *req; + int rc = -ENOSPC; + + if (roc_nix_is_vf_or_sdp(roc_nix)) + return 0; + + req = mbox_alloc_msg_nix_set_rx_mode(mbox); + if (req == NULL) + return rc; + + if (mcast_enable) + req->mode = NIX_RX_MODE_ALLMULTI; + else if (prom_enable) + req->mode = NIX_RX_MODE_PROMISC; + + return mbox_process(mbox); +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 79500cc..2ca6368 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -45,6 +45,15 @@ INTERNAL { roc_nix_mac_promisc_mode_enable; roc_nix_mac_rxtx_start_stop; roc_nix_max_pkt_len; + roc_nix_mcast_mcam_entry_alloc; + roc_nix_mcast_mcam_entry_ena_dis; + roc_nix_mcast_mcam_entry_free; + roc_nix_mcast_mcam_entry_write; + roc_nix_npc_mac_addr_get; + roc_nix_npc_mac_addr_set; + roc_nix_npc_promisc_ena_dis; + roc_nix_npc_rx_ena_dis; + roc_nix_npc_mcast_config; roc_nix_ras_intr_ena_dis; roc_nix_register_cq_irqs; roc_nix_register_queue_irqs;
From patchwork Thu Apr 1 12:37:48 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90395
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
CC: Vidya Sagar Velumuri
Date: Thu, 1 Apr 2021 18:07:48 +0530
Message-ID: <20210401123817.14348-24-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 23/52] common/cnxk: add nix inline IPsec config API

From: Vidya Sagar Velumuri

Add an API to configure the NIX block for inline IPsec.

Signed-off-by: Vidya Sagar Velumuri
---
drivers/common/cnxk/roc_nix.c | 28 ++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix.h | 10 ++++++++++ drivers/common/cnxk/version.map | 1 + 3 files changed, 39 insertions(+) diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c index e64936e..0621976 100644 --- a/drivers/common/cnxk/roc_nix.c +++ b/drivers/common/cnxk/roc_nix.c @@ -81,6 +81,34 @@ roc_nix_get_pf_func(struct roc_nix *roc_nix) } int +roc_nix_lf_inl_ipsec_cfg(struct roc_nix *roc_nix, struct roc_nix_ipsec_cfg *cfg, + bool enb) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_inline_ipsec_lf_cfg *lf_cfg; + struct mbox *mbox = (&nix->dev)->mbox; + + lf_cfg = mbox_alloc_msg_nix_inline_ipsec_lf_cfg(mbox); + if (lf_cfg == NULL) + return -ENOSPC; + + if (enb) { + lf_cfg->enable = 1; + lf_cfg->sa_base_addr = cfg->iova; + lf_cfg->ipsec_cfg1.sa_idx_w = plt_log2_u32(cfg->max_sa); + lf_cfg->ipsec_cfg0.lenm1_max = roc_nix_max_pkt_len(roc_nix) - 1; + lf_cfg->ipsec_cfg1.sa_idx_max = cfg->max_sa - 1; + lf_cfg->ipsec_cfg0.sa_pow2_size = plt_log2_u32(cfg->sa_size); + lf_cfg->ipsec_cfg0.tag_const = cfg->tag_const; + lf_cfg->ipsec_cfg0.tt = cfg->tt; + } else { + lf_cfg->enable = 0; + } + + return mbox_process(mbox); +} + +int roc_nix_max_pkt_len(struct roc_nix *roc_nix) { struct nix *nix = roc_nix_to_nix_priv(roc_nix); diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 638f827..1c097cb 100644 --- a/drivers/common/cnxk/roc_nix.h +++ 
b/drivers/common/cnxk/roc_nix.h @@ -110,6 +110,14 @@ struct roc_nix_link_info { uint64_t port : 8; }; +struct roc_nix_ipsec_cfg { + uint32_t sa_size; + uint32_t tag_const; + plt_iova_t iova; + uint16_t max_sa; + uint8_t tt; +}; + /* Link status update callback */ typedef void (*link_status_t)(struct roc_nix *roc_nix, struct roc_nix_link_info *link); @@ -156,6 +164,8 @@ int __roc_api roc_nix_max_pkt_len(struct roc_nix *roc_nix); int __roc_api roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, uint32_t nb_txq, uint64_t rx_cfg); int __roc_api roc_nix_lf_free(struct roc_nix *roc_nix); +int __roc_api roc_nix_lf_inl_ipsec_cfg(struct roc_nix *roc_nix, + struct roc_nix_ipsec_cfg *cfg, bool enb); /* IRQ */ void __roc_api roc_nix_rx_queue_intr_enable(struct roc_nix *roc_nix, diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 2ca6368..fdb1aee 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -28,6 +28,7 @@ INTERNAL { roc_nix_is_sdp; roc_nix_is_vf_or_sdp; roc_nix_lf_alloc; + roc_nix_lf_inl_ipsec_cfg; roc_nix_lf_free; roc_nix_mac_addr_add; roc_nix_mac_addr_del;
From patchwork Thu Apr 1 12:37:49 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90396
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:49 +0530
Message-ID: <20210401123817.14348-25-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 24/52] common/cnxk: add nix RSS support

From: Jerin Jacob

Add APIs for default/non-default RETA table setup, RSS key set/get, and flow algorithm setup for CN9K and CN10K.

Signed-off-by: Jerin Jacob
---
drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_nix.h | 17 +++ drivers/common/cnxk/roc_nix_rss.c | 220 ++++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 7 ++ 4 files changed, 245 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_rss.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index f216d4a..f6a8880 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -21,6 +21,7 @@ sources = files('roc_dev.c', 'roc_nix_mcast.c', 'roc_nix_npc.c', 'roc_nix_queue.c', + 'roc_nix_rss.c', 'roc_npa.c', diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 1c097cb..83388ce 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -215,6 +215,23 @@ int __roc_api roc_nix_npc_rx_ena_dis(struct roc_nix *roc_nix, bool enable); int __roc_api roc_nix_npc_mcast_config(struct roc_nix *roc_nix, bool mcast_enable, bool prom_enable); +/* RSS */ +void __roc_api roc_nix_rss_key_default_fill(struct roc_nix *roc_nix, + uint8_t key[ROC_NIX_RSS_KEY_LEN]); +void __roc_api roc_nix_rss_key_set(struct roc_nix *roc_nix, + const uint8_t key[ROC_NIX_RSS_KEY_LEN]); +void __roc_api roc_nix_rss_key_get(struct roc_nix *roc_nix, + uint8_t key[ROC_NIX_RSS_KEY_LEN]); +int __roc_api roc_nix_rss_reta_set(struct roc_nix *roc_nix, uint8_t group, + uint16_t reta[ROC_NIX_RSS_RETA_MAX]); 
+int __roc_api roc_nix_rss_reta_get(struct roc_nix *roc_nix, uint8_t group, + uint16_t reta[ROC_NIX_RSS_RETA_MAX]); +int __roc_api roc_nix_rss_flowkey_set(struct roc_nix *roc_nix, uint8_t *alg_idx, + uint32_t flowkey, uint8_t group, + int mcam_index); +int __roc_api roc_nix_rss_default_setup(struct roc_nix *roc_nix, + uint32_t flowkey); + /* Queue */ int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena); diff --git a/drivers/common/cnxk/roc_nix_rss.c b/drivers/common/cnxk/roc_nix_rss.c new file mode 100644 index 0000000..2d7b84a --- /dev/null +++ b/drivers/common/cnxk/roc_nix_rss.c @@ -0,0 +1,220 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +void +roc_nix_rss_key_default_fill(struct roc_nix *roc_nix, + uint8_t key[ROC_NIX_RSS_KEY_LEN]) +{ + PLT_SET_USED(roc_nix); + const uint8_t default_key[ROC_NIX_RSS_KEY_LEN] = { + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, + 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, + 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD}; + + memcpy(key, default_key, ROC_NIX_RSS_KEY_LEN); +} + +void +roc_nix_rss_key_set(struct roc_nix *roc_nix, + const uint8_t key[ROC_NIX_RSS_KEY_LEN]) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + const uint64_t *keyptr; + uint64_t val; + uint32_t idx; + + keyptr = (const uint64_t *)key; + for (idx = 0; idx < (ROC_NIX_RSS_KEY_LEN >> 3); idx++) { + val = plt_cpu_to_be_64(keyptr[idx]); + plt_write64(val, nix->base + NIX_LF_RX_SECRETX(idx)); + } +} + +void +roc_nix_rss_key_get(struct roc_nix *roc_nix, uint8_t key[ROC_NIX_RSS_KEY_LEN]) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint64_t *keyptr = (uint64_t *)key; + uint64_t val; + uint32_t idx; + + for (idx = 0; idx < (ROC_NIX_RSS_KEY_LEN >> 3); idx++) { + val = 
plt_read64(nix->base + NIX_LF_RX_SECRETX(idx)); + keyptr[idx] = plt_be_to_cpu_64(val); + } +} + +static int +nix_cn9k_rss_reta_set(struct nix *nix, uint8_t group, + uint16_t reta[ROC_NIX_RSS_RETA_MAX]) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_aq_enq_req *req; + uint16_t idx; + int rc; + + for (idx = 0; idx < nix->reta_sz; idx++) { + req = mbox_alloc_msg_nix_aq_enq(mbox); + if (!req) { + /* The shared memory buffer can be full. + * Flush it and retry + */ + rc = mbox_process(mbox); + if (rc < 0) + return rc; + req = mbox_alloc_msg_nix_aq_enq(mbox); + if (!req) + return NIX_ERR_NO_MEM; + } + req->rss.rq = reta[idx]; + /* Fill AQ info */ + req->qidx = (group * nix->reta_sz) + idx; + req->ctype = NIX_AQ_CTYPE_RSS; + req->op = NIX_AQ_INSTOP_INIT; + } + + rc = mbox_process(mbox); + if (rc < 0) + return rc; + + return 0; +} + +static int +nix_rss_reta_set(struct nix *nix, uint8_t group, + uint16_t reta[ROC_NIX_RSS_RETA_MAX]) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_cn10k_aq_enq_req *req; + uint16_t idx; + int rc; + + for (idx = 0; idx < nix->reta_sz; idx++) { + req = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + if (!req) { + /* The shared memory buffer can be full. 
+ * Flush it and retry + */ + rc = mbox_process(mbox); + if (rc < 0) + return rc; + req = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + if (!req) + return NIX_ERR_NO_MEM; + } + req->rss.rq = reta[idx]; + /* Fill AQ info */ + req->qidx = (group * nix->reta_sz) + idx; + req->ctype = NIX_AQ_CTYPE_RSS; + req->op = NIX_AQ_INSTOP_INIT; + } + + rc = mbox_process(mbox); + if (rc < 0) + return rc; + + return 0; +} + +int +roc_nix_rss_reta_set(struct roc_nix *roc_nix, uint8_t group, + uint16_t reta[ROC_NIX_RSS_RETA_MAX]) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + int rc; + + if (group >= ROC_NIX_RSS_GRPS) + return NIX_ERR_PARAM; + + if (roc_model_is_cn9k()) + rc = nix_cn9k_rss_reta_set(nix, group, reta); + else + rc = nix_rss_reta_set(nix, group, reta); + if (rc) + return rc; + + memcpy(&nix->reta[group], reta, ROC_NIX_RSS_RETA_MAX); + return 0; +} + +int +roc_nix_rss_reta_get(struct roc_nix *roc_nix, uint8_t group, + uint16_t reta[ROC_NIX_RSS_RETA_MAX]) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + if (group >= ROC_NIX_RSS_GRPS) + return NIX_ERR_PARAM; + + memcpy(reta, &nix->reta[group], ROC_NIX_RSS_RETA_MAX); + return 0; +} + +int +roc_nix_rss_flowkey_set(struct roc_nix *roc_nix, uint8_t *alg_idx, + uint32_t flowkey, uint8_t group, int mcam_index) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_rss_flowkey_cfg_rsp *rss_rsp; + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_rss_flowkey_cfg *cfg; + int rc = -ENOSPC; + + if (group >= ROC_NIX_RSS_GRPS) + return NIX_ERR_PARAM; + + cfg = mbox_alloc_msg_nix_rss_flowkey_cfg(mbox); + if (cfg == NULL) + return rc; + cfg->flowkey_cfg = flowkey; + cfg->mcam_index = mcam_index; /* -1 indicates default group */ + cfg->group = group; /* 0 is default group */ + rc = mbox_process_msg(mbox, (void *)&rss_rsp); + if (rc) + return rc; + if (alg_idx) + *alg_idx = rss_rsp->alg_idx; + + return rc; +} + +int +roc_nix_rss_default_setup(struct roc_nix *roc_nix, uint32_t flowkey) +{ + struct nix *nix = 
roc_nix_to_nix_priv(roc_nix); + uint16_t idx, qcnt = nix->nb_rx_queues; + uint16_t reta[ROC_NIX_RSS_RETA_MAX]; + uint8_t key[ROC_NIX_RSS_KEY_LEN]; + uint8_t alg_idx; + int rc; + + roc_nix_rss_key_default_fill(roc_nix, key); + roc_nix_rss_key_set(roc_nix, key); + + /* Update default RSS RETA */ + for (idx = 0; idx < nix->reta_sz; idx++) + reta[idx] = idx % qcnt; + rc = roc_nix_rss_reta_set(roc_nix, 0, reta); + if (rc) { + plt_err("Failed to set RSS reta table rc=%d", rc); + goto fail; + } + + /* Update the default flowkey */ + rc = roc_nix_rss_flowkey_set(roc_nix, &alg_idx, flowkey, + ROC_NIX_RSS_GROUP_DEFAULT, -1); + if (rc) { + plt_err("Failed to set RSS flowkey rc=%d", rc); + goto fail; + } + + nix->rss_alg_idx = alg_idx; +fail: + return rc; +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index fdb1aee..14601a8 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -62,6 +62,13 @@ INTERNAL { roc_nix_rq_fini; roc_nix_rq_init; roc_nix_rq_modify; + roc_nix_rss_default_setup; + roc_nix_rss_flowkey_set; + roc_nix_rss_key_default_fill; + roc_nix_rss_key_get; + roc_nix_rss_key_set; + roc_nix_rss_reta_get; + roc_nix_rss_reta_set; roc_nix_rx_queue_intr_disable; roc_nix_rx_queue_intr_enable; roc_nix_sq_fini;
From patchwork Thu Apr 1 12:37:50 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90397
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:50 +0530
Message-ID: <20210401123817.14348-26-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 25/52] common/cnxk: add nix ptp support

From: Sunil Kumar Kori

Add support to enable/disable Rx and Tx PTP timestamping. Also provide APIs to register PTP info callbacks that receive configuration change updates from the kernel.

Signed-off-by: Sunil Kumar Kori
---
drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_nix.h | 16 +++++ drivers/common/cnxk/roc_nix_ptp.c | 122 ++++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 6 ++ 4 files changed, 145 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_ptp.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index f6a8880..d01cf0b 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -20,6 +20,7 @@ sources = files('roc_dev.c', 'roc_nix_mac.c', 'roc_nix_mcast.c', 'roc_nix_npc.c', + 'roc_nix_ptp.c', 'roc_nix_queue.c', 'roc_nix_rss.c', 'roc_npa.c', diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 83388ce..3cc1797 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -17,6 +17,11 @@ enum roc_nix_sq_max_sqe_sz { roc_nix_maxsqesz_w8 = NIX_MAXSQESZ_W8, }; +/* Range to adjust PTP frequency. Valid range is + * (-ROC_NIX_PTP_FREQ_ADJUST, ROC_NIX_PTP_FREQ_ADJUST) + */ +#define ROC_NIX_PTP_FREQ_ADJUST (1 << 9) + /* NIX LF RX offload configuration flags.
* These are input flags to roc_nix_lf_alloc:rx_cfg */ @@ -244,6 +249,17 @@ int __roc_api roc_nix_cq_fini(struct roc_nix_cq *cq); int __roc_api roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq); int __roc_api roc_nix_sq_fini(struct roc_nix_sq *sq); +/* PTP */ +int __roc_api roc_nix_ptp_rx_ena_dis(struct roc_nix *roc_nix, int enable); +int __roc_api roc_nix_ptp_tx_ena_dis(struct roc_nix *roc_nix, int enable); +int __roc_api roc_nix_ptp_clock_read(struct roc_nix *roc_nix, uint64_t *clock, + uint64_t *tsc, uint8_t is_pmu); +int __roc_api roc_nix_ptp_sync_time_adjust(struct roc_nix *roc_nix, + int64_t delta); +int __roc_api roc_nix_ptp_info_cb_register(struct roc_nix *roc_nix, + ptp_info_update_t ptp_update); +void __roc_api roc_nix_ptp_info_cb_unregister(struct roc_nix *roc_nix); + /* MCAST*/ int __roc_api roc_nix_mcast_mcam_entry_alloc(struct roc_nix *roc_nix, uint16_t nb_entries, diff --git a/drivers/common/cnxk/roc_nix_ptp.c b/drivers/common/cnxk/roc_nix_ptp.c new file mode 100644 index 0000000..03c4c6e --- /dev/null +++ b/drivers/common/cnxk/roc_nix_ptp.c @@ -0,0 +1,122 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +#define PTP_FREQ_ADJUST (1 << 9) + +static inline struct mbox * +get_mbox(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + return dev->mbox; +} + +int +roc_nix_ptp_rx_ena_dis(struct roc_nix *roc_nix, int enable) +{ + struct mbox *mbox = get_mbox(roc_nix); + + if (roc_nix_is_vf_or_sdp(roc_nix) || roc_nix_is_lbk(roc_nix)) + return NIX_ERR_PARAM; + + if (enable) + mbox_alloc_msg_cgx_ptp_rx_enable(mbox); + else + mbox_alloc_msg_cgx_ptp_rx_disable(mbox); + + return mbox_process(mbox); +} + +int +roc_nix_ptp_tx_ena_dis(struct roc_nix *roc_nix, int enable) +{ + struct mbox *mbox = get_mbox(roc_nix); + + if (roc_nix_is_vf_or_sdp(roc_nix) || roc_nix_is_lbk(roc_nix)) + return NIX_ERR_PARAM; + + if (enable) + mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox); + else + mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox); + + return mbox_process(mbox); +} + +int +roc_nix_ptp_clock_read(struct roc_nix *roc_nix, uint64_t *clock, uint64_t *tsc, + uint8_t is_pmu) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct ptp_req *req; + struct ptp_rsp *rsp; + int rc = -ENOSPC; + + req = mbox_alloc_msg_ptp_op(mbox); + if (req == NULL) + return rc; + req->op = PTP_OP_GET_CLOCK; + req->is_pmu = is_pmu; + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (clock) + *clock = rsp->clk; + + if (tsc) + *tsc = rsp->tsc; + + return 0; +} + +int +roc_nix_ptp_sync_time_adjust(struct roc_nix *roc_nix, int64_t delta) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct ptp_req *req; + struct ptp_rsp *rsp; + int rc = -ENOSPC; + + if (roc_nix_is_vf_or_sdp(roc_nix) || roc_nix_is_lbk(roc_nix)) + return NIX_ERR_PARAM; + + if ((delta <= -PTP_FREQ_ADJUST) || (delta >= PTP_FREQ_ADJUST)) + return NIX_ERR_INVALID_RANGE; + + req = mbox_alloc_msg_ptp_op(mbox); + if (req == NULL) + return rc; + req->op = PTP_OP_ADJFINE; + req->scaled_ppm = delta; + + return mbox_process_msg(mbox, (void 
*)&rsp); +} + +int +roc_nix_ptp_info_cb_register(struct roc_nix *roc_nix, + ptp_info_update_t ptp_update) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + if (ptp_update == NULL) + return NIX_ERR_PARAM; + + dev->ops->ptp_info_update = (ptp_info_t)ptp_update; + return 0; +} + +void +roc_nix_ptp_info_cb_unregister(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + dev->ops->ptp_info_update = NULL; +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 14601a8..66a1a82 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -55,6 +55,12 @@ INTERNAL { roc_nix_npc_promisc_ena_dis; roc_nix_npc_rx_ena_dis; roc_nix_npc_mcast_config; + roc_nix_ptp_clock_read; + roc_nix_ptp_info_cb_register; + roc_nix_ptp_info_cb_unregister; + roc_nix_ptp_rx_ena_dis; + roc_nix_ptp_sync_time_adjust; + roc_nix_ptp_tx_ena_dis; roc_nix_ras_intr_ena_dis; roc_nix_register_cq_irqs; roc_nix_register_queue_irqs; From patchwork Thu Apr 1 12:37:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90398 X-Patchwork-Delegate: jerinj@marvell.com From: Nithin Dabilpuram Date: Thu, 1 Apr 2021 18:07:51 +0530 Message-ID: <20210401123817.14348-27-ndabilpuram@marvell.com> In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com> References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com> MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v3 26/52] common/cnxk: add nix stats support From: Jerin Jacob Add APIs to get and reset Rx and Tx stats for a given NIX, both per LF and per queue. Signed-off-by: Jerin Jacob --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_nix.h | 53 ++++++++ drivers/common/cnxk/roc_nix_stats.c | 239 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 4 + 4 files changed, 297 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_stats.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index d01cf0b..4c48f55 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -23,6 +23,7 @@ sources = files('roc_dev.c', 'roc_nix_ptp.c', 'roc_nix_queue.c', 'roc_nix_rss.c', + 'roc_nix_stats.c', 'roc_npa.c', 'roc_npa_debug.c', 'roc_npa_irq.c', diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 3cc1797..45aca83 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -47,6 +47,49 @@ enum roc_nix_sq_max_sqe_sz { #define ROC_NIX_VWQE_MAX_SIZE_LOG2 11 #define ROC_NIX_VWQE_MIN_SIZE_LOG2 2 +struct roc_nix_stats { + /* Rx */ + uint64_t rx_octs; + uint64_t rx_ucast; + uint64_t rx_bcast; + uint64_t rx_mcast; + uint64_t rx_drop; + uint64_t rx_drop_octs; + uint64_t rx_fcs; + uint64_t rx_err; + uint64_t rx_drop_bcast; + uint64_t rx_drop_mcast; + uint64_t rx_drop_l3_bcast; + uint64_t rx_drop_l3_mcast; + /* Tx */ + uint64_t tx_ucast; + uint64_t tx_bcast; + uint64_t tx_mcast; + uint64_t tx_drop; + uint64_t tx_octs; +}; + +struct roc_nix_stats_queue { + PLT_STD_C11 + union { + struct { + /* Rx */ + uint64_t rx_pkts; + uint64_t rx_octs; + uint64_t rx_drop_pkts; + uint64_t rx_drop_octs; + uint64_t rx_error_pkts; + }; + struct { + /* Tx */ + 
uint64_t tx_pkts; + uint64_t tx_octs; + uint64_t tx_drop_pkts; + uint64_t tx_drop_octs; + }; + }; +}; + struct roc_nix_rq { /* Input parameters */ uint16_t qid; @@ -237,6 +280,16 @@ int __roc_api roc_nix_rss_flowkey_set(struct roc_nix *roc_nix, uint8_t *alg_idx, int __roc_api roc_nix_rss_default_setup(struct roc_nix *roc_nix, uint32_t flowkey); +/* Stats */ +int __roc_api roc_nix_stats_get(struct roc_nix *roc_nix, + struct roc_nix_stats *stats); +int __roc_api roc_nix_stats_reset(struct roc_nix *roc_nix); +int __roc_api roc_nix_stats_queue_get(struct roc_nix *roc_nix, uint16_t qid, + bool is_rx, + struct roc_nix_stats_queue *qstats); +int __roc_api roc_nix_stats_queue_reset(struct roc_nix *roc_nix, uint16_t qid, + bool is_rx); + /* Queue */ int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena); diff --git a/drivers/common/cnxk/roc_nix_stats.c b/drivers/common/cnxk/roc_nix_stats.c new file mode 100644 index 0000000..e0a776a --- /dev/null +++ b/drivers/common/cnxk/roc_nix_stats.c @@ -0,0 +1,239 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include + +#include "roc_api.h" +#include "roc_priv.h" + +#define NIX_RX_STATS(val) plt_read64(nix->base + NIX_LF_RX_STATX(val)) +#define NIX_TX_STATS(val) plt_read64(nix->base + NIX_LF_TX_STATX(val)) + +int +roc_nix_stats_get(struct roc_nix *roc_nix, struct roc_nix_stats *stats) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + if (stats == NULL) + return NIX_ERR_PARAM; + + stats->rx_octs = NIX_RX_STATS(NIX_STAT_LF_RX_RX_OCTS); + stats->rx_ucast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_UCAST); + stats->rx_bcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_BCAST); + stats->rx_mcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_MCAST); + stats->rx_drop = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DROP); + stats->rx_drop_octs = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DROP_OCTS); + stats->rx_fcs = NIX_RX_STATS(NIX_STAT_LF_RX_RX_FCS); + stats->rx_err = NIX_RX_STATS(NIX_STAT_LF_RX_RX_ERR); + stats->rx_drop_bcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_BCAST); + stats->rx_drop_mcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_MCAST); + stats->rx_drop_l3_bcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_L3BCAST); + stats->rx_drop_l3_mcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_L3MCAST); + + stats->tx_ucast = NIX_TX_STATS(NIX_STAT_LF_TX_TX_UCAST); + stats->tx_bcast = NIX_TX_STATS(NIX_STAT_LF_TX_TX_BCAST); + stats->tx_mcast = NIX_TX_STATS(NIX_STAT_LF_TX_TX_MCAST); + stats->tx_drop = NIX_TX_STATS(NIX_STAT_LF_TX_TX_DROP); + stats->tx_octs = NIX_TX_STATS(NIX_STAT_LF_TX_TX_OCTS); + return 0; +} + +int +roc_nix_stats_reset(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + + if (mbox_alloc_msg_nix_stats_rst(mbox) == NULL) + return -ENOMEM; + + return mbox_process(mbox); +} + +static int +queue_is_valid(struct nix *nix, uint16_t qid, bool is_rx) +{ + uint16_t nb_queues; + + if (is_rx) + nb_queues = nix->nb_rx_queues; + else + nb_queues = nix->nb_tx_queues; + + if (qid >= nb_queues) + return NIX_ERR_QUEUE_INVALID_RANGE; + + return 0; +} + +static uint64_t 
+qstat_read(struct nix *nix, uint16_t qid, uint32_t off) +{ + uint64_t reg, val; + int64_t *addr; + + addr = (int64_t *)(nix->base + off); + reg = (((uint64_t)qid) << 32); + val = roc_atomic64_add_nosync(reg, addr); + if (val & BIT_ULL(NIX_CQ_OP_STAT_OP_ERR)) + val = 0; + return val; +} + +static void +nix_stat_rx_queue_get(struct nix *nix, uint16_t qid, + struct roc_nix_stats_queue *qstats) +{ + qstats->rx_pkts = qstat_read(nix, qid, NIX_LF_RQ_OP_PKTS); + qstats->rx_octs = qstat_read(nix, qid, NIX_LF_RQ_OP_OCTS); + qstats->rx_drop_pkts = qstat_read(nix, qid, NIX_LF_RQ_OP_DROP_PKTS); + qstats->rx_drop_octs = qstat_read(nix, qid, NIX_LF_RQ_OP_DROP_OCTS); + qstats->rx_error_pkts = qstat_read(nix, qid, NIX_LF_RQ_OP_RE_PKTS); +} + +static void +nix_stat_tx_queue_get(struct nix *nix, uint16_t qid, + struct roc_nix_stats_queue *qstats) +{ + qstats->tx_pkts = qstat_read(nix, qid, NIX_LF_SQ_OP_PKTS); + qstats->tx_octs = qstat_read(nix, qid, NIX_LF_SQ_OP_OCTS); + qstats->tx_drop_pkts = qstat_read(nix, qid, NIX_LF_SQ_OP_DROP_PKTS); + qstats->tx_drop_octs = qstat_read(nix, qid, NIX_LF_SQ_OP_DROP_OCTS); +} + +static int +nix_stat_rx_queue_reset(struct nix *nix, uint16_t qid) +{ + struct mbox *mbox = (&nix->dev)->mbox; + int rc; + + if (roc_model_is_cn9k()) { + struct nix_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + aq->rq.octs = 0; + aq->rq.pkts = 0; + aq->rq.drop_octs = 0; + aq->rq.drop_pkts = 0; + aq->rq.re_pkts = 0; + + aq->rq_mask.octs = ~(aq->rq_mask.octs); + aq->rq_mask.pkts = ~(aq->rq_mask.pkts); + aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs); + aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts); + aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts); + } else { + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + aq->rq.octs = 0; + aq->rq.pkts = 0; + 
aq->rq.drop_octs = 0; + aq->rq.drop_pkts = 0; + aq->rq.re_pkts = 0; + + aq->rq_mask.octs = ~(aq->rq_mask.octs); + aq->rq_mask.pkts = ~(aq->rq_mask.pkts); + aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs); + aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts); + aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts); + } + + rc = mbox_process(mbox); + return rc ? NIX_ERR_AQ_WRITE_FAILED : 0; +} + +static int +nix_stat_tx_queue_reset(struct nix *nix, uint16_t qid) +{ + struct mbox *mbox = (&nix->dev)->mbox; + int rc; + + if (roc_model_is_cn9k()) { + struct nix_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_WRITE; + aq->sq.octs = 0; + aq->sq.pkts = 0; + aq->sq.drop_octs = 0; + aq->sq.drop_pkts = 0; + + aq->sq_mask.octs = ~(aq->sq_mask.octs); + aq->sq_mask.pkts = ~(aq->sq_mask.pkts); + aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs); + aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts); + } else { + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_WRITE; + aq->sq.octs = 0; + aq->sq.pkts = 0; + aq->sq.drop_octs = 0; + aq->sq.drop_pkts = 0; + + aq->sq_mask.octs = ~(aq->sq_mask.octs); + aq->sq_mask.pkts = ~(aq->sq_mask.pkts); + aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs); + aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts); + } + + rc = mbox_process(mbox); + return rc ? 
NIX_ERR_AQ_WRITE_FAILED : 0; +} + +int +roc_nix_stats_queue_get(struct roc_nix *roc_nix, uint16_t qid, bool is_rx, + struct roc_nix_stats_queue *qstats) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + int rc; + + if (qstats == NULL) + return NIX_ERR_PARAM; + + rc = queue_is_valid(nix, qid, is_rx); + if (rc) + goto fail; + + if (is_rx) + nix_stat_rx_queue_get(nix, qid, qstats); + else + nix_stat_tx_queue_get(nix, qid, qstats); + +fail: + return rc; +} + +int +roc_nix_stats_queue_reset(struct roc_nix *roc_nix, uint16_t qid, bool is_rx) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + int rc; + + rc = queue_is_valid(nix, qid, is_rx); + if (rc) + goto fail; + + if (is_rx) + rc = nix_stat_rx_queue_reset(nix, qid); + else + rc = nix_stat_tx_queue_reset(nix, qid); + +fail: + return rc; +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 66a1a82..aa79da6 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -79,6 +79,10 @@ INTERNAL { roc_nix_rx_queue_intr_enable; roc_nix_sq_fini; roc_nix_sq_init; + roc_nix_stats_get; + roc_nix_stats_queue_get; + roc_nix_stats_queue_reset; + roc_nix_stats_reset; roc_nix_unregister_cq_irqs; roc_nix_unregister_queue_irqs; roc_npa_aura_limit_modify; From patchwork Thu Apr 1 12:37:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90400 X-Patchwork-Delegate: jerinj@marvell.com From: Nithin Dabilpuram Date: Thu, 1 Apr 2021 18:07:52 +0530 Message-ID: <20210401123817.14348-28-ndabilpuram@marvell.com> In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com> References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 27/52] common/cnxk: add support for nix extended stats From: Satha Rao Add support to retrieve NIX extended stats, which are maintained per NIX LF and per LMAC. Signed-off-by: Satha Rao --- drivers/common/cnxk/roc_nix.h | 18 ++++ drivers/common/cnxk/roc_nix_stats.c | 172 +++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix_xstats.h | 204 +++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 3 + 4 files changed, 397 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_xstats.h diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 45aca83..137889a 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -158,6 +158,18 @@ struct roc_nix_link_info { uint64_t port : 8; }; +/** Maximum name length for extended statistics counters */ +#define ROC_NIX_XSTATS_NAME_SIZE 64 + +struct roc_nix_xstat { + uint64_t id; /**< The index in xstats name array. */ + uint64_t value; /**< The statistic counter value. 
*/ +}; + +struct roc_nix_xstat_name { + char name[ROC_NIX_XSTATS_NAME_SIZE]; +}; + struct roc_nix_ipsec_cfg { uint32_t sa_size; uint32_t tag_const; @@ -289,6 +301,12 @@ int __roc_api roc_nix_stats_queue_get(struct roc_nix *roc_nix, uint16_t qid, struct roc_nix_stats_queue *qstats); int __roc_api roc_nix_stats_queue_reset(struct roc_nix *roc_nix, uint16_t qid, bool is_rx); +int __roc_api roc_nix_num_xstats_get(struct roc_nix *roc_nix); +int __roc_api roc_nix_xstats_get(struct roc_nix *roc_nix, + struct roc_nix_xstat *xstats, unsigned int n); +int __roc_api roc_nix_xstats_names_get(struct roc_nix *roc_nix, + struct roc_nix_xstat_name *xstats_names, + unsigned int limit); /* Queue */ int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, diff --git a/drivers/common/cnxk/roc_nix_stats.c b/drivers/common/cnxk/roc_nix_stats.c index e0a776a..631677b 100644 --- a/drivers/common/cnxk/roc_nix_stats.c +++ b/drivers/common/cnxk/roc_nix_stats.c @@ -5,12 +5,24 @@ #include #include "roc_api.h" +#include "roc_nix_xstats.h" #include "roc_priv.h" #define NIX_RX_STATS(val) plt_read64(nix->base + NIX_LF_RX_STATX(val)) #define NIX_TX_STATS(val) plt_read64(nix->base + NIX_LF_TX_STATX(val)) int +roc_nix_num_xstats_get(struct roc_nix *roc_nix) +{ + if (roc_nix_is_vf_or_sdp(roc_nix)) + return CNXK_NIX_NUM_XSTATS_REG; + else if (roc_model_is_cn9k()) + return CNXK_NIX_NUM_XSTATS_CGX; + + return CNXK_NIX_NUM_XSTATS_RPM; +} + +int roc_nix_stats_get(struct roc_nix *roc_nix, struct roc_nix_stats *stats) { struct nix *nix = roc_nix_to_nix_priv(roc_nix); @@ -237,3 +249,163 @@ roc_nix_stats_queue_reset(struct roc_nix *roc_nix, uint16_t qid, bool is_rx) fail: return rc; } + +int +roc_nix_xstats_get(struct roc_nix *roc_nix, struct roc_nix_xstat *xstats, + unsigned int n) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + struct cgx_stats_rsp *cgx_resp; + struct rpm_stats_rsp *rpm_resp; + uint64_t i, count = 0; + struct msg_req *req; 
+ uint32_t xstat_cnt; + int rc; + + xstat_cnt = roc_nix_num_xstats_get(roc_nix); + if (n < xstat_cnt) + return xstat_cnt; + + if (xstats == NULL) + return -EINVAL; + + memset(xstats, 0, (xstat_cnt * sizeof(*xstats))); + for (i = 0; i < CNXK_NIX_NUM_TX_XSTATS; i++) { + xstats[count].value = NIX_TX_STATS(nix_tx_xstats[i].offset); + xstats[count].id = count; + count++; + } + + for (i = 0; i < CNXK_NIX_NUM_RX_XSTATS; i++) { + xstats[count].value = NIX_RX_STATS(nix_rx_xstats[i].offset); + xstats[count].id = count; + count++; + } + + for (i = 0; i < nix->nb_rx_queues; i++) + xstats[count].value += + qstat_read(nix, i, nix_q_xstats[0].offset); + + xstats[count].id = count; + count++; + + if (roc_nix_is_vf_or_sdp(roc_nix)) + return count; + + if (roc_model_is_cn9k()) { + req = mbox_alloc_msg_cgx_stats(mbox); + req->hdr.pcifunc = roc_nix_get_pf_func(roc_nix); + + rc = mbox_process_msg(mbox, (void *)&cgx_resp); + if (rc) + return rc; + + for (i = 0; i < roc_nix_num_rx_xstats(); i++) { + xstats[count].value = + cgx_resp->rx_stats[nix_rx_xstats_cgx[i].offset]; + xstats[count].id = count; + count++; + } + + for (i = 0; i < roc_nix_num_tx_xstats(); i++) { + xstats[count].value = + cgx_resp->tx_stats[nix_tx_xstats_cgx[i].offset]; + xstats[count].id = count; + count++; + } + } else { + req = mbox_alloc_msg_rpm_stats(mbox); + req->hdr.pcifunc = roc_nix_get_pf_func(roc_nix); + + rc = mbox_process_msg(mbox, (void *)&rpm_resp); + if (rc) + return rc; + + for (i = 0; i < roc_nix_num_rx_xstats(); i++) { + xstats[count].value = + rpm_resp->rx_stats[nix_rx_xstats_rpm[i].offset]; + xstats[count].id = count; + count++; + } + + for (i = 0; i < roc_nix_num_tx_xstats(); i++) { + xstats[count].value = + rpm_resp->tx_stats[nix_tx_xstats_rpm[i].offset]; + xstats[count].id = count; + count++; + } + } + + return count; +} + +int +roc_nix_xstats_names_get(struct roc_nix *roc_nix, + struct roc_nix_xstat_name *xstats_names, + unsigned int limit) +{ + unsigned long int i, count = 0; + unsigned int 
xstat_cnt; + + xstat_cnt = roc_nix_num_xstats_get(roc_nix); + if (limit < xstat_cnt && xstats_names != NULL) + return -ENOMEM; + + if (xstats_names) { + for (i = 0; i < CNXK_NIX_NUM_TX_XSTATS; i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), "%s", + nix_tx_xstats[i].name); + count++; + } + + for (i = 0; i < CNXK_NIX_NUM_RX_XSTATS; i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), "%s", + nix_rx_xstats[i].name); + count++; + } + for (i = 0; i < CNXK_NIX_NUM_QUEUE_XSTATS; i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), "%s", + nix_q_xstats[i].name); + count++; + } + + if (roc_nix_is_vf_or_sdp(roc_nix)) + return count; + + if (roc_model_is_cn9k()) { + for (i = 0; i < roc_nix_num_rx_xstats(); i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), "%s", + nix_rx_xstats_cgx[i].name); + count++; + } + + for (i = 0; i < roc_nix_num_tx_xstats(); i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), "%s", + nix_tx_xstats_cgx[i].name); + count++; + } + } else { + for (i = 0; i < roc_nix_num_rx_xstats(); i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), "%s", + nix_rx_xstats_rpm[i].name); + count++; + } + + for (i = 0; i < roc_nix_num_tx_xstats(); i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), "%s", + nix_tx_xstats_rpm[i].name); + count++; + } + } + } + + return xstat_cnt; +} diff --git a/drivers/common/cnxk/roc_nix_xstats.h b/drivers/common/cnxk/roc_nix_xstats.h new file mode 100644 index 0000000..bde00a6 --- /dev/null +++ b/drivers/common/cnxk/roc_nix_xstats.h @@ -0,0 +1,204 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ +#ifndef _ROC_NIX_XSTAT_H_ +#define _ROC_NIX_XSTAT_H_ + +#include + +struct cnxk_nix_xstats_name { + char name[ROC_NIX_XSTATS_NAME_SIZE]; + uint32_t offset; +}; + +static const struct cnxk_nix_xstats_name nix_tx_xstats[] = { + {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST}, + {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST}, + {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST}, + {"tx_drop", NIX_STAT_LF_TX_TX_DROP}, + {"tx_octs", NIX_STAT_LF_TX_TX_OCTS}, +}; + +static const struct cnxk_nix_xstats_name nix_rx_xstats[] = { + {"rx_octs", NIX_STAT_LF_RX_RX_OCTS}, + {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST}, + {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST}, + {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST}, + {"rx_drop", NIX_STAT_LF_RX_RX_DROP}, + {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS}, + {"rx_fcs", NIX_STAT_LF_RX_RX_FCS}, + {"rx_err", NIX_STAT_LF_RX_RX_ERR}, + {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST}, + {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST}, + {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST}, + {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST}, +}; + +static const struct cnxk_nix_xstats_name nix_q_xstats[] = { + {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS}, +}; + +static const struct cnxk_nix_xstats_name nix_rx_xstats_rpm[] = { + {"rpm_rx_etherStatsOctets", RPM_MTI_STAT_RX_OCT_CNT}, + {"rpm_rx_OctetsReceivedOK", RPM_MTI_STAT_RX_OCT_RECV_OK}, + {"rpm_rx_aAlignmentErrors", RPM_MTI_STAT_RX_ALIG_ERR}, + {"rpm_rx_aPAUSEMACCtrlFramesReceived", RPM_MTI_STAT_RX_CTRL_FRM_RECV}, + {"rpm_rx_aFrameTooLongErrors", RPM_MTI_STAT_RX_FRM_LONG}, + {"rpm_rx_aInRangeLengthErrors", RPM_MTI_STAT_RX_LEN_ERR}, + {"rpm_rx_aFramesReceivedOK", RPM_MTI_STAT_RX_FRM_RECV}, + {"rpm_rx_aFrameCheckSequenceErrors", RPM_MTI_STAT_RX_FRM_SEQ_ERR}, + {"rpm_rx_VLANReceivedOK", RPM_MTI_STAT_RX_VLAN_OK}, + {"rpm_rx_ifInErrors", RPM_MTI_STAT_RX_IN_ERR}, + {"rpm_rx_ifInUcastPkts", RPM_MTI_STAT_RX_IN_UCAST_PKT}, + {"rpm_rx_ifInMulticastPkts", RPM_MTI_STAT_RX_IN_MCAST_PKT}, + {"rpm_rx_ifInBroadcastPkts", RPM_MTI_STAT_RX_IN_BCAST_PKT}, + 
{"rpm_rx_etherStatsDropEvents", RPM_MTI_STAT_RX_DRP_EVENTS}, + {"rpm_rx_etherStatsPkts", RPM_MTI_STAT_RX_PKT}, + {"rpm_rx_etherStatsUndersizePkts", RPM_MTI_STAT_RX_UNDER_SIZE}, + {"rpm_rx_etherStatsPkts64Octets", RPM_MTI_STAT_RX_1_64_PKT_CNT}, + {"rpm_rx_etherStatsPkts65to127Octets", RPM_MTI_STAT_RX_65_127_PKT_CNT}, + {"rpm_rx_etherStatsPkts128to255Octets", + RPM_MTI_STAT_RX_128_255_PKT_CNT}, + {"rpm_rx_etherStatsPkts256to511Octets", + RPM_MTI_STAT_RX_256_511_PKT_CNT}, + {"rpm_rx_etherStatsPkts512to1023Octets", + RPM_MTI_STAT_RX_512_1023_PKT_CNT}, + {"rpm_rx_etherStatsPkts1024to1518Octets", + RPM_MTI_STAT_RX_1024_1518_PKT_CNT}, + {"rpm_rx_etherStatsPkts1519toMaxOctets", + RPM_MTI_STAT_RX_1519_MAX_PKT_CNT}, + {"rpm_rx_etherStatsOversizePkts", RPM_MTI_STAT_RX_OVER_SIZE}, + {"rpm_rx_etherStatsJabbers", RPM_MTI_STAT_RX_JABBER}, + {"rpm_rx_etherStatsFragments", RPM_MTI_STAT_RX_ETH_FRAGS}, + {"rpm_rx_CBFC_pause_frames_class_0", RPM_MTI_STAT_RX_CBFC_CLASS_0}, + {"rpm_rx_CBFC_pause_frames_class_1", RPM_MTI_STAT_RX_CBFC_CLASS_1}, + {"rpm_rx_CBFC_pause_frames_class_2", RPM_MTI_STAT_RX_CBFC_CLASS_2}, + {"rpm_rx_CBFC_pause_frames_class_3", RPM_MTI_STAT_RX_CBFC_CLASS_3}, + {"rpm_rx_CBFC_pause_frames_class_4", RPM_MTI_STAT_RX_CBFC_CLASS_4}, + {"rpm_rx_CBFC_pause_frames_class_5", RPM_MTI_STAT_RX_CBFC_CLASS_5}, + {"rpm_rx_CBFC_pause_frames_class_6", RPM_MTI_STAT_RX_CBFC_CLASS_6}, + {"rpm_rx_CBFC_pause_frames_class_7", RPM_MTI_STAT_RX_CBFC_CLASS_7}, + {"rpm_rx_CBFC_pause_frames_class_8", RPM_MTI_STAT_RX_CBFC_CLASS_8}, + {"rpm_rx_CBFC_pause_frames_class_9", RPM_MTI_STAT_RX_CBFC_CLASS_9}, + {"rpm_rx_CBFC_pause_frames_class_10", RPM_MTI_STAT_RX_CBFC_CLASS_10}, + {"rpm_rx_CBFC_pause_frames_class_11", RPM_MTI_STAT_RX_CBFC_CLASS_11}, + {"rpm_rx_CBFC_pause_frames_class_12", RPM_MTI_STAT_RX_CBFC_CLASS_12}, + {"rpm_rx_CBFC_pause_frames_class_13", RPM_MTI_STAT_RX_CBFC_CLASS_13}, + {"rpm_rx_CBFC_pause_frames_class_14", RPM_MTI_STAT_RX_CBFC_CLASS_14}, + {"rpm_rx_CBFC_pause_frames_class_15", 
RPM_MTI_STAT_RX_CBFC_CLASS_15}, + {"rpm_rx_aMACControlFramesReceived", RPM_MTI_STAT_RX_MAC_CONTROL}, +}; + +static const struct cnxk_nix_xstats_name nix_tx_xstats_rpm[] = { + {"rpm_tx_etherStatsOctets", RPM_MTI_STAT_TX_OCT_CNT}, + {"rpm_tx_OctetsTransmittedOK", RPM_MTI_STAT_TX_OCT_TX_OK}, + {"rpm_tx_aPAUSEMACCtrlFramesTransmitted", + RPM_MTI_STAT_TX_PAUSE_MAC_CTRL}, + {"rpm_tx_aFramesTransmittedOK", RPM_MTI_STAT_TX_FRAMES_OK}, + {"rpm_tx_VLANTransmittedOK", RPM_MTI_STAT_TX_VLAN_OK}, + {"rpm_tx_ifOutErrors", RPM_MTI_STAT_TX_OUT_ERR}, + {"rpm_tx_ifOutUcastPkts", RPM_MTI_STAT_TX_UCAST_PKT_CNT}, + {"rpm_tx_ifOutMulticastPkts", RPM_MTI_STAT_TX_MCAST_PKT_CNT}, + {"rpm_tx_ifOutBroadcastPkts", RPM_MTI_STAT_TX_BCAST_PKT_CNT}, + {"rpm_tx_etherStatsPkts64Octets", RPM_MTI_STAT_TX_1_64_PKT_CNT}, + {"rpm_tx_etherStatsPkts65to127Octets", RPM_MTI_STAT_TX_65_127_PKT_CNT}, + {"rpm_tx_etherStatsPkts128to255Octets", + RPM_MTI_STAT_TX_128_255_PKT_CNT}, + {"rpm_tx_etherStatsPkts256to511Octets", + RPM_MTI_STAT_TX_256_511_PKT_CNT}, + {"rpm_tx_etherStatsPkts512to1023Octets", + RPM_MTI_STAT_TX_512_1023_PKT_CNT}, + {"rpm_tx_etherStatsPkts1024to1518Octets", + RPM_MTI_STAT_TX_1024_1518_PKT_CNT}, + {"rpm_tx_etherStatsPkts1519toMaxOctets", + RPM_MTI_STAT_TX_1519_MAX_PKT_CNT}, + {"rpm_tx_CBFC_pause_frames_class_0", RPM_MTI_STAT_TX_CBFC_CLASS_0}, + {"rpm_tx_CBFC_pause_frames_class_1", RPM_MTI_STAT_TX_CBFC_CLASS_1}, + {"rpm_tx_CBFC_pause_frames_class_2", RPM_MTI_STAT_TX_CBFC_CLASS_2}, + {"rpm_tx_CBFC_pause_frames_class_3", RPM_MTI_STAT_TX_CBFC_CLASS_3}, + {"rpm_tx_CBFC_pause_frames_class_4", RPM_MTI_STAT_TX_CBFC_CLASS_4}, + {"rpm_tx_CBFC_pause_frames_class_5", RPM_MTI_STAT_TX_CBFC_CLASS_5}, + {"rpm_tx_CBFC_pause_frames_class_6", RPM_MTI_STAT_TX_CBFC_CLASS_6}, + {"rpm_tx_CBFC_pause_frames_class_7", RPM_MTI_STAT_TX_CBFC_CLASS_7}, + {"rpm_tx_CBFC_pause_frames_class_8", RPM_MTI_STAT_TX_CBFC_CLASS_8}, + {"rpm_tx_CBFC_pause_frames_class_9", RPM_MTI_STAT_TX_CBFC_CLASS_9}, + 
{"rpm_tx_CBFC_pause_frames_class_10", RPM_MTI_STAT_TX_CBFC_CLASS_10}, + {"rpm_tx_CBFC_pause_frames_class_11", RPM_MTI_STAT_TX_CBFC_CLASS_11}, + {"rpm_tx_CBFC_pause_frames_class_12", RPM_MTI_STAT_TX_CBFC_CLASS_12}, + {"rpm_tx_CBFC_pause_frames_class_13", RPM_MTI_STAT_TX_CBFC_CLASS_13}, + {"rpm_tx_CBFC_pause_frames_class_14", RPM_MTI_STAT_TX_CBFC_CLASS_14}, + {"rpm_tx_CBFC_pause_frames_class_15", RPM_MTI_STAT_TX_CBFC_CLASS_15}, + {"rpm_tx_aMACControlFramesTransmitted", + RPM_MTI_STAT_TX_MAC_CONTROL_FRAMES}, + {"rpm_tx_etherStatsPkts", RPM_MTI_STAT_TX_PKT_CNT}, +}; + +static const struct cnxk_nix_xstats_name nix_rx_xstats_cgx[] = { + {"cgx_rx_pkts", CGX_RX_PKT_CNT}, + {"cgx_rx_octs", CGX_RX_OCT_CNT}, + {"cgx_rx_pause_pkts", CGX_RX_PAUSE_PKT_CNT}, + {"cgx_rx_pause_octs", CGX_RX_PAUSE_OCT_CNT}, + {"cgx_rx_dmac_filt_pkts", CGX_RX_DMAC_FILT_PKT_CNT}, + {"cgx_rx_dmac_filt_octs", CGX_RX_DMAC_FILT_OCT_CNT}, + {"cgx_rx_fifo_drop_pkts", CGX_RX_FIFO_DROP_PKT_CNT}, + {"cgx_rx_fifo_drop_octs", CGX_RX_FIFO_DROP_OCT_CNT}, + {"cgx_rx_errors", CGX_RX_ERR_CNT}, +}; + +static const struct cnxk_nix_xstats_name nix_tx_xstats_cgx[] = { + {"cgx_tx_collision_drop", CGX_TX_COLLISION_DROP}, + {"cgx_tx_frame_deferred_cnt", CGX_TX_FRAME_DEFER_CNT}, + {"cgx_tx_multiple_collision", CGX_TX_MULTIPLE_COLLISION}, + {"cgx_tx_single_collision", CGX_TX_SINGLE_COLLISION}, + {"cgx_tx_octs", CGX_TX_OCT_CNT}, + {"cgx_tx_pkts", CGX_TX_PKT_CNT}, + {"cgx_tx_1_to_63_oct_frames", CGX_TX_1_63_PKT_CNT}, + {"cgx_tx_64_oct_frames", CGX_TX_64_PKT_CNT}, + {"cgx_tx_65_to_127_oct_frames", CGX_TX_65_127_PKT_CNT}, + {"cgx_tx_128_to_255_oct_frames", CGX_TX_128_255_PKT_CNT}, + {"cgx_tx_256_to_511_oct_frames", CGX_TX_256_511_PKT_CNT}, + {"cgx_tx_512_to_1023_oct_frames", CGX_TX_512_1023_PKT_CNT}, + {"cgx_tx_1024_to_1518_oct_frames", CGX_TX_1024_1518_PKT_CNT}, + {"cgx_tx_1519_to_max_oct_frames", CGX_TX_1519_MAX_PKT_CNT}, + {"cgx_tx_broadcast_packets", CGX_TX_BCAST_PKTS}, + {"cgx_tx_multicast_packets", CGX_TX_MCAST_PKTS}, + 
{"cgx_tx_underflow_packets", CGX_TX_UFLOW_PKTS}, + {"cgx_tx_pause_packets", CGX_TX_PAUSE_PKTS}, +}; + +#define CNXK_NIX_NUM_RX_XSTATS PLT_DIM(nix_rx_xstats) +#define CNXK_NIX_NUM_TX_XSTATS PLT_DIM(nix_tx_xstats) +#define CNXK_NIX_NUM_QUEUE_XSTATS PLT_DIM(nix_q_xstats) +#define CNXK_NIX_NUM_RX_XSTATS_CGX PLT_DIM(nix_rx_xstats_cgx) +#define CNXK_NIX_NUM_TX_XSTATS_CGX PLT_DIM(nix_tx_xstats_cgx) +#define CNXK_NIX_NUM_RX_XSTATS_RPM PLT_DIM(nix_rx_xstats_rpm) +#define CNXK_NIX_NUM_TX_XSTATS_RPM PLT_DIM(nix_tx_xstats_rpm) + +#define CNXK_NIX_NUM_XSTATS_REG \ + (CNXK_NIX_NUM_RX_XSTATS + CNXK_NIX_NUM_TX_XSTATS + \ + CNXK_NIX_NUM_QUEUE_XSTATS) +#define CNXK_NIX_NUM_XSTATS_CGX \ + (CNXK_NIX_NUM_XSTATS_REG + CNXK_NIX_NUM_RX_XSTATS_CGX + \ + CNXK_NIX_NUM_TX_XSTATS_CGX) +#define CNXK_NIX_NUM_XSTATS_RPM \ + (CNXK_NIX_NUM_XSTATS_REG + CNXK_NIX_NUM_RX_XSTATS_RPM + \ + CNXK_NIX_NUM_TX_XSTATS_RPM) + +static inline unsigned long int +roc_nix_num_rx_xstats(void) +{ + if (roc_model_is_cn9k()) + return CNXK_NIX_NUM_RX_XSTATS_CGX; + + return CNXK_NIX_NUM_RX_XSTATS_RPM; +} + +static inline unsigned long int +roc_nix_num_tx_xstats(void) +{ + if (roc_model_is_cn9k()) + return CNXK_NIX_NUM_TX_XSTATS_CGX; + + return CNXK_NIX_NUM_TX_XSTATS_RPM; +} +#endif /* _ROC_NIX_XSTAT_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index aa79da6..85b8393 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -83,6 +83,9 @@ INTERNAL { roc_nix_stats_queue_get; roc_nix_stats_queue_reset; roc_nix_stats_reset; + roc_nix_num_xstats_get; + roc_nix_xstats_get; + roc_nix_xstats_names_get; roc_nix_unregister_cq_irqs; roc_nix_unregister_queue_irqs; roc_npa_aura_limit_modify; From patchwork Thu Apr 1 12:37:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90401 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: 
From: Nithin Dabilpuram Date: Thu, 1 Apr 2021 18:07:53 +0530 Message-ID: <20210401123817.14348-29-ndabilpuram@marvell.com> In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com> References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 28/52] common/cnxk: add nix debug dump support From: Jerin Jacob Add support to dump NIX RQ, SQ and CQ contexts apart from NIX LF registers.
Signed-off-by: Jerin Jacob Signed-off-by: Sunil Kumar Kori --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_nix.h | 10 + drivers/common/cnxk/roc_nix_debug.c | 805 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix_irq.c | 11 + drivers/common/cnxk/version.map | 8 + 5 files changed, 835 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_debug.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 4c48f55..57253e4 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -16,6 +16,7 @@ sources = files('roc_dev.c', 'roc_mbox.c', 'roc_model.c', 'roc_nix.c', + 'roc_nix_debug.c', 'roc_nix_irq.c', 'roc_nix_mac.c', 'roc_nix_mcast.c', diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 137889a..048a536 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -227,6 +227,16 @@ int __roc_api roc_nix_lf_free(struct roc_nix *roc_nix); int __roc_api roc_nix_lf_inl_ipsec_cfg(struct roc_nix *roc_nix, struct roc_nix_ipsec_cfg *cfg, bool enb); +/* Debug */ +int __roc_api roc_nix_lf_get_reg_count(struct roc_nix *roc_nix); +int __roc_api roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data); +int __roc_api roc_nix_queues_ctx_dump(struct roc_nix *roc_nix); +void __roc_api roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq); +void __roc_api roc_nix_rq_dump(struct roc_nix_rq *rq); +void __roc_api roc_nix_cq_dump(struct roc_nix_cq *cq); +void __roc_api roc_nix_sq_dump(struct roc_nix_sq *sq); +void __roc_api roc_nix_dump(struct roc_nix *roc_nix); + /* IRQ */ void __roc_api roc_nix_rx_queue_intr_enable(struct roc_nix *roc_nix, uint16_t rxq_id); diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c new file mode 100644 index 0000000..00712d5 --- /dev/null +++ b/drivers/common/cnxk/roc_nix_debug.c @@ -0,0 +1,805 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__) +#define NIX_REG_INFO(reg) \ + { \ + reg, #reg \ + } +#define NIX_REG_NAME_SZ 48 + +#define nix_dump_no_nl(fmt, ...) fprintf(stderr, fmt, ##__VA_ARGS__) + +struct nix_lf_reg_info { + uint32_t offset; + const char *name; +}; + +static const struct nix_lf_reg_info nix_lf_reg[] = { + NIX_REG_INFO(NIX_LF_RX_SECRETX(0)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(1)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(2)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(3)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(4)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(5)), + NIX_REG_INFO(NIX_LF_CFG), + NIX_REG_INFO(NIX_LF_GINT), + NIX_REG_INFO(NIX_LF_GINT_W1S), + NIX_REG_INFO(NIX_LF_GINT_ENA_W1C), + NIX_REG_INFO(NIX_LF_GINT_ENA_W1S), + NIX_REG_INFO(NIX_LF_ERR_INT), + NIX_REG_INFO(NIX_LF_ERR_INT_W1S), + NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C), + NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S), + NIX_REG_INFO(NIX_LF_RAS), + NIX_REG_INFO(NIX_LF_RAS_W1S), + NIX_REG_INFO(NIX_LF_RAS_ENA_W1C), + NIX_REG_INFO(NIX_LF_RAS_ENA_W1S), + NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG), + NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG), + NIX_REG_INFO(NIX_LF_SEND_ERR_DBG), +}; + +int +roc_nix_lf_get_reg_count(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + int reg_count; + + if (roc_nix == NULL) + return NIX_ERR_PARAM; + + reg_count = PLT_DIM(nix_lf_reg); + /* NIX_LF_TX_STATX */ + reg_count += nix->lf_tx_stats; + /* NIX_LF_RX_STATX */ + reg_count += nix->lf_rx_stats; + /* NIX_LF_QINTX_CNT*/ + reg_count += nix->qints; + /* NIX_LF_QINTX_INT */ + reg_count += nix->qints; + /* NIX_LF_QINTX_ENA_W1S */ + reg_count += nix->qints; + /* NIX_LF_QINTX_ENA_W1C */ + reg_count += nix->qints; + /* NIX_LF_CINTX_CNT */ + reg_count += nix->cints; + /* NIX_LF_CINTX_WAIT */ + reg_count += nix->cints; + /* NIX_LF_CINTX_INT */ + reg_count += nix->cints; + /* NIX_LF_CINTX_INT_W1S */ + reg_count += nix->cints; + /* NIX_LF_CINTX_ENA_W1S */ + reg_count += 
nix->cints; + /* NIX_LF_CINTX_ENA_W1C */ + reg_count += nix->cints; + + return reg_count; +} + +int +roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uintptr_t nix_lf_base; + bool dump_stdout; + uint64_t reg; + uint32_t i; + + if (roc_nix == NULL) + return NIX_ERR_PARAM; + + nix_lf_base = nix->base; + dump_stdout = data ? 0 : 1; + + for (i = 0; i < PLT_DIM(nix_lf_reg); i++) { + reg = plt_read64(nix_lf_base + nix_lf_reg[i].offset); + if (dump_stdout && reg) + nix_dump("%32s = 0x%" PRIx64, nix_lf_reg[i].name, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_TX_STATX */ + for (i = 0; i < nix->lf_tx_stats; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_TX_STATX(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_TX_STATX", i, + reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_RX_STATX */ + for (i = 0; i < nix->lf_rx_stats; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_RX_STATX(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_RX_STATX", i, + reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_CNT */ + for (i = 0; i < nix->qints; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_QINTX_CNT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_CNT", i, + reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_INT */ + for (i = 0; i < nix->qints; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_QINTX_INT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_INT", i, + reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_ENA_W1S */ + for (i = 0; i < nix->qints; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_ENA_W1S", + i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_ENA_W1C */ + for (i = 0; i < nix->qints; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i)); + if (dump_stdout && reg) + 
nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_ENA_W1C", + i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_CNT */ + for (i = 0; i < nix->cints; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_CINTX_CNT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_CNT", i, + reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_WAIT */ + for (i = 0; i < nix->cints; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_WAIT", i, + reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_INT */ + for (i = 0; i < nix->cints; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_CINTX_INT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_INT", i, + reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_INT_W1S */ + for (i = 0; i < nix->cints; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_INT_W1S", + i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_ENA_W1S */ + for (i = 0; i < nix->cints; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_ENA_W1S", + i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_ENA_W1C */ + for (i = 0; i < nix->cints; i++) { + reg = plt_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_ENA_W1C", + i, reg); + if (data) + *data++ = reg; + } + return 0; +} + +static int +nix_q_ctx_get(struct mbox *mbox, uint8_t ctype, uint16_t qid, __io void **ctx_p) +{ + int rc; + + if (roc_model_is_cn9k()) { + struct nix_aq_enq_rsp *rsp; + struct nix_aq_enq_req *aq; + int rc; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = ctype; + aq->op = NIX_AQ_INSTOP_READ; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + 
return rc; + if (ctype == NIX_AQ_CTYPE_RQ) + *ctx_p = &rsp->rq; + else if (ctype == NIX_AQ_CTYPE_SQ) + *ctx_p = &rsp->sq; + else + *ctx_p = &rsp->cq; + } else { + struct nix_cn10k_aq_enq_rsp *rsp; + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = ctype; + aq->op = NIX_AQ_INSTOP_READ; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (ctype == NIX_AQ_CTYPE_RQ) + *ctx_p = &rsp->rq; + else if (ctype == NIX_AQ_CTYPE_SQ) + *ctx_p = &rsp->sq; + else + *ctx_p = &rsp->cq; + } + return 0; +} + +static inline void +nix_cn9k_lf_sq_dump(__io struct nix_sq_ctx_s *ctx, uint32_t *sqb_aura_p) +{ + nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d", + ctx->sqe_way_mask, ctx->cq); + nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x", + ctx->sdp_mcast, ctx->substream); + nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n", ctx->qint_idx, + ctx->ena); + + nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d", + ctx->sqb_count, ctx->default_chan); + nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d", + ctx->smq_rr_quantum, ctx->sso_ena); + nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n", + ctx->xoff, ctx->cq_ena, ctx->smq); + + nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d", + ctx->sqe_stype, ctx->sq_int_ena); + nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d", ctx->sq_int, + ctx->sqb_aura); + nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count); + + nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d", + ctx->smq_next_sq_vld, ctx->smq_pend); + nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d", + ctx->smenq_next_sqb_vld, ctx->head_offset); + nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d", + ctx->smenq_offset, ctx->tail_offset); + nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d", + ctx->smq_lso_segnum, ctx->smq_next_sq); + nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d", 
ctx->mnq_dis, + ctx->lmt_dis); + nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n", + ctx->cq_limit, ctx->max_sqe_size); + + nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb); + nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb); + nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb); + nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb); + nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb); + + nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d", + ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena); + nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d", + ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps); + nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d", + ctx->vfi_lso_sb, ctx->vfi_lso_sizem1); + nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total); + + nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "", + (uint64_t)ctx->scm_lso_rem); + nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs); + nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts); + nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "", + (uint64_t)ctx->drop_octs); + nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "", + (uint64_t)ctx->drop_pkts); + + *sqb_aura_p = ctx->sqb_aura; +} + +static inline void +nix_lf_sq_dump(__io struct nix_cn10k_sq_ctx_s *ctx, uint32_t *sqb_aura_p) +{ + nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d", + ctx->sqe_way_mask, ctx->cq); + nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x", + ctx->sdp_mcast, ctx->substream); + nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n", ctx->qint_idx, + ctx->ena); + + nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d", + ctx->sqb_count, ctx->default_chan); + nix_dump("W1: smq_rr_weight \t\t%d\nW1: sso_ena \t\t\t%d", + ctx->smq_rr_weight, ctx->sso_ena); + nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n", + ctx->xoff, ctx->cq_ena, ctx->smq); + + nix_dump("W2: sqe_stype \t\t\t%d\nW2: 
sq_int_ena \t\t\t%d", + ctx->sqe_stype, ctx->sq_int_ena); + nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d", ctx->sq_int, + ctx->sqb_aura); + nix_dump("W2: smq_rr_count[ub:lb] \t\t%x:%x\n", ctx->smq_rr_count_ub, + ctx->smq_rr_count_lb); + + nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d", + ctx->smq_next_sq_vld, ctx->smq_pend); + nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d", + ctx->smenq_next_sqb_vld, ctx->head_offset); + nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d", + ctx->smenq_offset, ctx->tail_offset); + nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d", + ctx->smq_lso_segnum, ctx->smq_next_sq); + nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d", ctx->mnq_dis, + ctx->lmt_dis); + nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n", + ctx->cq_limit, ctx->max_sqe_size); + + nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb); + nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb); + nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb); + nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb); + nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb); + + nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d", + ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena); + nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d", + ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps); + nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d", + ctx->vfi_lso_sb, ctx->vfi_lso_sizem1); + nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total); + + nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "", + (uint64_t)ctx->scm_lso_rem); + nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs); + nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts); + nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "", + (uint64_t)ctx->drop_octs); + nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "", + (uint64_t)ctx->drop_pkts); + + *sqb_aura_p = ctx->sqb_aura; +} + 
+static inline void +nix_cn9k_lf_rq_dump(__io struct nix_rq_ctx_s *ctx) +{ + nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x", + ctx->wqe_aura, ctx->substream); + nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d", ctx->cq, + ctx->ena_wqwd); + nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d", + ctx->ipsech_ena, ctx->sso_ena); + nix_dump("W0: ena \t\t\t%d\n", ctx->ena); + + nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d", + ctx->lpb_drop_ena, ctx->spb_drop_ena); + nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d", + ctx->xqe_drop_ena, ctx->wqe_caching); + nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d", + ctx->pb_caching, ctx->sso_tt); + nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d", ctx->sso_grp, + ctx->lpb_aura); + nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura); + + nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d", + ctx->xqe_hdr_split, ctx->xqe_imm_copy); + nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d", + ctx->xqe_imm_size, ctx->later_skip); + nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d", + ctx->first_skip, ctx->lpb_sizem1); + nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d", ctx->spb_ena, + ctx->wqe_skip); + nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1); + + nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d", + ctx->spb_pool_pass, ctx->spb_pool_drop); + nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d", + ctx->spb_aura_pass, ctx->spb_aura_drop); + nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d", + ctx->wqe_pool_pass, ctx->wqe_pool_drop); + nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n", + ctx->xqe_pass, ctx->xqe_drop); + + nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d", + ctx->qint_idx, ctx->rq_int_ena); + nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d", ctx->rq_int, + ctx->lpb_pool_pass); + nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d", + ctx->lpb_pool_drop, 
ctx->lpb_aura_pass); + nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop); + + nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d", + ctx->flow_tagw, ctx->bad_utag); + nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n", ctx->good_utag, + ctx->ltag); + + nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs); + nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts); + nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs); + nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts); + nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts); +} + +static inline void +nix_lf_rq_dump(__io struct nix_cn10k_rq_ctx_s *ctx) +{ + nix_dump("W0: wqe_aura \t\t\t%d\nW0: len_ol3_dis \t\t\t%d", + ctx->wqe_aura, ctx->len_ol3_dis); + nix_dump("W0: len_ol4_dis \t\t\t%d\nW0: len_il3_dis \t\t\t%d", + ctx->len_ol4_dis, ctx->len_il3_dis); + nix_dump("W0: len_il4_dis \t\t\t%d\nW0: csum_ol4_dis \t\t\t%d", + ctx->len_il4_dis, ctx->csum_ol4_dis); + nix_dump("W0: csum_ol3_dis \t\t\t%d\nW0: lenerr_dis \t\t\t%d", + ctx->csum_ol3_dis, ctx->lenerr_dis); + nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d", ctx->cq, + ctx->ena_wqwd); + nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d", + ctx->ipsech_ena, ctx->sso_ena); + nix_dump("W0: ena \t\t\t%d\n", ctx->ena); + + nix_dump("W1: chi_ena \t\t%d\nW1: ipsecd_drop_en \t\t%d", ctx->chi_ena, + ctx->ipsecd_drop_en); + nix_dump("W1: pb_stashing \t\t\t%d", ctx->pb_stashing); + nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d", + ctx->lpb_drop_ena, ctx->spb_drop_ena); + nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d", + ctx->xqe_drop_ena, ctx->wqe_caching); + nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d", + ctx->pb_caching, ctx->sso_tt); + nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d", ctx->sso_grp, + ctx->lpb_aura); + nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura); + + nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d", + ctx->xqe_hdr_split, ctx->xqe_imm_copy); + nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d", + ctx->xqe_imm_size, ctx->later_skip); + nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d", + ctx->first_skip, ctx->lpb_sizem1); + nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d", ctx->spb_ena, + ctx->wqe_skip); + nix_dump("W2: spb_sizem1 \t\t\t%d\nW2: policer_ena \t\t\t%d", + ctx->spb_sizem1, ctx->policer_ena); + nix_dump("W2: band_prof_id \t\t\t%d", ctx->band_prof_id); + + nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d", + ctx->spb_pool_pass, ctx->spb_pool_drop); + nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d", + ctx->spb_aura_pass, ctx->spb_aura_drop); + nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d", + ctx->wqe_pool_pass, ctx->wqe_pool_drop); + nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n", + ctx->xqe_pass, ctx->xqe_drop); + + nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d", + ctx->qint_idx, ctx->rq_int_ena); + nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d", ctx->rq_int, + ctx->lpb_pool_pass); + nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d", + ctx->lpb_pool_drop, ctx->lpb_aura_pass); + nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop); + + nix_dump("W5: vwqe_skip \t\t\t%d\nW5: max_vsize_exp \t\t\t%d", + ctx->vwqe_skip, ctx->max_vsize_exp); + nix_dump("W5: vtime_wait \t\t\t%d\nW5: vwqe_ena \t\t\t%d", + ctx->vtime_wait, ctx->vwqe_ena); + nix_dump("W5: ipsec_vwqe \t\t\t%d", ctx->ipsec_vwqe); + nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d", + ctx->flow_tagw, ctx->bad_utag); + nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n", ctx->good_utag, + ctx->ltag); + + nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs); + nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts); + nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs); + nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", 
(uint64_t)ctx->drop_pkts); + nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts); +} + +static inline void +nix_lf_cq_dump(__io struct nix_cq_ctx_s *ctx) +{ + nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base); + + nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr); + nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d", ctx->avg_con, + ctx->cint_idx); + nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d", ctx->cq_err, + ctx->qint_idx); + nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n", ctx->bpid, + ctx->bp_ena); + + nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d", + ctx->update_time, ctx->avg_level); + nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n", ctx->head, + ctx->tail); + + nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d", + ctx->cq_err_int_ena, ctx->cq_err_int); + nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d", ctx->qsize, + ctx->caching); + nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d", ctx->substream, + ctx->ena); + nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d", ctx->drop_ena, + ctx->drop); + nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp); +} + +int +roc_nix_queues_ctx_dump(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + int rc = -1, q, rq = nix->nb_rx_queues; + struct mbox *mbox = (&nix->dev)->mbox; + struct npa_aq_enq_rsp *npa_rsp; + struct npa_aq_enq_req *npa_aq; + volatile void *ctx; + int sq = nix->nb_tx_queues; + struct npa_lf *npa_lf; + uint32_t sqb_aura; + + npa_lf = idev_npa_obj_get(); + if (npa_lf == NULL) + return NPA_ERR_DEVICE_NOT_BOUNDED; + + for (q = 0; q < rq; q++) { + rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_CQ, q, &ctx); + if (rc) { + plt_err("Failed to get cq context"); + goto fail; + } + nix_dump("============== port=%d cq=%d ===============", + roc_nix->port_id, q); + nix_lf_cq_dump(ctx); + } + + for (q = 0; q < rq; q++) { + rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_RQ, q, &ctx); + if (rc) { + plt_err("Failed to get rq context"); + 
goto fail; + } + nix_dump("============== port=%d rq=%d ===============", + roc_nix->port_id, q); + if (roc_model_is_cn9k()) + nix_cn9k_lf_rq_dump(ctx); + else + nix_lf_rq_dump(ctx); + } + + for (q = 0; q < sq; q++) { + rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_SQ, q, &ctx); + if (rc) { + plt_err("Failed to get sq context"); + goto fail; + } + nix_dump("============== port=%d sq=%d ===============", + roc_nix->port_id, q); + if (roc_model_is_cn9k()) + nix_cn9k_lf_sq_dump(ctx, &sqb_aura); + else + nix_lf_sq_dump(ctx, &sqb_aura); + + if (!npa_lf) { + plt_err("NPA LF does not exist"); + continue; + } + + /* Dump SQB Aura minimal info */ + npa_aq = mbox_alloc_msg_npa_aq_enq(npa_lf->mbox); + if (npa_aq == NULL) + return -ENOSPC; + npa_aq->aura_id = sqb_aura; + npa_aq->ctype = NPA_AQ_CTYPE_AURA; + npa_aq->op = NPA_AQ_INSTOP_READ; + + rc = mbox_process_msg(npa_lf->mbox, (void *)&npa_rsp); + if (rc) { + plt_err("Failed to get sq's sqb_aura context"); + continue; + } + + nix_dump("\nSQB Aura W0: Pool addr\t\t0x%" PRIx64 "", + npa_rsp->aura.pool_addr); + nix_dump("SQB Aura W1: ena\t\t\t%d", npa_rsp->aura.ena); + nix_dump("SQB Aura W2: count\t\t%" PRIx64 "", + (uint64_t)npa_rsp->aura.count); + nix_dump("SQB Aura W3: limit\t\t%" PRIx64 "", + (uint64_t)npa_rsp->aura.limit); + nix_dump("SQB Aura W3: fc_ena\t\t%d", npa_rsp->aura.fc_ena); + nix_dump("SQB Aura W4: fc_addr\t\t0x%" PRIx64 "\n", + npa_rsp->aura.fc_addr); + } + +fail: + return rc; +} + +/* Dumps struct nix_cqe_hdr_s and union nix_rx_parse_u */ +void +roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq) +{ + const union nix_rx_parse_u *rx = + (const union nix_rx_parse_u *)((const uint64_t *)cq + 1); + + nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d", + cq->tag, cq->q, cq->node, cq->cqe_type); + + nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d", rx->chan, + rx->desc_sizem1); + nix_dump("W0: imm_copy \t%d\t\texpress \t%d", rx->imm_copy, + rx->express); + nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d", 
rx->wqwd, + rx->errlev, rx->errcode); + nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d", + rx->latype, rx->lbtype, rx->lctype); + nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d", + rx->ldtype, rx->letype, rx->lftype); + nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d", rx->lgtype, rx->lhtype); + + nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1); + nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d", + rx->l2m, rx->l2b, rx->l3m, rx->l3b); + nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d", rx->vtag0_valid, + rx->vtag0_gone); + nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d", rx->vtag1_valid, + rx->vtag1_gone); + nix_dump("W1: pkind \t%d", rx->pkind); + nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d", rx->vtag0_tci, + rx->vtag1_tci); + + nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d", + rx->laflags, rx->lbflags, rx->lcflags); + nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d", + rx->ldflags, rx->leflags, rx->lfflags); + nix_dump("W2: lgflags \t%d\t\tlhflags \t%d", rx->lgflags, rx->lhflags); + + nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d", + rx->eoh_ptr, rx->wqe_aura, rx->pb_aura); + nix_dump("W3: match_id \t%d", rx->match_id); + + nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d", rx->laptr, + rx->lbptr, rx->lcptr); + nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d", rx->ldptr, + rx->leptr, rx->lfptr); + nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr); + + nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d", + rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg); +} + +void +roc_nix_rq_dump(struct roc_nix_rq *rq) +{ + nix_dump("nix_rq@%p", rq); + nix_dump(" qid = %d", rq->qid); + nix_dump(" aura_handle = 0x%" PRIx64 "", rq->aura_handle); + nix_dump(" ipsec_ena = %d", rq->ipsech_ena); + nix_dump(" first_skip = %d", rq->first_skip); + nix_dump(" later_skip = %d", rq->later_skip); + nix_dump(" lpb_size = %d", rq->lpb_size); + nix_dump(" sso_ena = 
%d", rq->sso_ena); + nix_dump(" tag_mask = %d", rq->tag_mask); + nix_dump(" flow_tag_width = %d", rq->flow_tag_width); + nix_dump(" tt = %d", rq->tt); + nix_dump(" hwgrp = %d", rq->hwgrp); + nix_dump(" vwqe_ena = %d", rq->vwqe_ena); + nix_dump(" vwqe_first_skip = %d", rq->vwqe_first_skip); + nix_dump(" vwqe_max_sz_exp = %d", rq->vwqe_max_sz_exp); + nix_dump(" vwqe_wait_tmo = %ld", rq->vwqe_wait_tmo); + nix_dump(" vwqe_aura_handle = %ld", rq->vwqe_aura_handle); + nix_dump(" roc_nix = %p", rq->roc_nix); +} + +void +roc_nix_cq_dump(struct roc_nix_cq *cq) +{ + nix_dump("nix_cq@%p", cq); + nix_dump(" qid = %d", cq->qid); + nix_dump(" qnb_desc = %d", cq->nb_desc); + nix_dump(" roc_nix = %p", cq->roc_nix); + nix_dump(" door = 0x%" PRIx64 "", cq->door); + nix_dump(" status = %p", cq->status); + nix_dump(" wdata = 0x%" PRIx64 "", cq->wdata); + nix_dump(" desc_base = %p", cq->desc_base); + nix_dump(" qmask = 0x%" PRIx32 "", cq->qmask); +} + +void +roc_nix_sq_dump(struct roc_nix_sq *sq) +{ + nix_dump("nix_sq@%p", sq); + nix_dump(" qid = %d", sq->qid); + nix_dump(" max_sqe_sz = %d", sq->max_sqe_sz); + nix_dump(" nb_desc = %d", sq->nb_desc); + nix_dump(" sqes_per_sqb_log2 = %d", sq->sqes_per_sqb_log2); + nix_dump(" roc_nix= %p", sq->roc_nix); + nix_dump(" aura_handle = 0x%" PRIx64 "", sq->aura_handle); + nix_dump(" nb_sqb_bufs_adj = %d", sq->nb_sqb_bufs_adj); + nix_dump(" nb_sqb_bufs = %d", sq->nb_sqb_bufs); + nix_dump(" io_addr = 0x%" PRIx64 "", sq->io_addr); + nix_dump(" lmt_addr = %p", sq->lmt_addr); + nix_dump(" sqe_mem = %p", sq->sqe_mem); + nix_dump(" fc = %p", sq->fc); +}; + +void +roc_nix_dump(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + nix_dump("nix@%p", nix); + nix_dump(" pf = %d", dev_get_pf(dev->pf_func)); + nix_dump(" vf = %d", dev_get_vf(dev->pf_func)); + nix_dump(" bar2 = 0x%" PRIx64, dev->bar2); + nix_dump(" bar4 = 0x%" PRIx64, dev->bar4); + nix_dump(" port_id = %d", roc_nix->port_id); + 
nix_dump(" rss_tag_as_xor = %d", roc_nix->rss_tag_as_xor); + nix_dump(" rss_tag_as_xor = %d", roc_nix->max_sqb_count); + + nix_dump(" \tpci_dev = %p", nix->pci_dev); + nix_dump(" \tbase = 0x%" PRIxPTR "", nix->base); + nix_dump(" \tlmt_base = 0x%" PRIxPTR "", nix->lmt_base); + nix_dump(" \treta_size = %d", nix->reta_sz); + nix_dump(" \ttx_chan_base = %d", nix->tx_chan_base); + nix_dump(" \trx_chan_base = %d", nix->rx_chan_base); + nix_dump(" \tnb_rx_queues = %d", nix->nb_rx_queues); + nix_dump(" \tnb_tx_queues = %d", nix->nb_tx_queues); + nix_dump(" \tlso_tsov6_idx = %d", nix->lso_tsov6_idx); + nix_dump(" \tlso_tsov4_idx = %d", nix->lso_tsov4_idx); + nix_dump(" \tlf_rx_stats = %d", nix->lf_rx_stats); + nix_dump(" \tlf_tx_stats = %d", nix->lf_tx_stats); + nix_dump(" \trx_chan_cnt = %d", nix->rx_chan_cnt); + nix_dump(" \ttx_chan_cnt = %d", nix->tx_chan_cnt); + nix_dump(" \tcgx_links = %d", nix->cgx_links); + nix_dump(" \tlbk_links = %d", nix->lbk_links); + nix_dump(" \tsdp_links = %d", nix->sdp_links); + nix_dump(" \ttx_link = %d", nix->tx_link); + nix_dump(" \tsqb_size = %d", nix->sqb_size); + nix_dump(" \tmsixoff = %d", nix->msixoff); + nix_dump(" \tcints = %d", nix->cints); + nix_dump(" \tqints = %d", nix->qints); + nix_dump(" \tsdp_link = %d", nix->sdp_link); + nix_dump(" \tptp_en = %d", nix->ptp_en); + nix_dump(" \trss_alg_idx = %d", nix->rss_alg_idx); + nix_dump(" \ttx_pause = %d", nix->tx_pause); +} diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c index 79f25b0..32be64a 100644 --- a/drivers/common/cnxk/roc_nix_irq.c +++ b/drivers/common/cnxk/roc_nix_irq.c @@ -74,6 +74,9 @@ nix_lf_err_irq(void *param) /* Clear interrupt */ plt_write64(intr, nix->base + NIX_LF_ERR_INT); + /* Dump registers to std out */ + roc_nix_lf_reg_dump(nix_priv_to_roc_nix(nix), NULL); + roc_nix_queues_ctx_dump(nix_priv_to_roc_nix(nix)); } static int @@ -119,6 +122,10 @@ nix_lf_ras_irq(void *param) plt_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, 
dev->pf, dev->vf); /* Clear interrupt */ plt_write64(intr, nix->base + NIX_LF_RAS); + + /* Dump registers to std out */ + roc_nix_lf_reg_dump(nix_priv_to_roc_nix(nix), NULL); + roc_nix_queues_ctx_dump(nix_priv_to_roc_nix(nix)); } static int @@ -279,6 +286,10 @@ nix_lf_q_irq(void *param) /* Clear interrupt */ plt_write64(intr, nix->base + NIX_LF_QINTX_INT(qintx)); + + /* Dump registers to std out */ + roc_nix_lf_reg_dump(nix_priv_to_roc_nix(nix), NULL); + roc_nix_queues_ctx_dump(nix_priv_to_roc_nix(nix)); } int diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 85b8393..05f4314 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -13,10 +13,13 @@ INTERNAL { roc_idev_npa_nix_get; roc_idev_num_lmtlines_get; roc_model; + roc_nix_cq_dump; roc_nix_cq_fini; roc_nix_cq_init; + roc_nix_cqe_dump; roc_nix_dev_fini; roc_nix_dev_init; + roc_nix_dump; roc_nix_err_intr_ena_dis; roc_nix_get_base_chan; roc_nix_get_pf; @@ -30,6 +33,8 @@ INTERNAL { roc_nix_lf_alloc; roc_nix_lf_inl_ipsec_cfg; roc_nix_lf_free; + roc_nix_lf_get_reg_count; + roc_nix_lf_reg_dump; roc_nix_mac_addr_add; roc_nix_mac_addr_del; roc_nix_mac_addr_set; @@ -61,9 +66,11 @@ INTERNAL { roc_nix_ptp_rx_ena_dis; roc_nix_ptp_sync_time_adjust; roc_nix_ptp_tx_ena_dis; + roc_nix_queues_ctx_dump; roc_nix_ras_intr_ena_dis; roc_nix_register_cq_irqs; roc_nix_register_queue_irqs; + roc_nix_rq_dump; roc_nix_rq_ena_dis; roc_nix_rq_fini; roc_nix_rq_init; @@ -77,6 +84,7 @@ INTERNAL { roc_nix_rss_reta_set; roc_nix_rx_queue_intr_disable; roc_nix_rx_queue_intr_enable; + roc_nix_sq_dump; roc_nix_sq_fini; roc_nix_sq_init; roc_nix_stats_get; From patchwork Thu Apr 1 12:37:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90402 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: 
from mails.dpdk.org
From: Nithin 
Dabilpuram
Date: Thu, 1 Apr 2021 18:07:54 +0530
Message-ID: <20210401123817.14348-30-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 29/52] common/cnxk: add VLAN filter support

From: Sunil Kumar Kori

Add helper API to support VLAN filtering and stripping on Rx and VLAN insertion on Tx. 
Signed-off-by: Sunil Kumar Kori --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_nix.h | 45 ++++++++ drivers/common/cnxk/roc_nix_vlan.c | 205 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 8 ++ 4 files changed, 259 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_vlan.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 57253e4..f975681 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -25,6 +25,7 @@ sources = files('roc_dev.c', 'roc_nix_queue.c', 'roc_nix_rss.c', 'roc_nix_stats.c', + 'roc_nix_vlan.c', 'roc_npa.c', 'roc_npa_debug.c', 'roc_npa_irq.c', diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 048a536..ba3575b 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -17,6 +17,26 @@ enum roc_nix_sq_max_sqe_sz { roc_nix_maxsqesz_w8 = NIX_MAXSQESZ_W8, }; +enum roc_nix_vlan_type { + ROC_NIX_VLAN_TYPE_INNER = 0x01, + ROC_NIX_VLAN_TYPE_OUTER = 0x02, +}; + +struct roc_nix_vlan_config { + uint32_t type; + union { + struct { + uint32_t vtag_inner; + uint32_t vtag_outer; + } vlan; + + struct { + int idx_inner; + int idx_outer; + } mcam; + }; +}; + /* Range to adjust PTP frequency. 
Valid range is * (-ROC_NIX_PTP_FREQ_ADJUST, ROC_NIX_PTP_FREQ_ADJUST) */ @@ -341,6 +361,31 @@ int __roc_api roc_nix_ptp_info_cb_register(struct roc_nix *roc_nix, ptp_info_update_t ptp_update); void __roc_api roc_nix_ptp_info_cb_unregister(struct roc_nix *roc_nix); +/* VLAN */ +int __roc_api +roc_nix_vlan_mcam_entry_read(struct roc_nix *roc_nix, uint32_t index, + struct npc_mcam_read_entry_rsp **rsp); +int __roc_api roc_nix_vlan_mcam_entry_write(struct roc_nix *roc_nix, + uint32_t index, + struct mcam_entry *entry, + uint8_t intf, uint8_t enable); +int __roc_api roc_nix_vlan_mcam_entry_alloc_and_write(struct roc_nix *roc_nix, + struct mcam_entry *entry, + uint8_t intf, + uint8_t priority, + uint8_t ref_entry); +int __roc_api roc_nix_vlan_mcam_entry_free(struct roc_nix *roc_nix, + uint32_t index); +int __roc_api roc_nix_vlan_mcam_entry_ena_dis(struct roc_nix *roc_nix, + uint32_t index, const int enable); +int __roc_api roc_nix_vlan_strip_vtag_ena_dis(struct roc_nix *roc_nix, + bool enable); +int __roc_api roc_nix_vlan_insert_ena_dis(struct roc_nix *roc_nix, + struct roc_nix_vlan_config *vlan_cfg, + uint64_t *mcam_index, bool enable); +int __roc_api roc_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, + uint16_t tpid); + /* MCAST*/ int __roc_api roc_nix_mcast_mcam_entry_alloc(struct roc_nix *roc_nix, uint16_t nb_entries, diff --git a/drivers/common/cnxk/roc_nix_vlan.c b/drivers/common/cnxk/roc_nix_vlan.c new file mode 100644 index 0000000..66bf8ad --- /dev/null +++ b/drivers/common/cnxk/roc_nix_vlan.c @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +static inline struct mbox * +get_mbox(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + return dev->mbox; +} + +int +roc_nix_vlan_mcam_entry_read(struct roc_nix *roc_nix, uint32_t index, + struct npc_mcam_read_entry_rsp **rsp) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct npc_mcam_read_entry_req *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_read_entry(mbox); + if (req == NULL) + return rc; + req->entry = index; + + return mbox_process_msg(mbox, (void **)rsp); +} + +int +roc_nix_vlan_mcam_entry_write(struct roc_nix *roc_nix, uint32_t index, + struct mcam_entry *entry, uint8_t intf, + uint8_t enable) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct npc_mcam_write_entry_req *req; + struct msghdr *rsp; + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_write_entry(mbox); + if (req == NULL) + return rc; + req->entry = index; + req->intf = intf; + req->enable_entry = enable; + mbox_memcpy(&req->entry_data, entry, sizeof(struct mcam_entry)); + + return mbox_process_msg(mbox, (void *)&rsp); +} + +int +roc_nix_vlan_mcam_entry_alloc_and_write(struct roc_nix *roc_nix, + struct mcam_entry *entry, uint8_t intf, + uint8_t priority, uint8_t ref_entry) +{ + struct npc_mcam_alloc_and_write_entry_req *req; + struct npc_mcam_alloc_and_write_entry_rsp *rsp; + struct mbox *mbox = get_mbox(roc_nix); + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox); + if (req == NULL) + return rc; + req->priority = priority; + req->ref_entry = ref_entry; + req->intf = intf; + req->enable_entry = true; + mbox_memcpy(&req->entry_data, entry, sizeof(struct mcam_entry)); + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + return rsp->entry; +} + +int +roc_nix_vlan_mcam_entry_free(struct roc_nix *roc_nix, uint32_t index) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct npc_mcam_free_entry_req *req; + int rc 
= -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_free_entry(mbox); + if (req == NULL) + return rc; + req->entry = index; + + return mbox_process_msg(mbox, NULL); +} + +int +roc_nix_vlan_mcam_entry_ena_dis(struct roc_nix *roc_nix, uint32_t index, + const int enable) +{ + struct npc_mcam_ena_dis_entry_req *req; + struct mbox *mbox = get_mbox(roc_nix); + int rc = -ENOSPC; + + if (enable) { + req = mbox_alloc_msg_npc_mcam_ena_entry(mbox); + if (req == NULL) + return rc; + } else { + req = mbox_alloc_msg_npc_mcam_dis_entry(mbox); + if (req == NULL) + return rc; + } + + req->entry = index; + return mbox_process_msg(mbox, NULL); +} + +int +roc_nix_vlan_strip_vtag_ena_dis(struct roc_nix *roc_nix, bool enable) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct nix_vtag_config *vtag_cfg; + int rc = -ENOSPC; + + vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox); + if (vtag_cfg == NULL) + return rc; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + vtag_cfg->cfg_type = 1; /* Rx VLAN configuration */ + vtag_cfg->rx.capture_vtag = 1; /* Always capture */ + vtag_cfg->rx.vtag_type = 0; /* Use index 0 */ + + if (enable) + vtag_cfg->rx.strip_vtag = 1; + else + vtag_cfg->rx.strip_vtag = 0; + + return mbox_process(mbox); +} + +int +roc_nix_vlan_insert_ena_dis(struct roc_nix *roc_nix, + struct roc_nix_vlan_config *vlan_cfg, + uint64_t *mcam_index, bool enable) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct nix_vtag_config *vtag_cfg; + struct nix_vtag_config_rsp *rsp; + int rc = -ENOSPC; + + vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox); + if (vtag_cfg == NULL) + return rc; + vtag_cfg->cfg_type = 0; /* Tx VLAN configuration */ + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + + if (enable) { + if (vlan_cfg->type & ROC_NIX_VLAN_TYPE_INNER) { + vtag_cfg->tx.vtag0 = vlan_cfg->vlan.vtag_inner; + vtag_cfg->tx.cfg_vtag0 = true; + } + if (vlan_cfg->type & ROC_NIX_VLAN_TYPE_OUTER) { + vtag_cfg->tx.vtag1 = vlan_cfg->vlan.vtag_outer; + vtag_cfg->tx.cfg_vtag1 = true; + } + } else { + if (vlan_cfg->type & 
ROC_NIX_VLAN_TYPE_INNER) { + vtag_cfg->tx.vtag0_idx = vlan_cfg->mcam.idx_inner; + vtag_cfg->tx.free_vtag0 = true; + } + if (vlan_cfg->type & ROC_NIX_VLAN_TYPE_OUTER) { + vtag_cfg->tx.vtag1_idx = vlan_cfg->mcam.idx_outer; + vtag_cfg->tx.free_vtag1 = true; + } + } + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (enable) + *mcam_index = + (((uint64_t)rsp->vtag1_idx << 32) | rsp->vtag0_idx); + + return 0; +} + +int +roc_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t tpid) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct nix_set_vlan_tpid *tpid_cfg; + int rc = -ENOSPC; + + tpid_cfg = mbox_alloc_msg_nix_set_vlan_tpid(mbox); + if (tpid_cfg == NULL) + return rc; + tpid_cfg->tpid = tpid; + + if (type & ROC_NIX_VLAN_TYPE_OUTER) + tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER; + else + tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER; + + return mbox_process(mbox); +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 05f4314..6eae844 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -96,6 +96,14 @@ INTERNAL { roc_nix_xstats_names_get; roc_nix_unregister_cq_irqs; roc_nix_unregister_queue_irqs; + roc_nix_vlan_insert_ena_dis; + roc_nix_vlan_mcam_entry_alloc_and_write; + roc_nix_vlan_mcam_entry_ena_dis; + roc_nix_vlan_mcam_entry_free; + roc_nix_vlan_mcam_entry_read; + roc_nix_vlan_mcam_entry_write; + roc_nix_vlan_strip_vtag_ena_dis; + roc_nix_vlan_tpid_set; roc_npa_aura_limit_modify; roc_npa_aura_op_range_set; roc_npa_ctx_dump; From patchwork Thu Apr 1 12:37:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90403 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A903EA0548; Thu, 
1 Apr 2021 14:43:14 +0200 (CEST)
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:55 +0530
Message-ID: 
<20210401123817.14348-31-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 30/52] common/cnxk: add nix flow control support

From: Sunil Kumar Kori

Add support to enable/disable Rx/Tx flow control and pause frame configuration on NIX.

Signed-off-by: Sunil Kumar Kori --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_nix.h | 34 ++++++ drivers/common/cnxk/roc_nix_fc.c | 251 +++++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 4 + 4 files changed, 290 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_fc.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index f975681..33eeb8c 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -17,6 +17,7 @@ sources = files('roc_dev.c', 'roc_model.c', 'roc_nix.c', 'roc_nix_debug.c', + 'roc_nix_fc.c', 'roc_nix_irq.c', 'roc_nix_mac.c', 'roc_nix_mcast.c', diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index ba3575b..2158f8c 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -17,6 +17,13 @@ enum roc_nix_sq_max_sqe_sz { roc_nix_maxsqesz_w8 = NIX_MAXSQESZ_W8, }; +enum roc_nix_fc_mode { + ROC_NIX_FC_NONE = 0, + ROC_NIX_FC_RX, + ROC_NIX_FC_TX, + ROC_NIX_FC_FULL +}; + enum 
roc_nix_vlan_type { ROC_NIX_VLAN_TYPE_INNER = 0x01, ROC_NIX_VLAN_TYPE_OUTER = 0x02, @@ -37,6 +44,21 @@ struct roc_nix_vlan_config { }; }; +struct roc_nix_fc_cfg { + bool cq_cfg_valid; + union { + struct { + bool enable; + } rxchan_cfg; + + struct { + uint32_t rq; + uint16_t cq_drop; + bool enable; + } cq_cfg; + }; +}; + /* Range to adjust PTP frequency. Valid range is * (-ROC_NIX_PTP_FREQ_ADJUST, ROC_NIX_PTP_FREQ_ADJUST) */ @@ -293,6 +315,18 @@ int __roc_api roc_nix_mac_link_cb_register(struct roc_nix *roc_nix, link_status_t link_update); void __roc_api roc_nix_mac_link_cb_unregister(struct roc_nix *roc_nix); +/* Flow control */ +int __roc_api roc_nix_fc_config_set(struct roc_nix *roc_nix, + struct roc_nix_fc_cfg *fc_cfg); + +int __roc_api roc_nix_fc_config_get(struct roc_nix *roc_nix, + struct roc_nix_fc_cfg *fc_cfg); + +int __roc_api roc_nix_fc_mode_set(struct roc_nix *roc_nix, + enum roc_nix_fc_mode mode); + +enum roc_nix_fc_mode __roc_api roc_nix_fc_mode_get(struct roc_nix *roc_nix); + /* NPC */ int __roc_api roc_nix_npc_promisc_ena_dis(struct roc_nix *roc_nix, int enable); diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c new file mode 100644 index 0000000..47be8aa --- /dev/null +++ b/drivers/common/cnxk/roc_nix_fc.c @@ -0,0 +1,251 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +static inline struct mbox * +get_mbox(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + return dev->mbox; +} + +static int +nix_fc_rxchan_bpid_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + if (nix->chan_cnt != 0) + fc_cfg->rxchan_cfg.enable = true; + else + fc_cfg->rxchan_cfg.enable = false; + + fc_cfg->cq_cfg_valid = false; + + return 0; +} + +static int +nix_fc_rxchan_bpid_set(struct roc_nix *roc_nix, bool enable) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = get_mbox(roc_nix); + struct nix_bp_cfg_req *req; + struct nix_bp_cfg_rsp *rsp; + int rc = -ENOSPC; + + if (roc_nix_is_sdp(roc_nix)) + return 0; + + if (enable) { + req = mbox_alloc_msg_nix_bp_enable(mbox); + if (req == NULL) + return rc; + req->chan_base = 0; + req->chan_cnt = 1; + req->bpid_per_chan = 0; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc || (req->chan_cnt != rsp->chan_cnt)) + goto exit; + + nix->bpid[0] = rsp->chan_bpid[0]; + nix->chan_cnt = rsp->chan_cnt; + } else { + req = mbox_alloc_msg_nix_bp_disable(mbox); + if (req == NULL) + return rc; + req->chan_base = 0; + req->chan_cnt = 1; + + rc = mbox_process(mbox); + if (rc) + goto exit; + + memset(nix->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN); + nix->chan_cnt = 0; + } + +exit: + return rc; +} + +static int +nix_fc_cq_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct nix_aq_enq_rsp *rsp; + int rc; + + if (roc_model_is_cn9k()) { + struct nix_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = fc_cfg->cq_cfg.rq; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_READ; + } else { + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = fc_cfg->cq_cfg.rq; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = 
NIX_AQ_INSTOP_READ; + } + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + fc_cfg->cq_cfg.cq_drop = rsp->cq.bp; + fc_cfg->cq_cfg.enable = rsp->cq.bp_ena; + fc_cfg->cq_cfg_valid = true; + +exit: + return rc; +} + +static int +nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = get_mbox(roc_nix); + + if (roc_model_is_cn9k()) { + struct nix_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = fc_cfg->cq_cfg.rq; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + if (fc_cfg->cq_cfg.enable) { + aq->cq.bpid = nix->bpid[0]; + aq->cq_mask.bpid = ~(aq->cq_mask.bpid); + aq->cq.bp = fc_cfg->cq_cfg.cq_drop; + aq->cq_mask.bp = ~(aq->cq_mask.bp); + } + + aq->cq.bp_ena = !!(fc_cfg->cq_cfg.enable); + aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena); + } else { + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = fc_cfg->cq_cfg.rq; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + if (fc_cfg->cq_cfg.enable) { + aq->cq.bpid = nix->bpid[0]; + aq->cq_mask.bpid = ~(aq->cq_mask.bpid); + aq->cq.bp = fc_cfg->cq_cfg.cq_drop; + aq->cq_mask.bp = ~(aq->cq_mask.bp); + } + + aq->cq.bp_ena = !!(fc_cfg->cq_cfg.enable); + aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena); + } + + return mbox_process(mbox); +} + +int +roc_nix_fc_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) +{ + if (roc_nix_is_vf_or_sdp(roc_nix)) + return 0; + + if (fc_cfg->cq_cfg_valid) + return nix_fc_cq_config_get(roc_nix, fc_cfg); + else + return nix_fc_rxchan_bpid_get(roc_nix, fc_cfg); +} + +int +roc_nix_fc_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) +{ + if (roc_nix_is_vf_or_sdp(roc_nix)) + return 0; + + if (fc_cfg->cq_cfg_valid) + return nix_fc_cq_config_set(roc_nix, fc_cfg); + else + return nix_fc_rxchan_bpid_set(roc_nix, + fc_cfg->rxchan_cfg.enable); +} + +enum roc_nix_fc_mode 
+roc_nix_fc_mode_get(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = get_mbox(roc_nix); + struct cgx_pause_frm_cfg *req, *rsp; + enum roc_nix_fc_mode mode; + int rc = -ENOSPC; + + if (roc_nix_is_lbk(roc_nix)) + return ROC_NIX_FC_NONE; + + req = mbox_alloc_msg_cgx_cfg_pause_frm(mbox); + if (req == NULL) + return rc; + req->set = 0; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + if (rsp->rx_pause && rsp->tx_pause) + mode = ROC_NIX_FC_FULL; + else if (rsp->rx_pause) + mode = ROC_NIX_FC_RX; + else if (rsp->tx_pause) + mode = ROC_NIX_FC_TX; + else + mode = ROC_NIX_FC_NONE; + + nix->rx_pause = rsp->rx_pause; + nix->tx_pause = rsp->tx_pause; + return mode; + +exit: + return ROC_NIX_FC_NONE; +} + +int +roc_nix_fc_mode_set(struct roc_nix *roc_nix, enum roc_nix_fc_mode mode) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = get_mbox(roc_nix); + struct cgx_pause_frm_cfg *req; + uint8_t tx_pause, rx_pause; + int rc = -ENOSPC; + + if (roc_nix_is_lbk(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + rx_pause = (mode == ROC_NIX_FC_FULL) || (mode == ROC_NIX_FC_RX); + tx_pause = (mode == ROC_NIX_FC_FULL) || (mode == ROC_NIX_FC_TX); + + req = mbox_alloc_msg_cgx_cfg_pause_frm(mbox); + if (req == NULL) + return rc; + req->set = 1; + req->rx_pause = rx_pause; + req->tx_pause = tx_pause; + + rc = mbox_process(mbox); + if (rc) + goto exit; + + nix->rx_pause = rx_pause; + nix->tx_pause = tx_pause; + +exit: + return rc; +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 6eae844..713fc0b 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -21,6 +21,10 @@ INTERNAL { roc_nix_dev_init; roc_nix_dump; roc_nix_err_intr_ena_dis; + roc_nix_fc_config_get; + roc_nix_fc_config_set; + roc_nix_fc_mode_set; + roc_nix_fc_mode_get; roc_nix_get_base_chan; roc_nix_get_pf; roc_nix_get_pf_func; From patchwork Thu Apr 1 12:37:56 2021 Content-Type: 
text/plain; charset="utf-8"
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90404
Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com 
(10.69.176.39)
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:56 +0530
Message-ID: <20210401123817.14348-32-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 31/52] common/cnxk: add nix LSO support and misc utils

From: Sunil Kumar Kori

Add support to create LSO formats for TCP segmentation offload for IPv4/IPv6, tunnel and non-tunnel protocols. Tunnel protocol support is for GRE and UDP based tunnel protocols. This patch also adds other helper APIs to retrieve EEPROM info and configure Rx for different switch headers. 
Signed-off-by: Sunil Kumar Kori --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_nix.h | 28 +++ drivers/common/cnxk/roc_nix_debug.c | 16 ++ drivers/common/cnxk/roc_nix_ops.c | 438 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix_priv.h | 2 + drivers/common/cnxk/version.map | 5 + 6 files changed, 490 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_ops.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 33eeb8c..d8514b3 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -22,6 +22,7 @@ sources = files('roc_dev.c', 'roc_nix_mac.c', 'roc_nix_mcast.c', 'roc_nix_npc.c', + 'roc_nix_ops.c', 'roc_nix_ptp.c', 'roc_nix_queue.c', 'roc_nix_rss.c', diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 2158f8c..ce8c252 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -59,6 +59,12 @@ struct roc_nix_fc_cfg { }; }; +struct roc_nix_eeprom_info { +#define ROC_NIX_EEPROM_SIZE 256 + uint16_t sff_id; + uint8_t buf[ROC_NIX_EEPROM_SIZE]; +}; + /* Range to adjust PTP frequency. 
Valid range is * (-ROC_NIX_PTP_FREQ_ADJUST, ROC_NIX_PTP_FREQ_ADJUST) */ @@ -246,6 +252,14 @@ struct roc_nix { uint8_t reserved[ROC_NIX_MEM_SZ] __plt_cache_aligned; } __plt_cache_aligned; +enum roc_nix_lso_tun_type { + ROC_NIX_LSO_TUN_V4V4, + ROC_NIX_LSO_TUN_V4V6, + ROC_NIX_LSO_TUN_V6V4, + ROC_NIX_LSO_TUN_V6V6, + ROC_NIX_LSO_TUN_MAX, +}; + /* Dev */ int __roc_api roc_nix_dev_init(struct roc_nix *roc_nix); int __roc_api roc_nix_dev_fini(struct roc_nix *roc_nix); @@ -315,6 +329,20 @@ int __roc_api roc_nix_mac_link_cb_register(struct roc_nix *roc_nix, link_status_t link_update); void __roc_api roc_nix_mac_link_cb_unregister(struct roc_nix *roc_nix); +/* Ops */ +int __roc_api roc_nix_switch_hdr_set(struct roc_nix *roc_nix, + uint64_t switch_header_type); +int __roc_api roc_nix_lso_fmt_setup(struct roc_nix *roc_nix); +int __roc_api roc_nix_lso_fmt_get(struct roc_nix *roc_nix, + uint8_t udp_tun[ROC_NIX_LSO_TUN_MAX], + uint8_t tun[ROC_NIX_LSO_TUN_MAX]); +int __roc_api roc_nix_lso_custom_fmt_setup(struct roc_nix *roc_nix, + struct nix_lso_format *fields, + uint16_t nb_fields); + +int __roc_api roc_nix_eeprom_info_get(struct roc_nix *roc_nix, + struct roc_nix_eeprom_info *info); + /* Flow control */ int __roc_api roc_nix_fc_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg); diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c index 00712d5..a0cf98e 100644 --- a/drivers/common/cnxk/roc_nix_debug.c +++ b/drivers/common/cnxk/roc_nix_debug.c @@ -786,6 +786,22 @@ roc_nix_dump(struct roc_nix *roc_nix) nix_dump(" \tnb_tx_queues = %d", nix->nb_tx_queues); nix_dump(" \tlso_tsov6_idx = %d", nix->lso_tsov6_idx); nix_dump(" \tlso_tsov4_idx = %d", nix->lso_tsov4_idx); + nix_dump(" \tlso_udp_tun_v4v4 = %d", + nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V4V4]); + nix_dump(" \tlso_udp_tun_v4v6 = %d", + nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V4V6]); + nix_dump(" \tlso_udp_tun_v6v4 = %d", + nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V6V4]); + nix_dump(" 
\tlso_udp_tun_v6v6 = %d", + nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V6V6]); + nix_dump(" \tlso_tun_v4v4 = %d", + nix->lso_tun_idx[ROC_NIX_LSO_TUN_V4V4]); + nix_dump(" \tlso_tun_v4v6 = %d", + nix->lso_tun_idx[ROC_NIX_LSO_TUN_V4V6]); + nix_dump(" \tlso_tun_v6v4 = %d", + nix->lso_tun_idx[ROC_NIX_LSO_TUN_V6V4]); + nix_dump(" \tlso_tun_v6v6 = %d", + nix->lso_tun_idx[ROC_NIX_LSO_TUN_V6V6]); nix_dump(" \tlf_rx_stats = %d", nix->lf_rx_stats); nix_dump(" \tlf_tx_stats = %d", nix->lf_tx_stats); nix_dump(" \trx_chan_cnt = %d", nix->rx_chan_cnt); diff --git a/drivers/common/cnxk/roc_nix_ops.c b/drivers/common/cnxk/roc_nix_ops.c new file mode 100644 index 0000000..eeb85a5 --- /dev/null +++ b/drivers/common/cnxk/roc_nix_ops.c @@ -0,0 +1,438 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +static inline struct mbox * +get_mbox(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + return dev->mbox; +} + +static void +nix_lso_tcp(struct nix_lso_format_cfg *req, bool v4) +{ + __io struct nix_lso_format *field; + + /* Format works only with TCP packet marked by OL3/OL4 */ + field = (__io struct nix_lso_format *)&req->fields[0]; + req->field_mask = NIX_LSO_FIELD_MASK; + /* Outer IPv4/IPv6 */ + field->layer = NIX_TXLAYER_OL3; + field->offset = v4 ? 
2 : 4; + field->sizem1 = 1; /* 2B */ + field->alg = NIX_LSOALG_ADD_PAYLEN; + field++; + if (v4) { + /* IPID field */ + field->layer = NIX_TXLAYER_OL3; + field->offset = 4; + field->sizem1 = 1; + /* Incremented linearly per segment */ + field->alg = NIX_LSOALG_ADD_SEGNUM; + field++; + } + + /* TCP sequence number update */ + field->layer = NIX_TXLAYER_OL4; + field->offset = 4; + field->sizem1 = 3; /* 4 bytes */ + field->alg = NIX_LSOALG_ADD_OFFSET; + field++; + /* TCP flags field */ + field->layer = NIX_TXLAYER_OL4; + field->offset = 12; + field->sizem1 = 1; + field->alg = NIX_LSOALG_TCP_FLAGS; + field++; +} + +static void +nix_lso_udp_tun_tcp(struct nix_lso_format_cfg *req, bool outer_v4, + bool inner_v4) +{ + __io struct nix_lso_format *field; + + field = (__io struct nix_lso_format *)&req->fields[0]; + req->field_mask = NIX_LSO_FIELD_MASK; + /* Outer IPv4/IPv6 len */ + field->layer = NIX_TXLAYER_OL3; + field->offset = outer_v4 ? 2 : 4; + field->sizem1 = 1; /* 2B */ + field->alg = NIX_LSOALG_ADD_PAYLEN; + field++; + if (outer_v4) { + /* IPID */ + field->layer = NIX_TXLAYER_OL3; + field->offset = 4; + field->sizem1 = 1; + /* Incremented linearly per segment */ + field->alg = NIX_LSOALG_ADD_SEGNUM; + field++; + } + + /* Outer UDP length */ + field->layer = NIX_TXLAYER_OL4; + field->offset = 4; + field->sizem1 = 1; + field->alg = NIX_LSOALG_ADD_PAYLEN; + field++; + + /* Inner IPv4/IPv6 */ + field->layer = NIX_TXLAYER_IL3; + field->offset = inner_v4 ? 
2 : 4; + field->sizem1 = 1; /* 2B */ + field->alg = NIX_LSOALG_ADD_PAYLEN; + field++; + if (inner_v4) { + /* IPID field */ + field->layer = NIX_TXLAYER_IL3; + field->offset = 4; + field->sizem1 = 1; + /* Incremented linearly per segment */ + field->alg = NIX_LSOALG_ADD_SEGNUM; + field++; + } + + /* TCP sequence number update */ + field->layer = NIX_TXLAYER_IL4; + field->offset = 4; + field->sizem1 = 3; /* 4 bytes */ + field->alg = NIX_LSOALG_ADD_OFFSET; + field++; + + /* TCP flags field */ + field->layer = NIX_TXLAYER_IL4; + field->offset = 12; + field->sizem1 = 1; + field->alg = NIX_LSOALG_TCP_FLAGS; + field++; +} + +static void +nix_lso_tun_tcp(struct nix_lso_format_cfg *req, bool outer_v4, bool inner_v4) +{ + __io struct nix_lso_format *field; + + field = (__io struct nix_lso_format *)&req->fields[0]; + req->field_mask = NIX_LSO_FIELD_MASK; + /* Outer IPv4/IPv6 len */ + field->layer = NIX_TXLAYER_OL3; + field->offset = outer_v4 ? 2 : 4; + field->sizem1 = 1; /* 2B */ + field->alg = NIX_LSOALG_ADD_PAYLEN; + field++; + if (outer_v4) { + /* IPID */ + field->layer = NIX_TXLAYER_OL3; + field->offset = 4; + field->sizem1 = 1; + /* Incremented linearly per segment */ + field->alg = NIX_LSOALG_ADD_SEGNUM; + field++; + } + + /* Inner IPv4/IPv6 */ + field->layer = NIX_TXLAYER_IL3; + field->offset = inner_v4 ? 
2 : 4; + field->sizem1 = 1; /* 2B */ + field->alg = NIX_LSOALG_ADD_PAYLEN; + field++; + if (inner_v4) { + /* IPID field */ + field->layer = NIX_TXLAYER_IL3; + field->offset = 4; + field->sizem1 = 1; + /* Incremented linearly per segment */ + field->alg = NIX_LSOALG_ADD_SEGNUM; + field++; + } + + /* TCP sequence number update */ + field->layer = NIX_TXLAYER_IL4; + field->offset = 4; + field->sizem1 = 3; /* 4 bytes */ + field->alg = NIX_LSOALG_ADD_OFFSET; + field++; + + /* TCP flags field */ + field->layer = NIX_TXLAYER_IL4; + field->offset = 12; + field->sizem1 = 1; + field->alg = NIX_LSOALG_TCP_FLAGS; + field++; +} + +int +roc_nix_lso_custom_fmt_setup(struct roc_nix *roc_nix, + struct nix_lso_format *fields, uint16_t nb_fields) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct nix_lso_format_cfg_rsp *rsp; + struct nix_lso_format_cfg *req; + int rc = -ENOSPC; + + if (nb_fields > NIX_LSO_FIELD_MAX) + return -EINVAL; + + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return rc; + + req->field_mask = NIX_LSO_FIELD_MASK; + mbox_memcpy(req->fields, fields, + sizeof(struct nix_lso_format) * nb_fields); + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + plt_nix_dbg("Setup custom format %u", rsp->lso_format_idx); + return rsp->lso_format_idx; +} + +int +roc_nix_lso_fmt_setup(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = get_mbox(roc_nix); + struct nix_lso_format_cfg_rsp *rsp; + struct nix_lso_format_cfg *req; + int rc = -ENOSPC; + + /* + * IPv4/TCP LSO + */ + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return rc; + nix_lso_tcp(req, true); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV4) + return NIX_ERR_INTERNAL; + + plt_nix_dbg("tcpv4 lso fmt=%u\n", rsp->lso_format_idx); + + /* + * IPv6/TCP LSO + */ + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return 
-ENOSPC; + nix_lso_tcp(req, false); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV6) + return NIX_ERR_INTERNAL; + + plt_nix_dbg("tcpv6 lso fmt=%u\n", rsp->lso_format_idx); + + /* + * IPv4/UDP/TUN HDR/IPv4/TCP LSO + */ + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return -ENOSPC; + nix_lso_udp_tun_tcp(req, true, true); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V4V4] = rsp->lso_format_idx; + plt_nix_dbg("udp tun v4v4 fmt=%u\n", rsp->lso_format_idx); + + /* + * IPv4/UDP/TUN HDR/IPv6/TCP LSO + */ + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return -ENOSPC; + nix_lso_udp_tun_tcp(req, true, false); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V4V6] = rsp->lso_format_idx; + plt_nix_dbg("udp tun v4v6 fmt=%u\n", rsp->lso_format_idx); + + /* + * IPv6/UDP/TUN HDR/IPv4/TCP LSO + */ + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return -ENOSPC; + nix_lso_udp_tun_tcp(req, false, true); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V6V4] = rsp->lso_format_idx; + plt_nix_dbg("udp tun v6v4 fmt=%u\n", rsp->lso_format_idx); + + /* + * IPv6/UDP/TUN HDR/IPv6/TCP LSO + */ + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return -ENOSPC; + nix_lso_udp_tun_tcp(req, false, false); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V6V6] = rsp->lso_format_idx; + plt_nix_dbg("udp tun v6v6 fmt=%u\n", rsp->lso_format_idx); + + /* + * IPv4/TUN HDR/IPv4/TCP LSO + */ + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return -ENOSPC; + nix_lso_tun_tcp(req, true, true); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + 
nix->lso_tun_idx[ROC_NIX_LSO_TUN_V4V4] = rsp->lso_format_idx; + plt_nix_dbg("tun v4v4 fmt=%u\n", rsp->lso_format_idx); + + /* + * IPv4/TUN HDR/IPv6/TCP LSO + */ + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return -ENOSPC; + nix_lso_tun_tcp(req, true, false); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + nix->lso_tun_idx[ROC_NIX_LSO_TUN_V4V6] = rsp->lso_format_idx; + plt_nix_dbg("tun v4v6 fmt=%u\n", rsp->lso_format_idx); + + /* + * IPv6/TUN HDR/IPv4/TCP LSO + */ + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return -ENOSPC; + nix_lso_tun_tcp(req, false, true); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + nix->lso_tun_idx[ROC_NIX_LSO_TUN_V6V4] = rsp->lso_format_idx; + plt_nix_dbg("tun v6v4 fmt=%u\n", rsp->lso_format_idx); + + /* + * IPv6/TUN HDR/IPv6/TCP LSO + */ + req = mbox_alloc_msg_nix_lso_format_cfg(mbox); + if (req == NULL) + return -ENOSPC; + nix_lso_tun_tcp(req, false, false); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + nix->lso_tun_idx[ROC_NIX_LSO_TUN_V6V6] = rsp->lso_format_idx; + plt_nix_dbg("tun v6v6 fmt=%u\n", rsp->lso_format_idx); + return 0; +} + +int +roc_nix_lso_fmt_get(struct roc_nix *roc_nix, + uint8_t udp_tun[ROC_NIX_LSO_TUN_MAX], + uint8_t tun[ROC_NIX_LSO_TUN_MAX]) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + memcpy(udp_tun, nix->lso_udp_tun_idx, ROC_NIX_LSO_TUN_MAX); + memcpy(tun, nix->lso_tun_idx, ROC_NIX_LSO_TUN_MAX); + return 0; +} + +int +roc_nix_switch_hdr_set(struct roc_nix *roc_nix, uint64_t switch_header_type) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct npc_set_pkind *req; + struct msg_resp *rsp; + int rc = -ENOSPC; + + if (switch_header_type == 0) + switch_header_type = ROC_PRIV_FLAGS_DEFAULT; + + if (switch_header_type != ROC_PRIV_FLAGS_DEFAULT && + switch_header_type != ROC_PRIV_FLAGS_EDSA && + switch_header_type != ROC_PRIV_FLAGS_HIGIG && + switch_header_type != ROC_PRIV_FLAGS_LEN_90B 
&& + switch_header_type != ROC_PRIV_FLAGS_CUSTOM) { + plt_err("switch header type is not supported"); + return NIX_ERR_PARAM; + } + + if (switch_header_type == ROC_PRIV_FLAGS_LEN_90B && + !roc_nix_is_sdp(roc_nix)) { + plt_err("chlen90b is not supported on non-SDP device"); + return NIX_ERR_PARAM; + } + + if (switch_header_type == ROC_PRIV_FLAGS_HIGIG && + roc_nix_is_vf_or_sdp(roc_nix)) { + plt_err("higig2 is supported on PF devices only"); + return NIX_ERR_PARAM; + } + + req = mbox_alloc_msg_npc_set_pkind(mbox); + if (req == NULL) + return rc; + req->mode = switch_header_type; + req->dir = PKIND_RX; + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + req = mbox_alloc_msg_npc_set_pkind(mbox); + if (req == NULL) + return -ENOSPC; + req->mode = switch_header_type; + req->dir = PKIND_TX; + return mbox_process_msg(mbox, (void *)&rsp); +} + +int +roc_nix_eeprom_info_get(struct roc_nix *roc_nix, + struct roc_nix_eeprom_info *info) +{ + struct mbox *mbox = get_mbox(roc_nix); + struct cgx_fw_data *rsp = NULL; + int rc; + + if (!info) { + plt_err("Input buffer is NULL"); + return NIX_ERR_PARAM; + } + + mbox_alloc_msg_cgx_get_aux_link_info(mbox); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + plt_err("Failed to get fw data: %d", rc); + return rc; + } + + info->sff_id = rsp->fwdata.sfp_eeprom.sff_id; + mbox_memcpy(info->buf, rsp->fwdata.sfp_eeprom.buf, SFP_EEPROM_SIZE); + return 0; +} diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index 1457696..202bc76 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -45,6 +45,8 @@ struct nix { uint16_t nb_tx_queues; uint8_t lso_tsov6_idx; uint8_t lso_tsov4_idx; + uint8_t lso_udp_tun_idx[ROC_NIX_LSO_TUN_MAX]; + uint8_t lso_tun_idx[ROC_NIX_LSO_TUN_MAX]; uint8_t lf_rx_stats; uint8_t lf_tx_stats; uint8_t rx_chan_cnt; diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 713fc0b..0f43354 100644 --- 
a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -39,6 +39,9 @@ INTERNAL {
 	roc_nix_lf_free;
 	roc_nix_lf_get_reg_count;
 	roc_nix_lf_reg_dump;
+	roc_nix_lso_custom_fmt_setup;
+	roc_nix_lso_fmt_get;
+	roc_nix_lso_fmt_setup;
 	roc_nix_mac_addr_add;
 	roc_nix_mac_addr_del;
 	roc_nix_mac_addr_set;
@@ -98,6 +101,8 @@ INTERNAL {
 	roc_nix_num_xstats_get;
 	roc_nix_xstats_get;
 	roc_nix_xstats_names_get;
+	roc_nix_switch_hdr_set;
+	roc_nix_eeprom_info_get;
 	roc_nix_unregister_cq_irqs;
 	roc_nix_unregister_queue_irqs;
 	roc_nix_vlan_insert_ena_dis;

From patchwork Thu Apr 1 12:37:57 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90405
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:57 +0530
Message-ID: <20210401123817.14348-33-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 32/52] common/cnxk: add nix traffic management base support
List-Id: DPDK patches and discussions
Sender: "dev"

Add NIX traffic management base support: init/fini of nodes, shaper profiles and topology, and SQ setup for a given user hierarchy or the default internal hierarchy.
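The weight-to-quantum mapping used by this patch's nix_tm_weight_to_rr_quantum() helper is a linear scaling of the user weight onto the HW round-robin quantum range. A minimal self-contained sketch, assuming the quantum maximum is NIX_TM_TL1_DFLT_RR_QTM = (1 << 24) - 1 from this patch (the actual NIX_TM_RR_QUANTUM_MAX / cn9k maximum may differ):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed maximum RR quantum, taken from NIX_TM_TL1_DFLT_RR_QTM in this
 * patch; the real per-model NIX_TM_RR_QUANTUM_MAX may differ. */
#define TM_RR_QUANTUM_MAX ((1ULL << 24) - 1)
/* ROC_NIX_TM_MAX_SCHED_WT == (uint8_t)~0 == 255 */
#define TM_MAX_SCHED_WT 255ULL

/* Linear mapping of a user weight (0..255) onto the HW quantum range,
 * mirroring nix_tm_weight_to_rr_quantum(): the weight is first masked
 * to 8 bits, then scaled so weight 255 maps to the maximum quantum. */
static uint64_t weight_to_rr_quantum(uint64_t weight)
{
	weight &= TM_MAX_SCHED_WT;
	return (weight * TM_RR_QUANTUM_MAX) / TM_MAX_SCHED_WT;
}
```

This is why the patch defines NIX_TM_DFLT_RR_WT so that the default weight, after scaling, yields a quantum matching NIX_MAX_HW_FRS, per the comment in roc_nix_priv.h.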
Signed-off-by: Nithin Dabilpuram --- drivers/common/cnxk/meson.build | 3 + drivers/common/cnxk/roc_nix.c | 7 + drivers/common/cnxk/roc_nix.h | 26 +++ drivers/common/cnxk/roc_nix_priv.h | 219 ++++++++++++++++++ drivers/common/cnxk/roc_nix_queue.c | 9 + drivers/common/cnxk/roc_nix_tm.c | 397 +++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix_tm_ops.c | 67 ++++++ drivers/common/cnxk/roc_nix_tm_utils.c | 62 +++++ drivers/common/cnxk/roc_platform.c | 1 + drivers/common/cnxk/roc_platform.h | 2 + drivers/common/cnxk/version.map | 3 + 11 files changed, 796 insertions(+) create mode 100644 drivers/common/cnxk/roc_nix_tm.c create mode 100644 drivers/common/cnxk/roc_nix_tm_ops.c create mode 100644 drivers/common/cnxk/roc_nix_tm_utils.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index d8514b3..b453364 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -27,6 +27,9 @@ sources = files('roc_dev.c', 'roc_nix_queue.c', 'roc_nix_rss.c', 'roc_nix_stats.c', + 'roc_nix_tm.c', + 'roc_nix_tm_ops.c', + 'roc_nix_tm_utils.c', 'roc_nix_vlan.c', 'roc_npa.c', 'roc_npa_debug.c', diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c index 0621976..d6b288f 100644 --- a/drivers/common/cnxk/roc_nix.c +++ b/drivers/common/cnxk/roc_nix.c @@ -396,11 +396,17 @@ roc_nix_dev_init(struct roc_nix *roc_nix) if (rc) goto lf_detach; + rc = nix_tm_conf_init(roc_nix); + if (rc) + goto unregister_irqs; + /* Get NIX HW info */ roc_nix_get_hw_info(roc_nix); nix->dev.drv_inited = true; return 0; +unregister_irqs: + nix_unregister_irqs(nix); lf_detach: nix_lf_detach(nix); dev_fini: @@ -421,6 +427,7 @@ roc_nix_dev_fini(struct roc_nix *roc_nix) if (!nix->dev.drv_inited) goto fini; + nix_tm_conf_fini(roc_nix); nix_unregister_irqs(nix); rc = nix_lf_detach(nix); diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index ce8c252..1ad0e72 100644 --- a/drivers/common/cnxk/roc_nix.h +++ 
b/drivers/common/cnxk/roc_nix.h @@ -305,6 +305,32 @@ void __roc_api roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix); int __roc_api roc_nix_register_cq_irqs(struct roc_nix *roc_nix); void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix); +/* Traffic Management */ +#define ROC_NIX_TM_MAX_SCHED_WT ((uint8_t)~0) + +enum roc_nix_tm_tree { + ROC_NIX_TM_DEFAULT = 0, + ROC_NIX_TM_RLIMIT, + ROC_NIX_TM_USER, + ROC_NIX_TM_TREE_MAX, +}; + +enum roc_tm_node_level { + ROC_TM_LVL_ROOT = 0, + ROC_TM_LVL_SCH1, + ROC_TM_LVL_SCH2, + ROC_TM_LVL_SCH3, + ROC_TM_LVL_SCH4, + ROC_TM_LVL_QUEUE, + ROC_TM_LVL_MAX, +}; + +/* + * TM runtime hierarchy init API. + */ +int __roc_api roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable); +int __roc_api roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq); + /* MAC */ int __roc_api roc_nix_mac_rxtx_start_stop(struct roc_nix *roc_nix, bool start); int __roc_api roc_nix_mac_link_event_start_stop(struct roc_nix *roc_nix, diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index 202bc76..edc3ff1 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -28,6 +28,77 @@ struct nix_qint { uint8_t qintx; }; +/* Traffic Manager */ +#define NIX_TM_MAX_HW_TXSCHQ 512 +#define NIX_TM_HW_ID_INVALID UINT32_MAX + +/* TM flags */ +#define NIX_TM_HIERARCHY_ENA BIT_ULL(0) +#define NIX_TM_TL1_NO_SP BIT_ULL(1) +#define NIX_TM_TL1_ACCESS BIT_ULL(2) + +struct nix_tm_tb { + /** Token bucket rate (bytes per second) */ + uint64_t rate; + + /** Token bucket size (bytes), a.k.a. 
max burst size */ + uint64_t size; +}; + +struct nix_tm_node { + TAILQ_ENTRY(nix_tm_node) node; + + /* Input params */ + enum roc_nix_tm_tree tree; + uint32_t id; + uint32_t priority; + uint32_t weight; + uint16_t lvl; + uint32_t parent_id; + uint32_t shaper_profile_id; + void (*free_fn)(void *node); + + /* Derived params */ + uint32_t hw_id; + uint16_t hw_lvl; + uint32_t rr_prio; + uint32_t rr_num; + uint32_t max_prio; + uint32_t parent_hw_id; + uint32_t flags : 16; +#define NIX_TM_NODE_HWRES BIT_ULL(0) +#define NIX_TM_NODE_ENABLED BIT_ULL(1) + /* Shaper algorithm for RED state @NIX_REDALG_E */ + uint32_t red_algo : 2; + uint32_t pkt_mode : 1; + uint32_t pkt_mode_set : 1; + + bool child_realloc; + struct nix_tm_node *parent; + + /* Non-leaf node sp count */ + uint32_t n_sp_priorities; + + /* Last stats */ + uint64_t last_pkts; + uint64_t last_bytes; +}; + +struct nix_tm_shaper_profile { + TAILQ_ENTRY(nix_tm_shaper_profile) shaper; + struct nix_tm_tb commit; + struct nix_tm_tb peak; + int32_t pkt_len_adj; + bool pkt_mode; + uint32_t id; + void (*free_fn)(void *profile); + + uint32_t ref_cnt; +}; + +TAILQ_HEAD(nix_tm_node_list, nix_tm_node); +TAILQ_HEAD(nix_tm_shaper_profile_list, nix_tm_shaper_profile); + struct nix { uint16_t reta[ROC_NIX_RSS_GRPS][ROC_NIX_RSS_RETA_MAX]; enum roc_nix_rss_reta_sz reta_sz; @@ -73,6 +144,23 @@ struct nix { bool ptp_en; bool is_nix1; + /* Traffic manager info */ + + /* Contiguous resources per lvl */ + struct plt_bitmap *schq_contig_bmp[NIX_TXSCH_LVL_CNT]; + /* Dis-contiguous resources per lvl */ + struct plt_bitmap *schq_bmp[NIX_TXSCH_LVL_CNT]; + void *schq_bmp_mem; + + struct nix_tm_shaper_profile_list shaper_profile_list; + struct nix_tm_node_list trees[ROC_NIX_TM_TREE_MAX]; + enum roc_nix_tm_tree tm_tree; + uint64_t tm_rate_min; + uint16_t tm_root_lvl; + uint16_t tm_flags; + uint16_t tm_link_cfg_lvl; + uint16_t contig_rsvd[NIX_TXSCH_LVL_CNT]; + uint16_t discontig_rsvd[NIX_TXSCH_LVL_CNT]; } __plt_cache_aligned; enum nix_err_status 
{ @@ -84,6 +172,29 @@ enum nix_err_status { NIX_ERR_QUEUE_INVALID_RANGE, NIX_ERR_AQ_READ_FAILED, NIX_ERR_AQ_WRITE_FAILED, + NIX_ERR_TM_LEAF_NODE_GET, + NIX_ERR_TM_INVALID_LVL, + NIX_ERR_TM_INVALID_PRIO, + NIX_ERR_TM_INVALID_PARENT, + NIX_ERR_TM_NODE_EXISTS, + NIX_ERR_TM_INVALID_NODE, + NIX_ERR_TM_INVALID_SHAPER_PROFILE, + NIX_ERR_TM_PKT_MODE_MISMATCH, + NIX_ERR_TM_WEIGHT_EXCEED, + NIX_ERR_TM_CHILD_EXISTS, + NIX_ERR_TM_INVALID_PEAK_SZ, + NIX_ERR_TM_INVALID_PEAK_RATE, + NIX_ERR_TM_INVALID_COMMIT_SZ, + NIX_ERR_TM_INVALID_COMMIT_RATE, + NIX_ERR_TM_SHAPER_PROFILE_IN_USE, + NIX_ERR_TM_SHAPER_PROFILE_EXISTS, + NIX_ERR_TM_SHAPER_PKT_LEN_ADJUST, + NIX_ERR_TM_INVALID_TREE, + NIX_ERR_TM_PARENT_PRIO_UPDATE, + NIX_ERR_TM_PRIO_EXCEEDED, + NIX_ERR_TM_PRIO_ORDER, + NIX_ERR_TM_MULTIPLE_RR_GROUPS, + NIX_ERR_TM_SQ_UPDATE_FAIL, NIX_ERR_NDC_SYNC, }; @@ -117,4 +228,112 @@ nix_priv_to_roc_nix(struct nix *nix) int nix_register_irqs(struct nix *nix); void nix_unregister_irqs(struct nix *nix); +/* TM */ +#define NIX_TM_TREE_MASK_ALL \ + (BIT(ROC_NIX_TM_DEFAULT) | BIT(ROC_NIX_TM_RLIMIT) | \ + BIT(ROC_NIX_TM_USER)) + +/* NIX_MAX_HW_FRS == + * NIX_TM_DFLT_RR_WT * NIX_TM_RR_QUANTUM_MAX / ROC_NIX_TM_MAX_SCHED_WT + */ +#define NIX_TM_DFLT_RR_WT 71 + +/* Default TL1 priority and Quantum from AF */ +#define NIX_TM_TL1_DFLT_RR_QTM ((1 << 24) - 1) +#define NIX_TM_TL1_DFLT_RR_PRIO 1 + +struct nix_tm_shaper_data { + uint64_t burst_exponent; + uint64_t burst_mantissa; + uint64_t div_exp; + uint64_t exponent; + uint64_t mantissa; + uint64_t burst; + uint64_t rate; +}; + +static inline uint64_t +nix_tm_weight_to_rr_quantum(uint64_t weight) +{ + uint64_t max = (roc_model_is_cn9k() ? 
NIX_CN9K_TM_RR_QUANTUM_MAX : + NIX_TM_RR_QUANTUM_MAX); + + weight &= (uint64_t)ROC_NIX_TM_MAX_SCHED_WT; + return (weight * max) / ROC_NIX_TM_MAX_SCHED_WT; +} + +static inline bool +nix_tm_have_tl1_access(struct nix *nix) +{ + return !!(nix->tm_flags & NIX_TM_TL1_ACCESS); +} + +static inline bool +nix_tm_is_leaf(struct nix *nix, int lvl) +{ + if (nix_tm_have_tl1_access(nix)) + return (lvl == ROC_TM_LVL_QUEUE); + return (lvl == ROC_TM_LVL_SCH4); +} + +static inline struct nix_tm_node_list * +nix_tm_node_list(struct nix *nix, enum roc_nix_tm_tree tree) +{ + return &nix->trees[tree]; +} + +static inline const char * +nix_tm_hwlvl2str(uint32_t hw_lvl) +{ + switch (hw_lvl) { + case NIX_TXSCH_LVL_MDQ: + return "SMQ/MDQ"; + case NIX_TXSCH_LVL_TL4: + return "TL4"; + case NIX_TXSCH_LVL_TL3: + return "TL3"; + case NIX_TXSCH_LVL_TL2: + return "TL2"; + case NIX_TXSCH_LVL_TL1: + return "TL1"; + default: + break; + } + + return "???"; +} + +static inline const char * +nix_tm_tree2str(enum roc_nix_tm_tree tree) +{ + if (tree == ROC_NIX_TM_DEFAULT) + return "Default Tree"; + else if (tree == ROC_NIX_TM_RLIMIT) + return "Rate Limit Tree"; + else if (tree == ROC_NIX_TM_USER) + return "User Tree"; + return "???"; +} + +/* + * TM priv ops. + */ + +int nix_tm_conf_init(struct roc_nix *roc_nix); +void nix_tm_conf_fini(struct roc_nix *roc_nix); +int nix_tm_leaf_data_get(struct nix *nix, uint16_t sq, uint32_t *rr_quantum, + uint16_t *smq); +int nix_tm_sq_flush_pre(struct roc_nix_sq *sq); +int nix_tm_sq_flush_post(struct roc_nix_sq *sq); +int nix_tm_smq_xoff(struct nix *nix, struct nix_tm_node *node, bool enable); +int nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node); + +/* + * TM priv utils. 
+ */ +struct nix_tm_node *nix_tm_node_search(struct nix *nix, uint32_t node_id, + enum roc_nix_tm_tree tree); +uint8_t nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable, + volatile uint64_t *reg, volatile uint64_t *regval); + #endif /* _ROC_NIX_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c index c5287a9..fbf7efa 100644 --- a/drivers/common/cnxk/roc_nix_queue.c +++ b/drivers/common/cnxk/roc_nix_queue.c @@ -788,6 +788,12 @@ roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq) if (rc) goto nomem; + rc = nix_tm_leaf_data_get(nix, sq->qid, &rr_quantum, &smq); + if (rc) { + rc = NIX_ERR_TM_LEAF_NODE_GET; + goto nomem; + } + /* Init SQ context */ if (roc_model_is_cn9k()) sq_cn9k_init(nix, sq, rr_quantum, smq); @@ -831,6 +837,8 @@ roc_nix_sq_fini(struct roc_nix_sq *sq) qid = sq->qid; + rc = nix_tm_sq_flush_pre(sq); + /* Release SQ context */ if (roc_model_is_cn9k()) rc |= sq_cn9k_fini(roc_nix_to_nix_priv(sq->roc_nix), sq); @@ -845,6 +853,7 @@ roc_nix_sq_fini(struct roc_nix_sq *sq) if (mbox_process(mbox)) rc |= NIX_ERR_NDC_SYNC; + rc |= nix_tm_sq_flush_post(sq); rc |= roc_npa_pool_destroy(sq->aura_handle); plt_free(sq->fc); plt_free(sq->sqe_mem); diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c new file mode 100644 index 0000000..4cafc0f --- /dev/null +++ b/drivers/common/cnxk/roc_nix_tm.c @@ -0,0 +1,397 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + + +int +nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_txschq_config *req; + struct nix_tm_node *p; + int rc; + + /* Enable nodes in path for flush to succeed */ + if (!nix_tm_is_leaf(nix, node->lvl)) + p = node; + else + p = node->parent; + while (p) { + if (!(p->flags & NIX_TM_NODE_ENABLED) && + (p->flags & NIX_TM_NODE_HWRES)) { + req = mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = p->hw_lvl; + req->num_regs = nix_tm_sw_xoff_prep(p, false, req->reg, + req->regval); + rc = mbox_process(mbox); + if (rc) + return rc; + + p->flags |= NIX_TM_NODE_ENABLED; + } + p = p->parent; + } + + return 0; +} + +int +nix_tm_smq_xoff(struct nix *nix, struct nix_tm_node *node, bool enable) +{ + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_txschq_config *req; + uint16_t smq; + int rc; + + smq = node->hw_id; + plt_tm_dbg("Setting SMQ %u XOFF/FLUSH to %s", smq, + enable ? "enable" : "disable"); + + rc = nix_tm_clear_path_xoff(nix, node); + if (rc) + return rc; + + req = mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = NIX_TXSCH_LVL_SMQ; + req->num_regs = 1; + + req->reg[0] = NIX_AF_SMQX_CFG(smq); + req->regval[0] = enable ? (BIT_ULL(50) | BIT_ULL(49)) : 0; + req->regval_mask[0] = + enable ? 
~(BIT_ULL(50) | BIT_ULL(49)) : ~BIT_ULL(50); + + return mbox_process(mbox); +} + +int +nix_tm_leaf_data_get(struct nix *nix, uint16_t sq, uint32_t *rr_quantum, + uint16_t *smq) +{ + struct nix_tm_node *node; + int rc; + + node = nix_tm_node_search(nix, sq, nix->tm_tree); + + /* Check if we found a valid leaf node */ + if (!node || !nix_tm_is_leaf(nix, node->lvl) || !node->parent || + node->parent->hw_id == NIX_TM_HW_ID_INVALID) { + return -EIO; + } + + /* Get SMQ Id of leaf node's parent */ + *smq = node->parent->hw_id; + *rr_quantum = nix_tm_weight_to_rr_quantum(node->weight); + + rc = nix_tm_smq_xoff(nix, node->parent, false); + if (rc) + return rc; + node->flags |= NIX_TM_NODE_ENABLED; + return 0; +} + +int +roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq) +{ + struct nix *nix = roc_nix_to_nix_priv(sq->roc_nix); + uint16_t sqb_cnt, head_off, tail_off; + uint64_t wdata, val, prev; + uint16_t qid = sq->qid; + int64_t *regaddr; + uint64_t timeout; /* 10's of usec */ + + /* Wait for enough time based on shaper min rate */ + timeout = (sq->nb_desc * roc_nix_max_pkt_len(sq->roc_nix) * 8 * 1E5); + /* Wait for worst case scenario of this SQ being last priority + * and so have to wait for all other SQ's drain out by their own. 
+	 */
+	timeout = timeout * nix->nb_tx_queues;
+	timeout = timeout / nix->tm_rate_min;
+	if (!timeout)
+		timeout = 10000;
+
+	wdata = ((uint64_t)qid << 32);
+	regaddr = (int64_t *)(nix->base + NIX_LF_SQ_OP_STATUS);
+	val = roc_atomic64_add_nosync(wdata, regaddr);
+
+	/* Spin multiple iterations as "sq->fc_cache_pkts" can still
+	 * have space to send pkts even though fc_mem is disabled
+	 */
+
+	while (true) {
+		prev = val;
+		plt_delay_us(10);
+		val = roc_atomic64_add_nosync(wdata, regaddr);
+		/* Continue on error */
+		if (val & BIT_ULL(63))
+			continue;
+
+		if (prev != val)
+			continue;
+
+		sqb_cnt = val & 0xFFFF;
+		head_off = (val >> 20) & 0x3F;
+		tail_off = (val >> 28) & 0x3F;
+
+		/* SQ reached quiescent state */
+		if (sqb_cnt <= 1 && head_off == tail_off &&
+		    (*(volatile uint64_t *)sq->fc == sq->nb_sqb_bufs)) {
+			break;
+		}
+
+		/* Timeout */
+		if (!timeout)
+			goto exit;
+		timeout--;
+	}
+
+	return 0;
+exit:
+	roc_nix_queues_ctx_dump(sq->roc_nix);
+	return -EFAULT;
+}
+
+/* Flush and disable tx queue and its parent SMQ */
+int
+nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
+{
+	struct roc_nix *roc_nix = sq->roc_nix;
+	struct nix_tm_node *node, *sibling;
+	struct nix_tm_node_list *list;
+	enum roc_nix_tm_tree tree;
+	struct mbox *mbox;
+	struct nix *nix;
+	uint16_t qid;
+	int rc;
+
+	nix = roc_nix_to_nix_priv(roc_nix);
+
+	/* Need not do anything if tree is in disabled state */
+	if (!(nix->tm_flags & NIX_TM_HIERARCHY_ENA))
+		return 0;
+
+	mbox = (&nix->dev)->mbox;
+	qid = sq->qid;
+
+	tree = nix->tm_tree;
+	list = nix_tm_node_list(nix, tree);
+
+	/* Find the node for this SQ */
+	node = nix_tm_node_search(nix, qid, tree);
+	if (!node || !(node->flags & NIX_TM_NODE_ENABLED)) {
+		plt_err("Invalid node/state for sq %u", qid);
+		return -EFAULT;
+	}
+
+	/* Enable CGX RXTX to drain pkts */
+	if (!roc_nix->io_enabled) {
+		/* Though this enables both RX MCAM entries and the CGX link,
+		 * we assume all the RX queues were already stopped long
+		 * before this point.
+		 */
+		mbox_alloc_msg_nix_lf_start_rx(mbox);
+		rc = mbox_process(mbox);
+		if (rc) {
+			plt_err("cgx start failed, rc=%d", rc);
+			return rc;
+		}
+	}
+
+	/* Disable smq xoff in case it was enabled earlier */
+	rc = nix_tm_smq_xoff(nix, node->parent, false);
+	if (rc) {
+		plt_err("Failed to enable smq %u, rc=%d", node->parent->hw_id,
+			rc);
+		return rc;
+	}
+
+	/* As per HRM, to disable an SQ, all other SQs
+	 * that feed into the same SMQ must be paused before SMQ flush.
+	 */
+	TAILQ_FOREACH(sibling, list, node) {
+		if (sibling->parent != node->parent)
+			continue;
+		if (!(sibling->flags & NIX_TM_NODE_ENABLED))
+			continue;
+
+		qid = sibling->id;
+		sq = nix->sqs[qid];
+		if (!sq)
+			continue;
+
+		rc = roc_nix_tm_sq_aura_fc(sq, false);
+		if (rc) {
+			plt_err("Failed to disable sqb aura fc, rc=%d", rc);
+			goto cleanup;
+		}
+
+		/* Wait for sq entries to be flushed */
+		rc = roc_nix_tm_sq_flush_spin(sq);
+		if (rc) {
+			plt_err("Failed to drain sq %u, rc=%d", sq->qid, rc);
+			return rc;
+		}
+	}
+
+	node->flags &= ~NIX_TM_NODE_ENABLED;
+
+	/* Disable and flush */
+	rc = nix_tm_smq_xoff(nix, node->parent, true);
+	if (rc) {
+		plt_err("Failed to disable smq %u, rc=%d", node->parent->hw_id,
+			rc);
+		goto cleanup;
+	}
+cleanup:
+	/* Restore cgx state */
+	if (!roc_nix->io_enabled) {
+		mbox_alloc_msg_nix_lf_stop_rx(mbox);
+		rc |= mbox_process(mbox);
+	}
+
+	return rc;
+}
+
+int
+nix_tm_sq_flush_post(struct roc_nix_sq *sq)
+{
+	struct roc_nix *roc_nix = sq->roc_nix;
+	struct nix_tm_node *node, *sibling;
+	struct nix_tm_node_list *list;
+	enum roc_nix_tm_tree tree;
+	struct roc_nix_sq *s_sq;
+	bool once = false;
+	uint16_t qid, s_qid;
+	struct nix *nix;
+	int rc;
+
+	nix = roc_nix_to_nix_priv(roc_nix);
+
+	/* Need not do anything if tree is in disabled state */
+	if (!(nix->tm_flags & NIX_TM_HIERARCHY_ENA))
+		return 0;
+
+	qid = sq->qid;
+	tree = nix->tm_tree;
+	list = nix_tm_node_list(nix, tree);
+
+	/* Find the node for this SQ */
+	node = nix_tm_node_search(nix, qid, tree);
+	if (!node)
{ + plt_err("Invalid node for sq %u", qid); + return -EFAULT; + } + + /* Enable all the siblings back */ + TAILQ_FOREACH(sibling, list, node) { + if (sibling->parent != node->parent) + continue; + + if (sibling->id == qid) + continue; + + if (!(sibling->flags & NIX_TM_NODE_ENABLED)) + continue; + + s_qid = sibling->id; + s_sq = nix->sqs[s_qid]; + if (!s_sq) + continue; + + if (!once) { + /* Enable back if any SQ is still present */ + rc = nix_tm_smq_xoff(nix, node->parent, false); + if (rc) { + plt_err("Failed to enable smq %u, rc=%d", + node->parent->hw_id, rc); + return rc; + } + once = true; + } + + rc = roc_nix_tm_sq_aura_fc(s_sq, true); + if (rc) { + plt_err("Failed to enable sqb aura fc, rc=%d", rc); + return rc; + } + } + + return 0; +} + +int +nix_tm_conf_init(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint32_t bmp_sz, hw_lvl; + void *bmp_mem; + int rc, i; + + nix->tm_flags = 0; + for (i = 0; i < ROC_NIX_TM_TREE_MAX; i++) + TAILQ_INIT(&nix->trees[i]); + + TAILQ_INIT(&nix->shaper_profile_list); + nix->tm_rate_min = 1E9; /* 1Gbps */ + + rc = -ENOMEM; + bmp_sz = plt_bitmap_get_memory_footprint(NIX_TM_MAX_HW_TXSCHQ); + bmp_mem = plt_zmalloc(bmp_sz * NIX_TXSCH_LVL_CNT * 2, 0); + if (!bmp_mem) + return rc; + nix->schq_bmp_mem = bmp_mem; + + /* Init contiguous and discontiguous bitmap per lvl */ + rc = -EIO; + for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) { + /* Bitmap for discontiguous resource */ + nix->schq_bmp[hw_lvl] = + plt_bitmap_init(NIX_TM_MAX_HW_TXSCHQ, bmp_mem, bmp_sz); + if (!nix->schq_bmp[hw_lvl]) + goto exit; + + bmp_mem = PLT_PTR_ADD(bmp_mem, bmp_sz); + + /* Bitmap for contiguous resource */ + nix->schq_contig_bmp[hw_lvl] = + plt_bitmap_init(NIX_TM_MAX_HW_TXSCHQ, bmp_mem, bmp_sz); + if (!nix->schq_contig_bmp[hw_lvl]) + goto exit; + + bmp_mem = PLT_PTR_ADD(bmp_mem, bmp_sz); + } + + /* Disable TL1 Static Priority when VF's are enabled + * as otherwise VF's TL2 reallocation will be needed + * runtime to 
support a specific PF topology.
+	 */
+	if (nix->pci_dev->max_vfs)
+		nix->tm_flags |= NIX_TM_TL1_NO_SP;
+
+	/* TL1 access is only for PFs */
+	if (roc_nix_is_pf(roc_nix)) {
+		nix->tm_flags |= NIX_TM_TL1_ACCESS;
+		nix->tm_root_lvl = NIX_TXSCH_LVL_TL1;
+	} else {
+		nix->tm_root_lvl = NIX_TXSCH_LVL_TL2;
+	}
+
+	return 0;
+exit:
+	nix_tm_conf_fini(roc_nix);
+	return rc;
+}
+
+void
+nix_tm_conf_fini(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	uint16_t hw_lvl;
+
+	for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) {
+		plt_bitmap_free(nix->schq_bmp[hw_lvl]);
+		plt_bitmap_free(nix->schq_contig_bmp[hw_lvl]);
+	}
+	plt_free(nix->schq_bmp_mem);
+}
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
new file mode 100644
index 0000000..e2b6d02
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+int
+roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable)
+{
+	struct npa_aq_enq_req *req;
+	struct npa_aq_enq_rsp *rsp;
+	uint64_t aura_handle;
+	struct npa_lf *lf;
+	struct mbox *mbox;
+	int rc = -ENOSPC;
+
+	plt_tm_dbg("Setting SQ %u SQB aura FC to %s", sq->qid,
+		   enable ?
"enable" : "disable"); + + lf = idev_npa_obj_get(); + if (!lf) + return NPA_ERR_DEVICE_NOT_BOUNDED; + + mbox = lf->mbox; + /* Set/clear sqb aura fc_ena */ + aura_handle = sq->aura_handle; + req = mbox_alloc_msg_npa_aq_enq(mbox); + if (req == NULL) + return rc; + + req->aura_id = roc_npa_aura_handle_to_aura(aura_handle); + req->ctype = NPA_AQ_CTYPE_AURA; + req->op = NPA_AQ_INSTOP_WRITE; + /* Below is not needed for aura writes but AF driver needs it */ + /* AF will translate to associated poolctx */ + req->aura.pool_addr = req->aura_id; + + req->aura.fc_ena = enable; + req->aura_mask.fc_ena = 1; + + rc = mbox_process(mbox); + if (rc) + return rc; + + /* Read back npa aura ctx */ + req = mbox_alloc_msg_npa_aq_enq(mbox); + if (req == NULL) + return -ENOSPC; + + req->aura_id = roc_npa_aura_handle_to_aura(aura_handle); + req->ctype = NPA_AQ_CTYPE_AURA; + req->op = NPA_AQ_INSTOP_READ; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + /* Init when enabled as there might be no triggers */ + if (enable) + *(volatile uint64_t *)sq->fc = rsp->aura.count; + else + *(volatile uint64_t *)sq->fc = sq->nb_sqb_bufs; + /* Sync write barrier */ + plt_wmb(); + return 0; +} diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c new file mode 100644 index 0000000..a3f683e --- /dev/null +++ b/drivers/common/cnxk/roc_nix_tm_utils.c @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +struct nix_tm_node * +nix_tm_node_search(struct nix *nix, uint32_t node_id, enum roc_nix_tm_tree tree) +{ + struct nix_tm_node_list *list; + struct nix_tm_node *node; + + list = nix_tm_node_list(nix, tree); + TAILQ_FOREACH(node, list, node) { + if (node->id == node_id) + return node; + } + return NULL; +} + +uint8_t +nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable, + volatile uint64_t *reg, volatile uint64_t *regval) +{ + uint32_t hw_lvl = node->hw_lvl; + uint32_t schq = node->hw_id; + uint8_t k = 0; + + plt_tm_dbg("sw xoff config node %s(%u) lvl %u id %u, enable %u (%p)", + nix_tm_hwlvl2str(hw_lvl), schq, node->lvl, node->id, enable, + node); + + regval[k] = enable; + + switch (hw_lvl) { + case NIX_TXSCH_LVL_MDQ: + reg[k] = NIX_AF_MDQX_SW_XOFF(schq); + k++; + break; + case NIX_TXSCH_LVL_TL4: + reg[k] = NIX_AF_TL4X_SW_XOFF(schq); + k++; + break; + case NIX_TXSCH_LVL_TL3: + reg[k] = NIX_AF_TL3X_SW_XOFF(schq); + k++; + break; + case NIX_TXSCH_LVL_TL2: + reg[k] = NIX_AF_TL2X_SW_XOFF(schq); + k++; + break; + case NIX_TXSCH_LVL_TL1: + reg[k] = NIX_AF_TL1X_SW_XOFF(schq); + k++; + break; + default: + break; + } + + return k; +} diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index 3ac81ce..4d61344 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -32,3 +32,4 @@ RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_nix, pmd.net.cnxk, NOTICE); +RTE_LOG_REGISTER(cnxk_logtype_tm, pmd.net.cnxk.tm, NOTICE); diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index 5e4976b..94bb3c7 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -136,6 +136,7 @@ extern int cnxk_logtype_base; extern int cnxk_logtype_mbox; 
 extern int cnxk_logtype_npa;
 extern int cnxk_logtype_nix;
+extern int cnxk_logtype_tm;
 
 #define plt_err(fmt, args...) \
 	RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args)
@@ -155,6 +156,7 @@ extern int cnxk_logtype_nix;
 #define plt_mbox_dbg(fmt, ...) plt_dbg(mbox, fmt, ##__VA_ARGS__)
 #define plt_npa_dbg(fmt, ...) plt_dbg(npa, fmt, ##__VA_ARGS__)
 #define plt_nix_dbg(fmt, ...) plt_dbg(nix, fmt, ##__VA_ARGS__)
+#define plt_tm_dbg(fmt, ...) plt_dbg(tm, fmt, ##__VA_ARGS__)
 
 #ifdef __cplusplus
 #define CNXK_PCI_ID(subsystem_dev, dev) \
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 0f43354..5b99467 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -5,6 +5,7 @@ INTERNAL {
	cnxk_logtype_mbox;
	cnxk_logtype_nix;
	cnxk_logtype_npa;
+	cnxk_logtype_tm;
	roc_clk_freq_get;
	roc_error_msg_get;
	roc_idev_lmt_base_addr_get;
@@ -103,6 +104,8 @@ INTERNAL {
	roc_nix_xstats_names_get;
	roc_nix_switch_hdr_set;
	roc_nix_eeprom_info_get;
+	roc_nix_tm_sq_aura_fc;
+	roc_nix_tm_sq_flush_spin;
	roc_nix_unregister_cq_irqs;
	roc_nix_unregister_queue_irqs;
	roc_nix_vlan_insert_ena_dis;

From patchwork Thu Apr 1 12:37:58 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90406
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:58 +0530
Message-ID: <20210401123817.14348-34-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 33/52] common/cnxk: add nix tm support to add/delete node

Add support to add/delete nodes in a hierarchy. This patch also adds
misc utils to get node name, walk through nodes etc.

Signed-off-by: Nithin Dabilpuram
Signed-off-by: Satha Rao
---
 drivers/common/cnxk/roc_nix.h          |  42 ++++++
 drivers/common/cnxk/roc_nix_priv.h     |  14 +++
 drivers/common/cnxk/roc_nix_tm.c       | 205 ++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_tm_ops.c   |  88 ++++++++++++
 drivers/common/cnxk/roc_nix_tm_utils.c | 212 +++++++++++++++++++++++++++
 drivers/common/cnxk/version.map        |   7 ++
 6 files changed, 568 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 1ad0e72..d656909 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -307,6 +307,8 @@ void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix);
 
 /* Traffic Management */
 #define ROC_NIX_TM_MAX_SCHED_WT ((uint8_t)~0)
+#define ROC_NIX_TM_SHAPER_PROFILE_NONE UINT32_MAX
+#define ROC_NIX_TM_NODE_ID_INVALID UINT32_MAX
 
 enum roc_nix_tm_tree {
	ROC_NIX_TM_DEFAULT = 0,
@@ -331,6 +333,46 @@ enum roc_tm_node_level {
 int __roc_api roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable);
 int __roc_api roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq);
 
+/*
+ * TM User hierarchy API.
+ */ + +struct roc_nix_tm_node { +#define ROC_NIX_TM_NODE_SZ (128) + uint8_t reserved[ROC_NIX_TM_NODE_SZ]; + + uint32_t id; + uint32_t parent_id; + uint32_t priority; + uint32_t weight; + uint32_t shaper_profile_id; + uint16_t lvl; + bool pkt_mode; + bool pkt_mode_set; + /* Function to free this memory */ + void (*free_fn)(void *node); +}; + +int __roc_api roc_nix_tm_node_add(struct roc_nix *roc_nix, + struct roc_nix_tm_node *roc_node); +int __roc_api roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, + bool free); +int __roc_api roc_nix_tm_node_pkt_mode_update(struct roc_nix *roc_nix, + uint32_t node_id, bool pkt_mode); + +struct roc_nix_tm_node *__roc_api roc_nix_tm_node_get(struct roc_nix *roc_nix, + uint32_t node_id); +struct roc_nix_tm_node *__roc_api +roc_nix_tm_node_next(struct roc_nix *roc_nix, struct roc_nix_tm_node *__prev); + +/* + * TM utilities API. + */ +int __roc_api roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id); +int __roc_api roc_nix_tm_node_name_get(struct roc_nix *roc_nix, + uint32_t node_id, char *buf, + size_t buflen); + /* MAC */ int __roc_api roc_nix_mac_rxtx_start_stop(struct roc_nix *roc_nix, bool start); int __roc_api roc_nix_mac_link_event_start_stop(struct roc_nix *roc_nix, diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index edc3ff1..3b11090 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -326,14 +326,28 @@ int nix_tm_leaf_data_get(struct nix *nix, uint16_t sq, uint32_t *rr_quantum, int nix_tm_sq_flush_pre(struct roc_nix_sq *sq); int nix_tm_sq_flush_post(struct roc_nix_sq *sq); int nix_tm_smq_xoff(struct nix *nix, struct nix_tm_node *node, bool enable); +int nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node); +int nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, + enum roc_nix_tm_tree tree, bool free); +int nix_tm_free_node_resource(struct nix *nix, struct nix_tm_node *node); int 
nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node); /* * TM priv utils. */ +uint16_t nix_tm_lvl2nix(struct nix *nix, uint32_t lvl); +uint16_t nix_tm_lvl2nix_tl1_root(uint32_t lvl); +uint16_t nix_tm_lvl2nix_tl2_root(uint32_t lvl); +uint16_t nix_tm_resource_avail(struct nix *nix, uint8_t hw_lvl, bool contig); +int nix_tm_validate_prio(struct nix *nix, uint32_t lvl, uint32_t parent_id, + uint32_t priority, enum roc_nix_tm_tree tree); struct nix_tm_node *nix_tm_node_search(struct nix *nix, uint32_t node_id, enum roc_nix_tm_tree tree); +struct nix_tm_shaper_profile *nix_tm_shaper_profile_search(struct nix *nix, + uint32_t id); uint8_t nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable, volatile uint64_t *reg, volatile uint64_t *regval); +struct nix_tm_node *nix_tm_node_alloc(void); +void nix_tm_node_free(struct nix_tm_node *node); #endif /* _ROC_NIX_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c index 4cafc0f..f103e02 100644 --- a/drivers/common/cnxk/roc_nix_tm.c +++ b/drivers/common/cnxk/roc_nix_tm.c @@ -5,6 +5,100 @@ #include "roc_api.h" #include "roc_priv.h" +int +nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_shaper_profile *profile; + uint32_t node_id, parent_id, lvl; + struct nix_tm_node *parent_node; + uint32_t priority, profile_id; + uint8_t hw_lvl, exp_next_lvl; + enum roc_nix_tm_tree tree; + int rc; + + node_id = node->id; + priority = node->priority; + parent_id = node->parent_id; + profile_id = node->shaper_profile_id; + lvl = node->lvl; + tree = node->tree; + + plt_tm_dbg("Add node %s lvl %u id %u, prio 0x%x weight 0x%x " + "parent %u profile 0x%x tree %u", + nix_tm_hwlvl2str(nix_tm_lvl2nix(nix, lvl)), lvl, node_id, + priority, node->weight, parent_id, profile_id, tree); + + if (tree >= ROC_NIX_TM_TREE_MAX) + return NIX_ERR_PARAM; + + /* Translate sw level id's to nix hw level id's */ + hw_lvl = 
nix_tm_lvl2nix(nix, lvl);
+	if (hw_lvl == NIX_TXSCH_LVL_CNT && !nix_tm_is_leaf(nix, lvl))
+		return NIX_ERR_TM_INVALID_LVL;
+
+	/* Leaf nodes have to be the same priority */
+	if (nix_tm_is_leaf(nix, lvl) && priority != 0)
+		return NIX_ERR_TM_INVALID_PRIO;
+
+	parent_node = nix_tm_node_search(nix, parent_id, tree);
+
+	if (node_id < nix->nb_tx_queues)
+		exp_next_lvl = NIX_TXSCH_LVL_SMQ;
+	else
+		exp_next_lvl = hw_lvl + 1;
+
+	/* Check if there is no parent node yet */
+	if (hw_lvl != nix->tm_root_lvl &&
+	    (!parent_node || parent_node->hw_lvl != exp_next_lvl))
+		return NIX_ERR_TM_INVALID_PARENT;
+
+	/* Check if a node already exists */
+	if (nix_tm_node_search(nix, node_id, tree))
+		return NIX_ERR_TM_NODE_EXISTS;
+
+	profile = nix_tm_shaper_profile_search(nix, profile_id);
+	if (!nix_tm_is_leaf(nix, lvl)) {
+		/* Check if shaper profile exists for non-leaf node */
+		if (!profile && profile_id != ROC_NIX_TM_SHAPER_PROFILE_NONE)
+			return NIX_ERR_TM_INVALID_SHAPER_PROFILE;
+
+		/* Packet mode in profile should match that of the tm node */
+		if (profile && profile->pkt_mode != node->pkt_mode)
+			return NIX_ERR_TM_PKT_MODE_MISMATCH;
+	}
+
+	/* Check if there is a second DWRR group already among siblings,
+	 * or holes in priorities.
+	 */
+	rc = nix_tm_validate_prio(nix, lvl, parent_id, priority, tree);
+	if (rc)
+		return rc;
+
+	if (node->weight > ROC_NIX_TM_MAX_SCHED_WT)
+		return NIX_ERR_TM_WEIGHT_EXCEED;
+
+	/* Maintain minimum weight */
+	if (!node->weight)
+		node->weight = 1;
+
+	node->hw_lvl = nix_tm_lvl2nix(nix, lvl);
+	node->rr_prio = 0xF;
+	node->max_prio = UINT32_MAX;
+	node->hw_id = NIX_TM_HW_ID_INVALID;
+	node->flags = 0;
+
+	if (profile)
+		profile->ref_cnt++;
+
+	node->parent = parent_node;
+	if (parent_node)
+		parent_node->child_realloc = true;
+	node->parent_hw_id = NIX_TM_HW_ID_INVALID;
+
+	TAILQ_INSERT_TAIL(&nix->trees[tree], node, node);
+	plt_tm_dbg("Added node %s lvl %u id %u (%p)",
+		   nix_tm_hwlvl2str(node->hw_lvl), lvl, node_id, node);
+	return 0;
+}
 
 int
 nix_tm_clear_path_xoff(struct
nix *nix, struct nix_tm_node *node)
@@ -321,6 +415,115 @@ nix_tm_sq_flush_post(struct roc_nix_sq *sq)
 }
 
 int
+nix_tm_free_node_resource(struct nix *nix, struct nix_tm_node *node)
+{
+	struct mbox *mbox = (&nix->dev)->mbox;
+	struct nix_txsch_free_req *req;
+	struct plt_bitmap *bmp;
+	uint16_t avail, hw_id;
+	uint8_t hw_lvl;
+	int rc = -ENOSPC;
+
+	hw_lvl = node->hw_lvl;
+	hw_id = node->hw_id;
+	bmp = nix->schq_bmp[hw_lvl];
+	/* Free specific HW resource */
+	plt_tm_dbg("Free hwres %s(%u) lvl %u id %u (%p)",
+		   nix_tm_hwlvl2str(node->hw_lvl), hw_id, node->lvl, node->id,
+		   node);
+
+	avail = nix_tm_resource_avail(nix, hw_lvl, false);
+	/* For now, always free to the discontiguous bitmap when the
+	 * available count is insufficient.
+	 */
+	if (nix->discontig_rsvd[hw_lvl] &&
+	    avail < nix->discontig_rsvd[hw_lvl]) {
+		PLT_ASSERT(hw_id < NIX_TM_MAX_HW_TXSCHQ);
+		PLT_ASSERT(plt_bitmap_get(bmp, hw_id) == 0);
+		plt_bitmap_set(bmp, hw_id);
+		node->hw_id = NIX_TM_HW_ID_INVALID;
+		node->flags &= ~NIX_TM_NODE_HWRES;
+		return 0;
+	}
+
+	/* Free to AF */
+	req = mbox_alloc_msg_nix_txsch_free(mbox);
+	if (req == NULL)
+		return rc;
+	req->flags = 0;
+	req->schq_lvl = node->hw_lvl;
+	req->schq = hw_id;
+	rc = mbox_process(mbox);
+	if (rc) {
+		plt_err("failed to release hwres %s(%u) rc %d",
+			nix_tm_hwlvl2str(node->hw_lvl), hw_id, rc);
+		return rc;
+	}
+
+	/* Mark parent as dirty for reallocating its children */
+	if (node->parent)
+		node->parent->child_realloc = true;
+
+	node->hw_id = NIX_TM_HW_ID_INVALID;
+	node->flags &= ~NIX_TM_NODE_HWRES;
+	plt_tm_dbg("Released hwres %s(%u) to af",
+		   nix_tm_hwlvl2str(node->hw_lvl), hw_id);
+	return 0;
+}
+
+int
+nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
+		   enum roc_nix_tm_tree tree, bool free)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct nix_tm_shaper_profile *profile;
+	struct nix_tm_node *node, *child;
+	struct nix_tm_node_list *list;
+	uint32_t profile_id;
+	int rc;
+
+	plt_tm_dbg("Delete node id %u tree %u", node_id, tree);
+
+ node = nix_tm_node_search(nix, node_id, tree); + if (!node) + return NIX_ERR_TM_INVALID_NODE; + + list = nix_tm_node_list(nix, tree); + /* Check for any existing children */ + TAILQ_FOREACH(child, list, node) { + if (child->parent == node) + return NIX_ERR_TM_CHILD_EXISTS; + } + + /* Remove shaper profile reference */ + profile_id = node->shaper_profile_id; + profile = nix_tm_shaper_profile_search(nix, profile_id); + + /* Free hw resource locally */ + if (node->flags & NIX_TM_NODE_HWRES) { + rc = nix_tm_free_node_resource(nix, node); + if (rc) + return rc; + } + + if (profile) + profile->ref_cnt--; + + TAILQ_REMOVE(list, node, node); + + plt_tm_dbg("Deleted node %s lvl %u id %u, prio 0x%x weight 0x%x " + "parent %u profile 0x%x tree %u (%p)", + nix_tm_hwlvl2str(node->hw_lvl), node->lvl, node->id, + node->priority, node->weight, + node->parent ? node->parent->id : UINT32_MAX, + node->shaper_profile_id, tree, node); + /* Free only if requested */ + if (free) + nix_tm_node_free(node); + return 0; +} + +int nix_tm_conf_init(struct roc_nix *roc_nix) { struct nix *nix = roc_nix_to_nix_priv(roc_nix); @@ -328,6 +531,8 @@ nix_tm_conf_init(struct roc_nix *roc_nix) void *bmp_mem; int rc, i; + PLT_STATIC_ASSERT(sizeof(struct nix_tm_node) <= ROC_NIX_TM_NODE_SZ); + nix->tm_flags = 0; for (i = 0; i < ROC_NIX_TM_TREE_MAX; i++) TAILQ_INIT(&nix->trees[i]); diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c index e2b6d02..d0941c0 100644 --- a/drivers/common/cnxk/roc_nix_tm_ops.c +++ b/drivers/common/cnxk/roc_nix_tm_ops.c @@ -65,3 +65,91 @@ roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable) plt_wmb(); return 0; } + +int +roc_nix_tm_node_add(struct roc_nix *roc_nix, struct roc_nix_tm_node *roc_node) +{ + struct nix_tm_node *node; + + node = (struct nix_tm_node *)&roc_node->reserved; + node->id = roc_node->id; + node->priority = roc_node->priority; + node->weight = roc_node->weight; + node->lvl = roc_node->lvl; + node->parent_id = 
roc_node->parent_id;
+	node->shaper_profile_id = roc_node->shaper_profile_id;
+	node->pkt_mode = roc_node->pkt_mode;
+	node->pkt_mode_set = roc_node->pkt_mode_set;
+	node->free_fn = roc_node->free_fn;
+	node->tree = ROC_NIX_TM_USER;
+
+	return nix_tm_node_add(roc_nix, node);
+}
+
+int
+roc_nix_tm_node_pkt_mode_update(struct roc_nix *roc_nix, uint32_t node_id,
+				bool pkt_mode)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct nix_tm_node *node, *child;
+	struct nix_tm_node_list *list;
+	int num_children = 0;
+
+	node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER);
+	if (!node)
+		return NIX_ERR_TM_INVALID_NODE;
+
+	if (node->pkt_mode == pkt_mode) {
+		node->pkt_mode_set = true;
+		return 0;
+	}
+
+	/* Check for any existing children; if there are any, we cannot
+	 * update the pkt mode, as the children's quantum values are
+	 * already accounted for.
+	 */
+	list = nix_tm_node_list(nix, ROC_NIX_TM_USER);
+	TAILQ_FOREACH(child, list, node) {
+		if (child->parent == node)
+			num_children++;
+	}
+
+	/* Cannot update mode if the tree is enabled and it has children */
+	if ((nix->tm_flags & NIX_TM_HIERARCHY_ENA) && num_children)
+		return -EBUSY;
+
+	if (node->pkt_mode_set && num_children)
+		return NIX_ERR_TM_PKT_MODE_MISMATCH;
+
+	node->pkt_mode = pkt_mode;
+	node->pkt_mode_set = true;
+
+	return 0;
+}
+
+int
+roc_nix_tm_node_name_get(struct roc_nix *roc_nix, uint32_t node_id, char *buf,
+			 size_t buflen)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct nix_tm_node *node;
+
+	node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER);
+	if (!node) {
+		plt_strlcpy(buf, "???", buflen);
+		return NIX_ERR_TM_INVALID_NODE;
+	}
+
+	if (node->hw_lvl == NIX_TXSCH_LVL_CNT)
+		snprintf(buf, buflen, "SQ_%d", node->id);
+	else
+		snprintf(buf, buflen, "%s_%d", nix_tm_hwlvl2str(node->hw_lvl),
+			 node->hw_id);
+	return 0;
+}
+
+int
+roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, bool free)
+{
+	return nix_tm_node_delete(roc_nix, node_id, ROC_NIX_TM_USER, free);
+}
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c index a3f683e..bea23be 100644 --- a/drivers/common/cnxk/roc_nix_tm_utils.c +++ b/drivers/common/cnxk/roc_nix_tm_utils.c @@ -5,6 +5,64 @@ #include "roc_api.h" #include "roc_priv.h" +uint16_t +nix_tm_lvl2nix_tl1_root(uint32_t lvl) +{ + switch (lvl) { + case ROC_TM_LVL_ROOT: + return NIX_TXSCH_LVL_TL1; + case ROC_TM_LVL_SCH1: + return NIX_TXSCH_LVL_TL2; + case ROC_TM_LVL_SCH2: + return NIX_TXSCH_LVL_TL3; + case ROC_TM_LVL_SCH3: + return NIX_TXSCH_LVL_TL4; + case ROC_TM_LVL_SCH4: + return NIX_TXSCH_LVL_SMQ; + default: + return NIX_TXSCH_LVL_CNT; + } +} + +uint16_t +nix_tm_lvl2nix_tl2_root(uint32_t lvl) +{ + switch (lvl) { + case ROC_TM_LVL_ROOT: + return NIX_TXSCH_LVL_TL2; + case ROC_TM_LVL_SCH1: + return NIX_TXSCH_LVL_TL3; + case ROC_TM_LVL_SCH2: + return NIX_TXSCH_LVL_TL4; + case ROC_TM_LVL_SCH3: + return NIX_TXSCH_LVL_SMQ; + default: + return NIX_TXSCH_LVL_CNT; + } +} + +uint16_t +nix_tm_lvl2nix(struct nix *nix, uint32_t lvl) +{ + if (nix_tm_have_tl1_access(nix)) + return nix_tm_lvl2nix_tl1_root(lvl); + else + return nix_tm_lvl2nix_tl2_root(lvl); +} + + +struct nix_tm_shaper_profile * +nix_tm_shaper_profile_search(struct nix *nix, uint32_t id) +{ + struct nix_tm_shaper_profile *profile; + + TAILQ_FOREACH(profile, &nix->shaper_profile_list, shaper) { + if (profile->id == id) + return profile; + } + return NULL; +} + struct nix_tm_node * nix_tm_node_search(struct nix *nix, uint32_t node_id, enum roc_nix_tm_tree tree) { @@ -19,6 +77,70 @@ nix_tm_node_search(struct nix *nix, uint32_t node_id, enum roc_nix_tm_tree tree) return NULL; } +static uint16_t +nix_tm_max_prio(struct nix *nix, uint16_t hw_lvl) +{ + if (hw_lvl >= NIX_TXSCH_LVL_CNT) + return 0; + + /* MDQ does not support SP */ + if (hw_lvl == NIX_TXSCH_LVL_MDQ) + return 0; + + /* PF's TL1 with VF's enabled does not support SP */ + if (hw_lvl == NIX_TXSCH_LVL_TL1 && (!nix_tm_have_tl1_access(nix) || + (nix->tm_flags & 
NIX_TM_TL1_NO_SP)))
+		return 0;
+
+	return NIX_TM_TLX_SP_PRIO_MAX - 1;
+}
+
+int
+nix_tm_validate_prio(struct nix *nix, uint32_t lvl, uint32_t parent_id,
+		     uint32_t priority, enum roc_nix_tm_tree tree)
+{
+	uint8_t priorities[NIX_TM_TLX_SP_PRIO_MAX];
+	struct nix_tm_node_list *list;
+	struct nix_tm_node *node;
+	uint32_t rr_num = 0;
+	int i;
+
+	list = nix_tm_node_list(nix, tree);
+	/* Validate priority against max */
+	if (priority > nix_tm_max_prio(nix, nix_tm_lvl2nix(nix, lvl - 1)))
+		return NIX_ERR_TM_PRIO_EXCEEDED;
+
+	if (parent_id == ROC_NIX_TM_NODE_ID_INVALID)
+		return 0;
+
+	memset(priorities, 0, sizeof(priorities));
+	priorities[priority] = 1;
+
+	TAILQ_FOREACH(node, list, node) {
+		if (!node->parent)
+			continue;
+
+		if (node->parent->id != parent_id)
+			continue;
+
+		priorities[node->priority]++;
+	}
+
+	for (i = 0; i < NIX_TM_TLX_SP_PRIO_MAX; i++)
+		if (priorities[i] > 1)
+			rr_num++;
+
+	/* At most one RR group per parent */
+	if (rr_num > 1)
+		return NIX_ERR_TM_MULTIPLE_RR_GROUPS;
+
+	/* Check for previous priority to avoid holes in priorities */
+	if (priority && !priorities[priority - 1])
+		return NIX_ERR_TM_PRIO_ORDER;
+
+	return 0;
+}
+
 uint8_t
 nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
		    volatile uint64_t *reg, volatile uint64_t *regval)
@@ -60,3 +182,93 @@ nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
 
	return k;
 }
+
+uint16_t
+nix_tm_resource_avail(struct nix *nix, uint8_t hw_lvl, bool contig)
+{
+	uint32_t pos = 0, start_pos = 0;
+	struct plt_bitmap *bmp;
+	uint16_t count = 0;
+	uint64_t slab = 0;
+
+	bmp = contig ?
nix->schq_contig_bmp[hw_lvl] : nix->schq_bmp[hw_lvl]; + plt_bitmap_scan_init(bmp); + + if (!plt_bitmap_scan(bmp, &pos, &slab)) + return count; + + /* Count bit set */ + start_pos = pos; + do { + count += __builtin_popcountll(slab); + if (!plt_bitmap_scan(bmp, &pos, &slab)) + break; + } while (pos != start_pos); + + return count; +} + +int +roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_node *node; + + node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER); + if (!node) + return NIX_ERR_TM_INVALID_NODE; + + return node->lvl; +} + +struct roc_nix_tm_node * +roc_nix_tm_node_get(struct roc_nix *roc_nix, uint32_t node_id) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_node *node; + + node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER); + return (struct roc_nix_tm_node *)node; +} + +struct roc_nix_tm_node * +roc_nix_tm_node_next(struct roc_nix *roc_nix, struct roc_nix_tm_node *__prev) +{ + struct nix_tm_node *prev = (struct nix_tm_node *)__prev; + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_node_list *list; + + list = nix_tm_node_list(nix, ROC_NIX_TM_USER); + + /* HEAD of the list */ + if (!prev) + return (struct roc_nix_tm_node *)TAILQ_FIRST(list); + + /* Next entry */ + if (prev->tree != ROC_NIX_TM_USER) + return NULL; + + return (struct roc_nix_tm_node *)TAILQ_NEXT(prev, node); +} + +struct nix_tm_node * +nix_tm_node_alloc(void) +{ + struct nix_tm_node *node; + + node = plt_zmalloc(sizeof(struct nix_tm_node), 0); + if (!node) + return NULL; + + node->free_fn = plt_free; + return node; +} + +void +nix_tm_node_free(struct nix_tm_node *node) +{ + if (!node || node->free_fn == NULL) + return; + + (node->free_fn)(node); +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 5b99467..dec0ad0 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -104,6 +104,13 @@ INTERNAL { 
roc_nix_xstats_names_get; roc_nix_switch_hdr_set; roc_nix_eeprom_info_get; + roc_nix_tm_node_add; + roc_nix_tm_node_delete; + roc_nix_tm_node_get; + roc_nix_tm_node_lvl; + roc_nix_tm_node_name_get; + roc_nix_tm_node_next; + roc_nix_tm_node_pkt_mode_update; roc_nix_tm_sq_aura_fc; roc_nix_tm_sq_flush_spin; roc_nix_unregister_cq_irqs;
From patchwork Thu Apr 1 12:37:59 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90407
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:07:59 +0530
Message-ID: <20210401123817.14348-35-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 34/52] common/cnxk: add nix tm shaper profile add support
From: Satha Rao
Add support to add/delete/update shaper profile for a given NIX. Also add support to walk through existing shaper profiles.
Signed-off-by: Nithin Dabilpuram Signed-off-by: Satha Rao --- drivers/common/cnxk/roc_nix.h | 25 +++++ drivers/common/cnxk/roc_nix_priv.h | 8 ++ drivers/common/cnxk/roc_nix_tm.c | 18 ++++ drivers/common/cnxk/roc_nix_tm_ops.c | 145 ++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix_tm_utils.c | 167 +++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 5 + 6 files changed, 368 insertions(+) diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index d656909..ea34cd2 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -353,17 +353,42 @@ struct roc_nix_tm_node { void (*free_fn)(void *node); }; +struct roc_nix_tm_shaper_profile { +#define ROC_NIX_TM_SHAPER_PROFILE_SZ (128) + uint8_t reserved[ROC_NIX_TM_SHAPER_PROFILE_SZ]; + + uint32_t id; + uint64_t commit_rate; + uint64_t commit_sz; + uint64_t peak_rate; + uint64_t peak_sz; + int32_t pkt_len_adj; + bool pkt_mode; + /* Function to free this memory */ + void (*free_fn)(void *profile); +}; + int __roc_api roc_nix_tm_node_add(struct roc_nix *roc_nix, struct roc_nix_tm_node *roc_node); int __roc_api roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, bool free); int __roc_api roc_nix_tm_node_pkt_mode_update(struct roc_nix *roc_nix, uint32_t node_id, bool pkt_mode); +int __roc_api roc_nix_tm_shaper_profile_add( + struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *profile); +int __roc_api roc_nix_tm_shaper_profile_update( + struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *profile); +int __roc_api roc_nix_tm_shaper_profile_delete(struct roc_nix *roc_nix, + uint32_t id); struct roc_nix_tm_node *__roc_api roc_nix_tm_node_get(struct roc_nix *roc_nix, uint32_t node_id); struct roc_nix_tm_node *__roc_api roc_nix_tm_node_next(struct roc_nix *roc_nix, struct roc_nix_tm_node *__prev); +struct roc_nix_tm_shaper_profile *__roc_api +roc_nix_tm_shaper_profile_get(struct roc_nix *roc_nix, uint32_t profile_id); +struct 
roc_nix_tm_shaper_profile *__roc_api roc_nix_tm_shaper_profile_next( + struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *__prev); /* * TM utilities API. diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index 3b11090..2e4aa4a 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -331,6 +331,7 @@ int nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, enum roc_nix_tm_tree tree, bool free); int nix_tm_free_node_resource(struct nix *nix, struct nix_tm_node *node); int nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node); +void nix_tm_clear_shaper_profiles(struct nix *nix); /* * TM priv utils. @@ -347,7 +348,14 @@ struct nix_tm_shaper_profile *nix_tm_shaper_profile_search(struct nix *nix, uint32_t id); uint8_t nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable, volatile uint64_t *reg, volatile uint64_t *regval); +uint64_t nix_tm_shaper_profile_rate_min(struct nix *nix); +uint64_t nix_tm_shaper_rate_conv(uint64_t value, uint64_t *exponent_p, + uint64_t *mantissa_p, uint64_t *div_exp_p); +uint64_t nix_tm_shaper_burst_conv(uint64_t value, uint64_t *exponent_p, + uint64_t *mantissa_p); struct nix_tm_node *nix_tm_node_alloc(void); void nix_tm_node_free(struct nix_tm_node *node); +struct nix_tm_shaper_profile *nix_tm_shaper_profile_alloc(void); +void nix_tm_shaper_profile_free(struct nix_tm_shaper_profile *profile); #endif /* _ROC_NIX_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c index f103e02..d2e2250 100644 --- a/drivers/common/cnxk/roc_nix_tm.c +++ b/drivers/common/cnxk/roc_nix_tm.c @@ -5,6 +5,22 @@ #include "roc_api.h" #include "roc_priv.h" +void +nix_tm_clear_shaper_profiles(struct nix *nix) +{ + struct nix_tm_shaper_profile *shaper_profile; + + shaper_profile = TAILQ_FIRST(&nix->shaper_profile_list); + while (shaper_profile != NULL) { + if (shaper_profile->ref_cnt) + plt_warn("Shaper profile %u has non zero 
references", + shaper_profile->id); + TAILQ_REMOVE(&nix->shaper_profile_list, shaper_profile, shaper); + nix_tm_shaper_profile_free(shaper_profile); + shaper_profile = TAILQ_FIRST(&nix->shaper_profile_list); + } +} + int nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node) { @@ -532,6 +548,8 @@ nix_tm_conf_init(struct roc_nix *roc_nix) int rc, i; PLT_STATIC_ASSERT(sizeof(struct nix_tm_node) <= ROC_NIX_TM_NODE_SZ); + PLT_STATIC_ASSERT(sizeof(struct nix_tm_shaper_profile) <= + ROC_NIX_TM_SHAPER_PROFILE_SZ); nix->tm_flags = 0; for (i = 0; i < ROC_NIX_TM_TREE_MAX; i++) diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c index d0941c0..896ddf2 100644 --- a/drivers/common/cnxk/roc_nix_tm_ops.c +++ b/drivers/common/cnxk/roc_nix_tm_ops.c @@ -66,6 +66,151 @@ roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable) return 0; } +static int +nix_tm_shaper_profile_add(struct roc_nix *roc_nix, + struct nix_tm_shaper_profile *profile, int skip_ins) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint64_t commit_rate, commit_sz; + uint64_t peak_rate, peak_sz; + uint32_t id; + + id = profile->id; + commit_rate = profile->commit.rate; + commit_sz = profile->commit.size; + peak_rate = profile->peak.rate; + peak_sz = profile->peak.size; + + if (nix_tm_shaper_profile_search(nix, id) && !skip_ins) + return NIX_ERR_TM_SHAPER_PROFILE_EXISTS; + + if (profile->pkt_len_adj < NIX_TM_LENGTH_ADJUST_MIN || + profile->pkt_len_adj > NIX_TM_LENGTH_ADJUST_MAX) + return NIX_ERR_TM_SHAPER_PKT_LEN_ADJUST; + + /* We cannot support both pkt length adjust and pkt mode */ + if (profile->pkt_mode && profile->pkt_len_adj) + return NIX_ERR_TM_SHAPER_PKT_LEN_ADJUST; + + /* commit rate and burst size can be enabled/disabled */ + if (commit_rate || commit_sz) { + if (commit_sz < NIX_TM_MIN_SHAPER_BURST || + commit_sz > NIX_TM_MAX_SHAPER_BURST) + return NIX_ERR_TM_INVALID_COMMIT_SZ; + else if (!nix_tm_shaper_rate_conv(commit_rate, NULL, NULL, + NULL)) + 
return NIX_ERR_TM_INVALID_COMMIT_RATE; + } + + /* Peak rate and burst size can be enabled/disabled */ + if (peak_sz || peak_rate) { + if (peak_sz < NIX_TM_MIN_SHAPER_BURST || + peak_sz > NIX_TM_MAX_SHAPER_BURST) + return NIX_ERR_TM_INVALID_PEAK_SZ; + else if (!nix_tm_shaper_rate_conv(peak_rate, NULL, NULL, NULL)) + return NIX_ERR_TM_INVALID_PEAK_RATE; + } + + if (!skip_ins) + TAILQ_INSERT_TAIL(&nix->shaper_profile_list, profile, shaper); + + plt_tm_dbg("Added TM shaper profile %u, " + " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64 + ", cbs %" PRIu64 " , adj %u, pkt_mode %u", + id, profile->peak.rate, profile->peak.size, + profile->commit.rate, profile->commit.size, + profile->pkt_len_adj, profile->pkt_mode); + + /* Always use PIR for single rate shaping */ + if (!peak_rate && commit_rate) { + profile->peak.rate = profile->commit.rate; + profile->peak.size = profile->commit.size; + profile->commit.rate = 0; + profile->commit.size = 0; + } + + /* update min rate */ + nix->tm_rate_min = nix_tm_shaper_profile_rate_min(nix); + return 0; +} + +int +roc_nix_tm_shaper_profile_add(struct roc_nix *roc_nix, + struct roc_nix_tm_shaper_profile *roc_profile) +{ + struct nix_tm_shaper_profile *profile; + + profile = (struct nix_tm_shaper_profile *)roc_profile->reserved; + + profile->ref_cnt = 0; + profile->id = roc_profile->id; + if (roc_profile->pkt_mode) { + /* Each packet accumulates a single count, whereas HW + * considers each unit as a byte, so we need to convert + * user pps to bps + */ + profile->commit.rate = roc_profile->commit_rate * 8; + profile->peak.rate = roc_profile->peak_rate * 8; + } else { + profile->commit.rate = roc_profile->commit_rate; + profile->peak.rate = roc_profile->peak_rate; + } + profile->commit.size = roc_profile->commit_sz; + profile->peak.size = roc_profile->peak_sz; + profile->pkt_len_adj = roc_profile->pkt_len_adj; + profile->pkt_mode = roc_profile->pkt_mode; + profile->free_fn = roc_profile->free_fn; + + return nix_tm_shaper_profile_add(roc_nix, 
profile, 0); +} + +int +roc_nix_tm_shaper_profile_update(struct roc_nix *roc_nix, + struct roc_nix_tm_shaper_profile *roc_profile) +{ + struct nix_tm_shaper_profile *profile; + + profile = (struct nix_tm_shaper_profile *)roc_profile->reserved; + + if (roc_profile->pkt_mode) { + /* Each packet accumulates a single count, whereas HW + * considers each unit as a byte, so we need to convert + * user pps to bps + */ + profile->commit.rate = roc_profile->commit_rate * 8; + profile->peak.rate = roc_profile->peak_rate * 8; + } else { + profile->commit.rate = roc_profile->commit_rate; + profile->peak.rate = roc_profile->peak_rate; + } + profile->commit.size = roc_profile->commit_sz; + profile->peak.size = roc_profile->peak_sz; + + return nix_tm_shaper_profile_add(roc_nix, profile, 1); +} + +int +roc_nix_tm_shaper_profile_delete(struct roc_nix *roc_nix, uint32_t id) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_shaper_profile *profile; + + profile = nix_tm_shaper_profile_search(nix, id); + if (!profile) + return NIX_ERR_TM_INVALID_SHAPER_PROFILE; + + if (profile->ref_cnt) + return NIX_ERR_TM_SHAPER_PROFILE_IN_USE; + + plt_tm_dbg("Removing TM shaper profile %u", id); + TAILQ_REMOVE(&nix->shaper_profile_list, profile, shaper); + nix_tm_shaper_profile_free(profile); + + /* update min rate */ + nix->tm_rate_min = nix_tm_shaper_profile_rate_min(nix); + return 0; +} + int roc_nix_tm_node_add(struct roc_nix *roc_nix, struct roc_nix_tm_node *roc_node) { diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c index bea23be..7a7e786 100644 --- a/drivers/common/cnxk/roc_nix_tm_utils.c +++ b/drivers/common/cnxk/roc_nix_tm_utils.c @@ -77,6 +77,106 @@ nix_tm_node_search(struct nix *nix, uint32_t node_id, enum roc_nix_tm_tree tree) return NULL; } +uint64_t +nix_tm_shaper_rate_conv(uint64_t value, uint64_t *exponent_p, + uint64_t *mantissa_p, uint64_t *div_exp_p) +{ + uint64_t div_exp, exponent, mantissa; + + /* Boundary checks */ + if 
(value < NIX_TM_MIN_SHAPER_RATE || value > NIX_TM_MAX_SHAPER_RATE) + return 0; + + if (value <= NIX_TM_SHAPER_RATE(0, 0, 0)) { + /* Calculate rate div_exp and mantissa using + * the following formula: + * + * value = (2E6 * (256 + mantissa) + * / ((1 << div_exp) * 256)) + */ + div_exp = 0; + exponent = 0; + mantissa = NIX_TM_MAX_RATE_MANTISSA; + + while (value < (NIX_TM_SHAPER_RATE_CONST / (1 << div_exp))) + div_exp += 1; + + while (value < ((NIX_TM_SHAPER_RATE_CONST * (256 + mantissa)) / + ((1 << div_exp) * 256))) + mantissa -= 1; + } else { + /* Calculate rate exponent and mantissa using + * the following formula: + * + * value = (2E6 * ((256 + mantissa) << exponent)) / 256 + * + */ + div_exp = 0; + exponent = NIX_TM_MAX_RATE_EXPONENT; + mantissa = NIX_TM_MAX_RATE_MANTISSA; + + while (value < (NIX_TM_SHAPER_RATE_CONST * (1 << exponent))) + exponent -= 1; + + while (value < ((NIX_TM_SHAPER_RATE_CONST * + ((256 + mantissa) << exponent)) / + 256)) + mantissa -= 1; + } + + if (div_exp > NIX_TM_MAX_RATE_DIV_EXP || + exponent > NIX_TM_MAX_RATE_EXPONENT || + mantissa > NIX_TM_MAX_RATE_MANTISSA) + return 0; + + if (div_exp_p) + *div_exp_p = div_exp; + if (exponent_p) + *exponent_p = exponent; + if (mantissa_p) + *mantissa_p = mantissa; + + /* Calculate real rate value */ + return NIX_TM_SHAPER_RATE(exponent, mantissa, div_exp); +} + +uint64_t +nix_tm_shaper_burst_conv(uint64_t value, uint64_t *exponent_p, + uint64_t *mantissa_p) +{ + uint64_t exponent, mantissa; + + if (value < NIX_TM_MIN_SHAPER_BURST || value > NIX_TM_MAX_SHAPER_BURST) + return 0; + + /* Calculate burst exponent and mantissa using + * the following formula: + * + * value = (((256 + mantissa) << (exponent + 1)) / 256) + * + */ + exponent = NIX_TM_MAX_BURST_EXPONENT; + mantissa = NIX_TM_MAX_BURST_MANTISSA; + + while (value < (1ull << (exponent + 1))) + exponent -= 1; + + while (value < ((256 + mantissa) << (exponent + 1)) / 256) + mantissa -= 1; + + if (exponent > NIX_TM_MAX_BURST_EXPONENT || + mantissa 
> NIX_TM_MAX_BURST_MANTISSA) + return 0; + + if (exponent_p) + *exponent_p = exponent; + if (mantissa_p) + *mantissa_p = mantissa; + + return NIX_TM_SHAPER_BURST(exponent, mantissa); +} + static uint16_t nix_tm_max_prio(struct nix *nix, uint16_t hw_lvl) { @@ -183,6 +283,23 @@ nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable, return k; } +/* Search for min rate in topology */ +uint64_t +nix_tm_shaper_profile_rate_min(struct nix *nix) +{ + struct nix_tm_shaper_profile *profile; + uint64_t rate_min = 1E9; /* 1 Gbps */ + + TAILQ_FOREACH(profile, &nix->shaper_profile_list, shaper) { + if (profile->peak.rate && profile->peak.rate < rate_min) + rate_min = profile->peak.rate; + + if (profile->commit.rate && profile->commit.rate < rate_min) + rate_min = profile->commit.rate; + } + return rate_min; +} + uint16_t nix_tm_resource_avail(struct nix *nix, uint8_t hw_lvl, bool contig) { @@ -251,6 +368,34 @@ roc_nix_tm_node_next(struct roc_nix *roc_nix, struct roc_nix_tm_node *__prev) return (struct roc_nix_tm_node *)TAILQ_NEXT(prev, node); } +struct roc_nix_tm_shaper_profile * +roc_nix_tm_shaper_profile_get(struct roc_nix *roc_nix, uint32_t profile_id) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_shaper_profile *profile; + + profile = nix_tm_shaper_profile_search(nix, profile_id); + return (struct roc_nix_tm_shaper_profile *)profile; +} + +struct roc_nix_tm_shaper_profile * +roc_nix_tm_shaper_profile_next(struct roc_nix *roc_nix, + struct roc_nix_tm_shaper_profile *__prev) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_shaper_profile_list *list; + struct nix_tm_shaper_profile *prev; + + prev = (struct nix_tm_shaper_profile *)__prev; + list = &nix->shaper_profile_list; + + /* HEAD of the list */ + if (!prev) + return (struct roc_nix_tm_shaper_profile *)TAILQ_FIRST(list); + + return (struct roc_nix_tm_shaper_profile *)TAILQ_NEXT(prev, shaper); +} + struct nix_tm_node * nix_tm_node_alloc(void) { @@ -272,3 +417,25 @@ 
nix_tm_node_free(struct nix_tm_node *node) (node->free_fn)(node); } + +struct nix_tm_shaper_profile * +nix_tm_shaper_profile_alloc(void) +{ + struct nix_tm_shaper_profile *profile; + + profile = plt_zmalloc(sizeof(struct nix_tm_shaper_profile), 0); + if (!profile) + return NULL; + + profile->free_fn = plt_free; + return profile; +} + +void +nix_tm_shaper_profile_free(struct nix_tm_shaper_profile *profile) +{ + if (!profile || !profile->free_fn) + return; + + (profile->free_fn)(profile); +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index dec0ad0..c01cd1a 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -111,6 +111,11 @@ INTERNAL { roc_nix_tm_node_name_get; roc_nix_tm_node_next; roc_nix_tm_node_pkt_mode_update; + roc_nix_tm_shaper_profile_add; + roc_nix_tm_shaper_profile_delete; + roc_nix_tm_shaper_profile_get; + roc_nix_tm_shaper_profile_next; + roc_nix_tm_shaper_profile_update; roc_nix_tm_sq_aura_fc; roc_nix_tm_sq_flush_spin; roc_nix_unregister_cq_irqs;
From patchwork Thu Apr 1 12:38:00 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90408
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:00 +0530
Message-ID: <20210401123817.14348-36-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 35/52] common/cnxk: add nix tm helper to alloc and free resource
Add TM helper API to estimate, alloc, assign, and free resources for a NIX LF / ethdev. Signed-off-by: Nithin Dabilpuram --- drivers/common/cnxk/roc_nix.h | 1 + drivers/common/cnxk/roc_nix_priv.h | 16 ++ drivers/common/cnxk/roc_nix_tm.c | 461 +++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix_tm_ops.c | 11 + drivers/common/cnxk/roc_nix_tm_utils.c | 133 ++++++++++ drivers/common/cnxk/version.map | 1 + 6 files changed, 623 insertions(+) diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index ea34cd2..52e001c 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -372,6 +372,7 @@ int __roc_api roc_nix_tm_node_add(struct roc_nix *roc_nix, struct roc_nix_tm_node *roc_node); int __roc_api roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, bool free); +int __roc_api roc_nix_tm_free_resources(struct roc_nix *roc_nix, bool hw_only); int __roc_api roc_nix_tm_node_pkt_mode_update(struct roc_nix *roc_nix, uint32_t node_id, bool pkt_mode); int __roc_api roc_nix_tm_shaper_profile_add( diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index 2e4aa4a..5110967 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -330,8 +330,17 @@ int nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node); int nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, enum roc_nix_tm_tree tree, bool free); int nix_tm_free_node_resource(struct nix *nix, struct nix_tm_node *node); +int nix_tm_free_resources(struct roc_nix *roc_nix, uint32_t tree_mask, + bool hw_only); int nix_tm_clear_path_xoff(struct nix *nix, struct 
nix_tm_node *node); void nix_tm_clear_shaper_profiles(struct nix *nix); +int nix_tm_alloc_txschq(struct nix *nix, enum roc_nix_tm_tree tree); +int nix_tm_assign_resources(struct nix *nix, enum roc_nix_tm_tree tree); +int nix_tm_release_resources(struct nix *nix, uint8_t hw_lvl, bool contig, + bool above_thresh); +void nix_tm_copy_rsp_to_nix(struct nix *nix, struct nix_txsch_alloc_rsp *rsp); + +int nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree); /* * TM priv utils. @@ -348,11 +357,18 @@ struct nix_tm_shaper_profile *nix_tm_shaper_profile_search(struct nix *nix, uint32_t id); uint8_t nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable, volatile uint64_t *reg, volatile uint64_t *regval); +uint32_t nix_tm_check_rr(struct nix *nix, uint32_t parent_id, + enum roc_nix_tm_tree tree, uint32_t *rr_prio, + uint32_t *max_prio); uint64_t nix_tm_shaper_profile_rate_min(struct nix *nix); uint64_t nix_tm_shaper_rate_conv(uint64_t value, uint64_t *exponent_p, uint64_t *mantissa_p, uint64_t *div_exp_p); uint64_t nix_tm_shaper_burst_conv(uint64_t value, uint64_t *exponent_p, uint64_t *mantissa_p); +bool nix_tm_child_res_valid(struct nix_tm_node_list *list, + struct nix_tm_node *parent); +uint16_t nix_tm_resource_estimate(struct nix *nix, uint16_t *schq_contig, + uint16_t *schq, enum roc_nix_tm_tree tree); struct nix_tm_node *nix_tm_node_alloc(void); void nix_tm_node_free(struct nix_tm_node *node); struct nix_tm_shaper_profile *nix_tm_shaper_profile_alloc(void); diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c index d2e2250..581de4b 100644 --- a/drivers/common/cnxk/roc_nix_tm.c +++ b/drivers/common/cnxk/roc_nix_tm.c @@ -5,6 +5,15 @@ #include "roc_api.h" #include "roc_priv.h" +static inline int +bitmap_ctzll(uint64_t slab) +{ + if (slab == 0) + return 0; + + return __builtin_ctzll(slab); +} + void nix_tm_clear_shaper_profiles(struct nix *nix) { @@ -22,6 +31,44 @@ nix_tm_clear_shaper_profiles(struct nix *nix) } int 
+nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree) +{ + struct nix_tm_node *child, *parent; + struct nix_tm_node_list *list; + uint32_t rr_prio, max_prio; + uint32_t rr_num = 0; + + list = nix_tm_node_list(nix, tree); + + /* Release all the node HW resources locally + * if the parent is marked dirty and the resource exists. + */ + TAILQ_FOREACH(child, list, node) { + /* Release resource only if parent direct hierarchy changed */ + if (child->flags & NIX_TM_NODE_HWRES && child->parent && + child->parent->child_realloc) { + nix_tm_free_node_resource(nix, child); + } + child->max_prio = UINT32_MAX; + } + + TAILQ_FOREACH(parent, list, node) { + /* Count the group of children with the same priority, + * i.e., the RR group. + */ + rr_num = nix_tm_check_rr(nix, parent->id, tree, &rr_prio, + &max_prio); + + /* Assuming that multiple RR groups are + * not configured based on capability. + */ + parent->rr_prio = rr_prio; + parent->rr_num = rr_num; + parent->max_prio = max_prio; + } + + return 0; +} + +int nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node) { struct nix *nix = roc_nix_to_nix_priv(roc_nix); @@ -431,6 +478,71 @@ nix_tm_sq_flush_post(struct roc_nix_sq *sq) } int +nix_tm_release_resources(struct nix *nix, uint8_t hw_lvl, bool contig, + bool above_thresh) +{ + uint16_t avail, thresh, to_free = 0, schq; + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_txsch_free_req *req; + struct plt_bitmap *bmp; + uint64_t slab = 0; + uint32_t pos = 0; + int rc = -ENOSPC; + + bmp = contig ? nix->schq_contig_bmp[hw_lvl] : nix->schq_bmp[hw_lvl]; + thresh = + contig ? 
nix->contig_rsvd[hw_lvl] : nix->discontig_rsvd[hw_lvl]; + plt_bitmap_scan_init(bmp); + + avail = nix_tm_resource_avail(nix, hw_lvl, contig); + + if (above_thresh) { + /* Release only above threshold */ + if (avail > thresh) + to_free = avail - thresh; + } else { + /* Release everything */ + to_free = avail; + } + + /* Now release resources to AF */ + while (to_free) { + if (!slab && !plt_bitmap_scan(bmp, &pos, &slab)) + break; + + schq = bitmap_ctzll(slab); + slab &= ~(1ULL << schq); + schq += pos; + + /* Free to AF */ + req = mbox_alloc_msg_nix_txsch_free(mbox); + if (req == NULL) + return rc; + req->flags = 0; + req->schq_lvl = hw_lvl; + req->schq = schq; + rc = mbox_process(mbox); + if (rc) { + plt_err("failed to release hwres %s(%u) rc %d", + nix_tm_hwlvl2str(hw_lvl), schq, rc); + return rc; + } + + plt_tm_dbg("Released hwres %s(%u)", nix_tm_hwlvl2str(hw_lvl), + schq); + plt_bitmap_clear(bmp, schq); + to_free--; + } + + if (to_free) { + plt_err("resource inconsistency for %s(%u)", + nix_tm_hwlvl2str(hw_lvl), contig); + return -EFAULT; + } + return 0; +} + +int nix_tm_free_node_resource(struct nix *nix, struct nix_tm_node *node) { struct mbox *mbox = (&nix->dev)->mbox; @@ -539,6 +651,355 @@ nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, return 0; } +static int +nix_tm_assign_hw_id(struct nix *nix, struct nix_tm_node *parent, + uint16_t *contig_id, int *contig_cnt, + struct nix_tm_node_list *list) +{ + struct nix_tm_node *child; + struct plt_bitmap *bmp; + uint8_t child_hw_lvl; + int spare_schq = -1; + uint32_t pos = 0; + uint64_t slab; + uint16_t schq; + + child_hw_lvl = parent->hw_lvl - 1; + bmp = nix->schq_bmp[child_hw_lvl]; + plt_bitmap_scan_init(bmp); + slab = 0; + + /* Save spare schq if it is case of RR + SP */ + if (parent->rr_prio != 0xf && *contig_cnt > 1) + spare_schq = *contig_id + parent->rr_prio; + + TAILQ_FOREACH(child, list, node) { + if (!child->parent) + continue; + if (child->parent->id != parent->id) + continue; + + /* Resource 
never expected to be present */ + if (child->flags & NIX_TM_NODE_HWRES) { + plt_err("Resource exists for child (%s)%u, id %u (%p)", + nix_tm_hwlvl2str(child->hw_lvl), child->hw_id, + child->id, child); + return -EFAULT; + } + + if (!slab) + plt_bitmap_scan(bmp, &pos, &slab); + + if (child->priority == parent->rr_prio && spare_schq != -1) { + /* Use spare schq first if present */ + schq = spare_schq; + spare_schq = -1; + *contig_cnt = *contig_cnt - 1; + + } else if (child->priority == parent->rr_prio) { + /* Assign a discontiguous queue */ + if (!slab) { + plt_err("Schq not found for Child %u " + "lvl %u (%p)", + child->id, child->lvl, child); + return -ENOENT; + } + + schq = bitmap_ctzll(slab); + slab &= ~(1ULL << schq); + schq += pos; + plt_bitmap_clear(bmp, schq); + } else { + /* Assign a contiguous queue */ + schq = *contig_id + child->priority; + *contig_cnt = *contig_cnt - 1; + } + + plt_tm_dbg("Resource %s(%u), for lvl %u id %u(%p)", + nix_tm_hwlvl2str(child->hw_lvl), schq, child->lvl, + child->id, child); + + child->hw_id = schq; + child->parent_hw_id = parent->hw_id; + child->flags |= NIX_TM_NODE_HWRES; + } + + return 0; +} + +int +nix_tm_assign_resources(struct nix *nix, enum roc_nix_tm_tree tree) +{ + struct nix_tm_node *parent, *root = NULL; + struct plt_bitmap *bmp, *bmp_contig; + struct nix_tm_node_list *list; + uint8_t child_hw_lvl, hw_lvl; + uint16_t contig_id, j; + uint64_t slab = 0; + uint32_t pos = 0; + int cnt, rc; + + list = nix_tm_node_list(nix, tree); + /* Walk from TL1 to TL4 parents */ + for (hw_lvl = NIX_TXSCH_LVL_TL1; hw_lvl > 0; hw_lvl--) { + TAILQ_FOREACH(parent, list, node) { + child_hw_lvl = parent->hw_lvl - 1; + if (parent->hw_lvl != hw_lvl) + continue; + + /* Remember root for future */ + if (parent->hw_lvl == nix->tm_root_lvl) + root = parent; + + if (!parent->child_realloc) { + /* Skip when parent is not dirty */ + if (nix_tm_child_res_valid(list, parent)) + continue; + plt_err("Parent not dirty but invalid " + "child res parent id 
%u(lvl %u)", + parent->id, parent->lvl); + return -EFAULT; + } + + bmp_contig = nix->schq_contig_bmp[child_hw_lvl]; + + /* Prealloc contiguous indices for a parent */ + contig_id = NIX_TM_MAX_HW_TXSCHQ; + cnt = (int)parent->max_prio + 1; + if (cnt > 0) { + plt_bitmap_scan_init(bmp_contig); + if (!plt_bitmap_scan(bmp_contig, &pos, &slab)) { + plt_err("Contig schq not found"); + return -ENOENT; + } + contig_id = pos + bitmap_ctzll(slab); + + /* Check if we have enough */ + for (j = contig_id; j < contig_id + cnt; j++) { + if (!plt_bitmap_get(bmp_contig, j)) + break; + } + + if (j != contig_id + cnt) { + plt_err("Contig schq not sufficient"); + return -ENOENT; + } + + for (j = contig_id; j < contig_id + cnt; j++) + plt_bitmap_clear(bmp_contig, j); + } + + /* Assign hw id to all children */ + rc = nix_tm_assign_hw_id(nix, parent, &contig_id, &cnt, + list); + if (cnt || rc) { + plt_err("Unexpected err, contig res alloc, " + "parent %u, of %s, rc=%d, cnt=%d", + parent->id, nix_tm_hwlvl2str(hw_lvl), + rc, cnt); + return -EFAULT; + } + + /* Clear the dirty bit as children's + * resources are reallocated. 
+ */ + parent->child_realloc = false; + } + } + + /* Root is always expected to be there */ + if (!root) + return -EFAULT; + + if (root->flags & NIX_TM_NODE_HWRES) + return 0; + + /* Process root node */ + bmp = nix->schq_bmp[nix->tm_root_lvl]; + plt_bitmap_scan_init(bmp); + if (!plt_bitmap_scan(bmp, &pos, &slab)) { + plt_err("Resource not allocated for root"); + return -EIO; + } + + root->hw_id = pos + bitmap_ctzll(slab); + root->flags |= NIX_TM_NODE_HWRES; + plt_bitmap_clear(bmp, root->hw_id); + + /* Get TL1 id as well when root is not TL1 */ + if (!nix_tm_have_tl1_access(nix)) { + bmp = nix->schq_bmp[NIX_TXSCH_LVL_TL1]; + + plt_bitmap_scan_init(bmp); + if (!plt_bitmap_scan(bmp, &pos, &slab)) { + plt_err("Resource not found for TL1"); + return -EIO; + } + root->parent_hw_id = pos + bitmap_ctzll(slab); + plt_bitmap_clear(bmp, root->parent_hw_id); + } + + plt_tm_dbg("Resource %s(%u) for root(id %u) (%p)", + nix_tm_hwlvl2str(root->hw_lvl), root->hw_id, root->id, root); + + return 0; +} + +void +nix_tm_copy_rsp_to_nix(struct nix *nix, struct nix_txsch_alloc_rsp *rsp) +{ + uint8_t lvl; + uint16_t i; + + for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { + for (i = 0; i < rsp->schq[lvl]; i++) + plt_bitmap_set(nix->schq_bmp[lvl], + rsp->schq_list[lvl][i]); + + for (i = 0; i < rsp->schq_contig[lvl]; i++) + plt_bitmap_set(nix->schq_contig_bmp[lvl], + rsp->schq_contig_list[lvl][i]); + } +} + +int +nix_tm_alloc_txschq(struct nix *nix, enum roc_nix_tm_tree tree) +{ + uint16_t schq_contig[NIX_TXSCH_LVL_CNT]; + struct mbox *mbox = (&nix->dev)->mbox; + uint16_t schq[NIX_TXSCH_LVL_CNT]; + struct nix_txsch_alloc_req *req; + struct nix_txsch_alloc_rsp *rsp; + uint8_t hw_lvl, i; + bool pend; + int rc; + + memset(schq, 0, sizeof(schq)); + memset(schq_contig, 0, sizeof(schq_contig)); + + /* Estimate requirement */ + rc = nix_tm_resource_estimate(nix, schq_contig, schq, tree); + if (!rc) + return 0; + + /* Release existing contiguous resources when realloc requested + * as there is no 
way to guarantee continuity of old with new. + */ + for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) { + if (schq_contig[hw_lvl]) + nix_tm_release_resources(nix, hw_lvl, true, false); + } + + /* Alloc as needed */ + do { + pend = false; + req = mbox_alloc_msg_nix_txsch_alloc(mbox); + if (!req) { + rc = -ENOMEM; + goto alloc_err; + } + mbox_memcpy(req->schq, schq, sizeof(req->schq)); + mbox_memcpy(req->schq_contig, schq_contig, + sizeof(req->schq_contig)); + + /* Each alloc can be at max of MAX_TXSCHQ_PER_FUNC per level. + * So split alloc to multiple requests. + */ + for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) { + if (req->schq[i] > MAX_TXSCHQ_PER_FUNC) + req->schq[i] = MAX_TXSCHQ_PER_FUNC; + schq[i] -= req->schq[i]; + + if (req->schq_contig[i] > MAX_TXSCHQ_PER_FUNC) + req->schq_contig[i] = MAX_TXSCHQ_PER_FUNC; + schq_contig[i] -= req->schq_contig[i]; + + if (schq[i] || schq_contig[i]) + pend = true; + } + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto alloc_err; + + nix_tm_copy_rsp_to_nix(nix, rsp); + } while (pend); + + nix->tm_link_cfg_lvl = rsp->link_cfg_lvl; + return 0; +alloc_err: + for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) { + if (nix_tm_release_resources(nix, i, true, false)) + plt_err("Failed to release contig resources of " + "lvl %d on error", + i); + if (nix_tm_release_resources(nix, i, false, false)) + plt_err("Failed to release discontig resources of " + "lvl %d on error", + i); + } + return rc; +} + +int +nix_tm_free_resources(struct roc_nix *roc_nix, uint32_t tree_mask, bool hw_only) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_shaper_profile *profile; + struct nix_tm_node *node, *next_node; + struct nix_tm_node_list *list; + enum roc_nix_tm_tree tree; + uint32_t profile_id; + int rc = 0; + + for (tree = 0; tree < ROC_NIX_TM_TREE_MAX; tree++) { + if (!(tree_mask & BIT(tree))) + continue; + + plt_tm_dbg("Freeing resources of tree %u", tree); + + list = nix_tm_node_list(nix, tree); + next_node = 
TAILQ_FIRST(list); + while (next_node) { + node = next_node; + next_node = TAILQ_NEXT(node, node); + + if (!nix_tm_is_leaf(nix, node->lvl) && + node->flags & NIX_TM_NODE_HWRES) { + /* Clear xoff in path for flush to succeed */ + rc = nix_tm_clear_path_xoff(nix, node); + if (rc) + return rc; + rc = nix_tm_free_node_resource(nix, node); + if (rc) + return rc; + } + } + + /* Leave software elements if needed */ + if (hw_only) + continue; + + next_node = TAILQ_FIRST(list); + while (next_node) { + node = next_node; + next_node = TAILQ_NEXT(node, node); + + plt_tm_dbg("Free node lvl %u id %u (%p)", node->lvl, + node->id, node); + + profile_id = node->shaper_profile_id; + profile = nix_tm_shaper_profile_search(nix, profile_id); + if (profile) + profile->ref_cnt--; + + TAILQ_REMOVE(list, node, node); + nix_tm_node_free(node); + } + } + return rc; +} + int nix_tm_conf_init(struct roc_nix *roc_nix) { diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c index 896ddf2..1e952c4 100644 --- a/drivers/common/cnxk/roc_nix_tm_ops.c +++ b/drivers/common/cnxk/roc_nix_tm_ops.c @@ -66,6 +66,17 @@ roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable) return 0; } +int +roc_nix_tm_free_resources(struct roc_nix *roc_nix, bool hw_only) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + if (nix->tm_flags & NIX_TM_HIERARCHY_ENA) + return -EBUSY; + + return nix_tm_free_resources(roc_nix, BIT(ROC_NIX_TM_USER), hw_only); +} + static int nix_tm_shaper_profile_add(struct roc_nix *roc_nix, struct nix_tm_shaper_profile *profile, int skip_ins) diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c index 7a7e786..45de9f6 100644 --- a/drivers/common/cnxk/roc_nix_tm_utils.c +++ b/drivers/common/cnxk/roc_nix_tm_utils.c @@ -177,6 +177,58 @@ nix_tm_shaper_burst_conv(uint64_t value, uint64_t *exponent_p, return NIX_TM_SHAPER_BURST(exponent, mantissa); } +uint32_t +nix_tm_check_rr(struct nix *nix, uint32_t parent_id, enum 
roc_nix_tm_tree tree, + uint32_t *rr_prio, uint32_t *max_prio) +{ + uint32_t node_cnt[NIX_TM_TLX_SP_PRIO_MAX]; + struct nix_tm_node_list *list; + struct nix_tm_node *node; + uint32_t rr_num = 0, i; + uint32_t children = 0; + uint32_t priority; + + memset(node_cnt, 0, sizeof(node_cnt)); + *rr_prio = 0xF; + *max_prio = UINT32_MAX; + + list = nix_tm_node_list(nix, tree); + TAILQ_FOREACH(node, list, node) { + if (!node->parent) + continue; + + if (!(node->parent->id == parent_id)) + continue; + + priority = node->priority; + node_cnt[priority]++; + children++; + } + + for (i = 0; i < NIX_TM_TLX_SP_PRIO_MAX; i++) { + if (!node_cnt[i]) + break; + + if (node_cnt[i] > rr_num) { + *rr_prio = i; + rr_num = node_cnt[i]; + } + } + + /* RR group of single RR child is considered as SP */ + if (rr_num == 1) { + *rr_prio = 0xF; + rr_num = 0; + } + + /* Max prio will be returned only when we have non zero prio + * or if a parent has single child. + */ + if (i > 1 || (children == 1)) + *max_prio = i - 1; + return rr_num; +} + static uint16_t nix_tm_max_prio(struct nix *nix, uint16_t hw_lvl) { @@ -241,6 +293,21 @@ nix_tm_validate_prio(struct nix *nix, uint32_t lvl, uint32_t parent_id, return 0; } +bool +nix_tm_child_res_valid(struct nix_tm_node_list *list, + struct nix_tm_node *parent) +{ + struct nix_tm_node *child; + + TAILQ_FOREACH(child, list, node) { + if (child->parent != parent) + continue; + if (!(child->flags & NIX_TM_NODE_HWRES)) + return false; + } + return true; +} + uint8_t nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable, volatile uint64_t *reg, volatile uint64_t *regval) @@ -325,6 +392,72 @@ nix_tm_resource_avail(struct nix *nix, uint8_t hw_lvl, bool contig) return count; } +uint16_t +nix_tm_resource_estimate(struct nix *nix, uint16_t *schq_contig, uint16_t *schq, + enum roc_nix_tm_tree tree) +{ + struct nix_tm_node_list *list; + uint8_t contig_cnt, hw_lvl; + struct nix_tm_node *parent; + uint16_t cnt = 0, avail; + + list = nix_tm_node_list(nix, tree); + /* 
Walk through parents from TL1..TL4 */ + for (hw_lvl = NIX_TXSCH_LVL_TL1; hw_lvl > 0; hw_lvl--) { + TAILQ_FOREACH(parent, list, node) { + if (hw_lvl != parent->hw_lvl) + continue; + + /* Skip accounting for children whose + * parent does not indicate so. + */ + if (!parent->child_realloc) + continue; + + /* Count children needed */ + schq[hw_lvl - 1] += parent->rr_num; + if (parent->max_prio != UINT32_MAX) { + contig_cnt = parent->max_prio + 1; + schq_contig[hw_lvl - 1] += contig_cnt; + /* When we have SP + DWRR at a parent, + * we will always have a spare schq at rr prio + * location in contiguous queues. Hence reduce + * discontiguous count by 1. + */ + if (parent->max_prio > 0 && parent->rr_num) + schq[hw_lvl - 1] -= 1; + } + } + } + + schq[nix->tm_root_lvl] = 1; + if (!nix_tm_have_tl1_access(nix)) + schq[NIX_TXSCH_LVL_TL1] = 1; + + /* Now check for existing resources */ + for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) { + avail = nix_tm_resource_avail(nix, hw_lvl, false); + if (schq[hw_lvl] <= avail) + schq[hw_lvl] = 0; + else + schq[hw_lvl] -= avail; + + /* For contiguous queues, realloc everything */ + avail = nix_tm_resource_avail(nix, hw_lvl, true); + if (schq_contig[hw_lvl] <= avail) + schq_contig[hw_lvl] = 0; + + cnt += schq[hw_lvl]; + cnt += schq_contig[hw_lvl]; + + plt_tm_dbg("Estimate resources needed for %s: dis %u cont %u", + nix_tm_hwlvl2str(hw_lvl), schq[hw_lvl], + schq_contig[hw_lvl]); + } + + return cnt; +} + int roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id) { diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index c01cd1a..4817fd5 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -104,6 +104,7 @@ INTERNAL { roc_nix_xstats_names_get; roc_nix_switch_hdr_set; roc_nix_eeprom_info_get; + roc_nix_tm_free_resources; roc_nix_tm_node_add; roc_nix_tm_node_delete; roc_nix_tm_node_get; From patchwork Thu Apr 1 12:38:01 2021 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90409
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:01 +0530
Message-ID: <20210401123817.14348-37-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 36/52] common/cnxk: add nix tm hierarchy enable/disable

Add support to enable or disable hierarchy along with allocating node HW resources such as shapers and schedulers and configuring them to match the user created or default hierarchy.
Signed-off-by: Nithin Dabilpuram --- drivers/common/cnxk/roc_nix.h | 8 + drivers/common/cnxk/roc_nix_priv.h | 16 ++ drivers/common/cnxk/roc_nix_tm.c | 147 ++++++++++++ drivers/common/cnxk/roc_nix_tm_ops.c | 234 +++++++++++++++++++ drivers/common/cnxk/roc_nix_tm_utils.c | 410 +++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 2 + 6 files changed, 817 insertions(+) diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 52e001c..7bf3435 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -392,6 +392,14 @@ struct roc_nix_tm_shaper_profile *__roc_api roc_nix_tm_shaper_profile_next( struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *__prev); /* + * TM hierarchy enable/disable API. + */ +int __roc_api roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix); +int __roc_api roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix, + enum roc_nix_tm_tree tree, + bool xmit_enable); + +/* * TM utilities API. */ int __roc_api roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id); diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index 5110967..a40621c 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -340,7 +340,10 @@ int nix_tm_release_resources(struct nix *nix, uint8_t hw_lvl, bool contig, bool above_thresh); void nix_tm_copy_rsp_to_nix(struct nix *nix, struct nix_txsch_alloc_rsp *rsp); +int nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree); int nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree); +int nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node, + bool rr_quantum_only); /* * TM priv utils. 
@@ -369,6 +372,19 @@ bool nix_tm_child_res_valid(struct nix_tm_node_list *list, struct nix_tm_node *parent); uint16_t nix_tm_resource_estimate(struct nix *nix, uint16_t *schq_contig, uint16_t *schq, enum roc_nix_tm_tree tree); +uint8_t nix_tm_tl1_default_prep(uint32_t schq, volatile uint64_t *reg, + volatile uint64_t *regval); +uint8_t nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node, + volatile uint64_t *reg, + volatile uint64_t *regval, + volatile uint64_t *regval_mask); +uint8_t nix_tm_sched_reg_prep(struct nix *nix, struct nix_tm_node *node, + volatile uint64_t *reg, + volatile uint64_t *regval); +uint8_t nix_tm_shaper_reg_prep(struct nix_tm_node *node, + struct nix_tm_shaper_profile *profile, + volatile uint64_t *reg, + volatile uint64_t *regval); struct nix_tm_node *nix_tm_node_alloc(void); void nix_tm_node_free(struct nix_tm_node *node); struct nix_tm_shaper_profile *nix_tm_shaper_profile_alloc(void); diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c index 581de4b..762c85a 100644 --- a/drivers/common/cnxk/roc_nix_tm.c +++ b/drivers/common/cnxk/roc_nix_tm.c @@ -30,6 +30,93 @@ nix_tm_clear_shaper_profiles(struct nix *nix) } } +static int +nix_tm_node_reg_conf(struct nix *nix, struct nix_tm_node *node) +{ + uint64_t regval_mask[MAX_REGS_PER_MBOX_MSG]; + uint64_t regval[MAX_REGS_PER_MBOX_MSG]; + struct nix_tm_shaper_profile *profile; + uint64_t reg[MAX_REGS_PER_MBOX_MSG]; + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_txschq_config *req; + int rc = -EFAULT; + uint32_t hw_lvl; + uint8_t k = 0; + + memset(regval, 0, sizeof(regval)); + memset(regval_mask, 0, sizeof(regval_mask)); + + profile = nix_tm_shaper_profile_search(nix, node->shaper_profile_id); + hw_lvl = node->hw_lvl; + + /* Need this trigger to configure TL1 */ + if (!nix_tm_have_tl1_access(nix) && hw_lvl == NIX_TXSCH_LVL_TL2) { + /* Prepare default conf for TL1 */ + req = mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = NIX_TXSCH_LVL_TL1; + + k = 
nix_tm_tl1_default_prep(node->parent_hw_id, req->reg, + req->regval); + req->num_regs = k; + rc = mbox_process(mbox); + if (rc) + goto error; + } + + /* Prepare topology config */ + k = nix_tm_topology_reg_prep(nix, node, reg, regval, regval_mask); + + /* Prepare schedule config */ + k += nix_tm_sched_reg_prep(nix, node, ®[k], ®val[k]); + + /* Prepare shaping config */ + k += nix_tm_shaper_reg_prep(node, profile, ®[k], ®val[k]); + + if (!k) + return 0; + + /* Copy and send config mbox */ + req = mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + req->num_regs = k; + + mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k); + mbox_memcpy(req->regval, regval, sizeof(uint64_t) * k); + mbox_memcpy(req->regval_mask, regval_mask, sizeof(uint64_t) * k); + + rc = mbox_process(mbox); + if (rc) + goto error; + + return 0; +error: + plt_err("Txschq conf failed for node %p, rc=%d", node, rc); + return rc; +} + +int +nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree) +{ + struct nix_tm_node_list *list; + struct nix_tm_node *node; + uint32_t hw_lvl; + int rc = 0; + + list = nix_tm_node_list(nix, tree); + + for (hw_lvl = 0; hw_lvl <= nix->tm_root_lvl; hw_lvl++) { + TAILQ_FOREACH(node, list, node) { + if (node->hw_lvl != hw_lvl) + continue; + rc = nix_tm_node_reg_conf(nix, node); + if (rc) + goto exit; + } + } +exit: + return rc; +} + int nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree) { @@ -478,6 +565,66 @@ nix_tm_sq_flush_post(struct roc_nix_sq *sq) } int +nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node, + bool rr_quantum_only) +{ + struct mbox *mbox = (&nix->dev)->mbox; + uint16_t qid = node->id, smq; + uint64_t rr_quantum; + int rc; + + smq = node->parent->hw_id; + rr_quantum = nix_tm_weight_to_rr_quantum(node->weight); + + if (rr_quantum_only) + plt_tm_dbg("Update sq(%u) rr_quantum 0x%" PRIx64, qid, + rr_quantum); + else + plt_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum 0x%" PRIx64, + qid, smq, rr_quantum); + + if (qid 
> nix->nb_tx_queues) + return -EFAULT; + + if (roc_model_is_cn9k()) { + struct nix_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + /* smq update only when needed */ + if (!rr_quantum_only) { + aq->sq.smq = smq; + aq->sq_mask.smq = ~aq->sq_mask.smq; + } + aq->sq.smq_rr_quantum = rr_quantum; + aq->sq_mask.smq_rr_quantum = ~aq->sq_mask.smq_rr_quantum; + } else { + struct nix_cn10k_aq_enq_req *aq; + + aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + /* smq update only when needed */ + if (!rr_quantum_only) { + aq->sq.smq = smq; + aq->sq_mask.smq = ~aq->sq_mask.smq; + } + aq->sq.smq_rr_weight = rr_quantum; + aq->sq_mask.smq_rr_weight = ~aq->sq_mask.smq_rr_weight; + } + + rc = mbox_process(mbox); + if (rc) + plt_err("Failed to set smq, rc=%d", rc); + return rc; +} + +int nix_tm_release_resources(struct nix *nix, uint8_t hw_lvl, bool contig, bool above_thresh) { diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c index 1e952c4..6bb0766 100644 --- a/drivers/common/cnxk/roc_nix_tm_ops.c +++ b/drivers/common/cnxk/roc_nix_tm_ops.c @@ -309,3 +309,237 @@ roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, bool free) { return nix_tm_node_delete(roc_nix, node_id, ROC_NIX_TM_USER, free); } + +int +roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint16_t sqb_cnt, head_off, tail_off; + uint16_t sq_cnt = nix->nb_tx_queues; + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_tm_node_list *list; + enum roc_nix_tm_tree tree; + struct nix_tm_node *node; + struct roc_nix_sq *sq; + uint64_t wdata, val; + uintptr_t regaddr; + int rc = -1, i; + + if (!(nix->tm_flags & NIX_TM_HIERARCHY_ENA)) + return 0; + + plt_tm_dbg("Disabling hierarchy on %s", nix->pci_dev->name); + + tree = nix->tm_tree; + list = 
nix_tm_node_list(nix, tree); + + /* Enable CGX RXTX to drain pkts */ + if (!roc_nix->io_enabled) { + /* Though it enables both RX MCAM Entries and CGX Link + * we assume all the rx queues are stopped way back. + */ + mbox_alloc_msg_nix_lf_start_rx(mbox); + rc = mbox_process(mbox); + if (rc) { + plt_err("cgx start failed, rc=%d", rc); + return rc; + } + } + + /* XON all SMQ's */ + TAILQ_FOREACH(node, list, node) { + if (node->hw_lvl != NIX_TXSCH_LVL_SMQ) + continue; + if (!(node->flags & NIX_TM_NODE_HWRES)) + continue; + + rc = nix_tm_smq_xoff(nix, node, false); + if (rc) { + plt_err("Failed to enable smq %u, rc=%d", node->hw_id, + rc); + goto cleanup; + } + } + + /* Flush all tx queues */ + for (i = 0; i < sq_cnt; i++) { + sq = nix->sqs[i]; + if (!sq) + continue; + + rc = roc_nix_tm_sq_aura_fc(sq, false); + if (rc) { + plt_err("Failed to disable sqb aura fc, rc=%d", rc); + goto cleanup; + } + + /* Wait for sq entries to be flushed */ + rc = roc_nix_tm_sq_flush_spin(sq); + if (rc) { + plt_err("Failed to drain sq, rc=%d\n", rc); + goto cleanup; + } + } + + /* XOFF & Flush all SMQ's. HRM mandates + * all SQ's empty before SMQ flush is issued. 
+ */ + TAILQ_FOREACH(node, list, node) { + if (node->hw_lvl != NIX_TXSCH_LVL_SMQ) + continue; + if (!(node->flags & NIX_TM_NODE_HWRES)) + continue; + + rc = nix_tm_smq_xoff(nix, node, true); + if (rc) { + plt_err("Failed to disable smq %u, rc=%d", node->hw_id, + rc); + goto cleanup; + } + + node->flags &= ~NIX_TM_NODE_ENABLED; + } + + /* Verify sanity of all tx queues */ + for (i = 0; i < sq_cnt; i++) { + sq = nix->sqs[i]; + if (!sq) + continue; + + wdata = ((uint64_t)sq->qid << 32); + regaddr = nix->base + NIX_LF_SQ_OP_STATUS; + val = roc_atomic64_add_nosync(wdata, (int64_t *)regaddr); + + sqb_cnt = val & 0xFFFF; + head_off = (val >> 20) & 0x3F; + tail_off = (val >> 28) & 0x3F; + + if (sqb_cnt > 1 || head_off != tail_off || + (*(uint64_t *)sq->fc != sq->nb_sqb_bufs)) + plt_err("Failed to gracefully flush sq %u", sq->qid); + } + + nix->tm_flags &= ~NIX_TM_HIERARCHY_ENA; +cleanup: + /* Restore cgx state */ + if (!roc_nix->io_enabled) { + mbox_alloc_msg_nix_lf_stop_rx(mbox); + rc |= mbox_process(mbox); + } + return rc; +} + +int +roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix, enum roc_nix_tm_tree tree, + bool xmit_enable) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_node_list *list; + struct nix_tm_node *node; + struct roc_nix_sq *sq; + uint32_t tree_mask; + uint16_t sq_id; + int rc; + + if (tree >= ROC_NIX_TM_TREE_MAX) + return NIX_ERR_PARAM; + + if (nix->tm_flags & NIX_TM_HIERARCHY_ENA) { + if (nix->tm_tree != tree) + return -EBUSY; + return 0; + } + + plt_tm_dbg("Enabling hierarchy on %s, xmit_ena %u, tree %u", + nix->pci_dev->name, xmit_enable, tree); + + /* Free hw resources of other trees */ + tree_mask = NIX_TM_TREE_MASK_ALL; + tree_mask &= ~BIT(tree); + + rc = nix_tm_free_resources(roc_nix, tree_mask, true); + if (rc) { + plt_err("failed to free resources of other trees, rc=%d", rc); + return rc; + } + + /* Update active tree before starting to do anything */ + nix->tm_tree = tree; + + nix_tm_update_parent_info(nix, tree); + + rc 
= nix_tm_alloc_txschq(nix, tree); + if (rc) { + plt_err("TM failed to alloc tm resources=%d", rc); + return rc; + } + + rc = nix_tm_assign_resources(nix, tree); + if (rc) { + plt_err("TM failed to assign tm resources=%d", rc); + return rc; + } + + rc = nix_tm_txsch_reg_config(nix, tree); + if (rc) { + plt_err("TM failed to configure sched registers=%d", rc); + return rc; + } + + list = nix_tm_node_list(nix, tree); + /* Mark all non-leaf's as enabled */ + TAILQ_FOREACH(node, list, node) { + if (!nix_tm_is_leaf(nix, node->lvl)) + node->flags |= NIX_TM_NODE_ENABLED; + } + + if (!xmit_enable) + goto skip_sq_update; + + /* Update SQ Sched Data while SQ is idle */ + TAILQ_FOREACH(node, list, node) { + if (!nix_tm_is_leaf(nix, node->lvl)) + continue; + + rc = nix_tm_sq_sched_conf(nix, node, false); + if (rc) { + plt_err("SQ %u sched update failed, rc=%d", node->id, + rc); + return rc; + } + } + + /* Finally XON all SMQ's */ + TAILQ_FOREACH(node, list, node) { + if (node->hw_lvl != NIX_TXSCH_LVL_SMQ) + continue; + + rc = nix_tm_smq_xoff(nix, node, false); + if (rc) { + plt_err("Failed to enable smq %u, rc=%d", node->hw_id, + rc); + return rc; + } + } + + /* Enable xmit as all the topology is ready */ + TAILQ_FOREACH(node, list, node) { + if (!nix_tm_is_leaf(nix, node->lvl)) + continue; + + sq_id = node->id; + sq = nix->sqs[sq_id]; + + rc = roc_nix_tm_sq_aura_fc(sq, true); + if (rc) { + plt_err("TM sw xon failed on SQ %u, rc=%d", node->id, + rc); + return rc; + } + node->flags |= NIX_TM_NODE_ENABLED; + } + +skip_sq_update: + nix->tm_flags |= NIX_TM_HIERARCHY_ENA; + return 0; +} diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c index 45de9f6..b644716 100644 --- a/drivers/common/cnxk/roc_nix_tm_utils.c +++ b/drivers/common/cnxk/roc_nix_tm_utils.c @@ -5,6 +5,14 @@ #include "roc_api.h" #include "roc_priv.h" +static inline uint64_t +nix_tm_shaper2regval(struct nix_tm_shaper_data *shaper) +{ + return (shaper->burst_exponent << 37) | 
(shaper->burst_mantissa << 29) | + (shaper->div_exp << 13) | (shaper->exponent << 9) | + (shaper->mantissa << 1); +} + uint16_t nix_tm_lvl2nix_tl1_root(uint32_t lvl) { @@ -50,6 +58,32 @@ nix_tm_lvl2nix(struct nix *nix, uint32_t lvl) return nix_tm_lvl2nix_tl2_root(lvl); } +static uint8_t +nix_tm_relchan_get(struct nix *nix) +{ + return nix->tx_chan_base & 0xff; +} + +static int +nix_tm_find_prio_anchor(struct nix *nix, uint32_t node_id, + enum roc_nix_tm_tree tree) +{ + struct nix_tm_node *child_node; + struct nix_tm_node_list *list; + + list = nix_tm_node_list(nix, tree); + + TAILQ_FOREACH(child_node, list, node) { + if (!child_node->parent) + continue; + if (!(child_node->parent->id == node_id)) + continue; + if (child_node->priority == child_node->parent->rr_prio) + continue; + return child_node->hw_id - child_node->priority; + } + return 0; +} struct nix_tm_shaper_profile * nix_tm_shaper_profile_search(struct nix *nix, uint32_t id) @@ -177,6 +211,39 @@ nix_tm_shaper_burst_conv(uint64_t value, uint64_t *exponent_p, return NIX_TM_SHAPER_BURST(exponent, mantissa); } +static void +nix_tm_shaper_conf_get(struct nix_tm_shaper_profile *profile, + struct nix_tm_shaper_data *cir, + struct nix_tm_shaper_data *pir) +{ + if (!profile) + return; + + /* Calculate CIR exponent and mantissa */ + if (profile->commit.rate) + cir->rate = nix_tm_shaper_rate_conv( + profile->commit.rate, &cir->exponent, &cir->mantissa, + &cir->div_exp); + + /* Calculate PIR exponent and mantissa */ + if (profile->peak.rate) + pir->rate = nix_tm_shaper_rate_conv( + profile->peak.rate, &pir->exponent, &pir->mantissa, + &pir->div_exp); + + /* Calculate CIR burst exponent and mantissa */ + if (profile->commit.size) + cir->burst = nix_tm_shaper_burst_conv(profile->commit.size, + &cir->burst_exponent, + &cir->burst_mantissa); + + /* Calculate PIR burst exponent and mantissa */ + if (profile->peak.size) + pir->burst = nix_tm_shaper_burst_conv(profile->peak.size, + &pir->burst_exponent, + 
&pir->burst_mantissa); +} + uint32_t nix_tm_check_rr(struct nix *nix, uint32_t parent_id, enum roc_nix_tm_tree tree, uint32_t *rr_prio, uint32_t *max_prio) @@ -309,6 +376,349 @@ nix_tm_child_res_valid(struct nix_tm_node_list *list, } uint8_t +nix_tm_tl1_default_prep(uint32_t schq, volatile uint64_t *reg, + volatile uint64_t *regval) +{ + uint8_t k = 0; + + /* + * Default config for TL1. + * For VF this is always ignored. + */ + plt_tm_dbg("Default config for main root %s(%u)", + nix_tm_hwlvl2str(NIX_TXSCH_LVL_TL1), schq); + + /* Set DWRR quantum */ + reg[k] = NIX_AF_TL1X_SCHEDULE(schq); + regval[k] = NIX_TM_TL1_DFLT_RR_QTM; + k++; + + reg[k] = NIX_AF_TL1X_TOPOLOGY(schq); + regval[k] = (NIX_TM_TL1_DFLT_RR_PRIO << 1); + k++; + + reg[k] = NIX_AF_TL1X_CIR(schq); + regval[k] = 0; + k++; + + return k; +} + +uint8_t +nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node, + volatile uint64_t *reg, volatile uint64_t *regval, + volatile uint64_t *regval_mask) +{ + uint8_t k = 0, hw_lvl, parent_lvl; + uint64_t parent = 0, child = 0; + enum roc_nix_tm_tree tree; + uint32_t rr_prio, schq; + uint16_t link, relchan; + + tree = node->tree; + schq = node->hw_id; + hw_lvl = node->hw_lvl; + parent_lvl = hw_lvl + 1; + rr_prio = node->rr_prio; + + /* Root node will not have a parent node */ + if (hw_lvl == nix->tm_root_lvl) + parent = node->parent_hw_id; + else + parent = node->parent->hw_id; + + link = nix->tx_link; + relchan = nix_tm_relchan_get(nix); + + if (hw_lvl != NIX_TXSCH_LVL_SMQ) + child = nix_tm_find_prio_anchor(nix, node->id, tree); + + /* Override default rr_prio when TL1 + * Static Priority is disabled + */ + if (hw_lvl == NIX_TXSCH_LVL_TL1 && nix->tm_flags & NIX_TM_TL1_NO_SP) { + rr_prio = NIX_TM_TL1_DFLT_RR_PRIO; + child = 0; + } + + plt_tm_dbg("Topology config node %s(%u)->%s(%" PRIu64 ") lvl %u, id %u" + " prio_anchor %" PRIu64 " rr_prio %u (%p)", + nix_tm_hwlvl2str(hw_lvl), schq, nix_tm_hwlvl2str(parent_lvl), + parent, node->lvl, node->id, child, 
rr_prio, node); + + /* Prepare Topology and Link config */ + switch (hw_lvl) { + case NIX_TXSCH_LVL_SMQ: + + /* Set xoff which will be cleared later */ + reg[k] = NIX_AF_SMQX_CFG(schq); + regval[k] = (BIT_ULL(50) | NIX_MIN_HW_FRS | + ((nix->mtu & 0xFFFF) << 8)); + regval_mask[k] = + ~(BIT_ULL(50) | GENMASK_ULL(6, 0) | GENMASK_ULL(23, 8)); + k++; + + /* Parent and schedule conf */ + reg[k] = NIX_AF_MDQX_PARENT(schq); + regval[k] = parent << 16; + k++; + + break; + case NIX_TXSCH_LVL_TL4: + /* Parent and schedule conf */ + reg[k] = NIX_AF_TL4X_PARENT(schq); + regval[k] = parent << 16; + k++; + + reg[k] = NIX_AF_TL4X_TOPOLOGY(schq); + regval[k] = (child << 32) | (rr_prio << 1); + k++; + + /* Configure TL4 to send to SDP channel instead of CGX/LBK */ + if (nix->sdp_link) { + reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq); + regval[k] = BIT_ULL(12); + k++; + } + break; + case NIX_TXSCH_LVL_TL3: + /* Parent and schedule conf */ + reg[k] = NIX_AF_TL3X_PARENT(schq); + regval[k] = parent << 16; + k++; + + reg[k] = NIX_AF_TL3X_TOPOLOGY(schq); + regval[k] = (child << 32) | (rr_prio << 1); + k++; + + /* Link configuration */ + if (!nix->sdp_link && + nix->tm_link_cfg_lvl == NIX_TXSCH_LVL_TL3) { + reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link); + regval[k] = BIT_ULL(12) | relchan; + k++; + } + + break; + case NIX_TXSCH_LVL_TL2: + /* Parent and schedule conf */ + reg[k] = NIX_AF_TL2X_PARENT(schq); + regval[k] = parent << 16; + k++; + + reg[k] = NIX_AF_TL2X_TOPOLOGY(schq); + regval[k] = (child << 32) | (rr_prio << 1); + k++; + + /* Link configuration */ + if (!nix->sdp_link && + nix->tm_link_cfg_lvl == NIX_TXSCH_LVL_TL2) { + reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link); + regval[k] = BIT_ULL(12) | relchan; + k++; + } + + break; + case NIX_TXSCH_LVL_TL1: + reg[k] = NIX_AF_TL1X_TOPOLOGY(schq); + regval[k] = (child << 32) | (rr_prio << 1 /*RR_PRIO*/); + k++; + + break; + } + + return k; +} + +uint8_t +nix_tm_sched_reg_prep(struct nix *nix, struct nix_tm_node *node, + volatile uint64_t 
*reg, volatile uint64_t *regval) +{ + uint64_t strict_prio = node->priority; + uint32_t hw_lvl = node->hw_lvl; + uint32_t schq = node->hw_id; + uint64_t rr_quantum; + uint8_t k = 0; + + rr_quantum = nix_tm_weight_to_rr_quantum(node->weight); + + /* For children to root, strict prio is default if either + * device root is TL2 or TL1 Static Priority is disabled. + */ + if (hw_lvl == NIX_TXSCH_LVL_TL2 && + (!nix_tm_have_tl1_access(nix) || nix->tm_flags & NIX_TM_TL1_NO_SP)) + strict_prio = NIX_TM_TL1_DFLT_RR_PRIO; + + plt_tm_dbg("Schedule config node %s(%u) lvl %u id %u, " + "prio 0x%" PRIx64 ", rr_quantum 0x%" PRIx64 " (%p)", + nix_tm_hwlvl2str(node->hw_lvl), schq, node->lvl, node->id, + strict_prio, rr_quantum, node); + + switch (hw_lvl) { + case NIX_TXSCH_LVL_SMQ: + reg[k] = NIX_AF_MDQX_SCHEDULE(schq); + regval[k] = (strict_prio << 24) | rr_quantum; + k++; + + break; + case NIX_TXSCH_LVL_TL4: + reg[k] = NIX_AF_TL4X_SCHEDULE(schq); + regval[k] = (strict_prio << 24) | rr_quantum; + k++; + + break; + case NIX_TXSCH_LVL_TL3: + reg[k] = NIX_AF_TL3X_SCHEDULE(schq); + regval[k] = (strict_prio << 24) | rr_quantum; + k++; + + break; + case NIX_TXSCH_LVL_TL2: + reg[k] = NIX_AF_TL2X_SCHEDULE(schq); + regval[k] = (strict_prio << 24) | rr_quantum; + k++; + + break; + case NIX_TXSCH_LVL_TL1: + reg[k] = NIX_AF_TL1X_SCHEDULE(schq); + regval[k] = rr_quantum; + k++; + + break; + } + + return k; +} + +uint8_t +nix_tm_shaper_reg_prep(struct nix_tm_node *node, + struct nix_tm_shaper_profile *profile, + volatile uint64_t *reg, volatile uint64_t *regval) +{ + struct nix_tm_shaper_data cir, pir; + uint32_t schq = node->hw_id; + uint64_t adjust = 0; + uint8_t k = 0; + + memset(&cir, 0, sizeof(cir)); + memset(&pir, 0, sizeof(pir)); + nix_tm_shaper_conf_get(profile, &cir, &pir); + + if (node->pkt_mode) + adjust = 1; + else if (profile) + adjust = profile->pkt_len_adj; + + plt_tm_dbg("Shaper config node %s(%u) lvl %u id %u, " + "pir %" PRIu64 "(%" PRIu64 "B)," + " cir %" PRIu64 "(%" PRIu64 
"B)" + "adjust 0x%" PRIx64 "(pktmode %u) (%p)", + nix_tm_hwlvl2str(node->hw_lvl), schq, node->lvl, node->id, + pir.rate, pir.burst, cir.rate, cir.burst, adjust, + node->pkt_mode, node); + + switch (node->hw_lvl) { + case NIX_TXSCH_LVL_SMQ: + /* Configure PIR, CIR */ + reg[k] = NIX_AF_MDQX_PIR(schq); + regval[k] = (pir.rate && pir.burst) ? + (nix_tm_shaper2regval(&pir) | 1) : + 0; + k++; + + reg[k] = NIX_AF_MDQX_CIR(schq); + regval[k] = (cir.rate && cir.burst) ? + (nix_tm_shaper2regval(&cir) | 1) : + 0; + k++; + + /* Configure RED ALG */ + reg[k] = NIX_AF_MDQX_SHAPE(schq); + regval[k] = (adjust | (uint64_t)node->red_algo << 9 | + (uint64_t)node->pkt_mode << 24); + k++; + break; + case NIX_TXSCH_LVL_TL4: + /* Configure PIR, CIR */ + reg[k] = NIX_AF_TL4X_PIR(schq); + regval[k] = (pir.rate && pir.burst) ? + (nix_tm_shaper2regval(&pir) | 1) : + 0; + k++; + + reg[k] = NIX_AF_TL4X_CIR(schq); + regval[k] = (cir.rate && cir.burst) ? + (nix_tm_shaper2regval(&cir) | 1) : + 0; + k++; + + /* Configure RED algo */ + reg[k] = NIX_AF_TL4X_SHAPE(schq); + regval[k] = (adjust | (uint64_t)node->red_algo << 9 | + (uint64_t)node->pkt_mode << 24); + k++; + break; + case NIX_TXSCH_LVL_TL3: + /* Configure PIR, CIR */ + reg[k] = NIX_AF_TL3X_PIR(schq); + regval[k] = (pir.rate && pir.burst) ? + (nix_tm_shaper2regval(&pir) | 1) : + 0; + k++; + + reg[k] = NIX_AF_TL3X_CIR(schq); + regval[k] = (cir.rate && cir.burst) ? + (nix_tm_shaper2regval(&cir) | 1) : + 0; + k++; + + /* Configure RED algo */ + reg[k] = NIX_AF_TL3X_SHAPE(schq); + regval[k] = (adjust | (uint64_t)node->red_algo << 9 | + (uint64_t)node->pkt_mode); + k++; + + break; + case NIX_TXSCH_LVL_TL2: + /* Configure PIR, CIR */ + reg[k] = NIX_AF_TL2X_PIR(schq); + regval[k] = (pir.rate && pir.burst) ? + (nix_tm_shaper2regval(&pir) | 1) : + 0; + k++; + + reg[k] = NIX_AF_TL2X_CIR(schq); + regval[k] = (cir.rate && cir.burst) ? 
+ (nix_tm_shaper2regval(&cir) | 1) : + 0; + k++; + + /* Configure RED algo */ + reg[k] = NIX_AF_TL2X_SHAPE(schq); + regval[k] = (adjust | (uint64_t)node->red_algo << 9 | + (uint64_t)node->pkt_mode << 24); + k++; + + break; + case NIX_TXSCH_LVL_TL1: + /* Configure CIR */ + reg[k] = NIX_AF_TL1X_CIR(schq); + regval[k] = (cir.rate && cir.burst) ? + (nix_tm_shaper2regval(&cir) | 1) : + 0; + k++; + + /* Configure length disable and adjust */ + reg[k] = NIX_AF_TL1X_SHAPE(schq); + regval[k] = (adjust | (uint64_t)node->pkt_mode << 24); + k++; + break; + } + + return k; +} + +uint8_t nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable, volatile uint64_t *reg, volatile uint64_t *regval) { diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 4817fd5..9c860ff 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -105,6 +105,8 @@ INTERNAL { roc_nix_switch_hdr_set; roc_nix_eeprom_info_get; roc_nix_tm_free_resources; + roc_nix_tm_hierarchy_disable; + roc_nix_tm_hierarchy_enable; roc_nix_tm_node_add; roc_nix_tm_node_delete; roc_nix_tm_node_get;

From patchwork Thu Apr 1 12:38:02 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90410
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:02 +0530
Message-ID: <20210401123817.14348-38-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 37/52] common/cnxk: add nix tm support for internal hierarchy

Add support to create internal TM default hierarchy and ratelimit hierarchy and API to ratelimit SQ to a given rate. This will be used by cnxk ethdev driver's tx queue ratelimit op.

Signed-off-by: Nithin Dabilpuram --- drivers/common/cnxk/roc_nix.h | 7 ++ drivers/common/cnxk/roc_nix_priv.h | 2 + drivers/common/cnxk/roc_nix_tm.c | 156 +++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_nix_tm_ops.c | 141 +++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 3 + 5 files changed, 309 insertions(+) diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 7bf3435..8992ad3 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -330,6 +330,8 @@ enum roc_tm_node_level { /* * TM runtime hierarchy init API. */ +int __roc_api roc_nix_tm_init(struct roc_nix *roc_nix); +void __roc_api roc_nix_tm_fini(struct roc_nix *roc_nix); int __roc_api roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable); int __roc_api roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq); @@ -392,6 +394,11 @@ struct roc_nix_tm_shaper_profile *__roc_api roc_nix_tm_shaper_profile_next( struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *__prev); /* + * TM ratelimit tree API. + */ +int __roc_api roc_nix_tm_rlimit_sq(struct roc_nix *roc_nix, uint16_t qid, + uint64_t rate); +/* * TM hierarchy enable/disable API.
*/ int __roc_api roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix); diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index a40621c..4e1485f 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -326,6 +326,7 @@ int nix_tm_leaf_data_get(struct nix *nix, uint16_t sq, uint32_t *rr_quantum, int nix_tm_sq_flush_pre(struct roc_nix_sq *sq); int nix_tm_sq_flush_post(struct roc_nix_sq *sq); int nix_tm_smq_xoff(struct nix *nix, struct nix_tm_node *node, bool enable); +int nix_tm_prepare_default_tree(struct roc_nix *roc_nix); int nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node); int nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, enum roc_nix_tm_tree tree, bool free); @@ -344,6 +345,7 @@ int nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree); int nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree); int nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node, bool rr_quantum_only); +int nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix); /* * TM priv utils. diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c index 762c85a..9b328c9 100644 --- a/drivers/common/cnxk/roc_nix_tm.c +++ b/drivers/common/cnxk/roc_nix_tm.c @@ -1089,6 +1089,162 @@ nix_tm_alloc_txschq(struct nix *nix, enum roc_nix_tm_tree tree) } int +nix_tm_prepare_default_tree(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint32_t nonleaf_id = nix->nb_tx_queues; + struct nix_tm_node *node = NULL; + uint8_t leaf_lvl, lvl, lvl_end; + uint32_t parent, i; + int rc = 0; + + /* Add ROOT, SCH1, SCH2, SCH3, [SCH4] nodes */ + parent = ROC_NIX_TM_NODE_ID_INVALID; + /* With TL1 access we have an extra level */ + lvl_end = (nix_tm_have_tl1_access(nix) ? 
ROC_TM_LVL_SCH4 : + ROC_TM_LVL_SCH3); + + for (lvl = ROC_TM_LVL_ROOT; lvl <= lvl_end; lvl++) { + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = lvl; + node->tree = ROC_NIX_TM_DEFAULT; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + parent = nonleaf_id; + nonleaf_id++; + } + + parent = nonleaf_id - 1; + leaf_lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_QUEUE : + ROC_TM_LVL_SCH4); + + /* Add leaf nodes */ + for (i = 0; i < nix->nb_tx_queues; i++) { + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = i; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = leaf_lvl; + node->tree = ROC_NIX_TM_DEFAULT; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + } + + return 0; +error: + nix_tm_node_free(node); + return rc; +} + +int +nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint32_t nonleaf_id = nix->nb_tx_queues; + struct nix_tm_node *node = NULL; + uint8_t leaf_lvl, lvl, lvl_end; + uint32_t parent, i; + int rc = 0; + + /* Add ROOT, SCH1, SCH2 nodes */ + parent = ROC_NIX_TM_NODE_ID_INVALID; + lvl_end = (nix_tm_have_tl1_access(nix) ? 
ROC_TM_LVL_SCH3 : + ROC_TM_LVL_SCH2); + + for (lvl = ROC_TM_LVL_ROOT; lvl <= lvl_end; lvl++) { + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = lvl; + node->tree = ROC_NIX_TM_RLIMIT; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + parent = nonleaf_id; + nonleaf_id++; + } + + /* SMQ is mapped to SCH4 when we have TL1 access and SCH3 otherwise */ + lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_SCH4 : ROC_TM_LVL_SCH3); + + /* Add per queue SMQ nodes i.e SCH4 / SCH3 */ + for (i = 0; i < nix->nb_tx_queues; i++) { + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id + i; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = lvl; + node->tree = ROC_NIX_TM_RLIMIT; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + } + + parent = nonleaf_id; + leaf_lvl = (nix_tm_have_tl1_access(nix) ? 
ROC_TM_LVL_QUEUE : + ROC_TM_LVL_SCH4); + + /* Add leaf nodes */ + for (i = 0; i < nix->nb_tx_queues; i++) { + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = i; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = leaf_lvl; + node->tree = ROC_NIX_TM_RLIMIT; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + } + + return 0; +error: + nix_tm_node_free(node); + return rc; +} + +int nix_tm_free_resources(struct roc_nix *roc_nix, uint32_t tree_mask, bool hw_only) { struct nix *nix = roc_nix_to_nix_priv(roc_nix); diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c index 6bb0766..d13cc8a 100644 --- a/drivers/common/cnxk/roc_nix_tm_ops.c +++ b/drivers/common/cnxk/roc_nix_tm_ops.c @@ -543,3 +543,144 @@ roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix, enum roc_nix_tm_tree tree, nix->tm_flags |= NIX_TM_HIERARCHY_ENA; return 0; } + +int +roc_nix_tm_init(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint32_t tree_mask; + int rc; + + if (nix->tm_flags & NIX_TM_HIERARCHY_ENA) { + plt_err("Cannot init while existing hierarchy is enabled"); + return -EBUSY; + } + + /* Free up all user resources already held */ + tree_mask = NIX_TM_TREE_MASK_ALL; + rc = nix_tm_free_resources(roc_nix, tree_mask, false); + if (rc) { + plt_err("Failed to freeup all nodes and resources, rc=%d", rc); + return rc; + } + + /* Prepare default tree */ + rc = nix_tm_prepare_default_tree(roc_nix); + if (rc) { + plt_err("failed to prepare default tm tree, rc=%d", rc); + return rc; + } + + /* Prepare rlimit tree */ + rc = nix_tm_prepare_rate_limited_tree(roc_nix); + if (rc) { + plt_err("failed to prepare rlimit tm tree, rc=%d", rc); + return rc; + } + + return rc; +} + +int +roc_nix_tm_rlimit_sq(struct roc_nix *roc_nix, uint16_t qid, uint64_t rate) +{ + struct nix *nix = 
roc_nix_to_nix_priv(roc_nix); + struct nix_tm_shaper_profile profile; + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_tm_node *node, *parent; + + volatile uint64_t *reg, *regval; + struct nix_txschq_config *req; + uint16_t flags; + uint8_t k = 0; + int rc; + + if (nix->tm_tree != ROC_NIX_TM_RLIMIT || + !(nix->tm_flags & NIX_TM_HIERARCHY_ENA)) + return NIX_ERR_TM_INVALID_TREE; + + node = nix_tm_node_search(nix, qid, ROC_NIX_TM_RLIMIT); + + /* check if we found a valid leaf node */ + if (!node || !nix_tm_is_leaf(nix, node->lvl) || !node->parent || + node->parent->hw_id == NIX_TM_HW_ID_INVALID) + return NIX_ERR_TM_INVALID_NODE; + + parent = node->parent; + flags = parent->flags; + + req = mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = NIX_TXSCH_LVL_MDQ; + reg = req->reg; + regval = req->regval; + + if (rate == 0) { + k += nix_tm_sw_xoff_prep(parent, true, ®[k], ®val[k]); + flags &= ~NIX_TM_NODE_ENABLED; + goto exit; + } + + if (!(flags & NIX_TM_NODE_ENABLED)) { + k += nix_tm_sw_xoff_prep(parent, false, ®[k], ®val[k]); + flags |= NIX_TM_NODE_ENABLED; + } + + /* Use only PIR for rate limit */ + memset(&profile, 0, sizeof(profile)); + profile.peak.rate = rate; + /* Minimum burst of ~4us Bytes of Tx */ + profile.peak.size = PLT_MAX((uint64_t)roc_nix_max_pkt_len(roc_nix), + (4ul * rate) / ((uint64_t)1E6 * 8)); + if (!nix->tm_rate_min || nix->tm_rate_min > rate) + nix->tm_rate_min = rate; + + k += nix_tm_shaper_reg_prep(parent, &profile, ®[k], ®val[k]); +exit: + req->num_regs = k; + rc = mbox_process(mbox); + if (rc) + return rc; + + parent->flags = flags; + return 0; +} + +void +roc_nix_tm_fini(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_txsch_free_req *req; + uint32_t tree_mask; + uint8_t hw_lvl; + int rc; + + /* Xmit is assumed to be disabled */ + /* Free up resources already held */ + tree_mask = NIX_TM_TREE_MASK_ALL; + rc = nix_tm_free_resources(roc_nix, tree_mask, false); 
+ if (rc) + plt_err("Failed to freeup existing nodes or rsrcs, rc=%d", rc); + + /* Free all other hw resources */ + req = mbox_alloc_msg_nix_txsch_free(mbox); + if (req == NULL) + return; + + req->flags = TXSCHQ_FREE_ALL; + rc = mbox_process(mbox); + if (rc) + plt_err("Failed to freeup all res, rc=%d", rc); + + for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) { + plt_bitmap_reset(nix->schq_bmp[hw_lvl]); + plt_bitmap_reset(nix->schq_contig_bmp[hw_lvl]); + nix->contig_rsvd[hw_lvl] = 0; + nix->discontig_rsvd[hw_lvl] = 0; + } + + /* Clear shaper profiles */ + nix_tm_clear_shaper_profiles(nix); + nix->tm_tree = 0; + nix->tm_flags &= ~NIX_TM_HIERARCHY_ENA; +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 9c860ff..854c3c1 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -104,9 +104,11 @@ INTERNAL { roc_nix_xstats_names_get; roc_nix_switch_hdr_set; roc_nix_eeprom_info_get; + roc_nix_tm_fini; roc_nix_tm_free_resources; roc_nix_tm_hierarchy_disable; roc_nix_tm_hierarchy_enable; + roc_nix_tm_init; roc_nix_tm_node_add; roc_nix_tm_node_delete; roc_nix_tm_node_get; @@ -114,6 +116,7 @@ INTERNAL { roc_nix_tm_node_name_get; roc_nix_tm_node_next; roc_nix_tm_node_pkt_mode_update; + roc_nix_tm_rlimit_sq; roc_nix_tm_shaper_profile_add; roc_nix_tm_shaper_profile_delete; roc_nix_tm_shaper_profile_get;

From patchwork Thu Apr 1 12:38:03 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90411
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:03 +0530
Message-ID: <20210401123817.14348-39-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 38/52] common/cnxk: add nix tm dynamic update support

Add support for dynamic node update of shaper profile, RR quantum and also support to suspend or resume an active TM node.

Signed-off-by: Nithin Dabilpuram --- drivers/common/cnxk/roc_nix.h | 10 ++ drivers/common/cnxk/roc_nix_tm_ops.c | 220 +++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 3 + 3 files changed, 233 insertions(+) diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 8992ad3..ad00efe 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -375,6 +375,16 @@ int __roc_api roc_nix_tm_node_add(struct roc_nix *roc_nix, int __roc_api roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, bool free); int __roc_api roc_nix_tm_free_resources(struct roc_nix *roc_nix, bool hw_only); +int __roc_api roc_nix_tm_node_suspend_resume(struct roc_nix *roc_nix, + uint32_t node_id, bool suspend); +int __roc_api roc_nix_tm_node_parent_update(struct roc_nix *roc_nix, + uint32_t node_id, + uint32_t new_parent_id, + uint32_t priority, uint32_t weight); +int __roc_api roc_nix_tm_node_shaper_update(struct roc_nix *roc_nix, + uint32_t node_id, + uint32_t profile_id, + bool force_update); int __roc_api roc_nix_tm_node_pkt_mode_update(struct roc_nix *roc_nix, uint32_t node_id, bool pkt_mode); int __roc_api
roc_nix_tm_shaper_profile_add( diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c index d13cc8a..e4463d1 100644 --- a/drivers/common/cnxk/roc_nix_tm_ops.c +++ b/drivers/common/cnxk/roc_nix_tm_ops.c @@ -545,6 +545,226 @@ roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix, enum roc_nix_tm_tree tree, } int +roc_nix_tm_node_suspend_resume(struct roc_nix *roc_nix, uint32_t node_id, + bool suspend) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_txschq_config *req; + struct nix_tm_node *node; + uint16_t flags; + int rc; + + node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER); + if (!node) + return NIX_ERR_TM_INVALID_NODE; + + flags = node->flags; + flags = suspend ? (flags & ~NIX_TM_NODE_ENABLED) : + (flags | NIX_TM_NODE_ENABLED); + + if (node->flags == flags) + return 0; + + /* send mbox for state change */ + req = mbox_alloc_msg_nix_txschq_cfg(mbox); + + req->lvl = node->hw_lvl; + req->num_regs = + nix_tm_sw_xoff_prep(node, suspend, req->reg, req->regval); + rc = mbox_process(mbox); + if (!rc) + node->flags = flags; + return rc; +} + +int +roc_nix_tm_node_shaper_update(struct roc_nix *roc_nix, uint32_t node_id, + uint32_t profile_id, bool force_update) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_shaper_profile *profile = NULL; + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_txschq_config *req; + struct nix_tm_node *node; + uint8_t k; + int rc; + + /* Shaper updates valid only for user nodes */ + node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER); + if (!node || nix_tm_is_leaf(nix, node->lvl)) + return NIX_ERR_TM_INVALID_NODE; + + if (profile_id != ROC_NIX_TM_SHAPER_PROFILE_NONE) { + profile = nix_tm_shaper_profile_search(nix, profile_id); + if (!profile) + return NIX_ERR_TM_INVALID_SHAPER_PROFILE; + } + + /* Pkt mode should match existing node's pkt mode */ + if (profile && profile->pkt_mode != node->pkt_mode) + return 
NIX_ERR_TM_PKT_MODE_MISMATCH; + + if ((profile_id == node->shaper_profile_id) && !force_update) { + return 0; + } else if (profile_id != node->shaper_profile_id) { + struct nix_tm_shaper_profile *old; + + /* Find old shaper profile and reduce ref count */ + old = nix_tm_shaper_profile_search(nix, + node->shaper_profile_id); + if (old) + old->ref_cnt--; + + if (profile) + profile->ref_cnt++; + + /* Reduce older shaper ref count and increase new one */ + node->shaper_profile_id = profile_id; + } + + /* Nothing to do if hierarchy not yet enabled */ + if (!(nix->tm_flags & NIX_TM_HIERARCHY_ENA)) + return 0; + + node->flags &= ~NIX_TM_NODE_ENABLED; + + /* Flush the specific node with SW_XOFF */ + req = mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = node->hw_lvl; + k = nix_tm_sw_xoff_prep(node, true, req->reg, req->regval); + req->num_regs = k; + + rc = mbox_process(mbox); + if (rc) + return rc; + + /* Update the PIR/CIR and clear SW XOFF */ + req = mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = node->hw_lvl; + + k = nix_tm_shaper_reg_prep(node, profile, req->reg, req->regval); + + k += nix_tm_sw_xoff_prep(node, false, &req->reg[k], &req->regval[k]); + + req->num_regs = k; + rc = mbox_process(mbox); + if (!rc) + node->flags |= NIX_TM_NODE_ENABLED; + return rc; +} + +int +roc_nix_tm_node_parent_update(struct roc_nix *roc_nix, uint32_t node_id, + uint32_t new_parent_id, uint32_t priority, + uint32_t weight) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_tm_node *node, *sibling; + struct nix_tm_node *new_parent; + struct nix_txschq_config *req; + struct nix_tm_node_list *list; + uint8_t k; + int rc; + + node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER); + if (!node) + return NIX_ERR_TM_INVALID_NODE; + + /* Parent id valid only for non root nodes */ + if (node->hw_lvl != nix->tm_root_lvl) { + new_parent = + nix_tm_node_search(nix, new_parent_id, ROC_NIX_TM_USER); + if (!new_parent) + return 
NIX_ERR_TM_INVALID_PARENT;
+
+		/* Current support is only for dynamic weight update */
+		if (node->parent != new_parent || node->priority != priority)
+			return NIX_ERR_TM_PARENT_PRIO_UPDATE;
+	}
+
+	list = nix_tm_node_list(nix, ROC_NIX_TM_USER);
+	/* Skip if no change */
+	if (node->weight == weight)
+		return 0;
+
+	node->weight = weight;
+
+	/* Nothing to do if hierarchy not yet enabled */
+	if (!(nix->tm_flags & NIX_TM_HIERARCHY_ENA))
+		return 0;
+
+	/* For leaf nodes, SQ CTX needs update */
+	if (nix_tm_is_leaf(nix, node->lvl)) {
+		/* Update SQ quantum data on the fly */
+		rc = nix_tm_sq_sched_conf(nix, node, true);
+		if (rc)
+			return NIX_ERR_TM_SQ_UPDATE_FAIL;
+	} else {
+		/* XOFF Parent node */
+		req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+		req->lvl = node->parent->hw_lvl;
+		req->num_regs = nix_tm_sw_xoff_prep(node->parent, true,
+						    req->reg, req->regval);
+		rc = mbox_process(mbox);
+		if (rc)
+			return rc;
+
+		/* XOFF this node and all other siblings */
+		req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+		req->lvl = node->hw_lvl;
+
+		k = 0;
+		TAILQ_FOREACH(sibling, list, node) {
+			if (sibling->parent != node->parent)
+				continue;
+			k += nix_tm_sw_xoff_prep(sibling, true, &req->reg[k],
+						 &req->regval[k]);
+		}
+		req->num_regs = k;
+		rc = mbox_process(mbox);
+		if (rc)
+			return rc;
+
+		/* Update new weight for current node */
+		req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+		req->lvl = node->hw_lvl;
+		req->num_regs =
+			nix_tm_sched_reg_prep(nix, node, req->reg, req->regval);
+		rc = mbox_process(mbox);
+		if (rc)
+			return rc;
+
+		/* XON this node and all other siblings */
+		req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+		req->lvl = node->hw_lvl;
+
+		k = 0;
+		TAILQ_FOREACH(sibling, list, node) {
+			if (sibling->parent != node->parent)
+				continue;
+			k += nix_tm_sw_xoff_prep(sibling, false, &req->reg[k],
+						 &req->regval[k]);
+		}
+		req->num_regs = k;
+		rc = mbox_process(mbox);
+		if (rc)
+			return rc;
+
+		/* XON Parent node */
+		req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+		req->lvl = node->parent->hw_lvl;
+		req->num_regs = nix_tm_sw_xoff_prep(node->parent, false,
+						    req->reg, req->regval);
+		rc = mbox_process(mbox);
+		if (rc)
+			return rc;
+	}
+	return 0;
+}
+
+int
 roc_nix_tm_init(struct roc_nix *roc_nix)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 854c3c1..a5efbea 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -115,7 +115,10 @@ INTERNAL {
 	roc_nix_tm_node_lvl;
 	roc_nix_tm_node_name_get;
 	roc_nix_tm_node_next;
+	roc_nix_tm_node_parent_update;
 	roc_nix_tm_node_pkt_mode_update;
+	roc_nix_tm_node_shaper_update;
+	roc_nix_tm_node_suspend_resume;
 	roc_nix_tm_rlimit_sq;
 	roc_nix_tm_shaper_profile_add;
 	roc_nix_tm_shaper_profile_delete;

From patchwork Thu Apr 1 12:38:04 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90412
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:04 +0530
Message-ID: <20210401123817.14348-40-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v3 39/52] common/cnxk: add nix tm debug support and misc utils
List-Id: DPDK patches and discussions
Add support to dump TM HW registers and hierarchy on error. This patch
also adds misc utils such as APIs to query TM HW resource availability,
pre-allocate resources, and query static priority support on the root
node.

Signed-off-by: Nithin Dabilpuram
---
 drivers/common/cnxk/roc_nix.h          |   9 +
 drivers/common/cnxk/roc_nix_debug.c    | 330 +++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_tm.c       |   1 +
 drivers/common/cnxk/roc_nix_tm_ops.c   | 125 +++++++++++++
 drivers/common/cnxk/roc_nix_tm_utils.c |  18 ++
 drivers/common/cnxk/roc_utils.c        | 108 +++++++++++
 drivers/common/cnxk/version.map        |   6 +
 7 files changed, 597 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index ad00efe..b39f461 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -291,6 +291,7 @@ void __roc_api roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
 void __roc_api roc_nix_rq_dump(struct roc_nix_rq *rq);
 void __roc_api roc_nix_cq_dump(struct roc_nix_cq *cq);
 void __roc_api roc_nix_sq_dump(struct roc_nix_sq *sq);
+void __roc_api roc_nix_tm_dump(struct roc_nix *roc_nix);
 void __roc_api roc_nix_dump(struct roc_nix *roc_nix);
 
 /* IRQ */
@@ -394,6 +395,10 @@ int __roc_api roc_nix_tm_shaper_profile_update(
 int __roc_api roc_nix_tm_shaper_profile_delete(struct roc_nix *roc_nix,
					       uint32_t id);
 
+int __roc_api roc_nix_tm_prealloc_res(struct roc_nix *roc_nix, uint8_t lvl,
+				      uint16_t discontig, uint16_t contig);
+uint16_t __roc_api roc_nix_tm_leaf_cnt(struct roc_nix *roc_nix);
+
 struct roc_nix_tm_node *__roc_api roc_nix_tm_node_get(struct roc_nix *roc_nix,
						      uint32_t node_id);
 struct roc_nix_tm_node *__roc_api
@@ -420,6 +425,10 @@ int __roc_api roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix,
  * TM utilities API.
*/ int __roc_api roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id); +bool __roc_api roc_nix_tm_root_has_sp(struct roc_nix *roc_nix); +void __roc_api roc_nix_tm_rsrc_max(bool pf, uint16_t schq[ROC_TM_LVL_MAX]); +int __roc_api roc_nix_tm_rsrc_count(struct roc_nix *roc_nix, + uint16_t schq[ROC_TM_LVL_MAX]); int __roc_api roc_nix_tm_node_name_get(struct roc_nix *roc_nix, uint32_t node_id, char *buf, size_t buflen); diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c index a0cf98e..6e56513 100644 --- a/drivers/common/cnxk/roc_nix_debug.c +++ b/drivers/common/cnxk/roc_nix_debug.c @@ -44,6 +44,33 @@ static const struct nix_lf_reg_info nix_lf_reg[] = { NIX_REG_INFO(NIX_LF_SEND_ERR_DBG), }; +static void +nix_bitmap_dump(struct plt_bitmap *bmp) +{ + uint32_t pos = 0, start_pos; + uint64_t slab = 0; + int i; + + plt_bitmap_scan_init(bmp); + plt_bitmap_scan(bmp, &pos, &slab); + start_pos = pos; + + nix_dump_no_nl(" \t\t["); + do { + if (!slab) + break; + i = 0; + + for (i = 0; i < 64; i++) + if (slab & (1ULL << i)) + nix_dump_no_nl("%d, ", i); + + if (!plt_bitmap_scan(bmp, &pos, &slab)) + break; + } while (start_pos != pos); + nix_dump_no_nl(" ]"); +} + int roc_nix_lf_get_reg_count(struct roc_nix *roc_nix) { @@ -761,6 +788,309 @@ roc_nix_sq_dump(struct roc_nix_sq *sq) nix_dump(" fc = %p", sq->fc); }; +static uint8_t +nix_tm_reg_dump_prep(uint16_t hw_lvl, uint16_t schq, uint16_t link, + uint64_t *reg, char regstr[][NIX_REG_NAME_SZ]) +{ + uint8_t k = 0; + + switch (hw_lvl) { + case NIX_TXSCH_LVL_SMQ: + reg[k] = NIX_AF_SMQX_CFG(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_SMQ[%u]_CFG", + schq); + + reg[k] = NIX_AF_MDQX_PARENT(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_MDQ[%u]_PARENT", + schq); + + reg[k] = NIX_AF_MDQX_SCHEDULE(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_MDQ[%u]_SCHEDULE", schq); + + reg[k] = NIX_AF_MDQX_PIR(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_MDQ[%u]_PIR", + 
schq); + + reg[k] = NIX_AF_MDQX_CIR(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_MDQ[%u]_CIR", + schq); + + reg[k] = NIX_AF_MDQX_SHAPE(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_MDQ[%u]_SHAPE", + schq); + + reg[k] = NIX_AF_MDQX_SW_XOFF(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_MDQ[%u]_SW_XOFF", + schq); + break; + case NIX_TXSCH_LVL_TL4: + reg[k] = NIX_AF_TL4X_PARENT(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL4[%u]_PARENT", + schq); + + reg[k] = NIX_AF_TL4X_TOPOLOGY(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL4[%u]_TOPOLOGY", schq); + + reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL4[%u]_SDP_LINK_CFG", schq); + + reg[k] = NIX_AF_TL4X_SCHEDULE(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL4[%u]_SCHEDULE", schq); + + reg[k] = NIX_AF_TL4X_PIR(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL4[%u]_PIR", + schq); + + reg[k] = NIX_AF_TL4X_CIR(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL4[%u]_CIR", + schq); + + reg[k] = NIX_AF_TL4X_SHAPE(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL4[%u]_SHAPE", + schq); + + reg[k] = NIX_AF_TL4X_SW_XOFF(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL4[%u]_SW_XOFF", + schq); + break; + case NIX_TXSCH_LVL_TL3: + reg[k] = NIX_AF_TL3X_PARENT(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL3[%u]_PARENT", + schq); + + reg[k] = NIX_AF_TL3X_TOPOLOGY(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL3[%u]_TOPOLOGY", schq); + + reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link); + + reg[k] = NIX_AF_TL3X_SCHEDULE(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL3[%u]_SCHEDULE", schq); + + reg[k] = NIX_AF_TL3X_PIR(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL3[%u]_PIR", + schq); + + reg[k] = NIX_AF_TL3X_CIR(schq); + snprintf(regstr[k++], 
NIX_REG_NAME_SZ, "NIX_AF_TL3[%u]_CIR", + schq); + + reg[k] = NIX_AF_TL3X_SHAPE(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL3[%u]_SHAPE", + schq); + + reg[k] = NIX_AF_TL3X_SW_XOFF(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL3[%u]_SW_XOFF", + schq); + break; + case NIX_TXSCH_LVL_TL2: + reg[k] = NIX_AF_TL2X_PARENT(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL2[%u]_PARENT", + schq); + + reg[k] = NIX_AF_TL2X_TOPOLOGY(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL2[%u]_TOPOLOGY", schq); + + reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link); + + reg[k] = NIX_AF_TL2X_SCHEDULE(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL2[%u]_SCHEDULE", schq); + + reg[k] = NIX_AF_TL2X_PIR(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL2[%u]_PIR", + schq); + + reg[k] = NIX_AF_TL2X_CIR(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL2[%u]_CIR", + schq); + + reg[k] = NIX_AF_TL2X_SHAPE(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL2[%u]_SHAPE", + schq); + + reg[k] = NIX_AF_TL2X_SW_XOFF(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL2[%u]_SW_XOFF", + schq); + break; + case NIX_TXSCH_LVL_TL1: + + reg[k] = NIX_AF_TL1X_TOPOLOGY(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL1[%u]_TOPOLOGY", schq); + + reg[k] = NIX_AF_TL1X_SCHEDULE(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL1[%u]_SCHEDULE", schq); + + reg[k] = NIX_AF_TL1X_CIR(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL1[%u]_CIR", + schq); + + reg[k] = NIX_AF_TL1X_SW_XOFF(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL1[%u]_SW_XOFF", + schq); + + reg[k] = NIX_AF_TL1X_DROPPED_PACKETS(schq); + snprintf(regstr[k++], NIX_REG_NAME_SZ, + "NIX_AF_TL1[%u]_DROPPED_PACKETS", schq); + break; + default: + break; + } + + if (k > MAX_REGS_PER_MBOX_MSG) { + nix_dump("\t!!!NIX TM Registers request overflow!!!"); 
+ return 0; + } + return k; +} + +static void +nix_tm_dump_lvl(struct nix *nix, struct nix_tm_node_list *list, uint8_t hw_lvl) +{ + char regstr[MAX_REGS_PER_MBOX_MSG * 2][NIX_REG_NAME_SZ]; + uint64_t reg[MAX_REGS_PER_MBOX_MSG * 2]; + struct mbox *mbox = (&nix->dev)->mbox; + struct nix_txschq_config *req, *rsp; + const char *lvlstr, *parent_lvlstr; + struct nix_tm_node *node, *parent; + struct nix_tm_node *root = NULL; + uint32_t schq, parent_schq; + bool found = false; + uint8_t j, k, rc; + + TAILQ_FOREACH(node, list, node) { + if (node->hw_lvl != hw_lvl) + continue; + + found = true; + parent = node->parent; + if (hw_lvl == NIX_TXSCH_LVL_CNT) { + lvlstr = "SQ"; + schq = node->id; + } else { + lvlstr = nix_tm_hwlvl2str(node->hw_lvl); + schq = node->hw_id; + } + + if (parent) { + parent_schq = parent->hw_id; + parent_lvlstr = nix_tm_hwlvl2str(parent->hw_lvl); + } else if (node->hw_lvl == NIX_TXSCH_LVL_TL1) { + parent_schq = nix->tx_link; + parent_lvlstr = "LINK"; + } else { + parent_schq = node->parent_hw_id; + parent_lvlstr = nix_tm_hwlvl2str(node->hw_lvl + 1); + } + + nix_dump("\t(%p%s) %s_%d->%s_%d", node, + node->child_realloc ? 
"[CR]" : "", lvlstr, schq, + parent_lvlstr, parent_schq); + + if (!(node->flags & NIX_TM_NODE_HWRES)) + continue; + + /* Need to dump TL1 when root is TL2 */ + if (node->hw_lvl == nix->tm_root_lvl) + root = node; + + /* Dump registers only when HWRES is present */ + k = nix_tm_reg_dump_prep(node->hw_lvl, schq, nix->tx_link, reg, + regstr); + if (!k) + continue; + + req = mbox_alloc_msg_nix_txschq_cfg(mbox); + req->read = 1; + req->lvl = node->hw_lvl; + req->num_regs = k; + mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k); + rc = mbox_process_msg(mbox, (void **)&rsp); + if (!rc) { + for (j = 0; j < k; j++) + nix_dump("\t\t%s=0x%016" PRIx64, regstr[j], + rsp->regval[j]); + } else { + nix_dump("\t!!!Failed to dump registers!!!"); + } + } + + if (found) + nix_dump("\n"); + + /* Dump TL1 node data when root level is TL2 */ + if (root && root->hw_lvl == NIX_TXSCH_LVL_TL2) { + k = nix_tm_reg_dump_prep(NIX_TXSCH_LVL_TL1, root->parent_hw_id, + nix->tx_link, reg, regstr); + if (!k) + return; + + req = mbox_alloc_msg_nix_txschq_cfg(mbox); + req->read = 1; + req->lvl = NIX_TXSCH_LVL_TL1; + req->num_regs = k; + mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k); + rc = mbox_process_msg(mbox, (void **)&rsp); + if (!rc) { + for (j = 0; j < k; j++) + nix_dump("\t\t%s=0x%016" PRIx64, regstr[j], + rsp->regval[j]); + } else { + nix_dump("\t!!!Failed to dump registers!!!"); + } + nix_dump("\n"); + } +} + +void +roc_nix_tm_dump(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + uint8_t hw_lvl, i; + + nix_dump("===TM hierarchy and registers dump of %s (pf:vf) (%d:%d)===", + nix->pci_dev->name, dev_get_pf(dev->pf_func), + dev_get_vf(dev->pf_func)); + + /* Dump all trees */ + for (i = 0; i < ROC_NIX_TM_TREE_MAX; i++) { + nix_dump("\tTM %s:", nix_tm_tree2str(i)); + for (hw_lvl = 0; hw_lvl <= NIX_TXSCH_LVL_CNT; hw_lvl++) + nix_tm_dump_lvl(nix, &nix->trees[i], hw_lvl); + } + + /* Dump unused resources */ + nix_dump("\tTM unused 
resources:");
+	hw_lvl = NIX_TXSCH_LVL_SMQ;
+	for (; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) {
+		nix_dump("\t\ttxschq %7s num = %d",
+			 nix_tm_hwlvl2str(hw_lvl),
+			 nix_tm_resource_avail(nix, hw_lvl, false));
+
+		nix_bitmap_dump(nix->schq_bmp[hw_lvl]);
+		nix_dump("\n");
+
+		nix_dump("\t\ttxschq_contig %7s num = %d",
+			 nix_tm_hwlvl2str(hw_lvl),
+			 nix_tm_resource_avail(nix, hw_lvl, true));
+		nix_bitmap_dump(nix->schq_contig_bmp[hw_lvl]);
+		nix_dump("\n");
+	}
+}
+
 void
 roc_nix_dump(struct roc_nix *roc_nix)
 {
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 9b328c9..ad54e17 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -393,6 +393,7 @@ roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq)
 	return 0;
 
 exit:
+	roc_nix_tm_dump(sq->roc_nix);
 	roc_nix_queues_ctx_dump(sq->roc_nix);
 	return -EFAULT;
 }
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index e4463d1..ed244d4 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -579,6 +579,58 @@ roc_nix_tm_node_suspend_resume(struct roc_nix *roc_nix, uint32_t node_id,
 }
 
 int
+roc_nix_tm_prealloc_res(struct roc_nix *roc_nix, uint8_t lvl,
+			uint16_t discontig, uint16_t contig)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct mbox *mbox = (&nix->dev)->mbox;
+	struct nix_txsch_alloc_req *req;
+	struct nix_txsch_alloc_rsp *rsp;
+	uint8_t hw_lvl;
+	int rc = -ENOSPC;
+
+	hw_lvl = nix_tm_lvl2nix(nix, lvl);
+	if (hw_lvl == NIX_TXSCH_LVL_CNT)
+		return -EINVAL;
+
+	/* Preallocate contiguous */
+	if (nix->contig_rsvd[hw_lvl] < contig) {
+		req = mbox_alloc_msg_nix_txsch_alloc(mbox);
+		if (req == NULL)
+			return rc;
+		req->schq_contig[hw_lvl] = contig - nix->contig_rsvd[hw_lvl];
+
+		rc = mbox_process_msg(mbox, (void *)&rsp);
+		if (rc)
+			return rc;
+
+		nix_tm_copy_rsp_to_nix(nix, rsp);
+	}
+
+	/* Preallocate discontiguous */
+	if (nix->discontig_rsvd[hw_lvl] < discontig) {
+		req =
mbox_alloc_msg_nix_txsch_alloc(mbox); + if (req == NULL) + return -ENOSPC; + req->schq[hw_lvl] = discontig - nix->discontig_rsvd[hw_lvl]; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + nix_tm_copy_rsp_to_nix(nix, rsp); + } + + /* Save thresholds */ + nix->contig_rsvd[hw_lvl] = contig; + nix->discontig_rsvd[hw_lvl] = discontig; + /* Release anything present above thresholds */ + nix_tm_release_resources(nix, hw_lvl, true, true); + nix_tm_release_resources(nix, hw_lvl, false, true); + return 0; +} + +int roc_nix_tm_node_shaper_update(struct roc_nix *roc_nix, uint32_t node_id, uint32_t profile_id, bool force_update) { @@ -904,3 +956,76 @@ roc_nix_tm_fini(struct roc_nix *roc_nix) nix->tm_tree = 0; nix->tm_flags &= ~NIX_TM_HIERARCHY_ENA; } + +int +roc_nix_tm_rsrc_count(struct roc_nix *roc_nix, uint16_t schq[ROC_TM_LVL_MAX]) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = (&nix->dev)->mbox; + struct free_rsrcs_rsp *rsp; + uint8_t hw_lvl; + int rc, i; + + /* Get the current free resources */ + mbox_alloc_msg_free_rsrc_cnt(mbox); + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + for (i = 0; i < ROC_TM_LVL_MAX; i++) { + hw_lvl = nix_tm_lvl2nix(nix, i); + if (hw_lvl == NIX_TXSCH_LVL_CNT) + continue; + + schq[i] = (nix->is_nix1 ? rsp->schq_nix1[hw_lvl] : + rsp->schq[hw_lvl]); + } + + return 0; +} + +void +roc_nix_tm_rsrc_max(bool pf, uint16_t schq[ROC_TM_LVL_MAX]) +{ + uint8_t hw_lvl, i; + uint16_t max; + + for (i = 0; i < ROC_TM_LVL_MAX; i++) { + hw_lvl = pf ? nix_tm_lvl2nix_tl1_root(i) : + nix_tm_lvl2nix_tl2_root(i); + + switch (hw_lvl) { + case NIX_TXSCH_LVL_SMQ: + max = (roc_model_is_cn9k() ? + NIX_CN9K_TXSCH_LVL_SMQ_MAX : + NIX_TXSCH_LVL_SMQ_MAX); + break; + case NIX_TXSCH_LVL_TL4: + max = NIX_TXSCH_LVL_TL4_MAX; + break; + case NIX_TXSCH_LVL_TL3: + max = NIX_TXSCH_LVL_TL3_MAX; + break; + case NIX_TXSCH_LVL_TL2: + max = pf ? 
NIX_TXSCH_LVL_TL2_MAX : 1; + break; + case NIX_TXSCH_LVL_TL1: + max = pf ? 1 : 0; + break; + default: + max = 0; + break; + } + schq[i] = max; + } +} + +bool +roc_nix_tm_root_has_sp(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + if (nix->tm_flags & NIX_TM_TL1_NO_SP) + return false; + return true; +} diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c index b644716..1d7dd68 100644 --- a/drivers/common/cnxk/roc_nix_tm_utils.c +++ b/drivers/common/cnxk/roc_nix_tm_utils.c @@ -868,6 +868,24 @@ nix_tm_resource_estimate(struct nix *nix, uint16_t *schq_contig, uint16_t *schq, return cnt; } +uint16_t +roc_nix_tm_leaf_cnt(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct nix_tm_node_list *list; + struct nix_tm_node *node; + uint16_t leaf_cnt = 0; + + /* Count leafs only in user list */ + list = nix_tm_node_list(nix, ROC_NIX_TM_USER); + TAILQ_FOREACH(node, list, node) { + if (node->id < nix->nb_tx_queues) + leaf_cnt++; + } + + return leaf_cnt; +} + int roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id) { diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c index 2b157a3..6cfa28d 100644 --- a/drivers/common/cnxk/roc_utils.c +++ b/drivers/common/cnxk/roc_utils.c @@ -38,6 +38,69 @@ roc_error_msg_get(int errorcode) case NIX_ERR_AQ_WRITE_FAILED: err_msg = "AQ write failed"; break; + case NIX_ERR_TM_LEAF_NODE_GET: + err_msg = "TM leaf node get failed"; + break; + case NIX_ERR_TM_INVALID_LVL: + err_msg = "TM node level invalid"; + break; + case NIX_ERR_TM_INVALID_PRIO: + err_msg = "TM node priority invalid"; + break; + case NIX_ERR_TM_INVALID_PARENT: + err_msg = "TM parent id invalid"; + break; + case NIX_ERR_TM_NODE_EXISTS: + err_msg = "TM Node Exists"; + break; + case NIX_ERR_TM_INVALID_NODE: + err_msg = "TM node id invalid"; + break; + case NIX_ERR_TM_INVALID_SHAPER_PROFILE: + err_msg = "TM shaper profile invalid"; + break; + case 
NIX_ERR_TM_WEIGHT_EXCEED: + err_msg = "TM DWRR weight exceeded"; + break; + case NIX_ERR_TM_CHILD_EXISTS: + err_msg = "TM node children exists"; + break; + case NIX_ERR_TM_INVALID_PEAK_SZ: + err_msg = "TM peak size invalid"; + break; + case NIX_ERR_TM_INVALID_PEAK_RATE: + err_msg = "TM peak rate invalid"; + break; + case NIX_ERR_TM_INVALID_COMMIT_SZ: + err_msg = "TM commit size invalid"; + break; + case NIX_ERR_TM_INVALID_COMMIT_RATE: + err_msg = "TM commit rate invalid"; + break; + case NIX_ERR_TM_SHAPER_PROFILE_IN_USE: + err_msg = "TM shaper profile in use"; + break; + case NIX_ERR_TM_SHAPER_PROFILE_EXISTS: + err_msg = "TM shaper profile exists"; + break; + case NIX_ERR_TM_INVALID_TREE: + err_msg = "TM tree invalid"; + break; + case NIX_ERR_TM_PARENT_PRIO_UPDATE: + err_msg = "TM node parent and prio update failed"; + break; + case NIX_ERR_TM_PRIO_EXCEEDED: + err_msg = "TM node priority exceeded"; + break; + case NIX_ERR_TM_PRIO_ORDER: + err_msg = "TM node priority not in order"; + break; + case NIX_ERR_TM_MULTIPLE_RR_GROUPS: + err_msg = "TM multiple rr groups"; + break; + case NIX_ERR_TM_SQ_UPDATE_FAIL: + err_msg = "TM SQ update failed"; + break; case NIX_ERR_NDC_SYNC: err_msg = "NDC Sync failed"; break; @@ -75,9 +138,54 @@ roc_error_msg_get(int errorcode) case NIX_AF_ERR_AF_LF_ALLOC: err_msg = "NIX LF alloc failed"; break; + case NIX_AF_ERR_TLX_INVALID: + err_msg = "Invalid NIX TLX"; + break; + case NIX_AF_ERR_TLX_ALLOC_FAIL: + err_msg = "NIX TLX alloc failed"; + break; + case NIX_AF_ERR_RSS_SIZE_INVALID: + err_msg = "Invalid RSS size"; + break; + case NIX_AF_ERR_RSS_GRPS_INVALID: + err_msg = "Invalid RSS groups"; + break; + case NIX_AF_ERR_FRS_INVALID: + err_msg = "Invalid frame size"; + break; + case NIX_AF_ERR_RX_LINK_INVALID: + err_msg = "Invalid Rx link"; + break; + case NIX_AF_INVAL_TXSCHQ_CFG: + err_msg = "Invalid Tx scheduling config"; + break; + case NIX_AF_SMQ_FLUSH_FAILED: + err_msg = "SMQ flush failed"; + break; case NIX_AF_ERR_LF_RESET: err_msg = 
"NIX LF reset failed"; break; + case NIX_AF_ERR_MARK_CFG_FAIL: + err_msg = "Marking config failed"; + break; + case NIX_AF_ERR_LSO_CFG_FAIL: + err_msg = "LSO config failed"; + break; + case NIX_AF_INVAL_NPA_PF_FUNC: + err_msg = "Invalid NPA pf_func"; + break; + case NIX_AF_INVAL_SSO_PF_FUNC: + err_msg = "Invalid SSO pf_func"; + break; + case NIX_AF_ERR_TX_VTAG_NOSPC: + err_msg = "No space for Tx VTAG"; + break; + case NIX_AF_ERR_RX_VTAG_INUSE: + err_msg = "Rx VTAG is in use"; + break; + case NIX_AF_ERR_PTP_CONFIG_FAIL: + err_msg = "PTP config failed"; + break; case UTIL_ERR_FS: err_msg = "file operation failed"; break; diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index a5efbea..21f1d5a 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -104,11 +104,13 @@ INTERNAL { roc_nix_xstats_names_get; roc_nix_switch_hdr_set; roc_nix_eeprom_info_get; + roc_nix_tm_dump; roc_nix_tm_fini; roc_nix_tm_free_resources; roc_nix_tm_hierarchy_disable; roc_nix_tm_hierarchy_enable; roc_nix_tm_init; + roc_nix_tm_leaf_cnt; roc_nix_tm_node_add; roc_nix_tm_node_delete; roc_nix_tm_node_get; @@ -119,7 +121,11 @@ INTERNAL { roc_nix_tm_node_pkt_mode_update; roc_nix_tm_node_shaper_update; roc_nix_tm_node_suspend_resume; + roc_nix_tm_prealloc_res; roc_nix_tm_rlimit_sq; + roc_nix_tm_root_has_sp; + roc_nix_tm_rsrc_count; + roc_nix_tm_rsrc_max; roc_nix_tm_shaper_profile_add; roc_nix_tm_shaper_profile_delete; roc_nix_tm_shaper_profile_get; From patchwork Thu Apr 1 12:38:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90413 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B4733A0548; Thu, 1 Apr 2021 14:44:40 +0200 (CEST) Received: from 
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:05 +0530
Message-ID: <20210401123817.14348-41-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v3 40/52] common/cnxk: add npc support
List-Id: DPDK patches and discussions

From: Kiran Kumar K

Add initial support for programming the NPC. The NPC is the Network
Parser and CAM unit that provides Rx and Tx packet parsing and packet
manipulation functionality on Marvell CN9K and CN10K SoCs. It is mapped
to RTE Flow in DPDK.

Signed-off-by: Kiran Kumar K
---
 drivers/common/cnxk/roc_api.h      |   3 +
 drivers/common/cnxk/roc_npc.h      | 129 +++++++++++
 drivers/common/cnxk/roc_npc_priv.h | 381 +++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_platform.c |   1 +
 drivers/common/cnxk/roc_platform.h |   2 +
 drivers/common/cnxk/roc_priv.h     |   3 +
 drivers/common/cnxk/roc_utils.c    |  24 +++
 drivers/common/cnxk/version.map    |   1 +
 8 files changed, 544 insertions(+)
 create mode 100644 drivers/common/cnxk/roc_npc.h
 create mode 100644 drivers/common/cnxk/roc_npc_priv.h

diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 718916d..8dc8eed 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -82,6 +82,9 @@
 /* NPA */
 #include "roc_npa.h"
 
+/* NPC */
+#include "roc_npc.h"
+
 /* NIX */
 #include "roc_nix.h"
 
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
new file mode 100644
index 0000000..c841240
--- /dev/null
+++ b/drivers/common/cnxk/roc_npc.h
@@ -0,0 +1,129 @@
+/*
SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_NPC_H_ +#define _ROC_NPC_H_ + +#include + +enum roc_npc_item_type { + ROC_NPC_ITEM_TYPE_VOID, + ROC_NPC_ITEM_TYPE_ANY, + ROC_NPC_ITEM_TYPE_ETH, + ROC_NPC_ITEM_TYPE_VLAN, + ROC_NPC_ITEM_TYPE_E_TAG, + ROC_NPC_ITEM_TYPE_IPV4, + ROC_NPC_ITEM_TYPE_IPV6, + ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4, + ROC_NPC_ITEM_TYPE_MPLS, + ROC_NPC_ITEM_TYPE_ICMP, + ROC_NPC_ITEM_TYPE_IGMP, + ROC_NPC_ITEM_TYPE_UDP, + ROC_NPC_ITEM_TYPE_TCP, + ROC_NPC_ITEM_TYPE_SCTP, + ROC_NPC_ITEM_TYPE_ESP, + ROC_NPC_ITEM_TYPE_GRE, + ROC_NPC_ITEM_TYPE_NVGRE, + ROC_NPC_ITEM_TYPE_VXLAN, + ROC_NPC_ITEM_TYPE_GTPC, + ROC_NPC_ITEM_TYPE_GTPU, + ROC_NPC_ITEM_TYPE_GENEVE, + ROC_NPC_ITEM_TYPE_VXLAN_GPE, + ROC_NPC_ITEM_TYPE_IPV6_EXT, + ROC_NPC_ITEM_TYPE_GRE_KEY, + ROC_NPC_ITEM_TYPE_HIGIG2, + ROC_NPC_ITEM_TYPE_CPT_HDR, + ROC_NPC_ITEM_TYPE_L3_CUSTOM, + ROC_NPC_ITEM_TYPE_QINQ, + ROC_NPC_ITEM_TYPE_END, +}; + +struct roc_npc_item_info { + enum roc_npc_item_type type; /* Item type */ + uint32_t size; /* item size */ + const void *spec; /**< Pointer to item specification structure. */ + const void *mask; /**< Bit-mask applied to spec and last. */ + const void *last; /* For range */ +}; + +#define ROC_NPC_MAX_ACTION_COUNT 12 + +enum roc_npc_action_type { + ROC_NPC_ACTION_TYPE_END = (1 << 0), + ROC_NPC_ACTION_TYPE_VOID = (1 << 1), + ROC_NPC_ACTION_TYPE_MARK = (1 << 2), + ROC_NPC_ACTION_TYPE_FLAG = (1 << 3), + ROC_NPC_ACTION_TYPE_DROP = (1 << 4), + ROC_NPC_ACTION_TYPE_QUEUE = (1 << 5), + ROC_NPC_ACTION_TYPE_RSS = (1 << 6), + ROC_NPC_ACTION_TYPE_DUP = (1 << 7), + ROC_NPC_ACTION_TYPE_SEC = (1 << 8), + ROC_NPC_ACTION_TYPE_COUNT = (1 << 9), + ROC_NPC_ACTION_TYPE_PF = (1 << 10), + ROC_NPC_ACTION_TYPE_VF = (1 << 11), +}; + +struct roc_npc_action { + enum roc_npc_action_type type; /**< Action type. */ + const void *conf; /**< Pointer to action configuration object. 
*/ +}; + +struct roc_npc_action_mark { + uint32_t id; /**< Integer value to return with packets. */ +}; + +struct roc_npc_action_vf { + uint32_t original : 1; /**< Use original VF ID if possible. */ + uint32_t reserved : 31; /**< Reserved, must be zero. */ + uint32_t id; /**< VF ID. */ +}; + +struct roc_npc_action_queue { + uint16_t index; /**< Queue index to use. */ +}; + +struct roc_npc_attr { + uint32_t priority; /**< Rule priority level within group. */ + uint32_t ingress : 1; /**< Rule applies to ingress traffic. */ + uint32_t egress : 1; /**< Rule applies to egress traffic. */ + uint32_t reserved : 30; /**< Reserved, must be zero. */ +}; + +struct roc_npc_flow { + uint8_t nix_intf; + uint8_t enable; + uint32_t mcam_id; + int32_t ctr_id; + uint32_t priority; +#define ROC_NPC_MAX_MCAM_WIDTH_DWORDS 7 + /* Contiguous match string */ + uint64_t mcam_data[ROC_NPC_MAX_MCAM_WIDTH_DWORDS]; + uint64_t mcam_mask[ROC_NPC_MAX_MCAM_WIDTH_DWORDS]; + uint64_t npc_action; + uint64_t vtag_action; + + TAILQ_ENTRY(roc_npc_flow) next; +}; + +enum roc_npc_intf { + ROC_NPC_INTF_RX = 0, + ROC_NPC_INTF_TX = 1, + ROC_NPC_INTF_MAX = 2, +}; + +struct roc_npc { + struct roc_nix *roc_nix; + uint8_t switch_header_type; + uint16_t flow_prealloc_size; + uint16_t flow_max_priority; + uint16_t channel; + uint16_t pf_func; + uint64_t kex_capability; + uint64_t rx_parse_nibble; + +#define ROC_NPC_MEM_SZ (5 * 1024) + uint8_t reserved[ROC_NPC_MEM_SZ]; +} __plt_cache_aligned; + +#endif /* _ROC_NPC_H_ */ diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h new file mode 100644 index 0000000..4e2418c --- /dev/null +++ b/drivers/common/cnxk/roc_npc_priv.h @@ -0,0 +1,381 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#ifndef _ROC_NPC_PRIV_H_ +#define _ROC_NPC_PRIV_H_ + +#define NPC_IH_LENGTH 8 +#define NPC_TPID_LENGTH 2 +#define NPC_HIGIG2_LENGTH 16 +#define NPC_COUNTER_NONE (-1) + +#define NPC_RSS_GRPS 8 + +#define NPC_ACTION_FLAG_DEFAULT 0xffff + +#define NPC_PFVF_FUNC_MASK 0x3FF + +/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */ +#define NPC_MAX_EXTRACT_DATA_LEN (64) +#define NPC_MAX_EXTRACT_HW_LEN (4 * NPC_MAX_EXTRACT_DATA_LEN) +#define NPC_LDATA_LFLAG_LEN (16) +#define NPC_MAX_KEY_NIBBLES (31) + +/* Nibble offsets */ +#define NPC_LAYER_KEYX_SZ (3) +#define NPC_PARSE_KEX_S_LA_OFFSET (7) +#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \ + ((((lid) - (NPC_LID_LA)) * NPC_LAYER_KEYX_SZ) + \ + NPC_PARSE_KEX_S_LA_OFFSET) + +/* This mark value indicates flag action */ +#define NPC_FLOW_FLAG_VAL (0xffff) + +#define NPC_RX_ACT_MATCH_OFFSET (40) +#define NPC_RX_ACT_MATCH_MASK (0xFFFF) + +#define NPC_RSS_ACT_GRP_OFFSET (20) +#define NPC_RSS_ACT_ALG_OFFSET (56) +#define NPC_RSS_ACT_GRP_MASK (0xFFFFF) +#define NPC_RSS_ACT_ALG_MASK (0x1F) + +#define NPC_MCAM_KEX_FIELD_MAX 23 +#define NPC_MCAM_MAX_PROTO_FIELDS (NPC_MCAM_KEX_FIELD_MAX + 1) +#define NPC_MCAM_KEY_X4_WORDS 7 /* Number of 64-bit words */ + +#define NPC_RVUPF_MAX_9XXX 0x10 /* HRM: RVU_PRIV_CONST */ +#define NPC_RVUPF_MAX_10XX 0x20 /* HRM: RVU_PRIV_CONST */ +#define NPC_NIXLF_MAX 0x80 /* HRM: NIX_AF_CONST2 */ +#define NPC_MCAME_PER_PF 2 /* DRV: RSVD_MCAM_ENTRIES_PER_PF */ +#define NPC_MCAME_PER_LF 1 /* DRV: RSVD_MCAM_ENTRIES_PER_NIXLF */ +#define NPC_MCAME_RESVD_9XXX \ + (NPC_NIXLF_MAX * NPC_MCAME_PER_LF + \ + (NPC_RVUPF_MAX_9XXX - 1) * NPC_MCAME_PER_PF) + +#define NPC_MCAME_RESVD_10XX \ + (NPC_NIXLF_MAX * NPC_MCAME_PER_LF + \ + (NPC_RVUPF_MAX_10XX - 1) * NPC_MCAME_PER_PF) + +enum npc_err_status { + NPC_ERR_PARAM = -1024, + NPC_ERR_NO_MEM, + NPC_ERR_INVALID_SPEC, + NPC_ERR_INVALID_MASK, + NPC_ERR_INVALID_RANGE, + NPC_ERR_INVALID_KEX, + NPC_ERR_INVALID_SIZE, + NPC_ERR_INTERNAL, + NPC_ERR_MCAM_ALLOC, + 
NPC_ERR_ACTION_NOTSUP, + NPC_ERR_PATTERN_NOTSUP, +}; + +enum npc_mcam_intf { NPC_MCAM_RX, NPC_MCAM_TX }; + +typedef union npc_kex_cap_terms_t { + struct { + /** Total length of received packet */ + uint64_t len : 1; + /** Initial (outer) Ethertype only */ + uint64_t ethtype_0 : 1; + /** Ethertype of the innermost VLAN tag */ + uint64_t ethtype_x : 1; + /** First VLAN ID (outer) */ + uint64_t vlan_id_0 : 1; + /** Last VLAN ID (inner) */ + uint64_t vlan_id_x : 1; + /** Destination MAC address */ + uint64_t dmac : 1; + /** IP Protocol or IPv6 Next Header */ + uint64_t ip_proto : 1; + /** Destination UDP port, implies IPPROTO=17 */ + uint64_t udp_dport : 1; + /** Destination TCP port, implies IPPROTO=6 */ + uint64_t tcp_dport : 1; + /** Source UDP port */ + uint64_t udp_sport : 1; + /** Source TCP port */ + uint64_t tcp_sport : 1; + /** Source IP address */ + uint64_t sip_addr : 1; + /** Destination IP address */ + uint64_t dip_addr : 1; + /** Source IPv6 address */ + uint64_t sip6_addr : 1; + /** Destination IPv6 address */ + uint64_t dip6_addr : 1; + /** IPsec session identifier */ + uint64_t ipsec_spi : 1; + /** NVGRE/VXLAN network identifier */ + uint64_t ld_vni : 1; + /** Custom frame match rule. PMR offset is counted from + * the start of the packet. + */ + uint64_t custom_frame : 1; + /** Custom layer 3 match rule. PMR offset is counted from + * the start of layer 3 in the packet.
+ */ + uint64_t custom_l3 : 1; + /** IGMP Group address */ + uint64_t igmp_grp_addr : 1; + /** ICMP identifier */ + uint64_t icmp_id : 1; + /** ICMP type */ + uint64_t icmp_type : 1; + /** ICMP code */ + uint64_t icmp_code : 1; + /** Source SCTP port */ + uint64_t sctp_sport : 1; + /** Destination SCTP port */ + uint64_t sctp_dport : 1; + /** GTPU Tunnel endpoint identifier */ + uint64_t gtpu_teid : 1; + + } bit; + /** All bits of the bit field structure */ + uint64_t all_bits; +} npc_kex_cap_terms_t; + +struct npc_parse_item_info { + const void *def_mask; /* default mask */ + void *hw_mask; /* hardware supported mask */ + int len; /* length of item */ + const void *spec; /* spec to use, NULL implies match any */ + const void *mask; /* mask to use */ + uint8_t hw_hdr_len; /* Extra data len at each layer*/ +}; + +struct npc_parse_state { + struct npc *npc; + const struct roc_npc_item_info *pattern; + const struct roc_npc_item_info *last_pattern; + struct roc_npc_flow *flow; + uint8_t nix_intf; + uint8_t tunnel; + uint8_t terminate; + uint8_t layer_mask; + uint8_t lt[NPC_MAX_LID]; + uint8_t flags[NPC_MAX_LID]; + uint8_t *mcam_data; /* point to flow->mcam_data + key_len */ + uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */ + bool is_vf; +}; + +enum npc_kpu_parser_flag { + NPC_F_NA = 0, + NPC_F_PKI, + NPC_F_PKI_VLAN, + NPC_F_PKI_ETAG, + NPC_F_PKI_ITAG, + NPC_F_PKI_MPLS, + NPC_F_PKI_NSH, + NPC_F_ETYPE_UNK, + NPC_F_ETHER_VLAN, + NPC_F_ETHER_ETAG, + NPC_F_ETHER_ITAG, + NPC_F_ETHER_MPLS, + NPC_F_ETHER_NSH, + NPC_F_STAG_CTAG, + NPC_F_STAG_CTAG_UNK, + NPC_F_STAG_STAG_CTAG, + NPC_F_STAG_STAG_STAG, + NPC_F_QINQ_CTAG, + NPC_F_QINQ_CTAG_UNK, + NPC_F_QINQ_QINQ_CTAG, + NPC_F_QINQ_QINQ_QINQ, + NPC_F_BTAG_ITAG, + NPC_F_BTAG_ITAG_STAG, + NPC_F_BTAG_ITAG_CTAG, + NPC_F_BTAG_ITAG_UNK, + NPC_F_ETAG_CTAG, + NPC_F_ETAG_BTAG_ITAG, + NPC_F_ETAG_STAG, + NPC_F_ETAG_QINQ, + NPC_F_ETAG_ITAG, + NPC_F_ETAG_ITAG_STAG, + NPC_F_ETAG_ITAG_CTAG, + NPC_F_ETAG_ITAG_UNK, + 
NPC_F_ITAG_STAG_CTAG, + NPC_F_ITAG_STAG, + NPC_F_ITAG_CTAG, + NPC_F_MPLS_4_LABELS, + NPC_F_MPLS_3_LABELS, + NPC_F_MPLS_2_LABELS, + NPC_F_IP_HAS_OPTIONS, + NPC_F_IP_IP_IN_IP, + NPC_F_IP_6TO4, + NPC_F_IP_MPLS_IN_IP, + NPC_F_IP_UNK_PROTO, + NPC_F_IP_IP_IN_IP_HAS_OPTIONS, + NPC_F_IP_6TO4_HAS_OPTIONS, + NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS, + NPC_F_IP_UNK_PROTO_HAS_OPTIONS, + NPC_F_IP6_HAS_EXT, + NPC_F_IP6_TUN_IP6, + NPC_F_IP6_MPLS_IN_IP, + NPC_F_TCP_HAS_OPTIONS, + NPC_F_TCP_HTTP, + NPC_F_TCP_HTTPS, + NPC_F_TCP_PPTP, + NPC_F_TCP_UNK_PORT, + NPC_F_TCP_HTTP_HAS_OPTIONS, + NPC_F_TCP_HTTPS_HAS_OPTIONS, + NPC_F_TCP_PPTP_HAS_OPTIONS, + NPC_F_TCP_UNK_PORT_HAS_OPTIONS, + NPC_F_UDP_VXLAN, + NPC_F_UDP_VXLAN_NOVNI, + NPC_F_UDP_VXLAN_NOVNI_NSH, + NPC_F_UDP_VXLANGPE, + NPC_F_UDP_VXLANGPE_NSH, + NPC_F_UDP_VXLANGPE_MPLS, + NPC_F_UDP_VXLANGPE_NOVNI, + NPC_F_UDP_VXLANGPE_NOVNI_NSH, + NPC_F_UDP_VXLANGPE_NOVNI_MPLS, + NPC_F_UDP_VXLANGPE_UNK, + NPC_F_UDP_VXLANGPE_NONP, + NPC_F_UDP_GTP_GTPC, + NPC_F_UDP_GTP_GTPU_G_PDU, + NPC_F_UDP_GTP_GTPU_UNK, + NPC_F_UDP_UNK_PORT, + NPC_F_UDP_GENEVE, + NPC_F_UDP_GENEVE_OAM, + NPC_F_UDP_GENEVE_CRI_OPT, + NPC_F_UDP_GENEVE_OAM_CRI_OPT, + NPC_F_GRE_NVGRE, + NPC_F_GRE_HAS_SRE, + NPC_F_GRE_HAS_CSUM, + NPC_F_GRE_HAS_KEY, + NPC_F_GRE_HAS_SEQ, + NPC_F_GRE_HAS_CSUM_KEY, + NPC_F_GRE_HAS_CSUM_SEQ, + NPC_F_GRE_HAS_KEY_SEQ, + NPC_F_GRE_HAS_CSUM_KEY_SEQ, + NPC_F_GRE_HAS_ROUTE, + NPC_F_GRE_UNK_PROTO, + NPC_F_GRE_VER1, + NPC_F_GRE_VER1_HAS_SEQ, + NPC_F_GRE_VER1_HAS_ACK, + NPC_F_GRE_VER1_HAS_SEQ_ACK, + NPC_F_GRE_VER1_UNK_PROTO, + NPC_F_TU_ETHER_UNK, + NPC_F_TU_ETHER_CTAG, + NPC_F_TU_ETHER_CTAG_UNK, + NPC_F_TU_ETHER_STAG_CTAG, + NPC_F_TU_ETHER_STAG_CTAG_UNK, + NPC_F_TU_ETHER_STAG, + NPC_F_TU_ETHER_STAG_UNK, + NPC_F_TU_ETHER_QINQ_CTAG, + NPC_F_TU_ETHER_QINQ_CTAG_UNK, + NPC_F_TU_ETHER_QINQ, + NPC_F_TU_ETHER_QINQ_UNK, + NPC_F_LAST /* has to be the last item */ +}; + +#define NPC_ACTION_TERM \ + (ROC_NPC_ACTION_TYPE_DROP | ROC_NPC_ACTION_TYPE_QUEUE | \ + ROC_NPC_ACTION_TYPE_RSS | 
ROC_NPC_ACTION_TYPE_DUP | \ + ROC_NPC_ACTION_TYPE_SEC) + +struct npc_xtract_info { + /* Length in bytes of pkt data extracted. len = 0 + * indicates that extraction is disabled. + */ + uint8_t len; + uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */ + uint8_t key_off; /* Byte offset in MCAM key where data is placed */ + uint8_t enable; /* Extraction enabled or disabled */ + uint8_t flags_enable; /* Flags extraction enabled */ +}; + +/* Information for a given {LAYER, LTYPE} */ +struct npc_lid_lt_xtract_info { + /* Info derived from parser configuration */ + uint16_t npc_proto; /* Network protocol identified */ + uint8_t valid_flags_mask; /* Flags applicable */ + uint8_t is_terminating : 1; /* No more parsing */ + struct npc_xtract_info xtract[NPC_MAX_LD]; +}; + +union npc_kex_ldata_flags_cfg { + struct { + uint64_t lid : 3; + uint64_t rvsd_62_1 : 61; + } s; + + uint64_t i; +}; + +typedef struct npc_lid_lt_xtract_info npc_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID] + [NPC_MAX_LT]; +typedef struct npc_lid_lt_xtract_info npc_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD] + [NPC_MAX_LFL]; +typedef union npc_kex_ldata_flags_cfg npc_ld_flags_t[NPC_MAX_LD]; + +/* MBOX_MSG_NPC_GET_DATAX_CFG Response */ +struct npc_get_datax_cfg { + /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */ + union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD]; + /* Extract information indexed with [LID][LTYPE] */ + struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT]; + /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE] + * Fields flags_ena_ld0, flags_ena_ld1 in + * struct npc_lid_lt_xtract_info indicate if this is applicable + * for a given {LAYER, LTYPE} + */ + struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT]; +}; + +TAILQ_HEAD(npc_flow_list, roc_npc_flow); + +struct npc_mcam_ents_info { + /* Current max & min values of mcam index */ + uint32_t max_id; + uint32_t min_id; + uint32_t free_ent; + uint32_t live_ent; +}; + +struct npc { + struct mbox *mbox; /* Mbox */ + uint32_t 
keyx_supp_nmask[NPC_MAX_INTF]; /* nibble mask */ + uint8_t profile_name[MKEX_NAME_LEN]; /* KEX profile name */ + uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */ + uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */ + uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */ + uint32_t mcam_entries; /* mcam entries supported */ + uint16_t channel; /* RX Channel number */ + uint32_t rss_grps; /* rss groups supported */ + uint16_t flow_prealloc_size; /* Pre-allocated mcam size */ + uint16_t flow_max_priority; /* Max priority for flow */ + uint16_t switch_header_type; /* Supported switch header type */ + uint32_t mark_actions; /* Number of mark actions */ + uint16_t pf_func; /* pf_func of device */ + npc_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */ + npc_fxcfg_t prx_fxcfg; /* Flag extract */ + npc_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */ + /* mcam entry info per priority level: both free & in-use */ + struct npc_mcam_ents_info *flow_entry_info; + /* Bitmap of free preallocated entries in ascending index & + * descending priority + */ + struct plt_bitmap **free_entries; + /* Bitmap of free preallocated entries in descending index & + * ascending priority + */ + struct plt_bitmap **free_entries_rev; + /* Bitmap of live entries in ascending index & descending priority */ + struct plt_bitmap **live_entries; + /* Bitmap of live entries in descending index & ascending priority */ + struct plt_bitmap **live_entries_rev; + /* Priority bucket wise tail queue of all npc_flow resources */ + struct npc_flow_list *flow_list; + struct plt_bitmap *rss_grp_entries; +}; + +static inline struct npc * +roc_npc_to_npc_priv(struct roc_npc *npc) +{ + return (struct npc *)npc->reserved; +} +#endif /* _ROC_NPC_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index 4d61344..e186f61 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -32,4 +32,5 @@
RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_nix, pmd.net.cnxk, NOTICE); +RTE_LOG_REGISTER(cnxk_logtype_npc, pmd.net.cnxk.flow, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_tm, pmd.net.cnxk.tm, NOTICE); diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index 94bb3c7..ed8faba 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -136,6 +136,7 @@ extern int cnxk_logtype_base; extern int cnxk_logtype_mbox; extern int cnxk_logtype_npa; extern int cnxk_logtype_nix; +extern int cnxk_logtype_npc; extern int cnxk_logtype_tm; #define plt_err(fmt, args...) \ @@ -156,6 +157,7 @@ extern int cnxk_logtype_tm; #define plt_mbox_dbg(fmt, ...) plt_dbg(mbox, fmt, ##__VA_ARGS__) #define plt_npa_dbg(fmt, ...) plt_dbg(npa, fmt, ##__VA_ARGS__) #define plt_nix_dbg(fmt, ...) plt_dbg(nix, fmt, ##__VA_ARGS__) +#define plt_npc_dbg(fmt, ...) plt_dbg(npc, fmt, ##__VA_ARGS__) #define plt_tm_dbg(fmt, ...) 
plt_dbg(tm, fmt, ##__VA_ARGS__) #ifdef __cplusplus diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h index 7371785..2dcd9e7 100644 --- a/drivers/common/cnxk/roc_priv.h +++ b/drivers/common/cnxk/roc_priv.h @@ -23,4 +23,7 @@ /* NIX */ #include "roc_nix_priv.h" +/* NPC */ +#include "roc_npc_priv.h" + #endif /* _ROC_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c index 6cfa28d..236b4ae 100644 --- a/drivers/common/cnxk/roc_utils.c +++ b/drivers/common/cnxk/roc_utils.c @@ -14,16 +14,20 @@ roc_error_msg_get(int errorcode) case NIX_AF_ERR_PARAM: case NIX_ERR_PARAM: case NPA_ERR_PARAM: + case NPC_ERR_PARAM: case UTIL_ERR_PARAM: err_msg = "Invalid parameter"; break; case NIX_ERR_NO_MEM: + case NPC_ERR_NO_MEM: err_msg = "Out of memory"; break; case NIX_ERR_INVALID_RANGE: + case NPC_ERR_INVALID_RANGE: err_msg = "Range is not supported"; break; case NIX_ERR_INTERNAL: + case NPC_ERR_INTERNAL: err_msg = "Internal error"; break; case NIX_ERR_OP_NOTSUP: @@ -104,6 +108,26 @@ roc_error_msg_get(int errorcode) case NIX_ERR_NDC_SYNC: err_msg = "NDC Sync failed"; break; + case NPC_ERR_INVALID_SPEC: + err_msg = "NPC invalid spec"; + break; + case NPC_ERR_INVALID_MASK: + err_msg = "NPC invalid mask"; + break; + case NPC_ERR_INVALID_KEX: + err_msg = "NPC invalid key"; + break; + case NPC_ERR_INVALID_SIZE: + err_msg = "NPC invalid key size"; + break; + case NPC_ERR_ACTION_NOTSUP: + err_msg = "NPC action not supported"; + break; + case NPC_ERR_PATTERN_NOTSUP: + err_msg = "NPC pattern not supported"; + break; + case NPC_ERR_MCAM_ALLOC: + err_msg = "MCAM entry alloc failed"; break; case NPA_ERR_ALLOC: err_msg = "NPA alloc failed"; diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 21f1d5a..809cdae 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -5,6 +5,7 @@ INTERNAL { cnxk_logtype_mbox; cnxk_logtype_nix; cnxk_logtype_npa; + cnxk_logtype_npc; 
cnxk_logtype_tm; roc_clk_freq_get; roc_error_msg_get; From patchwork Thu Apr 1 12:38:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90414 X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram To: CC: , , , , , , Date: Thu, 1 Apr 2021 18:08:06 +0530 Message-ID: <20210401123817.14348-42-ndabilpuram@marvell.com> X-Mailer: git-send-email 2.8.4 In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com> References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com> Subject: [dpdk-dev] [PATCH v3 41/52] common/cnxk: add npc helper API From: Kiran Kumar K Add NPC helper APIs to manage the MCAM: pre-allocating MCAM entries, configuring rules, shifting MCAM rules, and preparing MCAM match data based on the KEX configuration.
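A central step when preparing MCAM match data is copying the extracted header bytes in reverse order, so that a field's first byte lands at the most-significant end of the key (the patch's npc_prep_mcam_ldata() does this for both mcam_data and mcam_mask). A minimal stand-alone sketch of that reversal, using a hypothetical name outside the driver:

```c
#include <stdint.h>

/* Hypothetical sketch of MCAM ldata preparation: header bytes extracted
 * from the packet are copied in reverse so that the field's first byte
 * ends up at the most-significant end of the contiguous match string.
 * dst[0] receives the last source byte, dst[len - 1] the first.
 */
static void
prep_mcam_ldata(uint8_t *dst, const uint8_t *src, int len)
{
	int idx;

	for (idx = 0; idx < len; idx++)
		dst[idx] = src[len - 1 - idx];
}
```

Running spec and mask through the same reversal keeps key bytes and mask bytes aligned at identical key offsets.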
Signed-off-by: Kiran Kumar K --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_npc_priv.h | 11 + drivers/common/cnxk/roc_npc_utils.c | 631 ++++++++++++++++++++++++++++++++++++ 3 files changed, 643 insertions(+) create mode 100644 drivers/common/cnxk/roc_npc_utils.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index b453364..6163179 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -34,6 +34,7 @@ sources = files('roc_dev.c', 'roc_npa.c', 'roc_npa_debug.c', 'roc_npa_irq.c', + 'roc_npc_utils.c', 'roc_platform.c', 'roc_utils.c') includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h index 4e2418c..9434826 100644 --- a/drivers/common/cnxk/roc_npc_priv.h +++ b/drivers/common/cnxk/roc_npc_priv.h @@ -378,4 +378,15 @@ roc_npc_to_npc_priv(struct roc_npc *npc) { return (struct npc *)npc->reserved; } + +int npc_update_parse_state(struct npc_parse_state *pst, + struct npc_parse_item_info *info, int lid, int lt, + uint8_t flags); +void npc_get_hw_supp_mask(struct npc_parse_state *pst, + struct npc_parse_item_info *info, int lid, int lt); +int npc_parse_item_basic(const struct roc_npc_item_info *item, + struct npc_parse_item_info *info); +int npc_check_preallocated_entry_cache(struct mbox *mbox, + struct roc_npc_flow *flow, + struct npc *npc); #endif /* _ROC_NPC_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_npc_utils.c b/drivers/common/cnxk/roc_npc_utils.c new file mode 100644 index 0000000..1fb8973 --- /dev/null +++ b/drivers/common/cnxk/roc_npc_utils.c @@ -0,0 +1,631 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ +#include "roc_api.h" +#include "roc_priv.h" + +static void +npc_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len) +{ + int idx; + + for (idx = 0; idx < len; idx++) + ptr[idx] = data[len - 1 - idx]; +} + +static int +npc_check_copysz(size_t size, size_t len) +{ + if (len <= size) + return len; + return NPC_ERR_PARAM; +} + +static inline int +npc_mem_is_zero(const void *mem, int len) +{ + const char *m = mem; + int i; + + for (i = 0; i < len; i++) { + if (m[i] != 0) + return 0; + } + return 1; +} + +static void +npc_set_hw_mask(struct npc_parse_item_info *info, struct npc_xtract_info *xinfo, + char *hw_mask) +{ + int max_off, offset; + int j; + + if (xinfo->enable == 0) + return; + + if (xinfo->hdr_off < info->hw_hdr_len) + return; + + max_off = xinfo->hdr_off + xinfo->len - info->hw_hdr_len; + + if (max_off > info->len) + max_off = info->len; + + offset = xinfo->hdr_off - info->hw_hdr_len; + for (j = offset; j < max_off; j++) + hw_mask[j] = 0xff; +} + +void +npc_get_hw_supp_mask(struct npc_parse_state *pst, + struct npc_parse_item_info *info, int lid, int lt) +{ + struct npc_xtract_info *xinfo, *lfinfo; + char *hw_mask = info->hw_mask; + int lf_cfg = 0; + int i, j; + int intf; + + intf = pst->nix_intf; + xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract; + memset(hw_mask, 0, info->len); + + for (i = 0; i < NPC_MAX_LD; i++) + npc_set_hw_mask(info, &xinfo[i], hw_mask); + + for (i = 0; i < NPC_MAX_LD; i++) { + if (xinfo[i].flags_enable == 0) + continue; + + lf_cfg = pst->npc->prx_lfcfg[i].i; + if (lf_cfg == lid) { + for (j = 0; j < NPC_MAX_LFL; j++) { + lfinfo = pst->npc->prx_fxcfg[intf][i][j].xtract; + npc_set_hw_mask(info, &lfinfo[0], hw_mask); + } + } + } +} + +static inline int +npc_mask_is_supported(const char *mask, const char *hw_mask, int len) +{ + /* + * If no hw_mask, assume nothing is supported. 
+ * mask is never NULL + */ + if (hw_mask == NULL) + return npc_mem_is_zero(mask, len); + + while (len--) { + if ((mask[len] | hw_mask[len]) != hw_mask[len]) + return 0; /* False */ + } + return 1; +} + +int +npc_parse_item_basic(const struct roc_npc_item_info *item, + struct npc_parse_item_info *info) +{ + /* Item must not be NULL */ + if (item == NULL) + return NPC_ERR_PARAM; + + /* Don't support ranges */ + if (item->last != NULL) + return NPC_ERR_INVALID_RANGE; + + /* If spec is NULL, both mask and last must be NULL, this + * makes it to match ANY value (eq to mask = 0). + * Setting either mask or last without spec is an error + */ + if (item->spec == NULL) { + if (item->last == NULL && item->mask == NULL) { + info->spec = NULL; + return 0; + } + return NPC_ERR_INVALID_SPEC; + } + + /* We have valid spec */ + info->spec = item->spec; + + /* If mask is not set, use default mask, err if default mask is + * also NULL. + */ + if (item->mask == NULL) { + if (info->def_mask == NULL) + return NPC_ERR_PARAM; + info->mask = info->def_mask; + } else { + info->mask = item->mask; + } + + /* mask specified must be subset of hw supported mask + * mask | hw_mask == hw_mask + */ + if (!npc_mask_is_supported(info->mask, info->hw_mask, info->len)) + return NPC_ERR_INVALID_MASK; + + return 0; +} + +static int +npc_update_extraction_data(struct npc_parse_state *pst, + struct npc_parse_item_info *info, + struct npc_xtract_info *xinfo) +{ + uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN]; + uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN]; + struct npc_xtract_info *x; + int hdr_off; + int len = 0; + + x = xinfo; + len = x->len; + hdr_off = x->hdr_off; + + if (hdr_off < info->hw_hdr_len) + return 0; + + if (x->enable == 0) + return 0; + + hdr_off -= info->hw_hdr_len; + + if (hdr_off >= info->len) + return 0; + + if (hdr_off + len > info->len) + len = info->len - hdr_off; + + len = npc_check_copysz((ROC_NPC_MAX_MCAM_WIDTH_DWORDS * 8) - x->key_off, + len); + if (len < 0) + return 
NPC_ERR_INVALID_SIZE; + + /* Need to reverse complete structure so that dest addr is at + * MSB so as to program the MCAM using mcam_data & mcam_mask + * arrays + */ + npc_prep_mcam_ldata(int_info, (const uint8_t *)info->spec + hdr_off, + x->len); + npc_prep_mcam_ldata(int_info_mask, + (const uint8_t *)info->mask + hdr_off, x->len); + + memcpy(pst->mcam_mask + x->key_off, int_info_mask, len); + memcpy(pst->mcam_data + x->key_off, int_info, len); + return 0; +} + +int +npc_update_parse_state(struct npc_parse_state *pst, + struct npc_parse_item_info *info, int lid, int lt, + uint8_t flags) +{ + struct npc_lid_lt_xtract_info *xinfo; + struct npc_xtract_info *lfinfo; + int intf, lf_cfg; + int i, j, rc = 0; + + pst->layer_mask |= lid; + pst->lt[lid] = lt; + pst->flags[lid] = flags; + + intf = pst->nix_intf; + xinfo = &pst->npc->prx_dxcfg[intf][lid][lt]; + if (xinfo->is_terminating) + pst->terminate = 1; + + if (info->spec == NULL) + goto done; + + for (i = 0; i < NPC_MAX_LD; i++) { + rc = npc_update_extraction_data(pst, info, &xinfo->xtract[i]); + if (rc != 0) + return rc; + } + + for (i = 0; i < NPC_MAX_LD; i++) { + if (xinfo->xtract[i].flags_enable == 0) + continue; + + lf_cfg = pst->npc->prx_lfcfg[i].i; + if (lf_cfg == lid) { + for (j = 0; j < NPC_MAX_LFL; j++) { + lfinfo = pst->npc->prx_fxcfg[intf][i][j].xtract; + rc = npc_update_extraction_data(pst, info, + &lfinfo[0]); + if (rc != 0) + return rc; + + if (lfinfo[0].enable) + pst->flags[lid] = j; + } + } + } + +done: + pst->pattern++; + return 0; +} + +static int +npc_first_set_bit(uint64_t slab) +{ + int num = 0; + + if ((slab & 0xffffffff) == 0) { + num += 32; + slab >>= 32; + } + if ((slab & 0xffff) == 0) { + num += 16; + slab >>= 16; + } + if ((slab & 0xff) == 0) { + num += 8; + slab >>= 8; + } + if ((slab & 0xf) == 0) { + num += 4; + slab >>= 4; + } + if ((slab & 0x3) == 0) { + num += 2; + slab >>= 2; + } + if ((slab & 0x1) == 0) + num += 1; + + return num; +} + +static int +npc_shift_lv_ent(struct mbox *mbox, 
struct roc_npc_flow *flow, struct npc *npc, + uint32_t old_ent, uint32_t new_ent) +{ + struct npc_mcam_shift_entry_req *req; + struct npc_mcam_shift_entry_rsp *rsp; + struct npc_flow_list *list; + struct roc_npc_flow *flow_iter; + int rc = -ENOSPC; + + list = &npc->flow_list[flow->priority]; + + /* Old entry is disabled & it's contents are moved to new_entry, + * new entry is enabled finally. + */ + req = mbox_alloc_msg_npc_mcam_shift_entry(mbox); + if (req == NULL) + return rc; + req->curr_entry[0] = old_ent; + req->new_entry[0] = new_ent; + req->shift_count = 1; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + /* Remove old node from list */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id == old_ent) + TAILQ_REMOVE(list, flow_iter, next); + } + + /* Insert node with new mcam id at right place */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id > new_ent) + TAILQ_INSERT_BEFORE(flow_iter, flow, next); + } + return rc; +} + +/* Exchange all required entries with a given priority level */ +static int +npc_shift_ent(struct mbox *mbox, struct roc_npc_flow *flow, struct npc *npc, + struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl) +{ + struct plt_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp; + uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries; + uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0; + /* Bit position within the slab */ + uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0; + /* Overall bit position of the start of slab */ + /* free & live entry index */ + int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0; + struct npc_mcam_ents_info *ent_info; + /* free & live bitmap slab */ + uint64_t sl_fr = 0, sl_lv = 0, *sl; + + fr_bmp = npc->free_entries[prio_lvl]; + fr_bmp_rev = npc->free_entries_rev[prio_lvl]; + lv_bmp = npc->live_entries[prio_lvl]; + lv_bmp_rev = npc->live_entries_rev[prio_lvl]; + ent_info = &npc->flow_entry_info[prio_lvl]; + mcam_entries = npc->mcam_entries; + + /* New 
entries allocated are always contiguous, but older entries + * already in free/live bitmap can be non-contiguous: so return + * shifted entries should be in non-contiguous format. + */ + while (idx <= rsp->count) { + if (!sl_fr && !sl_lv) { + /* Lower index elements to be exchanged */ + if (dir < 0) { + rc_fr = plt_bitmap_scan(fr_bmp, &e_fr, &sl_fr); + rc_lv = plt_bitmap_scan(lv_bmp, &e_lv, &sl_lv); + } else { + rc_fr = plt_bitmap_scan(fr_bmp_rev, + &sl_fr_bit_off, &sl_fr); + rc_lv = plt_bitmap_scan(lv_bmp_rev, + &sl_lv_bit_off, &sl_lv); + } + } + + if (rc_fr) { + fr_bit_pos = npc_first_set_bit(sl_fr); + e_fr = sl_fr_bit_off + fr_bit_pos; + } else { + e_fr = ~(0); + } + + if (rc_lv) { + lv_bit_pos = npc_first_set_bit(sl_lv); + e_lv = sl_lv_bit_off + lv_bit_pos; + } else { + e_lv = ~(0); + } + + /* First entry is from free_bmap */ + if (e_fr < e_lv) { + bmp = fr_bmp; + e = e_fr; + sl = &sl_fr; + bit_pos = fr_bit_pos; + if (dir > 0) + e_id = mcam_entries - e - 1; + else + e_id = e; + } else { + bmp = lv_bmp; + e = e_lv; + sl = &sl_lv; + bit_pos = lv_bit_pos; + if (dir > 0) + e_id = mcam_entries - e - 1; + else + e_id = e; + + if (idx < rsp->count) + rc = npc_shift_lv_ent(mbox, flow, npc, e_id, + rsp->entry + idx); + } + + plt_bitmap_clear(bmp, e); + plt_bitmap_set(bmp, rsp->entry + idx); + /* Update entry list, use non-contiguous + * list now. + */ + rsp->entry_list[idx] = e_id; + *sl &= ~(1UL << bit_pos); + + /* Update min & max entry identifiers in current + * priority level. + */ + if (dir < 0) { + ent_info->max_id = rsp->entry + idx; + ent_info->min_id = e_id; + } else { + ent_info->max_id = e_id; + ent_info->min_id = rsp->entry; + } + + idx++; + } + return rc; +} + +/* Validate if newly allocated entries lie in the correct priority zone + * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy. 
+ * If not properly aligned, shift entries to do so + */ +static int +npc_validate_and_shift_prio_ent(struct mbox *mbox, struct roc_npc_flow *flow, + struct npc *npc, + struct npc_mcam_alloc_entry_rsp *rsp, + int req_prio) +{ + int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority; + struct npc_mcam_ents_info *info = npc->flow_entry_info; + int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1; + uint32_t tot_ent = 0; + + if (dir < 0) + prio_idx = npc->flow_max_priority - 1; + + /* Only live entries needs to be shifted, free entries can just be + * moved by bits manipulation. + */ + + /* For dir = -1(NPC_MCAM_LOWER_PRIO), when shifting, + * NPC_MAX_PREALLOC_ENT are exchanged with adjoining higher priority + * level entries(lower indexes). + * + * For dir = +1(NPC_MCAM_HIGHER_PRIO), during shift, + * NPC_MAX_PREALLOC_ENT are exchanged with adjoining lower priority + * level entries(higher indexes) with highest indexes. + */ + do { + tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent; + + if (dir < 0 && prio_idx != prio && + rsp->entry > info[prio_idx].max_id && tot_ent) { + needs_shift = 1; + } else if ((dir > 0) && (prio_idx != prio) && + (rsp->entry < info[prio_idx].min_id) && tot_ent) { + needs_shift = 1; + } + + if (needs_shift) { + needs_shift = 0; + rc = npc_shift_ent(mbox, flow, npc, rsp, dir, prio_idx); + } else { + for (idx = 0; idx < rsp->count; idx++) + rsp->entry_list[idx] = rsp->entry + idx; + } + } while ((prio_idx != prio) && (prio_idx += dir)); + + return rc; +} + +static int +npc_find_ref_entry(struct npc *npc, int *prio, int prio_lvl) +{ + struct npc_mcam_ents_info *info = npc->flow_entry_info; + int step = 1; + + while (step < npc->flow_max_priority) { + if (((prio_lvl + step) < npc->flow_max_priority) && + info[prio_lvl + step].live_ent) { + *prio = NPC_MCAM_HIGHER_PRIO; + return info[prio_lvl + step].min_id; + } + + if (((prio_lvl - step) >= 0) && + info[prio_lvl - step].live_ent) { + *prio = NPC_MCAM_LOWER_PRIO; + 
return info[prio_lvl - step].max_id; + } + step++; + } + *prio = NPC_MCAM_ANY_PRIO; + return 0; +} + +static int +npc_fill_entry_cache(struct mbox *mbox, struct roc_npc_flow *flow, + struct npc *npc, uint32_t *free_ent) +{ + struct plt_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev; + struct npc_mcam_alloc_entry_rsp rsp_local; + struct npc_mcam_alloc_entry_rsp *rsp_cmd; + struct npc_mcam_alloc_entry_req *req; + struct npc_mcam_alloc_entry_rsp *rsp; + struct npc_mcam_ents_info *info; + int rc = -ENOSPC, prio; + uint16_t ref_ent, idx; + + info = &npc->flow_entry_info[flow->priority]; + free_bmp = npc->free_entries[flow->priority]; + free_bmp_rev = npc->free_entries_rev[flow->priority]; + live_bmp = npc->live_entries[flow->priority]; + live_bmp_rev = npc->live_entries_rev[flow->priority]; + + ref_ent = npc_find_ref_entry(npc, &prio, flow->priority); + + req = mbox_alloc_msg_npc_mcam_alloc_entry(mbox); + if (req == NULL) + return rc; + req->contig = 1; + req->count = npc->flow_prealloc_size; + req->priority = prio; + req->ref_entry = ref_ent; + + rc = mbox_process_msg(mbox, (void *)&rsp_cmd); + if (rc) + return rc; + + rsp = &rsp_local; + memcpy(rsp, rsp_cmd, sizeof(*rsp)); + + /* Non-first ent cache fill */ + if (prio != NPC_MCAM_ANY_PRIO) { + npc_validate_and_shift_prio_ent(mbox, flow, npc, rsp, prio); + } else { + /* Copy into response entry list */ + for (idx = 0; idx < rsp->count; idx++) + rsp->entry_list[idx] = rsp->entry + idx; + } + + /* Update free entries, reverse free entries list, + * min & max entry ids. + */ + for (idx = 0; idx < rsp->count; idx++) { + if (unlikely(rsp->entry_list[idx] < info->min_id)) + info->min_id = rsp->entry_list[idx]; + + if (unlikely(rsp->entry_list[idx] > info->max_id)) + info->max_id = rsp->entry_list[idx]; + + /* Skip entry to be returned, not to be part of free + * list. 
+ */ + if (prio == NPC_MCAM_HIGHER_PRIO) { + if (unlikely(idx == (rsp->count - 1))) { + *free_ent = rsp->entry_list[idx]; + continue; + } + } else { + if (unlikely(!idx)) { + *free_ent = rsp->entry_list[idx]; + continue; + } + } + info->free_ent++; + plt_bitmap_set(free_bmp, rsp->entry_list[idx]); + plt_bitmap_set(free_bmp_rev, + npc->mcam_entries - rsp->entry_list[idx] - 1); + } + + info->live_ent++; + plt_bitmap_set(live_bmp, *free_ent); + plt_bitmap_set(live_bmp_rev, npc->mcam_entries - *free_ent - 1); + + return 0; +} + +int +npc_check_preallocated_entry_cache(struct mbox *mbox, struct roc_npc_flow *flow, + struct npc *npc) +{ + struct plt_bitmap *free, *free_rev, *live, *live_rev; + uint32_t pos = 0, free_ent = 0, mcam_entries; + struct npc_mcam_ents_info *info; + uint64_t slab = 0; + int rc; + + info = &npc->flow_entry_info[flow->priority]; + + free_rev = npc->free_entries_rev[flow->priority]; + free = npc->free_entries[flow->priority]; + live_rev = npc->live_entries_rev[flow->priority]; + live = npc->live_entries[flow->priority]; + mcam_entries = npc->mcam_entries; + + if (info->free_ent) { + rc = plt_bitmap_scan(free, &pos, &slab); + if (rc) { + /* Get free_ent from free entry bitmap */ + free_ent = pos + __builtin_ctzll(slab); + /* Remove from free bitmaps and add to live ones */ + plt_bitmap_clear(free, free_ent); + plt_bitmap_set(live, free_ent); + plt_bitmap_clear(free_rev, mcam_entries - free_ent - 1); + plt_bitmap_set(live_rev, mcam_entries - free_ent - 1); + + info->free_ent--; + info->live_ent++; + return free_ent; + } + return NPC_ERR_INTERNAL; + } + + rc = npc_fill_entry_cache(mbox, flow, npc, &free_ent); + if (rc) + return rc; + + return free_ent; +} From patchwork Thu Apr 1 12:38:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90415 X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram Date: Thu, 1 Apr 2021 18:08:07 +0530 Message-ID: <20210401123817.14348-43-ndabilpuram@marvell.com> In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com> References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com> Subject: [dpdk-dev] [PATCH v3 42/52] common/cnxk: add mcam utility API From: Kiran Kumar K Add mcam utility functions, such as reading the KEX configuration and reserving and writing mcam rules. Signed-off-by: Kiran Kumar K --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_npc_mcam.c | 708 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_npc_priv.h | 21 ++ 3 files changed, 730 insertions(+) create mode 100644 drivers/common/cnxk/roc_npc_mcam.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 6163179..7c83050 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -34,6 +34,7 @@ sources = files('roc_dev.c', 'roc_npa.c', 'roc_npa_debug.c', 'roc_npa_irq.c', + 'roc_npc_mcam.c', 'roc_npc_utils.c', 'roc_platform.c', 'roc_utils.c') diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c new file mode 100644 index 0000000..572c52d --- /dev/null +++ b/drivers/common/cnxk/roc_npc_mcam.c @@ -0,0 +1,708 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell.
+ */ +#include "roc_api.h" +#include "roc_priv.h" + +static int +npc_mcam_alloc_counter(struct npc *npc, uint16_t *ctr) +{ + struct npc_mcam_alloc_counter_req *req; + struct npc_mcam_alloc_counter_rsp *rsp; + struct mbox *mbox = npc->mbox; + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_alloc_counter(mbox); + if (req == NULL) + return rc; + req->count = 1; + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + *ctr = rsp->cntr_list[0]; + return rc; +} + +int +npc_mcam_free_counter(struct npc *npc, uint16_t ctr_id) +{ + struct npc_mcam_oper_counter_req *req; + struct mbox *mbox = npc->mbox; + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_free_counter(mbox); + if (req == NULL) + return rc; + req->cntr = ctr_id; + return mbox_process(mbox); +} + +int +npc_mcam_read_counter(struct npc *npc, uint32_t ctr_id, uint64_t *count) +{ + struct npc_mcam_oper_counter_req *req; + struct npc_mcam_oper_counter_rsp *rsp; + struct mbox *mbox = npc->mbox; + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_counter_stats(mbox); + if (req == NULL) + return rc; + req->cntr = ctr_id; + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + *count = rsp->stat; + return rc; +} + +int +npc_mcam_clear_counter(struct npc *npc, uint32_t ctr_id) +{ + struct npc_mcam_oper_counter_req *req; + struct mbox *mbox = npc->mbox; + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_clear_counter(mbox); + if (req == NULL) + return rc; + req->cntr = ctr_id; + return mbox_process(mbox); +} + +int +npc_mcam_free_entry(struct npc *npc, uint32_t entry) +{ + struct npc_mcam_free_entry_req *req; + struct mbox *mbox = npc->mbox; + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_free_entry(mbox); + if (req == NULL) + return rc; + req->entry = entry; + return mbox_process(mbox); +} + +int +npc_mcam_free_all_entries(struct npc *npc) +{ + struct npc_mcam_free_entry_req *req; + struct mbox *mbox = npc->mbox; + int rc = -ENOSPC; + + req = 
mbox_alloc_msg_npc_mcam_free_entry(mbox); + if (req == NULL) + return rc; + req->all = 1; + return mbox_process(mbox); +} + +static int +npc_supp_key_len(uint32_t supp_mask) +{ + int nib_count = 0; + + while (supp_mask) { + nib_count++; + supp_mask &= (supp_mask - 1); + } + return nib_count * 4; +} + +/** + * Returns true if any LDATA bits are extracted for specific LID+LTYPE. + * + * No LFLAG extraction is taken into account. + */ +static int +npc_lid_lt_in_kex(struct npc *npc, uint8_t lid, uint8_t lt) +{ + struct npc_xtract_info *x_info; + int i; + + for (i = 0; i < NPC_MAX_LD; i++) { + x_info = &npc->prx_dxcfg[NIX_INTF_RX][lid][lt].xtract[i]; + /* Check for LDATA */ + if (x_info->enable && x_info->len > 0) + return true; + } + + return false; +} + +static void +npc_construct_ldata_mask(struct npc *npc, struct plt_bitmap *bmap, uint8_t lid, + uint8_t lt, uint8_t ld) +{ + struct npc_xtract_info *x_info, *infoflag; + int hdr_off, keylen; + npc_dxcfg_t *p; + npc_fxcfg_t *q; + int i, j; + + p = &npc->prx_dxcfg; + x_info = &(*p)[0][lid][lt].xtract[ld]; + + if (x_info->enable == 0) + return; + + hdr_off = x_info->hdr_off * 8; + keylen = x_info->len * 8; + for (i = hdr_off; i < (hdr_off + keylen); i++) + plt_bitmap_set(bmap, i); + + if (x_info->flags_enable == 0) + return; + + if ((npc->prx_lfcfg[0].i & 0x7) != lid) + return; + + q = &npc->prx_fxcfg; + for (j = 0; j < NPC_MAX_LFL; j++) { + infoflag = &(*q)[0][ld][j].xtract[0]; + if (infoflag->enable) { + hdr_off = infoflag->hdr_off * 8; + keylen = infoflag->len * 8; + for (i = hdr_off; i < (hdr_off + keylen); i++) + plt_bitmap_set(bmap, i); + } + } +} + +/** + * Check if given LID+LTYPE combination is present in KEX + * + * If len is non-zero, this function will return true if KEX extracts len bytes + * at the given offset. Otherwise it will return true if any bytes are extracted + * specifically for the given LID+LTYPE combination (meaning not LFLAG based).
+ * The second case increases flexibility for custom frames whose extracted + * bits may change depending on the KEX profile loaded. + * + * @param npc NPC context structure + * @param lid Layer ID to check for + * @param lt Layer Type to check for + * @param offset offset into the layer header to match + * @param len length of the match + */ +static bool +npc_is_kex_enabled(struct npc *npc, uint8_t lid, uint8_t lt, int offset, + int len) +{ + struct plt_bitmap *bmap; + uint32_t bmap_sz; + uint8_t *mem; + int i; + + if (!len) + return npc_lid_lt_in_kex(npc, lid, lt); + + bmap_sz = plt_bitmap_get_memory_footprint(300 * 8); + mem = plt_zmalloc(bmap_sz, 0); + if (mem == NULL) { + plt_err("mem alloc failed"); + return false; + } + bmap = plt_bitmap_init(300 * 8, mem, bmap_sz); + if (bmap == NULL) { + plt_err("mem alloc failed"); + plt_free(mem); + return false; + } + + npc_construct_ldata_mask(npc, bmap, lid, lt, 0); + npc_construct_ldata_mask(npc, bmap, lid, lt, 1); + + for (i = offset; i < (offset + len); i++) { + if (plt_bitmap_get(bmap, i) != 0x1) { + plt_free(mem); + return false; + } + } + + plt_free(mem); + return true; +} + +uint64_t +npc_get_kex_capability(struct npc *npc) +{ + npc_kex_cap_terms_t kex_cap; + + memset(&kex_cap, 0, sizeof(kex_cap)); + + /* Ethtype: Offset 12B, len 2B */ + kex_cap.bit.ethtype_0 = npc_is_kex_enabled( + npc, NPC_LID_LA, NPC_LT_LA_ETHER, 12 * 8, 2 * 8); + /* QINQ VLAN Ethtype: offset 8B, len 2B */ + kex_cap.bit.ethtype_x = npc_is_kex_enabled( + npc, NPC_LID_LB, NPC_LT_LB_STAG_QINQ, 8 * 8, 2 * 8); + /* VLAN ID0 : Outer VLAN: Offset 2B, len 2B */ + kex_cap.bit.vlan_id_0 = npc_is_kex_enabled( + npc, NPC_LID_LB, NPC_LT_LB_CTAG, 2 * 8, 2 * 8); + /* VLAN ID0 : Inner VLAN: offset 6B, len 2B */ + kex_cap.bit.vlan_id_x = npc_is_kex_enabled( + npc, NPC_LID_LB, NPC_LT_LB_STAG_QINQ, 6 * 8, 2 * 8); + /* DMAC: offset 0B, len 6B */ + kex_cap.bit.dmac = npc_is_kex_enabled(npc, NPC_LID_LA, NPC_LT_LA_ETHER, + 0 * 8, 6 * 8); + /* IP proto: offset 9B, len
1B */ + kex_cap.bit.ip_proto = + npc_is_kex_enabled(npc, NPC_LID_LC, NPC_LT_LC_IP, 9 * 8, 1 * 8); + /* UDP dport: offset 2B, len 2B */ + kex_cap.bit.udp_dport = npc_is_kex_enabled(npc, NPC_LID_LD, + NPC_LT_LD_UDP, 2 * 8, 2 * 8); + /* UDP sport: offset 0B, len 2B */ + kex_cap.bit.udp_sport = npc_is_kex_enabled(npc, NPC_LID_LD, + NPC_LT_LD_UDP, 0 * 8, 2 * 8); + /* TCP dport: offset 2B, len 2B */ + kex_cap.bit.tcp_dport = npc_is_kex_enabled(npc, NPC_LID_LD, + NPC_LT_LD_TCP, 2 * 8, 2 * 8); + /* TCP sport: offset 0B, len 2B */ + kex_cap.bit.tcp_sport = npc_is_kex_enabled(npc, NPC_LID_LD, + NPC_LT_LD_TCP, 0 * 8, 2 * 8); + /* IP SIP: offset 12B, len 4B */ + kex_cap.bit.sip_addr = npc_is_kex_enabled(npc, NPC_LID_LC, NPC_LT_LC_IP, + 12 * 8, 4 * 8); + /* IP DIP: offset 14B, len 4B */ + kex_cap.bit.dip_addr = npc_is_kex_enabled(npc, NPC_LID_LC, NPC_LT_LC_IP, + 14 * 8, 4 * 8); + /* IP6 SIP: offset 8B, len 16B */ + kex_cap.bit.sip6_addr = npc_is_kex_enabled( + npc, NPC_LID_LC, NPC_LT_LC_IP6, 8 * 8, 16 * 8); + /* IP6 DIP: offset 24B, len 16B */ + kex_cap.bit.dip6_addr = npc_is_kex_enabled( + npc, NPC_LID_LC, NPC_LT_LC_IP6, 24 * 8, 16 * 8); + /* ESP SPI: offset 0B, len 4B */ + kex_cap.bit.ipsec_spi = npc_is_kex_enabled(npc, NPC_LID_LE, + NPC_LT_LE_ESP, 0 * 8, 4 * 8); + /* VXLAN VNI: offset 4B, len 3B */ + kex_cap.bit.ld_vni = npc_is_kex_enabled(npc, NPC_LID_LE, + NPC_LT_LE_VXLAN, 0 * 8, 3 * 8); + + /* Custom L3 frame: varied offset and lengths */ + kex_cap.bit.custom_l3 = + npc_is_kex_enabled(npc, NPC_LID_LC, NPC_LT_LC_CUSTOM0, 0, 0); + kex_cap.bit.custom_l3 |= + npc_is_kex_enabled(npc, NPC_LID_LC, NPC_LT_LC_CUSTOM1, 0, 0); + /* SCTP sport : offset 0B, len 2B */ + kex_cap.bit.sctp_sport = npc_is_kex_enabled( + npc, NPC_LID_LD, NPC_LT_LD_SCTP, 0 * 8, 2 * 8); + /* SCTP dport : offset 2B, len 2B */ + kex_cap.bit.sctp_dport = npc_is_kex_enabled( + npc, NPC_LID_LD, NPC_LT_LD_SCTP, 2 * 8, 2 * 8); + /* ICMP type : offset 0B, len 1B */ + kex_cap.bit.icmp_type = npc_is_kex_enabled( + npc, 
NPC_LID_LD, NPC_LT_LD_ICMP, 0 * 8, 1 * 8); + /* ICMP code : offset 1B, len 1B */ + kex_cap.bit.icmp_code = npc_is_kex_enabled( + npc, NPC_LID_LD, NPC_LT_LD_ICMP, 1 * 8, 1 * 8); + /* ICMP id : offset 4B, len 2B */ + kex_cap.bit.icmp_id = npc_is_kex_enabled(npc, NPC_LID_LD, + NPC_LT_LD_ICMP, 4 * 8, 2 * 8); + /* IGMP grp_addr : offset 4B, len 4B */ + kex_cap.bit.igmp_grp_addr = npc_is_kex_enabled( + npc, NPC_LID_LD, NPC_LT_LD_IGMP, 4 * 8, 4 * 8); + /* GTPU teid : offset 4B, len 4B */ + kex_cap.bit.gtpu_teid = npc_is_kex_enabled( + npc, NPC_LID_LE, NPC_LT_LE_GTPU, 4 * 8, 4 * 8); + return kex_cap.all_bits; +} + +#define BYTESM1_SHIFT 16 +#define HDR_OFF_SHIFT 8 +static void +npc_update_kex_info(struct npc_xtract_info *xtract_info, uint64_t val) +{ + xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1; + xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff; + xtract_info->key_off = val & 0x3f; + xtract_info->enable = ((val >> 7) & 0x1); + xtract_info->flags_enable = ((val >> 6) & 0x1); +} + +int +npc_mcam_alloc_entries(struct npc *npc, int ref_mcam, int *alloc_entry, + int req_count, int prio, int *resp_count) +{ + struct npc_mcam_alloc_entry_req *req; + struct npc_mcam_alloc_entry_rsp *rsp; + struct mbox *mbox = npc->mbox; + int rc = -ENOSPC; + int i; + + req = mbox_alloc_msg_npc_mcam_alloc_entry(mbox); + if (req == NULL) + return rc; + req->contig = 0; + req->count = req_count; + req->priority = prio; + req->ref_entry = ref_mcam; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + for (i = 0; i < rsp->count; i++) + alloc_entry[i] = rsp->entry_list[i]; + *resp_count = rsp->count; + return 0; +} + +int +npc_mcam_alloc_entry(struct npc *npc, struct roc_npc_flow *mcam, + struct roc_npc_flow *ref_mcam, int prio, int *resp_count) +{ + struct npc_mcam_alloc_entry_req *req; + struct npc_mcam_alloc_entry_rsp *rsp; + struct mbox *mbox = npc->mbox; + int rc = -ENOSPC; + + req = mbox_alloc_msg_npc_mcam_alloc_entry(mbox); + if (req == NULL) + return rc; + 
req->contig = 1; + req->count = 1; + req->priority = prio; + req->ref_entry = ref_mcam->mcam_id; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + memset(mcam, 0, sizeof(struct roc_npc_flow)); + mcam->mcam_id = rsp->entry; + mcam->nix_intf = ref_mcam->nix_intf; + *resp_count = rsp->count; + return 0; +} + +int +npc_mcam_ena_dis_entry(struct npc *npc, struct roc_npc_flow *mcam, bool enable) +{ + struct npc_mcam_ena_dis_entry_req *req; + struct mbox *mbox = npc->mbox; + int rc = -ENOSPC; + + if (enable) + req = mbox_alloc_msg_npc_mcam_ena_entry(mbox); + else + req = mbox_alloc_msg_npc_mcam_dis_entry(mbox); + + if (req == NULL) + return rc; + req->entry = mcam->mcam_id; + mcam->enable = enable; + return mbox_process(mbox); +} + +int +npc_mcam_write_entry(struct npc *npc, struct roc_npc_flow *mcam) +{ + struct npc_mcam_write_entry_req *req; + struct mbox *mbox = npc->mbox; + struct mbox_msghdr *rsp; + int rc = -ENOSPC; + int i; + + req = mbox_alloc_msg_npc_mcam_write_entry(mbox); + if (req == NULL) + return rc; + req->entry = mcam->mcam_id; + req->intf = mcam->nix_intf; + req->enable_entry = mcam->enable; + req->entry_data.action = mcam->npc_action; + req->entry_data.vtag_action = mcam->vtag_action; + for (i = 0; i < NPC_MCAM_KEY_X4_WORDS; i++) { + req->entry_data.kw[i] = mcam->mcam_data[i]; + req->entry_data.kw_mask[i] = mcam->mcam_mask[i]; + } + return mbox_process_msg(mbox, (void *)&rsp); +} + +static void +npc_mcam_process_mkex_cfg(struct npc *npc, struct npc_get_kex_cfg_rsp *kex_rsp) +{ + volatile uint64_t( + *q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]; + struct npc_xtract_info *x_info = NULL; + int lid, lt, ld, fl, ix; + npc_dxcfg_t *p; + uint64_t keyw; + uint64_t val; + + npc->keyx_supp_nmask[NPC_MCAM_RX] = + kex_rsp->rx_keyx_cfg & 0x7fffffffULL; + npc->keyx_supp_nmask[NPC_MCAM_TX] = + kex_rsp->tx_keyx_cfg & 0x7fffffffULL; + npc->keyx_len[NPC_MCAM_RX] = + npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]); + 
npc->keyx_len[NPC_MCAM_TX] = + npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]); + + keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL; + npc->keyw[NPC_MCAM_RX] = keyw; + keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL; + npc->keyw[NPC_MCAM_TX] = keyw; + + /* Update KEX_LD_FLAG */ + for (ix = 0; ix < NPC_MAX_INTF; ix++) { + for (ld = 0; ld < NPC_MAX_LD; ld++) { + for (fl = 0; fl < NPC_MAX_LFL; fl++) { + x_info = &npc->prx_fxcfg[ix][ld][fl].xtract[0]; + val = kex_rsp->intf_ld_flags[ix][ld][fl]; + npc_update_kex_info(x_info, val); + } + } + } + + /* Update LID, LT and LDATA cfg */ + p = &npc->prx_dxcfg; + q = (volatile uint64_t(*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])( + &kex_rsp->intf_lid_lt_ld); + for (ix = 0; ix < NPC_MAX_INTF; ix++) { + for (lid = 0; lid < NPC_MAX_LID; lid++) { + for (lt = 0; lt < NPC_MAX_LT; lt++) { + for (ld = 0; ld < NPC_MAX_LD; ld++) { + x_info = &(*p)[ix][lid][lt].xtract[ld]; + val = (*q)[ix][lid][lt][ld]; + npc_update_kex_info(x_info, val); + } + } + } + } + /* Update LDATA Flags cfg */ + npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0]; + npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1]; +} + +int +npc_mcam_fetch_kex_cfg(struct npc *npc) +{ + struct npc_get_kex_cfg_rsp *kex_rsp; + struct mbox *mbox = npc->mbox; + int rc = 0; + + mbox_alloc_msg_npc_get_kex_cfg(mbox); + rc = mbox_process_msg(mbox, (void *)&kex_rsp); + if (rc) { + plt_err("Failed to fetch NPC KEX config"); + goto done; + } + + mbox_memcpy((char *)npc->profile_name, kex_rsp->mkex_pfl_name, + MKEX_NAME_LEN); + + npc_mcam_process_mkex_cfg(npc, kex_rsp); + +done: + return rc; +} + +int +npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow, + struct npc_parse_state *pst) +{ + int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 
0 : 1); + struct npc_mcam_write_entry_req *req; + struct mbox *mbox = npc->mbox; + struct mbox_msghdr *rsp; + uint16_t ctr = ~(0); + int rc, idx; + int entry; + + PLT_SET_USED(pst); + + if (use_ctr) { + rc = npc_mcam_alloc_counter(npc, &ctr); + if (rc) + return rc; + } + + entry = npc_check_preallocated_entry_cache(mbox, flow, npc); + if (entry < 0) { + npc_mcam_free_counter(npc, ctr); + return NPC_ERR_MCAM_ALLOC; + } + + req = mbox_alloc_msg_npc_mcam_write_entry(mbox); + if (req == NULL) + return -ENOSPC; + req->set_cntr = use_ctr; + req->cntr = ctr; + req->entry = entry; + + req->intf = (flow->nix_intf == NIX_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX; + req->enable_entry = 1; + req->entry_data.action = flow->npc_action; + + /* + * Driver sets vtag action on per interface basis, not + * per flow basis. It is a matter of how we decide to support + * this pmd specific behavior. There are two ways: + * 1. Inherit the vtag action from the one configured + * for this interface. This can be read from the + * vtag_action configured for default mcam entry of + * this pf_func. + * 2. Do not support vtag action with npc_flow. + * + * Second approach is used now. 
+ */ + req->entry_data.vtag_action = 0ULL; + + for (idx = 0; idx < ROC_NPC_MAX_MCAM_WIDTH_DWORDS; idx++) { + req->entry_data.kw[idx] = flow->mcam_data[idx]; + req->entry_data.kw_mask[idx] = flow->mcam_mask[idx]; + } + + if (flow->nix_intf == NIX_INTF_RX) { + req->entry_data.kw[0] |= (uint64_t)npc->channel; + req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1); + } else { + uint16_t pf_func = (flow->npc_action >> 4) & 0xffff; + + pf_func = plt_cpu_to_be_16(pf_func); + req->entry_data.kw[0] |= ((uint64_t)pf_func << 32); + req->entry_data.kw_mask[0] |= ((uint64_t)0xffff << 32); + } + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc != 0) + return rc; + + flow->mcam_id = entry; + if (use_ctr) + flow->ctr_id = ctr; + return 0; +} + +int +npc_program_mcam(struct npc *npc, struct npc_parse_state *pst, bool mcam_alloc) +{ + struct npc_mcam_read_base_rule_rsp *base_rule_rsp; + /* This is non-LDATA part in search key */ + uint64_t key_data[2] = {0ULL, 0ULL}; + uint64_t key_mask[2] = {0ULL, 0ULL}; + int key_len, bit = 0, index, rc = 0; + int intf = pst->flow->nix_intf; + struct mcam_entry *base_entry; + int off, idx, data_off = 0; + uint8_t lid, mask, data; + uint16_t layer_info; + uint64_t lt, flags; + + /* Skip till Layer A data start */ + while (bit < NPC_PARSE_KEX_S_LA_OFFSET) { + if (npc->keyx_supp_nmask[intf] & (1 << bit)) + data_off++; + bit++; + } + + /* Each bit represents 1 nibble */ + data_off *= 4; + + index = 0; + for (lid = 0; lid < NPC_MAX_LID; lid++) { + /* Offset in key */ + off = NPC_PARSE_KEX_S_LID_OFFSET(lid); + lt = pst->lt[lid] & 0xf; + flags = pst->flags[lid] & 0xff; + + /* NPC_LAYER_KEX_S */ + layer_info = ((npc->keyx_supp_nmask[intf] >> off) & 0x7); + + if (layer_info) { + for (idx = 0; idx <= 2; idx++) { + if (layer_info & (1 << idx)) { + if (idx == 2) + data = lt; + else if (idx == 1) + data = ((flags >> 4) & 0xf); + else + data = (flags & 0xf); + + if (data_off >= 64) { + data_off = 0; + index++; + } + key_data[index] |= + ((uint64_t)data << 
data_off); + mask = 0xf; + if (lt == 0) + mask = 0; + key_mask[index] |= + ((uint64_t)mask << data_off); + data_off += 4; + } + } + } + } + + /* Copy this into mcam string */ + key_len = (pst->npc->keyx_len[intf] + 7) / 8; + memcpy(pst->flow->mcam_data, key_data, key_len); + memcpy(pst->flow->mcam_mask, key_mask, key_len); + + if (pst->is_vf) { + (void)mbox_alloc_msg_npc_read_base_steer_rule(npc->mbox); + rc = mbox_process_msg(npc->mbox, (void *)&base_rule_rsp); + if (rc) { + plt_err("Failed to fetch VF's base MCAM entry"); + return rc; + } + base_entry = &base_rule_rsp->entry_data; + for (idx = 0; idx < ROC_NPC_MAX_MCAM_WIDTH_DWORDS; idx++) { + pst->flow->mcam_data[idx] |= base_entry->kw[idx]; + pst->flow->mcam_mask[idx] |= base_entry->kw_mask[idx]; + } + } + + /* + * Now we have mcam data and mask formatted as + * [Key_len/4 nibbles][0 or 1 nibble hole][data] + * A hole is present if key_len is an odd number of nibbles. + * The mcam data must be split into 64-bit + 48-bit segments + * for each bank W0, W1.
+ */ + + if (mcam_alloc) + return npc_mcam_alloc_and_write(npc, pst->flow, pst); + else + return 0; +} + +int +npc_flow_free_all_resources(struct npc *npc) +{ + struct npc_mcam_ents_info *info; + struct roc_npc_flow *flow; + struct plt_bitmap *bmap; + int entry_count = 0; + int rc, idx; + + for (idx = 0; idx < npc->flow_max_priority; idx++) { + info = &npc->flow_entry_info[idx]; + entry_count += info->live_ent; + } + + if (entry_count == 0) + return 0; + + /* Free all MCAM entries allocated */ + rc = npc_mcam_free_all_entries(npc); + + /* Free any MCAM counters and delete flow list */ + for (idx = 0; idx < npc->flow_max_priority; idx++) { + while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) { + if (flow->ctr_id != NPC_COUNTER_NONE) + rc |= npc_mcam_free_counter(npc, flow->ctr_id); + + TAILQ_REMOVE(&npc->flow_list[idx], flow, next); + plt_free(flow); + bmap = npc->live_entries[flow->priority]; + plt_bitmap_clear(bmap, flow->mcam_id); + } + info = &npc->flow_entry_info[idx]; + info->free_ent = 0; + info->live_ent = 0; + } + return rc; +} diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h index 9434826..13768f9 100644 --- a/drivers/common/cnxk/roc_npc_priv.h +++ b/drivers/common/cnxk/roc_npc_priv.h @@ -379,6 +379,22 @@ roc_npc_to_npc_priv(struct roc_npc *npc) return (struct npc *)npc->reserved; } +int npc_mcam_free_counter(struct npc *npc, uint16_t ctr_id); +int npc_mcam_read_counter(struct npc *npc, uint32_t ctr_id, uint64_t *count); +int npc_mcam_clear_counter(struct npc *npc, uint32_t ctr_id); +int npc_mcam_free_entry(struct npc *npc, uint32_t entry); +int npc_mcam_free_all_entries(struct npc *npc); +int npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow, + struct npc_parse_state *pst); +int npc_mcam_alloc_entry(struct npc *npc, struct roc_npc_flow *mcam, + struct roc_npc_flow *ref_mcam, int prio, + int *resp_count); +int npc_mcam_alloc_entries(struct npc *npc, int ref_mcam, int *alloc_entry, + int 
req_count, int prio, int *resp_count); + +int npc_mcam_ena_dis_entry(struct npc *npc, struct roc_npc_flow *mcam, + bool enable); +int npc_mcam_write_entry(struct npc *npc, struct roc_npc_flow *mcam); int npc_update_parse_state(struct npc_parse_state *pst, struct npc_parse_item_info *info, int lid, int lt, uint8_t flags); @@ -386,7 +402,12 @@ void npc_get_hw_supp_mask(struct npc_parse_state *pst, struct npc_parse_item_info *info, int lid, int lt); int npc_parse_item_basic(const struct roc_npc_item_info *item, struct npc_parse_item_info *info); +int npc_mcam_fetch_kex_cfg(struct npc *npc); int npc_check_preallocated_entry_cache(struct mbox *mbox, struct roc_npc_flow *flow, struct npc *npc); +int npc_flow_free_all_resources(struct npc *npc); +int npc_program_mcam(struct npc *npc, struct npc_parse_state *pst, + bool mcam_alloc); +uint64_t npc_get_kex_capability(struct npc *npc); #endif /* _ROC_NPC_PRIV_H_ */ From patchwork Thu Apr 1 12:38:08 2021 X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90416 X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram Date: Thu, 1 Apr 2021 18:08:08 +0530 Message-ID: <20210401123817.14348-44-ndabilpuram@marvell.com> In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com> References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com> Subject: [dpdk-dev] [PATCH v3 43/52] common/cnxk: add npc parsing API From: Kiran Kumar K Add npc parsing API support to parse different patterns and actions. Based on the patterns and actions, ltype values are chosen and the mcam data is configured at the appropriate offsets. Signed-off-by: Kiran Kumar K --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_npc_parse.c | 703 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_npc_priv.h | 13 + 3 files changed, 717 insertions(+) create mode 100644 drivers/common/cnxk/roc_npc_parse.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 7c83050..9dd4c23 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -35,6 +35,7 @@ sources = files('roc_dev.c', 'roc_npa_debug.c', 'roc_npa_irq.c', 'roc_npc_mcam.c', + 'roc_npc_parse.c', 'roc_npc_utils.c', 'roc_platform.c', 'roc_utils.c') diff --git a/drivers/common/cnxk/roc_npc_parse.c b/drivers/common/cnxk/roc_npc_parse.c new file mode 100644 index 0000000..d07f91d --- /dev/null +++ b/drivers/common/cnxk/roc_npc_parse.c @@ -0,0 +1,703 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell.
+ */ +#include "roc_api.h" +#include "roc_priv.h" + +const struct roc_npc_item_info * +npc_parse_skip_void_and_any_items(const struct roc_npc_item_info *pattern) +{ + while ((pattern->type == ROC_NPC_ITEM_TYPE_VOID) || + (pattern->type == ROC_NPC_ITEM_TYPE_ANY)) + pattern++; + + return pattern; +} + +int +npc_parse_meta_items(struct npc_parse_state *pst) +{ + PLT_SET_USED(pst); + return 0; +} + +int +npc_parse_cpt_hdr(struct npc_parse_state *pst) +{ + uint8_t hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lid, lt; + int rc; + + /* Identify the pattern type into lid, lt */ + if (pst->pattern->type != ROC_NPC_ITEM_TYPE_CPT_HDR) + return 0; + + lid = NPC_LID_LA; + lt = NPC_LT_LA_CPT_HDR; + info.hw_hdr_len = 0; + + /* Prepare for parsing the item */ + info.hw_mask = &hw_mask; + info.len = pst->pattern->size; + npc_get_hw_supp_mask(pst, &info, lid, lt); + info.spec = NULL; + info.mask = NULL; + + /* Basic validation of item parameters */ + rc = npc_parse_item_basic(pst->pattern, &info); + if (rc) + return rc; + + /* Update pst if not validate only? clash check? */ + return npc_update_parse_state(pst, &info, lid, lt, 0); +} + +int +npc_parse_higig2_hdr(struct npc_parse_state *pst) +{ + uint8_t hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lid, lt; + int rc; + + /* Identify the pattern type into lid, lt */ + if (pst->pattern->type != ROC_NPC_ITEM_TYPE_HIGIG2) + return 0; + + lid = NPC_LID_LA; + lt = NPC_LT_LA_HIGIG2_ETHER; + info.hw_hdr_len = 0; + + if (pst->flow->nix_intf == NIX_INTF_TX) { + lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER; + info.hw_hdr_len = NPC_IH_LENGTH; + } + + /* Prepare for parsing the item */ + info.hw_mask = &hw_mask; + info.len = pst->pattern->size; + npc_get_hw_supp_mask(pst, &info, lid, lt); + info.spec = NULL; + info.mask = NULL; + + /* Basic validation of item parameters */ + rc = npc_parse_item_basic(pst->pattern, &info); + if (rc) + return rc; + + /* Update pst if not validate only? clash check? 
*/ + return npc_update_parse_state(pst, &info, lid, lt, 0); +} + +int +npc_parse_la(struct npc_parse_state *pst) +{ + uint8_t hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lid, lt; + int rc; + + /* Identify the pattern type into lid, lt */ + if (pst->pattern->type != ROC_NPC_ITEM_TYPE_ETH) + return 0; + + lid = NPC_LID_LA; + lt = NPC_LT_LA_ETHER; + info.hw_hdr_len = 0; + + if (pst->flow->nix_intf == NIX_INTF_TX) { + lt = NPC_LT_LA_IH_NIX_ETHER; + info.hw_hdr_len = NPC_IH_LENGTH; + if (pst->npc->switch_header_type == ROC_PRIV_FLAGS_HIGIG) { + lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER; + info.hw_hdr_len += NPC_HIGIG2_LENGTH; + } + } else { + if (pst->npc->switch_header_type == ROC_PRIV_FLAGS_HIGIG) { + lt = NPC_LT_LA_HIGIG2_ETHER; + info.hw_hdr_len = NPC_HIGIG2_LENGTH; + } + } + + /* Prepare for parsing the item */ + info.hw_mask = &hw_mask; + info.len = pst->pattern->size; + npc_get_hw_supp_mask(pst, &info, lid, lt); + info.spec = NULL; + info.mask = NULL; + + /* Basic validation of item parameters */ + rc = npc_parse_item_basic(pst->pattern, &info); + if (rc) + return rc; + + /* Update pst if not validate only? clash check? */ + return npc_update_parse_state(pst, &info, lid, lt, 0); +} + +int +npc_parse_lb(struct npc_parse_state *pst) +{ + const struct roc_npc_item_info *pattern = pst->pattern; + const struct roc_npc_item_info *last_pattern; + char hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lid, lt, lflags; + int nr_vlans = 0; + int rc; + + info.spec = NULL; + info.mask = NULL; + info.def_mask = NULL; + info.hw_hdr_len = NPC_TPID_LENGTH; + + lid = NPC_LID_LB; + lflags = 0; + last_pattern = pattern; + + if (pst->pattern->type == ROC_NPC_ITEM_TYPE_VLAN) { + /* RTE vlan is either 802.1q or 802.1ad, + * which maps to either CTAG/STAG. We need to decide + * based on the number of VLANs present. Matching is + * supported on the first tag only.
+ */ + info.hw_mask = NULL; + info.len = pst->pattern->size; + + pattern = pst->pattern; + while (pattern->type == ROC_NPC_ITEM_TYPE_VLAN) { + nr_vlans++; + + /* Basic validation of Second/Third vlan item */ + if (nr_vlans > 1) { + rc = npc_parse_item_basic(pattern, &info); + if (rc != 0) + return rc; + } + last_pattern = pattern; + pattern++; + pattern = npc_parse_skip_void_and_any_items(pattern); + } + + switch (nr_vlans) { + case 1: + lt = NPC_LT_LB_CTAG; + break; + case 2: + lt = NPC_LT_LB_STAG_QINQ; + lflags = NPC_F_STAG_CTAG; + break; + case 3: + lt = NPC_LT_LB_STAG_QINQ; + lflags = NPC_F_STAG_STAG_CTAG; + break; + default: + return NPC_ERR_PATTERN_NOTSUP; + } + } else if (pst->pattern->type == ROC_NPC_ITEM_TYPE_E_TAG) { + /* we can support ETAG and match a subsequent CTAG + * without any matching support. + */ + lt = NPC_LT_LB_ETAG; + lflags = 0; + + last_pattern = pst->pattern; + pattern = npc_parse_skip_void_and_any_items(pst->pattern + 1); + if (pattern->type == ROC_NPC_ITEM_TYPE_VLAN) { + /* set supported mask to NULL for vlan tag */ + info.hw_mask = NULL; + info.len = pattern->size; + rc = npc_parse_item_basic(pattern, &info); + if (rc != 0) + return rc; + + lflags = NPC_F_ETAG_CTAG; + last_pattern = pattern; + } + info.len = pattern->size; + } else if (pst->pattern->type == ROC_NPC_ITEM_TYPE_QINQ) { + info.hw_mask = NULL; + info.len = pst->pattern->size; + lt = NPC_LT_LB_STAG_QINQ; + lflags = NPC_F_STAG_CTAG; + } else { + return 0; + } + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + npc_get_hw_supp_mask(pst, &info, lid, lt); + + rc = npc_parse_item_basic(pst->pattern, &info); + if (rc != 0) + return rc; + + /* Point pattern to last item consumed */ + pst->pattern = last_pattern; + return npc_update_parse_state(pst, &info, lid, lt, lflags); +} + +static int +npc_parse_mpls_label_stack(struct npc_parse_state *pst, int *flag) +{ + uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS, NPC_F_MPLS_3_LABELS, + NPC_F_MPLS_4_LABELS}; + const 
struct roc_npc_item_info *pattern = pst->pattern; + struct npc_parse_item_info info; + int nr_labels = 0; + int rc; + + /* + * pst->pattern points to first MPLS label. We only check + * that subsequent labels do not have anything to match. + */ + info.hw_mask = NULL; + info.len = pattern->size; + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + info.def_mask = NULL; + + while (pattern->type == ROC_NPC_ITEM_TYPE_MPLS) { + nr_labels++; + + /* Basic validation of Second/Third/Fourth mpls item */ + if (nr_labels > 1) { + rc = npc_parse_item_basic(pattern, &info); + if (rc != 0) + return rc; + } + pst->last_pattern = pattern; + pattern++; + pattern = npc_parse_skip_void_and_any_items(pattern); + } + + if (nr_labels < 1 || nr_labels > 4) + return NPC_ERR_PATTERN_NOTSUP; + + *flag = flag_list[nr_labels - 1]; + return 0; +} + +static int +npc_parse_mpls(struct npc_parse_state *pst, int lid) +{ + /* Find number of MPLS labels */ + uint8_t hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lt, lflags; + int rc; + + lflags = 0; + + if (lid == NPC_LID_LC) + lt = NPC_LT_LC_MPLS; + else if (lid == NPC_LID_LD) + lt = NPC_LT_LD_TU_MPLS_IN_IP; + else + lt = NPC_LT_LE_TU_MPLS_IN_UDP; + + /* Prepare for parsing the first item */ + info.hw_mask = &hw_mask; + info.len = pst->pattern->size; + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + + npc_get_hw_supp_mask(pst, &info, lid, lt); + rc = npc_parse_item_basic(pst->pattern, &info); + if (rc != 0) + return rc; + + /* + * Parse for more labels. + * This sets lflags and pst->last_pattern correctly. 
+ */ + rc = npc_parse_mpls_label_stack(pst, &lflags); + if (rc != 0) + return rc; + + pst->tunnel = 1; + pst->pattern = pst->last_pattern; + + return npc_update_parse_state(pst, &info, lid, lt, lflags); +} + +static inline void +npc_check_lc_ip_tunnel(struct npc_parse_state *pst) +{ + const struct roc_npc_item_info *pattern = pst->pattern + 1; + + pattern = npc_parse_skip_void_and_any_items(pattern); + if (pattern->type == ROC_NPC_ITEM_TYPE_MPLS || + pattern->type == ROC_NPC_ITEM_TYPE_IPV4 || + pattern->type == ROC_NPC_ITEM_TYPE_IPV6) + pst->tunnel = 1; +} + +int +npc_parse_lc(struct npc_parse_state *pst) +{ + uint8_t hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lid, lt; + int rc; + + if (pst->pattern->type == ROC_NPC_ITEM_TYPE_MPLS) + return npc_parse_mpls(pst, NPC_LID_LC); + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + lid = NPC_LID_LC; + + switch (pst->pattern->type) { + case ROC_NPC_ITEM_TYPE_IPV4: + lt = NPC_LT_LC_IP; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_IPV6: + lid = NPC_LID_LC; + lt = NPC_LT_LC_IP6; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4: + lt = NPC_LT_LC_ARP; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_IPV6_EXT: + lid = NPC_LID_LC; + lt = NPC_LT_LC_IP6_EXT; + info.len = pst->pattern->size; + info.hw_hdr_len = 40; + break; + case ROC_NPC_ITEM_TYPE_L3_CUSTOM: + lt = NPC_LT_LC_CUSTOM0; + info.len = pst->pattern->size; + break; + default: + /* No match at this layer */ + return 0; + } + + /* Identify if IP tunnels MPLS or IPv4/v6 */ + npc_check_lc_ip_tunnel(pst); + + npc_get_hw_supp_mask(pst, &info, lid, lt); + rc = npc_parse_item_basic(pst->pattern, &info); + if (rc != 0) + return rc; + + return npc_update_parse_state(pst, &info, lid, lt, 0); +} + +int +npc_parse_ld(struct npc_parse_state *pst) +{ + char hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lid, lt, lflags; 
+ int rc; + + if (pst->tunnel) { + /* We have already parsed MPLS or IPv4/v6 followed + * by MPLS or IPv4/v6. Subsequent TCP/UDP etc + * would be parsed as tunneled versions. Skip + * this layer, except for tunneled MPLS. If LC is + * MPLS, we have anyway skipped all stacked MPLS + * labels. + */ + if (pst->pattern->type == ROC_NPC_ITEM_TYPE_MPLS) + return npc_parse_mpls(pst, NPC_LID_LD); + return 0; + } + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + info.def_mask = NULL; + info.len = 0; + info.hw_hdr_len = 0; + + lid = NPC_LID_LD; + lflags = 0; + + switch (pst->pattern->type) { + case ROC_NPC_ITEM_TYPE_ICMP: + if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6) + lt = NPC_LT_LD_ICMP6; + else + lt = NPC_LT_LD_ICMP; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_UDP: + lt = NPC_LT_LD_UDP; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_IGMP: + lt = NPC_LT_LD_IGMP; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_TCP: + lt = NPC_LT_LD_TCP; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_SCTP: + lt = NPC_LT_LD_SCTP; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_GRE: + lt = NPC_LT_LD_GRE; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_GRE_KEY: + lt = NPC_LT_LD_GRE; + info.len = pst->pattern->size; + info.hw_hdr_len = 4; + break; + case ROC_NPC_ITEM_TYPE_NVGRE: + lt = NPC_LT_LD_NVGRE; + lflags = NPC_F_GRE_NVGRE; + info.len = pst->pattern->size; + /* Further IP/Ethernet are parsed as tunneled */ + pst->tunnel = 1; + break; + default: + return 0; + } + + npc_get_hw_supp_mask(pst, &info, lid, lt); + rc = npc_parse_item_basic(pst->pattern, &info); + if (rc != 0) + return rc; + + return npc_update_parse_state(pst, &info, lid, lt, lflags); +} + +int +npc_parse_le(struct npc_parse_state *pst) +{ + const struct roc_npc_item_info *pattern = pst->pattern; + char hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lid, lt, lflags; + 
int rc; + + if (pst->tunnel) + return 0; + + if (pst->pattern->type == ROC_NPC_ITEM_TYPE_MPLS) + return npc_parse_mpls(pst, NPC_LID_LE); + + info.spec = NULL; + info.mask = NULL; + info.hw_mask = NULL; + info.def_mask = NULL; + info.len = 0; + info.hw_hdr_len = 0; + lid = NPC_LID_LE; + lflags = 0; + + /* Ensure we are not matching anything in UDP */ + rc = npc_parse_item_basic(pattern, &info); + if (rc) + return rc; + + info.hw_mask = &hw_mask; + pattern = npc_parse_skip_void_and_any_items(pattern); + switch (pattern->type) { + case ROC_NPC_ITEM_TYPE_VXLAN: + lflags = NPC_F_UDP_VXLAN; + info.len = pattern->size; + lt = NPC_LT_LE_VXLAN; + break; + case ROC_NPC_ITEM_TYPE_GTPC: + lflags = NPC_F_UDP_GTP_GTPC; + info.len = pattern->size; + lt = NPC_LT_LE_GTPC; + break; + case ROC_NPC_ITEM_TYPE_GTPU: + lflags = NPC_F_UDP_GTP_GTPU_G_PDU; + info.len = pattern->size; + lt = NPC_LT_LE_GTPU; + break; + case ROC_NPC_ITEM_TYPE_GENEVE: + lflags = NPC_F_UDP_GENEVE; + info.len = pattern->size; + lt = NPC_LT_LE_GENEVE; + break; + case ROC_NPC_ITEM_TYPE_VXLAN_GPE: + lflags = NPC_F_UDP_VXLANGPE; + info.len = pattern->size; + lt = NPC_LT_LE_VXLANGPE; + break; + case ROC_NPC_ITEM_TYPE_ESP: + lt = NPC_LT_LE_ESP; + info.len = pst->pattern->size; + break; + default: + return 0; + } + + pst->tunnel = 1; + + npc_get_hw_supp_mask(pst, &info, lid, lt); + rc = npc_parse_item_basic(pattern, &info); + if (rc != 0) + return rc; + + return npc_update_parse_state(pst, &info, lid, lt, lflags); +} + +int +npc_parse_lf(struct npc_parse_state *pst) +{ + const struct roc_npc_item_info *pattern, *last_pattern; + char hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lid, lt, lflags; + int nr_vlans = 0; + int rc; + + /* We hit this layer if there is a tunneling protocol */ + if (!pst->tunnel) + return 0; + + if (pst->pattern->type != ROC_NPC_ITEM_TYPE_ETH) + return 0; + + lid = NPC_LID_LF; + lt = NPC_LT_LF_TU_ETHER; + lflags = 0; + + /* No match support for vlan tags */ + 
info.hw_mask = NULL; + info.len = pst->pattern->size; + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + + /* Look ahead and find out any VLAN tags. These can be + * detected but no data matching is available. + */ + last_pattern = pst->pattern; + pattern = pst->pattern + 1; + pattern = npc_parse_skip_void_and_any_items(pattern); + while (pattern->type == ROC_NPC_ITEM_TYPE_VLAN) { + nr_vlans++; + last_pattern = pattern; + pattern++; + pattern = npc_parse_skip_void_and_any_items(pattern); + } + switch (nr_vlans) { + case 0: + break; + case 1: + lflags = NPC_F_TU_ETHER_CTAG; + break; + case 2: + lflags = NPC_F_TU_ETHER_STAG_CTAG; + break; + default: + return NPC_ERR_PATTERN_NOTSUP; + } + + info.hw_mask = &hw_mask; + info.len = pst->pattern->size; + info.hw_hdr_len = 0; + npc_get_hw_supp_mask(pst, &info, lid, lt); + info.spec = NULL; + info.mask = NULL; + + rc = npc_parse_item_basic(pst->pattern, &info); + if (rc != 0) + return rc; + + pst->pattern = last_pattern; + + return npc_update_parse_state(pst, &info, lid, lt, lflags); +} + +int +npc_parse_lg(struct npc_parse_state *pst) +{ + char hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lid, lt; + int rc; + + if (!pst->tunnel) + return 0; + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + lid = NPC_LID_LG; + + if (pst->pattern->type == ROC_NPC_ITEM_TYPE_IPV4) { + lt = NPC_LT_LG_TU_IP; + info.len = pst->pattern->size; + } else if (pst->pattern->type == ROC_NPC_ITEM_TYPE_IPV6) { + lt = NPC_LT_LG_TU_IP6; + info.len = pst->pattern->size; + } else { + /* There is no tunneled IP header */ + return 0; + } + + npc_get_hw_supp_mask(pst, &info, lid, lt); + rc = npc_parse_item_basic(pst->pattern, &info); + if (rc != 0) + return rc; + + return npc_update_parse_state(pst, &info, lid, lt, 0); +} + +int +npc_parse_lh(struct npc_parse_state *pst) +{ + char hw_mask[NPC_MAX_EXTRACT_HW_LEN]; + struct npc_parse_item_info info; + int lid, lt; + int rc; + + 
if (!pst->tunnel) + return 0; + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + lid = NPC_LID_LH; + + switch (pst->pattern->type) { + case ROC_NPC_ITEM_TYPE_UDP: + lt = NPC_LT_LH_TU_UDP; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_TCP: + lt = NPC_LT_LH_TU_TCP; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_SCTP: + lt = NPC_LT_LH_TU_SCTP; + info.len = pst->pattern->size; + break; + case ROC_NPC_ITEM_TYPE_ESP: + lt = NPC_LT_LH_TU_ESP; + info.len = pst->pattern->size; + break; + default: + return 0; + } + + npc_get_hw_supp_mask(pst, &info, lid, lt); + rc = npc_parse_item_basic(pst->pattern, &info); + if (rc != 0) + return rc; + + return npc_update_parse_state(pst, &info, lid, lt, 0); +} diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h index 13768f9..dcf26c0 100644 --- a/drivers/common/cnxk/roc_npc_priv.h +++ b/drivers/common/cnxk/roc_npc_priv.h @@ -402,11 +402,24 @@ void npc_get_hw_supp_mask(struct npc_parse_state *pst, struct npc_parse_item_info *info, int lid, int lt); int npc_parse_item_basic(const struct roc_npc_item_info *item, struct npc_parse_item_info *info); +int npc_parse_meta_items(struct npc_parse_state *pst); +int npc_parse_higig2_hdr(struct npc_parse_state *pst); +int npc_parse_cpt_hdr(struct npc_parse_state *pst); +int npc_parse_la(struct npc_parse_state *pst); +int npc_parse_lb(struct npc_parse_state *pst); +int npc_parse_lc(struct npc_parse_state *pst); +int npc_parse_ld(struct npc_parse_state *pst); +int npc_parse_le(struct npc_parse_state *pst); +int npc_parse_lf(struct npc_parse_state *pst); +int npc_parse_lg(struct npc_parse_state *pst); +int npc_parse_lh(struct npc_parse_state *pst); int npc_mcam_fetch_kex_cfg(struct npc *npc); int npc_check_preallocated_entry_cache(struct mbox *mbox, struct roc_npc_flow *flow, struct npc *npc); int npc_flow_free_all_resources(struct npc *npc); +const struct roc_npc_item_info * 
+npc_parse_skip_void_and_any_items(const struct roc_npc_item_info *pattern); int npc_program_mcam(struct npc *npc, struct npc_parse_state *pst, bool mcam_alloc); uint64_t npc_get_kex_capability(struct npc *npc);

From patchwork Thu Apr 1 12:38:09 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90417
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:09 +0530
Message-ID: <20210401123817.14348-45-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 44/52] common/cnxk: add npc init and fini support

From: Kiran Kumar K

Add support to initialize and finalize the NPC. Also add APIs to create and destroy NPC rules.
Signed-off-by: Kiran Kumar K --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_npc.c | 713 ++++++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_npc.h | 40 +++ drivers/common/cnxk/version.map | 16 + 4 files changed, 770 insertions(+) create mode 100644 drivers/common/cnxk/roc_npc.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 9dd4c23..f12e344 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -34,6 +34,7 @@ sources = files('roc_dev.c', 'roc_npa.c', 'roc_npa_debug.c', 'roc_npa_irq.c', + 'roc_npc.c', 'roc_npc_mcam.c', 'roc_npc_parse.c', 'roc_npc_utils.c', diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c new file mode 100644 index 0000000..c1ac3c7 --- /dev/null +++ b/drivers/common/cnxk/roc_npc.c @@ -0,0 +1,713 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +int +roc_npc_mcam_free_counter(struct roc_npc *roc_npc, uint16_t ctr_id) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_mcam_free_counter(npc, ctr_id); +} + +int +roc_npc_mcam_read_counter(struct roc_npc *roc_npc, uint32_t ctr_id, + uint64_t *count) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_mcam_read_counter(npc, ctr_id, count); +} + +int +roc_npc_mcam_clear_counter(struct roc_npc *roc_npc, uint32_t ctr_id) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_mcam_clear_counter(npc, ctr_id); +} + +int +roc_npc_mcam_free_entry(struct roc_npc *roc_npc, uint32_t entry) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_mcam_free_entry(npc, entry); +} + +int +roc_npc_mcam_free_all_resources(struct roc_npc *roc_npc) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_flow_free_all_resources(npc); +} + +int +roc_npc_mcam_alloc_entries(struct roc_npc *roc_npc, int ref_entry, + int *alloc_entry, int req_count, int 
priority, + int *resp_count) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_mcam_alloc_entries(npc, ref_entry, alloc_entry, req_count, + priority, resp_count); +} + +int +roc_npc_mcam_alloc_entry(struct roc_npc *roc_npc, struct roc_npc_flow *mcam, + struct roc_npc_flow *ref_mcam, int prio, + int *resp_count) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_mcam_alloc_entry(npc, mcam, ref_mcam, prio, resp_count); +} + +int +roc_npc_mcam_ena_dis_entry(struct roc_npc *roc_npc, struct roc_npc_flow *mcam, + bool enable) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_mcam_ena_dis_entry(npc, mcam, enable); +} + +int +roc_npc_mcam_write_entry(struct roc_npc *roc_npc, struct roc_npc_flow *mcam) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_mcam_write_entry(npc, mcam); +} + +int +roc_npc_get_low_priority_mcam(struct roc_npc *roc_npc) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + if (roc_model_is_cn10k()) + return (npc->mcam_entries - NPC_MCAME_RESVD_10XX - 1); + else + return (npc->mcam_entries - NPC_MCAME_RESVD_9XXX - 1); +} + +static int +npc_mcam_tot_entries(void) +{ + /* FIXME: change to reading in AF from NPC_AF_CONST1/2 + * MCAM_BANK_DEPTH(_EXT) * MCAM_BANKS + */ + if (roc_model_is_cn10k()) + return 16 * 1024; /* MCAM_BANKS = 4, BANK_DEPTH_EXT = 4096 */ + else + return 4 * 1024; /* MCAM_BANKS = 4, BANK_DEPTH_EXT = 1024 */ +} + +const char * +roc_npc_profile_name_get(struct roc_npc *roc_npc) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return (char *)npc->profile_name; +} + +int +roc_npc_init(struct roc_npc *roc_npc) +{ + uint8_t *mem = NULL, *nix_mem = NULL, *npc_mem = NULL; + struct nix *nix = roc_nix_to_nix_priv(roc_npc->roc_nix); + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + uint32_t bmap_sz; + int rc = 0, idx; + size_t sz; + + PLT_STATIC_ASSERT(sizeof(struct npc) <= ROC_NPC_MEM_SZ); + + memset(npc, 0, sizeof(*npc)); + npc->mbox = (&nix->dev)->mbox; + 
roc_npc->channel = nix->rx_chan_base; + roc_npc->pf_func = (&nix->dev)->pf_func; + npc->channel = roc_npc->channel; + npc->pf_func = roc_npc->pf_func; + npc->flow_max_priority = roc_npc->flow_max_priority; + npc->switch_header_type = roc_npc->switch_header_type; + npc->flow_prealloc_size = roc_npc->flow_prealloc_size; + + if (npc->mbox == NULL) + return NPC_ERR_PARAM; + + rc = npc_mcam_fetch_kex_cfg(npc); + if (rc) + goto done; + + roc_npc->kex_capability = npc_get_kex_capability(npc); + roc_npc->rx_parse_nibble = npc->keyx_supp_nmask[NPC_MCAM_RX]; + + npc->mark_actions = 0; + + npc->mcam_entries = npc_mcam_tot_entries() >> npc->keyw[NPC_MCAM_RX]; + + /* Free, free_rev, live and live_rev entries */ + bmap_sz = plt_bitmap_get_memory_footprint(npc->mcam_entries); + mem = plt_zmalloc(4 * bmap_sz * npc->flow_max_priority, 0); + if (mem == NULL) { + plt_err("Bmap alloc failed"); + rc = NPC_ERR_NO_MEM; + return rc; + } + npc_mem = mem; /* Record now so error paths below free it */ + + sz = npc->flow_max_priority * sizeof(struct npc_mcam_ents_info); + npc->flow_entry_info = plt_zmalloc(sz, 0); + if (npc->flow_entry_info == NULL) { + plt_err("flow_entry_info alloc failed"); + rc = NPC_ERR_NO_MEM; + goto done; + } + + sz = npc->flow_max_priority * sizeof(struct plt_bitmap *); + npc->free_entries = plt_zmalloc(sz, 0); + if (npc->free_entries == NULL) { + plt_err("free_entries alloc failed"); + rc = NPC_ERR_NO_MEM; + goto done; + } + + sz = npc->flow_max_priority * sizeof(struct plt_bitmap *); + npc->free_entries_rev = plt_zmalloc(sz, 0); + if (npc->free_entries_rev == NULL) { + plt_err("free_entries_rev alloc failed"); + rc = NPC_ERR_NO_MEM; + goto done; + } + + sz = npc->flow_max_priority * sizeof(struct plt_bitmap *); + npc->live_entries = plt_zmalloc(sz, 0); + if (npc->live_entries == NULL) { + plt_err("live_entries alloc failed"); + rc = NPC_ERR_NO_MEM; + goto done; + } + + sz = npc->flow_max_priority * sizeof(struct plt_bitmap *); + npc->live_entries_rev = plt_zmalloc(sz, 0); + if (npc->live_entries_rev == NULL) { + 
plt_err("live_entries_rev alloc failed"); + rc = NPC_ERR_NO_MEM; + goto done; + } + + sz = npc->flow_max_priority * sizeof(struct npc_flow_list); + npc->flow_list = plt_zmalloc(sz, 0); + if (npc->flow_list == NULL) { + plt_err("flow_list alloc failed"); + rc = NPC_ERR_NO_MEM; + goto done; + } + + npc_mem = mem; + for (idx = 0; idx < npc->flow_max_priority; idx++) { + TAILQ_INIT(&npc->flow_list[idx]); + + npc->free_entries[idx] = + plt_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->free_entries_rev[idx] = + plt_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->live_entries[idx] = + plt_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->live_entries_rev[idx] = + plt_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->flow_entry_info[idx].free_ent = 0; + npc->flow_entry_info[idx].live_ent = 0; + npc->flow_entry_info[idx].max_id = 0; + npc->flow_entry_info[idx].min_id = ~(0); + } + + npc->rss_grps = NPC_RSS_GRPS; + + bmap_sz = plt_bitmap_get_memory_footprint(npc->rss_grps); + nix_mem = plt_zmalloc(bmap_sz, 0); + if (nix_mem == NULL) { + plt_err("Bmap alloc failed"); + rc = NPC_ERR_NO_MEM; + goto done; + } + + npc->rss_grp_entries = plt_bitmap_init(npc->rss_grps, nix_mem, bmap_sz); + + if (!npc->rss_grp_entries) { + plt_err("bitmap init failed"); + rc = NPC_ERR_NO_MEM; + goto done; + } + + /* Group 0 will be used for RSS, + * 1 -7 will be used for npc_flow RSS action + */ + plt_bitmap_set(npc->rss_grp_entries, 0); + + return rc; + +done: + if (npc->flow_list) + plt_free(npc->flow_list); + if (npc->live_entries_rev) + plt_free(npc->live_entries_rev); + if (npc->live_entries) + plt_free(npc->live_entries); + if (npc->free_entries_rev) + plt_free(npc->free_entries_rev); + if (npc->free_entries) + plt_free(npc->free_entries); + if (npc->flow_entry_info) + plt_free(npc->flow_entry_info); + if (npc_mem) + plt_free(npc_mem); + return rc; +} + +int +roc_npc_fini(struct roc_npc *roc_npc) 
+{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + int rc; + + rc = npc_flow_free_all_resources(npc); + if (rc) { + plt_err("Error when deleting NPC MCAM entries, counters"); + return rc; + } + + if (npc->flow_list) { + plt_free(npc->flow_list); + npc->flow_list = NULL; + } + + if (npc->live_entries_rev) { + plt_free(npc->live_entries_rev); + npc->live_entries_rev = NULL; + } + + if (npc->live_entries) { + plt_free(npc->live_entries); + npc->live_entries = NULL; + } + + if (npc->free_entries_rev) { + plt_free(npc->free_entries_rev); + npc->free_entries_rev = NULL; + } + + if (npc->free_entries) { + plt_free(npc->free_entries); + npc->free_entries = NULL; + } + + if (npc->flow_entry_info) { + plt_free(npc->flow_entry_info); + npc->flow_entry_info = NULL; + } + + return 0; +} + +static int +npc_parse_actions(struct npc *npc, const struct roc_npc_attr *attr, + const struct roc_npc_action actions[], + struct roc_npc_flow *flow) +{ + const struct roc_npc_action_mark *act_mark; + const struct roc_npc_action_queue *act_q; + const struct roc_npc_action_vf *vf_act; + int sel_act, req_act = 0; + uint16_t pf_func, vf_id; + int errcode = 0; + int mark = 0; + int rq = 0; + + /* Initialize actions */ + flow->ctr_id = NPC_COUNTER_NONE; + pf_func = npc->pf_func; + + for (; actions->type != ROC_NPC_ACTION_TYPE_END; actions++) { + switch (actions->type) { + case ROC_NPC_ACTION_TYPE_VOID: + break; + case ROC_NPC_ACTION_TYPE_MARK: + act_mark = (const struct roc_npc_action_mark *) + actions->conf; + if (act_mark->id > (NPC_FLOW_FLAG_VAL - 2)) { + plt_err("mark value must be < 0xfffe"); + goto err_exit; + } + mark = act_mark->id + 1; + req_act |= ROC_NPC_ACTION_TYPE_MARK; + npc->mark_actions += 1; + break; + + case ROC_NPC_ACTION_TYPE_FLAG: + mark = NPC_FLOW_FLAG_VAL; + req_act |= ROC_NPC_ACTION_TYPE_FLAG; + npc->mark_actions += 1; + break; + + case ROC_NPC_ACTION_TYPE_COUNT: + /* Indicates, need a counter */ + flow->ctr_id = 1; + req_act |= ROC_NPC_ACTION_TYPE_COUNT; + break; + + 
case ROC_NPC_ACTION_TYPE_DROP: + req_act |= ROC_NPC_ACTION_TYPE_DROP; + break; + + case ROC_NPC_ACTION_TYPE_PF: + req_act |= ROC_NPC_ACTION_TYPE_PF; + pf_func &= (0xfc00); + break; + + case ROC_NPC_ACTION_TYPE_VF: + vf_act = + (const struct roc_npc_action_vf *)actions->conf; + req_act |= ROC_NPC_ACTION_TYPE_VF; + vf_id = vf_act->id & RVU_PFVF_FUNC_MASK; + pf_func &= (0xfc00); + pf_func = (pf_func | (vf_id + 1)); + break; + + case ROC_NPC_ACTION_TYPE_QUEUE: + act_q = (const struct roc_npc_action_queue *) + actions->conf; + rq = act_q->index; + req_act |= ROC_NPC_ACTION_TYPE_QUEUE; + break; + + case ROC_NPC_ACTION_TYPE_RSS: + req_act |= ROC_NPC_ACTION_TYPE_RSS; + break; + + case ROC_NPC_ACTION_TYPE_SEC: + /* Assumes user has already configured security + * session for this flow. Associated conf is + * opaque. When security is implemented, + * we need to verify that for specified security + * session: + * action_type == + * NPC_SECURITY_ACTION_TYPE_INLINE_PROTOCOL && + * session_protocol == + * NPC_SECURITY_PROTOCOL_IPSEC + * + * RSS is not supported with inline ipsec. Get the + * rq from associated conf, or make + * ROC_NPC_ACTION_TYPE_QUEUE compulsory with this + * action. + * Currently, rq = 0 is assumed. + */ + req_act |= ROC_NPC_ACTION_TYPE_SEC; + rq = 0; + break; + default: + errcode = NPC_ERR_ACTION_NOTSUP; + goto err_exit; + } + } + + /* Check if actions specified are compatible */ + if (attr->egress) { + /* Only DROP/COUNT is supported */ + if (!(req_act & ROC_NPC_ACTION_TYPE_DROP)) { + errcode = NPC_ERR_ACTION_NOTSUP; + goto err_exit; + } else if (req_act & ~(ROC_NPC_ACTION_TYPE_DROP | + ROC_NPC_ACTION_TYPE_COUNT)) { + errcode = NPC_ERR_ACTION_NOTSUP; + goto err_exit; + } + flow->npc_action = NIX_TX_ACTIONOP_DROP; + goto set_pf_func; + } + + /* We have already verified the attr, this is ingress. + * - Exactly one terminating action is supported + * - Exactly one of MARK or FLAG is supported + * - If terminating action is DROP, only count is valid. 
+ */ + sel_act = req_act & NPC_ACTION_TERM; + if ((sel_act & (sel_act - 1)) != 0) { + errcode = NPC_ERR_ACTION_NOTSUP; + goto err_exit; + } + + if (req_act & ROC_NPC_ACTION_TYPE_DROP) { + sel_act = req_act & ~ROC_NPC_ACTION_TYPE_COUNT; + if ((sel_act & (sel_act - 1)) != 0) { + errcode = NPC_ERR_ACTION_NOTSUP; + goto err_exit; + } + } + + if ((req_act & (ROC_NPC_ACTION_TYPE_FLAG | ROC_NPC_ACTION_TYPE_MARK)) == + (ROC_NPC_ACTION_TYPE_FLAG | ROC_NPC_ACTION_TYPE_MARK)) { + errcode = NPC_ERR_ACTION_NOTSUP; + goto err_exit; + } + + /* Set NIX_RX_ACTIONOP */ + if (req_act & (ROC_NPC_ACTION_TYPE_PF | ROC_NPC_ACTION_TYPE_VF)) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + if (req_act & ROC_NPC_ACTION_TYPE_QUEUE) + flow->npc_action |= (uint64_t)rq << 20; + } else if (req_act & ROC_NPC_ACTION_TYPE_DROP) { + flow->npc_action = NIX_RX_ACTIONOP_DROP; + } else if (req_act & ROC_NPC_ACTION_TYPE_QUEUE) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + flow->npc_action |= (uint64_t)rq << 20; + } else if (req_act & ROC_NPC_ACTION_TYPE_RSS) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + } else if (req_act & ROC_NPC_ACTION_TYPE_SEC) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST_IPSEC; + flow->npc_action |= (uint64_t)rq << 20; + } else if (req_act & + (ROC_NPC_ACTION_TYPE_FLAG | ROC_NPC_ACTION_TYPE_MARK)) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + } else if (req_act & ROC_NPC_ACTION_TYPE_COUNT) { + /* Keep ROC_NPC_ACTION_TYPE_COUNT_ACT always at the end + * This is default action, when user specify only + * COUNT ACTION + */ + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + } else { + /* Should never reach here */ + errcode = NPC_ERR_ACTION_NOTSUP; + goto err_exit; + } + + if (mark) + flow->npc_action |= (uint64_t)mark << 40; + +set_pf_func: + /* Ideally AF must ensure that correct pf_func is set */ + flow->npc_action |= (uint64_t)pf_func << 4; + + return 0; + +err_exit: + return errcode; +} + +typedef int (*npc_parse_stage_func_t)(struct npc_parse_state *pst); + +static int 
+npc_parse_pattern(struct npc *npc, const struct roc_npc_item_info pattern[], + struct roc_npc_flow *flow, struct npc_parse_state *pst) +{ + npc_parse_stage_func_t parse_stage_funcs[] = { + npc_parse_meta_items, npc_parse_cpt_hdr, npc_parse_higig2_hdr, + npc_parse_la, npc_parse_lb, npc_parse_lc, + npc_parse_ld, npc_parse_le, npc_parse_lf, + npc_parse_lg, npc_parse_lh, + }; + uint8_t layer = 0; + int key_offset; + int rc; + + if (pattern == NULL) + return NPC_ERR_PARAM; + + memset(pst, 0, sizeof(*pst)); + pst->npc = npc; + pst->flow = flow; + + /* Use integral byte offset */ + key_offset = pst->npc->keyx_len[flow->nix_intf]; + key_offset = (key_offset + 7) / 8; + + /* Location where LDATA would begin */ + pst->mcam_data = (uint8_t *)flow->mcam_data; + pst->mcam_mask = (uint8_t *)flow->mcam_mask; + + while (pattern->type != ROC_NPC_ITEM_TYPE_END && + layer < PLT_DIM(parse_stage_funcs)) { + /* Skip place-holders */ + pattern = npc_parse_skip_void_and_any_items(pattern); + + pst->pattern = pattern; + rc = parse_stage_funcs[layer](pst); + if (rc != 0) + return rc; + + layer++; + + /* + * Parse stage function sets pst->pattern to + * 1 past the last item it consumed. + */ + pattern = pst->pattern; + + if (pst->terminate) + break; + } + + /* Skip trailing place-holders */ + pattern = npc_parse_skip_void_and_any_items(pattern); + + /* Are there more items than what we can handle? 
*/ + if (pattern->type != ROC_NPC_ITEM_TYPE_END) + return NPC_ERR_PATTERN_NOTSUP; + + return 0; +} + +static int +npc_parse_attr(struct npc *npc, const struct roc_npc_attr *attr, + struct roc_npc_flow *flow) +{ + if (attr == NULL) + return NPC_ERR_PARAM; + else if (attr->priority >= npc->flow_max_priority) + return NPC_ERR_PARAM; + else if ((!attr->egress && !attr->ingress) || + (attr->egress && attr->ingress)) + return NPC_ERR_PARAM; + + if (attr->ingress) + flow->nix_intf = ROC_NPC_INTF_RX; + else + flow->nix_intf = ROC_NPC_INTF_TX; + + flow->priority = attr->priority; + return 0; +} + +static int +npc_parse_rule(struct npc *npc, const struct roc_npc_attr *attr, + const struct roc_npc_item_info pattern[], + const struct roc_npc_action actions[], struct roc_npc_flow *flow, + struct npc_parse_state *pst) +{ + int err; + + /* Check attr */ + err = npc_parse_attr(npc, attr, flow); + if (err) + return err; + + /* Check pattern */ + err = npc_parse_pattern(npc, pattern, flow, pst); + if (err) + return err; + + /* Check action */ + err = npc_parse_actions(npc, attr, actions, flow); + if (err) + return err; + return 0; +} + +int +roc_npc_flow_parse(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, + const struct roc_npc_item_info pattern[], + const struct roc_npc_action actions[], + struct roc_npc_flow *flow) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + struct npc_parse_state parse_state = {0}; + int rc; + + rc = npc_parse_rule(npc, attr, pattern, actions, flow, &parse_state); + if (rc) + return rc; + + parse_state.is_vf = !roc_nix_is_pf(roc_npc->roc_nix); + + return npc_program_mcam(npc, &parse_state, 0); +} + +struct roc_npc_flow * +roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, + const struct roc_npc_item_info pattern[], + const struct roc_npc_action actions[], int *errcode) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + struct roc_npc_flow *flow, *flow_iter; + struct npc_parse_state parse_state; + struct 
npc_flow_list *list; + int rc; + + flow = plt_zmalloc(sizeof(*flow), 0); + if (flow == NULL) { + *errcode = NPC_ERR_NO_MEM; + return NULL; + } + memset(flow, 0, sizeof(*flow)); + + rc = npc_parse_rule(npc, attr, pattern, actions, flow, &parse_state); + if (rc != 0) { + *errcode = rc; + goto err_exit; + } + + parse_state.is_vf = !roc_nix_is_pf(roc_npc->roc_nix); + + rc = npc_program_mcam(npc, &parse_state, 1); + if (rc != 0) { + *errcode = rc; + goto err_exit; + } + + list = &npc->flow_list[flow->priority]; + /* List in ascending order of mcam entries */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id > flow->mcam_id) { + TAILQ_INSERT_BEFORE(flow_iter, flow, next); + return flow; + } + } + + TAILQ_INSERT_TAIL(list, flow, next); + return flow; + +err_exit: + plt_free(flow); + return NULL; +} + +int +roc_npc_flow_destroy(struct roc_npc *roc_npc, struct roc_npc_flow *flow) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + struct plt_bitmap *bmap; + uint16_t match_id; + int rc; + + match_id = (flow->npc_action >> NPC_RX_ACT_MATCH_OFFSET) & + NPC_RX_ACT_MATCH_MASK; + + if (match_id && match_id < NPC_ACTION_FLAG_DEFAULT) { + if (npc->mark_actions == 0) + return NPC_ERR_PARAM; + } + + rc = npc_mcam_free_entry(npc, flow->mcam_id); + if (rc != 0) + return rc; + + TAILQ_REMOVE(&npc->flow_list[flow->priority], flow, next); + + bmap = npc->live_entries[flow->priority]; + plt_bitmap_clear(bmap, flow->mcam_id); + + plt_free(flow); + return 0; +} diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h index c841240..996739e 100644 --- a/drivers/common/cnxk/roc_npc.h +++ b/drivers/common/cnxk/roc_npc.h @@ -126,4 +126,44 @@ struct roc_npc { uint8_t reserved[ROC_NPC_MEM_SZ]; } __plt_cache_aligned; +int __roc_api roc_npc_init(struct roc_npc *roc_npc); +int __roc_api roc_npc_fini(struct roc_npc *roc_npc); +const char *__roc_api roc_npc_profile_name_get(struct roc_npc *roc_npc); + +struct roc_npc_flow *__roc_api +roc_npc_flow_create(struct 
roc_npc *roc_npc, const struct roc_npc_attr *attr, + const struct roc_npc_item_info pattern[], + const struct roc_npc_action actions[], int *errcode); +int __roc_api roc_npc_flow_destroy(struct roc_npc *roc_npc, + struct roc_npc_flow *flow); +int __roc_api roc_npc_mcam_free_entry(struct roc_npc *roc_npc, uint32_t entry); +int __roc_api roc_npc_mcam_alloc_entry(struct roc_npc *roc_npc, + struct roc_npc_flow *mcam, + struct roc_npc_flow *ref_mcam, int prio, + int *resp_count); +int __roc_api roc_npc_mcam_alloc_entries(struct roc_npc *roc_npc, int ref_entry, + int *alloc_entry, int req_count, + int priority, int *resp_count); +int __roc_api roc_npc_mcam_ena_dis_entry(struct roc_npc *roc_npc, + struct roc_npc_flow *mcam, + bool enable); +int __roc_api roc_npc_mcam_write_entry(struct roc_npc *roc_npc, + struct roc_npc_flow *mcam); +int __roc_api roc_npc_flow_parse(struct roc_npc *roc_npc, + const struct roc_npc_attr *attr, + const struct roc_npc_item_info pattern[], + const struct roc_npc_action actions[], + struct roc_npc_flow *flow); +int __roc_api roc_npc_get_low_priority_mcam(struct roc_npc *roc_npc); + +int __roc_api roc_npc_mcam_free_counter(struct roc_npc *roc_npc, + uint16_t ctr_id); + +int __roc_api roc_npc_mcam_read_counter(struct roc_npc *roc_npc, + uint32_t ctr_id, uint64_t *count); +int __roc_api roc_npc_mcam_clear_counter(struct roc_npc *roc_npc, + uint32_t ctr_id); + +int __roc_api roc_npc_mcam_free_all_resources(struct roc_npc *roc_npc); + #endif /* _ROC_NPC_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 809cdae..3b9cf34 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -156,6 +156,22 @@ INTERNAL { roc_npa_pool_destroy; roc_npa_pool_op_pc_reset; roc_npa_pool_range_update_check; + roc_npc_fini; + roc_npc_flow_create; + roc_npc_flow_destroy; + roc_npc_flow_parse; + roc_npc_get_low_priority_mcam; + roc_npc_init; + roc_npc_mcam_alloc_entries; + roc_npc_mcam_alloc_entry; + 
roc_npc_mcam_clear_counter; + roc_npc_mcam_ena_dis_entry; + roc_npc_mcam_free_all_resources; + roc_npc_mcam_free_counter; + roc_npc_mcam_free_entry; + roc_npc_mcam_write_entry; + roc_npc_mcam_read_counter; + roc_npc_profile_name_get; roc_plt_init; local: *;

From patchwork Thu Apr 1 12:38:10 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90418
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:10 +0530
Message-ID: <20210401123817.14348-46-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 45/52] common/cnxk: add base sso device support

From: Pavan Nikhilesh

Add SSO device init and fini which attach SSO LF resources to the RVU PF/VF and SSO HWS and HWGRP LFs alloc, free.
Signed-off-by: Pavan Nikhilesh --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_api.h | 3 + drivers/common/cnxk/roc_idev.c | 27 ++++ drivers/common/cnxk/roc_idev_priv.h | 5 + drivers/common/cnxk/roc_nix.c | 1 + drivers/common/cnxk/roc_platform.c | 1 + drivers/common/cnxk/roc_platform.h | 2 + drivers/common/cnxk/roc_priv.h | 3 + drivers/common/cnxk/roc_sso.c | 273 ++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_sso.h | 34 +++++ drivers/common/cnxk/roc_sso_priv.h | 33 +++++ drivers/common/cnxk/roc_utils.c | 1 + drivers/common/cnxk/version.map | 5 + 13 files changed, 389 insertions(+) create mode 100644 drivers/common/cnxk/roc_sso.c create mode 100644 drivers/common/cnxk/roc_sso.h create mode 100644 drivers/common/cnxk/roc_sso_priv.h diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index f12e344..79c8eaa 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -39,5 +39,6 @@ sources = files('roc_dev.c', 'roc_npc_parse.c', 'roc_npc_utils.c', 'roc_platform.c', + 'roc_sso.c', 'roc_utils.c') includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h index 8dc8eed..b7fc3b7 100644 --- a/drivers/common/cnxk/roc_api.h +++ b/drivers/common/cnxk/roc_api.h @@ -88,6 +88,9 @@ /* NIX */ #include "roc_nix.h" +/* SSO */ +#include "roc_sso.h" + /* Utils */ #include "roc_utils.h" diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c index a92ac6a..63cc040 100644 --- a/drivers/common/cnxk/roc_idev.c +++ b/drivers/common/cnxk/roc_idev.c @@ -29,6 +29,7 @@ idev_get_cfg(void) void idev_set_defaults(struct idev_cfg *idev) { + idev->sso_pf_func = 0; idev->npa = NULL; idev->npa_pf_func = 0; idev->max_pools = 128; @@ -39,6 +40,32 @@ idev_set_defaults(struct idev_cfg *idev) } uint16_t +idev_sso_pffunc_get(void) +{ + struct idev_cfg *idev; + uint16_t sso_pf_func; + + idev = idev_get_cfg(); + sso_pf_func = 0; + if (idev != 
NULL) + sso_pf_func = __atomic_load_n(&idev->sso_pf_func, + __ATOMIC_ACQUIRE); + + return sso_pf_func; +} + +void +idev_sso_pffunc_set(uint16_t sso_pf_func) +{ + struct idev_cfg *idev; + + idev = idev_get_cfg(); + if (idev != NULL) + __atomic_store_n(&idev->sso_pf_func, sso_pf_func, + __ATOMIC_RELEASE); +} + +uint16_t idev_npa_pffunc_get(void) { struct idev_cfg *idev; diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h index 36cdb33..ff10a90 100644 --- a/drivers/common/cnxk/roc_idev_priv.h +++ b/drivers/common/cnxk/roc_idev_priv.h @@ -8,6 +8,7 @@ /* Intra device related functions */ struct npa_lf; struct idev_cfg { + uint16_t sso_pf_func; uint16_t npa_pf_func; struct npa_lf *npa; uint16_t npa_refcnt; @@ -28,6 +29,10 @@ uint32_t idev_npa_maxpools_get(void); void idev_npa_maxpools_set(uint32_t max_pools); uint16_t idev_npa_lf_active(struct dev *dev); +/* idev sso */ +void idev_sso_pffunc_set(uint16_t sso_pf_func); +uint16_t idev_sso_pffunc_get(void); + /* idev lmt */ uint16_t idev_lmt_pffunc_get(void); diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c index d6b288f..23d508b 100644 --- a/drivers/common/cnxk/roc_nix.c +++ b/drivers/common/cnxk/roc_nix.c @@ -143,6 +143,7 @@ roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, uint32_t nb_txq, req->rss_sz = nix->reta_sz; req->rss_grps = ROC_NIX_RSS_GRPS; req->npa_func = idev_npa_pffunc_get(); + req->sso_func = idev_sso_pffunc_get(); req->rx_cfg = rx_cfg; if (!roc_nix->rss_tag_as_xor) diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index e186f61..408fe3d 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -33,4 +33,5 @@ RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_nix, pmd.net.cnxk, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_npc, pmd.net.cnxk.flow, NOTICE); 
+RTE_LOG_REGISTER(cnxk_logtype_sso, pmd.event.cnxk, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_tm, pmd.net.cnxk.tm, NOTICE); diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index ed8faba..9ea7b1a 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -137,6 +137,7 @@ extern int cnxk_logtype_mbox; extern int cnxk_logtype_npa; extern int cnxk_logtype_nix; extern int cnxk_logtype_npc; +extern int cnxk_logtype_sso; extern int cnxk_logtype_tm; #define plt_err(fmt, args...) \ @@ -158,6 +159,7 @@ extern int cnxk_logtype_tm; #define plt_npa_dbg(fmt, ...) plt_dbg(npa, fmt, ##__VA_ARGS__) #define plt_nix_dbg(fmt, ...) plt_dbg(nix, fmt, ##__VA_ARGS__) #define plt_npc_dbg(fmt, ...) plt_dbg(npc, fmt, ##__VA_ARGS__) +#define plt_sso_dbg(fmt, ...) plt_dbg(sso, fmt, ##__VA_ARGS__) #define plt_tm_dbg(fmt, ...) plt_dbg(tm, fmt, ##__VA_ARGS__) #ifdef __cplusplus diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h index 2dcd9e7..dd9d87a 100644 --- a/drivers/common/cnxk/roc_priv.h +++ b/drivers/common/cnxk/roc_priv.h @@ -26,4 +26,7 @@ /* NPC */ #include "roc_npc_priv.h" +/* SSO */ +#include "roc_sso_priv.h" + #endif /* _ROC_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c new file mode 100644 index 0000000..6875b08 --- /dev/null +++ b/drivers/common/cnxk/roc_sso.c @@ -0,0 +1,273 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "roc_api.h" +#include "roc_priv.h" + +/* Private functions. 
*/ +static int +sso_lf_alloc(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf, + void **rsp) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + int rc = -ENOSPC; + + switch (lf_type) { + case SSO_LF_TYPE_HWS: { + struct ssow_lf_alloc_req *req; + + req = mbox_alloc_msg_ssow_lf_alloc(dev->mbox); + if (req == NULL) + return rc; + req->hws = nb_lf; + } break; + case SSO_LF_TYPE_HWGRP: { + struct sso_lf_alloc_req *req; + + req = mbox_alloc_msg_sso_lf_alloc(dev->mbox); + if (req == NULL) + return rc; + req->hwgrps = nb_lf; + } break; + default: + break; + } + + rc = mbox_process_msg(dev->mbox, rsp); + if (rc < 0) + return rc; + + return 0; +} + +static int +sso_lf_free(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + int rc = -ENOSPC; + + switch (lf_type) { + case SSO_LF_TYPE_HWS: { + struct ssow_lf_free_req *req; + + req = mbox_alloc_msg_ssow_lf_free(dev->mbox); + if (req == NULL) + return rc; + req->hws = nb_lf; + } break; + case SSO_LF_TYPE_HWGRP: { + struct sso_lf_free_req *req; + + req = mbox_alloc_msg_sso_lf_free(dev->mbox); + if (req == NULL) + return rc; + req->hwgrps = nb_lf; + } break; + default: + break; + } + + rc = mbox_process(dev->mbox); + if (rc < 0) + return rc; + + return 0; +} + +static int +sso_rsrc_attach(struct roc_sso *roc_sso, enum sso_lf_type lf_type, + uint16_t nb_lf) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct rsrc_attach_req *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_attach_resources(dev->mbox); + if (req == NULL) + return rc; + switch (lf_type) { + case SSO_LF_TYPE_HWS: + req->ssow = nb_lf; + break; + case SSO_LF_TYPE_HWGRP: + req->sso = nb_lf; + break; + default: + return SSO_ERR_PARAM; + } + + req->modify = true; + if (mbox_process(dev->mbox) < 0) + return -EIO; + + return 0; +} + +static int +sso_rsrc_detach(struct roc_sso *roc_sso, enum sso_lf_type lf_type) +{ + struct dev *dev = 
&roc_sso_to_sso_priv(roc_sso)->dev; + struct rsrc_detach_req *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_detach_resources(dev->mbox); + if (req == NULL) + return rc; + switch (lf_type) { + case SSO_LF_TYPE_HWS: + req->ssow = true; + break; + case SSO_LF_TYPE_HWGRP: + req->sso = true; + break; + default: + return SSO_ERR_PARAM; + } + + req->partial = true; + if (mbox_process(dev->mbox) < 0) + return -EIO; + + return 0; +} + +static int +sso_rsrc_get(struct roc_sso *roc_sso) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct free_rsrcs_rsp *rsrc_cnt; + int rc; + + mbox_alloc_msg_free_rsrc_cnt(dev->mbox); + rc = mbox_process_msg(dev->mbox, (void **)&rsrc_cnt); + if (rc < 0) { + plt_err("Failed to get free resource count\n"); + return rc; + } + + roc_sso->max_hwgrp = rsrc_cnt->sso; + roc_sso->max_hws = rsrc_cnt->ssow; + + return 0; +} + +int +roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) +{ + struct sso_lf_alloc_rsp *rsp_hwgrp; + int rc; + + if (roc_sso->max_hwgrp < nb_hwgrp) + return -ENOENT; + if (roc_sso->max_hws < nb_hws) + return -ENOENT; + + rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWS, nb_hws); + if (rc < 0) { + plt_err("Unable to attach SSO HWS LFs"); + return rc; + } + + rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp); + if (rc < 0) { + plt_err("Unable to attach SSO HWGRP LFs"); + goto hwgrp_atch_fail; + } + + rc = sso_lf_alloc(roc_sso, SSO_LF_TYPE_HWS, nb_hws, NULL); + if (rc < 0) { + plt_err("Unable to alloc SSO HWS LFs"); + goto hws_alloc_fail; + } + + rc = sso_lf_alloc(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp, + (void **)&rsp_hwgrp); + if (rc < 0) { + plt_err("Unable to alloc SSO HWGRP Lfs"); + goto hwgrp_alloc_fail; + } + + roc_sso->xaq_buf_size = rsp_hwgrp->xaq_buf_size; + roc_sso->xae_waes = rsp_hwgrp->xaq_wq_entries; + roc_sso->iue = rsp_hwgrp->in_unit_entries; + + roc_sso->nb_hwgrp = nb_hwgrp; + roc_sso->nb_hws = nb_hws; + + return 0; +hwgrp_alloc_fail: + sso_lf_free(roc_sso, 
SSO_LF_TYPE_HWS, nb_hws); +hws_alloc_fail: + sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP); +hwgrp_atch_fail: + sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS); + return rc; +} + +void +roc_sso_rsrc_fini(struct roc_sso *roc_sso) +{ + if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp) + return; + + sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, roc_sso->nb_hws); + sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp); + + sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS); + sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP); + + roc_sso->nb_hwgrp = 0; + roc_sso->nb_hws = 0; +} + +int +roc_sso_dev_init(struct roc_sso *roc_sso) +{ + struct plt_pci_device *pci_dev; + struct sso *sso; + int rc; + + if (roc_sso == NULL || roc_sso->pci_dev == NULL) + return SSO_ERR_PARAM; + + PLT_STATIC_ASSERT(sizeof(struct sso) <= ROC_SSO_MEM_SZ); + sso = roc_sso_to_sso_priv(roc_sso); + memset(sso, 0, sizeof(*sso)); + pci_dev = roc_sso->pci_dev; + + rc = dev_init(&sso->dev, pci_dev); + if (rc < 0) { + plt_err("Failed to init roc device"); + goto fail; + } + + rc = sso_rsrc_get(roc_sso); + if (rc < 0) { + plt_err("Failed to get SSO resources"); + goto rsrc_fail; + } + rc = -ENOMEM; + + idev_sso_pffunc_set(sso->dev.pf_func); + sso->pci_dev = pci_dev; + sso->dev.drv_inited = true; + roc_sso->lmt_base = sso->dev.lmt_base; + + return 0; +rsrc_fail: + rc |= dev_fini(&sso->dev, pci_dev); +fail: + return rc; +} + +int +roc_sso_dev_fini(struct roc_sso *roc_sso) +{ + struct sso *sso; + + sso = roc_sso_to_sso_priv(roc_sso); + sso->dev.drv_inited = false; + + return dev_fini(&sso->dev, sso->pci_dev); +} diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h new file mode 100644 index 0000000..4f37f14 --- /dev/null +++ b/drivers/common/cnxk/roc_sso.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_SSO_H_ +#define _ROC_SSO_H_ + +struct roc_sso { + struct plt_pci_device *pci_dev; + /* Public data. 
*/ + uint16_t max_hwgrp; + uint16_t max_hws; + uint16_t nb_hwgrp; + uint8_t nb_hws; + uintptr_t lmt_base; + /* HW Const. */ + uint32_t xae_waes; + uint32_t xaq_buf_size; + uint32_t iue; + /* Private data. */ +#define ROC_SSO_MEM_SZ (16 * 1024) + uint8_t reserved[ROC_SSO_MEM_SZ] __plt_cache_aligned; +} __plt_cache_aligned; + +/* SSO device initialization */ +int __roc_api roc_sso_dev_init(struct roc_sso *roc_sso); +int __roc_api roc_sso_dev_fini(struct roc_sso *roc_sso); + +/* SSO device configuration */ +int __roc_api roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, + uint16_t nb_hwgrp); +void __roc_api roc_sso_rsrc_fini(struct roc_sso *roc_sso); + +#endif /* _ROC_SSOW_H_ */ diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h new file mode 100644 index 0000000..1ab3f5b --- /dev/null +++ b/drivers/common/cnxk/roc_sso_priv.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_SSO_PRIV_H_ +#define _ROC_SSO_PRIV_H_ + +struct sso_rsrc { + uint16_t rsrc_id; + uint64_t base; +}; + +struct sso { + struct plt_pci_device *pci_dev; + struct dev dev; +} __plt_cache_aligned; + +enum sso_err_status { + SSO_ERR_PARAM = -4096, +}; + +enum sso_lf_type { + SSO_LF_TYPE_HWS, + SSO_LF_TYPE_HWGRP, +}; + +static inline struct sso * +roc_sso_to_sso_priv(struct roc_sso *roc_sso) +{ + return (struct sso *)&roc_sso->reserved[0]; +} + +#endif /* _ROC_SSO_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c index 236b4ae..542252f 100644 --- a/drivers/common/cnxk/roc_utils.c +++ b/drivers/common/cnxk/roc_utils.c @@ -15,6 +15,7 @@ roc_error_msg_get(int errorcode) case NIX_ERR_PARAM: case NPA_ERR_PARAM: case NPC_ERR_PARAM: + case SSO_ERR_PARAM: case UTIL_ERR_PARAM: err_msg = "Invalid parameter"; break; diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 3b9cf34..aabe344 100644 --- a/drivers/common/cnxk/version.map +++ 
b/drivers/common/cnxk/version.map @@ -6,6 +6,7 @@ INTERNAL { cnxk_logtype_nix; cnxk_logtype_npa; cnxk_logtype_npc; + cnxk_logtype_sso; cnxk_logtype_tm; roc_clk_freq_get; roc_error_msg_get; @@ -173,6 +174,10 @@ INTERNAL { roc_npc_mcam_read_counter; roc_npc_profile_name_get; roc_plt_init; + roc_sso_dev_fini; + roc_sso_dev_init; + roc_sso_rsrc_fini; + roc_sso_rsrc_init; local: *; };

From patchwork Thu Apr 1 12:38:11 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90419
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:11 +0530
Message-ID: <20210401123817.14348-47-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 46/52] common/cnxk: add sso hws interface

From: Pavan Nikhilesh

Add SSO HWS interface for setting/unsetting links, retrieving base address and nanoseconds to getwork timeout.
Signed-off-by: Pavan Nikhilesh --- drivers/common/cnxk/roc_sso.c | 128 ++++++++++++++++++++++++++++++++++++- drivers/common/cnxk/roc_sso.h | 6 ++ drivers/common/cnxk/roc_sso_priv.h | 3 + drivers/common/cnxk/version.map | 4 ++ 4 files changed, 140 insertions(+), 1 deletion(-) diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c index 6875b08..ba9dc3b 100644 --- a/drivers/common/cnxk/roc_sso.c +++ b/drivers/common/cnxk/roc_sso.c @@ -152,6 +152,104 @@ sso_rsrc_get(struct roc_sso *roc_sso) return 0; } +static void +sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, + uint16_t hwgrp[], uint16_t n, uint16_t enable) +{ + uint64_t reg; + int i, j, k; + + i = 0; + while (n) { + uint64_t mask[4] = { + 0x8000, + 0x8000, + 0x8000, + 0x8000, + }; + + k = n % 4; + k = k ? k : 4; + for (j = 0; j < k; j++) { + mask[j] = hwgrp[i + j] | enable << 14; + enable ? plt_bitmap_set(bmp, hwgrp[i + j]) : + plt_bitmap_clear(bmp, hwgrp[i + j]); + plt_sso_dbg("HWS %d Linked to HWGRP %d", hws, + hwgrp[i + j]); + } + + n -= j; + i += j; + reg = mask[0] | mask[1] << 16 | mask[2] << 32 | mask[3] << 48; + plt_write64(reg, base + SSOW_LF_GWS_GRPMSK_CHG); + } +} + +/* Public Functions. */ +uintptr_t +roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + + return dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12); +} + +uint64_t +roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + uint64_t current_us, current_ns, new_ns; + uintptr_t base; + + base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20); + current_us = plt_read64(base + SSOW_LF_GWS_NW_TIM); + /* From HRM, table 14-19: + * The SSOW_LF_GWS_NW_TIM[NW_TIM] period is specified in n-1 notation. + */ + current_us += 1; + + /* From HRM, table 14-1: + * SSOW_LF_GWS_NW_TIM[NW_TIM] specifies the minimum timeout. 
The SSO + * hardware times out a GET_WORK request within 2 usec of the minimum + * timeout specified by SSOW_LF_GWS_NW_TIM[NW_TIM]. + */ + current_us += 2; + current_ns = current_us * 1E3; + new_ns = (ns - PLT_MIN(ns, current_ns)); + new_ns = !new_ns ? 1 : new_ns; + return (new_ns * plt_tsc_hz()) / 1E9; +} + +int +roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], + uint16_t nb_hwgrp) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso; + uintptr_t base; + + sso = roc_sso_to_sso_priv(roc_sso); + base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12); + sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 1); + + return nb_hwgrp; +} + +int +roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], + uint16_t nb_hwgrp) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso; + uintptr_t base; + + sso = roc_sso_to_sso_priv(roc_sso); + base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12); + sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 0); + + return nb_hwgrp; +} + int roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) { @@ -225,8 +323,10 @@ int roc_sso_dev_init(struct roc_sso *roc_sso) { struct plt_pci_device *pci_dev; + uint32_t link_map_sz; struct sso *sso; - int rc; + void *link_mem; + int i, rc; if (roc_sso == NULL || roc_sso->pci_dev == NULL) return SSO_ERR_PARAM; @@ -249,12 +349,38 @@ roc_sso_dev_init(struct roc_sso *roc_sso) } rc = -ENOMEM; + sso->link_map = + plt_zmalloc(sizeof(struct plt_bitmap *) * roc_sso->max_hws, 0); + if (sso->link_map == NULL) { + plt_err("Failed to allocate memory for link_map array"); + goto rsrc_fail; + } + + link_map_sz = plt_bitmap_get_memory_footprint(roc_sso->max_hwgrp); + sso->link_map_mem = plt_zmalloc(link_map_sz * roc_sso->max_hws, 0); + if (sso->link_map_mem == NULL) { + plt_err("Failed to get link_map memory"); + goto rsrc_fail; + } + + link_mem = sso->link_map_mem; + 
for (i = 0; i < roc_sso->max_hws; i++) { + sso->link_map[i] = plt_bitmap_init(roc_sso->max_hwgrp, link_mem, + link_map_sz); + if (sso->link_map[i] == NULL) { + plt_err("Failed to allocate link map"); + goto link_mem_free; + } + link_mem = PLT_PTR_ADD(link_mem, link_map_sz); + } idev_sso_pffunc_set(sso->dev.pf_func); sso->pci_dev = pci_dev; sso->dev.drv_inited = true; roc_sso->lmt_base = sso->dev.lmt_base; return 0; +link_mem_free: + plt_free(sso->link_map_mem); rsrc_fail: rc |= dev_fini(&sso->dev, pci_dev); fail: diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h index 4f37f14..7236045 100644 --- a/drivers/common/cnxk/roc_sso.h +++ b/drivers/common/cnxk/roc_sso.h @@ -30,5 +30,11 @@ int __roc_api roc_sso_dev_fini(struct roc_sso *roc_sso); int __roc_api roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp); void __roc_api roc_sso_rsrc_fini(struct roc_sso *roc_sso); +uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns); +int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, + uint16_t hwgrp[], uint16_t nb_hwgrp); +int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, + uint16_t hwgrp[], uint16_t nb_hwgrp); +uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws); #endif /* _ROC_SSOW_H_ */ diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h index 1ab3f5b..ad35be1 100644 --- a/drivers/common/cnxk/roc_sso_priv.h +++ b/drivers/common/cnxk/roc_sso_priv.h @@ -13,6 +13,9 @@ struct sso_rsrc { struct sso { struct plt_pci_device *pci_dev; struct dev dev; + /* SSO link mapping. 
*/ + struct plt_bitmap **link_map; + void *link_map_mem; } __plt_cache_aligned; enum sso_err_status { diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index aabe344..2656e11 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -176,6 +176,10 @@ INTERNAL { roc_plt_init; roc_sso_dev_fini; roc_sso_dev_init; + roc_sso_hws_base_get; + roc_sso_hws_link; + roc_sso_hws_unlink; + roc_sso_ns_to_gw; roc_sso_rsrc_fini; roc_sso_rsrc_init;

From patchwork Thu Apr 1 12:38:12 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90420
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:12 +0530
Message-ID: <20210401123817.14348-48-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 47/52] common/cnxk: add sso hwgrp interface

From: Pavan Nikhilesh

Add an SSO HWGRP interface for configuring the XAQ pool and for setting the priority and internal HW buffer limits of each HWGRP.
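The QoS mailbox request in this patch scales each per-HWGRP limit by a percentage, where a value of 0 is shorthand for "use the full 100%". A minimal standalone sketch of that arithmetic follows; `qos_limit` and the sample pool size are illustrative helpers, not part of the driver:

```c
#include <stdint.h>
#include <stdio.h>

/* Mirrors the percentage math used by roc_sso_hwgrp_qos_config():
 * a percentage of 0 means "no limit requested" and is treated as 100%. */
static uint32_t
qos_limit(uint32_t total, uint8_t prcnt)
{
	return (total * (prcnt ? prcnt : 100)) / 100;
}
```

For example, with an XAQ pool of 2048 entries, a HWGRP asking for 25% gets a limit of 512, while a HWGRP that leaves the field at 0 gets the whole pool.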
Signed-off-by: Pavan Nikhilesh --- drivers/common/cnxk/roc_sso.c | 110 ++++++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_sso.h | 21 ++++++++ drivers/common/cnxk/version.map | 6 +++ 3 files changed, 137 insertions(+) diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c index ba9dc3b..f4c4e5b 100644 --- a/drivers/common/cnxk/roc_sso.c +++ b/drivers/common/cnxk/roc_sso.c @@ -194,6 +194,14 @@ roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws) return dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12); } +uintptr_t +roc_sso_hwgrp_base_get(struct roc_sso *roc_sso, uint16_t hwgrp) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + + return dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | hwgrp << 12); +} + uint64_t roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns) { @@ -251,6 +259,108 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint16_t hwgrp[], } int +roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso, uint8_t hws, + uint16_t hwgrp) +{ + struct sso *sso; + + sso = roc_sso_to_sso_priv(roc_sso); + return plt_bitmap_get(sso->link_map[hws], hwgrp); +} + +int +roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos, + uint8_t nb_qos, uint32_t nb_xaq) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso_grp_qos_cfg *req; + int i, rc; + + for (i = 0; i < nb_qos; i++) { + uint8_t xaq_prcnt = qos[i].xaq_prcnt; + uint8_t iaq_prcnt = qos[i].iaq_prcnt; + uint8_t taq_prcnt = qos[i].taq_prcnt; + + req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox); + if (req == NULL) { + rc = mbox_process(dev->mbox); + if (rc < 0) + return rc; + req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox); + if (req == NULL) + return -ENOSPC; + } + req->grp = qos[i].hwgrp; + req->xaq_limit = (nb_xaq * (xaq_prcnt ? xaq_prcnt : 100)) / 100; + req->iaq_thr = (SSO_HWGRP_IAQ_MAX_THR_MASK * + (iaq_prcnt ? iaq_prcnt : 100)) / + 100; + req->taq_thr = (SSO_HWGRP_TAQ_MAX_THR_MASK * + (taq_prcnt ? taq_prcnt : 100)) / + 100; + } + + return mbox_process(dev->mbox); +} + +int +roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id, + uint16_t hwgrps) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso_hw_setconfig *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_sso_hw_setconfig(dev->mbox); + if (req == NULL) + return rc; + req->npa_pf_func = idev_npa_pffunc_get(); + req->npa_aura_id = npa_aura_id; + req->hwgrps = hwgrps; + + return mbox_process(dev->mbox); +} + +int +roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso_hw_xaq_release *req; + + req = mbox_alloc_msg_sso_hw_release_xaq_aura(dev->mbox); + if (req == NULL) + return -EINVAL; + req->hwgrps = hwgrps; + + return mbox_process(dev->mbox); +} + +int +roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp, + uint8_t weight, uint8_t affinity, uint8_t priority) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso_grp_priority *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox); + if (req == NULL) + return rc; + req->grp = hwgrp; + req->weight = weight; + req->affinity = affinity; + req->priority = priority; + + rc = mbox_process(dev->mbox); + if (rc < 0) + return rc; + plt_sso_dbg("HWGRP %d weight %d affinity %d priority %d", hwgrp, weight, + affinity, priority); + + return 0; +} + +int roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) { struct sso_lf_alloc_rsp *rsp_hwgrp; diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h index 7236045..ed2713c 100644 --- a/drivers/common/cnxk/roc_sso.h +++ b/drivers/common/cnxk/roc_sso.h @@ -5,6 +5,13 @@ #ifndef _ROC_SSO_H_ #define _ROC_SSO_H_ +struct roc_sso_hwgrp_qos { + uint16_t hwgrp; + uint8_t xaq_prcnt; + uint8_t iaq_prcnt; + uint8_t taq_prcnt; +}; + struct roc_sso { struct plt_pci_device *pci_dev; /* Public data.
*/ @@ -30,11 +37,25 @@ int __roc_api roc_sso_dev_fini(struct roc_sso *roc_sso); int __roc_api roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp); void __roc_api roc_sso_rsrc_fini(struct roc_sso *roc_sso); +int __roc_api roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, + struct roc_sso_hwgrp_qos *qos, + uint8_t nb_qos, uint32_t nb_xaq); +int __roc_api roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, + uint32_t npa_aura_id, uint16_t hwgrps); +int __roc_api roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, + uint16_t hwgrps); +int __roc_api roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, + uint16_t hwgrp, uint8_t weight, + uint8_t affinity, uint8_t priority); uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns); int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp); int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp); +int __roc_api roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso, + uint8_t hws, uint16_t hwgrp); uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws); +uintptr_t __roc_api roc_sso_hwgrp_base_get(struct roc_sso *roc_sso, + uint16_t hwgrp); #endif /* _ROC_SSOW_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 2656e11..3e3d9dc 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -176,6 +176,12 @@ INTERNAL { roc_plt_init; roc_sso_dev_fini; roc_sso_dev_init; + roc_sso_hwgrp_alloc_xaq; + roc_sso_hwgrp_base_get; + roc_sso_hwgrp_hws_link_status; + roc_sso_hwgrp_qos_config; + roc_sso_hwgrp_release_xaq; + roc_sso_hwgrp_set_priority; roc_sso_hws_base_get; roc_sso_hws_link; roc_sso_hws_unlink; From patchwork Thu Apr 1 12:38:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 90421 
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:13 +0530
Message-ID: <20210401123817.14348-49-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 48/52] common/cnxk: add sso irq support

From: Pavan Nikhilesh

Add support for registering and unregistering SSO HWS and HWGRP IRQs.
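All of the IRQ helpers in this patch follow the same mask-hook-unmask sequence: write all-ones to the `*_INT_ENA_W1C` register to disable the interrupt, call `dev_irq_register()`/`dev_irq_unregister()`, then write all-ones to `*_INT_ENA_W1S` to re-enable it. The toy model below illustrates the write-1-to-clear/write-1-to-set semantics; `fake_lf` is a stand-in for the hardware register, not driver code:

```c
#include <stdint.h>

/* Toy model of a W1C/W1S interrupt-enable register pair, as used by
 * SSO_LF_GGRP_INT_ENA_W1C/W1S and SSOW_LF_GWS_INT_ENA_W1C/W1S.
 * 'fake_lf' is illustrative only. */
struct fake_lf {
	uint64_t int_ena; /* current interrupt-enable bits */
};

/* Writing 1s to the W1C alias clears the corresponding enable bits. */
static void fake_w1c(struct fake_lf *lf, uint64_t mask) { lf->int_ena &= ~mask; }

/* Writing 1s to the W1S alias sets the corresponding enable bits. */
static void fake_w1s(struct fake_lf *lf, uint64_t mask) { lf->int_ena |= mask; }

/* The registration sequence from the patch: mask everything, hook the
 * handler (elided), then unmask. */
static uint64_t fake_register_irq(struct fake_lf *lf)
{
	fake_w1c(lf, ~0ull); /* disable before hooking the vector */
	/* dev_irq_register(handle, handler, rsrc, vec) would run here. */
	fake_w1s(lf, ~0ull); /* enable once the handler is in place */
	return lf->int_ena;
}
```

The W1C/W1S split makes the disable and enable steps idempotent and race-free: no read-modify-write of the enable register is ever needed.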
Signed-off-by: Pavan Nikhilesh --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_sso.c | 41 ++++++++++ drivers/common/cnxk/roc_sso_irq.c | 164 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_sso_priv.h | 14 ++++ 4 files changed, 220 insertions(+) create mode 100644 drivers/common/cnxk/roc_sso_irq.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 79c8eaa..d28e273 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -40,5 +40,6 @@ sources = files('roc_dev.c', 'roc_npc_utils.c', 'roc_platform.c', 'roc_sso.c', + 'roc_sso_irq.c', 'roc_utils.c') includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c index f4c4e5b..80d0320 100644 --- a/drivers/common/cnxk/roc_sso.c +++ b/drivers/common/cnxk/roc_sso.c @@ -185,6 +185,27 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, } } +static int +sso_msix_fill(struct roc_sso *roc_sso, uint16_t nb_hws, uint16_t nb_hwgrp) +{ + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct msix_offset_rsp *rsp; + struct dev *dev = &sso->dev; + int i, rc; + + mbox_alloc_msg_msix_offset(dev->mbox); + rc = mbox_process_msg(dev->mbox, (void **)&rsp); + if (rc < 0) + return rc; + + for (i = 0; i < nb_hws; i++) + sso->hws_msix_offset[i] = rsp->ssow_msixoff[i]; + for (i = 0; i < nb_hwgrp; i++) + sso->hwgrp_msix_offset[i] = rsp->sso_msixoff[i]; + + return 0; +} + /* Public Functions. 
*/ uintptr_t roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws) @@ -363,6 +384,7 @@ roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp, int roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) { + struct sso *sso = roc_sso_to_sso_priv(roc_sso); struct sso_lf_alloc_rsp *rsp_hwgrp; int rc; @@ -400,10 +422,25 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) roc_sso->xae_waes = rsp_hwgrp->xaq_wq_entries; roc_sso->iue = rsp_hwgrp->in_unit_entries; + rc = sso_msix_fill(roc_sso, nb_hws, nb_hwgrp); + if (rc < 0) { + plt_err("Unable to get MSIX offsets for SSO LFs"); + goto sso_msix_fail; + } + + rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws, + nb_hwgrp); + if (rc < 0) { + plt_err("Failed to register SSO LF IRQs"); + goto sso_msix_fail; + } + roc_sso->nb_hwgrp = nb_hwgrp; roc_sso->nb_hws = nb_hws; return 0; +sso_msix_fail: + sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp); hwgrp_alloc_fail: sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, nb_hws); hws_alloc_fail: @@ -416,9 +453,13 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) void roc_sso_rsrc_fini(struct roc_sso *roc_sso) { + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp) return; + sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, + roc_sso->nb_hws, roc_sso->nb_hwgrp); sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, roc_sso->nb_hws); sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp); diff --git a/drivers/common/cnxk/roc_sso_irq.c b/drivers/common/cnxk/roc_sso_irq.c new file mode 100644 index 0000000..bf41482 --- /dev/null +++ b/drivers/common/cnxk/roc_sso_irq.c @@ -0,0 +1,164 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +static void +sso_hwgrp_irq(void *param) +{ + struct sso_rsrc *rsrc = param; + uint64_t intr; + + intr = plt_read64(rsrc->base + SSO_LF_GGRP_INT); + if (intr == 0) + return; + + plt_err("GGRP %d GGRP_INT=0x%" PRIx64 "", rsrc->rsrc_id, intr); + + /* Clear interrupt */ + plt_write64(intr, rsrc->base + SSO_LF_GGRP_INT); +} + +static int +sso_hwgrp_register_irq(struct plt_intr_handle *handle, uint16_t ggrp_msixoff, + struct sso_rsrc *rsrc) +{ + int rc, vec; + + vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP; + + /* Clear err interrupt */ + plt_write64(~0ull, rsrc->base + SSO_LF_GGRP_INT_ENA_W1C); + /* Set used interrupt vectors */ + rc = dev_irq_register(handle, sso_hwgrp_irq, (void *)rsrc, vec); + /* Enable hw interrupt */ + plt_write64(~0ull, rsrc->base + SSO_LF_GGRP_INT_ENA_W1S); + + return rc; +} + +static void +sso_hws_irq(void *param) +{ + struct sso_rsrc *rsrc = param; + uint64_t intr; + + intr = plt_read64(rsrc->base + SSOW_LF_GWS_INT); + if (intr == 0) + return; + + plt_err("GWS %d GWS_INT=0x%" PRIx64 "", rsrc->rsrc_id, intr); + + /* Clear interrupt */ + plt_write64(intr, rsrc->base + SSOW_LF_GWS_INT); +} + +static int +sso_hws_register_irq(struct plt_intr_handle *handle, uint16_t hws_msixoff, + struct sso_rsrc *rsrc) +{ + int rc, vec; + + vec = hws_msixoff + SSOW_LF_INT_VEC_IOP; + + /* Clear err interrupt */ + plt_write64(~0ull, rsrc->base + SSOW_LF_GWS_INT_ENA_W1C); + /* Set used interrupt vectors */ + rc = dev_irq_register(handle, sso_hws_irq, (void *)rsrc, vec); + /* Enable hw interrupt */ + plt_write64(~0ull, rsrc->base + SSOW_LF_GWS_INT_ENA_W1S); + + return rc; +} + +int +sso_register_irqs_priv(struct roc_sso *roc_sso, struct plt_intr_handle *handle, + uint16_t nb_hws, uint16_t nb_hwgrp) +{ + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int i, rc = SSO_ERR_PARAM; + + for (i = 0; i < nb_hws; i++) { + if (sso->hws_msix_offset[i] == MSIX_VECTOR_INVALID) { + plt_err("Invalid 
SSO HWS MSIX offset[%d] vector 0x%x", + i, sso->hws_msix_offset[i]); + goto fail; + } + } + + for (i = 0; i < nb_hwgrp; i++) { + if (sso->hwgrp_msix_offset[i] == MSIX_VECTOR_INVALID) { + plt_err("Invalid SSO HWGRP MSIX offset[%d] vector 0x%x", + i, sso->hwgrp_msix_offset[i]); + goto fail; + } + } + + for (i = 0; i < nb_hws; i++) { + uintptr_t base = + dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | i << 12); + + sso->hws_rsrc[i].rsrc_id = i; + sso->hws_rsrc[i].base = base; + rc = sso_hws_register_irq(handle, sso->hws_msix_offset[i], + &sso->hws_rsrc[i]); + } + + for (i = 0; i < nb_hwgrp; i++) { + uintptr_t base = + dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | i << 12); + + sso->hwgrp_rsrc[i].rsrc_id = i; + sso->hwgrp_rsrc[i].base = base; + rc = sso_hwgrp_register_irq(handle, sso->hwgrp_msix_offset[i], + &sso->hwgrp_rsrc[i]); + } +fail: + return rc; +} + +static void +sso_hwgrp_unregister_irq(struct plt_intr_handle *handle, uint16_t ggrp_msixoff, + struct sso_rsrc *rsrc) +{ + int vec; + + vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP; + + /* Clear err interrupt */ + plt_write64(~0ull, rsrc->base + SSO_LF_GGRP_INT_ENA_W1C); + dev_irq_unregister(handle, sso_hwgrp_irq, (void *)rsrc, vec); +} + +static void +sso_hws_unregister_irq(struct plt_intr_handle *handle, uint16_t gws_msixoff, + struct sso_rsrc *rsrc) +{ + int vec; + + vec = gws_msixoff + SSOW_LF_INT_VEC_IOP; + + /* Clear err interrupt */ + plt_write64(~0ull, rsrc->base + SSOW_LF_GWS_INT_ENA_W1C); + dev_irq_unregister(handle, sso_hws_irq, (void *)rsrc, vec); +} + +void +sso_unregister_irqs_priv(struct roc_sso *roc_sso, + struct plt_intr_handle *handle, uint16_t nb_hws, + uint16_t nb_hwgrp) +{ + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + int i; + + for (i = 0; i < nb_hwgrp; i++) + sso_hwgrp_unregister_irq(handle, sso->hwgrp_msix_offset[i], + &sso->hwgrp_rsrc[i]); + + for (i = 0; i < nb_hws; i++) + sso_hws_unregister_irq(handle, sso->hws_msix_offset[i], + &sso->hws_rsrc[i]); +} diff --git a/drivers/common/cnxk/roc_sso_priv.h 
b/drivers/common/cnxk/roc_sso_priv.h index ad35be1..5361d4f 100644 --- a/drivers/common/cnxk/roc_sso_priv.h +++ b/drivers/common/cnxk/roc_sso_priv.h @@ -13,6 +13,12 @@ struct sso_rsrc { struct sso { struct plt_pci_device *pci_dev; struct dev dev; + /* Interrupt handler args. */ + struct sso_rsrc hws_rsrc[MAX_RVU_BLKLF_CNT]; + struct sso_rsrc hwgrp_rsrc[MAX_RVU_BLKLF_CNT]; + /* MSIX offsets */ + uint16_t hws_msix_offset[MAX_RVU_BLKLF_CNT]; + uint16_t hwgrp_msix_offset[MAX_RVU_BLKLF_CNT]; /* SSO link mapping. */ struct plt_bitmap **link_map; void *link_map_mem; @@ -33,4 +39,12 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso) return (struct sso *)&roc_sso->reserved[0]; } +/* SSO IRQ */ +int sso_register_irqs_priv(struct roc_sso *roc_sso, + struct plt_intr_handle *handle, uint16_t nb_hws, + uint16_t nb_hwgrp); +void sso_unregister_irqs_priv(struct roc_sso *roc_sso, + struct plt_intr_handle *handle, uint16_t nb_hws, + uint16_t nb_hwgrp); + #endif /* _ROC_SSO_PRIV_H_ */

From patchwork Thu Apr 1 12:38:14 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90422
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:14 +0530
Message-ID: <20210401123817.14348-50-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 49/52] common/cnxk: add sso debug support
From: Pavan Nikhilesh

Add SSO debug dump support. This dumps all SSO LF register values to a given file handle.

Signed-off-by: Pavan Nikhilesh --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_sso.h | 4 +++ drivers/common/cnxk/roc_sso_debug.c | 68 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/version.map | 1 + 4 files changed, 74 insertions(+) create mode 100644 drivers/common/cnxk/roc_sso_debug.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index d28e273..e6e2ad3 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -40,6 +40,7 @@ sources = files('roc_dev.c', 'roc_npc_utils.c', 'roc_platform.c', 'roc_sso.c', + 'roc_sso_debug.c', 'roc_sso_irq.c', 'roc_utils.c') includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h index ed2713c..f85799b 100644 --- a/drivers/common/cnxk/roc_sso.h +++ b/drivers/common/cnxk/roc_sso.h @@ -58,4 +58,8 @@ uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws); uintptr_t __roc_api roc_sso_hwgrp_base_get(struct roc_sso *roc_sso, uint16_t hwgrp); +/* Debug */ +void __roc_api roc_sso_dump(struct roc_sso *roc_sso, uint8_t nb_hws, + uint16_t hwgrp, FILE *f); + #endif /* _ROC_SSOW_H_ */ diff --git a/drivers/common/cnxk/roc_sso_debug.c b/drivers/common/cnxk/roc_sso_debug.c new file mode 100644 index 0000000..7571bad --- /dev/null +++ b/drivers/common/cnxk/roc_sso_debug.c @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell.
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +static void +sso_hws_dump(uintptr_t base, FILE *f) +{ + fprintf(f, "SSOW_LF_GWS Base addr 0x%" PRIx64 "\n", (uint64_t)base); + fprintf(f, "SSOW_LF_GWS_LINKS 0x%" PRIx64 "\n", + plt_read64(base + SSOW_LF_GWS_LINKS)); + fprintf(f, "SSOW_LF_GWS_PENDWQP 0x%" PRIx64 "\n", + plt_read64(base + SSOW_LF_GWS_PENDWQP)); + fprintf(f, "SSOW_LF_GWS_PENDSTATE 0x%" PRIx64 "\n", + plt_read64(base + SSOW_LF_GWS_PENDSTATE)); + fprintf(f, "SSOW_LF_GWS_NW_TIM 0x%" PRIx64 "\n", + plt_read64(base + SSOW_LF_GWS_NW_TIM)); + fprintf(f, "SSOW_LF_GWS_TAG 0x%" PRIx64 "\n", + plt_read64(base + SSOW_LF_GWS_TAG)); + fprintf(f, "SSOW_LF_GWS_WQP 0x%" PRIx64 "\n", + plt_read64(base + SSOW_LF_GWS_WQP)); + fprintf(f, "SSOW_LF_GWS_SWTP 0x%" PRIx64 "\n", + plt_read64(base + SSOW_LF_GWS_SWTP)); + fprintf(f, "SSOW_LF_GWS_PENDTAG 0x%" PRIx64 "\n", + plt_read64(base + SSOW_LF_GWS_PENDTAG)); +} + +static void +sso_hwgrp_dump(uintptr_t base, FILE *f) +{ + fprintf(f, "SSO_LF_GGRP Base addr 0x%" PRIx64 "\n", (uint64_t)base); + fprintf(f, "SSO_LF_GGRP_QCTL 0x%" PRIx64 "\n", + plt_read64(base + SSO_LF_GGRP_QCTL)); + fprintf(f, "SSO_LF_GGRP_XAQ_CNT 0x%" PRIx64 "\n", + plt_read64(base + SSO_LF_GGRP_XAQ_CNT)); + fprintf(f, "SSO_LF_GGRP_INT_THR 0x%" PRIx64 "\n", + plt_read64(base + SSO_LF_GGRP_INT_THR)); + fprintf(f, "SSO_LF_GGRP_INT_CNT 0x%" PRIx64 "\n", + plt_read64(base + SSO_LF_GGRP_INT_CNT)); + fprintf(f, "SSO_LF_GGRP_AQ_CNT 0x%" PRIx64 "\n", + plt_read64(base + SSO_LF_GGRP_AQ_CNT)); + fprintf(f, "SSO_LF_GGRP_AQ_THR 0x%" PRIx64 "\n", + plt_read64(base + SSO_LF_GGRP_AQ_THR)); + fprintf(f, "SSO_LF_GGRP_MISC_CNT 0x%" PRIx64 "\n", + plt_read64(base + SSO_LF_GGRP_MISC_CNT)); +} + +void +roc_sso_dump(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t hwgrp, FILE *f) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + uintptr_t base; + int i; + + /* Dump SSOW registers */ + for (i = 0; i < nb_hws; i++) { + base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | i
<< 12); + sso_hws_dump(base, f); + } + + /* Dump SSO registers */ + for (i = 0; i < hwgrp; i++) { + base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | i << 12); + sso_hwgrp_dump(base, f); + } +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 3e3d9dc..e93d623 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -176,6 +176,7 @@ INTERNAL { roc_plt_init; roc_sso_dev_fini; roc_sso_dev_init; + roc_sso_dump; roc_sso_hwgrp_alloc_xaq; roc_sso_hwgrp_base_get; roc_sso_hwgrp_hws_link_status;

From patchwork Thu Apr 1 12:38:15 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90423
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:15 +0530
Message-ID: <20210401123817.14348-51-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 50/52] common/cnxk: add base tim device support

From: Pavan Nikhilesh

Add TIM device init and fini, which attach TIM LF resources to the RVU PF/VF, along with TIM LF alloc and free.
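The TIM LFs are reached through the same RVU BAR2 window scheme the SSO patches above use repeatedly, e.g. `dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | hwgrp << 12)`: the block select sits at bit 20 and each LF gets a 4 KB slot selected by bits [12..19]. A small sketch of that address math; `FAKE_BLOCK_ADDR` is a made-up block number for illustration, not a real RVU block address:

```c
#include <stdint.h>

/* Illustrative RVU BAR2 LF addressing: block select at bit 20, 4 KB
 * per-LF slot at bit 12. FAKE_BLOCK_ADDR is not a real block number. */
#define FAKE_BLOCK_ADDR 0xbull

static uintptr_t
lf_base(uintptr_t bar2, uint64_t block, uint16_t lf)
{
	return bar2 + (uintptr_t)(block << 20 | (uint64_t)lf << 12);
}
```

With this layout, block 0xb at LF slot 2 lands at `bar2 + 0xb00000 + 0x2000`, which is why every per-LF register access in these drivers starts from a base computed this way.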
Signed-off-by: Pavan Nikhilesh --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_api.h | 3 + drivers/common/cnxk/roc_platform.c | 1 + drivers/common/cnxk/roc_platform.h | 2 + drivers/common/cnxk/roc_priv.h | 3 + drivers/common/cnxk/roc_tim.c | 263 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_tim.h | 43 ++++++ drivers/common/cnxk/roc_tim_priv.h | 21 +++ drivers/common/cnxk/version.map | 9 ++ 9 files changed, 346 insertions(+) create mode 100644 drivers/common/cnxk/roc_tim.c create mode 100644 drivers/common/cnxk/roc_tim.h create mode 100644 drivers/common/cnxk/roc_tim_priv.h diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index e6e2ad3..1b02178 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -42,5 +42,6 @@ sources = files('roc_dev.c', 'roc_sso.c', 'roc_sso_debug.c', 'roc_sso_irq.c', + 'roc_tim.c', 'roc_utils.c') includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h index b7fc3b7..67f5d13 100644 --- a/drivers/common/cnxk/roc_api.h +++ b/drivers/common/cnxk/roc_api.h @@ -91,6 +91,9 @@ /* SSO */ #include "roc_sso.h" +/* TIM */ +#include "roc_tim.h" + /* Utils */ #include "roc_utils.h" diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index 408fe3d..28598cf 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -34,4 +34,5 @@ RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_nix, pmd.net.cnxk, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_npc, pmd.net.cnxk.flow, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_sso, pmd.event.cnxk, NOTICE); +RTE_LOG_REGISTER(cnxk_logtype_tim, pmd.event.cnxk.timer, NOTICE); RTE_LOG_REGISTER(cnxk_logtype_tm, pmd.net.cnxk.tm, NOTICE); diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index 9ea7b1a..c886dc5 100644 --- 
a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -138,6 +138,7 @@ extern int cnxk_logtype_npa; extern int cnxk_logtype_nix; extern int cnxk_logtype_npc; extern int cnxk_logtype_sso; +extern int cnxk_logtype_tim; extern int cnxk_logtype_tm; #define plt_err(fmt, args...) \ @@ -160,6 +161,7 @@ extern int cnxk_logtype_tm; #define plt_nix_dbg(fmt, ...) plt_dbg(nix, fmt, ##__VA_ARGS__) #define plt_npc_dbg(fmt, ...) plt_dbg(npc, fmt, ##__VA_ARGS__) #define plt_sso_dbg(fmt, ...) plt_dbg(sso, fmt, ##__VA_ARGS__) +#define plt_tim_dbg(fmt, ...) plt_dbg(tim, fmt, ##__VA_ARGS__) #define plt_tm_dbg(fmt, ...) plt_dbg(tm, fmt, ##__VA_ARGS__) #ifdef __cplusplus diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h index dd9d87a..5e7564c 100644 --- a/drivers/common/cnxk/roc_priv.h +++ b/drivers/common/cnxk/roc_priv.h @@ -29,4 +29,7 @@ /* SSO */ #include "roc_sso_priv.h" +/* TIM */ +#include "roc_tim_priv.h" + #endif /* _ROC_PRIV_H_ */ diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c new file mode 100644 index 0000000..37faa37 --- /dev/null +++ b/drivers/common/cnxk/roc_tim.c @@ -0,0 +1,263 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +static void +tim_err_desc(int rc) +{ + switch (rc) { + case TIM_AF_NO_RINGS_LEFT: + plt_err("Unable to allocate new TIM ring."); + break; + case TIM_AF_INVALID_NPA_PF_FUNC: + plt_err("Invalid NPA pf func."); + break; + case TIM_AF_INVALID_SSO_PF_FUNC: + plt_err("Invalid SSO pf func."); + break; + case TIM_AF_RING_STILL_RUNNING: + plt_err("Ring busy."); + break; + case TIM_AF_LF_INVALID: + plt_err("Invalid Ring id."); + break; + case TIM_AF_CSIZE_NOT_ALIGNED: + plt_err("Chunk size specified needs to be multiple of 16."); + break; + case TIM_AF_CSIZE_TOO_SMALL: + plt_err("Chunk size too small."); + break; + case TIM_AF_CSIZE_TOO_BIG: + plt_err("Chunk size too big."); + break; + case TIM_AF_INTERVAL_TOO_SMALL: + plt_err("Bucket traversal interval too small."); + break; + case TIM_AF_INVALID_BIG_ENDIAN_VALUE: + plt_err("Invalid Big endian value."); + break; + case TIM_AF_INVALID_CLOCK_SOURCE: + plt_err("Invalid Clock source specified."); + break; + case TIM_AF_GPIO_CLK_SRC_NOT_ENABLED: + plt_err("GPIO clock source not enabled."); + break; + case TIM_AF_INVALID_BSIZE: + plt_err("Invalid bucket size."); + break; + case TIM_AF_INVALID_ENABLE_PERIODIC: + plt_err("Invalid enable periodic value."); + break; + case TIM_AF_INVALID_ENABLE_DONTFREE: + plt_err("Invalid Don't free value."); + break; + case TIM_AF_ENA_DONTFRE_NSET_PERIODIC: + plt_err("Don't free bit not set when periodic is enabled."); + break; + case TIM_AF_RING_ALREADY_DISABLED: + plt_err("Ring already stopped."); + break; + default: + plt_err("Unknown Error."); + } +} + +int +roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc, + uint32_t *cur_bkt) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct tim_enable_rsp *rsp; + struct tim_ring_req *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_tim_enable_ring(dev->mbox); + if (req == NULL) + return rc; + req->ring = ring_id; + + rc = mbox_process_msg(dev->mbox, (void
**)&rsp); + if (rc < 0) { + tim_err_desc(rc); + return rc; + } + + if (cur_bkt) + *cur_bkt = rsp->currentbucket; + if (start_tsc) + *start_tsc = rsp->timestarted; + + return 0; +} + +int +roc_tim_lf_disable(struct roc_tim *roc_tim, uint8_t ring_id) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct tim_ring_req *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_tim_disable_ring(dev->mbox); + if (req == NULL) + return rc; + req->ring = ring_id; + + rc = mbox_process(dev->mbox); + if (rc < 0) { + tim_err_desc(rc); + return rc; + } + + return 0; +} + +uintptr_t +roc_tim_lf_base_get(struct roc_tim *roc_tim, uint8_t ring_id) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + + return dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12); +} + +int +roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id, + enum roc_tim_clk_src clk_src, uint8_t ena_periodic, + uint8_t ena_dfb, uint32_t bucket_sz, uint32_t chunk_sz, + uint32_t interval) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct tim_config_req *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_tim_config_ring(dev->mbox); + if (req == NULL) + return rc; + req->ring = ring_id; + req->bigendian = false; + req->bucketsize = bucket_sz; + req->chunksize = chunk_sz; + req->clocksource = clk_src; + req->enableperiodic = ena_periodic; + req->enabledontfreebuffer = ena_dfb; + req->interval = interval; + req->gpioedge = TIM_GPIO_LTOH_TRANS; + + rc = mbox_process(dev->mbox); + if (rc < 0) { + tim_err_desc(rc); + return rc; + } + + return 0; +} + +int +roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk) +{ + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct tim_lf_alloc_req *req; + struct tim_lf_alloc_rsp *rsp; + struct dev *dev = &sso->dev; + int rc = -ENOSPC; + + req = mbox_alloc_msg_tim_lf_alloc(dev->mbox); + if (req == NULL) + return rc; + req->npa_pf_func = idev_npa_pffunc_get(); + req->sso_pf_func = 
idev_sso_pffunc_get(); + req->ring = ring_id; + + rc = mbox_process_msg(dev->mbox, (void **)&rsp); + if (rc < 0) { + tim_err_desc(rc); + return rc; + } + + if (clk) + *clk = rsp->tenns_clk; + + return rc; +} + +int +roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id) +{ + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; + struct tim_ring_req *req; + int rc = -ENOSPC; + + req = mbox_alloc_msg_tim_lf_free(dev->mbox); + if (req == NULL) + return rc; + req->ring = ring_id; + + rc = mbox_process(dev->mbox); + if (rc < 0) { + tim_err_desc(rc); + return rc; + } + + return 0; +} + +int +roc_tim_init(struct roc_tim *roc_tim) +{ + struct rsrc_attach_req *attach_req; + struct free_rsrcs_rsp *free_rsrc; + struct dev *dev; + uint16_t nb_lfs; + int rc; + + if (roc_tim == NULL || roc_tim->roc_sso == NULL) + return TIM_ERR_PARAM; + + PLT_STATIC_ASSERT(sizeof(struct tim) <= TIM_MEM_SZ); + dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + nb_lfs = roc_tim->nb_lfs; + mbox_alloc_msg_free_rsrc_cnt(dev->mbox); + rc = mbox_process_msg(dev->mbox, (void *)&free_rsrc); + if (rc < 0) { + plt_err("Unable to get free rsrc count."); + return 0; + } + + if (nb_lfs && (free_rsrc->tim < nb_lfs)) { + plt_tim_dbg("Requested LFs : %d Available LFs : %d", nb_lfs, + free_rsrc->tim); + return 0; + } + + attach_req = mbox_alloc_msg_attach_resources(dev->mbox); + if (attach_req == NULL) + return -ENOSPC; + attach_req->modify = true; + attach_req->timlfs = nb_lfs ? 
nb_lfs : free_rsrc->tim; + nb_lfs = attach_req->timlfs; + + rc = mbox_process(dev->mbox); + if (rc < 0) { + plt_err("Unable to attach TIM LFs."); + return 0; + } + + return nb_lfs; +} + +void +roc_tim_fini(struct roc_tim *roc_tim) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct rsrc_detach_req *detach_req; + + detach_req = mbox_alloc_msg_detach_resources(dev->mbox); + PLT_ASSERT(detach_req); + detach_req->partial = true; + detach_req->timlfs = true; + + mbox_process(dev->mbox); +} diff --git a/drivers/common/cnxk/roc_tim.h b/drivers/common/cnxk/roc_tim.h new file mode 100644 index 0000000..159b021 --- /dev/null +++ b/drivers/common/cnxk/roc_tim.h @@ -0,0 +1,43 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#ifndef _ROC_TIM_H_ +#define _ROC_TIM_H_ + +enum roc_tim_clk_src { + ROC_TIM_CLK_SRC_10NS = 0, + ROC_TIM_CLK_SRC_GPIO, + ROC_TIM_CLK_SRC_GTI, + ROC_TIM_CLK_SRC_PTP, + ROC_TIM_CLK_SRC_INVALID, +}; + +struct roc_tim { + struct roc_sso *roc_sso; + /* Public data. */ + uint16_t nb_lfs; + /* Private data. 
*/ +#define TIM_MEM_SZ (1 * 1024) + uint8_t reserved[TIM_MEM_SZ] __plt_cache_aligned; +} __plt_cache_aligned; + +int __roc_api roc_tim_init(struct roc_tim *roc_tim); +void __roc_api roc_tim_fini(struct roc_tim *roc_tim); + +/* TIM config */ +int __roc_api roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, + uint64_t *start_tsc, uint32_t *cur_bkt); +int __roc_api roc_tim_lf_disable(struct roc_tim *roc_tim, uint8_t ring_id); +int __roc_api roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id, + enum roc_tim_clk_src clk_src, + uint8_t ena_periodic, uint8_t ena_dfb, + uint32_t bucket_sz, uint32_t chunk_sz, + uint32_t interval); +int __roc_api roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, + uint64_t *clk); +int __roc_api roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id); +uintptr_t __roc_api roc_tim_lf_base_get(struct roc_tim *roc_tim, + uint8_t ring_id); + +#endif diff --git a/drivers/common/cnxk/roc_tim_priv.h b/drivers/common/cnxk/roc_tim_priv.h new file mode 100644 index 0000000..08697f6 --- /dev/null +++ b/drivers/common/cnxk/roc_tim_priv.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#ifndef _ROC_TIM_PRIV_H_ +#define _ROC_TIM_PRIV_H_ + +struct tim { +}; + +enum tim_err_status { + TIM_ERR_PARAM = -5120, +}; + +static inline struct tim * +roc_tim_to_tim_priv(struct roc_tim *roc_tim) +{ + return (struct tim *)&roc_tim->reserved[0]; +} + +#endif /* _ROC_TIM_PRIV_H_ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index e93d623..9b3c5a0 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -7,6 +7,7 @@ INTERNAL { cnxk_logtype_npa; cnxk_logtype_npc; cnxk_logtype_sso; + cnxk_logtype_tim; cnxk_logtype_tm; roc_clk_freq_get; roc_error_msg_get; @@ -189,6 +190,14 @@ INTERNAL { roc_sso_ns_to_gw; roc_sso_rsrc_fini; roc_sso_rsrc_init; + roc_tim_fini; + roc_tim_init; + roc_tim_lf_alloc; + roc_tim_lf_base_get; + roc_tim_lf_config; + roc_tim_lf_disable; + roc_tim_lf_enable; + roc_tim_lf_free; local: *; };

From patchwork Thu Apr 1 12:38:16 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90424
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:16 +0530
Message-ID: <20210401123817.14348-52-ndabilpuram@marvell.com>
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 51/52] common/cnxk: add tim irq support
List-Id: DPDK patches and
discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add TIM LF IRQ register and un-register functions. Signed-off-by: Pavan Nikhilesh --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_tim.c | 51 ++++++++++++++++++ drivers/common/cnxk/roc_tim_irq.c | 104 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_tim_priv.h | 9 ++++ 4 files changed, 165 insertions(+) create mode 100644 drivers/common/cnxk/roc_tim_irq.c diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 1b02178..4573d13 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -43,5 +43,6 @@ sources = files('roc_dev.c', 'roc_sso_debug.c', 'roc_sso_irq.c', 'roc_tim.c', + 'roc_tim_irq.c', 'roc_utils.c') includes += include_directories('../../bus/pci') diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c index 37faa37..387164b 100644 --- a/drivers/common/cnxk/roc_tim.c +++ b/drivers/common/cnxk/roc_tim.c @@ -5,6 +5,25 @@ #include "roc_api.h" #include "roc_priv.h" +static int +tim_fill_msix(struct roc_tim *roc_tim, uint16_t nb_ring) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct tim *tim = roc_tim_to_tim_priv(roc_tim); + struct msix_offset_rsp *rsp; + int i, rc; + + mbox_alloc_msg_msix_offset(dev->mbox); + rc = mbox_process_msg(dev->mbox, (void **)&rsp); + if (rc < 0) + return rc; + + for (i = 0; i < nb_ring; i++) + tim->tim_msix_offsets[i] = rsp->timlf_msixoff[i]; + + return 0; +} + static void tim_err_desc(int rc) { @@ -158,6 +177,8 @@ int roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk) { struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct tim *tim = roc_tim_to_tim_priv(roc_tim); + struct tim_ring_req *free_req; struct tim_lf_alloc_req *req; struct tim_lf_alloc_rsp *rsp; struct dev *dev = &sso->dev; @@ -179,6 +200,17 @@ 
roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk) if (clk) *clk = rsp->tenns_clk; + rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id, + tim->tim_msix_offsets[ring_id]); + if (rc < 0) { + plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id); + free_req = mbox_alloc_msg_tim_lf_free(dev->mbox); + if (free_req == NULL) + return -ENOSPC; + free_req->ring = ring_id; + mbox_process(dev->mbox); + } + return rc; } @@ -186,10 +218,14 @@ int roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id) { struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct tim *tim = roc_tim_to_tim_priv(roc_tim); struct dev *dev = &sso->dev; struct tim_ring_req *req; int rc = -ENOSPC; + tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id, + tim->tim_msix_offsets[ring_id]); + req = mbox_alloc_msg_tim_lf_free(dev->mbox); if (req == NULL) return rc; @@ -208,6 +244,7 @@ int roc_tim_init(struct roc_tim *roc_tim) { struct rsrc_attach_req *attach_req; + struct rsrc_detach_req *detach_req; struct free_rsrcs_rsp *free_rsrc; struct dev *dev; uint16_t nb_lfs; @@ -245,6 +282,20 @@ roc_tim_init(struct roc_tim *roc_tim) return 0; } + rc = tim_fill_msix(roc_tim, nb_lfs); + if (rc < 0) { + plt_err("Unable to get TIM MSIX vectors"); + + detach_req = mbox_alloc_msg_detach_resources(dev->mbox); + if (detach_req == NULL) + return -ENOSPC; + detach_req->partial = true; + detach_req->timlfs = true; + mbox_process(dev->mbox); + + return 0; + } + return nb_lfs; } diff --git a/drivers/common/cnxk/roc_tim_irq.c b/drivers/common/cnxk/roc_tim_irq.c new file mode 100644 index 0000000..7bd3e76 --- /dev/null +++ b/drivers/common/cnxk/roc_tim_irq.c @@ -0,0 +1,104 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "roc_api.h" +#include "roc_priv.h" + +static void +tim_lf_irq(void *param) +{ + uintptr_t base = (uintptr_t)param; + uint64_t intr; + uint8_t ring; + + ring = (base >> 12) & 0xFF; + + intr = plt_read64(base + TIM_LF_NRSPERR_INT); + plt_err("TIM RING %d TIM_LF_NRSPERR_INT=0x%" PRIx64 "", ring, intr); + intr = plt_read64(base + TIM_LF_RAS_INT); + plt_err("TIM RING %d TIM_LF_RAS_INT=0x%" PRIx64 "", ring, intr); + + /* Clear interrupt */ + plt_write64(intr, base + TIM_LF_NRSPERR_INT); + plt_write64(intr, base + TIM_LF_RAS_INT); +} + +static int +tim_lf_register_irq(uintptr_t base, struct plt_intr_handle *handle, + uint16_t msix_offset) +{ + unsigned int vec; + int rc; + + vec = msix_offset + TIM_LF_INT_VEC_NRSPERR_INT; + + /* Clear err interrupt */ + plt_write64(~0ull, base + TIM_LF_NRSPERR_INT); + /* Set used interrupt vectors */ + rc = dev_irq_register(handle, tim_lf_irq, (void *)base, vec); + /* Enable hw interrupt */ + plt_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1S); + + vec = msix_offset + TIM_LF_INT_VEC_RAS_INT; + + /* Clear err interrupt */ + plt_write64(~0ull, base + TIM_LF_RAS_INT); + /* Set used interrupt vectors */ + rc = dev_irq_register(handle, tim_lf_irq, (void *)base, vec); + /* Enable hw interrupt */ + plt_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1S); + + return rc; +} + +int +tim_register_irq_priv(struct roc_tim *roc_tim, struct plt_intr_handle *handle, + uint8_t ring_id, uint16_t msix_offset) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + uintptr_t base; + + if (msix_offset == MSIX_VECTOR_INVALID) { + plt_err("Invalid MSIX offset for TIM LF %d", ring_id); + return TIM_ERR_PARAM; + } + + base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12); + return tim_lf_register_irq(base, handle, msix_offset); +} + +static void +tim_lf_unregister_irq(uintptr_t base, struct plt_intr_handle *handle, + uint16_t msix_offset) +{ + unsigned int vec; + + vec = msix_offset + TIM_LF_INT_VEC_NRSPERR_INT; + + /* Clear err 
interrupt */ + plt_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1C); + dev_irq_unregister(handle, tim_lf_irq, (void *)base, vec); + + vec = msix_offset + TIM_LF_INT_VEC_RAS_INT; + + /* Clear err interrupt */ + plt_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1C); + dev_irq_unregister(handle, tim_lf_irq, (void *)base, vec); +} + +void +tim_unregister_irq_priv(struct roc_tim *roc_tim, struct plt_intr_handle *handle, + uint8_t ring_id, uint16_t msix_offset) +{ + struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + uintptr_t base; + + if (msix_offset == MSIX_VECTOR_INVALID) { + plt_err("Invalid MSIX offset for TIM LF %d", ring_id); + return; + } + + base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12); + tim_lf_unregister_irq(base, handle, msix_offset); +} diff --git a/drivers/common/cnxk/roc_tim_priv.h b/drivers/common/cnxk/roc_tim_priv.h index 08697f6..cc083d2 100644 --- a/drivers/common/cnxk/roc_tim_priv.h +++ b/drivers/common/cnxk/roc_tim_priv.h @@ -6,6 +6,7 @@ #define _ROC_TIM_PRIV_H_ struct tim { + uint16_t tim_msix_offsets[MAX_RVU_BLKLF_CNT]; }; enum tim_err_status { @@ -18,4 +19,12 @@ roc_tim_to_tim_priv(struct roc_tim *roc_tim) return (struct tim *)&roc_tim->reserved[0]; } +/* TIM IRQ */ +int tim_register_irq_priv(struct roc_tim *roc_tim, + struct plt_intr_handle *handle, uint8_t ring_id, + uint16_t msix_offset); +void tim_unregister_irq_priv(struct roc_tim *roc_tim, + struct plt_intr_handle *handle, uint8_t ring_id, + uint16_t msix_offset); + #endif /* _ROC_TIM_PRIV_H_ */

From patchwork Thu Apr 1 12:38:17 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90425
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:17 +0530
Message-ID: <20210401123817.14348-53-ndabilpuram@marvell.com>
X-Mailer:
git-send-email 2.8.4 In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com> References: <20210305133918.8005-1-ndabilpuram@marvell.com> <20210401123817.14348-1-ndabilpuram@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 2aBXAf_6TOwQNuc8Suqzr6AUE5FUEsPi X-Proofpoint-ORIG-GUID: 2aBXAf_6TOwQNuc8Suqzr6AUE5FUEsPi X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.369, 18.0.761 definitions=2021-04-01_05:2021-03-31, 2021-04-01 signatures=0 Subject: [dpdk-dev] [PATCH v3 52/52] common/cnxk: add support for rss action in rte_flow X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Satheesh Paul Added support for allocating rss group and setting it as action of an NPC rule for rte_flow. Signed-off-by: Satheesh Paul --- drivers/common/cnxk/roc_npc.c | 159 +++++++++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_npc.h | 20 +++++ drivers/common/cnxk/roc_npc_mcam.c | 1 + drivers/common/cnxk/roc_npc_priv.h | 9 +++ 4 files changed, 189 insertions(+) diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c index c1ac3c7..abaef77 100644 --- a/drivers/common/cnxk/roc_npc.c +++ b/drivers/common/cnxk/roc_npc.c @@ -634,6 +634,129 @@ roc_npc_flow_parse(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, return npc_program_mcam(npc, &parse_state, 0); } +int +npc_rss_free_grp_get(struct npc *npc, uint32_t *pos) +{ + struct plt_bitmap *bmap = npc->rss_grp_entries; + + for (*pos = 0; *pos < ROC_NIX_RSS_GRPS; ++*pos) { + if (!plt_bitmap_get(bmap, *pos)) + break; + } + return *pos < ROC_NIX_RSS_GRPS ? 
0 : -1; +} + +int +npc_rss_action_configure(struct roc_npc *roc_npc, + const struct roc_npc_action_rss *rss, uint8_t *alg_idx, + uint32_t *rss_grp, uint32_t mcam_id) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + struct roc_nix *roc_nix = roc_npc->roc_nix; + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint32_t flowkey_cfg, rss_grp_idx, i, rem; + uint8_t key[ROC_NIX_RSS_KEY_LEN]; + const uint8_t *key_ptr; + uint8_t flowkey_algx; + uint16_t *reta; + int rc; + + rc = npc_rss_free_grp_get(npc, &rss_grp_idx); + /* RSS group :0 is not usable for flow rss action */ + if (rc < 0 || rss_grp_idx == 0) + return -ENOSPC; + + for (i = 0; i < rss->queue_num; i++) { + if (rss->queue[i] >= nix->nb_rx_queues) { + plt_err("queue id > max number of queues"); + return -EINVAL; + } + } + + *rss_grp = rss_grp_idx; + + if (rss->key == NULL) { + roc_nix_rss_key_default_fill(roc_nix, key); + key_ptr = key; + } else { + key_ptr = rss->key; + } + + roc_nix_rss_key_set(roc_nix, key_ptr); + + /* If queue count passed in the rss action is less than + * HW configured reta size, replicate rss action reta + * across HW reta table. 
+ */ + reta = nix->reta[rss_grp_idx]; + + if (rss->queue_num > nix->reta_sz) { + plt_err("too many queues for RSS context"); + return -ENOTSUP; + } + + for (i = 0; i < (nix->reta_sz / rss->queue_num); i++) + memcpy(reta + i * rss->queue_num, rss->queue, + sizeof(uint16_t) * rss->queue_num); + + rem = nix->reta_sz % rss->queue_num; + if (rem) + memcpy(&reta[i * rss->queue_num], rss->queue, + rem * sizeof(uint16_t)); + + rc = roc_nix_rss_reta_set(roc_nix, *rss_grp, reta); + if (rc) { + plt_err("Failed to init rss table rc = %d", rc); + return rc; + } + + flowkey_cfg = roc_npc->flowkey_cfg_state; + + rc = roc_nix_rss_flowkey_set(roc_nix, &flowkey_algx, flowkey_cfg, + *rss_grp, mcam_id); + if (rc) { + plt_err("Failed to set rss hash function rc = %d", rc); + return rc; + } + + *alg_idx = flowkey_algx; + + plt_bitmap_set(npc->rss_grp_entries, *rss_grp); + + return 0; +} + +int +npc_rss_action_program(struct roc_npc *roc_npc, + const struct roc_npc_action actions[], + struct roc_npc_flow *flow) +{ + const struct roc_npc_action_rss *rss; + uint32_t rss_grp; + uint8_t alg_idx; + int rc; + + for (; actions->type != ROC_NPC_ACTION_TYPE_END; actions++) { + if (actions->type == ROC_NPC_ACTION_TYPE_RSS) { + rss = (const struct roc_npc_action_rss *)actions->conf; + rc = npc_rss_action_configure(roc_npc, rss, &alg_idx, + &rss_grp, flow->mcam_id); + if (rc) + return rc; + + flow->npc_action &= (~(0xfULL)); + flow->npc_action |= NIX_RX_ACTIONOP_RSS; + flow->npc_action |= + ((uint64_t)(alg_idx & NPC_RSS_ACT_ALG_MASK) + << NPC_RSS_ACT_ALG_OFFSET) | + ((uint64_t)(rss_grp & NPC_RSS_ACT_GRP_MASK) + << NPC_RSS_ACT_GRP_OFFSET); + break; + } + } + return 0; +} + struct roc_npc_flow * roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, const struct roc_npc_item_info pattern[], @@ -666,6 +789,12 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, goto err_exit; } + rc = npc_rss_action_program(roc_npc, actions, flow); + if (rc != 0) { + 
*errcode = rc; + goto set_rss_failed; + } + list = &npc->flow_list[flow->priority]; /* List in ascending order of mcam entries */ TAILQ_FOREACH(flow_iter, list, next) { @@ -678,12 +807,36 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, TAILQ_INSERT_TAIL(list, flow, next); return flow; +set_rss_failed: + rc = npc_mcam_free_entry(npc, flow->mcam_id); + if (rc != 0) { + *errcode = rc; + plt_free(flow); + return NULL; + } err_exit: plt_free(flow); return NULL; } int +npc_rss_group_free(struct npc *npc, struct roc_npc_flow *flow) +{ + uint32_t rss_grp; + + if ((flow->npc_action & 0xF) == NIX_RX_ACTIONOP_RSS) { + rss_grp = (flow->npc_action >> NPC_RSS_ACT_GRP_OFFSET) & + NPC_RSS_ACT_GRP_MASK; + if (rss_grp == 0 || rss_grp >= npc->rss_grps) + return -EINVAL; + + plt_bitmap_clear(npc->rss_grp_entries, rss_grp); + } + + return 0; +} + +int roc_npc_flow_destroy(struct roc_npc *roc_npc, struct roc_npc_flow *flow) { struct npc *npc = roc_npc_to_npc_priv(roc_npc); @@ -699,6 +852,12 @@ roc_npc_flow_destroy(struct roc_npc *roc_npc, struct roc_npc_flow *flow) return NPC_ERR_PARAM; } + rc = npc_rss_group_free(npc, flow); + if (rc != 0) { + plt_err("Failed to free rss action rc = %d", rc); + return rc; + } + rc = npc_mcam_free_entry(npc, flow->mcam_id); if (rc != 0) return rc; diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h index 996739e..223c4ba 100644 --- a/drivers/common/cnxk/roc_npc.h +++ b/drivers/common/cnxk/roc_npc.h @@ -106,6 +106,24 @@ struct roc_npc_flow { TAILQ_ENTRY(roc_npc_flow) next; }; +enum roc_npc_rss_hash_function { + ROC_NPC_RSS_HASH_FUNCTION_DEFAULT = 0, + ROC_NPC_RSS_HASH_FUNCTION_TOEPLITZ, /**< Toeplitz */ + ROC_NPC_RSS_HASH_FUNCTION_SIMPLE_XOR, /**< Simple XOR */ + ROC_NPC_RSS_HASH_FUNCTION_SYMMETRIC_TOEPLITZ, + ROC_NPC_RSS_HASH_FUNCTION_MAX, +}; + +struct roc_npc_action_rss { + enum roc_npc_rss_hash_function func; + uint32_t level; + uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). 
*/ + uint32_t key_len; /**< Hash key length in bytes. */ + uint32_t queue_num; /**< Number of entries in @p queue. */ + const uint8_t *key; /**< Hash key. */ + const uint16_t *queue; /**< Queue indices to use. */ +}; + enum roc_npc_intf { ROC_NPC_INTF_RX = 0, ROC_NPC_INTF_TX = 1, @@ -121,6 +139,8 @@ struct roc_npc { uint16_t pf_func; uint64_t kex_capability; uint64_t rx_parse_nibble; + /* Parsed RSS Flowkey cfg for current flow being created */ + uint32_t flowkey_cfg_state; #define ROC_NPC_MEM_SZ (5 * 1024) uint8_t reserved[ROC_NPC_MEM_SZ]; diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c index 572c52d..ff0676d 100644 --- a/drivers/common/cnxk/roc_npc_mcam.c +++ b/drivers/common/cnxk/roc_npc_mcam.c @@ -692,6 +692,7 @@ npc_flow_free_all_resources(struct npc *npc) /* Free any MCAM counters and delete flow list */ for (idx = 0; idx < npc->flow_max_priority; idx++) { while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) { + npc_rss_group_free(npc, flow); if (flow->ctr_id != NPC_COUNTER_NONE) rc |= npc_mcam_free_counter(npc, flow->ctr_id); diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h index dcf26c0..8bc5bac 100644 --- a/drivers/common/cnxk/roc_npc_priv.h +++ b/drivers/common/cnxk/roc_npc_priv.h @@ -423,4 +423,13 @@ npc_parse_skip_void_and_any_items(const struct roc_npc_item_info *pattern); int npc_program_mcam(struct npc *npc, struct npc_parse_state *pst, bool mcam_alloc); uint64_t npc_get_kex_capability(struct npc *npc); +int npc_rss_free_grp_get(struct npc *npc, uint32_t *grp); +int npc_rss_action_configure(struct roc_npc *roc_npc, + const struct roc_npc_action_rss *rss, + uint8_t *alg_idx, uint32_t *rss_grp, + uint32_t mcam_id); +int npc_rss_action_program(struct roc_npc *roc_npc, + const struct roc_npc_action actions[], + struct roc_npc_flow *flow); +int npc_rss_group_free(struct npc *npc, struct roc_npc_flow *flow); #endif /* _ROC_NPC_PRIV_H_ */