From patchwork Tue Dec 19 17:39:41 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 135347
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Harman Kalra, Anatoly Burakov
CC: dev@dpdk.org
Subject: [PATCH v2 02/24] net/cnxk: implementing eswitch device
Date: Tue, 19 Dec 2023 23:09:41 +0530
Message-ID: <20231219174003.72901-3-hkalra@marvell.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20231219174003.72901-1-hkalra@marvell.com>

The eswitch device is the parent, or base, device behind all the
representors, acting as the transport layer between the representors
and the representees.

Signed-off-by: Harman Kalra
---
 drivers/net/cnxk/cnxk_eswitch.c | 465 ++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_eswitch.h | 103 +++++++
 drivers/net/cnxk/meson.build    |   1 +
 3 files changed, 569 insertions(+)
 create mode 100644 drivers/net/cnxk/cnxk_eswitch.c
 create mode 100644 drivers/net/cnxk/cnxk_eswitch.h

diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c
new file mode 100644
index 0000000000..51110a762d
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_eswitch.c
@@ -0,0 +1,465 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include
+
+#define CNXK_NIX_DEF_SQ_COUNT 512
+
+static int
+cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev)
+{
+	struct cnxk_eswitch_dev *eswitch_dev;
+	struct roc_nix *nix;
+	int rc = 0;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	eswitch_dev = cnxk_eswitch_pmd_priv();
+
+	/* Check if this device is hosting common resource */
+	nix = roc_idev_npa_nix_get();
+	if (!nix || nix->pci_dev != pci_dev) {
+		rc = -EINVAL;
+		goto exit;
+	}
+
+	/* Try nix fini now */
+	rc = roc_nix_dev_fini(&eswitch_dev->nix);
+	if (rc == -EAGAIN) {
+		plt_info("%s: common resource in use by other devices", pci_dev->name);
+		goto exit;
+	} else if (rc) {
+		plt_err("Failed in nix dev fini, rc=%d", rc);
+		goto exit;
+	}
+
+	rte_free(eswitch_dev);
+exit:
+	return rc;
+}
+
+static int
+eswitch_dev_nix_flow_ctrl_set(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	/* TODO enable flow control */
+	return 0;
+
+	enum roc_nix_fc_mode mode_map[] = {ROC_NIX_FC_NONE, ROC_NIX_FC_RX, ROC_NIX_FC_TX,
+					   ROC_NIX_FC_FULL};
+	struct roc_nix *nix = &eswitch_dev->nix;
+	struct roc_nix_fc_cfg fc_cfg;
+	uint8_t rx_pause, tx_pause;
+	struct roc_nix_sq *sq;
+	struct roc_nix_cq *cq;
+	struct roc_nix_rq *rq;
+	uint8_t tc;
+	int rc, i;
+
+	rx_pause = 1;
+	tx_pause = 1;
+
+	/* Check if TX pause frame is already enabled or not */
+	tc = tx_pause ? 0 : ROC_NIX_PFC_CLASS_INVALID;
+
+	for (i = 0; i < eswitch_dev->nb_rxq; i++) {
+		memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
+
+		rq = &eswitch_dev->rxq[i].rqs;
+		cq = &eswitch_dev->cxq[i].cqs;
+
+		fc_cfg.type = ROC_NIX_FC_RQ_CFG;
+		fc_cfg.rq_cfg.enable = !!tx_pause;
+		fc_cfg.rq_cfg.tc = tc;
+		fc_cfg.rq_cfg.rq = rq->qid;
+		fc_cfg.rq_cfg.pool = rq->aura_handle;
+		fc_cfg.rq_cfg.spb_pool = rq->spb_aura_handle;
+		fc_cfg.rq_cfg.cq_drop = cq->drop_thresh;
+		fc_cfg.rq_cfg.pool_drop_pct = ROC_NIX_AURA_THRESH;
+
+		rc = roc_nix_fc_config_set(nix, &fc_cfg);
+		if (rc)
+			return rc;
+	}
+
+	/* Check if RX pause frame is enabled or not */
+	tc = rx_pause ? 0 : ROC_NIX_PFC_CLASS_INVALID;
+	for (i = 0; i < eswitch_dev->nb_txq; i++) {
+		memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
+
+		sq = &eswitch_dev->txq[i].sqs;
+
+		fc_cfg.type = ROC_NIX_FC_TM_CFG;
+		fc_cfg.tm_cfg.sq = sq->qid;
+		fc_cfg.tm_cfg.tc = tc;
+		fc_cfg.tm_cfg.enable = !!rx_pause;
+		rc = roc_nix_fc_config_set(nix, &fc_cfg);
+		if (rc && rc != -EEXIST)
+			return rc;
+	}
+
+	rc = roc_nix_fc_mode_set(nix, mode_map[ROC_NIX_FC_FULL]);
+	if (rc)
+		return rc;
+
+	return rc;
+}
+
+int
+cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	int rc;
+
+	/* Update Flow control configuration */
+	rc = eswitch_dev_nix_flow_ctrl_set(eswitch_dev);
+	if (rc) {
+		plt_err("Failed to enable flow control. error code(%d)", rc);
+		goto done;
+	}
+
+	/* Enable Rx in NPC */
+	rc = roc_nix_npc_rx_ena_dis(&eswitch_dev->nix, true);
+	if (rc) {
+		plt_err("Failed to enable NPC rx %d", rc);
+		goto done;
+	}
+
+	rc = roc_npc_mcam_enable_all_entries(&eswitch_dev->npc, 1);
+	if (rc) {
+		plt_err("Failed to enable NPC entries %d", rc);
+		goto done;
+	}
+
+done:
+	return rc;
+}
+
+int
+cnxk_eswitch_txq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_sq *sq = &eswitch_dev->txq[qid].sqs;
+	int rc = -EINVAL;
+
+	if (eswitch_dev->txq[qid].state == CNXK_ESWITCH_QUEUE_STATE_STARTED)
+		return 0;
+
+	if (eswitch_dev->txq[qid].state != CNXK_ESWITCH_QUEUE_STATE_CONFIGURED) {
+		plt_err("Eswitch txq %d not configured yet", qid);
+		goto done;
+	}
+
+	rc = roc_nix_tm_sq_aura_fc(sq, true);
+	if (rc) {
+		plt_err("Failed to enable sq aura fc, txq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	eswitch_dev->txq[qid].state = CNXK_ESWITCH_QUEUE_STATE_STARTED;
done:
+	return rc;
+}
+
+int
+cnxk_eswitch_txq_stop(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_sq *sq = &eswitch_dev->txq[qid].sqs;
+	int rc = -EINVAL;
+
+	if (eswitch_dev->txq[qid].state == CNXK_ESWITCH_QUEUE_STATE_STOPPED ||
+	    eswitch_dev->txq[qid].state == CNXK_ESWITCH_QUEUE_STATE_RELEASED)
+		return 0;
+
+	if (eswitch_dev->txq[qid].state != CNXK_ESWITCH_QUEUE_STATE_STARTED) {
+		plt_err("Eswitch txq %d not started", qid);
+		goto done;
+	}
+
+	rc = roc_nix_tm_sq_aura_fc(sq, false);
+	if (rc) {
+		plt_err("Failed to disable sqb aura fc, txq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	eswitch_dev->txq[qid].state = CNXK_ESWITCH_QUEUE_STATE_STOPPED;
+done:
+	return rc;
+}
+
+int
+cnxk_eswitch_rxq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_rq *rq = &eswitch_dev->rxq[qid].rqs;
+	int rc = -EINVAL;
+
+	if (eswitch_dev->rxq[qid].state == CNXK_ESWITCH_QUEUE_STATE_STARTED)
+		return 0;
+
+	if (eswitch_dev->rxq[qid].state != CNXK_ESWITCH_QUEUE_STATE_CONFIGURED) {
+		plt_err("Eswitch rxq %d not configured yet", qid);
+		goto done;
+	}
+
+	rc = roc_nix_rq_ena_dis(rq, true);
+	if (rc) {
+		plt_err("Failed to enable rxq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	eswitch_dev->rxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_STARTED;
+done:
+	return rc;
+}
+
+int
+cnxk_eswitch_rxq_stop(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_rq *rq = &eswitch_dev->rxq[qid].rqs;
+	int rc = -EINVAL;
+
+	if (eswitch_dev->rxq[qid].state == CNXK_ESWITCH_QUEUE_STATE_STOPPED ||
+	    eswitch_dev->rxq[qid].state == CNXK_ESWITCH_QUEUE_STATE_RELEASED)
+		return 0;
+
+	if (eswitch_dev->rxq[qid].state != CNXK_ESWITCH_QUEUE_STATE_STARTED) {
+		plt_err("Eswitch rxq %d not started", qid);
+		goto done;
+	}
+
+	rc = roc_nix_rq_ena_dis(rq, false);
+	if (rc) {
+		plt_err("Failed to disable rxq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	eswitch_dev->rxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_STOPPED;
+done:
+	return rc;
+}
+
+int
+cnxk_eswitch_rxq_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_rq *rq;
+	struct roc_nix_cq *cq;
+	int rc;
+
+	if (eswitch_dev->rxq[qid].state == CNXK_ESWITCH_QUEUE_STATE_RELEASED)
+		return 0;
+
+	/* Cleanup ROC RQ */
+	rq = &eswitch_dev->rxq[qid].rqs;
+	rc = roc_nix_rq_fini(rq);
+	if (rc) {
+		plt_err("Failed to cleanup rq, rc=%d", rc);
+		goto fail;
+	}
+
+	eswitch_dev->rxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_RELEASED;
+
+	/* Cleanup ROC CQ */
+	cq = &eswitch_dev->cxq[qid].cqs;
+	rc = roc_nix_cq_fini(cq);
+	if (rc) {
+		plt_err("Failed to cleanup cq, rc=%d", rc);
+		goto fail;
+	}
+
+	eswitch_dev->cxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_RELEASED;
+fail:
+	return rc;
+}
+
+int
+cnxk_eswitch_rxq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint16_t nb_desc,
+		       const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
+{
+	struct roc_nix *nix = &eswitch_dev->nix;
+	struct rte_mempool *lpb_pool = mp;
+	struct rte_mempool_ops *ops;
+	const char *platform_ops;
+	struct roc_nix_rq *rq;
+	struct roc_nix_cq *cq;
+	uint16_t first_skip;
+	int rc = -EINVAL;
+
+	if (eswitch_dev->rxq[qid].state != CNXK_ESWITCH_QUEUE_STATE_RELEASED ||
+	    eswitch_dev->cxq[qid].state != CNXK_ESWITCH_QUEUE_STATE_RELEASED) {
+		plt_err("Queue %d is in invalid state %d, cannot be setup", qid,
+			eswitch_dev->rxq[qid].state);
+		goto fail;
+	}
+
+	RTE_SET_USED(rx_conf);
+	platform_ops = rte_mbuf_platform_mempool_ops();
+	/* This driver needs cnxk_npa mempool ops to work */
+	ops = rte_mempool_get_ops(lpb_pool->ops_index);
+	if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
+		plt_err("mempool ops should be of cnxk_npa type");
+		goto fail;
+	}
+
+	if (lpb_pool->pool_id == 0) {
+		plt_err("Invalid pool_id");
+		goto fail;
+	}
+
+	/* Setup ROC CQ */
+	cq = &eswitch_dev->cxq[qid].cqs;
+	memset(cq, 0, sizeof(struct roc_nix_cq));
+	cq->qid = qid;
+	cq->nb_desc = nb_desc;
+	rc = roc_nix_cq_init(nix, cq);
+	if (rc) {
+		plt_err("Failed to init roc cq for rq=%d, rc=%d", qid, rc);
+		goto fail;
+	}
+	eswitch_dev->cxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_CONFIGURED;
+
+	/* Setup ROC RQ */
+	rq = &eswitch_dev->rxq[qid].rqs;
+	memset(rq, 0, sizeof(struct roc_nix_rq));
+	rq->qid = qid;
+	rq->cqid = cq->qid;
+	rq->aura_handle = lpb_pool->pool_id;
+	rq->flow_tag_width = 32;
+	rq->sso_ena = false;
+
+	/* Calculate first mbuf skip */
+	first_skip = (sizeof(struct rte_mbuf));
+	first_skip += RTE_PKTMBUF_HEADROOM;
+	first_skip += rte_pktmbuf_priv_size(lpb_pool);
+	rq->first_skip = first_skip;
+	rq->later_skip = sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(lpb_pool);
+	rq->lpb_size = lpb_pool->elt_size;
+	if (roc_errata_nix_no_meta_aura())
+		rq->lpb_drop_ena = true;
+
+	rc = roc_nix_rq_init(nix, rq, true);
+	if (rc) {
+		plt_err("Failed to init roc rq for rq=%d, rc=%d", qid, rc);
+		goto cq_fini;
+	}
+	eswitch_dev->rxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_CONFIGURED;
+
+	return 0;
+cq_fini:
+	rc |= roc_nix_cq_fini(cq);
+fail:
+	return rc;
+}
+
+int
+cnxk_eswitch_txq_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_sq *sq;
+	int rc = 0;
+
+	if (eswitch_dev->txq[qid].state == CNXK_ESWITCH_QUEUE_STATE_RELEASED)
+		return 0;
+
+	/* Cleanup ROC SQ */
+	sq = &eswitch_dev->txq[qid].sqs;
+	rc = roc_nix_sq_fini(sq);
+	if (rc) {
+		plt_err("Failed to cleanup sq, rc=%d", rc);
+		goto fail;
+	}
+
+	eswitch_dev->txq[qid].state = CNXK_ESWITCH_QUEUE_STATE_RELEASED;
+fail:
+	return rc;
+}
+
+int
+cnxk_eswitch_txq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint16_t nb_desc,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct roc_nix_sq *sq;
+	int rc = 0;
+
+	if (eswitch_dev->txq[qid].state != CNXK_ESWITCH_QUEUE_STATE_RELEASED) {
+		plt_err("Queue %d is in invalid state %d, cannot be setup", qid,
+			eswitch_dev->txq[qid].state);
+		rc = -EINVAL;
+		goto fail;
+	}
+	RTE_SET_USED(tx_conf);
+	/* Setup ROC SQ */
+	sq = &eswitch_dev->txq[qid].sqs;
+	memset(sq, 0, sizeof(struct roc_nix_sq));
+	sq->qid = qid;
+	sq->nb_desc = nb_desc;
+	/* TODO: Revisit to enable MSEG nix_sq_max_sqe_sz(dev) */
+	sq->max_sqe_sz = NIX_MAXSQESZ_W8;
+	if (sq->nb_desc >= CNXK_NIX_DEF_SQ_COUNT)
+		sq->fc_hyst_bits = 0x1;
+
+	rc = roc_nix_sq_init(&eswitch_dev->nix, sq);
+	if (rc) {
+		plt_err("Failed to init sq=%d, rc=%d", qid, rc);
+		goto fail;
+	}
+
+	eswitch_dev->txq[qid].state = CNXK_ESWITCH_QUEUE_STATE_CONFIGURED;
+
+fail:
+	return rc;
+}
+
+static int
+cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+	struct cnxk_eswitch_dev *eswitch_dev;
+	const struct rte_memzone *mz = NULL;
+	int rc = -ENOMEM;
+
+	RTE_SET_USED(pci_drv);
+
+	eswitch_dev = cnxk_eswitch_pmd_priv();
+	if (!eswitch_dev) {
+		rc = roc_plt_init();
+		if (rc) {
+			plt_err("Failed to initialize platform model, rc=%d", rc);
+			return rc;
+		}
+
+		if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+			return 0;
+
+		mz = rte_memzone_reserve_aligned(CNXK_REP_ESWITCH_DEV_MZ, sizeof(*eswitch_dev),
+						 SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+		if (mz == NULL) {
+			plt_err("Failed to reserve a memzone");
+			rc = -ENOMEM;
+			goto fail;
+		}
+
+		eswitch_dev = mz->addr;
+		eswitch_dev->pci_dev = pci_dev;
+	}
+
+	/* Spinlock for synchronization between representors traffic and control
+	 * messages
+	 */
+	rte_spinlock_init(&eswitch_dev->rep_lock);
+
+	return 0;
+fail:
+	return rc;
+}
+
+static const struct rte_pci_id cnxk_eswitch_pci_map[] = {
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_RVU_ESWITCH_PF)},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct rte_pci_driver cnxk_eswitch_pci = {
+	.id_table = cnxk_eswitch_pci_map,
+	.drv_flags =
+		RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA | RTE_PCI_DRV_PROBE_AGAIN,
+	.probe = cnxk_eswitch_dev_probe,
+	.remove = cnxk_eswitch_dev_remove,
+};
+
+RTE_PMD_REGISTER_PCI(cnxk_eswitch, cnxk_eswitch_pci);
+RTE_PMD_REGISTER_PCI_TABLE(cnxk_eswitch, cnxk_eswitch_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(cnxk_eswitch, "vfio-pci");
diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h
new file mode 100644
index 0000000000..331397021b
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_eswitch.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __CNXK_ESWITCH_H__
+#define __CNXK_ESWITCH_H__
+
+#include
+#include
+
+#include
+
+#include "cn10k_tx.h"
+
+#define CNXK_ESWITCH_CTRL_MSG_SOCK_PATH "/tmp/cxk_rep_ctrl_msg_sock"
+#define CNXK_REP_ESWITCH_DEV_MZ         "cnxk_eswitch_dev"
+#define CNXK_ESWITCH_VLAN_TPID          0x8100 /* TODO change */
+#define CNXK_ESWITCH_MAX_TXQ            256
+#define CNXK_ESWITCH_MAX_RXQ            256
+#define CNXK_ESWITCH_LBK_CHAN           63
+#define CNXK_ESWITCH_VFPF_SHIFT         8
+
+#define CNXK_ESWITCH_QUEUE_STATE_RELEASED   0
+#define CNXK_ESWITCH_QUEUE_STATE_CONFIGURED 1
+#define CNXK_ESWITCH_QUEUE_STATE_STARTED    2
+#define CNXK_ESWITCH_QUEUE_STATE_STOPPED    3
+
+struct cnxk_rep_info {
+	struct rte_eth_dev *rep_eth_dev;
+};
+
+struct cnxk_eswitch_txq {
+	struct roc_nix_sq sqs;
+	uint8_t state;
+};
+
+struct cnxk_eswitch_rxq {
+	struct roc_nix_rq rqs;
+	uint8_t state;
+};
+
+struct cnxk_eswitch_cxq {
+	struct roc_nix_cq cqs;
+	uint8_t state;
+};
+
+TAILQ_HEAD(eswitch_flow_list, roc_npc_flow);
+struct cnxk_eswitch_dev {
+	/* Input parameters */
+	struct plt_pci_device *pci_dev;
+	/* ROC NIX */
+	struct roc_nix nix;
+
+	/* ROC NPC */
+	struct roc_npc npc;
+
+	/* ROC NPA */
+	struct rte_mempool *ctrl_chan_pool;
+	const struct plt_memzone *pktmem_mz;
+	uint64_t pkt_aura;
+
+	/* Eswitch RQs, SQs and CQs */
+	struct cnxk_eswitch_txq *txq;
+	struct cnxk_eswitch_rxq *rxq;
+	struct cnxk_eswitch_cxq *cxq;
+
+	/* Configured queue count */
+	uint16_t nb_rxq;
+	uint16_t nb_txq;
+	uint16_t rep_cnt;
+	uint8_t configured;
+
+	/* Port representor fields */
+	rte_spinlock_t rep_lock;
+	uint16_t switch_domain_id;
+	uint16_t eswitch_vdev;
+	struct cnxk_rep_info *rep_info;
+};
+
+static inline struct cnxk_eswitch_dev *
+cnxk_eswitch_pmd_priv(void)
+{
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(CNXK_REP_ESWITCH_DEV_MZ);
+	if (!mz)
+		return NULL;
+
+	return mz->addr;
+}
+
+int cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev);
+int cnxk_eswitch_txq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint16_t nb_desc,
+			   const struct rte_eth_txconf *tx_conf);
+int cnxk_eswitch_txq_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+int cnxk_eswitch_rxq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint16_t nb_desc,
+			   const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp);
+int cnxk_eswitch_rxq_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+int cnxk_eswitch_rxq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+int cnxk_eswitch_rxq_stop(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+int cnxk_eswitch_txq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+int cnxk_eswitch_txq_stop(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+#endif /* __CNXK_ESWITCH_H__ */
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index e83f3c9050..012d098f80 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'cnxk_ethdev_sec.c',
         'cnxk_ethdev_telemetry.c',
         'cnxk_ethdev_sec_telemetry.c',
+        'cnxk_eswitch.c',
         'cnxk_link.c',
         'cnxk_lookup.c',
         'cnxk_ptp.c',