From patchwork Tue Dec 19 17:39:40 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 135346
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Harman Kalra
Subject: [PATCH v2 01/24] common/cnxk: add support for representors
Date: Tue, 19 Dec 2023 23:09:40 +0530
Message-ID: <20231219174003.72901-2-hkalra@marvell.com>
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20231219174003.72901-1-hkalra@marvell.com>

- Mailbox for registering the base device behind all representors
- Registering a debug log type for representors

Signed-off-by: Harman Kalra
---
 drivers/common/cnxk/roc_constants.h |  1 +
 drivers/common/cnxk/roc_mbox.h      |  8 ++++++++
 drivers/common/cnxk/roc_nix.c       | 31 +++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix.h       |  3 +++
 drivers/common/cnxk/roc_platform.c  |  2 ++
 drivers/common/cnxk/roc_platform.h  |  4 ++++
 drivers/common/cnxk/version.map     |  3 +++
 7 files changed, 52 insertions(+)

diff --git a/drivers/common/cnxk/roc_constants.h b/drivers/common/cnxk/roc_constants.h
index 291b6a4bc9..cb4edbea58 100644
--- a/drivers/common/cnxk/roc_constants.h
+++ b/drivers/common/cnxk/roc_constants.h
@@ -43,6 +43,7 @@
 #define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1
 #define PCI_DEVID_CNXK_RVU_REE_PF     0xA0f4
 #define PCI_DEVID_CNXK_RVU_REE_VF     0xA0f5
+#define PCI_DEVID_CNXK_RVU_ESWITCH_PF 0xA0E0
 
 #define PCI_DEVID_CN9K_CGX  0xA059
 #define PCI_DEVID_CN10K_RPM 0xA060
diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h
index 3257a370bc..b7e2f43d45 100644
--- a/drivers/common/cnxk/roc_mbox.h
+++ b/drivers/common/cnxk/roc_mbox.h
@@ -68,6 +68,7 @@ struct mbox_msghdr {
 	M(NDC_SYNC_OP, 0x009, ndc_sync_op, ndc_sync_op, msg_rsp)              \
 	M(LMTST_TBL_SETUP, 0x00a, lmtst_tbl_setup, lmtst_tbl_setup_req,       \
 	  msg_rsp)                                                            \
+	M(GET_REP_CNT, 0x00d, get_rep_cnt, msg_req, get_rep_cnt_rsp)          \
 	/* CGX mbox IDs (range 0x200 - 0x3FF) */                              \
 	M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp)            \
 	M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp)              \
@@ -546,6 +547,13 @@ struct lmtst_tbl_setup_req {
 	uint64_t __io rsvd[2]; /* Future use */
 };
 
+#define MAX_PFVF_REP 64
+struct get_rep_cnt_rsp {
+	struct mbox_msghdr hdr;
+	uint16_t __io rep_cnt;
+	uint16_t __io rep_pfvf_map[MAX_PFVF_REP];
+};
+
 /* CGX mbox message formats */
 /* CGX mailbox error codes
  * Range 1101 - 1200.
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index f64933a1d9..7e327a7e6e 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -531,3 +531,34 @@ roc_nix_dev_fini(struct roc_nix *roc_nix)
 	rc |= dev_fini(&nix->dev, nix->pci_dev);
 	return rc;
 }
+
+int
+roc_nix_max_rep_count(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct dev *dev = &nix->dev;
+	struct mbox *mbox = mbox_get(dev->mbox);
+	struct get_rep_cnt_rsp *rsp;
+	struct msg_req *req;
+	int rc, i;
+
+	req = mbox_alloc_msg_get_rep_cnt(mbox);
+	if (!req) {
+		rc = -ENOSPC;
+		goto exit;
+	}
+
+	req->hdr.pcifunc = roc_nix_get_pf_func(roc_nix);
+
+	rc = mbox_process_msg(mbox, (void *)&rsp);
+	if (rc)
+		goto exit;
+
+	roc_nix->rep_cnt = rsp->rep_cnt;
+	for (i = 0; i < rsp->rep_cnt; i++)
+		roc_nix->rep_pfvf_map[i] = rsp->rep_pfvf_map[i];
+
+exit:
+	mbox_put(mbox);
+	return rc;
+}
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 84e6fc3df5..b369335fc4 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -483,6 +483,8 @@ struct roc_nix {
 	uint32_t buf_sz;
 	uint64_t meta_aura_handle;
 	uintptr_t meta_mempool;
+	uint16_t rep_cnt;
+	uint16_t rep_pfvf_map[MAX_PFVF_REP];
 	TAILQ_ENTRY(roc_nix) next;
 
 #define ROC_NIX_MEM_SZ (6 * 1070)
@@ -1013,4 +1015,5 @@ int __roc_api roc_nix_mcast_list_setup(struct mbox *mbox, uint8_t intf, int nb_e
 				       uint16_t *pf_funcs, uint16_t *channels, uint32_t *rqs,
 				       uint32_t *grp_index, uint32_t *start_index);
 int __roc_api roc_nix_mcast_list_free(struct mbox *mbox, uint32_t mcast_grp_index);
+int __roc_api roc_nix_max_rep_count(struct roc_nix *roc_nix);
 #endif /* _ROC_NIX_H_ */
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index 15cbb6d68f..181902a585 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -96,4 +96,6 @@ RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_sso, NOTICE);
 RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_tim, NOTICE);
 RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_tm, NOTICE);
 RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_dpi, NOTICE);
+RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_rep, NOTICE);
+RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_esw, NOTICE);
 RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_ree, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index ba23b2e0d7..e08eb7f6ba 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -264,6 +264,8 @@ extern int cnxk_logtype_tim;
 extern int cnxk_logtype_tm;
 extern int cnxk_logtype_ree;
 extern int cnxk_logtype_dpi;
+extern int cnxk_logtype_rep;
+extern int cnxk_logtype_esw;
 
 #define plt_err(fmt, args...)                                                  \
 	RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args)
@@ -293,6 +295,8 @@ extern int cnxk_logtype_dpi;
 #define plt_tm_dbg(fmt, ...)  plt_dbg(tm, fmt, ##__VA_ARGS__)
 #define plt_ree_dbg(fmt, ...) plt_dbg(ree, fmt, ##__VA_ARGS__)
 #define plt_dpi_dbg(fmt, ...) plt_dbg(dpi, fmt, ##__VA_ARGS__)
+#define plt_rep_dbg(fmt, ...) plt_dbg(rep, fmt, ##__VA_ARGS__)
+#define plt_esw_dbg(fmt, ...) plt_dbg(esw, fmt, ##__VA_ARGS__)
 
 /* Datapath logs */
 #define plt_dp_err(fmt, args...)                                               \
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 7b6afa63a9..bd28803013 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -8,12 +8,14 @@ INTERNAL {
	cnxk_logtype_base;
	cnxk_logtype_cpt;
	cnxk_logtype_dpi;
+	cnxk_logtype_esw;
	cnxk_logtype_mbox;
	cnxk_logtype_ml;
	cnxk_logtype_nix;
	cnxk_logtype_npa;
	cnxk_logtype_npc;
	cnxk_logtype_ree;
+	cnxk_logtype_rep;
	cnxk_logtype_sso;
	cnxk_logtype_tim;
	cnxk_logtype_tm;
@@ -216,6 +218,7 @@ INTERNAL {
	roc_nix_get_base_chan;
	roc_nix_get_pf;
	roc_nix_get_pf_func;
+	roc_nix_max_rep_count;
	roc_nix_get_rx_chan_cnt;
	roc_nix_get_vf;
	roc_nix_get_vwqe_interval;
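
Usage sketch (illustrative, not part of the patch): once the AF services
GET_REP_CNT, a caller would retrieve the representor map through the new
API roughly as below; "nix" is assumed to be an already-initialized
roc_nix instance and example_query_reps() is a hypothetical helper.

/* roc_nix_max_rep_count() fills roc_nix::rep_cnt and rep_pfvf_map[]
 * from the get_rep_cnt_rsp mailbox response shown above.
 */
static int
example_query_reps(struct roc_nix *nix)
{
	int rc, i;

	rc = roc_nix_max_rep_count(nix);
	if (rc)
		return rc;

	for (i = 0; i < nix->rep_cnt; i++)
		plt_nix_dbg("rep %d -> pcifunc 0x%x", i, nix->rep_pfvf_map[i]);

	return 0;
}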
From patchwork Tue Dec 19 17:39:41 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 135347
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Harman Kalra, Anatoly Burakov
Subject: [PATCH v2 02/24] net/cnxk: implementing eswitch device
Date: Tue, 19 Dec 2023 23:09:41 +0530
Message-ID: <20231219174003.72901-3-hkalra@marvell.com>
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20231219174003.72901-1-hkalra@marvell.com>

Eswitch device is a parent or base device behind all the representors,
acting as a transport layer between representors and representees.

Signed-off-by: Harman Kalra
---
 drivers/net/cnxk/cnxk_eswitch.c | 465 ++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_eswitch.h | 103 +++++++
 drivers/net/cnxk/meson.build    |   1 +
 3 files changed, 569 insertions(+)
 create mode 100644 drivers/net/cnxk/cnxk_eswitch.c
 create mode 100644 drivers/net/cnxk/cnxk_eswitch.h

diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c
new file mode 100644
index 0000000000..51110a762d
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_eswitch.c
@@ -0,0 +1,465 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include <cnxk_eswitch.h>
+
+#define CNXK_NIX_DEF_SQ_COUNT 512
+
+static int
+cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev)
+{
+	struct cnxk_eswitch_dev *eswitch_dev;
+	struct roc_nix *nix;
+	int rc = 0;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	eswitch_dev = cnxk_eswitch_pmd_priv();
+
+	/* Check if this device is hosting common resource */
+	nix = roc_idev_npa_nix_get();
+	if (!nix || nix->pci_dev != pci_dev) {
+		rc = -EINVAL;
+		goto exit;
+	}
+
+	/* Try nix fini now */
+	rc = roc_nix_dev_fini(&eswitch_dev->nix);
+	if (rc == -EAGAIN) {
+		plt_info("%s: common resource in use by other devices", pci_dev->name);
+		goto exit;
+	} else if (rc) {
+		plt_err("Failed in nix dev fini, rc=%d", rc);
+		goto exit;
+	}
+
+	rte_free(eswitch_dev);
+exit:
+	return rc;
+}
+
+static int
+eswitch_dev_nix_flow_ctrl_set(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	/* TODO enable flow control */
+	return 0;
+
+	enum roc_nix_fc_mode mode_map[] = {ROC_NIX_FC_NONE, ROC_NIX_FC_RX, ROC_NIX_FC_TX,
+					   ROC_NIX_FC_FULL};
+	struct roc_nix *nix = &eswitch_dev->nix;
+	struct roc_nix_fc_cfg fc_cfg;
+	uint8_t rx_pause, tx_pause;
+	struct roc_nix_sq *sq;
+	struct roc_nix_cq *cq;
+	struct roc_nix_rq *rq;
+	uint8_t tc;
+	int rc, i;
+
+	rx_pause = 1;
+	tx_pause = 1;
+
+	/* Check if TX pause frame is already enabled or not */
+	tc = tx_pause ? 0 : ROC_NIX_PFC_CLASS_INVALID;
+
+	for (i = 0; i < eswitch_dev->nb_rxq; i++) {
+		memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
+
+		rq = &eswitch_dev->rxq[i].rqs;
+		cq = &eswitch_dev->cxq[i].cqs;
+
+		fc_cfg.type = ROC_NIX_FC_RQ_CFG;
+		fc_cfg.rq_cfg.enable = !!tx_pause;
+		fc_cfg.rq_cfg.tc = tc;
+		fc_cfg.rq_cfg.rq = rq->qid;
+		fc_cfg.rq_cfg.pool = rq->aura_handle;
+		fc_cfg.rq_cfg.spb_pool = rq->spb_aura_handle;
+		fc_cfg.rq_cfg.cq_drop = cq->drop_thresh;
+		fc_cfg.rq_cfg.pool_drop_pct = ROC_NIX_AURA_THRESH;
+
+		rc = roc_nix_fc_config_set(nix, &fc_cfg);
+		if (rc)
+			return rc;
+	}
+
+	/* Check if RX pause frame is enabled or not */
+	tc = rx_pause ? 0 : ROC_NIX_PFC_CLASS_INVALID;
+	for (i = 0; i < eswitch_dev->nb_txq; i++) {
+		memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
+
+		sq = &eswitch_dev->txq[i].sqs;
+
+		fc_cfg.type = ROC_NIX_FC_TM_CFG;
+		fc_cfg.tm_cfg.sq = sq->qid;
+		fc_cfg.tm_cfg.tc = tc;
+		fc_cfg.tm_cfg.enable = !!rx_pause;
+		rc = roc_nix_fc_config_set(nix, &fc_cfg);
+		if (rc && rc != EEXIST)
+			return rc;
+	}
+
+	rc = roc_nix_fc_mode_set(nix, mode_map[ROC_NIX_FC_FULL]);
+	if (rc)
+		return rc;
+
+	return rc;
+}
+
+int
+cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	int rc;
+
+	/* Update Flow control configuration */
+	rc = eswitch_dev_nix_flow_ctrl_set(eswitch_dev);
+	if (rc) {
+		plt_err("Failed to enable flow control. error code(%d)", rc);
+		goto done;
+	}
+
+	/* Enable Rx in NPC */
+	rc = roc_nix_npc_rx_ena_dis(&eswitch_dev->nix, true);
+	if (rc) {
+		plt_err("Failed to enable NPC rx %d", rc);
+		goto done;
+	}
+
+	rc = roc_npc_mcam_enable_all_entries(&eswitch_dev->npc, 1);
+	if (rc) {
+		plt_err("Failed to enable NPC entries %d", rc);
+		goto done;
+	}
+
+done:
+	return rc;
+}
+
+int
+cnxk_eswitch_txq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_sq *sq = &eswitch_dev->txq[qid].sqs;
+	int rc = -EINVAL;
+
+	if (eswitch_dev->txq[qid].state == CNXK_ESWITCH_QUEUE_STATE_STARTED)
+		return 0;
+
+	if (eswitch_dev->txq[qid].state != CNXK_ESWITCH_QUEUE_STATE_CONFIGURED) {
+		plt_err("Eswitch txq %d not configured yet", qid);
+		goto done;
+	}
+
+	rc = roc_nix_tm_sq_aura_fc(sq, true);
+	if (rc) {
+		plt_err("Failed to enable sq aura fc, txq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	eswitch_dev->txq[qid].state = CNXK_ESWITCH_QUEUE_STATE_STARTED;
+done:
+	return rc;
+}
+
+int
+cnxk_eswitch_txq_stop(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_sq *sq = &eswitch_dev->txq[qid].sqs;
+	int rc = -EINVAL;
+
+	if (eswitch_dev->txq[qid].state == CNXK_ESWITCH_QUEUE_STATE_STOPPED ||
+	    eswitch_dev->txq[qid].state == CNXK_ESWITCH_QUEUE_STATE_RELEASED)
+		return 0;
+
+	if (eswitch_dev->txq[qid].state != CNXK_ESWITCH_QUEUE_STATE_STARTED) {
+		plt_err("Eswitch txq %d not started", qid);
+		goto done;
+	}
+
+	rc = roc_nix_tm_sq_aura_fc(sq, false);
+	if (rc) {
+		plt_err("Failed to disable sqb aura fc, txq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	eswitch_dev->txq[qid].state = CNXK_ESWITCH_QUEUE_STATE_STOPPED;
+done:
+	return rc;
+}
+
+int
+cnxk_eswitch_rxq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_rq *rq = &eswitch_dev->rxq[qid].rqs;
+	int rc = -EINVAL;
+
+	if (eswitch_dev->rxq[qid].state == CNXK_ESWITCH_QUEUE_STATE_STARTED)
+		return 0;
+
+	if (eswitch_dev->rxq[qid].state != CNXK_ESWITCH_QUEUE_STATE_CONFIGURED) {
+		plt_err("Eswitch rxq %d not configured yet", qid);
+		goto done;
+	}
+
+	rc = roc_nix_rq_ena_dis(rq, true);
+	if (rc) {
+		plt_err("Failed to enable rxq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	eswitch_dev->rxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_STARTED;
+done:
+	return rc;
+}
+
+int
+cnxk_eswitch_rxq_stop(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_rq *rq = &eswitch_dev->rxq[qid].rqs;
+	int rc = -EINVAL;
+
+	if (eswitch_dev->rxq[qid].state == CNXK_ESWITCH_QUEUE_STATE_STOPPED ||
+	    eswitch_dev->rxq[qid].state == CNXK_ESWITCH_QUEUE_STATE_RELEASED)
+		return 0;
+
+	if (eswitch_dev->rxq[qid].state != CNXK_ESWITCH_QUEUE_STATE_STARTED) {
+		plt_err("Eswitch rxq %d not started", qid);
+		goto done;
+	}
+
+	rc = roc_nix_rq_ena_dis(rq, false);
+	if (rc) {
+		plt_err("Failed to disable rxq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	eswitch_dev->rxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_STOPPED;
+done:
+	return rc;
+}
+
+int
+cnxk_eswitch_rxq_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_rq *rq;
+	struct roc_nix_cq *cq;
+	int rc;
+
+	if (eswitch_dev->rxq[qid].state == CNXK_ESWITCH_QUEUE_STATE_RELEASED)
+		return 0;
+
+	/* Cleanup ROC RQ */
+	rq = &eswitch_dev->rxq[qid].rqs;
+	rc = roc_nix_rq_fini(rq);
+	if (rc) {
+		plt_err("Failed to cleanup rq, rc=%d", rc);
+		goto fail;
+	}
+
+	eswitch_dev->rxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_RELEASED;
+
+	/* Cleanup ROC CQ */
+	cq = &eswitch_dev->cxq[qid].cqs;
+	rc = roc_nix_cq_fini(cq);
+	if (rc) {
+		plt_err("Failed to cleanup cq, rc=%d", rc);
+		goto fail;
+	}
+
+	eswitch_dev->cxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_RELEASED;
+fail:
+	return rc;
+}
+
+int
+cnxk_eswitch_rxq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint16_t nb_desc,
+		       const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
+{
+	struct roc_nix *nix = &eswitch_dev->nix;
+	struct rte_mempool *lpb_pool = mp;
+	struct rte_mempool_ops *ops;
+	const char *platform_ops;
+	struct roc_nix_rq *rq;
+	struct roc_nix_cq *cq;
+	uint16_t first_skip;
+	int rc = -EINVAL;
+
+	if (eswitch_dev->rxq[qid].state != CNXK_ESWITCH_QUEUE_STATE_RELEASED ||
+	    eswitch_dev->cxq[qid].state != CNXK_ESWITCH_QUEUE_STATE_RELEASED) {
+		plt_err("Queue %d is in invalid state %d, cannot be setup", qid,
+			eswitch_dev->rxq[qid].state);
+		goto fail;
+	}
+
+	RTE_SET_USED(rx_conf);
+	platform_ops = rte_mbuf_platform_mempool_ops();
+	/* This driver needs cnxk_npa mempool ops to work */
+	ops = rte_mempool_get_ops(lpb_pool->ops_index);
+	if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
+		plt_err("mempool ops should be of cnxk_npa type");
+		goto fail;
+	}
+
+	if (lpb_pool->pool_id == 0) {
+		plt_err("Invalid pool_id");
+		goto fail;
+	}
+
+	/* Setup ROC CQ */
+	cq = &eswitch_dev->cxq[qid].cqs;
+	memset(cq, 0, sizeof(struct roc_nix_cq));
+	cq->qid = qid;
+	cq->nb_desc = nb_desc;
+	rc = roc_nix_cq_init(nix, cq);
+	if (rc) {
+		plt_err("Failed to init roc cq for rq=%d, rc=%d", qid, rc);
+		goto fail;
+	}
+	eswitch_dev->cxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_CONFIGURED;
+
+	/* Setup ROC RQ */
+	rq = &eswitch_dev->rxq[qid].rqs;
+	memset(rq, 0, sizeof(struct roc_nix_rq));
+	rq->qid = qid;
+	rq->cqid = cq->qid;
+	rq->aura_handle = lpb_pool->pool_id;
+	rq->flow_tag_width = 32;
+	rq->sso_ena = false;
+
+	/* Calculate first mbuf skip */
+	first_skip = (sizeof(struct rte_mbuf));
+	first_skip += RTE_PKTMBUF_HEADROOM;
+	first_skip += rte_pktmbuf_priv_size(lpb_pool);
+	rq->first_skip = first_skip;
+	rq->later_skip = sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(lpb_pool);
+	rq->lpb_size = lpb_pool->elt_size;
+	if (roc_errata_nix_no_meta_aura())
+		rq->lpb_drop_ena = true;
+
+	rc = roc_nix_rq_init(nix, rq, true);
+	if (rc) {
+		plt_err("Failed to init roc rq for rq=%d, rc=%d", qid, rc);
+		goto cq_fini;
+	}
+	eswitch_dev->rxq[qid].state = CNXK_ESWITCH_QUEUE_STATE_CONFIGURED;
+
+	return 0;
+cq_fini:
+	rc |= roc_nix_cq_fini(cq);
+fail:
+	return rc;
+}
+
+int
+cnxk_eswitch_txq_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid)
+{
+	struct roc_nix_sq *sq;
+	int rc = 0;
+
+	if (eswitch_dev->txq[qid].state == CNXK_ESWITCH_QUEUE_STATE_RELEASED)
+		return 0;
+
+	/* Cleanup ROC SQ */
+	sq = &eswitch_dev->txq[qid].sqs;
+	rc = roc_nix_sq_fini(sq);
+	if (rc) {
+		plt_err("Failed to cleanup sq, rc=%d", rc);
+		goto fail;
+	}
+
+	eswitch_dev->txq[qid].state = CNXK_ESWITCH_QUEUE_STATE_RELEASED;
+fail:
+	return rc;
+}
+
+int
+cnxk_eswitch_txq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint16_t nb_desc,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct roc_nix_sq *sq;
+	int rc = 0;
+
+	if (eswitch_dev->txq[qid].state != CNXK_ESWITCH_QUEUE_STATE_RELEASED) {
+		plt_err("Queue %d is in invalid state %d, cannot be setup", qid,
+			eswitch_dev->txq[qid].state);
+		rc = -EINVAL;
+		goto fail;
+	}
+	RTE_SET_USED(tx_conf);
+	/* Setup ROC SQ */
+	sq = &eswitch_dev->txq[qid].sqs;
+	memset(sq, 0, sizeof(struct roc_nix_sq));
+	sq->qid = qid;
+	sq->nb_desc = nb_desc;
+	/* TODO: Revisit to enable MSEG nix_sq_max_sqe_sz(dev) */
+	sq->max_sqe_sz = NIX_MAXSQESZ_W8;
+	if (sq->nb_desc >= CNXK_NIX_DEF_SQ_COUNT)
+		sq->fc_hyst_bits = 0x1;
+
+	rc = roc_nix_sq_init(&eswitch_dev->nix, sq);
+	if (rc) {
+		plt_err("Failed to init sq=%d, rc=%d", qid, rc);
+		goto fail;
+	}
+
+	eswitch_dev->txq[qid].state = CNXK_ESWITCH_QUEUE_STATE_CONFIGURED;
+
+fail:
+	return rc;
+}
+
+static int
+cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+	struct cnxk_eswitch_dev *eswitch_dev;
+	const struct rte_memzone *mz = NULL;
+	int rc = -ENOMEM;
+
+	RTE_SET_USED(pci_drv);
+
+	eswitch_dev = cnxk_eswitch_pmd_priv();
+	if (!eswitch_dev) {
+		rc = roc_plt_init();
+		if (rc) {
+			plt_err("Failed to initialize platform model, rc=%d", rc);
+			return rc;
+		}
+
+		if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+			return 0;
+
+		mz = rte_memzone_reserve_aligned(CNXK_REP_ESWITCH_DEV_MZ, sizeof(*eswitch_dev),
+						 SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+		if (mz == NULL) {
+			plt_err("Failed to reserve a memzone");
+			goto fail;
+		}
+
+		eswitch_dev = mz->addr;
+		eswitch_dev->pci_dev = pci_dev;
+	}
+
+	/* Spinlock for synchronization between representors traffic and control
+	 * messages
+	 */
+	rte_spinlock_init(&eswitch_dev->rep_lock);
+
+	return rc;
+fail:
+	return rc;
+}
+
+static const struct rte_pci_id cnxk_eswitch_pci_map[] = {
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_RVU_ESWITCH_PF)},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct rte_pci_driver cnxk_eswitch_pci = {
+	.id_table = cnxk_eswitch_pci_map,
+	.drv_flags =
+		RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA | RTE_PCI_DRV_PROBE_AGAIN,
+	.probe = cnxk_eswitch_dev_probe,
+	.remove = cnxk_eswitch_dev_remove,
+};
+
+RTE_PMD_REGISTER_PCI(cnxk_eswitch, cnxk_eswitch_pci);
+RTE_PMD_REGISTER_PCI_TABLE(cnxk_eswitch, cnxk_eswitch_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(cnxk_eswitch, "vfio-pci");
diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h
new file mode 100644
index 0000000000..331397021b
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_eswitch.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __CNXK_ESWITCH_H__
+#define __CNXK_ESWITCH_H__
+
+#include <sys/socket.h>
+#include <sys/un.h>
+
+#include <cnxk_ethdev.h>
+
+#include "cn10k_tx.h"
+
+#define CNXK_ESWITCH_CTRL_MSG_SOCK_PATH "/tmp/cxk_rep_ctrl_msg_sock"
+#define CNXK_REP_ESWITCH_DEV_MZ		"cnxk_eswitch_dev"
+#define CNXK_ESWITCH_VLAN_TPID		0x8100 /* TODO change */
+#define CNXK_ESWITCH_MAX_TXQ		256
+#define CNXK_ESWITCH_MAX_RXQ		256
+#define CNXK_ESWITCH_LBK_CHAN		63
+#define CNXK_ESWITCH_VFPF_SHIFT		8
+
+#define CNXK_ESWITCH_QUEUE_STATE_RELEASED   0
+#define CNXK_ESWITCH_QUEUE_STATE_CONFIGURED 1
+#define CNXK_ESWITCH_QUEUE_STATE_STARTED    2
+#define CNXK_ESWITCH_QUEUE_STATE_STOPPED    3
+
+struct cnxk_rep_info {
+	struct rte_eth_dev *rep_eth_dev;
+};
+
+struct cnxk_eswitch_txq {
+	struct roc_nix_sq sqs;
+	uint8_t state;
+};
+
+struct cnxk_eswitch_rxq {
+	struct roc_nix_rq rqs;
+	uint8_t state;
+};
+
+struct cnxk_eswitch_cxq {
+	struct roc_nix_cq cqs;
+	uint8_t state;
+};
+
+TAILQ_HEAD(eswitch_flow_list, roc_npc_flow);
+struct cnxk_eswitch_dev {
+	/* Input parameters */
+	struct plt_pci_device *pci_dev;
+	/* ROC NIX */
+	struct roc_nix nix;
+
+	/* ROC NPC */
+	struct roc_npc npc;
+
+	/* ROC NPA */
+	struct rte_mempool *ctrl_chan_pool;
+	const struct plt_memzone *pktmem_mz;
+	uint64_t pkt_aura;
+
+	/* Eswitch RQs, SQs and CQs */
+	struct cnxk_eswitch_txq *txq;
+	struct cnxk_eswitch_rxq *rxq;
+	struct cnxk_eswitch_cxq *cxq;
+
+	/* Configured queue count */
+	uint16_t nb_rxq;
+	uint16_t nb_txq;
+	uint16_t rep_cnt;
+	uint8_t configured;
+
+	/* Port representor fields */
+	rte_spinlock_t rep_lock;
+	uint16_t switch_domain_id;
+	uint16_t eswitch_vdev;
+	struct cnxk_rep_info *rep_info;
+};
+
+static inline struct cnxk_eswitch_dev *
+cnxk_eswitch_pmd_priv(void)
+{
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(CNXK_REP_ESWITCH_DEV_MZ);
+	if (!mz)
+		return NULL;
+
+	return mz->addr;
+}
+
+int cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev);
+int cnxk_eswitch_txq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint16_t nb_desc,
+			   const struct rte_eth_txconf *tx_conf);
+int cnxk_eswitch_txq_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+int cnxk_eswitch_rxq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint16_t nb_desc,
+			   const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp);
+int cnxk_eswitch_rxq_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+int cnxk_eswitch_rxq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+int cnxk_eswitch_rxq_stop(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+int cnxk_eswitch_txq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+int cnxk_eswitch_txq_stop(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
+#endif /* __CNXK_ESWITCH_H__ */
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index e83f3c9050..012d098f80 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'cnxk_ethdev_sec.c',
         'cnxk_ethdev_telemetry.c',
         'cnxk_ethdev_sec_telemetry.c',
+        'cnxk_eswitch.c',
         'cnxk_link.c',
         'cnxk_lookup.c',
         'cnxk_ptp.c',
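
The CNXK_ESWITCH_QUEUE_STATE_* defines above encode a strict per-queue
lifecycle. A minimal sketch of how a caller is expected to drive it
(error handling trimmed; the mempool must use cnxk_npa ops per the check
in cnxk_eswitch_rxq_setup(), and example_queue_cycle() is hypothetical):

static void
example_queue_cycle(struct rte_mempool *mp)
{
	/* The eswitch device is a process-wide singleton, reachable from
	 * anywhere in the PMD via the memzone-backed lookup.
	 */
	struct cnxk_eswitch_dev *esw = cnxk_eswitch_pmd_priv();
	uint16_t qid = 0;

	cnxk_eswitch_rxq_setup(esw, qid, 512, NULL, mp); /* RELEASED -> CONFIGURED */
	cnxk_eswitch_txq_setup(esw, qid, 512, NULL);
	cnxk_eswitch_rxq_start(esw, qid);                /* CONFIGURED -> STARTED */
	cnxk_eswitch_txq_start(esw, qid);

	cnxk_eswitch_txq_stop(esw, qid);                 /* STARTED -> STOPPED */
	cnxk_eswitch_rxq_stop(esw, qid);
	cnxk_eswitch_txq_release(esw, qid);              /* STOPPED -> RELEASED */
	cnxk_eswitch_rxq_release(esw, qid);
}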
From patchwork Tue Dec 19 17:39:42 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 135348
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Harman Kalra
Subject: [PATCH v2 03/24] net/cnxk: eswitch HW resource configuration
Date: Tue, 19 Dec 2023 23:09:42 +0530
Message-ID: <20231219174003.72901-4-hkalra@marvell.com>
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20231219174003.72901-1-hkalra@marvell.com>

Configuring the hardware resources used by the eswitch device.
Signed-off-by: Harman Kalra
---
 drivers/net/cnxk/cnxk_eswitch.c | 206 ++++++++++++++++++++++++++++++++
 1 file changed, 206 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c
index 51110a762d..306edc6037 100644
--- a/drivers/net/cnxk/cnxk_eswitch.c
+++ b/drivers/net/cnxk/cnxk_eswitch.c
@@ -6,6 +6,30 @@
 
 #define CNXK_NIX_DEF_SQ_COUNT 512
 
+static int
+eswitch_hw_rsrc_cleanup(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	struct roc_nix *nix;
+	int rc = 0;
+
+	nix = &eswitch_dev->nix;
+
+	roc_nix_unregister_queue_irqs(nix);
+	roc_nix_tm_fini(nix);
+	rc = roc_nix_lf_free(nix);
+	if (rc) {
+		plt_err("Failed to free NIX LF, rc %d", rc);
+		goto exit;
+	}
+
+	rte_free(eswitch_dev->txq);
+	rte_free(eswitch_dev->rxq);
+	rte_free(eswitch_dev->cxq);
+
+exit:
+	return rc;
+}
+
 static int
 cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev)
 {
@@ -18,6 +42,7 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev)
 
 	eswitch_dev = cnxk_eswitch_pmd_priv();
 
+	eswitch_hw_rsrc_cleanup(eswitch_dev);
 	/* Check if this device is hosting common resource */
 	nix = roc_idev_npa_nix_get();
 	if (!nix || nix->pci_dev != pci_dev) {
@@ -404,6 +429,178 @@ cnxk_eswitch_txq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint1
 	return rc;
 }
 
+static int
+nix_lf_setup(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	uint16_t nb_rxq, nb_txq, nb_cq;
+	struct roc_nix_fc_cfg fc_cfg;
+	struct roc_nix *nix;
+	uint64_t rx_cfg;
+	void *qs;
+	int rc;
+
+	/* Initialize base roc nix */
+	nix = &eswitch_dev->nix;
+	nix->pci_dev = eswitch_dev->pci_dev;
+	nix->hw_vlan_ins = true;
+	nix->reta_sz = ROC_NIX_RSS_RETA_SZ_256;
+	rc = roc_nix_dev_init(nix);
+	if (rc) {
+		plt_err("Failed to init nix eswitch device, rc=%d(%s)", rc, roc_error_msg_get(rc));
+		goto fail;
+	}
+
+	/* Get the representors count */
+	rc = roc_nix_max_rep_count(&eswitch_dev->nix);
+	if (rc) {
+		plt_err("Failed to get rep cnt, rc=%d(%s)", rc, roc_error_msg_get(rc));
+		goto free_cqs;
+	}
+
+	/* Allocating an NIX LF */
+	nb_rxq = CNXK_ESWITCH_MAX_RXQ;
+	nb_txq = CNXK_ESWITCH_MAX_TXQ;
+	nb_cq = CNXK_ESWITCH_MAX_RXQ;
+	rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
+	rc = roc_nix_lf_alloc(nix, nb_rxq, nb_txq, rx_cfg);
+	if (rc) {
+		plt_err("lf alloc failed = %s(%d)", roc_error_msg_get(rc), rc);
+		goto dev_fini;
+	}
+
+	if (nb_rxq) {
+		/* Allocate memory for eswitch rq's and cq's */
+		qs = plt_zmalloc(sizeof(struct cnxk_eswitch_rxq) * nb_rxq, 0);
+		if (!qs) {
+			plt_err("Failed to alloc eswitch rxq");
+			goto lf_free;
+		}
+		eswitch_dev->rxq = qs;
+	}
+
+	if (nb_txq) {
+		/* Allocate memory for roc sq's */
+		qs = plt_zmalloc(sizeof(struct cnxk_eswitch_txq) * nb_txq, 0);
+		if (!qs) {
+			plt_err("Failed to alloc eswitch txq");
+			goto free_rqs;
+		}
+		eswitch_dev->txq = qs;
+	}
+
+	if (nb_cq) {
+		qs = plt_zmalloc(sizeof(struct cnxk_eswitch_cxq) * nb_cq, 0);
+		if (!qs) {
+			plt_err("Failed to alloc eswitch cxq");
+			goto free_sqs;
+		}
+		eswitch_dev->cxq = qs;
+	}
+
+	eswitch_dev->nb_rxq = nb_rxq;
+	eswitch_dev->nb_txq = nb_txq;
+
+	/* Re-enable NIX LF error interrupts */
+	roc_nix_err_intr_ena_dis(nix, true);
+	roc_nix_ras_intr_ena_dis(nix, true);
+
+	rc = roc_nix_lso_fmt_setup(nix);
+	if (rc) {
+		plt_err("lso setup failed = %s(%d)", roc_error_msg_get(rc), rc);
+		goto free_cqs;
+	}
+
+	rc = roc_nix_switch_hdr_set(nix, 0, 0, 0, 0);
+	if (rc) {
+		plt_err("switch hdr set failed = %s(%d)", roc_error_msg_get(rc), rc);
+		goto free_cqs;
+	}
+
+	rc = roc_nix_rss_default_setup(nix,
+				       FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_UDP);
+	if (rc) {
+		plt_err("rss default setup failed = %s(%d)", roc_error_msg_get(rc), rc);
+		goto free_cqs;
+	}
+
+	rc = roc_nix_tm_init(nix);
+	if (rc) {
+		plt_err("tm failed = %s(%d)", roc_error_msg_get(rc), rc);
+		goto free_cqs;
+	}
+
+	/* Register queue IRQs */
+	rc = roc_nix_register_queue_irqs(nix);
+	if (rc) {
+		plt_err("Failed to register queue interrupts rc=%d", rc);
+		goto tm_fini;
+	}
+
+	/* Enable default tree */
+	rc = roc_nix_tm_hierarchy_enable(nix, ROC_NIX_TM_DEFAULT, false);
+	if (rc) {
+		plt_err("tm default hierarchy enable failed = %s(%d)", roc_error_msg_get(rc), rc);
+		goto q_irq_fini;
+	}
+
+	/* TODO: Revisit Enable flow control */
+	memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
+	fc_cfg.rxchan_cfg.enable = false;
+	rc = roc_nix_fc_config_set(nix, &fc_cfg);
+	if (rc) {
+		plt_err("Failed to setup flow control, rc=%d(%s)", rc, roc_error_msg_get(rc));
+		goto q_irq_fini;
+	}
+
+	roc_nix_fc_mode_get(nix);
+
+	return rc;
+q_irq_fini:
+	roc_nix_unregister_queue_irqs(nix);
+tm_fini:
+	roc_nix_tm_fini(nix);
+free_cqs:
+	rte_free(eswitch_dev->cxq);
+free_sqs:
+	rte_free(eswitch_dev->txq);
+free_rqs:
+	rte_free(eswitch_dev->rxq);
+lf_free:
+	roc_nix_lf_free(nix);
+dev_fini:
+	roc_nix_dev_fini(nix);
+fail:
+	return rc;
+}
+
+static int
+eswitch_hw_rsrc_setup(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	struct roc_nix *nix;
+	int rc;
+
+	nix = &eswitch_dev->nix;
+	rc = nix_lf_setup(eswitch_dev);
+	if (rc) {
+		plt_err("Failed to setup hw rsrc, rc=%d(%s)", rc, roc_error_msg_get(rc));
+		goto fail;
+	}
+
+	/* Initialize roc npc */
+	eswitch_dev->npc.roc_nix = nix;
+	eswitch_dev->npc.flow_max_priority = 3;
+	eswitch_dev->npc.flow_prealloc_size = 1;
+	rc = roc_npc_init(&eswitch_dev->npc);
+	if (rc)
+		goto rsrc_cleanup;
+
+	return rc;
+rsrc_cleanup:
+	eswitch_hw_rsrc_cleanup(eswitch_dev);
+fail:
+	return rc;
+}
+
 static int
 cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 {
@@ -433,6 +630,12 @@ cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pc
 
 		eswitch_dev = mz->addr;
 		eswitch_dev->pci_dev = pci_dev;
+
+		rc = eswitch_hw_rsrc_setup(eswitch_dev);
+		if (rc) {
+			plt_err("Failed to setup hw rsrc, rc=%d(%s)", rc, roc_error_msg_get(rc));
+			goto free_mem;
+		}
 	}
 
 	/* Spinlock for synchronization between representors traffic and control
@@ -441,6 +644,9 @@ cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pc
 	rte_spinlock_init(&eswitch_dev->rep_lock);
 
 	return rc;
+free_mem:
+	if (mz)
+		rte_memzone_free(mz);
 fail:
 	return rc;
 }
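
Taken together with the previous patch, the implied probe-time bring-up
order is roughly as sketched here (hypothetical helper; error handling
and the per-queue loops are omitted, and eswitch_hw_rsrc_setup() is
static to cnxk_eswitch.c):

static int
example_eswitch_bringup(struct cnxk_eswitch_dev *esw, struct rte_mempool *mp)
{
	int rc;

	rc = eswitch_hw_rsrc_setup(esw);	/* NIX LF, TM, RSS, IRQs, NPC (this patch) */
	if (rc)
		return rc;

	rc = cnxk_eswitch_rxq_setup(esw, 0, 512, NULL, mp);	/* patch 02 */
	rc |= cnxk_eswitch_txq_setup(esw, 0, 512, NULL);
	if (rc)
		return rc;

	return cnxk_eswitch_nix_rsrc_start(esw);	/* flow ctrl + NPC Rx enable (patch 02) */
}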
From patchwork Tue Dec 19 17:39:43 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 135349
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Harman Kalra
Subject: [PATCH v2 04/24] net/cnxk: eswitch devargs parsing
Date: Tue, 19 Dec 2023 23:09:43 +0530
Message-ID: <20231219174003.72901-5-hkalra@marvell.com>
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20231219174003.72901-1-hkalra@marvell.com>

Implementing the devargs parsing logic via which the representor
patterns are provided. These patterns define the representees for which
representors shall be created.

Signed-off-by: Harman Kalra
---
 drivers/net/cnxk/cnxk_eswitch.c         |  88 +++++++++
 drivers/net/cnxk/cnxk_eswitch.h         |  52 ++++++
 drivers/net/cnxk/cnxk_eswitch_devargs.c | 236 ++++++++++++++++++++++++
 drivers/net/cnxk/meson.build            |   1 +
 4 files changed, 377 insertions(+)
 create mode 100644 drivers/net/cnxk/cnxk_eswitch_devargs.c
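For context, the parser added below accepts patterns of the form pf0
(a PF representor), pf0vf[0-3] (VF representors 0..3 of PF 0), or
comma/range lists such as vf[1,3-4]; devargs_process_range() and
devargs_process_list() in the new file implement the exact grammar.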
diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c
index 306edc6037..739a09c034 100644
--- a/drivers/net/cnxk/cnxk_eswitch.c
+++ b/drivers/net/cnxk/cnxk_eswitch.c
@@ -456,6 +456,7 @@ nix_lf_setup(struct cnxk_eswitch_dev *eswitch_dev)
 		plt_err("Failed to get rep cnt, rc=%d(%s)", rc, roc_error_msg_get(rc));
 		goto free_cqs;
 	}
+	eswitch_dev->repr_cnt.max_repr = eswitch_dev->nix.rep_cnt;
 
 	/* Allocating an NIX LF */
 	nb_rxq = CNXK_ESWITCH_MAX_RXQ;
@@ -601,11 +602,73 @@ eswitch_hw_rsrc_setup(struct cnxk_eswitch_dev *eswitch_dev)
 	return rc;
 }
 
+int
+cnxk_eswitch_representor_info_get(struct cnxk_eswitch_dev *eswitch_dev,
+				  struct rte_eth_representor_info *info)
+{
+	struct cnxk_eswitch_devargs *esw_da;
+	int rc = 0, n_entries, i, j = 0, k = 0;
+
+	for (i = 0; i < eswitch_dev->nb_esw_da; i++) {
+		for (j = 0; j < eswitch_dev->esw_da[i].nb_repr_ports; j++)
+			k++;
+	}
+	n_entries = k;
+
+	if (info == NULL)
+		goto out;
+
+	if ((uint32_t)n_entries > info->nb_ranges_alloc)
+		n_entries = info->nb_ranges_alloc;
+
+	k = 0;
+	info->controller = 0;
+	info->pf = 0;
+	for (i = 0; i < eswitch_dev->nb_esw_da; i++) {
+		esw_da = &eswitch_dev->esw_da[i];
+		info->ranges[k].type = esw_da->da.type;
+		switch (esw_da->da.type) {
+		case RTE_ETH_REPRESENTOR_PF:
+			info->ranges[k].controller = 0;
+			info->ranges[k].pf = esw_da->repr_hw_info[0].pfvf;
+			info->ranges[k].vf = 0;
+			info->ranges[k].id_base = info->ranges[k].pf;
+			info->ranges[k].id_end = info->ranges[k].pf;
+			snprintf(info->ranges[k].name, sizeof(info->ranges[k].name), "pf%d",
+				 info->ranges[k].pf);
+			k++;
+			break;
+		case RTE_ETH_REPRESENTOR_VF:
+			for (j = 0; j < esw_da->nb_repr_ports; j++) {
+				info->ranges[k].controller = 0;
+				info->ranges[k].pf = esw_da->da.ports[0];
+				info->ranges[k].vf = esw_da->repr_hw_info[j].pfvf;
+				info->ranges[k].id_base = esw_da->repr_hw_info[j].port_id;
+				info->ranges[k].id_end = esw_da->repr_hw_info[j].port_id;
+				snprintf(info->ranges[k].name, sizeof(info->ranges[k].name),
+					 "pf%dvf%d", info->ranges[k].pf, info->ranges[k].vf);
+				k++;
+			}
+			break;
+		default:
+			plt_err("Invalid type %d", esw_da->da.type);
+			rc = -EINVAL;
+			goto fail;
+		};
+	}
+	info->nb_ranges = k;
+fail:
+	return rc;
+out:
+	return n_entries;
+}
+
 static int
 cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 {
 	struct cnxk_eswitch_dev *eswitch_dev;
 	const struct rte_memzone *mz = NULL;
+	uint16_t num_reps;
 	int rc = -ENOMEM;
 
 	RTE_SET_USED(pci_drv);
@@ -638,12 +701,37 @@ cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pc
 		}
 	}
 
+	if (pci_dev->device.devargs) {
+		rc = cnxk_eswitch_repr_devargs(pci_dev, eswitch_dev);
+		if (rc)
+			goto rsrc_cleanup;
+	}
+
+	if (eswitch_dev->repr_cnt.nb_repr_created > eswitch_dev->repr_cnt.max_repr) {
+		plt_err("Representors to be created %d cannot be greater than max allowed %d",
+			eswitch_dev->repr_cnt.nb_repr_created, eswitch_dev->repr_cnt.max_repr);
+		rc = -EINVAL;
+		goto rsrc_cleanup;
+	}
+
+	num_reps = eswitch_dev->repr_cnt.nb_repr_created;
+	if (!num_reps) {
+		plt_err("No representors enabled");
+		rc = -EINVAL;
+		goto fail;
+	}
+
+	plt_esw_dbg("Max no of reps %d reps to be created %d Eswitch pfunc %x",
+		    eswitch_dev->repr_cnt.max_repr, eswitch_dev->repr_cnt.nb_repr_created,
+		    roc_nix_get_pf_func(&eswitch_dev->nix));
+
 	/* Spinlock for synchronization between representors traffic and control
 	 * messages
 	 */
 	rte_spinlock_init(&eswitch_dev->rep_lock);
 
 	return rc;
+rsrc_cleanup:
+	eswitch_hw_rsrc_cleanup(eswitch_dev);
 free_mem:
 	if (mz)
 		rte_memzone_free(mz);
 fail:
 	return rc;
 }
diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h
index 331397021b..dcb787cf02 100644
--- a/drivers/net/cnxk/cnxk_eswitch.h
+++ b/drivers/net/cnxk/cnxk_eswitch.h
@@ -25,6 +25,47 @@
 #define CNXK_ESWITCH_QUEUE_STATE_STARTED    2
 #define CNXK_ESWITCH_QUEUE_STATE_STOPPED    3
 
+enum cnxk_esw_da_pattern_type {
+	CNXK_ESW_DA_TYPE_LIST = 0,
+	CNXK_ESW_DA_TYPE_PFVF,
+};
+
+struct cnxk_esw_repr_hw_info {
+	/* Representee pcifunc value */
+	uint16_t hw_func;
+	/* rep id in sync with kernel */
+	uint16_t rep_id;
+	/* pf or vf id */
+	uint16_t pfvf;
+	/* representor port id assigned to representee */
+	uint16_t port_id;
+};
+
+/* Structure representing per devarg information - this can be per representee
+ * or a range of representees
+ */
+struct cnxk_eswitch_devargs {
+	/* Devargs populated */
+	struct rte_eth_devargs da;
+	/* HW info of representee */
+	struct cnxk_esw_repr_hw_info *repr_hw_info;
+	/* No of representor ports */
+	uint16_t nb_repr_ports;
+	/* Devargs pattern type */
+	enum cnxk_esw_da_pattern_type type;
+};
+
+struct cnxk_eswitch_repr_cnt {
+	/* Max possible representors */
+	uint16_t max_repr;
+	/* Representors to be created as per devargs passed */
+	uint16_t nb_repr_created;
+	/* Representors probed successfully */
+	uint16_t nb_repr_probed;
+	/* Representors started representing a representee */
+	uint16_t nb_repr_started;
+};
+
 struct cnxk_rep_info {
 	struct rte_eth_dev *rep_eth_dev;
 };
@@ -70,6 +111,14 @@ struct cnxk_eswitch_dev {
 	uint16_t rep_cnt;
 	uint8_t configured;
 
+	/* Eswitch Representors Devargs */
+	uint16_t nb_esw_da;
+	uint16_t last_probed;
+	struct cnxk_eswitch_devargs esw_da[RTE_MAX_ETHPORTS];
+
+	/* No of representors */
+	struct cnxk_eswitch_repr_cnt repr_cnt;
+
 	/* Port representor fields */
 	rte_spinlock_t rep_lock;
 	uint16_t switch_domain_id;
@@ -90,6 +139,9 @@ cnxk_eswitch_pmd_priv(void)
 }
 
 int cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev);
+int cnxk_eswitch_repr_devargs(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev);
+int cnxk_eswitch_representor_info_get(struct cnxk_eswitch_dev *eswitch_dev,
+				      struct rte_eth_representor_info *info);
 int cnxk_eswitch_txq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint16_t nb_desc,
 			   const struct rte_eth_txconf *tx_conf);
 int cnxk_eswitch_txq_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid);
diff --git a/drivers/net/cnxk/cnxk_eswitch_devargs.c b/drivers/net/cnxk/cnxk_eswitch_devargs.c
new file mode 100644
index 0000000000..f1a1b05a99
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_eswitch_devargs.c
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include <cnxk_eswitch.h>
+
+#define PF_SHIFT 10
+static inline int
+get_hw_func(uint16_t pf, uint16_t vf)
+{
+	return (pf << PF_SHIFT) | vf;
+}
+
+static int
+devargs_enlist(uint16_t *list, uint16_t *len_list, const uint16_t max_list, uint16_t val)
+{
+	uint16_t i;
+
+	for (i = 0; i < *len_list; i++) {
+		if (list[i] == val)
+			return 0;
+	}
+	if (*len_list >= max_list)
+		return -1;
+	list[(*len_list)++] = val;
+	return 0;
+}
+
+static char *
+devargs_process_range(char *str, uint16_t *list, uint16_t *len_list, const uint16_t max_list)
+{
+	uint16_t lo, hi, val;
+	int result, n = 0;
+	char *pos = str;
+
+	result = sscanf(str, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (devargs_enlist(list, len_list, max_list, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi)
+			return NULL;
+		for (val = lo; val <= hi; val++) {
+			if (devargs_enlist(list, len_list, max_list, val) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static char *
+devargs_process_list(char *str, uint16_t *list, uint16_t *len_list, const uint16_t max_list)
+{
+	char *pos = str;
+
+	if (*pos == '[')
+		pos++;
+	while (1) {
+		pos = devargs_process_range(pos, list, len_list, max_list);
+		if (pos == NULL)
+			return NULL;
+		if (*pos != ',') /* end of list */
+			break;
+		pos++;
+	}
+	if (*str == '[' && *pos != ']')
+		return NULL;
+	if (*pos == ']')
+		pos++;
+	return pos;
+}
+
+static int
+devargs_parse_representor_ports(char *str, void *data)
+{
+	struct rte_eth_devargs *eth_da = data;
+
+	if (str[0] == 'p' && str[1] == 'f') {
+		eth_da->type = RTE_ETH_REPRESENTOR_PF;
+		str += 2;
+		str = devargs_process_list(str, eth_da->ports, &eth_da->nb_ports,
+					   RTE_DIM(eth_da->ports));
+		if (str == NULL || str[0] == '\0')
+			goto done;
+	}
+
+	if (str[0] == 'v' && str[1] == 'f') {
+		eth_da->type = RTE_ETH_REPRESENTOR_VF;
+		str += 2;
+	} else if (str[0] == 's' && str[1] == 'f') {
+		eth_da->type = RTE_ETH_REPRESENTOR_SF;
+		str += 2;
+	} else {
+		/* 'pf' must be followed by 'vf' or 'sf'. */
+		if (eth_da->type == RTE_ETH_REPRESENTOR_PF) {
+			str = NULL;
+			goto done;
+		}
+		eth_da->type = RTE_ETH_REPRESENTOR_VF;
+	}
+	str = devargs_process_list(str, eth_da->representor_ports, &eth_da->nb_representor_ports,
+				   RTE_DIM(eth_da->representor_ports));
+done:
+	if (str == NULL)
+		plt_err("wrong representor format: %s", str);
+	return str == NULL ? -1 : 0;
+}
+
+static int
+populate_repr_hw_info(struct cnxk_eswitch_dev *eswitch_dev, struct rte_eth_devargs *eth_da,
+		      uint16_t idx)
+{
+	struct cnxk_eswitch_devargs *esw_da = &eswitch_dev->esw_da[idx];
+	uint16_t nb_repr_ports, hw_func;
+	int rc, i, j;
+
+	if (eth_da->type == RTE_ETH_REPRESENTOR_NONE) {
+		plt_err("No representor type found");
+		return -EINVAL;
+	}
+
+	if (eth_da->type != RTE_ETH_REPRESENTOR_VF && eth_da->type != RTE_ETH_REPRESENTOR_PF &&
+	    eth_da->type != RTE_ETH_REPRESENTOR_SF) {
+		plt_err("unsupported representor type %d\n", eth_da->type);
+		return -ENOTSUP;
+	}
+
+	nb_repr_ports = (eth_da->type == RTE_ETH_REPRESENTOR_PF) ? eth_da->nb_ports :
+								   eth_da->nb_representor_ports;
+	esw_da->nb_repr_ports = nb_repr_ports;
+	/* If a plain list is provided as the representor pattern */
+	if (eth_da->nb_ports == 0)
+		return 0;
+
+	esw_da->repr_hw_info = plt_zmalloc(nb_repr_ports * sizeof(struct cnxk_esw_repr_hw_info), 0);
+	if (!esw_da->repr_hw_info) {
+		plt_err("Failed to allocate memory");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	plt_esw_dbg("Representor param %d has %d pfvf", idx, nb_repr_ports);
+	/* Check if a representor can be created for each PFVF and populate the hw_func list */
+	for (i = 0; i < nb_repr_ports; i++) {
+		if (eth_da->type == RTE_ETH_REPRESENTOR_PF)
+			hw_func = get_hw_func(eth_da->ports[0], 0);
+		else
+			hw_func = get_hw_func(eth_da->ports[0], eth_da->representor_ports[i] + 1);
+
+		for (j = 0; j < eswitch_dev->repr_cnt.max_repr; j++) {
+			if (eswitch_dev->nix.rep_pfvf_map[j] == hw_func)
+				break;
+		}
+
+		/* HW func which does not match the map table received from AF, no
+		 * representor port is assigned.
+		 */
+		if (j == eswitch_dev->repr_cnt.max_repr) {
+			plt_err("Representor port can't be created for PF%dVF%d", eth_da->ports[0],
+				eth_da->representor_ports[i]);
+			rc = -EINVAL;
+			goto fail;
+		}
+
+		esw_da->repr_hw_info[i].hw_func = hw_func;
+		esw_da->repr_hw_info[i].rep_id = j;
+		esw_da->repr_hw_info[i].pfvf = (eth_da->type == RTE_ETH_REPRESENTOR_PF) ?
+					       eth_da->ports[0] :
+					       eth_da->representor_ports[i];
+		plt_esw_dbg("	  HW func %x index %d", hw_func, j);
+	}
+
+	esw_da->type = CNXK_ESW_DA_TYPE_PFVF;
+
+	return 0;
+fail:
+	return rc;
+}
+
+int
+cnxk_eswitch_repr_devargs(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_eth_devargs *eth_da;
+	struct rte_kvargs *kvlist;
+	uint32_t i;
+	int rc, j;
+
+	if (devargs == NULL) {
+		plt_err("No devargs passed");
+		rc = -EINVAL;
+		goto fail;
+	}
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL) {
+		plt_err("Failed to find representor key in devargs list");
+		rc = -EINVAL;
+		goto fail;
+	}
+
+	if (rte_kvargs_count(kvlist, "representor") <= 0) {
+		plt_err("Invalid representor key count");
+		rc = -EINVAL;
+		goto fail;
+	}
+
+	j = eswitch_dev->nb_esw_da;
+	for (i = 0; i < kvlist->count; i++) {
+		eth_da = &eswitch_dev->esw_da[j].da;
+		memset(eth_da, 0, sizeof(*eth_da));
+		rc = devargs_parse_representor_ports(kvlist->pairs[i].value, eth_da);
+		if (rc) {
+			plt_err("Failed to parse the representor devargs, err %d", rc);
+			goto fail;
+		}
+
+		rc = populate_repr_hw_info(eswitch_dev, eth_da, j);
+		if (rc) {
+			plt_err("Failed to populate representor hw funcs, err %d", rc);
+			goto fail;
+		}
+
+		/* No of representor ports to be created */
+		eswitch_dev->repr_cnt.nb_repr_created += eswitch_dev->esw_da[j].nb_repr_ports;
+		j++;
+	}
+	eswitch_dev->nb_esw_da += kvlist->count;
+
+	return 0;
+fail:
+	return rc;
+}
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index 012d098f80..ea7e363e89 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -29,6 +29,7 @@ sources = files(
         'cnxk_ethdev_telemetry.c',
         'cnxk_ethdev_sec_telemetry.c',
         'cnxk_eswitch.c',
+        'cnxk_eswitch_devargs.c',
         'cnxk_link.c',
         'cnxk_lookup.c',
         'cnxk_ptp.c',
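
A worked expansion under an assumed input ties the parsing and mapping
steps above together:

/* Hypothetical devarg value "representor=pf0vf[1,3-4]":
 *
 *   devargs_parse_representor_ports("pf0vf[1,3-4]", &eth_da) yields
 *     eth_da.type                = RTE_ETH_REPRESENTOR_VF
 *     eth_da.ports[]             = { 0 }         (PF list)
 *     eth_da.representor_ports[] = { 1, 3, 4 }   (VF list)
 *
 *   populate_repr_hw_info() then packs each entry as
 *     hw_func = (pf << PF_SHIFT) | (vf + 1)  ->  0x002, 0x004, 0x005
 *   (VF numbering is 1-based in the AF map; vf == 0 denotes the PF
 *   itself) and matches it against nix.rep_pfvf_map[] to assign rep_id.
 */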
From patchwork Tue Dec 19 17:39:44 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 135350
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Harman Kalra, Anatoly Burakov
Subject: [PATCH v2 05/24] net/cnxk: probing representor ports
Date: Tue, 19 Dec 2023 23:09:44 +0530
Message-ID: <20231219174003.72901-6-hkalra@marvell.com>
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20231219174003.72901-1-hkalra@marvell.com>

Basic skeleton for probing representor devices. If the PF device is
passed with "representor" devargs, representor ports get probed as
separate ethdev devices.

Signed-off-by: Harman Kalra
---
 drivers/net/cnxk/cnxk_eswitch.c |  12 ++
 drivers/net/cnxk/cnxk_eswitch.h |   8 +-
 drivers/net/cnxk/cnxk_rep.c     | 256 ++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_rep.h     |  50 +++++++
 drivers/net/cnxk/cnxk_rep_ops.c | 129 ++++++++++++++++
 drivers/net/cnxk/meson.build    |   2 +
 6 files changed, 456 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cnxk_rep.c
 create mode 100644 drivers/net/cnxk/cnxk_rep.h
 create mode 100644 drivers/net/cnxk/cnxk_rep_ops.c
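
Since each representor registers as a regular ethdev flagged with
RTE_ETH_DEV_REPRESENTOR (see cnxk_rep_dev_init() below), an application
can enumerate them with the public ethdev API alone; a minimal sketch,
assuming a probed eswitch PF:

#include <stdio.h>
#include <rte_ethdev.h>

static void
example_list_representors(void)
{
	struct rte_eth_dev_info info;
	uint16_t port;

	RTE_ETH_FOREACH_DEV(port) {
		if (rte_eth_dev_info_get(port, &info) == 0 &&
		    (*info.dev_flags & RTE_ETH_DEV_REPRESENTOR))
			printf("port %u is a representor\n", port);
	}
}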
Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 12 ++ drivers/net/cnxk/cnxk_eswitch.h | 8 +- drivers/net/cnxk/cnxk_rep.c | 256 ++++++++++++++++++++++++++++++++ drivers/net/cnxk/cnxk_rep.h | 50 +++++++ drivers/net/cnxk/cnxk_rep_ops.c | 129 ++++++++++++++++ drivers/net/cnxk/meson.build | 2 + 6 files changed, 456 insertions(+), 1 deletion(-) create mode 100644 drivers/net/cnxk/cnxk_rep.c create mode 100644 drivers/net/cnxk/cnxk_rep.h create mode 100644 drivers/net/cnxk/cnxk_rep_ops.c diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c index 739a09c034..563b224a6c 100644 --- a/drivers/net/cnxk/cnxk_eswitch.c +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -3,6 +3,7 @@ */ #include +#include #define CNXK_NIX_DEF_SQ_COUNT 512 @@ -42,6 +43,10 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) eswitch_dev = cnxk_eswitch_pmd_priv(); + /* Remove representor devices associated with PF */ + if (eswitch_dev->repr_cnt.nb_repr_created) + cnxk_rep_dev_remove(eswitch_dev); + eswitch_hw_rsrc_cleanup(eswitch_dev); /* Check if this device is hosting common resource */ nix = roc_idev_npa_nix_get(); @@ -724,6 +729,13 @@ cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pc eswitch_dev->repr_cnt.max_repr, eswitch_dev->repr_cnt.nb_repr_created, roc_nix_get_pf_func(&eswitch_dev->nix)); + /* Probe representor ports */ + rc = cnxk_rep_dev_probe(pci_dev, eswitch_dev); + if (rc) { + plt_err("Failed to probe representor ports"); + goto rsrc_cleanup; + } + /* Spinlock for synchronization between representors traffic and control * messages */ diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h index dcb787cf02..4908c3ba95 100644 --- a/drivers/net/cnxk/cnxk_eswitch.h +++ b/drivers/net/cnxk/cnxk_eswitch.h @@ -66,6 +66,11 @@ struct cnxk_eswitch_repr_cnt { uint16_t nb_repr_started; }; +struct cnxk_eswitch_switch_domain { + uint16_t switch_domain_id; + uint16_t pf; +}; + struct cnxk_rep_info { struct rte_eth_dev *rep_eth_dev; }; @@ -121,7 +126,8 @@ struct cnxk_eswitch_dev { /* Port representor fields */ rte_spinlock_t rep_lock; - uint16_t switch_domain_id; + uint16_t nb_switch_domain; + struct cnxk_eswitch_switch_domain sw_dom[RTE_MAX_ETHPORTS]; uint16_t eswitch_vdev; struct cnxk_rep_info *rep_info; }; diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c new file mode 100644 index 0000000000..295bea3724 --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep.c @@ -0,0 +1,256 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ +#include + +#define PF_SHIFT 10 +#define PF_MASK 0x3F + +static uint16_t +get_pf(uint16_t hw_func) +{ + return (hw_func >> PF_SHIFT) & PF_MASK; +} + +static uint16_t +switch_domain_id_allocate(struct cnxk_eswitch_dev *eswitch_dev, uint16_t pf) +{ + int i = 0; + + for (i = 0; i < eswitch_dev->nb_switch_domain; i++) { + if (eswitch_dev->sw_dom[i].pf == pf) + return eswitch_dev->sw_dom[i].switch_domain_id; + } + + return RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID; +} + +int +cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + plt_rep_dbg("Representor port:%d uninit", ethdev->data->port_id); + rte_free(ethdev->data->mac_addrs); + ethdev->data->mac_addrs = NULL; + + return 0; +} + +int +cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev) +{ + int i, rc = 0; + + for (i = 0; i < eswitch_dev->nb_switch_domain; i++) { + rc = rte_eth_switch_domain_free(eswitch_dev->sw_dom[i].switch_domain_id); + if (rc) + plt_err("Failed to free switch domain: %d", rc); + } + + return rc; +} + +static int +cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev) +{ + uint16_t pf, prev_pf = 0, switch_domain_id; + int rc, i, j = 0; + + if (eswitch_dev->rep_info) + return 0; + + eswitch_dev->rep_info = + plt_zmalloc(sizeof(eswitch_dev->rep_info[0]) * eswitch_dev->repr_cnt.max_repr, 0); + if (!eswitch_dev->rep_info) { + plt_err("Failed to alloc memory for rep info"); + rc = -ENOMEM; + goto fail; + } + + /* Allocate switch domain for all PFs (VFs will be under same domain as PF) */ + for (i = 0; i < eswitch_dev->repr_cnt.max_repr; i++) { + pf = get_pf(eswitch_dev->nix.rep_pfvf_map[i]); + if (pf == prev_pf) + continue; + + rc = rte_eth_switch_domain_alloc(&switch_domain_id); + if (rc) { + plt_err("Failed to alloc switch domain: %d", rc); + goto fail; + } + plt_rep_dbg("Allocated switch domain id %d for pf %d", switch_domain_id, pf); + eswitch_dev->sw_dom[j].switch_domain_id = switch_domain_id; + eswitch_dev->sw_dom[j].pf = pf; + prev_pf = pf; + j++; + } + eswitch_dev->nb_switch_domain = j; + + return 0; +fail: + return rc; +} + +static uint16_t +cnxk_rep_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + PLT_SET_USED(tx_queue); + PLT_SET_USED(tx_pkts); + PLT_SET_USED(nb_pkts); + + return 0; +} + +static uint16_t +cnxk_rep_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + PLT_SET_USED(rx_queue); + PLT_SET_USED(rx_pkts); + PLT_SET_USED(nb_pkts); + + return 0; +} + +static int +cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params) +{ + struct cnxk_rep_dev *rep_params = (struct cnxk_rep_dev *)params; + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + + rep_dev->port_id = rep_params->port_id; + rep_dev->switch_domain_id = rep_params->switch_domain_id; + rep_dev->parent_dev = rep_params->parent_dev; + rep_dev->hw_func = rep_params->hw_func; + rep_dev->rep_id = rep_params->rep_id; + + eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR; + eth_dev->data->representor_id = rep_params->port_id; + eth_dev->data->backer_port_id = eth_dev->data->port_id; + + eth_dev->data->mac_addrs = plt_zmalloc(RTE_ETHER_ADDR_LEN, 0); + if (!eth_dev->data->mac_addrs) { + plt_err("Failed to allocate memory for mac addr"); + return -ENOMEM; + } + + rte_eth_random_addr(rep_dev->mac_addr); + memcpy(eth_dev->data->mac_addrs, rep_dev->mac_addr, RTE_ETHER_ADDR_LEN); + + /* Set the device operations */ + eth_dev->dev_ops = &cnxk_rep_dev_ops; + + /* Rx/Tx functions stubs to avoid crashing */ + eth_dev->rx_pkt_burst =
cnxk_rep_rx_burst; + eth_dev->tx_pkt_burst = cnxk_rep_tx_burst; + + /* Only single queues for representor devices */ + eth_dev->data->nb_rx_queues = 1; + eth_dev->data->nb_tx_queues = 1; + + eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE; + eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP; + eth_dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED; + + return 0; +} + +static int +create_representor_ethdev(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev, + struct cnxk_eswitch_devargs *esw_da, int idx) +{ + char name[RTE_ETH_NAME_MAX_LEN]; + struct rte_eth_dev *rep_eth_dev; + uint16_t hw_func; + int rc = 0; + + struct cnxk_rep_dev rep = {.port_id = eswitch_dev->repr_cnt.nb_repr_probed, + .parent_dev = eswitch_dev}; + + if (esw_da->type == CNXK_ESW_DA_TYPE_PFVF) { + hw_func = esw_da->repr_hw_info[idx].hw_func; + rep.switch_domain_id = switch_domain_id_allocate(eswitch_dev, get_pf(hw_func)); + if (rep.switch_domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID) { + plt_err("Failed to get a valid switch domain id"); + rc = -EINVAL; + goto fail; + } + + esw_da->repr_hw_info[idx].port_id = rep.port_id; + /* Representor port net_bdf_port */ + snprintf(name, sizeof(name), "net_%s_hw_%x_representor_%d", pci_dev->device.name, + hw_func, rep.port_id); + + rep.hw_func = hw_func; + rep.rep_id = esw_da->repr_hw_info[idx].rep_id; + + } else { + snprintf(name, sizeof(name), "net_%s_representor_%d", pci_dev->device.name, + rep.port_id); + rep.switch_domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID; + } + + rc = rte_eth_dev_create(&pci_dev->device, name, sizeof(struct cnxk_rep_dev), NULL, NULL, + cnxk_rep_dev_init, &rep); + if (rc) { + plt_err("Failed to create cnxk vf representor %s", name); + rc = -EINVAL; + goto fail; + } + + rep_eth_dev = rte_eth_dev_allocated(name); + if (!rep_eth_dev) { + plt_err("Failed to find the eth_dev for VF-Rep: %s.", name); + rc = -ENODEV; + goto fail; + } + + plt_rep_dbg("Representor portid %d (%s) type %d probe done", rep_eth_dev->data->port_id, + name, esw_da->da.type); + eswitch_dev->rep_info[rep.port_id].rep_eth_dev = rep_eth_dev; + eswitch_dev->repr_cnt.nb_repr_probed++; + + return 0; +fail: + return rc; +} + +int +cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev) +{ + struct cnxk_eswitch_devargs *esw_da; + uint16_t num_rep; + int i, j, rc; + + if (eswitch_dev->repr_cnt.nb_repr_created > RTE_MAX_ETHPORTS) { + plt_err("nb_representor_ports %d > %d MAX ETHPORTS\n", + eswitch_dev->repr_cnt.nb_repr_created, RTE_MAX_ETHPORTS); + rc = -EINVAL; + goto fail; + } + + /* Initialize the internals of representor ports */ + rc = cnxk_rep_parent_setup(eswitch_dev); + if (rc) { + plt_err("Failed to setup the parent device, err %d", rc); + goto fail; + } + + for (i = eswitch_dev->last_probed; i < eswitch_dev->nb_esw_da; i++) { + esw_da = &eswitch_dev->esw_da[i]; + /* Check the representor devargs */ + num_rep = esw_da->nb_repr_ports; + for (j = 0; j < num_rep; j++) { + rc = create_representor_ethdev(pci_dev, eswitch_dev, esw_da, j); + if (rc) + goto fail; + } + } + eswitch_dev->last_probed = i; + + return 0; +fail: + return rc; +} diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h new file mode 100644 index 0000000000..2cb3ae8ac5 --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep.h @@ -0,0 +1,50 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ +#include +#include + +#ifndef __CNXK_REP_H__ +#define __CNXK_REP_H__ + +/* Common ethdev ops */ +extern struct eth_dev_ops cnxk_rep_dev_ops; + +struct cnxk_rep_dev { + uint16_t port_id; + uint16_t rep_id; + uint16_t switch_domain_id; + struct cnxk_eswitch_dev *parent_dev; + uint16_t hw_func; + uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; +}; + +static inline struct cnxk_rep_dev * +cnxk_rep_pmd_priv(const struct rte_eth_dev *eth_dev) +{ + return eth_dev->data->dev_private; +} + +int cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev); +int cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev); +int cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev); +int cnxk_rep_dev_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +int cnxk_rep_representor_info_get(struct rte_eth_dev *dev, struct rte_eth_representor_info *info); +int cnxk_rep_dev_configure(struct rte_eth_dev *eth_dev); + +int cnxk_rep_link_update(struct rte_eth_dev *eth_dev, int wait_to_compl); +int cnxk_rep_dev_start(struct rte_eth_dev *eth_dev); +int cnxk_rep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t queue_idx, uint16_t nb_desc, + unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +int cnxk_rep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t queue_idx, uint16_t nb_desc, + unsigned int socket_id, const struct rte_eth_txconf *tx_conf); +void cnxk_rep_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx); +void cnxk_rep_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx); +int cnxk_rep_dev_stop(struct rte_eth_dev *eth_dev); +int cnxk_rep_dev_close(struct rte_eth_dev *eth_dev); +int cnxk_rep_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats); +int cnxk_rep_stats_reset(struct rte_eth_dev *eth_dev); +int cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops); + +#endif /* __CNXK_REP_H__ */ diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c new file mode 100644 index 0000000000..67dcc422e3 --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep_ops.c @@ -0,0 +1,129 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ + +#include + +int +cnxk_rep_link_update(struct rte_eth_dev *ethdev, int wait_to_complete) +{ + PLT_SET_USED(ethdev); + PLT_SET_USED(wait_to_complete); + return 0; +} + +int +cnxk_rep_dev_info_get(struct rte_eth_dev *ethdev, struct rte_eth_dev_info *devinfo) +{ + PLT_SET_USED(ethdev); + PLT_SET_USED(devinfo); + return 0; +} + +int +cnxk_rep_dev_configure(struct rte_eth_dev *ethdev) +{ + PLT_SET_USED(ethdev); + return 0; +} + +int +cnxk_rep_dev_start(struct rte_eth_dev *ethdev) +{ + PLT_SET_USED(ethdev); + return 0; +} + +int +cnxk_rep_dev_close(struct rte_eth_dev *ethdev) +{ + PLT_SET_USED(ethdev); + return 0; +} + +int +cnxk_rep_dev_stop(struct rte_eth_dev *ethdev) +{ + PLT_SET_USED(ethdev); + return 0; +} + +int +cnxk_rep_rx_queue_setup(struct rte_eth_dev *ethdev, uint16_t rx_queue_id, uint16_t nb_rx_desc, + unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mb_pool) +{ + PLT_SET_USED(ethdev); + PLT_SET_USED(rx_queue_id); + PLT_SET_USED(nb_rx_desc); + PLT_SET_USED(socket_id); + PLT_SET_USED(rx_conf); + PLT_SET_USED(mb_pool); + return 0; +} + +void +cnxk_rep_rx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + PLT_SET_USED(ethdev); + PLT_SET_USED(queue_id); +} + +int +cnxk_rep_tx_queue_setup(struct rte_eth_dev *ethdev, uint16_t tx_queue_id, uint16_t nb_tx_desc, + unsigned int socket_id, const struct rte_eth_txconf *tx_conf) +{ + PLT_SET_USED(ethdev); + PLT_SET_USED(tx_queue_id); + PLT_SET_USED(nb_tx_desc); + PLT_SET_USED(socket_id); + PLT_SET_USED(tx_conf); + return 0; +} + +void +cnxk_rep_tx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + PLT_SET_USED(ethdev); + PLT_SET_USED(queue_id); +} + +int +cnxk_rep_stats_get(struct rte_eth_dev *ethdev, struct rte_eth_stats *stats) +{ + PLT_SET_USED(ethdev); + PLT_SET_USED(stats); + return 0; +} + +int +cnxk_rep_stats_reset(struct rte_eth_dev *ethdev) +{ + PLT_SET_USED(ethdev); + return 0; +} + +int +cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops) +{ + PLT_SET_USED(ethdev); + PLT_SET_USED(ops); + return 0; +} + +/* CNXK platform representor dev ops */ +struct eth_dev_ops cnxk_rep_dev_ops = { + .dev_infos_get = cnxk_rep_dev_info_get, + .dev_configure = cnxk_rep_dev_configure, + .dev_start = cnxk_rep_dev_start, + .rx_queue_setup = cnxk_rep_rx_queue_setup, + .rx_queue_release = cnxk_rep_rx_queue_release, + .tx_queue_setup = cnxk_rep_tx_queue_setup, + .tx_queue_release = cnxk_rep_tx_queue_release, + .link_update = cnxk_rep_link_update, + .dev_close = cnxk_rep_dev_close, + .dev_stop = cnxk_rep_dev_stop, + .stats_get = cnxk_rep_stats_get, + .stats_reset = cnxk_rep_stats_reset, + .flow_ops_get = cnxk_rep_flow_ops_get +}; diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build index ea7e363e89..fcd5d3d569 100644 --- a/drivers/net/cnxk/meson.build +++ b/drivers/net/cnxk/meson.build @@ -34,6 +34,8 @@ sources = files( 'cnxk_lookup.c', 'cnxk_ptp.c', 'cnxk_flow.c', + 'cnxk_rep.c', + 'cnxk_rep_ops.c', 'cnxk_stats.c', 'cnxk_tm.c', ) From patchwork Tue Dec 19 17:39:45 2023 X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135351 X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Harman Kalra
Subject: [PATCH v2 06/24] common/cnxk: common NPC changes for eswitch
Date: Tue, 19 Dec 2023 23:09:45 +0530
Message-ID: <20231219174003.72901-7-hkalra@marvell.com>
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com>

- Adding support for installing flows using the npc_install_flow mbox
- RSS action configuration for eswitch
- New mcam helper APIs

Signed-off-by: Harman Kalra --- drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_api.h | 3 + drivers/common/cnxk/roc_eswitch.c | 285 +++++++++++++++++++++++++++++ drivers/common/cnxk/roc_eswitch.h | 21 +++ drivers/common/cnxk/roc_mbox.h | 25 +++ drivers/common/cnxk/roc_npc.c | 26 ++- drivers/common/cnxk/roc_npc.h | 5 +- drivers/common/cnxk/roc_npc_mcam.c | 2 +- drivers/common/cnxk/roc_npc_priv.h | 3 +- drivers/common/cnxk/version.map | 6 + 10 files changed, 368 insertions(+), 9 deletions(-) create mode 100644 drivers/common/cnxk/roc_eswitch.c create mode 100644 drivers/common/cnxk/roc_eswitch.h
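The helpers added below steer representee traffic over the LBK channel by tagging it with an 0x8100 VLAN whose TCI identifies the representee. A small sketch of the TCI layout as the patch 08 callers of these rule helpers use it (the helper names below are illustrative; bit 8 is CNXK_ESWITCH_VFPF_SHIFT there, and the direction reading is inferred from how the rules are installed):

    /* Illustrative TCI encoding for roc_eswitch_npc_mcam_tx_rule()/_rx_rule()
     * callers: the low bits carry the rep_id handed out by the AF map.
     */
    #include <stdint.h>

    #define ESW_VFPF_SHIFT 8 /* CNXK_ESWITCH_VFPF_SHIFT */

    /* Inserted by TX rules on the representee's own transmissions. */
    static inline uint16_t esw_repr_to_esw_tci(uint16_t rep_id)
    {
    	return (uint16_t)((1u << ESW_VFPF_SHIFT) | rep_id);
    }

    /* Matched (and stripped) by RX rules delivering eswitch traffic
     * back to the representee.
     */
    static inline uint16_t esw_to_repr_tci(uint16_t rep_id)
    {
    	return rep_id;
    }

diff --git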
a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 56eea52909..e0e4600989 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -20,6 +20,7 @@ sources = files( 'roc_cpt_debug.c', 'roc_dev.c', 'roc_dpi.c', + 'roc_eswitch.c', 'roc_hash.c', 'roc_idev.c', 'roc_irq.c', diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h index f630853088..6a86863c57 100644 --- a/drivers/common/cnxk/roc_api.h +++ b/drivers/common/cnxk/roc_api.h @@ -117,4 +117,7 @@ /* MACsec */ #include "roc_mcs.h" +/* Eswitch */ +#include "roc_eswitch.h" + #endif /* _ROC_API_H_ */ diff --git a/drivers/common/cnxk/roc_eswitch.c b/drivers/common/cnxk/roc_eswitch.c new file mode 100644 index 0000000000..42a27e7442 --- /dev/null +++ b/drivers/common/cnxk/roc_eswitch.c @@ -0,0 +1,285 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include + +#include "roc_api.h" +#include "roc_priv.h" + +static int +eswitch_vlan_rx_cfg(uint16_t pcifunc, struct mbox *mbox) +{ + struct nix_vtag_config *vtag_cfg; + int rc; + + vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox_get(mbox)); + + /* Config strip, capture and size */ + vtag_cfg->hdr.pcifunc = pcifunc; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + vtag_cfg->cfg_type = VTAG_RX; /* rx vlan cfg */ + vtag_cfg->rx.vtag_type = NIX_RX_VTAG_TYPE0; + vtag_cfg->rx.strip_vtag = true; + vtag_cfg->rx.capture_vtag = true; + + rc = mbox_process(mbox); + if (rc) + goto exit; + + rc = 0; +exit: + mbox_put(mbox); + return rc; +} + +static int +eswitch_vlan_tx_cfg(struct roc_npc_flow *flow, uint16_t pcifunc, struct mbox *mbox, + uint16_t vlan_tci, uint16_t *vidx) +{ + struct nix_vtag_config *vtag_cfg; + struct nix_vtag_config_rsp *rsp; + int rc; + + union { + uint64_t reg; + struct nix_tx_vtag_action_s act; + } tx_vtag_action; + + vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox_get(mbox)); + + /* Insert vlan tag */ + vtag_cfg->hdr.pcifunc = pcifunc; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + vtag_cfg->cfg_type = VTAG_TX; /* tx vlan cfg */ + vtag_cfg->tx.cfg_vtag0 = true; + vtag_cfg->tx.vtag0 = (((uint32_t)ROC_ESWITCH_VLAN_TPID << 16) | vlan_tci); + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + if (rsp->vtag0_idx < 0) { + plt_err("Failed to config TX VTAG action"); + rc = -EINVAL; + goto exit; + } + + *vidx = rsp->vtag0_idx; + tx_vtag_action.reg = 0; + tx_vtag_action.act.vtag0_def = rsp->vtag0_idx; + tx_vtag_action.act.vtag0_lid = NPC_LID_LA; + tx_vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT; + tx_vtag_action.act.vtag0_relptr = NIX_TX_VTAGACTION_VTAG0_RELPTR; + + flow->vtag_action = tx_vtag_action.reg; + + rc = 0; +exit: + mbox_put(mbox); + return rc; +} + +int +roc_eswitch_npc_mcam_tx_rule(struct roc_npc *roc_npc, struct roc_npc_flow *flow, uint16_t pcifunc, + uint32_t vlan_tci) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + struct npc_install_flow_req *req; + struct npc_install_flow_rsp *rsp; + struct mbox *mbox = npc->mbox; + uint16_t vidx = 0, lbkid; + int rc; + + rc = eswitch_vlan_tx_cfg(flow, roc_npc->pf_func, mbox, vlan_tci, &vidx); + if (rc) { + plt_err("Failed to configure VLAN TX, err %d", rc); + goto fail; + } + + req = mbox_alloc_msg_npc_install_flow(mbox_get(mbox)); + + lbkid = 0; + req->hdr.pcifunc = roc_npc->pf_func; /* Eswitch PF is requester */ + req->vf = pcifunc; + req->entry = flow->mcam_id; + req->intf = NPC_MCAM_TX; + req->op = NIX_TX_ACTIONOP_UCAST_CHAN; + req->index = (lbkid << 8) | ROC_ESWITCH_LBK_CHAN; + req->set_cntr = 1; +
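/* Note: vidx is the hardware VTAG index obtained above from + * eswitch_vlan_tx_cfg() (rsp->vtag0_idx); vtag0_def/vtag0_op below point + * this TX flow at that VLAN-insert context. + */ +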
req->vtag0_def = vidx; + req->vtag0_op = 1; + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + flow->nix_intf = NIX_INTF_TX; +exit: + mbox_put(mbox); +fail: + return rc; +} + +static int +eswitch_vtag_cfg_delete(struct roc_npc *roc_npc, struct roc_npc_flow *flow) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + struct nix_vtag_config *vtag_cfg; + struct nix_vtag_config_rsp *rsp; + struct mbox *mbox = npc->mbox; + int rc = 0; + + union { + uint64_t reg; + struct nix_tx_vtag_action_s act; + } tx_vtag_action; + + tx_vtag_action.reg = flow->vtag_action; + vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox_get(mbox)); + + if (vtag_cfg == NULL) { + rc = -ENOSPC; + goto exit; + } + + vtag_cfg->cfg_type = VTAG_TX; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + vtag_cfg->tx.vtag0_idx = tx_vtag_action.act.vtag0_def; + vtag_cfg->tx.free_vtag0 = true; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + rc = rsp->hdr.rc; +exit: + mbox_put(mbox); + return rc; +} + +int +roc_eswitch_npc_mcam_delete_rule(struct roc_npc *roc_npc, struct roc_npc_flow *flow) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + struct npc_delete_flow_req *req; + struct msg_rsp *rsp; + struct mbox *mbox = npc->mbox; + int rc = 0; + + /* Removing the VLAN TX config */ + if (flow->nix_intf == NIX_INTF_TX) { + rc = eswitch_vtag_cfg_delete(roc_npc, flow); + if (rc) + plt_err("Failed to delete TX vtag config"); + } + + req = mbox_alloc_msg_npc_delete_flow(mbox_get(mbox)); + + req->entry = flow->mcam_id; + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + rc = rsp->hdr.rc; +exit: + mbox_put(mbox); + return rc; +} + +int +roc_eswitch_npc_mcam_rx_rule(struct roc_npc *roc_npc, struct roc_npc_flow *flow, uint16_t pcifunc, + uint16_t vlan_tci, uint16_t vlan_tci_mask) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + struct npc_install_flow_req *req; + struct npc_install_flow_rsp *rsp; + struct mbox *mbox = npc->mbox; + bool is_esw_dev; + int rc; + + /* For ESW PF/VF */ + is_esw_dev = (dev_get_pf(roc_npc->pf_func) == dev_get_pf(pcifunc)); + /* VLAN Rx config */ + if (is_esw_dev) { + rc = eswitch_vlan_rx_cfg(roc_npc->pf_func, mbox); + if (rc) { + plt_err("Failed to configure VLAN RX rule, err %d", rc); + goto fail; + } + } + + req = mbox_alloc_msg_npc_install_flow(mbox_get(mbox)); + req->vf = pcifunc; + /* Action */ + req->op = NIX_RX_ACTIONOP_DEFAULT; + req->index = 0; + req->entry = flow->mcam_id; + req->hdr.pcifunc = roc_npc->pf_func; /* Eswitch PF is requester */ + req->features = BIT_ULL(NPC_OUTER_VID) | BIT_ULL(NPC_VLAN_ETYPE_CTAG); + req->vtag0_valid = true; + /* The ESW PF/VF use the VLAN RX config set up above, while other + * representees use the standard vlan_type = 7, which means strip. + */ + req->vtag0_type = is_esw_dev ?
NIX_RX_VTAG_TYPE0 : NIX_RX_VTAG_TYPE7; + req->packet.vlan_etype = ROC_ESWITCH_VLAN_TPID; + req->mask.vlan_etype = 0xFFFF; + req->packet.vlan_tci = ntohs(vlan_tci & 0xFFFF); + req->mask.vlan_tci = ntohs(vlan_tci_mask); + + req->channel = ROC_ESWITCH_LBK_CHAN; + req->chan_mask = 0xffff; + req->intf = NPC_MCAM_RX; + req->set_cntr = 1; + req->cntr_val = flow->ctr_id; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + flow->nix_intf = NIX_INTF_RX; +exit: + mbox_put(mbox); +fail: + return rc; +} + +int +roc_eswitch_npc_rss_action_configure(struct roc_npc *roc_npc, struct roc_npc_flow *flow, + uint32_t flowkey_cfg, uint16_t *reta_tbl) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + struct roc_nix *roc_nix = roc_npc->roc_nix; + uint32_t rss_grp_idx; + uint8_t flowkey_algx; + int rc; + + rc = npc_rss_free_grp_get(npc, &rss_grp_idx); + /* RSS group :0 is not usable for flow rss action */ + if (rc < 0 || rss_grp_idx == 0) + return -ENOSPC; + + /* Populating reta table for the specific RSS group */ + rc = roc_nix_rss_reta_set(roc_nix, rss_grp_idx, reta_tbl); + if (rc) { + plt_err("Failed to init rss table rc = %d", rc); + return rc; + } + + rc = roc_nix_rss_flowkey_set(roc_nix, &flowkey_algx, flowkey_cfg, rss_grp_idx, + flow->mcam_id); + if (rc) { + plt_err("Failed to set rss hash function rc = %d", rc); + return rc; + } + + plt_bitmap_set(npc->rss_grp_entries, rss_grp_idx); + + flow->npc_action &= (~(0xfULL)); + flow->npc_action |= NIX_RX_ACTIONOP_RSS; + flow->npc_action |= + ((uint64_t)(flowkey_algx & NPC_RSS_ACT_ALG_MASK) << NPC_RSS_ACT_ALG_OFFSET) | + ((uint64_t)(rss_grp_idx & NPC_RSS_ACT_GRP_MASK) << NPC_RSS_ACT_GRP_OFFSET); + return 0; +} diff --git a/drivers/common/cnxk/roc_eswitch.h b/drivers/common/cnxk/roc_eswitch.h new file mode 100644 index 0000000000..35976b7ff6 --- /dev/null +++ b/drivers/common/cnxk/roc_eswitch.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ + +#ifndef __ROC_ESWITCH_H__ +#define __ROC_ESWITCH_H__ + +#define ROC_ESWITCH_VLAN_TPID 0x8100 +#define ROC_ESWITCH_LBK_CHAN 63 + +/* NPC */ +int __roc_api roc_eswitch_npc_mcam_rx_rule(struct roc_npc *roc_npc, struct roc_npc_flow *flow, + uint16_t pcifunc, uint16_t vlan_tci, + uint16_t vlan_tci_mask); +int __roc_api roc_eswitch_npc_mcam_tx_rule(struct roc_npc *roc_npc, struct roc_npc_flow *flow, + uint16_t pcifunc, uint32_t vlan_tci); +int __roc_api roc_eswitch_npc_mcam_delete_rule(struct roc_npc *roc_npc, struct roc_npc_flow *flow); +int __roc_api roc_eswitch_npc_rss_action_configure(struct roc_npc *roc_npc, + struct roc_npc_flow *flow, uint32_t flowkey_cfg, + uint16_t *reta_tbl); +#endif /* __ROC_ESWITCH_H__ */ diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h index b7e2f43d45..4c846f0757 100644 --- a/drivers/common/cnxk/roc_mbox.h +++ b/drivers/common/cnxk/roc_mbox.h @@ -386,6 +386,18 @@ enum rvu_af_status { RVU_INVALID_VF_ID = -256, }; +/* For NIX RX vtag action */ +enum nix_rx_vtag0_type { + NIX_RX_VTAG_TYPE0, + NIX_RX_VTAG_TYPE1, + NIX_RX_VTAG_TYPE2, + NIX_RX_VTAG_TYPE3, + NIX_RX_VTAG_TYPE4, + NIX_RX_VTAG_TYPE5, + NIX_RX_VTAG_TYPE6, + NIX_RX_VTAG_TYPE7, +}; + struct ready_msg_rsp { struct mbox_msghdr hdr; uint16_t __io sclk_freq; /* SCLK frequency */ @@ -2442,6 +2454,8 @@ enum header_fields { NPC_DMAC, NPC_SMAC, NPC_ETYPE, + NPC_VLAN_ETYPE_CTAG, /* 0x8100 */ + NPC_VLAN_ETYPE_STAG, /* 0x88A8 */ NPC_OUTER_VID, NPC_TOS, NPC_SIP_IPV4, @@ -2476,6 +2490,14 @@ struct flow_msg { uint8_t __io tc; uint16_t __io sport; uint16_t __io dport; + union { + uint8_t __io ip_flag; + uint8_t __io next_header; + }; + uint16_t __io vlan_itci; + uint32_t __io gtpu_teid; + uint32_t __io gtpc_teid; + uint16_t __io sq_id; }; struct npc_install_flow_req { @@ -2485,6 +2507,7 @@ struct npc_install_flow_req { uint64_t __io features; uint16_t __io entry; uint16_t __io channel; + uint16_t __io chan_mask; uint8_t __io intf; uint8_t __io set_cntr; uint8_t __io default_rule; @@ -2507,6 +2530,8 @@ struct npc_install_flow_req { uint8_t __io vtag0_op; uint16_t __io vtag1_def; uint8_t __io vtag1_op; + /* old counter value */ + uint16_t __io cntr_val; }; struct npc_install_flow_rsp { diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c index 9a0fe5f4e2..67a660a2bc 100644 --- a/drivers/common/cnxk/roc_npc.c +++ b/drivers/common/cnxk/roc_npc.c @@ -77,8 +77,23 @@ roc_npc_inl_mcam_clear_counter(uint32_t ctr_id) } int -roc_npc_mcam_read_counter(struct roc_npc *roc_npc, uint32_t ctr_id, - uint64_t *count) +roc_npc_mcam_alloc_counter(struct roc_npc *roc_npc, uint16_t *ctr_id) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_mcam_alloc_counter(npc->mbox, ctr_id); +} + +int +roc_npc_get_free_mcam_entry(struct roc_npc *roc_npc, struct roc_npc_flow *flow) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + + return npc_get_free_mcam_entry(npc->mbox, flow, npc); +} + +int +roc_npc_mcam_read_counter(struct roc_npc *roc_npc, uint32_t ctr_id, uint64_t *count) { struct npc *npc = roc_npc_to_npc_priv(roc_npc); @@ -157,14 +172,13 @@ roc_npc_mcam_free_all_resources(struct roc_npc *roc_npc) } int -roc_npc_mcam_alloc_entries(struct roc_npc *roc_npc, int ref_entry, - int *alloc_entry, int req_count, int priority, - int *resp_count) +roc_npc_mcam_alloc_entries(struct roc_npc *roc_npc, int ref_entry, int *alloc_entry, int req_count, + int priority, int *resp_count, bool is_conti) { struct npc *npc = roc_npc_to_npc_priv(roc_npc); return npc_mcam_alloc_entries(npc->mbox, 
ref_entry, alloc_entry, req_count, priority, - resp_count, 0); + resp_count, is_conti); } int diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h index e880a7fa67..349c7f9d22 100644 --- a/drivers/common/cnxk/roc_npc.h +++ b/drivers/common/cnxk/roc_npc.h @@ -431,7 +431,8 @@ int __roc_api roc_npc_mcam_enable_all_entries(struct roc_npc *roc_npc, bool enab int __roc_api roc_npc_mcam_alloc_entry(struct roc_npc *roc_npc, struct roc_npc_flow *mcam, struct roc_npc_flow *ref_mcam, int prio, int *resp_count); int __roc_api roc_npc_mcam_alloc_entries(struct roc_npc *roc_npc, int ref_entry, int *alloc_entry, - int req_count, int priority, int *resp_count); + int req_count, int priority, int *resp_count, + bool is_conti); int __roc_api roc_npc_mcam_ena_dis_entry(struct roc_npc *roc_npc, struct roc_npc_flow *mcam, bool enable); int __roc_api roc_npc_mcam_write_entry(struct roc_npc *roc_npc, struct roc_npc_flow *mcam); @@ -442,6 +443,8 @@ int __roc_api roc_npc_get_low_priority_mcam(struct roc_npc *roc_npc); int __roc_api roc_npc_mcam_free_counter(struct roc_npc *roc_npc, uint16_t ctr_id); int __roc_api roc_npc_mcam_read_counter(struct roc_npc *roc_npc, uint32_t ctr_id, uint64_t *count); int __roc_api roc_npc_mcam_clear_counter(struct roc_npc *roc_npc, uint32_t ctr_id); +int __roc_api roc_npc_mcam_alloc_counter(struct roc_npc *roc_npc, uint16_t *ctr_id); +int __roc_api roc_npc_get_free_mcam_entry(struct roc_npc *roc_npc, struct roc_npc_flow *flow); int __roc_api roc_npc_inl_mcam_read_counter(uint32_t ctr_id, uint64_t *count); int __roc_api roc_npc_inl_mcam_clear_counter(uint32_t ctr_id); int __roc_api roc_npc_mcam_free_all_resources(struct roc_npc *roc_npc); diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c index 3ef189e184..2de988a44b 100644 --- a/drivers/common/cnxk/roc_npc_mcam.c +++ b/drivers/common/cnxk/roc_npc_mcam.c @@ -4,7 +4,7 @@ #include "roc_api.h" #include "roc_priv.h" -static int +int npc_mcam_alloc_counter(struct mbox *mbox, uint16_t *ctr) { struct npc_mcam_alloc_counter_req *req; diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h index c0809407a6..50b62b1244 100644 --- a/drivers/common/cnxk/roc_npc_priv.h +++ b/drivers/common/cnxk/roc_npc_priv.h @@ -432,6 +432,7 @@ roc_npc_to_npc_priv(struct roc_npc *npc) return (struct npc *)npc->reserved; } +int npc_mcam_alloc_counter(struct mbox *mbox, uint16_t *ctr); int npc_mcam_free_counter(struct mbox *mbox, uint16_t ctr_id); int npc_mcam_read_counter(struct mbox *mbox, uint32_t ctr_id, uint64_t *count); int npc_mcam_clear_counter(struct mbox *mbox, uint32_t ctr_id); @@ -480,7 +481,6 @@ uint64_t npc_get_kex_capability(struct npc *npc); int npc_process_ipv6_field_hash(const struct roc_npc_flow_item_ipv6 *ipv6_spec, const struct roc_npc_flow_item_ipv6 *ipv6_mask, struct npc_parse_state *pst, uint8_t type); -int npc_rss_free_grp_get(struct npc *npc, uint32_t *grp); int npc_rss_action_configure(struct roc_npc *roc_npc, const struct roc_npc_action_rss *rss, uint8_t *alg_idx, uint32_t *rss_grp, uint32_t mcam_id); int npc_rss_action_program(struct roc_npc *roc_npc, const struct roc_npc_action actions[], @@ -496,4 +496,5 @@ void npc_aged_flows_bitmap_free(struct roc_npc *roc_npc); int npc_aging_ctrl_thread_create(struct roc_npc *roc_npc, const struct roc_npc_action_age *age, struct roc_npc_flow *flow); void npc_aging_ctrl_thread_destroy(struct roc_npc *roc_npc); +int npc_rss_free_grp_get(struct npc *npc, uint32_t *pos); #endif /* _ROC_NPC_PRIV_H_ */ diff --git 
a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index bd28803013..feda34b852 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -91,6 +91,10 @@ INTERNAL { roc_dpi_disable; roc_dpi_enable; roc_error_msg_get; + roc_eswitch_npc_mcam_delete_rule; + roc_eswitch_npc_mcam_rx_rule; + roc_eswitch_npc_mcam_tx_rule; + roc_eswitch_npc_rss_action_configure; roc_hash_md5_gen; roc_hash_sha1_gen; roc_hash_sha256_gen; @@ -443,6 +447,7 @@ INTERNAL { roc_npc_flow_dump; roc_npc_flow_mcam_dump; roc_npc_flow_parse; + roc_npc_get_free_mcam_entry; roc_npc_get_low_priority_mcam; roc_npc_init; roc_npc_kex_capa_get; @@ -450,6 +455,7 @@ INTERNAL { roc_npc_mark_actions_sub_return; roc_npc_vtag_actions_get; roc_npc_vtag_actions_sub_return; + roc_npc_mcam_alloc_counter; roc_npc_mcam_alloc_entries; roc_npc_mcam_alloc_entry; roc_npc_mcam_clear_counter;
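The newly exported roc_npc_mcam_alloc_counter() pairs with the existing clear/read APIs; as a hedged sketch, this is the pattern the eswitch flow code in patch 08 follows (mirroring its eswitch_npc_get_counter() helper; the function name here is illustrative):

    /* Illustrative counter handling: allocate a counter, attach it to a
     * flow, and clear it so later reads report only new hits.
     */
    static int
    esw_flow_counter_attach(struct roc_npc *npc, struct roc_npc_flow *flow)
    {
    	uint16_t ctr_id;
    	int rc;

    	rc = roc_npc_mcam_alloc_counter(npc, &ctr_id);
    	if (rc)
    		return rc;

    	flow->ctr_id = ctr_id;
    	flow->use_ctr = true;

    	/* Hit counts can later be read with roc_npc_mcam_read_counter(). */
    	return roc_npc_mcam_clear_counter(npc, ctr_id);
    }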
From patchwork Tue Dec 19 17:39:46 2023 X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135352 X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Harman Kalra
Subject: [PATCH v2 07/24] common/cnxk: interface to update VLAN TPID
Date: Tue, 19 Dec 2023 23:09:46 +0530
Message-ID: <20231219174003.72901-8-hkalra@marvell.com>
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com>

Introducing an eswitch variant of the set VLAN TPID API which can be used for PF and VF.

Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_eswitch.c | 15 +++++++++++++++ drivers/common/cnxk/roc_eswitch.h | 4 ++++ drivers/common/cnxk/roc_nix_priv.h | 4 ++-- drivers/common/cnxk/roc_nix_vlan.c | 23 ++++++++++++++++++----- drivers/common/cnxk/version.map | 1 + 5 files changed, 40 insertions(+), 7 deletions(-) diff --git a/drivers/common/cnxk/roc_eswitch.c b/drivers/common/cnxk/roc_eswitch.c index 42a27e7442..7f2a8e6c06 100644 --- a/drivers/common/cnxk/roc_eswitch.c +++ b/drivers/common/cnxk/roc_eswitch.c @@ -283,3 +283,18 @@ roc_eswitch_npc_rss_action_configure(struct roc_npc *roc_npc, struct roc_npc_flo ((uint64_t)(rss_grp_idx & NPC_RSS_ACT_GRP_MASK) << NPC_RSS_ACT_GRP_OFFSET); return 0; } + +int +roc_eswitch_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t tpid, bool is_vf) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + int rc = 0; + + /* Configuring for PF/VF */ + rc = nix_vlan_tpid_set(dev->mbox, dev->pf_func | is_vf, type, tpid); + if (rc) + plt_err("Failed to set tpid for PF/VF, rc %d", rc); + + return rc; +} diff --git a/drivers/common/cnxk/roc_eswitch.h b/drivers/common/cnxk/roc_eswitch.h index 35976b7ff6..0dd23ff76a 100644 --- a/drivers/common/cnxk/roc_eswitch.h +++ b/drivers/common/cnxk/roc_eswitch.h @@ -18,4 +18,8 @@ int __roc_api roc_eswitch_npc_mcam_delete_rule(struct roc_npc *roc_npc, struct r int __roc_api roc_eswitch_npc_rss_action_configure(struct roc_npc *roc_npc, struct roc_npc_flow *flow, uint32_t flowkey_cfg, uint16_t *reta_tbl); + +/* NIX */ +int __roc_api roc_eswitch_nix_vlan_tpid_set(struct roc_nix *nix, uint32_t type, uint16_t tpid, + bool is_vf); #endif /* __ROC_ESWITCH_H__ */
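For illustration, this helper is what the eswitch bring-up path in patch 08 calls to set the outer TPID on the eswitch PF LFs (the snippet below is taken from cnxk_eswitch_nix_rsrc_start() there; passing is_vf = true instead would OR the VF bit into the target pcifunc):

    /* Configure TPID for Eswitch PF LFs (from patch 08). */
    rc = roc_eswitch_nix_vlan_tpid_set(&eswitch_dev->nix, ROC_NIX_VLAN_TYPE_OUTER,
    				       CNXK_ESWITCH_VLAN_TPID, false);
    if (rc)
    	plt_err("Failed to configure tpid, rc %d", rc);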
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index a582b9df33..8767a62577 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -473,9 +473,9 @@ int nix_lf_stat_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint8_t lf_tx_stats, uint8_t lf_rx_stats); int nix_lf_int_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint16_t qints, uint16_t cints); -int nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid, - __io void **ctx_p); +int nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid, __io void **ctx_p); uint8_t nix_tm_lbk_relchan_get(struct nix *nix); +int nix_vlan_tpid_set(struct mbox *mbox, uint16_t pcifunc, uint32_t type, uint16_t tpid); /* * Telemetry diff --git a/drivers/common/cnxk/roc_nix_vlan.c b/drivers/common/cnxk/roc_nix_vlan.c index abd2eb0571..db218593ad 100644 --- a/drivers/common/cnxk/roc_nix_vlan.c +++ b/drivers/common/cnxk/roc_nix_vlan.c @@ -211,18 +211,17 @@ roc_nix_vlan_insert_ena_dis(struct roc_nix *roc_nix, } int -roc_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t tpid) +nix_vlan_tpid_set(struct mbox *mbox, uint16_t pcifunc, uint32_t type, uint16_t tpid) { - struct nix *nix = roc_nix_to_nix_priv(roc_nix); - struct dev *dev = &nix->dev; - struct mbox *mbox = mbox_get(dev->mbox); struct nix_set_vlan_tpid *tpid_cfg; int rc = -ENOSPC; - tpid_cfg = mbox_alloc_msg_nix_set_vlan_tpid(mbox); + /* Configuring for PF */ + tpid_cfg = mbox_alloc_msg_nix_set_vlan_tpid(mbox_get(mbox)); if (tpid_cfg == NULL) goto exit; tpid_cfg->tpid = tpid; + tpid_cfg->hdr.pcifunc = pcifunc; if (type & ROC_NIX_VLAN_TYPE_OUTER) tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER; @@ -234,3 +233,17 @@ roc_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t tpid) mbox_put(mbox); return rc; } + +int +roc_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t tpid) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + int rc; + + rc = nix_vlan_tpid_set(dev->mbox, dev->pf_func, type, tpid); + if (rc) + plt_err("Failed to set tpid for PF, rc %d", rc); + + return rc; +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index feda34b852..78c421677d 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -91,6 +91,7 @@ INTERNAL { roc_dpi_disable; roc_dpi_enable; roc_error_msg_get; + roc_eswitch_nix_vlan_tpid_set; roc_eswitch_npc_mcam_delete_rule; roc_eswitch_npc_mcam_rx_rule; roc_eswitch_npc_mcam_tx_rule; From patchwork Tue Dec 19 17:39:47 2023 X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135353 X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Harman Kalra
Subject: [PATCH v2 08/24] net/cnxk: eswitch flow configurations
Date: Tue, 19 Dec 2023 23:09:47 +0530
Message-ID: <20231219174003.72901-9-hkalra@marvell.com>
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com>

- Adding flow rules for eswitch PF and VF
- Interfaces to delete and shift flow rules

Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 43 ++- drivers/net/cnxk/cnxk_eswitch.h | 25 +- drivers/net/cnxk/cnxk_eswitch_devargs.c | 1 + drivers/net/cnxk/cnxk_eswitch_flow.c | 445 ++++++++++++++++++++++++ drivers/net/cnxk/meson.build | 1 + 5 files changed, 511 insertions(+), 4 deletions(-) create mode 100644 drivers/net/cnxk/cnxk_eswitch_flow.c diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c index 563b224a6c..1cb0f0310a 100644 --- a/drivers/net/cnxk/cnxk_eswitch.c +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -2,11 +2,30 @@ * Copyright(C) 2023 Marvell.
*/ +#include + #include #include #define CNXK_NIX_DEF_SQ_COUNT 512 +struct cnxk_esw_repr_hw_info * +cnxk_eswitch_representor_hw_info(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func) +{ + struct cnxk_eswitch_devargs *esw_da; + int i, j; + + /* Traversing the initialized represented list */ + for (i = 0; i < eswitch_dev->nb_esw_da; i++) { + esw_da = &eswitch_dev->esw_da[i]; + for (j = 0; j < esw_da->nb_repr_ports; j++) { + if (esw_da->repr_hw_info[j].hw_func == hw_func) + return &esw_da->repr_hw_info[j]; + } + } + return NULL; +} + static int eswitch_hw_rsrc_cleanup(struct cnxk_eswitch_dev *eswitch_dev) { @@ -48,6 +67,10 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) cnxk_rep_dev_remove(eswitch_dev); eswitch_hw_rsrc_cleanup(eswitch_dev); + + /* Cleanup NPC rxtx flow rules */ + cnxk_eswitch_flow_rules_remove_list(eswitch_dev, &eswitch_dev->esw_flow_list); + /* Check if this device is hosting common resource */ nix = roc_idev_npa_nix_get(); if (!nix || nix->pci_dev != pci_dev) { @@ -58,7 +81,7 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) /* Try nix fini now */ rc = roc_nix_dev_fini(&eswitch_dev->nix); if (rc == -EAGAIN) { - plt_info("%s: common resource in use by other devices", pci_dev->name); + plt_esw_dbg("%s: common resource in use by other devices", pci_dev->name); goto exit; } else if (rc) { plt_err("Failed in nix dev fini, rc=%d", rc); @@ -154,6 +177,21 @@ cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev) goto done; } + /* Install eswitch PF mcam rules */ + rc = cnxk_eswitch_pfvf_flow_rules_install(eswitch_dev, false); + if (rc) { + plt_err("Failed to install rxtx rules, rc %d", rc); + goto done; + } + + /* Configure TPID for Eswitch PF LFs */ + rc = roc_eswitch_nix_vlan_tpid_set(&eswitch_dev->nix, ROC_NIX_VLAN_TYPE_OUTER, + CNXK_ESWITCH_VLAN_TPID, false); + if (rc) { + plt_err("Failed to configure tpid, rc %d", rc); + goto done; + } + rc = roc_npc_mcam_enable_all_entries(&eswitch_dev->npc, 1); if (rc) { plt_err("Failed to enable NPC entries %d", rc); @@ -600,6 +638,9 @@ eswitch_hw_rsrc_setup(struct cnxk_eswitch_dev *eswitch_dev) if (rc) goto rsrc_cleanup; + /* List for eswitch default flows */ + TAILQ_INIT(&eswitch_dev->esw_flow_list); + return rc; rsrc_cleanup: eswitch_hw_rsrc_cleanup(eswitch_dev); diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h index 4908c3ba95..470e4035bf 100644 --- a/drivers/net/cnxk/cnxk_eswitch.h +++ b/drivers/net/cnxk/cnxk_eswitch.h @@ -13,11 +13,10 @@ #include "cn10k_tx.h" #define CNXK_ESWITCH_CTRL_MSG_SOCK_PATH "/tmp/cxk_rep_ctrl_msg_sock" +#define CNXK_ESWITCH_VLAN_TPID ROC_ESWITCH_VLAN_TPID #define CNXK_REP_ESWITCH_DEV_MZ "cnxk_eswitch_dev" -#define CNXK_ESWITCH_VLAN_TPID 0x8100 /* TODO change */ #define CNXK_ESWITCH_MAX_TXQ 256 #define CNXK_ESWITCH_MAX_RXQ 256 -#define CNXK_ESWITCH_LBK_CHAN 63 #define CNXK_ESWITCH_VFPF_SHIFT 8 #define CNXK_ESWITCH_QUEUE_STATE_RELEASED 0 @@ -25,6 +24,7 @@ #define CNXK_ESWITCH_QUEUE_STATE_STARTED 2 #define CNXK_ESWITCH_QUEUE_STATE_STOPPED 3 +TAILQ_HEAD(eswitch_flow_list, roc_npc_flow); enum cnxk_esw_da_pattern_type { CNXK_ESW_DA_TYPE_LIST = 0, CNXK_ESW_DA_TYPE_PFVF, @@ -39,6 +39,9 @@ struct cnxk_esw_repr_hw_info { uint16_t pfvf; /* representor port id assigned to representee */ uint16_t port_id; + uint16_t num_flow_entries; + + TAILQ_HEAD(flow_list, roc_npc_flow) repr_flow_list; }; /* Structure representing per devarg information - this can be per representee @@ -90,7 +93,6 @@ struct cnxk_eswitch_cxq { uint8_t state; }; 
-TAILQ_HEAD(eswitch_flow_list, roc_npc_flow); struct cnxk_eswitch_dev { /* Input parameters */ struct plt_pci_device *pci_dev; @@ -116,6 +118,13 @@ struct cnxk_eswitch_dev { uint16_t rep_cnt; uint8_t configured; + /* NPC rxtx rules */ + struct flow_list esw_flow_list; + uint16_t num_entries; + bool eswitch_vf_rules_setup; + uint16_t esw_pf_entry; + uint16_t esw_vf_entry; + /* Eswitch Representors Devargs */ uint16_t nb_esw_da; uint16_t last_probed; @@ -144,7 +153,10 @@ cnxk_eswitch_pmd_priv(void) return mz->addr; } +/* HW Resources */ int cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev); +struct cnxk_esw_repr_hw_info *cnxk_eswitch_representor_hw_info(struct cnxk_eswitch_dev *eswitch_dev, + uint16_t hw_func); int cnxk_eswitch_repr_devargs(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev); int cnxk_eswitch_representor_info_get(struct cnxk_eswitch_dev *eswitch_dev, struct rte_eth_representor_info *info); @@ -158,4 +170,11 @@ int cnxk_eswitch_rxq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid); int cnxk_eswitch_rxq_stop(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid); int cnxk_eswitch_txq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid); int cnxk_eswitch_txq_stop(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid); +/* Flow Rules */ +int cnxk_eswitch_flow_rules_install(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func); +int cnxk_eswitch_flow_rules_delete(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func); +int cnxk_eswitch_pfvf_flow_rules_install(struct cnxk_eswitch_dev *eswitch_dev, bool is_vf); +int cnxk_eswitch_flow_rule_shift(uint16_t hw_func, uint16_t *new_entry); +int cnxk_eswitch_flow_rules_remove_list(struct cnxk_eswitch_dev *eswitch_dev, + struct flow_list *list); #endif /* __CNXK_ESWITCH_H__ */ diff --git a/drivers/net/cnxk/cnxk_eswitch_devargs.c b/drivers/net/cnxk/cnxk_eswitch_devargs.c index f1a1b05a99..aaefad2085 100644 --- a/drivers/net/cnxk/cnxk_eswitch_devargs.c +++ b/drivers/net/cnxk/cnxk_eswitch_devargs.c @@ -170,6 +170,7 @@ populate_repr_hw_info(struct cnxk_eswitch_dev *eswitch_dev, struct rte_eth_devar esw_da->repr_hw_info[i].pfvf = (eth_da->type == RTE_ETH_REPRESENTOR_PF) ? eth_da->ports[0] : eth_da->representor_ports[i]; + TAILQ_INIT(&esw_da->repr_hw_info[i].repr_flow_list); plt_esw_dbg(" HW func %x index %d", hw_func, j); } diff --git a/drivers/net/cnxk/cnxk_eswitch_flow.c b/drivers/net/cnxk/cnxk_eswitch_flow.c new file mode 100644 index 0000000000..f2ad87c75a --- /dev/null +++ b/drivers/net/cnxk/cnxk_eswitch_flow.c @@ -0,0 +1,445 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ + +#include + +#include + +const uint8_t eswitch_vlan_rss_key[ROC_NIX_RSS_KEY_LEN] = { + 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, + 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, + 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, + 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, + 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, + 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE, 0xFE}; + +int +cnxk_eswitch_flow_rules_remove_list(struct cnxk_eswitch_dev *eswitch_dev, struct flow_list *list) +{ + struct roc_npc_flow *flow, *tvar; + int rc = 0; + + RTE_TAILQ_FOREACH_SAFE(flow, list, next, tvar) { + plt_esw_dbg("Removing flow %d", flow->mcam_id); + rc = roc_eswitch_npc_mcam_delete_rule(&eswitch_dev->npc, flow); + if (rc) + plt_err("Failed to delete rule %d", flow->mcam_id); + rc = roc_npc_mcam_free(&eswitch_dev->npc, flow); + if (rc) + plt_err("Failed to free entry %d", flow->mcam_id); + TAILQ_REMOVE(list, flow, next); + rte_free(flow); + } + + return rc; +} + +static int +eswitch_npc_vlan_rss_configure(struct roc_npc *roc_npc, struct roc_npc_flow *flow) +{ + struct roc_nix *roc_nix = roc_npc->roc_nix; + uint32_t qid, idx, hash, vlan_tci; + uint16_t *reta, reta_sz, id; + int rc = 0; + + id = flow->mcam_id; + /* Setting up the key */ + roc_nix_rss_key_set(roc_nix, eswitch_vlan_rss_key); + + reta_sz = roc_nix->reta_sz; + reta = plt_zmalloc(reta_sz * sizeof(uint16_t), 0); + if (!reta) { + plt_err("Failed to allocate mem for reta table"); + rc = -ENOMEM; + goto fail; + } + for (qid = 0; qid < reta_sz; qid++) { + vlan_tci = (1 << CNXK_ESWITCH_VFPF_SHIFT) | qid; + hash = rte_softrss(&vlan_tci, 1, eswitch_vlan_rss_key); + idx = hash & 0xFF; + reta[idx] = qid; + } + flow->mcam_id = id; + rc = roc_eswitch_npc_rss_action_configure(roc_npc, flow, FLOW_KEY_TYPE_VLAN, reta); + if (rc) { + plt_err("Failed to configure rss action, err %d", rc); + goto done; + } + +done: + plt_free(reta); +fail: + return rc; +} + +static int +eswitch_pfvf_mcam_install_rules(struct cnxk_eswitch_dev *eswitch_dev, struct roc_npc_flow *flow, + bool is_vf) +{ + uint16_t vlan_tci = 0, hw_func; + int rc; + + hw_func = eswitch_dev->npc.pf_func | is_vf; + if (!is_vf) { + /* Eswitch PF RX VLAN rule */ + vlan_tci = 1ULL << CNXK_ESWITCH_VFPF_SHIFT; + rc = roc_eswitch_npc_mcam_rx_rule(&eswitch_dev->npc, flow, hw_func, vlan_tci, + 0xFF00); + if (rc) { + plt_err("Failed to install RX rule for ESW PF to ESW VF, rc %d", rc); + goto exit; + } + plt_esw_dbg("Installed eswitch PF RX rule %d", flow->mcam_id); + rc = eswitch_npc_vlan_rss_configure(&eswitch_dev->npc, flow); + if (rc) + goto exit; + flow->enable = true; + } else { + /* Eswitch VF RX VLAN rule */ + rc = roc_eswitch_npc_mcam_rx_rule(&eswitch_dev->npc, flow, hw_func, vlan_tci, + 0xFF00); + if (rc) { + plt_err("Failed to install RX rule for ESW VF to ESW PF, rc %d", rc); + goto exit; + } + flow->enable = true; + plt_esw_dbg("Installed eswitch VF RX rule %d", flow->mcam_id); + } + + return 0; +exit: + return rc; +} + +static int +eswitch_npc_get_counter(struct roc_npc *npc, struct roc_npc_flow *flow) +{ + uint16_t ctr_id; + int rc; + + rc = roc_npc_mcam_alloc_counter(npc, &ctr_id); + if (rc < 0) { + plt_err("Failed to allocate counter, rc %d", rc); + goto fail; + } + flow->ctr_id = ctr_id; + flow->use_ctr = true; + + rc = roc_npc_mcam_clear_counter(npc, flow->ctr_id); + if (rc < 0) { + plt_err("Failed to clear counter idx %d, rc %d", flow->ctr_id, rc); + goto free; + } + return 0; +free: + roc_npc_mcam_free_counter(npc, ctr_id); +fail: + return rc; +}
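To make the spray behavior concrete, here is the reta computation from eswitch_npc_vlan_rss_configure() above as a standalone, hedged sketch (assuming reta_sz is at most 256, which the 0xFF index mask implies; the function name is illustrative):

    #include <stdint.h>
    #include <rte_thash.h>

    /* For each eswitch RX queue qid, software-hash the TCI that TX rules
     * stamp on representee traffic ((1 << CNXK_ESWITCH_VFPF_SHIFT) | qid)
     * and point that reta slot at qid, so the VLAN-keyed RSS action spreads
     * representee traffic across the eswitch queues.
     */
    static void
    esw_fill_reta(uint16_t *reta, uint16_t reta_sz, const uint8_t *rss_key)
    {
    	uint32_t qid, vlan_tci, hash;

    	for (qid = 0; qid < reta_sz; qid++) {
    		vlan_tci = (1u << 8) | qid;
    		hash = rte_softrss(&vlan_tci, 1, rss_key);
    		reta[hash & 0xFF] = qid;
    	}
    }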
+static int +eswitch_npc_get_counter_entry_ref(struct roc_npc *npc, struct roc_npc_flow *flow, + struct roc_npc_flow *ref_flow) +{ + int rc = 0, resp_count; + + rc = eswitch_npc_get_counter(npc, flow); + if (rc) + goto free; + + /* Allocate an entry with higher priority than the ref flow */ + rc = roc_npc_mcam_alloc_entry(npc, flow, ref_flow, NPC_MCAM_HIGHER_PRIO, &resp_count); + if (rc) { + plt_err("Failed to allocate entry, err %d", rc); + goto free; + } + plt_esw_dbg("New entry %d ref entry %d resp_count %d", flow->mcam_id, ref_flow->mcam_id, + resp_count); + + return 0; +free: + roc_npc_mcam_free_counter(npc, flow->ctr_id); + return rc; +} + +int +cnxk_eswitch_flow_rule_shift(uint16_t hw_func, uint16_t *entry) +{ + struct cnxk_esw_repr_hw_info *repr_info; + struct cnxk_eswitch_dev *eswitch_dev; + struct roc_npc_flow *ref_flow, *flow; + uint16_t curr_entry, new_entry; + int rc = 0, resp_count; + + eswitch_dev = cnxk_eswitch_pmd_priv(); + repr_info = cnxk_eswitch_representor_hw_info(eswitch_dev, hw_func); + if (!repr_info) { + plt_warn("Failed to get representor group for %x", hw_func); + rc = -ENOENT; + goto fail; + } + + ref_flow = TAILQ_FIRST(&repr_info->repr_flow_list); + if (*entry > ref_flow->mcam_id) { + flow = plt_zmalloc(sizeof(struct roc_npc_flow), 0); + if (!flow) { + plt_err("Failed to allocate memory"); + rc = -ENOMEM; + goto fail; + } + + /* Allocate a higher priority flow rule */ + rc = roc_npc_mcam_alloc_entry(&eswitch_dev->npc, flow, ref_flow, + NPC_MCAM_HIGHER_PRIO, &resp_count); + if (rc < 0) { + plt_err("Failed to allocate a new mcam entry, rc %d", rc); + goto free_entry; + } + + if (flow->mcam_id > ref_flow->mcam_id) { + plt_err("New flow %d is still at higher priority than ref_flow %d", + flow->mcam_id, ref_flow->mcam_id); + rc = -EINVAL; + goto free_entry; + } + + plt_info("Before shift: HW_func %x curr_entry %d ref flow id %d new_entry %d", + hw_func, *entry, ref_flow->mcam_id, flow->mcam_id); + + curr_entry = *entry; + new_entry = flow->mcam_id; + + rc = roc_npc_mcam_move(&eswitch_dev->npc, curr_entry, new_entry); + if (rc) { + plt_err("Failed to move entry %d to %d, err %d", curr_entry, new_entry, rc); + goto free_entry; + } + *entry = flow->mcam_id; + + /* Freeing the current entry */ + rc = roc_npc_mcam_free_entry(&eswitch_dev->npc, curr_entry); + if (rc) { + plt_err("Failed to free the old entry, err %d", rc); + goto free_entry; + } + + plt_free(flow); + plt_info("After shift: HW_func %x old_entry %d new_entry %d", hw_func, curr_entry, + *entry); + } + + return 0; +free_entry: + plt_free(flow); +fail: + return rc; +} + +int +cnxk_eswitch_flow_rules_delete(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func) +{ + struct cnxk_esw_repr_hw_info *repr_info; + struct flow_list *list; + int rc = 0; + + repr_info = cnxk_eswitch_representor_hw_info(eswitch_dev, hw_func); + if (!repr_info) { + plt_warn("Failed to get representor group for %x", hw_func); + rc = -ENOENT; + goto fail; + } + list = &repr_info->repr_flow_list; + + plt_esw_dbg("Deleting flows for %x", hw_func); + rc = cnxk_eswitch_flow_rules_remove_list(eswitch_dev, list); + if (rc) + plt_err("Failed to delete rules for hw func %x", hw_func); + +fail: + return rc; +} + +int +cnxk_eswitch_flow_rules_install(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func) +{ + struct roc_npc_flow *rx_flow, *tx_flow, *flow_iter, *esw_pf_flow = NULL; + struct cnxk_esw_repr_hw_info *repr_info; + struct flow_list *list; + uint16_t vlan_tci; + int rc = 0; + + repr_info = cnxk_eswitch_representor_hw_info(eswitch_dev, hw_func); + if (!repr_info) { + plt_err("Failed to get representor group for %x", hw_func); + rc = -EINVAL; + goto fail; + } + list = &repr_info->repr_flow_list; + + /* Taking ESW PF as reference entry for installing new rules */ + TAILQ_FOREACH(flow_iter, &eswitch_dev->esw_flow_list, next) { + if (flow_iter->mcam_id == eswitch_dev->esw_pf_entry) { + esw_pf_flow = flow_iter; + break; + } + } + + if (!esw_pf_flow) { + plt_err("Failed to get the ESW PF flow"); + rc = -EINVAL; + goto fail; + } + + /* Installing RX rule */ + rx_flow = plt_zmalloc(sizeof(struct roc_npc_flow), 0); + if (!rx_flow) { + plt_err("Failed to allocate memory"); + rc = -ENOMEM; + goto fail; + } + + rc = eswitch_npc_get_counter_entry_ref(&eswitch_dev->npc, rx_flow, esw_pf_flow); + if (rc) { + plt_err("Failed to get counter and mcam entry, rc %d", rc); + goto free_rx_flow; + } + + /* VLAN TCI value for this representee is the rep id from AF driver */ + vlan_tci = repr_info->rep_id; + rc = roc_eswitch_npc_mcam_rx_rule(&eswitch_dev->npc, rx_flow, hw_func, vlan_tci, 0xFFFF); + if (rc) { + plt_err("Failed to install RX rule for ESW PF to ESW VF, rc %d", rc); + goto free_rx_entry; + } + rx_flow->enable = true; + /* List in ascending order of mcam entries */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id > rx_flow->mcam_id) { + TAILQ_INSERT_BEFORE(flow_iter, rx_flow, next); + goto done_rx; + } + } + TAILQ_INSERT_TAIL(list, rx_flow, next); +done_rx: + repr_info->num_flow_entries++; + plt_esw_dbg("Installed RX flow rule %d for representee %x with vlan tci %x MCAM id %d", + eswitch_dev->num_entries, hw_func, vlan_tci, rx_flow->mcam_id); + + /* Installing TX rule */ + tx_flow = plt_zmalloc(sizeof(struct roc_npc_flow), 0); + if (!tx_flow) { + plt_err("Failed to allocate memory"); + rc = -ENOMEM; + goto remove_rx_rule; + } + + rc = eswitch_npc_get_counter_entry_ref(&eswitch_dev->npc, tx_flow, esw_pf_flow); + if (rc) { + plt_err("Failed to get counter and mcam entry, rc %d", rc); + goto free_tx_flow; + } + + vlan_tci = (1ULL << CNXK_ESWITCH_VFPF_SHIFT) | repr_info->rep_id; + rc = roc_eswitch_npc_mcam_tx_rule(&eswitch_dev->npc, tx_flow, hw_func, vlan_tci); + if (rc) { + plt_err("Failed to install TX rule, rc %d", rc); + goto free_tx_entry; + } + tx_flow->enable = true; + /* List in ascending order of mcam entries */ + TAILQ_FOREACH(flow_iter, list, next)
{ + if (flow_iter->mcam_id > tx_flow->mcam_id) { + TAILQ_INSERT_BEFORE(flow_iter, tx_flow, next); + goto done_tx; + } + } + TAILQ_INSERT_TAIL(list, tx_flow, next); +done_tx: + repr_info->num_flow_entries++; + plt_esw_dbg("Installed TX flow rule %d for representee %x with vlan tci %x MCAM id %d", + repr_info->num_flow_entries, hw_func, vlan_tci, tx_flow->mcam_id); + + return 0; +free_tx_entry: + roc_npc_mcam_free(&eswitch_dev->npc, tx_flow); +free_tx_flow: + rte_free(tx_flow); +remove_rx_rule: + TAILQ_REMOVE(list, rx_flow, next); +free_rx_entry: + roc_npc_mcam_free(&eswitch_dev->npc, rx_flow); +free_rx_flow: + rte_free(rx_flow); +fail: + return rc; +} + +int +cnxk_eswitch_pfvf_flow_rules_install(struct cnxk_eswitch_dev *eswitch_dev, bool is_vf) +{ + struct roc_npc_flow *flow, *flow_iter; + struct flow_list *list; + int rc = 0; + + list = &eswitch_dev->esw_flow_list; + flow = plt_zmalloc(sizeof(struct roc_npc_flow), 0); + if (!flow) { + plt_err("Failed to allocate memory"); + rc = -ENOMEM; + goto fail; + } + + rc = eswitch_npc_get_counter(&eswitch_dev->npc, flow); + if (rc) { + plt_err("Failed to get counter and mcam entry, rc %d", rc); + goto free_flow; + } + if (!is_vf) { + /* Reserving an entry for esw VF but will not be installed */ + rc = roc_npc_get_free_mcam_entry(&eswitch_dev->npc, flow); + if (rc < 0) { + plt_err("Failed to allocate entry for vf, err %d", rc); + goto free_flow; + } + eswitch_dev->esw_vf_entry = flow->mcam_id; + /* Allocate an entry for esw PF */ + rc = eswitch_npc_get_counter_entry_ref(&eswitch_dev->npc, flow, flow); + if (rc) { + plt_err("Failed to allocate entry for pf, err %d", rc); + goto free_flow; + } + eswitch_dev->esw_pf_entry = flow->mcam_id; + plt_esw_dbg("Allocated entries for esw: PF %d and VF %d", eswitch_dev->esw_pf_entry, + eswitch_dev->esw_vf_entry); + } else { + flow->mcam_id = eswitch_dev->esw_vf_entry; + } + + rc = eswitch_pfvf_mcam_install_rules(eswitch_dev, flow, is_vf); + if (rc) { + plt_err("Failed to install entries, rc %d", rc); + goto free_flow; + } + + /* List in ascending order of mcam entries */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id > flow->mcam_id) { + TAILQ_INSERT_BEFORE(flow_iter, flow, next); + goto done; + } + } + TAILQ_INSERT_TAIL(list, flow, next); +done: + eswitch_dev->num_entries++; + plt_esw_dbg("Installed new eswitch flow rule %d with MCAM id %d", eswitch_dev->num_entries, + flow->mcam_id); + + return 0; + +free_flow: + cnxk_eswitch_flow_rules_remove_list(eswitch_dev, &eswitch_dev->esw_flow_list); +fail: + return rc; +} diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build index fcd5d3d569..488e89253d 100644 --- a/drivers/net/cnxk/meson.build +++ b/drivers/net/cnxk/meson.build @@ -30,6 +30,7 @@ sources = files( 'cnxk_ethdev_sec_telemetry.c', 'cnxk_eswitch.c', 'cnxk_eswitch_devargs.c', + 'cnxk_eswitch_flow.c', 'cnxk_link.c', 'cnxk_lookup.c', 'cnxk_ptp.c', From patchwork Tue Dec 19 17:39:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135354 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1EE0943747; Tue, 19 Dec 2023 18:41:42 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CFC8942E81; Tue, 19 Dec 2023 18:41:01 +0100 (CET) Received: 
from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 6581242E7A for ; Tue, 19 Dec 2023 18:40:58 +0100 (CET) From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 09/24] net/cnxk: eswitch fastpath routines Date: Tue, 19 Dec 2023 23:09:48 +0530 Message-ID: <20231219174003.72901-10-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> MIME-Version: 1.0 List-Id: DPDK patches and discussions Implement the Rx and Tx fast path routines which are invoked from the respective representors' Rx and Tx burst functions. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.h | 5 + drivers/net/cnxk/cnxk_eswitch_rxtx.c | 212 +++++++++++++++++++++++++++ drivers/net/cnxk/meson.build | 1 + 3 files changed, 218 insertions(+) create mode 100644 drivers/net/cnxk/cnxk_eswitch_rxtx.c diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h index 470e4035bf..d92c4f4778 100644 --- a/drivers/net/cnxk/cnxk_eswitch.h +++ b/drivers/net/cnxk/cnxk_eswitch.h @@ -177,4 +177,9 @@ int cnxk_eswitch_pfvf_flow_rules_install(struct cnxk_eswitch_dev *eswitch_dev, b int cnxk_eswitch_flow_rule_shift(uint16_t hw_func, uint16_t *new_entry); int cnxk_eswitch_flow_rules_remove_list(struct cnxk_eswitch_dev *eswitch_dev, struct flow_list *list); +/* Rx/Tx fast path routines */ +uint16_t 
cnxk_eswitch_dev_tx_burst(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, + struct rte_mbuf **pkts, uint16_t nb_tx, const uint16_t flags); +uint16_t cnxk_eswitch_dev_rx_burst(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, + struct rte_mbuf **pkts, uint16_t nb_pkts); #endif /* __CNXK_ESWITCH_H__ */ diff --git a/drivers/net/cnxk/cnxk_eswitch_rxtx.c b/drivers/net/cnxk/cnxk_eswitch_rxtx.c new file mode 100644 index 0000000000..b5a69e3338 --- /dev/null +++ b/drivers/net/cnxk/cnxk_eswitch_rxtx.c @@ -0,0 +1,212 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include + +static __rte_always_inline struct rte_mbuf * +eswitch_nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off) +{ + rte_iova_t buff; + + /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */ + buff = *((rte_iova_t *)((uint64_t *)cq + 9)); + return (struct rte_mbuf *)(buff - data_off); +} + +static inline uint64_t +eswitch_nix_rx_nb_pkts(struct roc_nix_cq *cq, const uint64_t wdata, const uint32_t qmask) +{ + uint64_t reg, head, tail; + uint32_t available; + + /* Update the available count if cached value is not enough */ + + /* Use LDADDA version to avoid reorder */ + reg = roc_atomic64_add_sync(wdata, cq->status); + /* CQ_OP_STATUS operation error */ + if (reg & BIT_ULL(NIX_CQ_OP_STAT_OP_ERR) || reg & BIT_ULL(NIX_CQ_OP_STAT_CQ_ERR)) + return 0; + + tail = reg & 0xFFFFF; + head = (reg >> 20) & 0xFFFFF; + if (tail < head) + available = tail - head + qmask + 1; + else + available = tail - head; + + return available; +} + +static inline void +nix_cn9k_xmit_one(uint64_t *cmd, void *lmt_addr, const plt_iova_t io_addr) +{ + uint64_t lmt_status; + + do { + roc_lmt_mov(lmt_addr, cmd, 0); + lmt_status = roc_lmt_submit_ldeor(io_addr); + } while (lmt_status == 0); +} + +uint16_t +cnxk_eswitch_dev_tx_burst(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, + struct rte_mbuf **pkts, uint16_t nb_xmit, const uint16_t flags) +{ + struct roc_nix_sq *sq = &eswitch_dev->txq[qid].sqs; + struct roc_nix_rq *rq = &eswitch_dev->rxq[qid].rqs; + uint16_t lmt_id, pkt = 0, nb_tx = 0; + struct nix_send_ext_s *send_hdr_ext; + uint64_t aura_handle, cmd[6], data; + struct nix_send_hdr_s *send_hdr; + uint16_t vlan_tci = qid; + union nix_send_sg_s *sg; + uintptr_t lmt_base, pa; + int64_t fc_pkts, dw_m1; + rte_iova_t io_addr; + + if (unlikely(eswitch_dev->txq[qid].state != CNXK_ESWITCH_QUEUE_STATE_STARTED)) + return 0; + + lmt_base = sq->roc_nix->lmt_base; + io_addr = sq->io_addr; + aura_handle = rq->aura_handle; + /* Get LMT base address and LMT ID as per thread ID */ + lmt_id = roc_plt_control_lmt_id_get(); + lmt_base += ((uint64_t)lmt_id << ROC_LMT_LINE_SIZE_LOG2); + /* Double word minus 1: LMTST size-1 in units of 128 bits */ + /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */ + dw_m1 = cn10k_nix_tx_ext_subs(flags) + 1; + + memset(cmd, 0, sizeof(cmd)); + send_hdr = (struct nix_send_hdr_s *)&cmd[0]; + send_hdr->w0.sizem1 = dw_m1; + send_hdr->w0.sq = sq->qid; + + if (dw_m1 >= 2) { + send_hdr_ext = (struct nix_send_ext_s *)&cmd[2]; + send_hdr_ext->w0.subdc = NIX_SUBDC_EXT; + if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) { + send_hdr_ext->w1.vlan0_ins_ena = true; + /* 2B before end of l2 header */ + send_hdr_ext->w1.vlan0_ins_ptr = 12; + send_hdr_ext->w1.vlan0_ins_tci = 0; + } + sg = (union nix_send_sg_s *)&cmd[4]; + } else { + sg = (union nix_send_sg_s *)&cmd[2]; + } + + sg->subdc = NIX_SUBDC_SG; + sg->segs = 1; + sg->ld_type = NIX_SENDLDTYPE_LDD; + + /* Tx */ + fc_pkts = ((int64_t)sq->nb_sqb_bufs_adj 
- *((uint64_t *)sq->fc)) << sq->sqes_per_sqb_log2; + + if (fc_pkts < 0) + nb_tx = 0; + else + nb_tx = PLT_MIN(nb_xmit, (uint64_t)fc_pkts); + + for (pkt = 0; pkt < nb_tx; pkt++) { + send_hdr->w0.total = pkts[pkt]->pkt_len; + /* TODO: revisit */ + if (pkts[pkt]->pool) { + aura_handle = pkts[pkt]->pool->pool_id; + send_hdr->w0.aura = roc_npa_aura_handle_to_aura(aura_handle); + } else { + send_hdr->w0.df = 1; + } + if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) + send_hdr_ext->w1.vlan0_ins_tci = vlan_tci; + sg->seg1_size = pkts[pkt]->pkt_len; + *(plt_iova_t *)(sg + 1) = rte_mbuf_data_iova(pkts[pkt]); + + plt_esw_dbg("Transmitting pkt %d (%p) vlan tci %x on sq %d esw qid %d", pkt, + pkts[pkt], vlan_tci, sq->qid, qid); + if (roc_model_is_cn9k()) { + nix_cn9k_xmit_one(cmd, sq->lmt_addr, sq->io_addr); + } else { + cn10k_nix_xmit_mv_lmt_base(lmt_base, cmd, flags); + /* PA<6:4> = LMTST size-1 in units of 128 bits. Size of the first LMTST in + * burst. + */ + pa = io_addr | (dw_m1 << 4); + /* <15:12> = CNTM1: Count minus one of LMTSTs in the burst */ + data = (0ULL << 12); + data &= ~0x7ULL; + /* <10:0> = LMT_ID: Identifies which LMT line is used for the first LMTST + */ + data |= (uint64_t)lmt_id; + + /* STEOR0 */ + roc_lmt_submit_steorl(data, pa); + rte_io_wmb(); + } + } + + return nb_tx; +} + +uint16_t +cnxk_eswitch_dev_rx_burst(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, + struct rte_mbuf **pkts, uint16_t nb_pkts) +{ + struct roc_nix_rq *rq = &eswitch_dev->rxq[qid].rqs; + struct roc_nix_cq *cq = &eswitch_dev->cxq[qid].cqs; + const union nix_rx_parse_u *rx; + struct nix_cqe_hdr_s *cqe; + uint64_t pkt = 0, nb_rx; + struct rte_mbuf *mbuf; + uint64_t wdata; + uint32_t qmask; + uintptr_t desc; + uint32_t head; + + if (unlikely(eswitch_dev->rxq[qid].state != CNXK_ESWITCH_QUEUE_STATE_STARTED)) + return 0; + + wdata = cq->wdata; + qmask = cq->qmask; + desc = (uintptr_t)cq->desc_base; + nb_rx = eswitch_nix_rx_nb_pkts(cq, wdata, qmask); + nb_rx = RTE_MIN(nb_rx, nb_pkts); + head = cq->head; + + /* Nothing to receive */ + if (!nb_rx) + return 0; + + /* Rx */ + for (pkt = 0; pkt < nb_rx; pkt++) { + /* Prefetch N desc ahead */ + rte_prefetch_non_temporal((void *)(desc + (CQE_SZ((head + 2) & qmask)))); + cqe = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head)); + rx = (const union nix_rx_parse_u *)((const uint64_t *)cqe + 1); + + /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */ + mbuf = eswitch_nix_get_mbuf_from_cqe(cqe, rq->first_skip); + mbuf->pkt_len = rx->pkt_lenm1 + 1; + mbuf->data_len = rx->pkt_lenm1 + 1; + mbuf->data_off = 128; + /* Rx parse to capture vlan info */ + if (rx->vtag0_valid) + mbuf->vlan_tci = rx->vtag0_tci; + /* Populate RSS hash */ + mbuf->hash.rss = cqe->tag; + mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH; + pkts[pkt] = mbuf; + roc_prefetch_store_keep(mbuf); + plt_esw_dbg("Packet %d received on queue %d esw qid %d hash %x mbuf %p vlan tci %d", + (uint32_t)pkt, rq->qid, qid, mbuf->hash.rss, mbuf, mbuf->vlan_tci); + head++; + head &= qmask; + } + + /* Free all the CQEs that we've processed */ + rte_write64_relaxed((wdata | nb_rx), (void *)cq->door); + cq->head = head; + + return nb_rx; +} diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build index 488e89253d..7121845dc6 100644 --- a/drivers/net/cnxk/meson.build +++ b/drivers/net/cnxk/meson.build @@ -31,6 +31,7 @@ sources = files( 'cnxk_eswitch.c', 'cnxk_eswitch_devargs.c', 'cnxk_eswitch_flow.c', + 'cnxk_eswitch_rxtx.c', 'cnxk_link.c', 'cnxk_lookup.c', 'cnxk_ptp.c', From patchwork Tue Dec 19 17:39:49 2023 
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135355 X-Patchwork-Delegate: jerinj@marvell.com From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 10/24] net/cnxk: add representor control plane Date: Tue, 19 Dec 2023 23:09:49 +0530 Message-ID: <20231219174003.72901-11-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> List-Id: DPDK patches and discussions Implement the control path for representor ports, where represented ports can be configured using TLV messaging.
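To illustrate the framing, a companion application could build a READY request as nested TLVs using the helpers added in this patch. A minimal sketch, assuming the caller provides a buffer of at least CNXK_REP_MSG_MAX_BUFFER_SZ bytes; the representee pcifunc value is hypothetical:

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include "cnxk_rep_msg.h"

    static int
    frame_ready_msg(void *buf, uint32_t buf_sz, uint32_t *len)
    {
            /* READY meta: val, nb_ports and a flexible array of HW funcs;
             * the receiver in this patch walks nb_ports / 2 entries.
             */
            uint8_t meta[sizeof(cnxk_rep_msg_ready_data_t) + sizeof(uint16_t)];
            cnxk_rep_msg_ready_data_t *rdata = (cnxk_rep_msg_ready_data_t *)meta;
            uint16_t hw_func = 0x400;       /* hypothetical representee pcifunc */

            memset(meta, 0, sizeof(meta));
            rdata->val = 1;                 /* 1 => application is ready */
            rdata->nb_ports = 2;
            memcpy(rdata->data, &hw_func, sizeof(hw_func));

            *len = 0;
            cnxk_rep_msg_populate_header(buf, len);
            cnxk_rep_msg_populate_command_meta(buf, len, meta, sizeof(meta),
                                               CNXK_REP_MSG_READY);
            cnxk_rep_msg_populate_msg_end(buf, len);

            /* Same post-populate length check the driver uses before sending */
            return (*len > buf_sz) ? -E2BIG : 0;
    }

The receiver validates the header signature and then walks the records until CNXK_REP_MSG_END, as implemented below in cnxk_rep_msg.c.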
Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 67 ++- drivers/net/cnxk/cnxk_eswitch.h | 8 + drivers/net/cnxk/cnxk_rep.c | 52 ++ drivers/net/cnxk/cnxk_rep.h | 3 + drivers/net/cnxk/cnxk_rep_msg.c | 823 ++++++++++++++++++++++++++++++++ drivers/net/cnxk/cnxk_rep_msg.h | 95 ++++ drivers/net/cnxk/meson.build | 1 + 7 files changed, 1041 insertions(+), 8 deletions(-) create mode 100644 drivers/net/cnxk/cnxk_rep_msg.c create mode 100644 drivers/net/cnxk/cnxk_rep_msg.h diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c index 1cb0f0310a..ffcf89b1b1 100644 --- a/drivers/net/cnxk/cnxk_eswitch.c +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -9,6 +9,27 @@ #define CNXK_NIX_DEF_SQ_COUNT 512 +int +cnxk_eswitch_representor_id(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, + uint16_t *rep_id) +{ + struct cnxk_esw_repr_hw_info *repr_info; + int rc = 0; + + repr_info = cnxk_eswitch_representor_hw_info(eswitch_dev, hw_func); + if (!repr_info) { + plt_warn("Failed to get representor group for %x", hw_func); + rc = -ENOENT; + goto fail; + } + + *rep_id = repr_info->rep_id; + + return 0; +fail: + return rc; +} + struct cnxk_esw_repr_hw_info * cnxk_eswitch_representor_hw_info(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func) { @@ -63,8 +84,38 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) eswitch_dev = cnxk_eswitch_pmd_priv(); /* Remove representor devices associated with PF */ - if (eswitch_dev->repr_cnt.nb_repr_created) + if (eswitch_dev->repr_cnt.nb_repr_created) { + /* Exiting the rep msg ctrl thread */ + if (eswitch_dev->start_ctrl_msg_thrd) { + uint32_t sunlen; + struct sockaddr_un sun = {0}; + int sock_fd; + + eswitch_dev->start_ctrl_msg_thrd = false; + if (!eswitch_dev->client_connected) { + plt_esw_dbg("Establishing connection for teardown"); + sock_fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (sock_fd == -1) { + plt_err("Failed to open socket. 
err %d", -errno); + return -errno; + } + sun.sun_family = AF_UNIX; + sunlen = sizeof(struct sockaddr_un); + strncpy(sun.sun_path, CNXK_ESWITCH_CTRL_MSG_SOCK_PATH, + sizeof(sun.sun_path) - 1); + + if (connect(sock_fd, (struct sockaddr *)&sun, sunlen) < 0) { + plt_err("Failed to connect socket: %s, err %d", + CNXK_ESWITCH_CTRL_MSG_SOCK_PATH, errno); + return -errno; + } + } + rte_thread_join(eswitch_dev->rep_ctrl_msg_thread, NULL); + } + + /* Remove representor devices associated with PF */ cnxk_rep_dev_remove(eswitch_dev); + } eswitch_hw_rsrc_cleanup(eswitch_dev); @@ -170,13 +221,6 @@ cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev) goto done; } - /* Enable Rx in NPC */ - rc = roc_nix_npc_rx_ena_dis(&eswitch_dev->nix, true); - if (rc) { - plt_err("Failed to enable NPC rx %d", rc); - goto done; - } - /* Install eswitch PF mcam rules */ rc = cnxk_eswitch_pfvf_flow_rules_install(eswitch_dev, false); if (rc) { @@ -192,6 +236,13 @@ cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev) goto done; } + /* Enable Rx in NPC */ + rc = roc_nix_npc_rx_ena_dis(&eswitch_dev->nix, true); + if (rc) { + plt_err("Failed to enable NPC rx %d", rc); + goto done; + } + rc = roc_npc_mcam_enable_all_entries(&eswitch_dev->npc, 1); if (rc) { plt_err("Failed to enable NPC entries %d", rc); diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h index d92c4f4778..a2f4aa0fcc 100644 --- a/drivers/net/cnxk/cnxk_eswitch.h +++ b/drivers/net/cnxk/cnxk_eswitch.h @@ -133,6 +133,12 @@ struct cnxk_eswitch_dev { /* No of representors */ struct cnxk_eswitch_repr_cnt repr_cnt; + /* Representor control channel field */ + bool start_ctrl_msg_thrd; + rte_thread_t rep_ctrl_msg_thread; + bool client_connected; + int sock_fd; + /* Port representor fields */ rte_spinlock_t rep_lock; uint16_t nb_switch_domain; @@ -155,6 +161,8 @@ cnxk_eswitch_pmd_priv(void) /* HW Resources */ int cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev); +int cnxk_eswitch_representor_id(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, + uint16_t *rep_id); struct cnxk_esw_repr_hw_info *cnxk_eswitch_representor_hw_info(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func); int cnxk_eswitch_repr_devargs(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev); diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c index 295bea3724..f8e1d5b965 100644 --- a/drivers/net/cnxk/cnxk_rep.c +++ b/drivers/net/cnxk/cnxk_rep.c @@ -2,6 +2,7 @@ * Copyright(C) 2023 Marvell. */ #include +#include #define PF_SHIFT 10 #define PF_MASK 0x3F @@ -25,6 +26,48 @@ switch_domain_id_allocate(struct cnxk_eswitch_dev *eswitch_dev, uint16_t pf) return RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID; } +int +cnxk_rep_state_update(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, uint16_t *rep_id) +{ + struct cnxk_rep_dev *rep_dev = NULL; + struct rte_eth_dev *rep_eth_dev; + int i, rc = 0; + + /* Delete the individual PFVF flows as common eswitch VF rule will be used. 
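Representee traffic is then steered by the shared eswitch VF rule installed during the ready handshake.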
*/ + rc = cnxk_eswitch_flow_rules_delete(eswitch_dev, hw_func); + if (rc) { + if (rc != -ENOENT) { + plt_err("Failed to delete %x flow rules", hw_func); + goto fail; + } + } + /* Rep ID for respective HW func */ + rc = cnxk_eswitch_representor_id(eswitch_dev, hw_func, rep_id); + if (rc) { + if (rc != -ENOENT) { + plt_err("Failed to get rep info for %x", hw_func); + goto fail; + } + } + /* Update the state - representee is standalone or part of companian app */ + for (i = 0; i < eswitch_dev->repr_cnt.nb_repr_probed; i++) { + rep_eth_dev = eswitch_dev->rep_info[i].rep_eth_dev; + if (!rep_eth_dev) { + plt_err("Failed to get rep ethdev handle"); + rc = -EINVAL; + goto fail; + } + + rep_dev = cnxk_rep_pmd_priv(rep_eth_dev); + if (rep_dev->hw_func == hw_func && rep_dev->is_vf_active) + rep_dev->native_repte = false; + } + + return 0; +fail: + return rc; +} + int cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev) { @@ -250,6 +293,15 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswi } eswitch_dev->last_probed = i; + /* Launch a thread to handle control messages */ + if (!eswitch_dev->start_ctrl_msg_thrd) { + rc = cnxk_rep_msg_control_thread_launch(eswitch_dev); + if (rc) { + plt_err("Failed to launch message ctrl thread"); + goto fail; + } + } + return 0; fail: return rc; diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h index 2cb3ae8ac5..a62d9b0ae8 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -16,6 +16,8 @@ struct cnxk_rep_dev { uint16_t switch_domain_id; struct cnxk_eswitch_dev *parent_dev; uint16_t hw_func; + bool is_vf_active; + bool native_repte; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; }; @@ -46,5 +48,6 @@ int cnxk_rep_dev_close(struct rte_eth_dev *eth_dev); int cnxk_rep_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats); int cnxk_rep_stats_reset(struct rte_eth_dev *eth_dev); int cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops); +int cnxk_rep_state_update(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, uint16_t *rep_id); #endif /* __CNXK_REP_H__ */ diff --git a/drivers/net/cnxk/cnxk_rep_msg.c b/drivers/net/cnxk/cnxk_rep_msg.c new file mode 100644 index 0000000000..f538c3f27f --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep_msg.c @@ -0,0 +1,823 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ + +#include +#include + +#define CTRL_MSG_RCV_TIMEOUT_MS 2000 +#define CTRL_MSG_READY_WAIT_US 2000 +#define CTRL_MSG_THRD_NAME_LEN 35 +#define CTRL_MSG_BUFFER_SZ 1500 +#define CTRL_MSG_SIGNATURE 0xcdacdeadbeefcadc + +static void +close_socket(int fd) +{ + close(fd); + unlink(CNXK_ESWITCH_CTRL_MSG_SOCK_PATH); +} + +static int +receive_control_message(int socketfd, void *data, uint32_t len) +{ + char ctl[CMSG_SPACE(sizeof(int)) + CMSG_SPACE(sizeof(struct ucred))] = {0}; + struct ucred *cr __rte_unused; + struct msghdr mh = {0}; + struct cmsghdr *cmsg; + static uint64_t rec; + struct iovec iov[1]; + ssize_t size; + int afd = -1; + + iov[0].iov_base = data; + iov[0].iov_len = len; + mh.msg_iov = iov; + mh.msg_iovlen = 1; + mh.msg_control = ctl; + mh.msg_controllen = sizeof(ctl); + + size = recvmsg(socketfd, &mh, MSG_DONTWAIT); + if (size < 0) { + if (errno == EAGAIN) + return 0; + plt_err("recvmsg err %d invalid size %ld", errno, size); + return -errno; + } else if (size == 0) { + return 0; + } + + rec++; + plt_rep_dbg("Packet %" PRId64 " Received %" PRId64 " bytes over socketfd %d", + rec, size, socketfd); + + cr = 0; + cmsg = CMSG_FIRSTHDR(&mh); + while (cmsg) { + if (cmsg->cmsg_level == SOL_SOCKET) { + if (cmsg->cmsg_type == SCM_CREDENTIALS) { + cr = (struct ucred *)CMSG_DATA(cmsg); + } else if (cmsg->cmsg_type == SCM_RIGHTS) { + rte_memcpy(&afd, CMSG_DATA(cmsg), sizeof(int)); + plt_rep_dbg("afd %d", afd); + } + } + cmsg = CMSG_NXTHDR(&mh, cmsg); + } + return size; +} + +static int +send_message_on_socket(int socketfd, void *data, uint32_t len, int afd) +{ + char ctl[CMSG_SPACE(sizeof(int))]; + struct msghdr mh = {0}; + struct cmsghdr *cmsg; + static uint64_t sent; + struct iovec iov[1]; + int size; + + iov[0].iov_base = data; + iov[0].iov_len = len; + mh.msg_iov = iov; + mh.msg_iovlen = 1; + + if (afd > 0) { + memset(&ctl, 0, sizeof(ctl)); + mh.msg_control = ctl; + mh.msg_controllen = sizeof(ctl); + cmsg = CMSG_FIRSTHDR(&mh); + cmsg->cmsg_len = CMSG_LEN(sizeof(int)); + cmsg->cmsg_level = SOL_SOCKET; + cmsg->cmsg_type = SCM_RIGHTS; + rte_memcpy(CMSG_DATA(cmsg), &afd, sizeof(int)); + } + + size = sendmsg(socketfd, &mh, MSG_DONTWAIT); + if (size < 0) { + if (errno == EAGAIN) + return 0; + plt_err("Failed to send message, err %d", -errno); + return -errno; + } else if (size == 0) { + return 0; + } + sent++; + plt_rep_dbg("Sent %" PRId64 " packets of size %d on socketfd %d", sent, size, socketfd); + + return size; +} + +static int +open_socket_ctrl_channel(void) +{ + struct sockaddr_un un; + int sock_fd; + + sock_fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (sock_fd < 0) { + RTE_LOG(ERR, EAL, "failed to create unix socket\n"); + return -1; + } + + /* Set unix socket path and bind */ + memset(&un, 0, sizeof(un)); + un.sun_family = AF_UNIX; + + if (strlen(CNXK_ESWITCH_CTRL_MSG_SOCK_PATH) > sizeof(un.sun_path) - 1) { + plt_err("Server socket path too long: %s", CNXK_ESWITCH_CTRL_MSG_SOCK_PATH); + close(sock_fd); + return -E2BIG; + } + + if (remove(CNXK_ESWITCH_CTRL_MSG_SOCK_PATH) == -1 && errno != ENOENT) { + plt_err("remove-%s", CNXK_ESWITCH_CTRL_MSG_SOCK_PATH); + close(sock_fd); + return -errno; + } + + memset(&un, 0, sizeof(struct sockaddr_un)); + un.sun_family = AF_UNIX; + strncpy(un.sun_path, CNXK_ESWITCH_CTRL_MSG_SOCK_PATH, sizeof(un.sun_path) - 1); + + if (bind(sock_fd, (struct sockaddr *)&un, sizeof(un)) < 0) { + plt_err("Failed to bind %s: %s", un.sun_path, strerror(errno)); + close(sock_fd); + return -errno; + } + + if (listen(sock_fd, 1) < 0) { + plt_err("Failed to listen, err %s", 
strerror(errno)); + close(sock_fd); + return -errno; + } + + plt_rep_dbg("Unix socket path %s", un.sun_path); + return sock_fd; +} + +static int +send_control_message(struct cnxk_eswitch_dev *eswitch_dev, void *buffer, uint32_t len) +{ + int sz; + int rc = 0; + + sz = send_message_on_socket(eswitch_dev->sock_fd, buffer, len, 0); + if (sz < 0) { + plt_err("Error sending message, err %d", sz); + rc = sz; + goto done; + } + + /* Ensuring entire message has been processed */ + if (sz != (int)len) { + plt_err("Out of %d bytes only %d bytes sent", sz, len); + rc = -EFAULT; + goto done; + } + plt_rep_dbg("Sent %d bytes of buffer", sz); +done: + return rc; +} + +void +cnxk_rep_msg_populate_msg_end(void *buffer, uint32_t *length) +{ + cnxk_rep_msg_populate_command(buffer, length, CNXK_REP_MSG_END, 0); +} + +void +cnxk_rep_msg_populate_type(void *buffer, uint32_t *length, cnxk_type_t type, uint32_t sz) +{ + uint32_t len = *length; + cnxk_type_data_t data; + + /* Prepare type data */ + data.type = type; + data.length = sz; + + /* Populate the type data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), &data, sizeof(cnxk_type_data_t)); + len += sizeof(cnxk_type_data_t); + + *length = len; +} + +void +cnxk_rep_msg_populate_header(void *buffer, uint32_t *length) +{ + cnxk_header_t hdr; + int len; + + memset(&hdr, 0, sizeof(cnxk_header_t)); + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_HEADER, sizeof(cnxk_header_t)); + + len = *length; + /* Prepare header data */ + hdr.signature = CTRL_MSG_SIGNATURE; + + /* Populate header data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), &hdr, sizeof(cnxk_header_t)); + len += sizeof(cnxk_header_t); + + *length = len; +} + +void +cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type, uint32_t size) +{ + cnxk_rep_msg_data_t msg_data; + uint32_t len; + uint16_t sz = sizeof(cnxk_rep_msg_data_t); + + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_MSG, sz); + + len = *length; + /* Prepare command data */ + msg_data.type = type; + msg_data.length = size; + + /* Populate the command */ + rte_memcpy(RTE_PTR_ADD(buffer, len), &msg_data, sz); + len += sz; + + *length = len; +} + +void +cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz, + cnxk_rep_msg_t msg) +{ + uint32_t len; + + cnxk_rep_msg_populate_command(buffer, length, msg, sz); + + len = *length; + /* Populate command data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), msg_meta, sz); + len += sz; + + *length = len; +} + +static int +parse_validate_header(void *msg_buf, uint32_t *buf_trav_len) +{ + cnxk_type_data_t *tdata = NULL; + cnxk_header_t *hdr = NULL; + void *data = NULL; + uint16_t len = 0; + + /* Read first bytes of type data */ + data = msg_buf; + tdata = (cnxk_type_data_t *)data; + if (tdata->type != CNXK_TYPE_HEADER) { + plt_err("Invalid type %d, type header expected", tdata->type); + goto fail; + } + + /* Get the header value */ + data = RTE_PTR_ADD(msg_buf, sizeof(cnxk_type_data_t)); + len += sizeof(cnxk_type_data_t); + + /* Validate the header */ + hdr = (cnxk_header_t *)data; + if (hdr->signature != CTRL_MSG_SIGNATURE) { + plt_err("Invalid signature %" PRIu64 " detected", hdr->signature); + goto fail; + } + + /* Update length read till point */ + len += tdata->length; + + *buf_trav_len = len; + return 0; +fail: + return errno; +} + +static cnxk_rep_msg_data_t * +message_data_extract(void *msg_buf, uint32_t *buf_trav_len) +{ + cnxk_type_data_t *tdata = NULL; + cnxk_rep_msg_data_t *msg = NULL; + uint16_t len = *buf_trav_len; + void *data; + 
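+ /* Each record starts with a cnxk_type_data_t header; ensure it announces a message (CNXK_TYPE_MSG) before reading the payload */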
+ tdata = (cnxk_type_data_t *)RTE_PTR_ADD(msg_buf, len); + if (tdata->type != CNXK_TYPE_MSG) { + plt_err("Invalid type %d, type MSG expected", tdata->type); + goto fail; + } + + /* Get the message type */ + len += sizeof(cnxk_type_data_t); + data = RTE_PTR_ADD(msg_buf, len); + msg = (cnxk_rep_msg_data_t *)data; + + /* Advance to actual message data */ + len += tdata->length; + *buf_trav_len = len; + + return msg; +fail: + return NULL; +} + +static void +process_ack_message(void *msg_buf, uint32_t *buf_trav_len, uint32_t msg_len, void *data) +{ + cnxk_rep_msg_ack_data_t *adata = (cnxk_rep_msg_ack_data_t *)data; + uint16_t len = *buf_trav_len; + void *buf; + + /* Get the message type data viz ack data */ + buf = RTE_PTR_ADD(msg_buf, len); + adata->u.data = rte_zmalloc("Ack data", msg_len, 0); + adata->size = msg_len; + if (adata->size == sizeof(uint64_t)) + rte_memcpy(&adata->u.data, buf, msg_len); + else + rte_memcpy(adata->u.data, buf, msg_len); + plt_rep_dbg("Address %p val 0x%" PRIu64 " sval %" PRId64 " msg_len %d", + adata->u.data, adata->u.val, adata->u.sval, msg_len); + + /* Advance length to nex message */ + len += msg_len; + *buf_trav_len = len; +} + +static int +notify_rep_dev_ready(cnxk_rep_msg_ready_data_t *rdata, void *data, + cnxk_rep_msg_ack_data1_t **padata) +{ + struct cnxk_eswitch_dev *eswitch_dev; + uint64_t rep_id_arr[RTE_MAX_ETHPORTS]; + cnxk_rep_msg_ack_data1_t *adata; + uint16_t rep_id, sz, total_sz; + int rc, i, j = 0; + + PLT_SET_USED(data); + eswitch_dev = cnxk_eswitch_pmd_priv(); + if (!eswitch_dev) { + plt_err("Failed to get PF ethdev handle"); + rc = -EINVAL; + goto fail; + } + + /* For ready state */ + if ((rdata->nb_ports / 2) > eswitch_dev->repr_cnt.nb_repr_probed) { + rc = CNXK_REP_CTRL_MSG_NACK_INV_REP_CNT; + goto fail; + } + + for (i = 0; i < rdata->nb_ports / 2; i++) { + rep_id = UINT16_MAX; + rc = cnxk_rep_state_update(eswitch_dev, rdata->data[i], &rep_id); + if (rc) { + rc = CNXK_REP_CTRL_MSG_NACK_REP_STAT_UP_FAIL; + goto fail; + } + if (rep_id != UINT16_MAX) + rep_id_arr[j++] = rep_id; + } + + /* Send Rep Id array to companian app */ + sz = j * sizeof(uint64_t); + total_sz = sizeof(cnxk_rep_msg_ack_data1_t) + sz; + adata = plt_zmalloc(total_sz, 0); + rte_memcpy(adata->data, rep_id_arr, sz); + adata->size = sz; + *padata = adata; + + plt_rep_dbg("Installing NPC rules for Eswitch VF"); + /* Install RX VLAN rule for eswitch VF */ + if (!eswitch_dev->eswitch_vf_rules_setup) { + rc = cnxk_eswitch_pfvf_flow_rules_install(eswitch_dev, true); + if (rc) { + plt_err("Failed to install rxtx rules, rc %d", rc); + goto fail; + } + + /* Configure TPID for Eswitch PF LFs */ + rc = roc_eswitch_nix_vlan_tpid_set(&eswitch_dev->nix, ROC_NIX_VLAN_TYPE_OUTER, + CNXK_ESWITCH_VLAN_TPID, true); + if (rc) { + plt_err("Failed to configure tpid, rc %d", rc); + goto fail; + } + eswitch_dev->eswitch_vf_rules_setup = true; + } + + return 0; +fail: + sz = sizeof(cnxk_rep_msg_ack_data1_t) + sizeof(uint64_t); + adata = plt_zmalloc(sz, 0); + adata->data[0] = rc; + adata->size = sizeof(uint64_t); + *padata = adata; + + return rc; +} + +static int +process_ready_message(void *msg_buf, uint32_t *buf_trav_len, uint32_t msg_len, void *data, + cnxk_rep_msg_ack_data1_t **padata) +{ + cnxk_rep_msg_ready_data_t *rdata = NULL; + cnxk_rep_msg_ack_data1_t *adata; + uint16_t len = *buf_trav_len; + void *buf; + int rc = 0, sz; + + /* Get the message type data viz ready data */ + buf = RTE_PTR_ADD(msg_buf, len); + rdata = (cnxk_rep_msg_ready_data_t *)buf; + + plt_rep_dbg("Ready data received %d, 
nb_ports %d", rdata->val, rdata->nb_ports); + + /* Wait required to ensure other side ready for receiving the ack */ + usleep(CTRL_MSG_READY_WAIT_US); + + /* Update all representor about ready message */ + if (rdata->val) { + rc = notify_rep_dev_ready(rdata, data, padata); + } else { + sz = sizeof(cnxk_rep_msg_ack_data1_t) + sizeof(uint64_t); + adata = plt_zmalloc(sz, 0); + adata->data[0] = CNXK_REP_CTRL_MSG_NACK_INV_RDY_DATA; + adata->size = sizeof(uint64_t); + *padata = adata; + } + + /* Advance length to nex message */ + len += msg_len; + *buf_trav_len = len; + + return rc; +} + +static int +notify_rep_dev_exit(cnxk_rep_msg_exit_data_t *edata, void *data) +{ + struct cnxk_eswitch_dev *eswitch_dev; + struct cnxk_rep_dev *rep_dev = NULL; + struct rte_eth_dev *rep_eth_dev; + int i, rc = 0; + + PLT_SET_USED(data); + eswitch_dev = cnxk_eswitch_pmd_priv(); + if (!eswitch_dev) { + plt_err("Failed to get PF ethdev handle"); + rc = -EINVAL; + goto fail; + } + if ((edata->nb_ports / 2) > eswitch_dev->repr_cnt.nb_repr_probed) { + rc = CNXK_REP_CTRL_MSG_NACK_INV_REP_CNT; + goto fail; + } + + for (i = 0; i < eswitch_dev->repr_cnt.nb_repr_probed; i++) { + rep_eth_dev = eswitch_dev->rep_info[i].rep_eth_dev; + if (!rep_eth_dev) { + plt_err("Failed to get rep ethdev handle"); + rc = -EINVAL; + goto fail; + } + + rep_dev = cnxk_rep_pmd_priv(rep_eth_dev); + if (!rep_dev->native_repte) + rep_dev->is_vf_active = false; + } + /* For Exit message */ + eswitch_dev->client_connected = false; + return 0; +fail: + return rc; +} + +static void +process_exit_message(void *msg_buf, uint32_t *buf_trav_len, uint32_t msg_len, void *data) +{ + cnxk_rep_msg_exit_data_t *edata = NULL; + uint16_t len = *buf_trav_len; + void *buf; + + /* Get the message type data viz exit data */ + buf = RTE_PTR_ADD(msg_buf, len); + edata = (cnxk_rep_msg_exit_data_t *)buf; + + plt_rep_dbg("Exit data received %d", edata->val); + + /* Update all representor about ready/exit message */ + if (edata->val) + notify_rep_dev_exit(edata, data); + + /* Advance length to nex message */ + len += msg_len; + *buf_trav_len = len; +} + +static void +populate_ack_msg(void *buffer, uint32_t *length, cnxk_rep_msg_ack_data1_t *adata) +{ + uint32_t sz = sizeof(cnxk_rep_msg_ack_data1_t) + adata->size; + uint32_t len; + + cnxk_rep_msg_populate_command(buffer, length, CNXK_REP_MSG_ACK, sz); + + len = *length; + + /* Populate ACK message data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), adata, sz); + + len += sz; + + *length = len; +} + +static int +send_ack_message(void *data, cnxk_rep_msg_ack_data1_t *adata) +{ + struct cnxk_eswitch_dev *eswitch_dev = (struct cnxk_eswitch_dev *)data; + uint32_t len = 0, size; + void *buffer; + int rc = 0; + + /* Allocate memory for preparing a message */ + size = CTRL_MSG_BUFFER_SZ; + buffer = rte_zmalloc("ACK msg", size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + return -ENOMEM; + } + + /* Prepare the ACK message */ + cnxk_rep_msg_populate_header(buffer, &len); + populate_ack_msg(buffer, &len, adata); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + /* Length check to avoid buffer overflow */ + if (len > CTRL_MSG_BUFFER_SZ) { + plt_err("Invalid length %d for max sized buffer %d", len, CTRL_MSG_BUFFER_SZ); + rc = -EFAULT; + goto done; + } + + /* Send it to the peer */ + rc = send_control_message(eswitch_dev, buffer, len); + if (rc) + plt_err("Failed send ack"); + +done: + return rc; +} + +static int +process_message(void *msg_buf, uint32_t *buf_trav_len, void *data) +{ + cnxk_rep_msg_data_t *msg = NULL; + 
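+ /* adata holds the ACK payload filled in by the READY/EXIT cases of the switch below */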
cnxk_rep_msg_ack_data1_t *adata = NULL; + bool send_ack; + int rc = 0, sz; + + /* Get the message data */ + msg = message_data_extract(msg_buf, buf_trav_len); + if (!msg) { + plt_err("Failed to get message data"); + rc = -EINVAL; + goto fail; + } + + /* Different message type processing */ + while (msg->type != CNXK_REP_MSG_END) { + send_ack = true; + switch (msg->type) { + case CNXK_REP_MSG_ACK: + plt_rep_dbg("Received ack response"); + process_ack_message(msg_buf, buf_trav_len, msg->length, data); + send_ack = false; + break; + case CNXK_REP_MSG_READY: + plt_rep_dbg("Received ready message"); + process_ready_message(msg_buf, buf_trav_len, msg->length, data, &adata); + adata->type = CNXK_REP_MSG_READY; + break; + case CNXK_REP_MSG_EXIT: + plt_rep_dbg("Received exit message"); + process_exit_message(msg_buf, buf_trav_len, msg->length, data); + sz = sizeof(cnxk_rep_msg_ack_data1_t) + sizeof(uint64_t); + adata = plt_zmalloc(sz, 0); + adata->type = CNXK_REP_MSG_EXIT; + adata->data[0] = 0; + adata->size = sizeof(uint64_t); + break; + default: + send_ack = false; + plt_err("Invalid message type: %d", msg->type); + rc = -EINVAL; + }; + + /* Send ACK */ + if (send_ack) + send_ack_message(data, adata); + + /* Advance to next message */ + msg = message_data_extract(msg_buf, buf_trav_len); + if (!msg) { + plt_err("Failed to get message data"); + rc = -EINVAL; + goto fail; + } + } + + return 0; +fail: + return rc; +} + +static int +process_control_message(void *msg_buf, void *data, size_t sz) +{ + uint32_t buf_trav_len = 0; + int rc; + + /* Validate the validity of the received message */ + parse_validate_header(msg_buf, &buf_trav_len); + + /* Detect message and process */ + rc = process_message(msg_buf, &buf_trav_len, data); + if (rc) { + plt_err("Failed to process message"); + goto fail; + } + + /* Ensuring entire message has been processed */ + if (sz != buf_trav_len) { + plt_err("Out of %" PRId64 " bytes %d bytes of msg_buf processed", sz, buf_trav_len); + rc = -EFAULT; + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +receive_control_msg_resp(struct cnxk_eswitch_dev *eswitch_dev, void *data) +{ + uint32_t wait_us = CTRL_MSG_RCV_TIMEOUT_MS * 1000; + uint32_t timeout = 0, sleep = 1; + int sz = 0; + int rc = -1; + uint32_t len = BUFSIZ; + void *msg_buf; + + msg_buf = plt_zmalloc(len, 0); + + do { + sz = receive_control_message(eswitch_dev->sock_fd, msg_buf, len); + if (sz != 0) + break; + + /* Timeout after CTRL_MSG_RCV_TIMEOUT_MS */ + if (timeout >= wait_us) { + plt_err("Control message wait timedout"); + return -ETIMEDOUT; + } + + plt_delay_us(sleep); + timeout += sleep; + } while ((sz == 0) || (timeout < wait_us)); + + if (sz > 0) { + plt_rep_dbg("Received %d sized response packet", sz); + rc = process_control_message(msg_buf, data, sz); + plt_free(msg_buf); + } + + return rc; +} + +int +cnxk_rep_msg_send_process(struct cnxk_rep_dev *rep_dev, void *buffer, uint32_t len, + cnxk_rep_msg_ack_data_t *adata) +{ + struct cnxk_eswitch_dev *eswitch_dev; + int rc = 0; + + eswitch_dev = rep_dev->parent_dev; + if (!eswitch_dev) { + plt_err("Failed to get parent eswitch handle"); + rc = -1; + goto fail; + } + + plt_spinlock_lock(&eswitch_dev->rep_lock); + rc = send_control_message(eswitch_dev, buffer, len); + if (rc) { + plt_err("Failed to send the message, err %d", rc); + goto free; + } + + /* Get response of the command sent */ + rc = receive_control_msg_resp(eswitch_dev, adata); + if (rc) { + plt_err("Failed to receive the response, err %d", rc); + goto free; + } + 
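+ /* Command sent and response consumed; release the control-channel lock */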
plt_spinlock_unlock(&eswitch_dev->rep_lock); + + return 0; +free: + plt_spinlock_unlock(&eswitch_dev->rep_lock); +fail: + return rc; +} + +static void +poll_for_control_msg(void *data) +{ + struct cnxk_eswitch_dev *eswitch_dev = (struct cnxk_eswitch_dev *)data; + uint32_t len = BUFSIZ; + int sz = 0; + void *msg_buf; + + while (eswitch_dev->client_connected) { + msg_buf = plt_zmalloc(len, 0); + do { + plt_spinlock_lock(&eswitch_dev->rep_lock); + sz = receive_control_message(eswitch_dev->sock_fd, msg_buf, len); + plt_spinlock_unlock(&eswitch_dev->rep_lock); + if (sz != 0) + break; + plt_delay_us(2000); + } while (sz == 0); + + if (sz > 0) { + plt_rep_dbg("Received new %d bytes control message", sz); + plt_spinlock_lock(&eswitch_dev->rep_lock); + process_control_message(msg_buf, data, sz); + plt_spinlock_unlock(&eswitch_dev->rep_lock); + plt_free(msg_buf); + } + } + plt_rep_dbg("Exiting poll for control message loop"); +} + +static uint32_t +rep_ctrl_msg_thread_main(void *arg) +{ + struct cnxk_eswitch_dev *eswitch_dev = (struct cnxk_eswitch_dev *)arg; + struct sockaddr_un client; + int addr_len; + int ssock_fd; + int sock_fd; + + ssock_fd = open_socket_ctrl_channel(); + if (ssock_fd < 0) { + plt_err("Failed to open socket for ctrl channel, err %d", ssock_fd); + return UINT32_MAX; + } + + addr_len = sizeof(client); + while (eswitch_dev->start_ctrl_msg_thrd) { + /* Accept client connection until the thread is running */ + sock_fd = accept(ssock_fd, (struct sockaddr *)&client, (socklen_t *)&addr_len); + if (sock_fd < 0) { + plt_err("Failed to accept connection request on socket fd %d", ssock_fd); + break; + } + + plt_rep_dbg("Client %s: Connection request accepted.", client.sun_path); + eswitch_dev->sock_fd = sock_fd; + if (eswitch_dev->start_ctrl_msg_thrd) { + eswitch_dev->client_connected = true; + poll_for_control_msg(eswitch_dev); + } + eswitch_dev->sock_fd = -1; + } + + /* Closing the opened socket */ + close_socket(ssock_fd); + plt_rep_dbg("Exiting representor ctrl thread"); + + return 0; +} + +int +cnxk_rep_msg_control_thread_launch(struct cnxk_eswitch_dev *eswitch_dev) +{ + char name[CTRL_MSG_THRD_NAME_LEN]; + int rc = 0; + + rte_strscpy(name, "rep_ctrl_msg_hndlr", CTRL_MSG_THRD_NAME_LEN); + eswitch_dev->start_ctrl_msg_thrd = true; + rc = rte_thread_create_internal_control(&eswitch_dev->rep_ctrl_msg_thread, name, + rep_ctrl_msg_thread_main, eswitch_dev); + if (rc) + plt_err("Failed to create rep control message handling"); + + return rc; +} diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h new file mode 100644 index 0000000000..fb84d58848 --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep_msg.h @@ -0,0 +1,95 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ + +#ifndef __CNXK_REP_MSG_H__ +#define __CNXK_REP_MSG_H__ + +#include + +#define CNXK_REP_MSG_MAX_BUFFER_SZ 1500 + +typedef enum CNXK_TYPE { + CNXK_TYPE_HEADER = 0, + CNXK_TYPE_MSG, +} cnxk_type_t; + +typedef enum CNXK_REP_MSG { + /* General sync messages */ + CNXK_REP_MSG_READY = 0, + CNXK_REP_MSG_ACK, + CNXK_REP_MSG_EXIT, + /* End of messaging sequence */ + CNXK_REP_MSG_END, +} cnxk_rep_msg_t; + +typedef enum CNXK_NACK_CODE { + CNXK_REP_CTRL_MSG_NACK_INV_RDY_DATA = 0x501, + CNXK_REP_CTRL_MSG_NACK_INV_REP_CNT = 0x502, + CNXK_REP_CTRL_MSG_NACK_REP_STAT_UP_FAIL = 0x503, +} cnxk_nack_code_t; + +/* Types */ +typedef struct cnxk_type_data { + cnxk_type_t type; + uint32_t length; + uint64_t data[]; +} __rte_packed cnxk_type_data_t; + +/* Header */ +typedef struct cnxk_header { + uint64_t signature; + uint16_t nb_hops; +} __rte_packed cnxk_header_t; + +/* Message meta */ +typedef struct cnxk_rep_msg_data { + cnxk_rep_msg_t type; + uint32_t length; + uint64_t data[]; +} __rte_packed cnxk_rep_msg_data_t; + +/* Ack msg */ +typedef struct cnxk_rep_msg_ack_data { + cnxk_rep_msg_t type; + uint32_t size; + union { + void *data; + uint64_t val; + int64_t sval; + } u; +} __rte_packed cnxk_rep_msg_ack_data_t; + +/* Ack msg */ +typedef struct cnxk_rep_msg_ack_data1 { + cnxk_rep_msg_t type; + uint32_t size; + uint64_t data[]; +} __rte_packed cnxk_rep_msg_ack_data1_t; + +/* Ready msg */ +typedef struct cnxk_rep_msg_ready_data { + uint8_t val; + uint16_t nb_ports; + uint16_t data[]; +} __rte_packed cnxk_rep_msg_ready_data_t; + +/* Exit msg */ +typedef struct cnxk_rep_msg_exit_data { + uint8_t val; + uint16_t nb_ports; + uint16_t data[]; +} __rte_packed cnxk_rep_msg_exit_data_t; + +void cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type, + uint32_t size); +void cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz, + cnxk_rep_msg_t msg); +void cnxk_rep_msg_populate_msg_end(void *buffer, uint32_t *length); +void cnxk_rep_msg_populate_type(void *buffer, uint32_t *length, cnxk_type_t type, uint32_t sz); +void cnxk_rep_msg_populate_header(void *buffer, uint32_t *length); +int cnxk_rep_msg_send_process(struct cnxk_rep_dev *rep_dev, void *buffer, uint32_t len, + cnxk_rep_msg_ack_data_t *adata); +int cnxk_rep_msg_control_thread_launch(struct cnxk_eswitch_dev *eswitch_dev); + +#endif /* __CNXK_REP_MSG_H__ */ diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build index 7121845dc6..9ca7732713 100644 --- a/drivers/net/cnxk/meson.build +++ b/drivers/net/cnxk/meson.build @@ -37,6 +37,7 @@ sources = files( 'cnxk_ptp.c', 'cnxk_flow.c', 'cnxk_rep.c', + 'cnxk_rep_msg.c', 'cnxk_rep_ops.c', 'cnxk_stats.c', 'cnxk_tm.c', From patchwork Tue Dec 19 17:39:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135356 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 98C8F43747; Tue, 19 Dec 2023 18:42:00 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B168042E77; Tue, 19 Dec 2023 18:41:17 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id CFF0042E76 for ; Tue, 19 Dec 2023 18:41:04 +0100 (CET) Received: from 
pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 3BJ93T8Z016883 for ; Tue, 19 Dec 2023 09:41:04 -0800 From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 11/24] common/cnxk: representee notification callback Date: Tue, 19 Dec 2023 23:09:50 +0530 Message-ID: <20231219174003.72901-12-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> List-Id: DPDK patches and discussions Set up a callback which gets invoked every time a representee comes up or goes down. This callback is later handled by the networking counterpart.
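For example, a consumer can hook the notification as follows; a minimal sketch in which only the registration API comes from this patch and the handler body is illustrative:

    /* Invoked from mbox up-call context when a representee toggles state;
     * keep it brief and defer any heavy work.
     */
    static int
    repte_notify_handler(void *roc_nix, uint16_t pf_func, bool enable)
    {
            PLT_SET_USED(roc_nix);
            plt_base_dbg("representee %x is %s", pf_func, enable ? "up" : "down");
            return 0;
    }

    ...
            rc = roc_eswitch_nix_process_repte_notify_cb_register(&eswitch_dev->nix,
                                                                  repte_notify_handler);
            if (rc)
                    plt_err("Failed to register repte notify callback, rc %d", rc);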
Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_dev.c | 24 ++++++++++++++++++++++++ drivers/common/cnxk/roc_dev_priv.h | 3 +++ drivers/common/cnxk/roc_eswitch.c | 23 +++++++++++++++++++++++ drivers/common/cnxk/roc_eswitch.h | 6 ++++++ drivers/common/cnxk/roc_mbox.c | 2 ++ drivers/common/cnxk/roc_mbox.h | 10 +++++++++- drivers/common/cnxk/version.map | 2 ++ 7 files changed, 69 insertions(+), 1 deletion(-) diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c index e7e89bf3d6..b12732de34 100644 --- a/drivers/common/cnxk/roc_dev.c +++ b/drivers/common/cnxk/roc_dev.c @@ -538,6 +538,29 @@ pf_vf_mbox_send_up_msg(struct dev *dev, void *rec_msg) } } +static int +mbox_up_handler_esw_repte_notify(struct dev *dev, struct esw_repte_req *req, struct msg_rsp *rsp) +{ + int rc = 0; + + plt_base_dbg("pf:%d/vf:%d msg id 0x%x (%s) from: pf:%d/vf:%d", dev_get_pf(dev->pf_func), + dev_get_vf(dev->pf_func), req->hdr.id, mbox_id2name(req->hdr.id), + dev_get_pf(req->hdr.pcifunc), dev_get_vf(req->hdr.pcifunc)); + + plt_base_dbg("repte pcifunc %x, enable %d", req->repte_pcifunc, req->enable); + + if (dev->ops && dev->ops->repte_notify) { + rc = dev->ops->repte_notify(dev->roc_nix, req->repte_pcifunc, + req->enable); + if (rc < 0) + plt_err("Failed to sent new representee %x notification to %s", + req->repte_pcifunc, (req->enable == true) ? "enable" : "disable"); + } + + rsp->hdr.rc = rc; + return rc; +} + static int mbox_up_handler_mcs_intr_notify(struct dev *dev, struct mcs_intr_info *info, struct msg_rsp *rsp) { @@ -712,6 +735,7 @@ mbox_process_msgs_up(struct dev *dev, struct mbox_msghdr *req) } MBOX_UP_CGX_MESSAGES MBOX_UP_MCS_MESSAGES + MBOX_UP_ESW_MESSAGES #undef M } diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h index 5b2c5096f8..dd694b8572 100644 --- a/drivers/common/cnxk/roc_dev_priv.h +++ b/drivers/common/cnxk/roc_dev_priv.h @@ -36,12 +36,15 @@ typedef void (*q_err_cb_t)(void *roc_nix, void *data); /* Link status get callback */ typedef void (*link_status_get_t)(void *roc_nix, struct cgx_link_user_info *link); +/* Representee notification callback */ +typedef int (*repte_notify_t)(void *roc_nix, uint16_t pf_func, bool enable); struct dev_ops { link_info_t link_status_update; ptp_info_t ptp_info_update; link_status_get_t link_status_get; q_err_cb_t q_err_cb; + repte_notify_t repte_notify; }; #define dev_is_vf(dev) ((dev)->hwcap & DEV_HWCAP_F_VF) diff --git a/drivers/common/cnxk/roc_eswitch.c b/drivers/common/cnxk/roc_eswitch.c index 7f2a8e6c06..31bdba3985 100644 --- a/drivers/common/cnxk/roc_eswitch.c +++ b/drivers/common/cnxk/roc_eswitch.c @@ -298,3 +298,26 @@ roc_eswitch_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t t return rc; } + +int +roc_eswitch_nix_process_repte_notify_cb_register(struct roc_nix *roc_nix, + process_repte_notify_t proc_repte_nt) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + if (proc_repte_nt == NULL) + return NIX_ERR_PARAM; + + dev->ops->repte_notify = (repte_notify_t)proc_repte_nt; + return 0; +} + +void +roc_eswitch_nix_process_repte_notify_cb_unregister(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + dev->ops->repte_notify = NULL; +} diff --git a/drivers/common/cnxk/roc_eswitch.h b/drivers/common/cnxk/roc_eswitch.h index 0dd23ff76a..8837e19b22 100644 --- a/drivers/common/cnxk/roc_eswitch.h +++ b/drivers/common/cnxk/roc_eswitch.h @@ -8,6 +8,9 @@ #define ROC_ESWITCH_VLAN_TPID 0x8100 #define 
ROC_ESWITCH_LBK_CHAN 63 +/* Process representee notification callback */ +typedef int (*process_repte_notify_t)(void *roc_nix, uint16_t pf_func, bool enable); + /* NPC */ int __roc_api roc_eswitch_npc_mcam_rx_rule(struct roc_npc *roc_npc, struct roc_npc_flow *flow, uint16_t pcifunc, uint16_t vlan_tci, @@ -22,4 +25,7 @@ int __roc_api roc_eswitch_npc_rss_action_configure(struct roc_npc *roc_npc, /* NIX */ int __roc_api roc_eswitch_nix_vlan_tpid_set(struct roc_nix *nix, uint32_t type, uint16_t tpid, bool is_vf); +int __roc_api roc_eswitch_nix_process_repte_notify_cb_register(struct roc_nix *roc_nix, + process_repte_notify_t proc_repte_nt); +void __roc_api roc_eswitch_nix_process_repte_notify_cb_unregister(struct roc_nix *roc_nix); #endif /* __ROC_ESWITCH_H__ */ diff --git a/drivers/common/cnxk/roc_mbox.c b/drivers/common/cnxk/roc_mbox.c index 7b734fcd24..cb486b2505 100644 --- a/drivers/common/cnxk/roc_mbox.c +++ b/drivers/common/cnxk/roc_mbox.c @@ -499,6 +499,7 @@ mbox_id2name(uint16_t id) return #_name; MBOX_MESSAGES MBOX_UP_CGX_MESSAGES + MBOX_UP_ESW_MESSAGES #undef M } } @@ -514,6 +515,7 @@ mbox_id2size(uint16_t id) return sizeof(struct _req_type); MBOX_MESSAGES MBOX_UP_CGX_MESSAGES + MBOX_UP_ESW_MESSAGES #undef M } } diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h index 4c846f0757..2bedf1fb81 100644 --- a/drivers/common/cnxk/roc_mbox.h +++ b/drivers/common/cnxk/roc_mbox.h @@ -355,9 +355,11 @@ struct mbox_msghdr { #define MBOX_UP_MCS_MESSAGES M(MCS_INTR_NOTIFY, 0xE00, mcs_intr_notify, mcs_intr_info, msg_rsp) +#define MBOX_UP_ESW_MESSAGES M(ESW_REPTE_NOTIFY, 0xF00, esw_repte_notify, esw_repte_req, msg_rsp) + enum { #define M(_name, _id, _1, _2, _3) MBOX_MSG_##_name = _id, - MBOX_MESSAGES MBOX_UP_CGX_MESSAGES MBOX_UP_MCS_MESSAGES + MBOX_MESSAGES MBOX_UP_CGX_MESSAGES MBOX_UP_MCS_MESSAGES MBOX_UP_ESW_MESSAGES #undef M }; @@ -2778,4 +2780,10 @@ struct nix_spi_to_sa_delete_req { uint16_t __io hash_index; uint8_t __io way; }; + +struct esw_repte_req { + struct mbox_msghdr hdr; + uint16_t __io repte_pcifunc; + bool __io enable; +}; #endif /* __ROC_MBOX_H__ */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 78c421677d..e170a6a63a 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -91,6 +91,8 @@ INTERNAL { roc_dpi_disable; roc_dpi_enable; roc_error_msg_get; + roc_eswitch_nix_process_repte_notify_cb_register; + roc_eswitch_nix_process_repte_notify_cb_unregister; roc_eswitch_nix_vlan_tpid_set; roc_eswitch_npc_mcam_delete_rule; roc_eswitch_npc_mcam_rx_rule; From patchwork Tue Dec 19 17:39:51 2023 X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135357 X-Patchwork-Delegate: jerinj@marvell.com From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 12/24] net/cnxk: handling representee notification Date: Tue, 19 Dec 2023 23:09:51 +0530 Message-ID: <20231219174003.72901-13-hkalra@marvell.com> In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> When a representee comes up or goes down, the kernel sends a mbox up call; the handler queues the event and signals a worker thread, which processes these messages and enables or disables HW resources accordingly.
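The handoff described above is a standard producer/consumer pattern: the mbox up-call handler only allocates a small message, links it onto a list under a mutex and signals a condition variable, while a dedicated thread drains the list and performs the slow enable/disable work outside interrupt context. A minimal, self-contained sketch of that pattern follows; all names here (notif_ctx, notif_msg, notif_push) are illustrative stand-ins, not the driver's actual structures or APIs:

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

struct notif_msg {
	uint16_t hw_func;
	bool enable;
	TAILQ_ENTRY(notif_msg) next;
};

struct notif_ctx {
	bool run;
	pthread_mutex_t lock;
	pthread_cond_t cond;
	TAILQ_HEAD(, notif_msg) list;
};

/* Producer side: what the mbox up-call handler would do. */
static void
notif_push(struct notif_ctx *ctx, uint16_t hw_func, bool enable)
{
	struct notif_msg *msg = calloc(1, sizeof(*msg));

	if (!msg)
		return;
	msg->hw_func = hw_func;
	msg->enable = enable;
	pthread_mutex_lock(&ctx->lock);
	TAILQ_INSERT_TAIL(&ctx->list, msg, next);
	pthread_cond_signal(&ctx->cond);
	pthread_mutex_unlock(&ctx->lock);
}

/* Consumer side: worker thread drains the list and acts on each entry. */
static void *
notif_thread(void *arg)
{
	struct notif_ctx *ctx = arg;
	struct notif_msg *msg;

	pthread_mutex_lock(&ctx->lock);
	while (ctx->run) {
		/* Guarding the wait with an emptiness check avoids missing
		 * events that were signalled before this thread went to sleep.
		 */
		while (ctx->run && TAILQ_EMPTY(&ctx->list))
			pthread_cond_wait(&ctx->cond, &ctx->lock);
		while ((msg = TAILQ_FIRST(&ctx->list)) != NULL) {
			TAILQ_REMOVE(&ctx->list, msg, next);
			/* Drop the lock while handling the event so the
			 * producer is never blocked on the slow path.
			 */
			pthread_mutex_unlock(&ctx->lock);
			printf("representee %#x -> %s\n", msg->hw_func,
			       msg->enable ? "enable" : "disable");
			free(msg);
			pthread_mutex_lock(&ctx->lock);
		}
	}
	/* Drain anything still queued after stop is requested. */
	while ((msg = TAILQ_FIRST(&ctx->list)) != NULL) {
		TAILQ_REMOVE(&ctx->list, msg, next);
		free(msg);
	}
	pthread_mutex_unlock(&ctx->lock);
	return NULL;
}

int
main(void)
{
	struct notif_ctx ctx = { .run = true };
	pthread_t tid;

	pthread_mutex_init(&ctx.lock, NULL);
	pthread_cond_init(&ctx.cond, NULL);
	TAILQ_INIT(&ctx.list);
	pthread_create(&tid, NULL, notif_thread, &ctx);

	notif_push(&ctx, 0x401, true);	/* representee came up */
	notif_push(&ctx, 0x401, false);	/* representee went down */

	pthread_mutex_lock(&ctx.lock);
	ctx.run = false;
	pthread_cond_signal(&ctx.cond);
	pthread_mutex_unlock(&ctx.lock);
	pthread_join(tid, NULL);
	return 0;
}

The patch below follows the same shape with DPDK primitives (rte_thread_create_internal_control(), rte_zmalloc(), plt_* logging) and the driver's own cnxk_esw_repte_msg list.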
Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 8 + drivers/net/cnxk/cnxk_eswitch.h | 20 +++ drivers/net/cnxk/cnxk_rep.c | 263 ++++++++++++++++++++++++++++++++ drivers/net/cnxk/cnxk_rep.h | 36 +++++ 4 files changed, 327 insertions(+) diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c index ffcf89b1b1..35c517f124 100644 --- a/drivers/net/cnxk/cnxk_eswitch.c +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -113,6 +113,14 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) rte_thread_join(eswitch_dev->rep_ctrl_msg_thread, NULL); } + if (eswitch_dev->repte_msg_proc.start_thread) { + eswitch_dev->repte_msg_proc.start_thread = false; + pthread_cond_signal(&eswitch_dev->repte_msg_proc.repte_msg_cond); + rte_thread_join(eswitch_dev->repte_msg_proc.repte_msg_thread, NULL); + pthread_mutex_destroy(&eswitch_dev->repte_msg_proc.mutex); + pthread_cond_destroy(&eswitch_dev->repte_msg_proc.repte_msg_cond); + } + /* Remove representor devices associated with PF */ cnxk_rep_dev_remove(eswitch_dev); } diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h index a2f4aa0fcc..8aab3e8a72 100644 --- a/drivers/net/cnxk/cnxk_eswitch.h +++ b/drivers/net/cnxk/cnxk_eswitch.h @@ -30,6 +30,23 @@ enum cnxk_esw_da_pattern_type { CNXK_ESW_DA_TYPE_PFVF, }; +struct cnxk_esw_repte_msg { + uint16_t hw_func; + bool enable; + + TAILQ_ENTRY(cnxk_esw_repte_msg) next; +}; + +struct cnxk_esw_repte_msg_proc { + bool start_thread; + uint8_t msg_avail; + rte_thread_t repte_msg_thread; + pthread_cond_t repte_msg_cond; + pthread_mutex_t mutex; + + TAILQ_HEAD(esw_repte_msg_list, cnxk_esw_repte_msg) msg_list; +}; + struct cnxk_esw_repr_hw_info { /* Representee pcifunc value */ uint16_t hw_func; @@ -139,6 +156,9 @@ struct cnxk_eswitch_dev { bool client_connected; int sock_fd; + /* Representee notification */ + struct cnxk_esw_repte_msg_proc repte_msg_proc; + /* Port representor fields */ rte_spinlock_t rep_lock; uint16_t nb_switch_domain; diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c index f8e1d5b965..3b01856bc8 100644 --- a/drivers/net/cnxk/cnxk_rep.c +++ b/drivers/net/cnxk/cnxk_rep.c @@ -4,6 +4,8 @@ #include #include +#define REPTE_MSG_PROC_THRD_NAME_MAX_LEN 30 + #define PF_SHIFT 10 #define PF_MASK 0x3F @@ -86,6 +88,7 @@ cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev) { int i, rc = 0; + roc_eswitch_nix_process_repte_notify_cb_unregister(&eswitch_dev->nix); for (i = 0; i < eswitch_dev->nb_switch_domain; i++) { rc = rte_eth_switch_domain_free(eswitch_dev->sw_dom[i].switch_domain_id); if (rc) @@ -95,6 +98,236 @@ cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev) return rc; } +static int +cnxk_representee_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func) +{ + struct cnxk_rep_dev *rep_dev = NULL; + struct rte_eth_dev *rep_eth_dev; + int i, rc = 0; + + for (i = 0; i < eswitch_dev->repr_cnt.nb_repr_probed; i++) { + rep_eth_dev = eswitch_dev->rep_info[i].rep_eth_dev; + if (!rep_eth_dev) { + plt_err("Failed to get rep ethdev handle"); + rc = -EINVAL; + goto done; + } + + rep_dev = cnxk_rep_pmd_priv(rep_eth_dev); + if (rep_dev->hw_func == hw_func && + (!rep_dev->native_repte || rep_dev->is_vf_active)) { + rep_dev->is_vf_active = false; + rc = cnxk_rep_dev_stop(rep_eth_dev); + if (rc) { + plt_err("Failed to stop repr port %d, rep id %d", rep_dev->port_id, + rep_dev->rep_id); + goto done; + } + + cnxk_rep_rx_queue_release(rep_eth_dev, 0); + cnxk_rep_tx_queue_release(rep_eth_dev, 0); + plt_rep_dbg("Released representor ID %d 
representing %x", rep_dev->rep_id, + hw_func); + break; + } + } +done: + return rc; +} + +static int +cnxk_representee_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, uint16_t rep_id) +{ + struct cnxk_rep_dev *rep_dev = NULL; + struct rte_eth_dev *rep_eth_dev; + int i, rc = 0; + + for (i = 0; i < eswitch_dev->repr_cnt.nb_repr_probed; i++) { + rep_eth_dev = eswitch_dev->rep_info[i].rep_eth_dev; + if (!rep_eth_dev) { + plt_err("Failed to get rep ethdev handle"); + rc = -EINVAL; + goto done; + } + + rep_dev = cnxk_rep_pmd_priv(rep_eth_dev); + if (rep_dev->hw_func == hw_func && !rep_dev->is_vf_active) { + rep_dev->is_vf_active = true; + rep_dev->native_repte = true; + if (rep_dev->rep_id != rep_id) { + plt_err("Rep ID assigned during init %d does not match %d", + rep_dev->rep_id, rep_id); + rc = -EINVAL; + goto done; + } + + rc = cnxk_rep_rx_queue_setup(rep_eth_dev, rep_dev->rxq->qid, + rep_dev->rxq->nb_desc, 0, + rep_dev->rxq->rx_conf, rep_dev->rxq->mpool); + if (rc) { + plt_err("Failed to setup rxq repr port %d, rep id %d", + rep_dev->port_id, rep_dev->rep_id); + goto done; + } + + rc = cnxk_rep_tx_queue_setup(rep_eth_dev, rep_dev->txq->qid, + rep_dev->txq->nb_desc, 0, + rep_dev->txq->tx_conf); + if (rc) { + plt_err("Failed to setup txq repr port %d, rep id %d", + rep_dev->port_id, rep_dev->rep_id); + goto done; + } + + rc = cnxk_rep_dev_start(rep_eth_dev); + if (rc) { + plt_err("Failed to start repr port %d, rep id %d", rep_dev->port_id, + rep_dev->rep_id); + goto done; + } + break; + } + } +done: + return rc; +} + +static int +cnxk_representee_msg_process(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, bool enable) +{ + struct cnxk_eswitch_devargs *esw_da; + uint16_t rep_id = UINT16_MAX; + int rc = 0, i, j; + + /* Traverse the initialized representor list */ + for (i = 0; i < eswitch_dev->nb_esw_da; i++) { + esw_da = &eswitch_dev->esw_da[i]; + for (j = 0; j < esw_da->nb_repr_ports; j++) { + if (esw_da->repr_hw_info[j].hw_func == hw_func) { + rep_id = esw_da->repr_hw_info[j].rep_id; + break; + } + } + if (rep_id != UINT16_MAX) + break; + } + /* No action on PF func for which representor has not been created */ + if (rep_id == UINT16_MAX) + goto done; + + if (enable) { + rc = cnxk_representee_setup(eswitch_dev, hw_func, rep_id); + if (rc) { + plt_err("Failed to setup representee, err %d", rc); + goto fail; + } + plt_rep_dbg("Representor ID %d representing %x", rep_id, hw_func); + rc = cnxk_eswitch_flow_rules_install(eswitch_dev, hw_func); + if (rc) { + plt_err("Failed to install rxtx flow rules for %x", hw_func); + goto fail; + } + } else { + rc = cnxk_eswitch_flow_rules_delete(eswitch_dev, hw_func); + if (rc) { + plt_err("Failed to delete flow rules for %x", hw_func); + goto fail; + } + rc = cnxk_representee_release(eswitch_dev, hw_func); + if (rc) { + plt_err("Failed to release representee, err %d", rc); + goto fail; + } + } + +done: + return 0; +fail: + return rc; +} + +static uint32_t +cnxk_representee_msg_thread_main(void *arg) +{ + struct cnxk_eswitch_dev *eswitch_dev = (struct cnxk_eswitch_dev *)arg; + struct cnxk_esw_repte_msg_proc *repte_msg_proc; + struct cnxk_esw_repte_msg *msg, *next_msg; + int count, rc; + + repte_msg_proc = &eswitch_dev->repte_msg_proc; + pthread_mutex_lock(&eswitch_dev->repte_msg_proc.mutex); + while (eswitch_dev->repte_msg_proc.start_thread) { + do { + rc = pthread_cond_wait(&eswitch_dev->repte_msg_proc.repte_msg_cond, + &eswitch_dev->repte_msg_proc.mutex); + } while (rc != 0); + + /* Go through the list pushed from interrupt context and
process each message */ + next_msg = TAILQ_FIRST(&repte_msg_proc->msg_list); + count = 0; + while (next_msg) { + msg = next_msg; + next_msg = TAILQ_NEXT(msg, next); + count++; + plt_rep_dbg("Processing msg %d: hw_func %x action %s", count, + msg->hw_func, msg->enable ? "enable" : "disable"); + + /* Unlock so the interrupt thread can grab the lock + * while this thread processes the message. + */ + pthread_mutex_unlock(&eswitch_dev->repte_msg_proc.mutex); + /* Process the message */ + cnxk_representee_msg_process(eswitch_dev, msg->hw_func, msg->enable); + TAILQ_REMOVE(&repte_msg_proc->msg_list, msg, next); + rte_free(msg); + /* Relock, as cond wait unlocks the mutex before waiting */ + pthread_mutex_lock(&eswitch_dev->repte_msg_proc.mutex); + } + } + + pthread_mutex_unlock(&eswitch_dev->repte_msg_proc.mutex); + + return 0; +} + +static int +cnxk_representee_notification(void *roc_nix, uint16_t hw_func, bool enable) +{ + struct cnxk_esw_repte_msg_proc *repte_msg_proc; + struct cnxk_eswitch_dev *eswitch_dev; + struct cnxk_esw_repte_msg *msg; + int rc = 0; + + RTE_SET_USED(roc_nix); + eswitch_dev = cnxk_eswitch_pmd_priv(); + if (!eswitch_dev) { + plt_err("Failed to get PF ethdev handle"); + rc = -EINVAL; + goto done; + } + + repte_msg_proc = &eswitch_dev->repte_msg_proc; + msg = rte_zmalloc("msg", sizeof(struct cnxk_esw_repte_msg), 0); + if (!msg) { + plt_err("Failed to allocate memory for repte msg"); + rc = -ENOMEM; + goto done; + } + + msg->hw_func = hw_func; + msg->enable = enable; + + plt_rep_dbg("Pushing new notification: hw_func %x enable %d", msg->hw_func, enable); + pthread_mutex_lock(&eswitch_dev->repte_msg_proc.mutex); + TAILQ_INSERT_TAIL(&repte_msg_proc->msg_list, msg, next); + /* Signal the VF message handler thread */ + pthread_cond_signal(&eswitch_dev->repte_msg_proc.repte_msg_cond); + pthread_mutex_unlock(&eswitch_dev->repte_msg_proc.mutex); + +done: + return rc; +} + static int cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev) { @@ -263,6 +496,7 @@ create_representor_ethdev(struct rte_pci_device *pci_dev, struct cnxk_eswitch_de int cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev) { + char name[REPTE_MSG_PROC_THRD_NAME_MAX_LEN]; struct cnxk_eswitch_devargs *esw_da; uint16_t num_rep; int i, j, rc; @@ -302,7 +536,36 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswi } } + if (!eswitch_dev->repte_msg_proc.start_thread) { + /* Register callback for representee notification */ + if (roc_eswitch_nix_process_repte_notify_cb_register(&eswitch_dev->nix, + cnxk_representee_notification)) { + plt_err("Failed to register callback for representee notification"); + rc = -EINVAL; + goto fail; + } + + /* Create a thread for handling msgs from VFs */ + TAILQ_INIT(&eswitch_dev->repte_msg_proc.msg_list); + pthread_cond_init(&eswitch_dev->repte_msg_proc.repte_msg_cond, NULL); + pthread_mutex_init(&eswitch_dev->repte_msg_proc.mutex, NULL); + + rte_strscpy(name, "repte_msg_proc_thrd", REPTE_MSG_PROC_THRD_NAME_MAX_LEN); + eswitch_dev->repte_msg_proc.start_thread = true; + rc = rte_thread_create_internal_control(&eswitch_dev->repte_msg_proc.repte_msg_thread, + name, cnxk_representee_msg_thread_main, + eswitch_dev); + if (rc != 0) { + plt_err("Failed to create thread for VF mbox handling"); + goto thread_fail; + } + } + return 0; +thread_fail: + pthread_mutex_destroy(&eswitch_dev->repte_msg_proc.mutex); + pthread_cond_destroy(&eswitch_dev->repte_msg_proc.repte_msg_cond); fail: return rc; } diff --git a/drivers/net/cnxk/cnxk_rep.h
b/drivers/net/cnxk/cnxk_rep.h index a62d9b0ae8..9172fae641 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -10,6 +10,40 @@ /* Common ethdev ops */ extern struct eth_dev_ops cnxk_rep_dev_ops; +struct cnxk_rep_queue_stats { + uint64_t pkts; + uint64_t bytes; +}; + +struct cnxk_rep_rxq { + /* Parent rep device */ + struct cnxk_rep_dev *rep_dev; + /* Queue ID */ + uint16_t qid; + /* Number of descriptors */ + uint16_t nb_desc; + /* mempool handle */ + struct rte_mempool *mpool; + /* RX config parameters */ + const struct rte_eth_rxconf *rx_conf; + /* Per queue RX statistics */ + struct cnxk_rep_queue_stats stats; +}; + +struct cnxk_rep_txq { + /* Parent rep device */ + struct cnxk_rep_dev *rep_dev; + /* Queue ID */ + uint16_t qid; + /* Number of descriptors */ + uint16_t nb_desc; + /* TX config parameters */ + const struct rte_eth_txconf *tx_conf; + /* Per queue TX statistics */ + struct cnxk_rep_queue_stats stats; +}; + +/* Representor port configurations */ struct cnxk_rep_dev { uint16_t port_id; uint16_t rep_id; @@ -18,6 +52,8 @@ struct cnxk_rep_dev { uint16_t hw_func; bool is_vf_active; bool native_repte; + struct cnxk_rep_rxq *rxq; + struct cnxk_rep_txq *txq; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; }; From patchwork Tue Dec 19 17:39:52 2023 X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135358 X-Patchwork-Delegate: jerinj@marvell.com From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 13/24] net/cnxk: representor ethdev ops Date: Tue, 19 Dec 2023 23:09:52 +0530 Message-ID: <20231219174003.72901-14-hkalra@marvell.com> In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> Implementing ethernet device operation callbacks for the port representor PMD. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_rep.c | 28 +- drivers/net/cnxk/cnxk_rep.h | 35 +++ drivers/net/cnxk/cnxk_rep_msg.h | 8 + drivers/net/cnxk/cnxk_rep_ops.c | 495 ++++++++++++++++++++++++++++++-- 4 files changed, 523 insertions(+), 43 deletions(-) diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c index 3b01856bc8..6e2424db40 100644 --- a/drivers/net/cnxk/cnxk_rep.c +++ b/drivers/net/cnxk/cnxk_rep.c @@ -73,6 +73,8 @@ cnxk_rep_state_update(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, ui int cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev) { + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; @@ -80,6 +82,8 @@ cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev) rte_free(ethdev->data->mac_addrs); ethdev->data->mac_addrs = NULL; + rep_dev->parent_dev->repr_cnt.nb_repr_probed--; + return 0; } @@ -369,26 +373,6 @@ cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev) return rc; } -static uint16_t -cnxk_rep_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -{ - PLT_SET_USED(tx_queue); - PLT_SET_USED(tx_pkts); - PLT_SET_USED(nb_pkts); - - return 0; -} - -static uint16_t -cnxk_rep_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) -{ - PLT_SET_USED(rx_queue); - PLT_SET_USED(rx_pkts); - PLT_SET_USED(nb_pkts); - - return 0; -} - static int cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params) { @@ -418,8 +402,8 @@ cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params) eth_dev->dev_ops = &cnxk_rep_dev_ops; /* Rx/Tx function stubs to avoid crashing */ - eth_dev->rx_pkt_burst = cnxk_rep_rx_burst; - eth_dev->tx_pkt_burst = cnxk_rep_tx_burst; + eth_dev->rx_pkt_burst = cnxk_rep_rx_burst_dummy; + eth_dev->tx_pkt_burst = cnxk_rep_tx_burst_dummy; /* Only single queues for representor devices */ eth_dev->data->nb_rx_queues = 1; diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h index 9172fae641..266dd4a688 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -7,6 +7,13 @@ #ifndef __CNXK_REP_H__ #define __CNXK_REP_H__ +#define CNXK_REP_TX_OFFLOAD_CAPA \ + (RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \ + RTE_ETH_TX_OFFLOAD_MULTI_SEGS) + +#define CNXK_REP_RX_OFFLOAD_CAPA \ + (RTE_ETH_RX_OFFLOAD_SCATTER | RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_VLAN_STRIP) + /* Common ethdev ops */ extern struct eth_dev_ops cnxk_rep_dev_ops; @@ -57,12 +64,33 @@ struct
cnxk_rep_dev { uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; }; +/* Inline functions */ +static inline void +cnxk_rep_lock(struct cnxk_rep_dev *rep) +{ + rte_spinlock_lock(&rep->parent_dev->rep_lock); +} + +static inline void +cnxk_rep_unlock(struct cnxk_rep_dev *rep) +{ + rte_spinlock_unlock(&rep->parent_dev->rep_lock); +} + static inline struct cnxk_rep_dev * cnxk_rep_pmd_priv(const struct rte_eth_dev *eth_dev) { return eth_dev->data->dev_private; } +static __rte_always_inline void +cnxk_rep_pool_buffer_stats(struct rte_mempool *pool) +{ + plt_rep_dbg("pool %s size %u buffer count in use %u available %u", pool->name, + pool->size, rte_mempool_in_use_count(pool), rte_mempool_avail_count(pool)); +} + +/* Prototypes */ int cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev); int cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev); int cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev); @@ -85,5 +113,12 @@ int cnxk_rep_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats) int cnxk_rep_stats_reset(struct rte_eth_dev *eth_dev); int cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops); int cnxk_rep_state_update(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, uint16_t *rep_id); +int cnxk_rep_promiscuous_enable(struct rte_eth_dev *ethdev); +int cnxk_rep_promiscuous_disable(struct rte_eth_dev *ethdev); +int cnxk_rep_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr); +uint16_t cnxk_rep_tx_burst_dummy(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t cnxk_rep_rx_burst_dummy(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +void cnxk_rep_tx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id); +void cnxk_rep_rx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id); #endif /* __CNXK_REP_H__ */ diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h index fb84d58848..37953ac74f 100644 --- a/drivers/net/cnxk/cnxk_rep_msg.h +++ b/drivers/net/cnxk/cnxk_rep_msg.h @@ -19,6 +19,8 @@ typedef enum CNXK_REP_MSG { CNXK_REP_MSG_READY = 0, CNXK_REP_MSG_ACK, CNXK_REP_MSG_EXIT, + /* Ethernet operation msgs */ + CNXK_REP_MSG_ETH_SET_MAC, /* End of messaging sequence */ CNXK_REP_MSG_END, } cnxk_rep_msg_t; @@ -81,6 +83,12 @@ typedef struct cnxk_rep_msg_exit_data { uint16_t data[]; } __rte_packed cnxk_rep_msg_exit_data_t; +/* Ethernet op - set mac */ +typedef struct cnxk_rep_msg_eth_set_mac_meta { + uint16_t portid; + uint8_t addr_bytes[RTE_ETHER_ADDR_LEN]; +} __rte_packed cnxk_rep_msg_eth_set_mac_meta_t; + void cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type, uint32_t size); void cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz, diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c index 67dcc422e3..4b3fe28acc 100644 --- a/drivers/net/cnxk/cnxk_rep_ops.c +++ b/drivers/net/cnxk/cnxk_rep_ops.c @@ -3,25 +3,221 @@ */ #include +#include + +#define MEMPOOL_CACHE_SIZE 256 +#define TX_DESC_PER_QUEUE 512 +#define RX_DESC_PER_QUEUE 256 +#define NB_REP_VDEV_MBUF 1024 + +static uint16_t +cnxk_rep_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct cnxk_rep_txq *txq = tx_queue; + struct cnxk_rep_dev *rep_dev; + uint16_t n_tx; + + if (unlikely(!txq)) + return 0; + + rep_dev = txq->rep_dev; + plt_rep_dbg("Transmitting %d packets on eswitch queue %d", nb_pkts, txq->qid); + n_tx = cnxk_eswitch_dev_tx_burst(rep_dev->parent_dev,
txq->qid, tx_pkts, nb_pkts, + NIX_TX_OFFLOAD_VLAN_QINQ_F); + return n_tx; +} + +static uint16_t +cnxk_rep_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + struct cnxk_rep_rxq *rxq = rx_queue; + struct cnxk_rep_dev *rep_dev; + uint16_t n_rx; + + if (unlikely(!rxq)) + return 0; + + rep_dev = rxq->rep_dev; + n_rx = cnxk_eswitch_dev_rx_burst(rep_dev->parent_dev, rxq->qid, rx_pkts, nb_pkts); + if (n_rx == 0) + return 0; + + plt_rep_dbg("Received %d packets on eswitch queue %d", n_rx, rxq->qid); + return n_rx; +} + +uint16_t +cnxk_rep_tx_burst_dummy(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + PLT_SET_USED(tx_queue); + PLT_SET_USED(tx_pkts); + PLT_SET_USED(nb_pkts); + + return 0; +} + +uint16_t +cnxk_rep_rx_burst_dummy(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + PLT_SET_USED(rx_queue); + PLT_SET_USED(rx_pkts); + PLT_SET_USED(nb_pkts); + + return 0; +} int cnxk_rep_link_update(struct rte_eth_dev *ethdev, int wait_to_complete) { - PLT_SET_USED(ethdev); + struct rte_eth_link link; PLT_SET_USED(wait_to_complete); + + memset(&link, 0, sizeof(link)); + if (ethdev->data->dev_started) + link.link_status = RTE_ETH_LINK_UP; + else + link.link_status = RTE_ETH_LINK_DOWN; + + link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + link.link_autoneg = RTE_ETH_LINK_FIXED; + link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + + return rte_eth_linkstatus_set(ethdev, &link); +} + +int +cnxk_rep_dev_info_get(struct rte_eth_dev *ethdev, struct rte_eth_dev_info *dev_info) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + uint32_t max_rx_pktlen; + + max_rx_pktlen = (roc_nix_max_pkt_len(&rep_dev->parent_dev->nix) + RTE_ETHER_CRC_LEN - + CNXK_NIX_MAX_VTAG_ACT_SIZE); + + dev_info->min_rx_bufsize = NIX_MIN_HW_FRS + RTE_ETHER_CRC_LEN; + dev_info->max_rx_pktlen = max_rx_pktlen; + dev_info->max_mac_addrs = roc_nix_mac_max_entries_get(&rep_dev->parent_dev->nix); + + dev_info->rx_offload_capa = CNXK_REP_RX_OFFLOAD_CAPA; + dev_info->tx_offload_capa = CNXK_REP_TX_OFFLOAD_CAPA; + dev_info->rx_queue_offload_capa = 0; + dev_info->tx_queue_offload_capa = 0; + + /* For the sake of symmetry, max_rx_queues = max_tx_queues */ + dev_info->max_rx_queues = 1; + dev_info->max_tx_queues = 1; + + /* MTU specifics */ + dev_info->max_mtu = dev_info->max_rx_pktlen - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN); + dev_info->min_mtu = dev_info->min_rx_bufsize - CNXK_NIX_L2_OVERHEAD; + + /* Switch info specifics */ + dev_info->switch_info.name = ethdev->device->name; + dev_info->switch_info.domain_id = rep_dev->switch_domain_id; + dev_info->switch_info.port_id = rep_dev->port_id; + return 0; } int -cnxk_rep_dev_info_get(struct rte_eth_dev *ethdev, struct rte_eth_dev_info *devinfo) +cnxk_rep_representor_info_get(struct rte_eth_dev *ethdev, struct rte_eth_representor_info *info) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + + return cnxk_eswitch_representor_info_get(rep_dev->parent_dev, info); +} + +static int +rep_eth_conf_chk(const struct rte_eth_conf *conf, uint16_t nb_rx_queues) +{ + const struct rte_eth_rss_conf *rss_conf; + int ret = 0; + + if (conf->link_speeds != 0) { + plt_err("Specific link speeds not supported"); + ret = -EINVAL; + } + + switch (conf->rxmode.mq_mode) { + case RTE_ETH_MQ_RX_RSS: + if (nb_rx_queues != 1) { + plt_err("Rx RSS is not supported with %u queues", nb_rx_queues); + ret = -EINVAL; + break; + } + + rss_conf = &conf->rx_adv_conf.rss_conf; + if (rss_conf->rss_key != NULL || rss_conf->rss_key_len != 0 || + rss_conf->rss_hf != 0) { +
plt_err("Rx RSS configuration is not supported"); + ret = -EINVAL; + } + break; + case RTE_ETH_MQ_RX_NONE: + break; + default: + plt_err("Rx MQ modes other than RSS are not supported"); + ret = -EINVAL; + break; + } + + if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) { + plt_err("Tx MQ modes are not supported"); + ret = -EINVAL; + } + + if (conf->lpbk_mode != 0) { + plt_err("loopback not supported"); + ret = -EINVAL; + } + + if (conf->dcb_capability_en != 0) { + plt_err("priority-based flow control not supported"); + ret = -EINVAL; + } + + if (conf->intr_conf.lsc != 0) { + plt_err("link status change interrupt not supported"); + ret = -EINVAL; + } + + if (conf->intr_conf.rxq != 0) { + plt_err("receive queue interrupt not supported"); + ret = -EINVAL; + } + + if (conf->intr_conf.rmv != 0) { + plt_err("remove interrupt not supported"); + ret = -EINVAL; + } + + return ret; +} + +int +cnxk_rep_dev_configure(struct rte_eth_dev *ethdev) +{ + struct rte_eth_dev_data *ethdev_data = ethdev->data; + int rc = -1; + + rc = rep_eth_conf_chk(&ethdev_data->dev_conf, ethdev_data->nb_rx_queues); + if (rc) + goto fail; + + return 0; +fail: + return rc; +} + +int +cnxk_rep_promiscuous_enable(struct rte_eth_dev *ethdev) { PLT_SET_USED(ethdev); - PLT_SET_USED(devinfo); return 0; } int -cnxk_rep_dev_configure(struct rte_eth_dev *ethdev) +cnxk_rep_promiscuous_disable(struct rte_eth_dev *ethdev) { PLT_SET_USED(ethdev); return 0; @@ -30,21 +226,73 @@ cnxk_rep_dev_configure(struct rte_eth_dev *ethdev) int cnxk_rep_dev_start(struct rte_eth_dev *ethdev) { - PLT_SET_USED(ethdev); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + int rc = 0, qid; + + ethdev->rx_pkt_burst = cnxk_rep_rx_burst; + ethdev->tx_pkt_burst = cnxk_rep_tx_burst; + + if (!rep_dev->is_vf_active) + return 0; + + if (!rep_dev->rxq || !rep_dev->txq) { + plt_err("Invalid rxq or txq for representor id %d", rep_dev->rep_id); + rc = -EINVAL; + goto fail; + } + + /* Start rx queues */ + qid = rep_dev->rxq->qid; + rc = cnxk_eswitch_rxq_start(rep_dev->parent_dev, qid); + if (rc) { + plt_err("Failed to start rxq %d, rc=%d", qid, rc); + goto fail; + } + + /* Start tx queues */ + qid = rep_dev->txq->qid; + rc = cnxk_eswitch_txq_start(rep_dev->parent_dev, qid); + if (rc) { + plt_err("Failed to start txq %d, rc=%d", qid, rc); + goto fail; + } + + /* Start rep_xport device only once after first representor gets active */ + if (!rep_dev->parent_dev->repr_cnt.nb_repr_started) { + rc = cnxk_eswitch_nix_rsrc_start(rep_dev->parent_dev); + if (rc) { + plt_err("Failed to start nix dev, rc %d", rc); + goto fail; + } + } + + ethdev->data->tx_queue_state[0] = RTE_ETH_QUEUE_STATE_STARTED; + ethdev->data->rx_queue_state[0] = RTE_ETH_QUEUE_STATE_STARTED; + + rep_dev->parent_dev->repr_cnt.nb_repr_started++; + return 0; +fail: + return rc; } int cnxk_rep_dev_close(struct rte_eth_dev *ethdev) { - PLT_SET_USED(ethdev); - return 0; + return cnxk_rep_dev_uninit(ethdev); } int cnxk_rep_dev_stop(struct rte_eth_dev *ethdev) { - PLT_SET_USED(ethdev); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + + ethdev->rx_pkt_burst = cnxk_rep_rx_burst_dummy; + ethdev->tx_pkt_burst = cnxk_rep_tx_burst_dummy; + cnxk_rep_rx_queue_stop(ethdev, 0); + cnxk_rep_tx_queue_stop(ethdev, 0); + rep_dev->parent_dev->repr_cnt.nb_repr_started--; + return 0; } @@ -53,39 +301,189 @@ cnxk_rep_rx_queue_setup(struct rte_eth_dev *ethdev, uint16_t rx_queue_id, uint16 unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool) { - PLT_SET_USED(ethdev); -
PLT_SET_USED(rx_queue_id); - PLT_SET_USED(nb_rx_desc); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + struct cnxk_rep_rxq *rxq = NULL; + uint16_t qid = 0; + int rc; + PLT_SET_USED(socket_id); - PLT_SET_USED(rx_conf); - PLT_SET_USED(mb_pool); + /* If no representee assigned, store the respective rxq parameters */ + if (!rep_dev->is_vf_active && !rep_dev->rxq) { + rxq = plt_zmalloc(sizeof(*rxq), RTE_CACHE_LINE_SIZE); + if (!rxq) { + rc = -ENOMEM; + plt_err("Failed to alloc RxQ for rep id %d", rep_dev->rep_id); + goto fail; + } + + rxq->qid = qid; + rxq->nb_desc = nb_rx_desc; + rxq->rep_dev = rep_dev; + rxq->mpool = mb_pool; + rxq->rx_conf = rx_conf; + rep_dev->rxq = rxq; + ethdev->data->rx_queues[rx_queue_id] = NULL; + + return 0; + } + + qid = rep_dev->rep_id; + rc = cnxk_eswitch_rxq_setup(rep_dev->parent_dev, qid, nb_rx_desc, rx_conf, mb_pool); + if (rc) { + plt_err("Failed to setup eswitch queue id %d", qid); + goto fail; + } + + rxq = rep_dev->rxq; + if (!rxq) { + plt_err("Invalid RXQ handle for representor port %d rep id %d", rep_dev->port_id, + rep_dev->rep_id); + goto free_queue; + } + + rxq->qid = qid; + ethdev->data->rx_queues[rx_queue_id] = rxq; + ethdev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + plt_rep_dbg("representor id %d portid %d rxq id %d", rep_dev->port_id, + ethdev->data->port_id, rxq->qid); + return 0; +free_queue: + cnxk_eswitch_rxq_release(rep_dev->parent_dev, qid); +fail: + return rc; +} + +void +cnxk_rep_rx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + struct cnxk_rep_rxq *rxq = ethdev->data->rx_queues[queue_id]; + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + int rc; + + if (!rxq) + return; + + plt_rep_dbg("Stopping rxq %u", rxq->qid); + + rc = cnxk_eswitch_rxq_stop(rep_dev->parent_dev, rxq->qid); + if (rc) + plt_err("Failed to stop rxq %d, rc=%d", rxq->qid, rc); + + ethdev->data->rx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; } void cnxk_rep_rx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id) { - PLT_SET_USED(ethdev); - PLT_SET_USED(queue_id); + struct cnxk_rep_rxq *rxq = ethdev->data->rx_queues[queue_id]; + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + int rc; + + if (!rxq) { + plt_err("Invalid rxq retrieved for rep_id %d", rep_dev->rep_id); + return; + } + + plt_rep_dbg("Releasing rxq %u", rxq->qid); + + rc = cnxk_eswitch_rxq_release(rep_dev->parent_dev, rxq->qid); + if (rc) + plt_err("Failed to release rxq %d, rc=%d", rxq->qid, rc); } int cnxk_rep_tx_queue_setup(struct rte_eth_dev *ethdev, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf) { - PLT_SET_USED(ethdev); - PLT_SET_USED(tx_queue_id); - PLT_SET_USED(nb_tx_desc); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + struct cnxk_rep_txq *txq = NULL; + int rc = 0, qid = 0; + PLT_SET_USED(socket_id); - PLT_SET_USED(tx_conf); + /* If no representee assigned, store the respective txq parameters */ + if (!rep_dev->is_vf_active && !rep_dev->txq) { + txq = plt_zmalloc(sizeof(*txq), RTE_CACHE_LINE_SIZE); + if (!txq) { + rc = -ENOMEM; + plt_err("Failed to alloc txq for rep id %d", rep_dev->rep_id); + goto fail; + } + + txq->qid = qid; + txq->nb_desc = nb_tx_desc; + txq->tx_conf = tx_conf; + txq->rep_dev = rep_dev; + rep_dev->txq = txq; + + ethdev->data->tx_queues[tx_queue_id] = NULL; + + return 0; + } + + qid = rep_dev->rep_id; + rc = cnxk_eswitch_txq_setup(rep_dev->parent_dev, qid, nb_tx_desc, tx_conf); + if (rc) { + plt_err("Failed to setup
eswitch queue id %d", qid); + goto fail; + } + + txq = rep_dev->txq; + if (!txq) { + plt_err("Invalid TXQ handle for representor port %d rep id %d", rep_dev->port_id, + rep_dev->rep_id); + goto free_queue; + } + + txq->qid = qid; + ethdev->data->tx_queues[tx_queue_id] = txq; + ethdev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + plt_rep_dbg("representor id %d portid %d txq id %d", rep_dev->port_id, + ethdev->data->port_id, txq->qid); + return 0; +free_queue: + cnxk_eswitch_txq_release(rep_dev->parent_dev, qid); +fail: + return rc; +} + +void +cnxk_rep_tx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + struct cnxk_rep_txq *txq = ethdev->data->tx_queues[queue_id]; + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + int rc; + + if (!txq) + return; + + plt_rep_dbg("Stopping txq %u", txq->qid); + + rc = cnxk_eswitch_txq_stop(rep_dev->parent_dev, txq->qid); + if (rc) + plt_err("Failed to stop txq %d, rc=%d", txq->qid, rc); + + ethdev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; } void cnxk_rep_tx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id) { - PLT_SET_USED(ethdev); - PLT_SET_USED(queue_id); + struct cnxk_rep_txq *txq = ethdev->data->tx_queues[queue_id]; + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + int rc; + + if (!txq) { + plt_err("Invalid txq retrieved for rep_id %d", rep_dev->rep_id); + return; + } + + plt_rep_dbg("Releasing txq %u", txq->qid); + + rc = cnxk_eswitch_txq_release(rep_dev->parent_dev, txq->qid); + if (rc) + plt_err("Failed to release txq %d, rc=%d", txq->qid, rc); } int @@ -111,15 +509,70 @@ cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **op return 0; } +int +cnxk_rep_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_eth_set_mac_meta_t msg_sm_meta; + cnxk_rep_msg_ack_data_t adata; + uint32_t len = 0; + int rc; + void *buffer; + size_t size; + + /* If representor not representing any VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + size = CNXK_REP_MSG_MAX_BUFFER_SZ; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_sm_meta.portid = rep_dev->rep_id; + rte_memcpy(&msg_sm_meta.addr_bytes, addr->addr_bytes, RTE_ETHER_ADDR_LEN); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_sm_meta, + sizeof(cnxk_rep_msg_eth_set_mac_meta_t), + CNXK_REP_MSG_ETH_SET_MAC); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, &adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + if (adata.u.sval < 0) { + rc = adata.u.sval; + plt_err("Failed to set mac address, err %d", rc); + goto fail; + } + + rte_free(buffer); + + return 0; +fail: + rte_free(buffer); + return rc; +} + /* CNXK platform representor dev ops */ struct eth_dev_ops cnxk_rep_dev_ops = { .dev_infos_get = cnxk_rep_dev_info_get, + .representor_info_get = cnxk_rep_representor_info_get, .dev_configure = cnxk_rep_dev_configure, .dev_start = cnxk_rep_dev_start, .rx_queue_setup = cnxk_rep_rx_queue_setup, .rx_queue_release = cnxk_rep_rx_queue_release, .tx_queue_setup = cnxk_rep_tx_queue_setup, .tx_queue_release = cnxk_rep_tx_queue_release, + .promiscuous_enable = cnxk_rep_promiscuous_enable, + .promiscuous_disable = cnxk_rep_promiscuous_disable, + .mac_addr_set = cnxk_rep_mac_addr_set,
.link_update = cnxk_rep_link_update, .dev_close = cnxk_rep_dev_close, .dev_stop = cnxk_rep_dev_stop, From patchwork Tue Dec 19 17:39:53 2023 X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135359 X-Patchwork-Delegate: jerinj@marvell.com From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 14/24] common/cnxk: get representees ethernet stats Date: Tue, 19 Dec 2023 23:09:53 +0530 Message-ID: <20231219174003.72901-15-hkalra@marvell.com> In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> Implementing an mbox interface to fetch the representees' ethernet stats from the kernel.
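The ROC call introduced below feeds the representor's stats_get callback added in the next patch; from an application's point of view, the representee counters then surface through the standard ethdev stats API. A small usage sketch under that assumption, with port_id standing for a probed representor port:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Read and print a representor port's stats via the generic ethdev API;
 * the PMD's stats_get callback fills these from the representee's HW
 * counters fetched over the mbox interface below.
 */
static void
print_rep_stats(uint16_t port_id)
{
	struct rte_eth_stats st;

	if (rte_eth_stats_get(port_id, &st) != 0) {
		printf("port %u: stats unavailable\n", port_id);
		return;
	}
	printf("port %u: rx %" PRIu64 " pkts / %" PRIu64 " bytes, "
	       "tx %" PRIu64 " pkts / %" PRIu64 " bytes, rx errors %" PRIu64 "\n",
	       port_id, st.ipackets, st.ibytes, st.opackets, st.obytes,
	       st.ierrors);
}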
Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_eswitch.c | 45 +++++++++++++++++++++++++++++++ drivers/common/cnxk/roc_eswitch.h | 2 ++ drivers/common/cnxk/roc_mbox.h | 30 +++++++++++++++++++++ drivers/common/cnxk/version.map | 1 + 4 files changed, 78 insertions(+) diff --git a/drivers/common/cnxk/roc_eswitch.c b/drivers/common/cnxk/roc_eswitch.c index 31bdba3985..034a5e6c92 100644 --- a/drivers/common/cnxk/roc_eswitch.c +++ b/drivers/common/cnxk/roc_eswitch.c @@ -321,3 +321,48 @@ roc_eswitch_nix_process_repte_notify_cb_unregister(struct roc_nix *roc_nix) dev->ops->repte_notify = NULL; } + +int +roc_eswitch_nix_repte_stats(struct roc_nix *roc_nix, uint16_t pf_func, struct roc_nix_stats *stats) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + struct nix_get_lf_stats_req *req; + struct nix_lf_stats_rsp *rsp; + struct mbox *mbox; + int rc; + + mbox = mbox_get(dev->mbox); + req = mbox_alloc_msg_nix_get_lf_stats(mbox); + if (!req) { + rc = -ENOSPC; + goto exit; + } + + req->hdr.pcifunc = roc_nix_get_pf_func(roc_nix); + req->pcifunc = pf_func; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + stats->rx_octs = rsp->rx.octs; + stats->rx_ucast = rsp->rx.ucast; + stats->rx_bcast = rsp->rx.bcast; + stats->rx_mcast = rsp->rx.mcast; + stats->rx_drop = rsp->rx.drop; + stats->rx_drop_octs = rsp->rx.drop_octs; + stats->rx_drop_bcast = rsp->rx.drop_bcast; + stats->rx_drop_mcast = rsp->rx.drop_mcast; + stats->rx_err = rsp->rx.err; + + stats->tx_ucast = rsp->tx.ucast; + stats->tx_bcast = rsp->tx.bcast; + stats->tx_mcast = rsp->tx.mcast; + stats->tx_drop = rsp->tx.drop; + stats->tx_octs = rsp->tx.octs; + +exit: + mbox_put(mbox); + return rc; +} diff --git a/drivers/common/cnxk/roc_eswitch.h b/drivers/common/cnxk/roc_eswitch.h index 8837e19b22..907e6c37c6 100644 --- a/drivers/common/cnxk/roc_eswitch.h +++ b/drivers/common/cnxk/roc_eswitch.h @@ -25,6 +25,8 @@ int __roc_api roc_eswitch_npc_rss_action_configure(struct roc_npc *roc_npc, /* NIX */ int __roc_api roc_eswitch_nix_vlan_tpid_set(struct roc_nix *nix, uint32_t type, uint16_t tpid, bool is_vf); +int __roc_api roc_eswitch_nix_repte_stats(struct roc_nix *roc_nix, uint16_t pf_func, + struct roc_nix_stats *stats); int __roc_api roc_eswitch_nix_process_repte_notify_cb_register(struct roc_nix *roc_nix, process_repte_notify_t proc_repte_nt); void __roc_api roc_eswitch_nix_process_repte_notify_cb_unregister(struct roc_nix *roc_nix); diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h index 2bedf1fb81..1a6bb2f5a2 100644 --- a/drivers/common/cnxk/roc_mbox.h +++ b/drivers/common/cnxk/roc_mbox.h @@ -304,6 +304,7 @@ struct mbox_msghdr { M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req, msg_rsp)\ M(NIX_MCAST_GRP_UPDATE, 0x802d, nix_mcast_grp_update, nix_mcast_grp_update_req, \ nix_mcast_grp_update_rsp) \ + M(NIX_GET_LF_STATS, 0x802e, nix_get_lf_stats, nix_get_lf_stats_req, nix_lf_stats_rsp) \ /* MCS mbox IDs (range 0xa000 - 0xbFFF) */ \ M(MCS_ALLOC_RESOURCES, 0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req, \ mcs_alloc_rsrc_rsp) \ @@ -1846,6 +1847,35 @@ struct nix_mcast_grp_update_rsp { uint32_t __io mce_start_index; }; +struct nix_get_lf_stats_req { + struct mbox_msghdr hdr; + uint16_t __io pcifunc; + uint64_t __io rsvd; +}; + +struct nix_lf_stats_rsp { + struct mbox_msghdr hdr; + struct { + uint64_t __io octs; + uint64_t __io ucast; + uint64_t __io bcast; + uint64_t __io mcast; + uint64_t __io drop; + uint64_t __io drop_octs; + uint64_t __io 
drop_mcast; + uint64_t __io drop_bcast; + uint64_t __io err; + uint64_t __io rsvd[5]; + } rx; + struct { + uint64_t __io ucast; + uint64_t __io bcast; + uint64_t __io mcast; + uint64_t __io drop; + uint64_t __io octs; + } tx; +}; + /* Global NIX inline IPSec configuration */ struct nix_inline_ipsec_cfg { struct mbox_msghdr hdr; diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index e170a6a63a..87c9d7511f 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -93,6 +93,7 @@ INTERNAL { roc_error_msg_get; roc_eswitch_nix_process_repte_notify_cb_register; roc_eswitch_nix_process_repte_notify_cb_unregister; + roc_eswitch_nix_repte_stats; roc_eswitch_nix_vlan_tpid_set; roc_eswitch_npc_mcam_delete_rule; roc_eswitch_npc_mcam_rx_rule; From patchwork Tue Dec 19 17:39:54 2023 X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135360 X-Patchwork-Delegate: jerinj@marvell.com From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 15/24] net/cnxk: ethernet statistic for representor Date: Tue, 19 Dec 2023 23:09:54 +0530 Message-ID: <20231219174003.72901-16-hkalra@marvell.com> In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> Adding representor ethernet statistics support, which can fetch stats for representees operating independently or as part of a companion app. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_rep_msg.h | 7 ++ drivers/net/cnxk/cnxk_rep_ops.c | 140 +++++++++++++++++++++++++++++++- 2 files changed, 143 insertions(+), 4 deletions(-) diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h index 37953ac74f..3236de50ad 100644 --- a/drivers/net/cnxk/cnxk_rep_msg.h +++ b/drivers/net/cnxk/cnxk_rep_msg.h @@ -21,6 +21,8 @@ typedef enum CNXK_REP_MSG { CNXK_REP_MSG_EXIT, /* Ethernet operation msgs */ CNXK_REP_MSG_ETH_SET_MAC, + CNXK_REP_MSG_ETH_STATS_GET, + CNXK_REP_MSG_ETH_STATS_CLEAR, /* End of messaging sequence */ CNXK_REP_MSG_END, } cnxk_rep_msg_t; @@ -89,6 +91,11 @@ typedef struct cnxk_rep_msg_eth_mac_set_meta { uint8_t addr_bytes[RTE_ETHER_ADDR_LEN]; } __rte_packed cnxk_rep_msg_eth_set_mac_meta_t; +/* Ethernet op - get/clear stats */ +typedef struct cnxk_rep_msg_eth_stats_meta { + uint16_t portid; +} __rte_packed cnxk_rep_msg_eth_stats_meta_t; + void cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type, uint32_t size); void cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz, diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c index 4b3fe28acc..e07c63dcb2 100644 --- a/drivers/net/cnxk/cnxk_rep_ops.c +++ b/drivers/net/cnxk/cnxk_rep_ops.c @@ -486,19 +486,151 @@ cnxk_rep_tx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id) plt_err("Failed to release txq %d, rc=%d", txq->qid, rc); } +static int +process_eth_stats(struct cnxk_rep_dev *rep_dev, cnxk_rep_msg_ack_data_t *adata, cnxk_rep_msg_t msg) +{ + cnxk_rep_msg_eth_stats_meta_t msg_st_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + size = CNXK_REP_MSG_MAX_BUFFER_SZ; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_st_meta.portid = rep_dev->rep_id; + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_st_meta, + sizeof(cnxk_rep_msg_eth_stats_meta_t), msg); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + rte_free(buffer); + + return 0; +fail: + rte_free(buffer); + return rc; +} + +static int +native_repte_eth_stats(struct cnxk_rep_dev *rep_dev, struct rte_eth_stats *stats) +{ + struct roc_nix_stats nix_stats; + int rc = 0; + + rc = roc_eswitch_nix_repte_stats(&rep_dev->parent_dev->nix, rep_dev->hw_func, &nix_stats); + if (rc) { + plt_err("Failed to get stats for representee %x, err %d", rep_dev->hw_func, rc); + goto fail; + } + + memset(stats, 0, sizeof(struct rte_eth_stats)); + stats->opackets = nix_stats.tx_ucast; + stats->opackets += nix_stats.tx_mcast; + stats->opackets +=
nix_stats.tx_bcast; + stats->oerrors = nix_stats.tx_drop; + stats->obytes = nix_stats.tx_octs; + + stats->ipackets = nix_stats.rx_ucast; + stats->ipackets += nix_stats.rx_mcast; + stats->ipackets += nix_stats.rx_bcast; + stats->imissed = nix_stats.rx_drop; + stats->ibytes = nix_stats.rx_octs; + stats->ierrors = nix_stats.rx_err; + + return 0; +fail: + return rc; +} + int cnxk_rep_stats_get(struct rte_eth_dev *ethdev, struct rte_eth_stats *stats) { - PLT_SET_USED(ethdev); - PLT_SET_USED(stats); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + struct rte_eth_stats vf_stats; + cnxk_rep_msg_ack_data_t adata; + int rc; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + if (rep_dev->native_repte) { + /* For representees which are independent */ + rc = native_repte_eth_stats(rep_dev, &vf_stats); + if (rc) { + plt_err("Failed to get stats for vf rep %x (hw_func %x), err %d", + rep_dev->port_id, rep_dev->hw_func, rc); + goto fail; + } + } else { + /* For representees which are part of companian app */ + rc = process_eth_stats(rep_dev, &adata, CNXK_REP_MSG_ETH_STATS_GET); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + plt_err("Failed to get stats for vf rep %x, err %d", rep_dev->port_id, rc); + } + + if (adata.size != sizeof(struct rte_eth_stats)) { + rc = -EINVAL; + plt_err("Incomplete stats received for vf rep %d", rep_dev->port_id); + goto fail; + } + + rte_memcpy(&vf_stats, adata.u.data, adata.size); + } + + stats->q_ipackets[0] = vf_stats.ipackets; + stats->q_ibytes[0] = vf_stats.ibytes; + stats->ipackets = vf_stats.ipackets; + stats->ibytes = vf_stats.ibytes; + + stats->q_opackets[0] = vf_stats.opackets; + stats->q_obytes[0] = vf_stats.obytes; + stats->opackets = vf_stats.opackets; + stats->obytes = vf_stats.obytes; + + plt_rep_dbg("Input packets %" PRId64 " Output packets %" PRId64 "", stats->ipackets, + stats->opackets); + return 0; +fail: + return rc; } int cnxk_rep_stats_reset(struct rte_eth_dev *ethdev) { - PLT_SET_USED(ethdev); - return 0; + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + cnxk_rep_msg_ack_data_t adata; + int rc = 0; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + rc = process_eth_stats(rep_dev, &adata, CNXK_REP_MSG_ETH_STATS_CLEAR); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + plt_err("Failed to clear stats for vf rep %x, err %d", rep_dev->port_id, rc); + } + + return rc; } int From patchwork Tue Dec 19 17:39:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135361 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9B80E43747; Tue, 19 Dec 2023 18:42:41 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3AE1742E97; Tue, 19 Dec 2023 18:41:24 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 3C30F42E93 for ; Tue, 19 Dec 2023 18:41:19 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 3BJA5YhL028899 for ; Tue, 19 Dec 2023 09:41:18 -0800 
From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 16/24] common/cnxk: base support for eswitch VF Date: Tue, 19 Dec 2023 23:09:55 +0530 Message-ID: <20231219174003.72901-17-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> MIME-Version: 1.0

- ROC layer changes for supporting the eswitch VF
- NIX LBK changes for the eswitch

Signed-off-by: Harman Kalra
---
 drivers/common/cnxk/roc_constants.h |  1 +
 drivers/common/cnxk/roc_dev.c       |  1 +
 drivers/common/cnxk/roc_nix.c       | 15 +++++++++++++--
 drivers/common/cnxk/roc_nix.h       |  1 +
 drivers/common/cnxk/roc_nix_priv.h  |  1 +
 drivers/common/cnxk/version.map     |  1 +
 6 files changed, 18 insertions(+), 2 deletions(-)
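[Editorial note, not part of the patch: a minimal sketch of the kind of call site the new predicate enables. It mirrors the lbk-or-esw checks in the diff below; the helper name is ours.]

#include <stdbool.h>
#include "roc_api.h"

/* Hypothetical ROC consumer: eswitch links reuse the loopback (LBK)
 * limits, e.g. for max frame size and DWRR MTU, exactly as the diff
 * below arranges inside roc_nix.c. */
static bool
nix_uses_lbk_limits(struct roc_nix *roc_nix)
{
	return roc_nix_is_lbk(roc_nix) || roc_nix_is_esw(roc_nix);
}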
diff --git a/drivers/common/cnxk/roc_constants.h b/drivers/common/cnxk/roc_constants.h
index cb4edbea58..21b3998cee 100644
--- a/drivers/common/cnxk/roc_constants.h
+++ b/drivers/common/cnxk/roc_constants.h
@@ -44,6 +44,7 @@
 #define PCI_DEVID_CNXK_RVU_REE_PF 0xA0f4
 #define PCI_DEVID_CNXK_RVU_REE_VF 0xA0f5
 #define PCI_DEVID_CNXK_RVU_ESWITCH_PF 0xA0E0
+#define PCI_DEVID_CNXK_RVU_ESWITCH_VF 0xA0E1

 #define PCI_DEVID_CN9K_CGX 0xA059
 #define PCI_DEVID_CN10K_RPM 0xA060
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index b12732de34..4d4cfeaaca 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -1225,6 +1225,7 @@ dev_vf_hwcap_update(struct plt_pci_device *pci_dev, struct dev *dev)
 	case PCI_DEVID_CNXK_RVU_VF:
 	case PCI_DEVID_CNXK_RVU_SDP_VF:
 	case PCI_DEVID_CNXK_RVU_NIX_INL_VF:
+	case PCI_DEVID_CNXK_RVU_ESWITCH_VF:
 		dev->hwcap |= DEV_HWCAP_F_VF;
 		break;
 	}
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index 7e327a7e6e..f1eaca3ab4 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -13,6 +13,14 @@ roc_nix_is_lbk(struct roc_nix *roc_nix)
 	return nix->lbk_link;
 }

+bool
+roc_nix_is_esw(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->esw_link;
+}
+
 int
 roc_nix_get_base_chan(struct roc_nix *roc_nix)
 {
@@ -156,7 +164,7 @@ roc_nix_max_pkt_len(struct roc_nix *roc_nix)
 	if (roc_model_is_cn9k())
 		return NIX_CN9K_MAX_HW_FRS;

-	if (nix->lbk_link)
+	if (nix->lbk_link || nix->esw_link)
 		return NIX_LBK_MAX_HW_FRS;

 	return NIX_RPM_MAX_HW_FRS;
@@ -349,7 +357,7 @@ roc_nix_get_hw_info(struct roc_nix *roc_nix)
 	rc = mbox_process_msg(mbox, (void *)&hw_info);
 	if (rc == 0) {
 		nix->vwqe_interval = hw_info->vwqe_delay;
-		if (nix->lbk_link)
+		if (nix->lbk_link || nix->esw_link)
 			roc_nix->dwrr_mtu = hw_info->lbk_dwrr_mtu;
 		else if (nix->sdp_link)
 			roc_nix->dwrr_mtu = hw_info->sdp_dwrr_mtu;
@@ -366,6 +374,7 @@ sdp_lbk_id_update(struct plt_pci_device *pci_dev, struct nix *nix)
 {
 	nix->sdp_link = false;
 	nix->lbk_link = false;
+	nix->esw_link = false;

 	/* Update SDP/LBK link based on PCI device id */
 	switch (pci_dev->id.device_id) {
@@ -374,7 +383,9 @@ sdp_lbk_id_update(struct plt_pci_device *pci_dev, struct nix *nix)
 		nix->sdp_link = true;
 		break;
 	case PCI_DEVID_CNXK_RVU_AF_VF:
+	case PCI_DEVID_CNXK_RVU_ESWITCH_VF:
 		nix->lbk_link = true;
+		nix->esw_link = true;
 		break;
 	default:
 		break;
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index b369335fc4..ffea84dae8 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -527,6 +527,7 @@ int __roc_api roc_nix_dev_fini(struct roc_nix *roc_nix);

 /* Type */
 bool __roc_api roc_nix_is_lbk(struct roc_nix *roc_nix);
+bool __roc_api roc_nix_is_esw(struct roc_nix *roc_nix);
 bool __roc_api roc_nix_is_sdp(struct roc_nix *roc_nix);
 bool __roc_api roc_nix_is_pf(struct roc_nix *roc_nix);
 bool __roc_api roc_nix_is_vf_or_sdp(struct roc_nix *roc_nix);
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 8767a62577..e2f65a49c8 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -170,6 +170,7 @@ struct nix {
 	uintptr_t base;
 	bool sdp_link;
 	bool lbk_link;
+	bool esw_link;
 	bool ptp_en;
 	bool is_nix1;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 87c9d7511f..cdb46d8739 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -276,6 +276,7 @@ INTERNAL {
 	roc_nix_inl_outb_cpt_lfs_dump;
 	roc_nix_cpt_ctx_cache_sync;
 	roc_nix_is_lbk;
+	roc_nix_is_esw;
 	roc_nix_is_pf;
 	roc_nix_is_sdp;
 	roc_nix_is_vf_or_sdp;
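[Editorial note, not part of the series: with esw_link in place, PMD-level code can treat all internal links uniformly. The hypothetical predicate below is exactly the condition patch 17/24 applies in cnxk_link.c further down.]

#include <stdbool.h>
#include "roc_api.h"

/* Hypothetical helper: true for any internal NIX link (no physical MAC),
 * now including the eswitch VF classified by this patch. */
static inline bool
nix_link_is_internal(struct roc_nix *roc_nix)
{
	return roc_nix_is_lbk(roc_nix) || roc_nix_is_sdp(roc_nix) ||
	       roc_nix_is_esw(roc_nix);
}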
From patchwork Tue Dec 19 17:39:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135362 X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 17/24] net/cnxk: eswitch VF as ethernet device Date: Tue, 19 Dec 2023 23:09:56 +0530 Message-ID: <20231219174003.72901-18-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> MIME-Version: 1.0

Adding support for the eswitch VF to probe as a normal cnxk ethernet device.

Signed-off-by: Harman Kalra
---
 drivers/net/cnxk/cn10k_ethdev.c    |  1 +
 drivers/net/cnxk/cnxk_ethdev.c     | 39 ++++++++++++++++++++++--------
 drivers/net/cnxk/cnxk_ethdev.h     |  3 +++
 drivers/net/cnxk/cnxk_ethdev_ops.c |  4 +++
 drivers/net/cnxk/cnxk_link.c       |  3 ++-
 5 files changed, 39 insertions(+), 11 deletions(-)

diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index a2e943a3d0..9a072b72a7 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -963,6 +963,7 @@ static const struct rte_pci_id cn10k_pci_nix_map[] = {
 	CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KB, PCI_DEVID_CNXK_RVU_PF),
 	CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CNF10KB, PCI_DEVID_CNXK_RVU_PF),
 	CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_VF),
+	CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_ESWITCH_VF),
 	CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_VF),
CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CNF10KA, PCI_DEVID_CNXK_RVU_VF), CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KB, PCI_DEVID_CNXK_RVU_VF), diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c index 2372a4e793..50f1641c38 100644 --- a/drivers/net/cnxk/cnxk_ethdev.c +++ b/drivers/net/cnxk/cnxk_ethdev.c @@ -1449,12 +1449,14 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev) goto cq_fini; /* Init flow control configuration */ - fc_cfg.type = ROC_NIX_FC_RXCHAN_CFG; - fc_cfg.rxchan_cfg.enable = true; - rc = roc_nix_fc_config_set(nix, &fc_cfg); - if (rc) { - plt_err("Failed to initialize flow control rc=%d", rc); - goto cq_fini; + if (!roc_nix_is_esw(nix)) { + fc_cfg.type = ROC_NIX_FC_RXCHAN_CFG; + fc_cfg.rxchan_cfg.enable = true; + rc = roc_nix_fc_config_set(nix, &fc_cfg); + if (rc) { + plt_err("Failed to initialize flow control rc=%d", rc); + goto cq_fini; + } } /* Update flow control configuration to PMD */ @@ -1688,10 +1690,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev) } /* Update Flow control configuration */ - rc = nix_update_flow_ctrl_config(eth_dev); - if (rc) { - plt_err("Failed to enable flow control. error code(%d)", rc); - return rc; + if (!roc_nix_is_esw(&dev->nix)) { + rc = nix_update_flow_ctrl_config(eth_dev); + if (rc) { + plt_err("Failed to enable flow control. error code(%d)", rc); + return rc; + } } /* Enable Rx in NPC */ @@ -1976,6 +1980,16 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev) TAILQ_INIT(&dev->mcs_list); } + /* Reserve a switch domain for eswitch device */ + if (pci_dev->id.device_id == PCI_DEVID_CNXK_RVU_ESWITCH_VF) { + eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR; + rc = rte_eth_switch_domain_alloc(&dev->switch_domain_id); + if (rc) { + plt_err("Failed to alloc switch domain: %d", rc); + goto free_mac_addrs; + } + } + plt_nix_dbg("Port=%d pf=%d vf=%d ver=%s hwcap=0x%" PRIx64 " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64, eth_dev->data->port_id, roc_nix_get_pf(nix), @@ -2046,6 +2060,11 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset) } } + /* Free switch domain ID reserved for eswitch device */ + if ((eth_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) && + rte_eth_switch_domain_free(dev->switch_domain_id)) + plt_err("Failed to free switch domain"); + /* Disable and free rte_meter entries */ nix_meter_fini(dev); diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h index 4d3ebf123b..d8eba5e1dd 100644 --- a/drivers/net/cnxk/cnxk_ethdev.h +++ b/drivers/net/cnxk/cnxk_ethdev.h @@ -424,6 +424,9 @@ struct cnxk_eth_dev { /* MCS device */ struct cnxk_mcs_dev *mcs_dev; struct cnxk_macsec_sess_list mcs_list; + + /* Eswitch domain ID */ + uint16_t switch_domain_id; }; struct cnxk_eth_rxq_sp { diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c index 5de2919047..67fbf7c269 100644 --- a/drivers/net/cnxk/cnxk_ethdev_ops.c +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c @@ -71,6 +71,10 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; devinfo->max_rx_mempools = CNXK_NIX_NUM_POOLS_MAX; + if (eth_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) { + devinfo->switch_info.name = eth_dev->device->name; + devinfo->switch_info.domain_id = dev->switch_domain_id; + } return 0; } diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c index 127c9e72e7..903b44de2c 100644 --- a/drivers/net/cnxk/cnxk_link.c +++ b/drivers/net/cnxk/cnxk_link.c @@ -13,7 +13,8 @@ cnxk_nix_toggle_flag_link_cfg(struct 
cnxk_eth_dev *dev, bool set) dev->flags &= ~CNXK_LINK_CFG_IN_PROGRESS_F; /* Update link info for LBK */ - if (!set && (roc_nix_is_lbk(&dev->nix) || roc_nix_is_sdp(&dev->nix))) { + if (!set && + (roc_nix_is_lbk(&dev->nix) || roc_nix_is_sdp(&dev->nix) || roc_nix_is_esw(&dev->nix))) { struct rte_eth_link link; link.link_status = RTE_ETH_LINK_UP; From patchwork Tue Dec 19 17:39:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135363 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 80B6643747; Tue, 19 Dec 2023 18:42:57 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D423342EAC; Tue, 19 Dec 2023 18:41:26 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id F0B2542EA5 for ; Tue, 19 Dec 2023 18:41:25 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 3BJ93T8k016883 for ; Tue, 19 Dec 2023 09:41:25 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-type; s=pfpt0220; bh=LSRKo22XyirCknB5pVSzm bt92XLyjnzKYLF+pmZmats=; b=hjsXQzBYi/ol6CXyr2IMBJHhB7inY5uPGtf70 SfaVMbIXCb7SO8bLJQ/nEe9NajF5DNl7hJXEN05Qv2L5AF9o8+RiX7elzRsUex2W QHDZHJrROdFT44pc1z8x/VwDltPD97fpA57apUhIOB4YVUdiAmG54vFxqxOtErPH f8IADpgL0TmTlh8G7cYIz9NPbedmK7ak6QUvj+LKF/RKwmBRJMq0Ny76JyYaLzLw C5/K1gjf5gPQUWG8H7ngZ5vV4mO5gqW7961ZNQ+RhaSMYN8c6ylALJVM3jyVpT3J RuzSuk2Z9LdPSYyuL/yY96b29KEADAy/5pPe2g8UsW1i8ClIQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3v1c9kumh1-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Tue, 19 Dec 2023 09:41:25 -0800 (PST) Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 19 Dec 2023 09:41:22 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Dec 2023 09:41:22 -0800 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id 537EB3F7094; Tue, 19 Dec 2023 09:41:20 -0800 (PST) From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , , Satheesh Paul Subject: [PATCH v2 18/24] common/cnxk: support port representor and represented port Date: Tue, 19 Dec 2023 23:09:57 +0530 Message-ID: <20231219174003.72901-19-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: ybh_y41QBf6-kMfr3FJosEfXMPiJhgki X-Proofpoint-GUID: ybh_y41QBf6-kMfr3FJosEfXMPiJhgki X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and 
discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Implementing the common infrastructural changes for supporting port representors and represented ports used as action and pattern in net layer. Signed-off-by: Kiran Kumar K Signed-off-by: Satheesh Paul Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_npc.c | 63 +++++++++++++++++++++++------ drivers/common/cnxk/roc_npc.h | 13 +++++- drivers/common/cnxk/roc_npc_mcam.c | 62 +++++++++++++++------------- drivers/common/cnxk/roc_npc_parse.c | 28 ++++++++++++- drivers/common/cnxk/roc_npc_priv.h | 2 + drivers/net/cnxk/cnxk_flow.c | 2 +- 6 files changed, 125 insertions(+), 45 deletions(-) diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c index 67a660a2bc..5a836f16f5 100644 --- a/drivers/common/cnxk/roc_npc.c +++ b/drivers/common/cnxk/roc_npc.c @@ -570,6 +570,8 @@ npc_parse_actions(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, flow->ctr_id = NPC_COUNTER_NONE; flow->mtr_id = ROC_NIX_MTR_ID_INVALID; pf_func = npc->pf_func; + if (flow->has_rep) + pf_func = flow->rep_pf_func; for (; actions->type != ROC_NPC_ACTION_TYPE_END; actions++) { switch (actions->type) { @@ -898,10 +900,14 @@ npc_parse_pattern(struct npc *npc, const struct roc_npc_item_info pattern[], struct roc_npc_flow *flow, struct npc_parse_state *pst) { npc_parse_stage_func_t parse_stage_funcs[] = { - npc_parse_meta_items, npc_parse_mark_item, npc_parse_pre_l2, npc_parse_cpt_hdr, - npc_parse_higig2_hdr, npc_parse_tx_queue, npc_parse_la, npc_parse_lb, - npc_parse_lc, npc_parse_ld, npc_parse_le, npc_parse_lf, - npc_parse_lg, npc_parse_lh, + npc_parse_meta_items, npc_parse_port_representor_id, + npc_parse_mark_item, npc_parse_pre_l2, + npc_parse_cpt_hdr, npc_parse_higig2_hdr, + npc_parse_tx_queue, npc_parse_la, + npc_parse_lb, npc_parse_lc, + npc_parse_ld, npc_parse_le, + npc_parse_lf, npc_parse_lg, + npc_parse_lh, }; uint8_t layer = 0; int key_offset; @@ -1140,15 +1146,20 @@ npc_rss_action_program(struct roc_npc *roc_npc, struct roc_npc_flow *flow) { const struct roc_npc_action_rss *rss; + struct roc_npc *npc = roc_npc; uint32_t rss_grp; uint8_t alg_idx; int rc; + if (flow->has_rep) { + npc = roc_npc->rep_npc; + npc->flowkey_cfg_state = roc_npc->flowkey_cfg_state; + } + for (; actions->type != ROC_NPC_ACTION_TYPE_END; actions++) { if (actions->type == ROC_NPC_ACTION_TYPE_RSS) { rss = (const struct roc_npc_action_rss *)actions->conf; - rc = npc_rss_action_configure(roc_npc, rss, &alg_idx, - &rss_grp, flow->mcam_id); + rc = npc_rss_action_configure(npc, rss, &alg_idx, &rss_grp, flow->mcam_id); if (rc) return rc; @@ -1171,7 +1182,7 @@ npc_vtag_cfg_delete(struct roc_npc *roc_npc, struct roc_npc_flow *flow) struct roc_nix *roc_nix = roc_npc->roc_nix; struct nix_vtag_config *vtag_cfg; struct nix_vtag_config_rsp *rsp; - struct mbox *mbox; + struct mbox *mbox, *ombox; struct nix *nix; int rc = 0; @@ -1181,7 +1192,10 @@ npc_vtag_cfg_delete(struct roc_npc *roc_npc, struct roc_npc_flow *flow) } tx_vtag_action; nix = roc_nix_to_nix_priv(roc_nix); - mbox = mbox_get((&nix->dev)->mbox); + ombox = (&nix->dev)->mbox; + if (flow->has_rep) + ombox = flow->rep_mbox; + mbox = mbox_get(ombox); tx_vtag_action.reg = flow->vtag_action; vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox); @@ -1400,6 +1414,7 @@ npc_vtag_strip_action_configure(struct mbox *mbox, rx_vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15); rx_vtag_action |= ((uint64_t)NPC_LID_LB << 8); + rx_vtag_action |= (NIX_RX_VTAG_TYPE7 << 12); 
rx_vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR; if (*strip_cnt == 2) { @@ -1432,6 +1447,8 @@ npc_vtag_action_program(struct roc_npc *roc_npc, nix = roc_nix_to_nix_priv(roc_nix); mbox = (&nix->dev)->mbox; + if (flow->has_rep) + mbox = flow->rep_mbox; memset(vlan_info, 0, sizeof(vlan_info)); @@ -1448,6 +1465,7 @@ npc_vtag_action_program(struct roc_npc *roc_npc, if (rc) return rc; + plt_npc_dbg("VLAN strip action, strip_cnt %d", strip_cnt); if (strip_cnt == 2) actions++; @@ -1587,6 +1605,17 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, memset(flow, 0, sizeof(*flow)); memset(&parse_state, 0, sizeof(parse_state)); + flow->port_id = -1; + if (roc_npc->rep_npc) { + flow->rep_channel = roc_nix_to_nix_priv(roc_npc->rep_npc->roc_nix)->rx_chan_base; + flow->rep_pf_func = roc_npc->rep_pf_func; + flow->rep_mbox = roc_npc_to_npc_priv(roc_npc->rep_npc)->mbox; + flow->has_rep = true; + flow->is_rep_vf = !roc_nix_is_pf(roc_npc->rep_npc->roc_nix); + flow->port_id = roc_npc->rep_port_id; + flow->rep_npc = roc_npc_to_npc_priv(roc_npc->rep_npc); + } + parse_state.dst_pf_func = dst_pf_func; rc = npc_parse_rule(roc_npc, attr, pattern, actions, flow, &parse_state); @@ -1629,6 +1658,7 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, *errcode = rc; goto set_rss_failed; } + roc_npc->rep_npc = NULL; if (flow->has_age_action) npc_age_flow_list_entry_add(roc_npc, flow); @@ -1641,6 +1671,7 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, TAILQ_FOREACH(flow_iter, list, next) { if (flow_iter->mcam_id > flow->mcam_id) { TAILQ_INSERT_BEFORE(flow_iter, flow, next); + roc_npc->rep_npc = NULL; return flow; } } @@ -1649,6 +1680,7 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, return flow; set_rss_failed: + roc_npc->rep_npc = NULL; if (flow->use_pre_alloc == 0) { rc = roc_npc_mcam_free_entry(roc_npc, flow->mcam_id); if (rc != 0) { @@ -1660,6 +1692,7 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, npc_inline_dev_ipsec_action_free(npc, flow); } err_exit: + roc_npc->rep_npc = NULL; plt_free(flow); return NULL; } @@ -1667,15 +1700,19 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, int npc_rss_group_free(struct npc *npc, struct roc_npc_flow *flow) { + struct npc *lnpc = npc; uint32_t rss_grp; + if (flow->has_rep) + lnpc = flow->rep_npc; + if ((flow->npc_action & 0xF) == NIX_RX_ACTIONOP_RSS) { rss_grp = (flow->npc_action >> NPC_RSS_ACT_GRP_OFFSET) & NPC_RSS_ACT_GRP_MASK; if (rss_grp == 0 || rss_grp >= npc->rss_grps) return -EINVAL; - plt_bitmap_clear(npc->rss_grp_entries, rss_grp); + plt_bitmap_clear(lnpc->rss_grp_entries, rss_grp); } return 0; @@ -1770,7 +1807,7 @@ roc_npc_flow_destroy(struct roc_npc *roc_npc, struct roc_npc_flow *flow) } void -roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc) +roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc, int rep_port_id) { struct npc *npc = roc_npc_to_npc_priv(roc_npc); struct roc_npc_flow *flow_iter; @@ -1784,12 +1821,14 @@ roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc) /* List in ascending order of mcam entries */ TAILQ_FOREACH(flow_iter, list, next) { - roc_npc_flow_mcam_dump(file, roc_npc, flow_iter); + if (rep_port_id == -1 || rep_port_id == flow_iter->port_id) + roc_npc_flow_mcam_dump(file, roc_npc, flow_iter); } } TAILQ_FOREACH(flow_iter, &npc->ipsec_list, next) { - roc_npc_flow_mcam_dump(file, roc_npc, flow_iter); + if (rep_port_id == -1 || rep_port_id == 
flow_iter->port_id) + roc_npc_flow_mcam_dump(file, roc_npc, flow_iter); } } diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h index 349c7f9d22..03432909c7 100644 --- a/drivers/common/cnxk/roc_npc.h +++ b/drivers/common/cnxk/roc_npc.h @@ -42,6 +42,7 @@ enum roc_npc_item_type { ROC_NPC_ITEM_TYPE_MARK, ROC_NPC_ITEM_TYPE_TX_QUEUE, ROC_NPC_ITEM_TYPE_IPV6_ROUTING_EXT, + ROC_NPC_ITEM_TYPE_REPRESENTED_PORT, ROC_NPC_ITEM_TYPE_END, }; @@ -339,6 +340,13 @@ struct roc_npc_flow { #define ROC_NPC_MIRROR_LIST_SIZE 2 uint16_t mcast_pf_funcs[ROC_NPC_MIRROR_LIST_SIZE]; uint16_t mcast_channels[ROC_NPC_MIRROR_LIST_SIZE]; + uint16_t rep_pf_func; + uint16_t rep_channel; + struct mbox *rep_mbox; + bool has_rep; + bool is_rep_vf; + struct npc *rep_npc; + int port_id; TAILQ_ENTRY(roc_npc_flow) next; }; @@ -407,6 +415,9 @@ struct roc_npc { uint16_t sdp_channel; uint16_t sdp_channel_mask; struct roc_npc_flow_age flow_age; + struct roc_npc *rep_npc; + uint16_t rep_pf_func; + int rep_port_id; #define ROC_NPC_MEM_SZ (6 * 1024) uint8_t reserved[ROC_NPC_MEM_SZ]; @@ -448,7 +459,7 @@ int __roc_api roc_npc_get_free_mcam_entry(struct roc_npc *roc_npc, struct roc_np int __roc_api roc_npc_inl_mcam_read_counter(uint32_t ctr_id, uint64_t *count); int __roc_api roc_npc_inl_mcam_clear_counter(uint32_t ctr_id); int __roc_api roc_npc_mcam_free_all_resources(struct roc_npc *roc_npc); -void __roc_api roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc); +void __roc_api roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc, int rep_port_id); void __roc_api roc_npc_flow_mcam_dump(FILE *file, struct roc_npc *roc_npc, struct roc_npc_flow *mcam); int __roc_api roc_npc_mark_actions_get(struct roc_npc *roc_npc); diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c index 2de988a44b..f2d5004c78 100644 --- a/drivers/common/cnxk/roc_npc_mcam.c +++ b/drivers/common/cnxk/roc_npc_mcam.c @@ -143,8 +143,8 @@ npc_lid_lt_in_kex(struct npc *npc, uint8_t lid, uint8_t lt) } static void -npc_construct_ldata_mask(struct npc *npc, struct plt_bitmap *bmap, uint8_t lid, - uint8_t lt, uint8_t ld) +npc_construct_ldata_mask(struct npc *npc, struct plt_bitmap *bmap, uint8_t lid, uint8_t lt, + uint8_t ld) { struct npc_xtract_info *x_info, *infoflag; int hdr_off, keylen; @@ -197,8 +197,7 @@ npc_construct_ldata_mask(struct npc *npc, struct plt_bitmap *bmap, uint8_t lid, * @param len length of the match */ static bool -npc_is_kex_enabled(struct npc *npc, uint8_t lid, uint8_t lt, int offset, - int len) +npc_is_kex_enabled(struct npc *npc, uint8_t lid, uint8_t lt, int offset, int len) { struct plt_bitmap *bmap; uint32_t bmap_sz; @@ -349,8 +348,8 @@ npc_mcam_alloc_entries(struct mbox *mbox, int ref_mcam, int *alloc_entry, int re } int -npc_mcam_alloc_entry(struct npc *npc, struct roc_npc_flow *mcam, - struct roc_npc_flow *ref_mcam, int prio, int *resp_count) +npc_mcam_alloc_entry(struct npc *npc, struct roc_npc_flow *mcam, struct roc_npc_flow *ref_mcam, + int prio, int *resp_count) { struct npc_mcam_alloc_entry_req *req; struct npc_mcam_alloc_entry_rsp *rsp; @@ -450,22 +449,17 @@ npc_mcam_write_entry(struct mbox *mbox, struct roc_npc_flow *mcam) static void npc_mcam_process_mkex_cfg(struct npc *npc, struct npc_get_kex_cfg_rsp *kex_rsp) { - volatile uint64_t( - *q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]; + volatile uint64_t(*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]; struct npc_xtract_info *x_info = NULL; int lid, lt, ld, fl, ix; npc_dxcfg_t *p; uint64_t keyw; uint64_t val; - 
npc->keyx_supp_nmask[NPC_MCAM_RX] = - kex_rsp->rx_keyx_cfg & 0x7fffffffULL; - npc->keyx_supp_nmask[NPC_MCAM_TX] = - kex_rsp->tx_keyx_cfg & 0x7fffffffULL; - npc->keyx_len[NPC_MCAM_RX] = - npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]); - npc->keyx_len[NPC_MCAM_TX] = - npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]); + npc->keyx_supp_nmask[NPC_MCAM_RX] = kex_rsp->rx_keyx_cfg & 0x7fffffffULL; + npc->keyx_supp_nmask[NPC_MCAM_TX] = kex_rsp->tx_keyx_cfg & 0x7fffffffULL; + npc->keyx_len[NPC_MCAM_RX] = npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]); + npc->keyx_len[NPC_MCAM_TX] = npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]); keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL; npc->keyw[NPC_MCAM_RX] = keyw; @@ -485,8 +479,7 @@ npc_mcam_process_mkex_cfg(struct npc *npc, struct npc_get_kex_cfg_rsp *kex_rsp) /* Update LID, LT and LDATA cfg */ p = &npc->prx_dxcfg; - q = (volatile uint64_t(*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])( - &kex_rsp->intf_lid_lt_ld); + q = (volatile uint64_t(*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])(&kex_rsp->intf_lid_lt_ld); for (ix = 0; ix < NPC_MAX_INTF; ix++) { for (lid = 0; lid < NPC_MAX_LID; lid++) { for (lt = 0; lt < NPC_MAX_LT; lt++) { @@ -539,8 +532,7 @@ npc_mcam_fetch_kex_cfg(struct npc *npc) goto done; } - mbox_memcpy((char *)npc->profile_name, kex_rsp->mkex_pfl_name, - MKEX_NAME_LEN); + mbox_memcpy((char *)npc->profile_name, kex_rsp->mkex_pfl_name, MKEX_NAME_LEN); npc->exact_match_ena = (kex_rsp->rx_keyx_cfg >> 40) & 0xF; npc_mcam_process_mkex_cfg(npc, kex_rsp); @@ -551,9 +543,8 @@ npc_mcam_fetch_kex_cfg(struct npc *npc) } static void -npc_mcam_set_channel(struct roc_npc_flow *flow, - struct npc_mcam_write_entry_req *req, uint16_t channel, - uint16_t chan_mask, bool is_second_pass) +npc_mcam_set_channel(struct roc_npc_flow *flow, struct npc_mcam_write_entry_req *req, + uint16_t channel, uint16_t chan_mask, bool is_second_pass) { uint16_t chan = 0, mask = 0; @@ -683,6 +674,9 @@ npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow, struct npc_ if (flow->nix_intf == NIX_INTF_TX) { uint16_t pf_func = (flow->npc_action >> 4) & 0xffff; + if (flow->has_rep) + pf_func = flow->rep_pf_func; + pf_func = plt_cpu_to_be_16(pf_func); rc = npc_mcam_set_pf_func(npc, flow, pf_func); @@ -759,6 +753,14 @@ npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow, struct npc_ npc_mcam_set_channel(flow, req, inl_dev->channel, inl_dev->chan_mask, false); + } else if (flow->has_rep) { + pf_func = flow->rep_pf_func; + req->entry_data.action &= ~(GENMASK(19, 4)); + req->entry_data.action |= (uint64_t)pf_func << 4; + flow->npc_action &= ~(GENMASK(19, 4)); + flow->npc_action |= (uint64_t)pf_func << 4; + npc_mcam_set_channel(flow, req, flow->rep_channel, (BIT_ULL(12) - 1), + false); } else if (npc->is_sdp_link) { npc_mcam_set_channel(flow, req, npc->sdp_channel, npc->sdp_channel_mask, pst->is_second_pass_rule); @@ -932,13 +934,11 @@ npc_program_mcam(struct npc *npc, struct npc_parse_state *pst, bool mcam_alloc) data_off = 0; index++; } - key_data[index] |= - ((uint64_t)data << data_off); + key_data[index] |= ((uint64_t)data << data_off); if (lt == 0) mask = 0; - key_mask[index] |= - ((uint64_t)mask << data_off); + key_mask[index] |= ((uint64_t)mask << data_off); data_off += 4; } } @@ -963,8 +963,12 @@ npc_program_mcam(struct npc *npc, struct npc_parse_state *pst, bool mcam_alloc) (pst->flow->npc_action & NIX_RX_ACTIONOP_UCAST_IPSEC)) skip_base_rule = true; - if (pst->is_vf && pst->flow->nix_intf == NIX_INTF_RX && !skip_base_rule) { - mbox = 
mbox_get(npc->mbox); + if ((pst->is_vf || pst->flow->is_rep_vf) && pst->flow->nix_intf == NIX_INTF_RX && + !skip_base_rule) { + if (pst->flow->has_rep) + mbox = mbox_get(pst->flow->rep_mbox); + else + mbox = mbox_get(npc->mbox); (void)mbox_alloc_msg_npc_read_base_steer_rule(mbox); rc = mbox_process_msg(mbox, (void *)&base_rule_rsp); if (rc) { diff --git a/drivers/common/cnxk/roc_npc_parse.c b/drivers/common/cnxk/roc_npc_parse.c index 9ceb707ebb..af1b9f79dd 100644 --- a/drivers/common/cnxk/roc_npc_parse.c +++ b/drivers/common/cnxk/roc_npc_parse.c @@ -35,11 +35,35 @@ npc_parse_mark_item(struct npc_parse_state *pst) return 0; } +int +npc_parse_port_representor_id(struct npc_parse_state *pst) +{ + if (pst->pattern->type != ROC_NPC_ITEM_TYPE_REPRESENTED_PORT) + return 0; + + pst->pattern++; + + return 0; +} + +int +npc_parse_represented_port_id(struct npc_parse_state *pst) +{ + if (pst->pattern->type != ROC_NPC_ITEM_TYPE_REPRESENTED_PORT) + return 0; + + if (pst->flow->nix_intf != NIX_INTF_RX) + return -EINVAL; + + pst->pattern++; + + return 0; +} + static int npc_flow_raw_item_prepare(const struct roc_npc_flow_item_raw *raw_spec, const struct roc_npc_flow_item_raw *raw_mask, - struct npc_parse_item_info *info, uint8_t *spec_buf, - uint8_t *mask_buf) + struct npc_parse_item_info *info, uint8_t *spec_buf, uint8_t *mask_buf) { memset(spec_buf, 0, NPC_MAX_RAW_ITEM_LEN); diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h index 50b62b1244..069c625911 100644 --- a/drivers/common/cnxk/roc_npc_priv.h +++ b/drivers/common/cnxk/roc_npc_priv.h @@ -457,6 +457,8 @@ int npc_mask_is_supported(const char *mask, const char *hw_mask, int len); int npc_parse_item_basic(const struct roc_npc_item_info *item, struct npc_parse_item_info *info); int npc_parse_meta_items(struct npc_parse_state *pst); int npc_parse_mark_item(struct npc_parse_state *pst); +int npc_parse_port_representor_id(struct npc_parse_state *pst); +int npc_parse_represented_port_id(struct npc_parse_state *pst); int npc_parse_pre_l2(struct npc_parse_state *pst); int npc_parse_higig2_hdr(struct npc_parse_state *pst); int npc_parse_cpt_hdr(struct npc_parse_state *pst); diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c index a92b61c332..5f74c356b1 100644 --- a/drivers/net/cnxk/cnxk_flow.c +++ b/drivers/net/cnxk/cnxk_flow.c @@ -594,7 +594,7 @@ cnxk_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, return -EINVAL; } - roc_npc_flow_dump(file, npc); + roc_npc_flow_dump(file, npc, -1); return 0; } From patchwork Tue Dec 19 17:39:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135364 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D4E9A43747; Tue, 19 Dec 2023 18:43:04 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0144842E98; Tue, 19 Dec 2023 18:41:30 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 0918E42E87 for ; Tue, 19 Dec 2023 18:41:28 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 3BJ9d533021347 for ; Tue, 19 Dec 2023 09:41:28 -0800 DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-type; s=pfpt0220; bh=VwRp3CswtUpv5x/tjCcwV 55CvIrgf8rpBoZQd/kYtGY=; b=R46lU3jU1q8+UgSWcO59YmkrRlIeAsU2G/EXL lq2qliIncAZZcqMDHvH9ya8InmVUyI95ToYUFNSypHWCHGJW6Zo9HXjp2+ip1HE5 vMUSF78gX1AdlLxgL2BqTNWr0Kppa592IoBx8AzkZSgZLlVAWZ/8av7Eux4a8FTJ DI4rXECYmsPvST0/BzYIOZfhC0lZ2EZjtwPSWEbe7Zm4puexi2WCMr2dg+0X2Vxp /PehnnecfiHOPBoWbSFDe7EG4a640I5VC3x4YYY1xPDv09g+Kv/rzuY6VtFI/K2x W0bsSHezSr8NyjbC8TgsL4bMfekG2mgz4TL6C+Qc3nvwA1bng== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3v1c9kumh9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Tue, 19 Dec 2023 09:41:28 -0800 (PST) Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 19 Dec 2023 09:41:25 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Dec 2023 09:41:25 -0800 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id 8F95B3F7050; Tue, 19 Dec 2023 09:41:23 -0800 (PST) From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 19/24] net/cnxk: add represented port pattern and action Date: Tue, 19 Dec 2023 23:09:58 +0530 Message-ID: <20231219174003.72901-20-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: BT8m7t8SYVq_TIsjwr8S-xTKLX8tayLr X-Proofpoint-GUID: BT8m7t8SYVq_TIsjwr8S-xTKLX8tayLr X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Adding support for represented_port item matching and action. 
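[Editorial illustration, not part of the patch: a flow rule that matches traffic of a represented port, using the item this patch wires into the cnxk term[] table. All ids and the queue index are placeholders.]

#include <rte_flow.h>

/* Hedged sketch: steer packets of represented port 1 to queue 0 of the
 * port the rule is created on. Error handling trimmed for brevity. */
static struct rte_flow *
match_represented_port(uint16_t flow_port)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ethdev spec = { .port_id = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_create(flow_port, &attr, pattern, actions, &err);
}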
Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_flow.c | 107 +++++++++++++++++++---------------- 1 file changed, 57 insertions(+), 50 deletions(-) diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c index 5f74c356b1..a3b21f761f 100644 --- a/drivers/net/cnxk/cnxk_flow.c +++ b/drivers/net/cnxk/cnxk_flow.c @@ -4,67 +4,48 @@ #include const struct cnxk_rte_flow_term_info term[] = { - [RTE_FLOW_ITEM_TYPE_ETH] = {ROC_NPC_ITEM_TYPE_ETH, - sizeof(struct rte_flow_item_eth)}, - [RTE_FLOW_ITEM_TYPE_VLAN] = {ROC_NPC_ITEM_TYPE_VLAN, - sizeof(struct rte_flow_item_vlan)}, - [RTE_FLOW_ITEM_TYPE_E_TAG] = {ROC_NPC_ITEM_TYPE_E_TAG, - sizeof(struct rte_flow_item_e_tag)}, - [RTE_FLOW_ITEM_TYPE_IPV4] = {ROC_NPC_ITEM_TYPE_IPV4, - sizeof(struct rte_flow_item_ipv4)}, - [RTE_FLOW_ITEM_TYPE_IPV6] = {ROC_NPC_ITEM_TYPE_IPV6, - sizeof(struct rte_flow_item_ipv6)}, - [RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT] = { - ROC_NPC_ITEM_TYPE_IPV6_FRAG_EXT, - sizeof(struct rte_flow_item_ipv6_frag_ext)}, - [RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = { - ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4, - sizeof(struct rte_flow_item_arp_eth_ipv4)}, - [RTE_FLOW_ITEM_TYPE_MPLS] = {ROC_NPC_ITEM_TYPE_MPLS, - sizeof(struct rte_flow_item_mpls)}, - [RTE_FLOW_ITEM_TYPE_ICMP] = {ROC_NPC_ITEM_TYPE_ICMP, - sizeof(struct rte_flow_item_icmp)}, - [RTE_FLOW_ITEM_TYPE_UDP] = {ROC_NPC_ITEM_TYPE_UDP, - sizeof(struct rte_flow_item_udp)}, - [RTE_FLOW_ITEM_TYPE_TCP] = {ROC_NPC_ITEM_TYPE_TCP, - sizeof(struct rte_flow_item_tcp)}, - [RTE_FLOW_ITEM_TYPE_SCTP] = {ROC_NPC_ITEM_TYPE_SCTP, - sizeof(struct rte_flow_item_sctp)}, - [RTE_FLOW_ITEM_TYPE_ESP] = {ROC_NPC_ITEM_TYPE_ESP, - sizeof(struct rte_flow_item_esp)}, - [RTE_FLOW_ITEM_TYPE_GRE] = {ROC_NPC_ITEM_TYPE_GRE, - sizeof(struct rte_flow_item_gre)}, - [RTE_FLOW_ITEM_TYPE_NVGRE] = {ROC_NPC_ITEM_TYPE_NVGRE, - sizeof(struct rte_flow_item_nvgre)}, - [RTE_FLOW_ITEM_TYPE_VXLAN] = {ROC_NPC_ITEM_TYPE_VXLAN, - sizeof(struct rte_flow_item_vxlan)}, - [RTE_FLOW_ITEM_TYPE_GTPC] = {ROC_NPC_ITEM_TYPE_GTPC, - sizeof(struct rte_flow_item_gtp)}, - [RTE_FLOW_ITEM_TYPE_GTPU] = {ROC_NPC_ITEM_TYPE_GTPU, - sizeof(struct rte_flow_item_gtp)}, + [RTE_FLOW_ITEM_TYPE_ETH] = {ROC_NPC_ITEM_TYPE_ETH, sizeof(struct rte_flow_item_eth)}, + [RTE_FLOW_ITEM_TYPE_VLAN] = {ROC_NPC_ITEM_TYPE_VLAN, sizeof(struct rte_flow_item_vlan)}, + [RTE_FLOW_ITEM_TYPE_E_TAG] = {ROC_NPC_ITEM_TYPE_E_TAG, sizeof(struct rte_flow_item_e_tag)}, + [RTE_FLOW_ITEM_TYPE_IPV4] = {ROC_NPC_ITEM_TYPE_IPV4, sizeof(struct rte_flow_item_ipv4)}, + [RTE_FLOW_ITEM_TYPE_IPV6] = {ROC_NPC_ITEM_TYPE_IPV6, sizeof(struct rte_flow_item_ipv6)}, + [RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT] = {ROC_NPC_ITEM_TYPE_IPV6_FRAG_EXT, + sizeof(struct rte_flow_item_ipv6_frag_ext)}, + [RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = {ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4, + sizeof(struct rte_flow_item_arp_eth_ipv4)}, + [RTE_FLOW_ITEM_TYPE_MPLS] = {ROC_NPC_ITEM_TYPE_MPLS, sizeof(struct rte_flow_item_mpls)}, + [RTE_FLOW_ITEM_TYPE_ICMP] = {ROC_NPC_ITEM_TYPE_ICMP, sizeof(struct rte_flow_item_icmp)}, + [RTE_FLOW_ITEM_TYPE_UDP] = {ROC_NPC_ITEM_TYPE_UDP, sizeof(struct rte_flow_item_udp)}, + [RTE_FLOW_ITEM_TYPE_TCP] = {ROC_NPC_ITEM_TYPE_TCP, sizeof(struct rte_flow_item_tcp)}, + [RTE_FLOW_ITEM_TYPE_SCTP] = {ROC_NPC_ITEM_TYPE_SCTP, sizeof(struct rte_flow_item_sctp)}, + [RTE_FLOW_ITEM_TYPE_ESP] = {ROC_NPC_ITEM_TYPE_ESP, sizeof(struct rte_flow_item_esp)}, + [RTE_FLOW_ITEM_TYPE_GRE] = {ROC_NPC_ITEM_TYPE_GRE, sizeof(struct rte_flow_item_gre)}, + [RTE_FLOW_ITEM_TYPE_NVGRE] = {ROC_NPC_ITEM_TYPE_NVGRE, sizeof(struct rte_flow_item_nvgre)}, + 
[RTE_FLOW_ITEM_TYPE_VXLAN] = {ROC_NPC_ITEM_TYPE_VXLAN, sizeof(struct rte_flow_item_vxlan)}, + [RTE_FLOW_ITEM_TYPE_GTPC] = {ROC_NPC_ITEM_TYPE_GTPC, sizeof(struct rte_flow_item_gtp)}, + [RTE_FLOW_ITEM_TYPE_GTPU] = {ROC_NPC_ITEM_TYPE_GTPU, sizeof(struct rte_flow_item_gtp)}, [RTE_FLOW_ITEM_TYPE_GENEVE] = {ROC_NPC_ITEM_TYPE_GENEVE, sizeof(struct rte_flow_item_geneve)}, - [RTE_FLOW_ITEM_TYPE_VXLAN_GPE] = { - ROC_NPC_ITEM_TYPE_VXLAN_GPE, - sizeof(struct rte_flow_item_vxlan_gpe)}, + [RTE_FLOW_ITEM_TYPE_VXLAN_GPE] = {ROC_NPC_ITEM_TYPE_VXLAN_GPE, + sizeof(struct rte_flow_item_vxlan_gpe)}, [RTE_FLOW_ITEM_TYPE_IPV6_EXT] = {ROC_NPC_ITEM_TYPE_IPV6_EXT, sizeof(struct rte_flow_item_ipv6_ext)}, [RTE_FLOW_ITEM_TYPE_VOID] = {ROC_NPC_ITEM_TYPE_VOID, 0}, [RTE_FLOW_ITEM_TYPE_ANY] = {ROC_NPC_ITEM_TYPE_ANY, 0}, - [RTE_FLOW_ITEM_TYPE_GRE_KEY] = {ROC_NPC_ITEM_TYPE_GRE_KEY, - sizeof(uint32_t)}, + [RTE_FLOW_ITEM_TYPE_GRE_KEY] = {ROC_NPC_ITEM_TYPE_GRE_KEY, sizeof(uint32_t)}, [RTE_FLOW_ITEM_TYPE_HIGIG2] = {ROC_NPC_ITEM_TYPE_HIGIG2, sizeof(struct rte_flow_item_higig2_hdr)}, - [RTE_FLOW_ITEM_TYPE_RAW] = {ROC_NPC_ITEM_TYPE_RAW, - sizeof(struct rte_flow_item_raw)}, - [RTE_FLOW_ITEM_TYPE_MARK] = {ROC_NPC_ITEM_TYPE_MARK, - sizeof(struct rte_flow_item_mark)}, + [RTE_FLOW_ITEM_TYPE_RAW] = {ROC_NPC_ITEM_TYPE_RAW, sizeof(struct rte_flow_item_raw)}, + [RTE_FLOW_ITEM_TYPE_MARK] = {ROC_NPC_ITEM_TYPE_MARK, sizeof(struct rte_flow_item_mark)}, [RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT] = {ROC_NPC_ITEM_TYPE_IPV6_ROUTING_EXT, - sizeof(struct rte_flow_item_ipv6_routing_ext)}, + sizeof(struct rte_flow_item_ipv6_routing_ext)}, [RTE_FLOW_ITEM_TYPE_TX_QUEUE] = {ROC_NPC_ITEM_TYPE_TX_QUEUE, - sizeof(struct rte_flow_item_tx_queue)}, + sizeof(struct rte_flow_item_tx_queue)}, + [RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT] = {ROC_NPC_ITEM_TYPE_REPRESENTED_PORT, + sizeof(struct rte_flow_item_ethdev)}, [RTE_FLOW_ITEM_TYPE_PPPOES] = {ROC_NPC_ITEM_TYPE_PPPOES, - sizeof(struct rte_flow_item_pppoe)}}; + sizeof(struct rte_flow_item_pppoe)} +}; static int npc_rss_action_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, @@ -372,6 +353,11 @@ cnxk_map_flow_data(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr struct roc_npc_action_sample *in_sample_actions, uint32_t *flowkey_cfg, uint16_t *dst_pf_func) { + struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); + const struct rte_flow_item_ethdev *rep_eth_dev; + struct rte_eth_dev *portid_eth_dev; + char if_name[RTE_ETH_NAME_MAX_LEN]; + struct cnxk_eth_dev *hw_dst; int i = 0; in_attr->priority = attr->priority; @@ -384,6 +370,27 @@ cnxk_map_flow_data(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr in_pattern[i].mask = pattern->mask; in_pattern[i].type = term[pattern->type].item_type; in_pattern[i].size = term[pattern->type].item_size; + if (pattern->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) { + rep_eth_dev = (const struct rte_flow_item_ethdev *)pattern->spec; + if (rte_eth_dev_get_name_by_port(rep_eth_dev->port_id, if_name)) { + plt_err("Name not found for output port id"); + return -EINVAL; + } + portid_eth_dev = rte_eth_dev_allocated(if_name); + if (!portid_eth_dev) { + plt_err("eth_dev not found for output port id"); + return -EINVAL; + } + if (strcmp(portid_eth_dev->device->driver->name, + eth_dev->device->driver->name) != 0) { + plt_err("Output port not under same driver"); + return -EINVAL; + } + hw_dst = portid_eth_dev->data->dev_private; + dev->npc.rep_npc = &hw_dst->npc; + dev->npc.rep_port_id = rep_eth_dev->port_id; + dev->npc.rep_pf_func = 
hw_dst->npc.pf_func; + } pattern++; i++; } From patchwork Tue Dec 19 17:39:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135365 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E80BA43747; Tue, 19 Dec 2023 18:43:13 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8FCDA42E66; Tue, 19 Dec 2023 18:41:33 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id D2E6542EB5 for ; Tue, 19 Dec 2023 18:41:31 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 3BJ9nuii003244 for ; Tue, 19 Dec 2023 09:41:31 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-type; s=pfpt0220; bh=wNKb22Vf0SkGLcqs8iwlC xi1CnA+V5bs4EJ0q5u6RxI=; b=ZahJ8FBjqNTniKuh+vKQsuh/s1+kXUDVm3Eeb bgiO4nQzdOxL56dzSN+iS34uB2OlWcYOFT2LqpAdaGtRKxFx45g5tKWnmPBafTeE yzehPBVXMyzlX2z4CouP9K64qnhiUr0hao4XHRUn2fr27Q5LY9Jz506w2Sar2TmN 8K8guC4U8a6wPTwFSemEICXslOOAqOzKAUoEetE1Z1NfQkRUvAETKoCZJy+gBDQV +VZ1WaCRBteWS/BFLZNwdibHxWvejOdgD8KeGa/kSn3KFnNWZFTsgk2lpQYHyxAD bCIOsy/TuJzEHbwJc49Of5qn1nnj8qqoBvxeCubDtIOzVL/Ew== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3v1c9kumhj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Tue, 19 Dec 2023 09:41:31 -0800 (PST) Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 19 Dec 2023 09:41:28 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Dec 2023 09:41:28 -0800 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id 8DF1A3F708F; Tue, 19 Dec 2023 09:41:26 -0800 (PST) From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 20/24] net/cnxk: add port representor pattern and action Date: Tue, 19 Dec 2023 23:09:59 +0530 Message-ID: <20231219174003.72901-21-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: J8x7jQN3WOI6I47sllVYoSwkgn6zekwG X-Proofpoint-GUID: J8x7jQN3WOI6I47sllVYoSwkgn6zekwG X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Adding support for port_representor as item matching and action. 
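[Editorial illustration, not from the patch: redirecting matched traffic to a representor through the PORT_ID action, which the code below resolves into the representee's pf_func plus an internal MARK action. Port ids are hypothetical.]

#include <rte_flow.h>

/* Hedged sketch: on port 0, send all Ethernet traffic to representor
 * port 2. Error handling trimmed for brevity. */
static struct rte_flow *
redirect_to_representor(void)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_port_id dst = { .id = 2 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &dst },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_create(0, &attr, pattern, actions, &err);
}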
Signed-off-by: Harman Kalra
---
 drivers/net/cnxk/cnxk_flow.c | 224 +++++++++++++++++++++++++++++++----
 drivers/net/cnxk/cnxk_rep.h  |  14 +++
 2 files changed, 212 insertions(+), 26 deletions(-)
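[Editorial note, not part of the patch: the MARK action injected by representor_portid_action() in the diff below encodes the tunnel item type in the upper bits and the operation in the low bits. A hedged decode sketch; the struct and helper names are ours.]

#include <stdint.h>

/* Non-tunnel flows use mark id 1; tunnel flows use
 * (tunnel item type << 6) | 5, per the encoding in the diff below. */
struct rep_mark {
	uint8_t tunnel_type; /* rte_flow item type of the tunnel, or 0 */
	uint8_t op;          /* 5 = tunnel decap, 1 = plain redirect */
};

static struct rep_mark
rep_mark_decode(uint32_t id)
{
	struct rep_mark m = { .tunnel_type = (uint8_t)(id >> 6),
			      .op = (uint8_t)(id & 0x3f) };
	return m;
}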
diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c
index a3b21f761f..959d773513 100644
--- a/drivers/net/cnxk/cnxk_flow.c
+++ b/drivers/net/cnxk/cnxk_flow.c
@@ -2,6 +2,7 @@
  * Copyright(C) 2021 Marvell.
  */
 #include <cnxk_flow.h>
+#include <cnxk_rep.h>

 const struct cnxk_rte_flow_term_info term[] = {
 	[RTE_FLOW_ITEM_TYPE_ETH] = {ROC_NPC_ITEM_TYPE_ETH, sizeof(struct rte_flow_item_eth)},
@@ -185,11 +186,44 @@ roc_npc_parse_sample_subaction(struct rte_eth_dev *eth_dev, const struct rte_flo
 	return 0;
 }

+static int
+representor_portid_action(struct roc_npc_action *in_actions, struct rte_eth_dev *portid_eth_dev,
+			  uint16_t *dst_pf_func, uint8_t has_tunnel_pattern, int *act_cnt)
+{
+	struct rte_eth_dev *rep_eth_dev = portid_eth_dev;
+	struct rte_flow_action_mark *act_mark;
+	struct cnxk_rep_dev *rep_dev;
+	/* For inserting an action in the list */
+	int i = *act_cnt;
+
+	rep_dev = cnxk_rep_pmd_priv(rep_eth_dev);
+	*dst_pf_func = rep_dev->hw_func;
+
+	/* Add Mark action */
+	i++;
+	act_mark = plt_zmalloc(sizeof(struct rte_flow_action_mark), 0);
+	if (!act_mark) {
+		plt_err("Failed to allocate memory");
+		return -ENOMEM;
+	}
+
+	/* Mark ID format: (tunnel type - VxLAN, Geneve << 6) | Tunnel decap */
+	act_mark->id = has_tunnel_pattern ? ((has_tunnel_pattern << 6) | 5) : 1;
+	in_actions[i].type = ROC_NPC_ACTION_TYPE_MARK;
+	in_actions[i].conf = (struct rte_flow_action_mark *)act_mark;
+
+	*act_cnt = i;
+	plt_rep_dbg("Rep port %d ID %d mark ID is %d rep_dev->hw_func 0x%x", rep_dev->port_id,
+		    rep_dev->rep_id, act_mark->id, rep_dev->hw_func);
+
+	return 0;
+}
+
 static int
 cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
 		 const struct rte_flow_action actions[], struct roc_npc_action in_actions[],
 		 struct roc_npc_action_sample *in_sample_actions, uint32_t *flowkey_cfg,
-		 uint16_t *dst_pf_func)
+		 uint16_t *dst_pf_func, uint8_t has_tunnel_pattern)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	const struct rte_flow_action_queue *act_q = NULL;
@@ -256,14 +290,27 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
 				plt_err("eth_dev not found for output port id");
 				goto err_exit;
 			}
-			if (strcmp(portid_eth_dev->device->driver->name,
-				   eth_dev->device->driver->name) != 0) {
-				plt_err("Output port not under same driver");
-				goto err_exit;
+
+			if (cnxk_ethdev_is_representor(if_name)) {
+				plt_rep_dbg("Representor port %d act port %d", port_act->id,
+					    act_ethdev->port_id);
+				if (representor_portid_action(in_actions, portid_eth_dev,
+							      dst_pf_func, has_tunnel_pattern,
+							      &i)) {
+					plt_err("Representor port action set failed");
+					goto err_exit;
+				}
+			} else {
+				if (strcmp(portid_eth_dev->device->driver->name,
+					   eth_dev->device->driver->name) != 0) {
+					plt_err("Output port not under same driver");
+					goto err_exit;
+				}
+
+				hw_dst = portid_eth_dev->data->dev_private;
+				roc_npc_dst = &hw_dst->npc;
+				*dst_pf_func = roc_npc_dst->pf_func;
 			}
-			hw_dst = portid_eth_dev->data->dev_private;
-			roc_npc_dst = &hw_dst->npc;
-			*dst_pf_func = roc_npc_dst->pf_func;
 			break;

 		case RTE_FLOW_ACTION_TYPE_QUEUE:
@@ -324,6 +371,8 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
 			in_actions[i].type = ROC_NPC_ACTION_TYPE_SAMPLE;
 			in_actions[i].conf = in_sample_actions;
 			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+			continue;
 		default:
 			plt_npc_dbg("Action is not supported = %d", actions->type);
 			goto err_exit;
@@ -346,12 +395,8 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
 }

 static int
-cnxk_map_flow_data(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
-		   const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
-		   struct roc_npc_attr *in_attr, struct roc_npc_item_info in_pattern[],
-		   struct roc_npc_action in_actions[],
-		   struct roc_npc_action_sample *in_sample_actions, uint32_t *flowkey_cfg,
-		   uint16_t *dst_pf_func)
+cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern[],
+		 struct roc_npc_item_info in_pattern[], uint8_t *has_tunnel_pattern)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	const struct rte_flow_item_ethdev *rep_eth_dev;
 	struct rte_eth_dev *portid_eth_dev;
 	char if_name[RTE_ETH_NAME_MAX_LEN];
 	struct cnxk_eth_dev *hw_dst;
 	int i = 0;

-	in_attr->priority = attr->priority;
-	in_attr->ingress = attr->ingress;
-	in_attr->egress = attr->egress;
-
 	while (pattern->type != RTE_FLOW_ITEM_TYPE_END) {
 		in_pattern[i].spec = pattern->spec;
 		in_pattern[i].last = pattern->last;
 		in_pattern[i].mask = pattern->mask;
 		in_pattern[i].type = term[pattern->type].item_type;
 		in_pattern[i].size = term[pattern->type].item_size;
 		if (pattern->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) {
 			rep_eth_dev = (const struct rte_flow_item_ethdev *)pattern->spec;
 			if (rte_eth_dev_get_name_by_port(rep_eth_dev->port_id, if_name)) {
 				plt_err("Name not found for output port id");
-				return -EINVAL;
+				goto fail;
 			}
 			portid_eth_dev = rte_eth_dev_allocated(if_name);
 			if (!portid_eth_dev) {
 				plt_err("eth_dev not found for output port id");
-				return -EINVAL;
+				goto fail;
 			}
 			if (strcmp(portid_eth_dev->device->driver->name,
 				   eth_dev->device->driver->name) != 0) {
 				plt_err("Output port not under same driver");
-				return -EINVAL;
+				goto fail;
+			}
+			if (cnxk_ethdev_is_representor(if_name)) {
+				/* Case where the represented port is not part of the
+				 * same app and is represented by a representor port.
+				 */
+				struct cnxk_rep_dev *rep_dev;
+				struct cnxk_eswitch_dev *eswitch_dev;
+
+				rep_dev = cnxk_rep_pmd_priv(portid_eth_dev);
+				eswitch_dev = rep_dev->parent_dev;
+				dev->npc.rep_npc = &eswitch_dev->npc;
+				dev->npc.rep_port_id = rep_eth_dev->port_id;
+				dev->npc.rep_pf_func = rep_dev->hw_func;
+				plt_rep_dbg("Represented port %d act port %d rep_dev->hw_func 0x%x",
+					    rep_eth_dev->port_id, eth_dev->data->port_id,
+					    rep_dev->hw_func);
+			} else {
+				/* Case where the represented port is part of the
+				 * same app as the PF.
+				 */
+				hw_dst = portid_eth_dev->data->dev_private;
+				dev->npc.rep_npc = &hw_dst->npc;
+				dev->npc.rep_port_id = rep_eth_dev->port_id;
+				dev->npc.rep_pf_func = hw_dst->npc.pf_func;
 			}
-			hw_dst = portid_eth_dev->data->dev_private;
-			dev->npc.rep_npc = &hw_dst->npc;
-			dev->npc.rep_port_id = rep_eth_dev->port_id;
-			dev->npc.rep_pf_func = hw_dst->npc.pf_func;
 		}
+
+		if (pattern->type == RTE_FLOW_ITEM_TYPE_VXLAN ||
+		    pattern->type == RTE_FLOW_ITEM_TYPE_VXLAN_GPE ||
+		    pattern->type == RTE_FLOW_ITEM_TYPE_GRE)
+			*has_tunnel_pattern = pattern->type;
+
 		pattern++;
 		i++;
 	}
 	in_pattern[i].type = ROC_NPC_ITEM_TYPE_END;
+	return 0;
+fail:
+	return -EINVAL;
+}
+
+static int
+cnxk_map_flow_data(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
+		   const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+		   struct roc_npc_attr *in_attr, struct roc_npc_item_info in_pattern[],
+		   struct roc_npc_action in_actions[],
+		   struct roc_npc_action_sample *in_sample_actions, uint32_t *flowkey_cfg,
+		   uint16_t *dst_pf_func)
+{
+	uint8_t has_tunnel_pattern = 0;
+	int rc;
+
+	in_attr->priority = attr->priority;
+	in_attr->ingress = attr->ingress;
+	in_attr->egress = attr->egress;
+
+	rc = cnxk_map_pattern(eth_dev, pattern, in_pattern, &has_tunnel_pattern);
+	if (rc) {
+		plt_err("Failed to map pattern list");
+		return rc;
+	}

 	return cnxk_map_actions(eth_dev, attr, actions, in_actions, in_sample_actions, flowkey_cfg,
-				dst_pf_func);
+				dst_pf_func, has_tunnel_pattern);
 }

 static int
@@ -461,6 +553,7 @@ cnxk_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
 	int rc;

 	memset(&in_sample_action, 0, sizeof(in_sample_action));
+	memset(&in_attr, 0, sizeof(struct roc_npc_attr));
 	rc = cnxk_map_flow_data(eth_dev, attr, pattern, actions, &in_attr, in_pattern, in_actions,
 				&in_sample_action, &npc->flowkey_cfg_state, &dst_pf_func);
 	if (rc) {
@@ -646,6 +739,81 @@ cnxk_flow_get_aged_flows(struct rte_eth_dev *eth_dev, void **context,
 	return cnt;
 }

+static int
+cnxk_flow_tunnel_decap_set(__rte_unused struct rte_eth_dev *dev, struct rte_flow_tunnel *tunnel,
+			   struct rte_flow_action **pmd_actions, uint32_t *num_of_actions,
+			   __rte_unused struct rte_flow_error *err)
+{
+	struct rte_flow_action *tun_action;
+
+	tun_action = rte_zmalloc("cnxk_tun_action", sizeof(struct rte_flow_action), 0);
+	if (tun_action == NULL) {
+		plt_err("Failed to allocate memory for tunnel action");
+		return -ENOMEM;
+	}
+
+	if (tunnel->is_ipv6)
+		tun_action->conf = (void *)~0;
+
+	switch (tunnel->type) {
+	case RTE_FLOW_ITEM_TYPE_VXLAN:
+		tun_action->type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP;
+		*pmd_actions = tun_action;
+		*num_of_actions = 1;
+		break;
+	case RTE_FLOW_ITEM_TYPE_GENEVE:
+	case RTE_FLOW_ITEM_TYPE_GRE:
+		tun_action->type = RTE_FLOW_ACTION_TYPE_RAW_DECAP;
+		*pmd_actions = tun_action;
+		*num_of_actions = 1;
+		break;
+	default:
+		*pmd_actions = NULL;
+		*num_of_actions = 0;
+		rte_free(tun_action);
+		break;
+	}
+
+	return 0;
+}
+
+static int
+cnxk_flow_tunnel_action_decap_release(__rte_unused struct rte_eth_dev *dev,
+				      struct rte_flow_action *pmd_actions, uint32_t num_of_actions,
+				      __rte_unused struct rte_flow_error *err)
+{
+	uint32_t i;
+	struct rte_flow_action *tun_action;
+
+	for (i = 0; i < num_of_actions; i++) {
+		tun_action = &pmd_actions[i];
+		tun_action->conf = NULL;
+		rte_free(tun_action);
+	}
+
+	return 0;
+}
+
+static int
+cnxk_flow_tunnel_match(__rte_unused struct rte_eth_dev *dev,
+		       __rte_unused struct rte_flow_tunnel *tunnel,
+		       __rte_unused struct rte_flow_item **pmd_items, uint32_t *num_of_items,
__rte_unused struct rte_flow_error *err) +{ + *num_of_items = 0; + + return 0; +} + +static int +cnxk_flow_tunnel_item_release(__rte_unused struct rte_eth_dev *dev, + __rte_unused struct rte_flow_item *pmd_items, + __rte_unused uint32_t num_of_items, + __rte_unused struct rte_flow_error *err) +{ + return 0; +} + struct rte_flow_ops cnxk_flow_ops = { .validate = cnxk_flow_validate, .flush = cnxk_flow_flush, @@ -653,4 +821,8 @@ struct rte_flow_ops cnxk_flow_ops = { .isolate = cnxk_flow_isolate, .dev_dump = cnxk_flow_dev_dump, .get_aged_flows = cnxk_flow_get_aged_flows, + .tunnel_match = cnxk_flow_tunnel_match, + .tunnel_item_release = cnxk_flow_tunnel_item_release, + .tunnel_decap_set = cnxk_flow_tunnel_decap_set, + .tunnel_action_decap_release = cnxk_flow_tunnel_action_decap_release, }; diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h index 266dd4a688..9ac675426e 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -1,6 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(C) 2023 Marvell. */ + +#include <regex.h> + #include #include @@ -90,6 +93,17 @@ cnxk_rep_pool_buffer_stats(struct rte_mempool *pool) pool->size, rte_mempool_in_use_count(pool), rte_mempool_avail_count(pool)); } +static inline int +cnxk_ethdev_is_representor(const char *if_name) +{ + regex_t regex; + int val; + + if (regcomp(&regex, "net_.*_representor_.*", 0) != 0) + return 0; + val = regexec(&regex, if_name, 0, NULL, 0); + regfree(&regex); + return (val == 0); +} + /* Prototypes */ int cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev); int cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev); From patchwork Tue Dec 19 17:40:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135366 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6D59643747; Tue, 19 Dec 2023 18:43:20 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C6CFE42E6A; Tue, 19 Dec 2023 18:41:36 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 0668142E6A for ; Tue, 19 Dec 2023 18:41:34 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 3BJ9Guf9029878 for ; Tue, 19 Dec 2023 09:41:34 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-type; s=pfpt0220; bh=Iq8eMCUL/PyH7M070qdvd KNlKSGCr386J/QViMvY4C0=; b=T/bS+ZhihpoC/aBWPiXlQtlmsFQOVnkduKHTu VGI/wPdJNWChxt7rxGjOkNQCDuk/qCeA7q3oR4yzgx53XcZVV9f+vNh7shvR24KX Jn6mZdAajx5liGa2egbsAHUjjbHBlfh5q06kVpC8t7v7idpUDlM29SUe4KsF5q0L H+MUUcyct2SphZ4nbX6AJJh4Z3J/aJ/IOCb9dSPF+i+qU8/dA/S8IstCuSLvVa9g xmcTeCgBG8k+CfRru6loftSTUctsRMdXo2I8V/rtmY7CuTHXL7Xdv7xmv5ZCtZET CS1v3iKHFioRKN/lZrodh9LYwDcp/cqkbAq2NGFAV3lCYt/yw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3v1c9kumhu-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Tue, 19 Dec 2023 09:41:34 -0800 (PST) Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server
(TLS) id 15.0.1497.48; Tue, 19 Dec 2023 09:41:31 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Dec 2023 09:41:31 -0800 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id 8C2063F7050; Tue, 19 Dec 2023 09:41:29 -0800 (PST) From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 21/24] net/cnxk: generalize flow operation APIs Date: Tue, 19 Dec 2023 23:10:00 +0530 Message-ID: <20231219174003.72901-22-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: 95cGv1fG91vdxngoxSzWxmSlSaCaofDR X-Proofpoint-GUID: 95cGv1fG91vdxngoxSzWxmSlSaCaofDR X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Flow operations can be performed on cnxk ports as well as representor ports. Since representor ports are not cnxk ports but have an eswitch device as the base device underneath, special handling is required to align with the base infra. Introducing a flag in the generic flow APIs to indicate whether the operation is requested on a normal port or a representor port. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_flow.c | 240 +++++++++++++++++++++++++++-------- drivers/net/cnxk/cnxk_flow.h | 19 +++ 2 files changed, 205 insertions(+), 54 deletions(-) diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c index 959d773513..7959f2ed6b 100644 --- a/drivers/net/cnxk/cnxk_flow.c +++ b/drivers/net/cnxk/cnxk_flow.c @@ -223,7 +223,7 @@ static int cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, const struct rte_flow_action actions[], struct roc_npc_action in_actions[], struct roc_npc_action_sample *in_sample_actions, uint32_t *flowkey_cfg, - uint16_t *dst_pf_func, uint8_t has_tunnel_pattern) + uint16_t *dst_pf_func, uint8_t has_tunnel_pattern, bool is_rep) { struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); const struct rte_flow_action_queue *act_q = NULL; @@ -273,6 +273,9 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: case RTE_FLOW_ACTION_TYPE_PORT_ID: + /* No port ID action on representor ethdevs */ + if (is_rep) + continue; in_actions[i].type = ROC_NPC_ACTION_TYPE_PORT_ID; in_actions[i].conf = actions->conf; act_ethdev = (const struct rte_flow_action_ethdev *) @@ -320,6 +323,9 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, break; case RTE_FLOW_ACTION_TYPE_RSS: + /* No RSS action on representor ethdevs */ + if (is_rep) + continue; rc = npc_rss_action_validate(eth_dev, attr, actions); if (rc) goto err_exit; @@ -396,22 +402,37 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, static int cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern[], - struct roc_npc_item_info in_pattern[],
uint8_t *has_tunnel_pattern, bool is_rep) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); const struct rte_flow_item_ethdev *rep_eth_dev; struct rte_eth_dev *portid_eth_dev; char if_name[RTE_ETH_NAME_MAX_LEN]; struct cnxk_eth_dev *hw_dst; + struct cnxk_rep_dev *rdev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; int i = 0; + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rdev = cnxk_rep_pmd_priv(eth_dev); + npc = &rdev->parent_dev->npc; + + npc->rep_npc = npc; + npc->rep_port_id = rdev->port_id; + npc->rep_pf_func = rdev->hw_func; + } + while (pattern->type != RTE_FLOW_ITEM_TYPE_END) { in_pattern[i].spec = pattern->spec; in_pattern[i].last = pattern->last; in_pattern[i].mask = pattern->mask; in_pattern[i].type = term[pattern->type].item_type; in_pattern[i].size = term[pattern->type].item_size; - if (pattern->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) { + if (pattern->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT || + pattern->type == RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR) { rep_eth_dev = (const struct rte_flow_item_ethdev *)pattern->spec; if (rte_eth_dev_get_name_by_port(rep_eth_dev->port_id, if_name)) { plt_err("Name not found for output port id"); @@ -422,11 +443,6 @@ cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern plt_err("eth_dev not found for output port id"); goto fail; } - if (strcmp(portid_eth_dev->device->driver->name, - eth_dev->device->driver->name) != 0) { - plt_err("Output port not under same driver"); - goto fail; - } if (cnxk_ethdev_is_representor(if_name)) { /* Case where represented port not part of same * app and represented by a representor port. @@ -436,20 +452,25 @@ cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern rep_dev = cnxk_rep_pmd_priv(portid_eth_dev); eswitch_dev = rep_dev->parent_dev; - dev->npc.rep_npc = &eswitch_dev->npc; - dev->npc.rep_port_id = rep_eth_dev->port_id; - dev->npc.rep_pf_func = rep_dev->hw_func; + npc->rep_npc = &eswitch_dev->npc; + npc->rep_port_id = rep_eth_dev->port_id; + npc->rep_pf_func = rep_dev->hw_func; plt_rep_dbg("Represented port %d act port %d rep_dev->hw_func 0x%x", rep_eth_dev->port_id, eth_dev->data->port_id, rep_dev->hw_func); } else { + if (strcmp(portid_eth_dev->device->driver->name, + eth_dev->device->driver->name) != 0) { + plt_err("Output port not under same driver"); + goto fail; + } /* Case where represented port part of same app * as PF. 
*/ hw_dst = portid_eth_dev->data->dev_private; - dev->npc.rep_npc = &hw_dst->npc; - dev->npc.rep_port_id = rep_eth_dev->port_id; - dev->npc.rep_pf_func = hw_dst->npc.pf_func; + npc->rep_npc = &hw_dst->npc; + npc->rep_port_id = rep_eth_dev->port_id; + npc->rep_pf_func = hw_dst->npc.pf_func; } } @@ -473,7 +494,7 @@ cnxk_map_flow_data(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr struct roc_npc_attr *in_attr, struct roc_npc_item_info in_pattern[], struct roc_npc_action in_actions[], struct roc_npc_action_sample *in_sample_actions, uint32_t *flowkey_cfg, - uint16_t *dst_pf_func) + uint16_t *dst_pf_func, bool is_rep) { uint8_t has_tunnel_pattern = 0; int rc; @@ -481,44 +502,61 @@ cnxk_map_flow_data(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr in_attr->priority = attr->priority; in_attr->ingress = attr->ingress; in_attr->egress = attr->egress; + if (attr->transfer) { + /* For representor ethdevs transfer attribute corresponds to egress rule */ + if (is_rep) + in_attr->egress = attr->transfer; + else + in_attr->ingress = attr->transfer; + } - rc = cnxk_map_pattern(eth_dev, pattern, in_pattern, &has_tunnel_pattern); + rc = cnxk_map_pattern(eth_dev, pattern, in_pattern, &has_tunnel_pattern, is_rep); if (rc) { plt_err("Failed to map pattern list"); return rc; } return cnxk_map_actions(eth_dev, attr, actions, in_actions, in_sample_actions, flowkey_cfg, - dst_pf_func, has_tunnel_pattern); + dst_pf_func, has_tunnel_pattern, is_rep); } -static int -cnxk_flow_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - struct rte_flow_error *error) +int +cnxk_flow_validate_internal(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], struct rte_flow_error *error, + bool is_rep) { struct roc_npc_item_info in_pattern[ROC_NPC_ITEM_TYPE_END + 1]; struct roc_npc_action in_actions[ROC_NPC_MAX_ACTION_COUNT]; - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); struct roc_npc_action_sample in_sample_action; - struct roc_npc *npc = &dev->npc; + struct cnxk_rep_dev *rep_dev; struct roc_npc_attr in_attr; + struct cnxk_eth_dev *dev; struct roc_npc_flow flow; uint32_t flowkey_cfg = 0; uint16_t dst_pf_func = 0; + struct roc_npc *npc; int rc; - /* Skip flow validation for MACsec. */ - if (actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY && - cnxk_eth_macsec_sess_get_by_sess(dev, actions[0].conf) != NULL) - return 0; + /* is_rep set for operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + /* Skip flow validation for MACsec. 
*/ + if (actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY && + cnxk_eth_macsec_sess_get_by_sess(dev, actions[0].conf) != NULL) + return 0; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } memset(&flow, 0, sizeof(flow)); memset(&in_sample_action, 0, sizeof(in_sample_action)); flow.is_validate = true; rc = cnxk_map_flow_data(eth_dev, attr, pattern, actions, &in_attr, in_pattern, in_actions, - &in_sample_action, &flowkey_cfg, &dst_pf_func); + &in_sample_action, &flowkey_cfg, &dst_pf_func, is_rep); if (rc) { rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, "Failed to map flow data"); @@ -535,27 +573,45 @@ cnxk_flow_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr return 0; } +static int +cnxk_flow_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + return cnxk_flow_validate_internal(eth_dev, attr, pattern, actions, error, false); +} + struct roc_npc_flow * -cnxk_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) +cnxk_flow_create_internal(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], struct rte_flow_error *error, + bool is_rep) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); struct roc_npc_item_info in_pattern[ROC_NPC_ITEM_TYPE_END + 1]; struct roc_npc_action in_actions[ROC_NPC_MAX_ACTION_COUNT]; struct roc_npc_action_sample in_sample_action; - struct roc_npc *npc = &dev->npc; + struct cnxk_rep_dev *rep_dev = NULL; + struct cnxk_eth_dev *dev = NULL; struct roc_npc_attr in_attr; struct roc_npc_flow *flow; uint16_t dst_pf_func = 0; + struct roc_npc *npc; int errcode = 0; int rc; + /* is_rep set for operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } + memset(&in_sample_action, 0, sizeof(in_sample_action)); memset(&in_attr, 0, sizeof(struct roc_npc_attr)); rc = cnxk_map_flow_data(eth_dev, attr, pattern, actions, &in_attr, in_pattern, in_actions, - &in_sample_action, &npc->flowkey_cfg_state, &dst_pf_func); + &in_sample_action, &npc->flowkey_cfg_state, &dst_pf_func, is_rep); if (rc) { rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, "Failed to map flow data"); @@ -571,14 +627,32 @@ cnxk_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, return flow; } +struct roc_npc_flow * +cnxk_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + return cnxk_flow_create_internal(eth_dev, attr, pattern, actions, error, false); +} + int -cnxk_flow_destroy(struct rte_eth_dev *eth_dev, struct roc_npc_flow *flow, - struct rte_flow_error *error) +cnxk_flow_destroy_internal(struct rte_eth_dev *eth_dev, struct roc_npc_flow *flow, + struct rte_flow_error *error, bool is_rep) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); - struct roc_npc *npc = &dev->npc; + struct cnxk_rep_dev *rep_dev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; int rc; + /* is_rep set for operation performed via representor ports */ + if (!is_rep) 
{ + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } + rc = roc_npc_flow_destroy(npc, flow); if (rc) rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -586,13 +660,30 @@ cnxk_flow_destroy(struct rte_eth_dev *eth_dev, struct roc_npc_flow *flow, return rc; } -static int -cnxk_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error) +int +cnxk_flow_destroy(struct rte_eth_dev *eth_dev, struct roc_npc_flow *flow, + struct rte_flow_error *error) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); - struct roc_npc *npc = &dev->npc; + return cnxk_flow_destroy_internal(eth_dev, flow, error, false); +} + +int +cnxk_flow_flush_internal(struct rte_eth_dev *eth_dev, struct rte_flow_error *error, bool is_rep) +{ + struct cnxk_rep_dev *rep_dev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; int rc; + /* is_rep set for operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } + rc = roc_npc_mcam_free_all_resources(npc); if (rc) { rte_flow_error_set(error, EIO, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -604,14 +695,21 @@ cnxk_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error) } static int -cnxk_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow, - const struct rte_flow_action *action, void *data, - struct rte_flow_error *error) +cnxk_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error) +{ + return cnxk_flow_flush_internal(eth_dev, error, false); +} + +int +cnxk_flow_query_internal(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + const struct rte_flow_action *action, void *data, + struct rte_flow_error *error, bool is_rep) { struct roc_npc_flow *in_flow = (struct roc_npc_flow *)flow; - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); - struct roc_npc *npc = &dev->npc; struct rte_flow_query_count *query = data; + struct cnxk_rep_dev *rep_dev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; const char *errmsg = NULL; int errcode = ENOTSUP; int rc; @@ -626,6 +724,15 @@ cnxk_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow, goto err_exit; } + /* is_rep set for operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } + if (in_flow->use_pre_alloc) rc = roc_npc_inl_mcam_read_counter(in_flow->ctr_id, &query->hits); else @@ -658,6 +765,14 @@ cnxk_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow, return -rte_errno; } +static int +cnxk_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + const struct rte_flow_action *action, void *data, + struct rte_flow_error *error) +{ + return cnxk_flow_query_internal(eth_dev, flow, action, data, error, false); +} + static int cnxk_flow_isolate(struct rte_eth_dev *eth_dev __rte_unused, int enable __rte_unused, struct rte_flow_error *error) @@ -672,12 +787,22 @@ cnxk_flow_isolate(struct rte_eth_dev *eth_dev __rte_unused, return -rte_errno; } -static int -cnxk_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, - FILE *file, struct rte_flow_error *error) +int +cnxk_flow_dev_dump_internal(struct rte_eth_dev *eth_dev, struct rte_flow *flow, FILE *file, + struct rte_flow_error *error, bool is_rep) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); - 
struct roc_npc *npc = &dev->npc; + struct cnxk_rep_dev *rep_dev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; + + /* is_rep set for operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } if (file == NULL) { rte_flow_error_set(error, EINVAL, @@ -699,6 +824,13 @@ cnxk_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, return 0; } +static int +cnxk_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + FILE *file, struct rte_flow_error *error) +{ + return cnxk_flow_dev_dump_internal(eth_dev, flow, file, error, false); +} + static int cnxk_flow_get_aged_flows(struct rte_eth_dev *eth_dev, void **context, uint32_t nb_contexts, struct rte_flow_error *err) diff --git a/drivers/net/cnxk/cnxk_flow.h b/drivers/net/cnxk/cnxk_flow.h index bb23629819..84333e7f9d 100644 --- a/drivers/net/cnxk/cnxk_flow.h +++ b/drivers/net/cnxk/cnxk_flow.h @@ -24,4 +24,23 @@ struct roc_npc_flow *cnxk_flow_create(struct rte_eth_dev *dev, int cnxk_flow_destroy(struct rte_eth_dev *dev, struct roc_npc_flow *flow, struct rte_flow_error *error); +struct roc_npc_flow *cnxk_flow_create_internal(struct rte_eth_dev *eth_dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error, bool is_rep); +int cnxk_flow_validate_internal(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], struct rte_flow_error *error, + bool is_rep); +int cnxk_flow_destroy_internal(struct rte_eth_dev *eth_dev, struct roc_npc_flow *flow, + struct rte_flow_error *error, bool is_rep); +int cnxk_flow_flush_internal(struct rte_eth_dev *eth_dev, struct rte_flow_error *error, + bool is_rep); +int cnxk_flow_query_internal(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + const struct rte_flow_action *action, void *data, + struct rte_flow_error *error, bool is_rep); +int cnxk_flow_dev_dump_internal(struct rte_eth_dev *eth_dev, struct rte_flow *flow, FILE *file, + struct rte_flow_error *error, bool is_rep); + #endif /* __CNXK_RTE_FLOW_H__ */ From patchwork Tue Dec 19 17:40:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135367 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A423A43747; Tue, 19 Dec 2023 18:43:30 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7ED6C42EA8; Tue, 19 Dec 2023 18:41:43 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 047BA42EAF for ; Tue, 19 Dec 2023 18:41:37 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 3BJACDTv006452 for ; Tue, 19 Dec 2023 09:41:37 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-type; s=pfpt0220; bh=dUOIx+x7mgxii/d2Z5d81 4V0iP9QuHseirWEP1B/2pk=; b=iNOalcG1iIR2/c0eHsUf6pP1MnVYpLzPff7Yt 
2BauWCQ4AAi8PqOsS5gh0clDqKw+dnOBQxMHpoIRQ4rHGcUGwZX3ajfPLsxwNslA ekiHe808PRQ6SEroQm27ECNo5vSZUqLj486HxlhhOKRzMKqqla9tq8qj4T09MxGd Fkl4m/3TycrQdG9p+wW8Pks4tccSy8PhbWS+yItBBVo1Ek9Jnrs/oSNzu/EMHlbA KBETMK3H+DRrSOiQH/hJWqBdsaqY6EGvA1G1pWrkDUjF5iUJywVNevBdmEQZnzF4 9kUpevV4SlSl5kZ4RcqCLlSCI87Bz1+diAc7eBqHi1fdK5BQA== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3v1c9kumj2-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Tue, 19 Dec 2023 09:41:37 -0800 (PST) Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 19 Dec 2023 09:41:34 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Dec 2023 09:41:34 -0800 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id 92F253F708D; Tue, 19 Dec 2023 09:41:32 -0800 (PST) From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 22/24] net/cnxk: flow create on representor ports Date: Tue, 19 Dec 2023 23:10:01 +0530 Message-ID: <20231219174003.72901-23-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: VZace1hJcTFAGwKSLcwTyYfT5uLrHYx9 X-Proofpoint-GUID: VZace1hJcTFAGwKSLcwTyYfT5uLrHYx9 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org - Implementing base infra for handling flow operations performed on representor ports, where these representor ports may represent native representees or be part of companion apps. - Handling flow create operation Signed-off-by: Harman Kalra ---
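Illustration (editor's sketch, not part of the original patch): the application-facing
path this change enables. A flow created through a representor ethdev is relayed by
cnxk_rep_flow_create() to the eswitch domain; the port id (1) and the drop rule below
are hypothetical values:

    /* Assumes port 1 is a representor ethdev spawned via representor devargs. */
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DROP },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error flow_err;
    struct rte_flow *flow = rte_flow_create(1, &attr, pattern, actions, &flow_err);
    /* For a non-native representee the PMD encodes attr/pattern/actions into a
     * CNXK_REP_MSG_FLOW_CREATE message and sends it to the eswitch PF, which
     * programs the rule on the representee's behalf. */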
drivers/net/cnxk/cnxk_flow.h | 9 +- drivers/net/cnxk/cnxk_rep.h | 3 + drivers/net/cnxk/cnxk_rep_flow.c | 399 +++++++++++++++++++++++++++++++ drivers/net/cnxk/cnxk_rep_msg.h | 27 +++ drivers/net/cnxk/cnxk_rep_ops.c | 3 +- drivers/net/cnxk/meson.build | 1 + 6 files changed, 439 insertions(+), 3 deletions(-) create mode 100644 drivers/net/cnxk/cnxk_rep_flow.c diff --git a/drivers/net/cnxk/cnxk_flow.h b/drivers/net/cnxk/cnxk_flow.h index 84333e7f9d..26384400c1 100644 --- a/drivers/net/cnxk/cnxk_flow.h +++ b/drivers/net/cnxk/cnxk_flow.h @@ -16,8 +16,13 @@ struct cnxk_rte_flow_term_info { uint16_t item_size; }; -struct roc_npc_flow *cnxk_flow_create(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, +struct cnxk_rte_flow_action_info { + uint16_t conf_size; +}; + +extern const struct cnxk_rte_flow_term_info term[]; + +struct roc_npc_flow *cnxk_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error); diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h index 9ac675426e..2b850e7e59 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -20,6 +20,9 @@ /* Common ethdev ops */ extern struct eth_dev_ops cnxk_rep_dev_ops; +/* Flow ops for representor ports */ +extern struct rte_flow_ops cnxk_rep_flow_ops; + struct cnxk_rep_queue_stats { uint64_t pkts; uint64_t bytes; diff --git a/drivers/net/cnxk/cnxk_rep_flow.c b/drivers/net/cnxk/cnxk_rep_flow.c new file mode 100644 index 0000000000..ab9ced6ece --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep_flow.c @@ -0,0 +1,399 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell.
+ */ + +#include +#include + +#include +#include +#include + +#define DEFAULT_DUMP_FILE_NAME "/tmp/fdump" +#define MAX_BUFFER_SIZE 1500 + +const struct cnxk_rte_flow_action_info action_info[] = { + [RTE_FLOW_ACTION_TYPE_MARK] = {sizeof(struct rte_flow_action_mark)}, + [RTE_FLOW_ACTION_TYPE_VF] = {sizeof(struct rte_flow_action_vf)}, + [RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT] = {sizeof(struct rte_flow_action_port_id)}, + [RTE_FLOW_ACTION_TYPE_PORT_ID] = {sizeof(struct rte_flow_action_port_id)}, + [RTE_FLOW_ACTION_TYPE_QUEUE] = {sizeof(struct rte_flow_action_queue)}, + [RTE_FLOW_ACTION_TYPE_RSS] = {sizeof(struct rte_flow_action_rss)}, + [RTE_FLOW_ACTION_TYPE_SECURITY] = {sizeof(struct rte_flow_action_security)}, + [RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID] = {sizeof(struct rte_flow_action_of_set_vlan_vid)}, + [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = {sizeof(struct rte_flow_action_of_push_vlan)}, + [RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP] = {sizeof(struct rte_flow_action_of_set_vlan_pcp)}, + [RTE_FLOW_ACTION_TYPE_METER] = {sizeof(struct rte_flow_action_meter)}, + [RTE_FLOW_ACTION_TYPE_OF_POP_MPLS] = {sizeof(struct rte_flow_action_of_pop_mpls)}, + [RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS] = {sizeof(struct rte_flow_action_of_push_mpls)}, + [RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {sizeof(struct rte_flow_action_vxlan_encap)}, + [RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP] = {sizeof(struct rte_flow_action_nvgre_encap)}, + [RTE_FLOW_ACTION_TYPE_RAW_ENCAP] = {sizeof(struct rte_flow_action_raw_encap)}, + [RTE_FLOW_ACTION_TYPE_RAW_DECAP] = {sizeof(struct rte_flow_action_raw_decap)}, + [RTE_FLOW_ACTION_TYPE_COUNT] = {sizeof(struct rte_flow_action_count)}, +}; + +static void +cnxk_flow_params_count(const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + uint16_t *n_pattern, uint16_t *n_action) +{ + int i = 0; + + for (; pattern->type != RTE_FLOW_ITEM_TYPE_END; pattern++) + i++; + + *n_pattern = ++i; + plt_rep_dbg("Total patterns is %d", *n_pattern); + + i = 0; + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) + i++; + *n_action = ++i; + plt_rep_dbg("Total actions is %d", *n_action); +} + +static void +populate_attr_data(void *buffer, uint32_t *length, const struct rte_flow_attr *attr) +{ + uint32_t sz = sizeof(struct rte_flow_attr); + uint32_t len; + + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_ATTR, sz); + + len = *length; + /* Populate the attribute data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), attr, sz); + len += sz; + + *length = len; +} + +static uint16_t +prepare_pattern_data(const struct rte_flow_item *pattern, uint16_t nb_pattern, + uint64_t *pattern_data) +{ + cnxk_pattern_hdr_t hdr; + uint16_t len = 0; + int i = 0; + + for (i = 0; i < nb_pattern; i++) { + /* Populate the pattern type hdr */ + memset(&hdr, 0, sizeof(cnxk_pattern_hdr_t)); + hdr.type = pattern->type; + if (pattern->spec) { + hdr.spec_sz = term[pattern->type].item_size; + hdr.last_sz = 0; + hdr.mask_sz = term[pattern->type].item_size; + } + + rte_memcpy(RTE_PTR_ADD(pattern_data, len), &hdr, sizeof(cnxk_pattern_hdr_t)); + len += sizeof(cnxk_pattern_hdr_t); + + /* Copy pattern spec data */ + if (pattern->spec) { + rte_memcpy(RTE_PTR_ADD(pattern_data, len), pattern->spec, + term[pattern->type].item_size); + len += term[pattern->type].item_size; + } + + /* Copy pattern last data */ + if (pattern->last) { + rte_memcpy(RTE_PTR_ADD(pattern_data, len), pattern->last, + term[pattern->type].item_size); + len += term[pattern->type].item_size; + } + + /* Copy pattern mask data */ + if (pattern->mask) { + 
rte_memcpy(RTE_PTR_ADD(pattern_data, len), pattern->mask, + term[pattern->type].item_size); + len += term[pattern->type].item_size; + } + pattern++; + } + + return len; +} + +static void +populate_pattern_data(void *buffer, uint32_t *length, const struct rte_flow_item *pattern, + uint16_t nb_pattern) +{ + uint64_t pattern_data[BUFSIZ]; + uint32_t len; + uint32_t sz; + + /* Prepare pattern_data */ + sz = prepare_pattern_data(pattern, nb_pattern, pattern_data); + + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_PATTERN, sz); + + len = *length; + /* Populate the pattern data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), pattern_data, sz); + len += sz; + + *length = len; +} + +static uint16_t +populate_rss_action_conf(const struct rte_flow_action_rss *conf, void *rss_action_conf) +{ + int len, sz; + + len = sizeof(struct rte_flow_action_rss) - sizeof(conf->key) - sizeof(conf->queue); + + if (rss_action_conf) + rte_memcpy(rss_action_conf, conf, len); + + if (conf->key) { + sz = conf->key_len; + if (rss_action_conf) + rte_memcpy(RTE_PTR_ADD(rss_action_conf, len), conf->key, sz); + len += sz; + } + + if (conf->queue) { + sz = conf->queue_num * sizeof(conf->queue); + if (rss_action_conf) + rte_memcpy(RTE_PTR_ADD(rss_action_conf, len), conf->queue, sz); + len += sz; + } + + return len; +} + +static uint16_t +populate_vxlan_encap_action_conf(const struct rte_flow_action_vxlan_encap *vxlan_conf, + void *vxlan_encap_action_data) +{ + const struct rte_flow_item *pattern; + uint64_t nb_patterns = 0; + uint16_t len, sz; + + pattern = vxlan_conf->definition; + for (; pattern->type != RTE_FLOW_ITEM_TYPE_END; pattern++) + nb_patterns++; + + len = sizeof(uint64_t); + rte_memcpy(vxlan_encap_action_data, &nb_patterns, len); + pattern = vxlan_conf->definition; + /* Prepare pattern_data */ + sz = prepare_pattern_data(pattern, nb_patterns, RTE_PTR_ADD(vxlan_encap_action_data, len)); + + len += sz; + if (len > BUFSIZ) { + plt_err("Incomplete item definition loaded, len %d", len); + return 0; + } + + return len; +} + +static uint16_t +prepare_action_data(const struct rte_flow_action *action, uint16_t nb_action, uint64_t *action_data) +{ + void *action_conf_data = NULL; + cnxk_action_hdr_t hdr; + uint16_t len = 0, sz = 0; + int i = 0; + + for (i = 0; i < nb_action; i++) { + if (action->conf) { + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_RSS: + sz = populate_rss_action_conf(action->conf, NULL); + action_conf_data = plt_zmalloc(sz, 0); + if (populate_rss_action_conf(action->conf, action_conf_data) != + sz) { + plt_err("Populating RSS action config failed"); + return 0; + } + break; + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + action_conf_data = plt_zmalloc(BUFSIZ, 0); + sz = populate_vxlan_encap_action_conf(action->conf, + action_conf_data); + if (!sz) { + plt_err("Populating vxlan encap action config failed"); + return 0; + } + break; + default: + sz = action_info[action->type].conf_size; + action_conf_data = plt_zmalloc(sz, 0); + rte_memcpy(action_conf_data, action->conf, sz); + break; + } + } + + /* Populate the action type hdr */ + memset(&hdr, 0, sizeof(cnxk_action_hdr_t)); + hdr.type = action->type; + hdr.conf_sz = sz; + + rte_memcpy(RTE_PTR_ADD(action_data, len), &hdr, sizeof(cnxk_action_hdr_t)); + len += sizeof(cnxk_action_hdr_t); + + /* Copy action conf data */ + if (action_conf_data) { + rte_memcpy(RTE_PTR_ADD(action_data, len), action_conf_data, sz); + len += sz; + plt_free(action_conf_data); + action_conf_data = NULL; + } + + action++; + } + + return len; +} + +static void
+populate_action_data(void *buffer, uint32_t *length, const struct rte_flow_action *action, + uint16_t nb_action) +{ + uint64_t action_data[BUFSIZ]; + uint32_t len; + uint32_t sz; + + /* Prepare action_data */ + sz = prepare_action_data(action, nb_action, action_data); + + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_ACTION, sz); + + len = *length; + /* Populate the action data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), action_data, sz); + len += sz; + + *length = len; +} + +static int +process_flow_rule(struct cnxk_rep_dev *rep_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + cnxk_rep_msg_ack_data_t *adata, cnxk_rep_msg_t msg) +{ + cnxk_rep_msg_flow_create_meta_t msg_fc_meta; + uint16_t n_pattern, n_action; + uint32_t len = 0, rc = 0; + void *buffer; + size_t size; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + /* Get no of actions and patterns */ + cnxk_flow_params_count(pattern, actions, &n_pattern, &n_action); + + /* Adding the header */ + cnxk_rep_msg_populate_header(buffer, &len); + + /* Representor port identified as rep_xport queue */ + msg_fc_meta.portid = rep_dev->rep_id; + msg_fc_meta.nb_pattern = n_pattern; + msg_fc_meta.nb_action = n_action; + + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fc_meta, + sizeof(cnxk_rep_msg_flow_create_meta_t), msg); + + /* Populate flow create parameters data */ + populate_attr_data(buffer, &len, attr); + populate_pattern_data(buffer, &len, pattern, n_pattern); + populate_action_data(buffer, &len, actions, n_action); + + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + return 0; +fail: + return rc; +} + +static struct rte_flow * +cnxk_rep_flow_create_native(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + struct roc_npc_flow *flow; + uint16_t new_entry; + int rc; + + flow = cnxk_flow_create_internal(eth_dev, attr, pattern, actions, error, true); + /* Shifting the rules with higher priority than exception path rules */ + new_entry = (uint16_t)flow->mcam_id; + rc = cnxk_eswitch_flow_rule_shift(rep_dev->hw_func, &new_entry); + if (rc) { + plt_err("Failed to shift the flow rule entry, err %d", rc); + goto fail; + } + + flow->mcam_id = new_entry; + + return (struct rte_flow *)flow; +fail: + return NULL; +} + +static struct rte_flow * +cnxk_rep_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + struct rte_flow *flow = NULL; + cnxk_rep_msg_ack_data_t adata; + int rc = 0; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) { + rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Represented VF not active yet"); + return 0; + } + + if (rep_dev->native_repte) + return cnxk_rep_flow_create_native(eth_dev, attr, pattern, actions, error); + + rc = process_flow_rule(rep_dev, attr, pattern, actions, &adata, CNXK_REP_MSG_FLOW_CREATE); + if (!rc || adata.u.sval < 0) { + if 
(adata.u.sval < 0) { + rc = (int)adata.u.sval; + rte_flow_error_set(error, adata.u.sval, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Failed to validate flow"); + goto fail; + } + + flow = adata.u.data; + if (!flow) { + rte_flow_error_set(error, adata.u.sval, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Failed to create flow"); + goto fail; + } + } else { + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to create flow"); + goto fail; + } + plt_rep_dbg("Flow %p created successfully", adata.u.data); + + return flow; +fail: + return NULL; +} + +struct rte_flow_ops cnxk_rep_flow_ops = { + .create = cnxk_rep_flow_create, +}; diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h index 3236de50ad..2a7b5e3bc5 100644 --- a/drivers/net/cnxk/cnxk_rep_msg.h +++ b/drivers/net/cnxk/cnxk_rep_msg.h @@ -12,6 +12,10 @@ typedef enum CNXK_TYPE { CNXK_TYPE_HEADER = 0, CNXK_TYPE_MSG, + CNXK_TYPE_ATTR, + CNXK_TYPE_PATTERN, + CNXK_TYPE_ACTION, + CNXK_TYPE_FLOW } cnxk_type_t; typedef enum CNXK_REP_MSG { @@ -23,6 +27,8 @@ typedef enum CNXK_REP_MSG { CNXK_REP_MSG_ETH_SET_MAC, CNXK_REP_MSG_ETH_STATS_GET, CNXK_REP_MSG_ETH_STATS_CLEAR, + /* Flow operation msgs */ + CNXK_REP_MSG_FLOW_CREATE, /* End of messaging sequence */ CNXK_REP_MSG_END, } cnxk_rep_msg_t; @@ -96,6 +102,27 @@ typedef struct cnxk_rep_msg_eth_stats_meta { uint16_t portid; } __rte_packed cnxk_rep_msg_eth_stats_meta_t; +/* Flow create msg meta */ +typedef struct cnxk_rep_msg_flow_create_meta { + uint16_t portid; + uint16_t nb_pattern; + uint16_t nb_action; +} __rte_packed cnxk_rep_msg_flow_create_meta_t; + +/* Type pattern meta */ +typedef struct cnxk_pattern_hdr { + uint16_t type; + uint16_t spec_sz; + uint16_t last_sz; + uint16_t mask_sz; +} __rte_packed cnxk_pattern_hdr_t; + +/* Type action meta */ +typedef struct cnxk_action_hdr { + uint16_t type; + uint16_t conf_sz; +} __rte_packed cnxk_action_hdr_t; + void cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type, uint32_t size); void cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz, diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c index e07c63dcb2..a461ae1dc3 100644 --- a/drivers/net/cnxk/cnxk_rep_ops.c +++ b/drivers/net/cnxk/cnxk_rep_ops.c @@ -637,7 +637,8 @@ int cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops) { PLT_SET_USED(ethdev); - PLT_SET_USED(ops); + *ops = &cnxk_rep_flow_ops; + return 0; } diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build index 9ca7732713..8cc06f4967 100644 --- a/drivers/net/cnxk/meson.build +++ b/drivers/net/cnxk/meson.build @@ -39,6 +39,7 @@ sources = files( 'cnxk_rep.c', 'cnxk_rep_msg.c', 'cnxk_rep_ops.c', + 'cnxk_rep_flow.c', 'cnxk_stats.c', 'cnxk_tm.c', ) From patchwork Tue Dec 19 17:40:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 135368 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 329C143747; Tue, 19 Dec 2023 18:43:39 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BD9A842EAF; Tue, 19 Dec 2023 18:41:44 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by 
mails.dpdk.org (Postfix) with ESMTP id 0225642E8A for ; Tue, 19 Dec 2023 18:41:40 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 3BJ9d535021347 for ; Tue, 19 Dec 2023 09:41:40 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-type; s=pfpt0220; bh=iKD9paSx51qawwpIc/eBm K4ZpWPFMPJBvlETh1djwvY=; b=TCdhraeeJ+AozMUJJk7FCLUnQlOabX/jSCbb4 RosodxpKJ0AjX6eByDKjDN3Qtf44xSlEHz//5DPP6gRo6MfMMkBei+Bjostkjvmi qxrP/Ek8n9C26RCq8nmkZ+v8j7JFLEE926fnhH/yuIHRHkjgJ6N7HV48YmqH1dpz kbIJdvEoI350s1P7X9i95h835ZpK4yIXHhAtIi7PVA0IEhMzBv+NjbyryJ3T2SWQ gY/fWpH+KfAtbZ0RHkppbeLC3kHVyQjyXfoaNeNVznTQMnc/VJo2L8lDiASXz33L YapNFJiL0Ub6m3TY2/4eipgofZLAmoXBx+N0gqr2YOecIFe/w== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3v1c9kumj8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Tue, 19 Dec 2023 09:41:40 -0800 (PST) Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 19 Dec 2023 09:41:37 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Dec 2023 09:41:37 -0800 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id 903FE3F7050; Tue, 19 Dec 2023 09:41:35 -0800 (PST) From: Harman Kalra To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra CC: , Subject: [PATCH v2 23/24] net/cnxk: other flow operations Date: Tue, 19 Dec 2023 23:10:02 +0530 Message-ID: <20231219174003.72901-24-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> <20231219174003.72901-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: go-nYqrZvjmwwsPD8MPshIQwf4AUtThy X-Proofpoint-GUID: go-nYqrZvjmwwsPD8MPshIQwf4AUtThy X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Implementing other flow operations - validate, destroy, query, flush, dump for representor ports Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_rep_flow.c | 414 +++++++++++++++++++++++++++++++ drivers/net/cnxk/cnxk_rep_msg.h | 32 +++ 2 files changed, 446 insertions(+) diff --git a/drivers/net/cnxk/cnxk_rep_flow.c b/drivers/net/cnxk/cnxk_rep_flow.c index ab9ced6ece..2abec485bc 100644 --- a/drivers/net/cnxk/cnxk_rep_flow.c +++ b/drivers/net/cnxk/cnxk_rep_flow.c @@ -270,6 +270,221 @@ populate_action_data(void *buffer, uint32_t *length, const struct rte_flow_actio *length = len; } +static int +process_flow_destroy(struct cnxk_rep_dev *rep_dev, void *flow, cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_destroy_meta_t msg_fd_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { 
+ plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_fd_meta.portid = rep_dev->rep_id; + msg_fd_meta.flow = (uint64_t)flow; + plt_rep_dbg("Flow Destroy: flow 0x%" PRIu64 ", portid %d", msg_fd_meta.flow, + msg_fd_meta.portid); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fd_meta, + sizeof(cnxk_rep_msg_flow_destroy_meta_t), + CNXK_REP_MSG_FLOW_DESTROY); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +copy_flow_dump_file(FILE *target) +{ + FILE *source = NULL; + int pos; + char ch; + + source = fopen(DEFAULT_DUMP_FILE_NAME, "r"); + if (source == NULL) { + plt_err("Failed to read default dump file: %s, err %d", DEFAULT_DUMP_FILE_NAME, + errno); + return errno; + } + + fseek(source, 0L, SEEK_END); + pos = ftell(source); + fseek(source, 0L, SEEK_SET); + while (pos--) { + ch = fgetc(source); + fputc(ch, target); + } + + fclose(source); + + /* Remove the default file after reading */ + remove(DEFAULT_DUMP_FILE_NAME); + + return 0; +} + +static int +process_flow_dump(struct cnxk_rep_dev *rep_dev, struct rte_flow *flow, FILE *file, + cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_dump_meta_t msg_fp_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_fp_meta.portid = rep_dev->rep_id; + msg_fp_meta.flow = (uint64_t)flow; + msg_fp_meta.is_stdout = (file == stdout) ? 
1 : 0; + + plt_rep_dbg("Flow Dump: flow 0x%" PRIu64 ", portid %d stdout %d", msg_fp_meta.flow, + msg_fp_meta.portid, msg_fp_meta.is_stdout); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fp_meta, + sizeof(cnxk_rep_msg_flow_dump_meta_t), + CNXK_REP_MSG_FLOW_DUMP); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + /* Copy contents from default file to user file */ + if (file != stdout) + copy_flow_dump_file(file); + + return 0; +fail: + return rc; +} + +static int +process_flow_flush(struct cnxk_rep_dev *rep_dev, cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_flush_meta_t msg_ff_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_ff_meta.portid = rep_dev->rep_id; + plt_rep_dbg("Flow Flush: portid %d", msg_ff_meta.portid); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_ff_meta, + sizeof(cnxk_rep_msg_flow_flush_meta_t), + CNXK_REP_MSG_FLOW_FLUSH); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +process_flow_query(struct cnxk_rep_dev *rep_dev, struct rte_flow *flow, + const struct rte_flow_action *action, void *data, cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_query_meta_t *msg_fq_meta; + struct rte_flow_query_count *query = data; + uint32_t len = 0, rc, sz, total_sz; + uint64_t action_data[BUFSIZ]; + void *buffer; + size_t size; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + sz = prepare_action_data(action, 1, action_data); + total_sz = sz + sizeof(cnxk_rep_msg_flow_query_meta_t); + + msg_fq_meta = plt_zmalloc(total_sz, 0); + if (!msg_fq_meta) { + plt_err("Failed to allocate memory"); + rc = -ENOMEM; + goto fail; + } + + msg_fq_meta->portid = rep_dev->rep_id; + msg_fq_meta->reset = query->reset; + msg_fq_meta->flow = (uint64_t)flow; + /* Populate the action data */ + rte_memcpy(msg_fq_meta->action_data, action_data, sz); + msg_fq_meta->action_data_sz = sz; + + plt_rep_dbg("Flow query: flow 0x%" PRIu64 ", portid %d, action type %d total sz %d " + "action sz %d", msg_fq_meta->flow, msg_fq_meta->portid, action->type, total_sz, + sz); + cnxk_rep_msg_populate_command_meta(buffer, &len, msg_fq_meta, total_sz, + CNXK_REP_MSG_FLOW_QUERY); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto free; + } + + rte_free(msg_fq_meta); + + return 0; + +free: + rte_free(msg_fq_meta); +fail: + return rc; +} + static int process_flow_rule(struct cnxk_rep_dev *rep_dev, const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], @@ -394,6 +609,205 @@ cnxk_rep_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *at return NULL; } +static int +cnxk_rep_flow_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const
struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_ack_data_t adata; + int rc = 0; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) { + rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Represented VF not active yet"); + return 0; + } + + if (rep_dev->native_repte) + return cnxk_flow_validate_internal(eth_dev, attr, pattern, actions, error, true); + + rc = process_flow_rule(rep_dev, attr, pattern, actions, &adata, CNXK_REP_MSG_FLOW_VALIDATE); + if (!rc || adata.u.sval < 0) { + if (adata.u.sval < 0) { + rc = (int)adata.u.sval; + rte_flow_error_set(error, adata.u.sval, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Failed to validate flow"); + goto fail; + } + } else { + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to validate flow"); + } + + plt_rep_dbg("Flow %p validated successfully", adata.u.data); + +fail: + return rc; +} + +static int +cnxk_rep_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_ack_data_t adata; + int rc; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) { + rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Represented VF not active yet"); + return 0; + } + + if (rep_dev->native_repte) + return cnxk_flow_destroy_internal(eth_dev, (struct roc_npc_flow *)flow, error, + true); + + rc = process_flow_destroy(rep_dev, flow, &adata); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to destroy flow"); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +cnxk_rep_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + const struct rte_flow_action *action, void *data, struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_ack_data_t adata; + int rc; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) { + rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Represented VF not active yet"); + return 0; + } + + if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) { + rc = -ENOTSUP; + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Only COUNT is supported in query"); + goto fail; + } + + if (rep_dev->native_repte) + return cnxk_flow_query_internal(eth_dev, flow, action, data, error, true); + + rc = process_flow_query(rep_dev, flow, action, data, &adata); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to query the flow"); + goto fail; + } + + rte_memcpy(data, adata.u.data, adata.size); + + return 0; +fail: + return rc; +} + +static int +cnxk_rep_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_ack_data_t adata; + int rc; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) { + rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Represented VF not active yet"); + return 0; + } + + if (rep_dev->native_repte) + return cnxk_flow_flush_internal(eth_dev, 
error, true); + + rc = process_flow_flush(rep_dev, &adata); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to flush flows"); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +cnxk_rep_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, FILE *file, + struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_ack_data_t adata; + int rc; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) { + rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Represented VF not active yet"); + return 0; + } + + if (rep_dev->native_repte) + return cnxk_flow_dev_dump_internal(eth_dev, flow, file, error, true); + + rc = process_flow_dump(rep_dev, flow, file, &adata); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to dump flow"); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +cnxk_rep_flow_isolate(struct rte_eth_dev *eth_dev __rte_unused, int enable __rte_unused, + struct rte_flow_error *error) +{ + /* If supported, we would need to uninstall the default MCAM + * entry for this port. + */ + + rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Flow isolation not supported"); + + return -rte_errno; +} + struct rte_flow_ops cnxk_rep_flow_ops = { + .validate = cnxk_rep_flow_validate, .create = cnxk_rep_flow_create, + .destroy = cnxk_rep_flow_destroy, + .query = cnxk_rep_flow_query, + .flush = cnxk_rep_flow_flush, + .isolate = cnxk_rep_flow_isolate, + .dev_dump = cnxk_rep_flow_dev_dump, }; diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h index 2a7b5e3bc5..837eb55ba6 100644 --- a/drivers/net/cnxk/cnxk_rep_msg.h +++ b/drivers/net/cnxk/cnxk_rep_msg.h @@ -29,6 +29,11 @@ typedef enum CNXK_REP_MSG { CNXK_REP_MSG_ETH_STATS_CLEAR, /* Flow operation msgs */ CNXK_REP_MSG_FLOW_CREATE, + CNXK_REP_MSG_FLOW_DESTROY, + CNXK_REP_MSG_FLOW_VALIDATE, + CNXK_REP_MSG_FLOW_FLUSH, + CNXK_REP_MSG_FLOW_DUMP, + CNXK_REP_MSG_FLOW_QUERY, /* End of messaging sequence */ CNXK_REP_MSG_END, } cnxk_rep_msg_t; @@ -109,6 +114,33 @@ typedef struct cnxk_rep_msg_flow_create_meta { uint16_t nb_action; } __rte_packed cnxk_rep_msg_flow_create_meta_t; +/* Flow destroy msg meta */ +typedef struct cnxk_rep_msg_flow_destroy_meta { + uint64_t flow; + uint16_t portid; +} __rte_packed cnxk_rep_msg_flow_destroy_meta_t; + +/* Flow flush msg meta */ +typedef struct cnxk_rep_msg_flow_flush_meta { + uint16_t portid; +} __rte_packed cnxk_rep_msg_flow_flush_meta_t; + +/* Flow dump msg meta */ +typedef struct cnxk_rep_msg_flow_dump_meta { + uint64_t flow; + uint16_t portid; + uint8_t is_stdout; +} __rte_packed cnxk_rep_msg_flow_dump_meta_t; + +/* Flow query msg meta */ +typedef struct cnxk_rep_msg_flow_query_meta { + uint64_t flow; + uint16_t portid; + uint8_t reset; + uint32_t action_data_sz; + uint8_t action_data[]; +} __rte_packed cnxk_rep_msg_flow_query_meta_t; + /* Type pattern meta */ typedef struct cnxk_pattern_hdr { uint16_t type;
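Illustration (editor's sketch, not part of the original series): this patch routes
rte_flow_query() for the COUNT action through the same message channel
(CNXK_REP_MSG_FLOW_QUERY), and cnxk_rep_flow_query() rejects any other action with
-ENOTSUP. A minimal caller-side sketch; the port id and the `flow` handle are
hypothetical, and <stdio.h>/<inttypes.h> are assumed:

    /* Query hit counters of a flow previously created via representor port 1. */
    struct rte_flow_query_count count = { .reset = 0 };
    const struct rte_flow_action count_action[] = {
        { .type = RTE_FLOW_ACTION_TYPE_COUNT },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error flow_err;

    if (rte_flow_query(1, flow, &count_action[0], &count, &flow_err) == 0)
        printf("hits: %" PRIu64 "\n", count.hits);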
From patchwork Tue Dec 19 17:40:03 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 135369
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Thomas Monjalon, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
 Satha Rao, Harman Kalra
Subject: [PATCH v2 24/24] doc: port representors in cnxk
Date: Tue, 19 Dec 2023 23:10:03 +0530
Message-ID: <20231219174003.72901-25-hkalra@marvell.com>
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20231219174003.72901-1-hkalra@marvell.com>

Update the CNXK PMD documentation with the added support for port
representors.
Signed-off-by: Harman Kalra
---
 MAINTAINERS                          |  1 +
 doc/guides/nics/cnxk.rst             | 58 ++++++++++++++++++++++++++++
 doc/guides/nics/features/cnxk.ini    |  3 ++
 doc/guides/nics/features/cnxk_vf.ini |  4 ++
 4 files changed, 66 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 0d1c8126e3..2716178e18 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -827,6 +827,7 @@ M: Nithin Dabilpuram
 M: Kiran Kumar K
 M: Sunil Kumar Kori
 M: Satha Rao
+M: Harman Kalra
 T: git://dpdk.org/next/dpdk-next-net-mrvl
 F: drivers/common/cnxk/
 F: drivers/net/cnxk/
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 9ec52e380f..5fd1f6513a 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -37,6 +37,9 @@ Features of the CNXK Ethdev PMD are:
 - Inline IPsec processing support
 - Ingress meter support
 - Queue based priority flow control support
+- Port representors
+- Represented port pattern matching and action
+- Port representor pattern matching and action

 Prerequisites
 -------------
@@ -613,6 +616,57 @@ Runtime Config Options for inline device
    With the above configuration, driver would poll for aging flows every 50
    seconds.

+Port Representors
+-----------------
+
+The CNXK driver supports the port representor model by adding virtual Ethernet
+ports that provide a logical representation in DPDK of physical function (PF)
+or SR-IOV virtual function (VF) devices, for control and monitoring.
+
+The base (parent) device underneath these representor ports is an eswitch
+device, which is not a cnxk Ethernet device but has NIC Rx and Tx capabilities.
+Each representor port is backed by an RQ/SQ pair of this eswitch device.
+
+The current implementation supports representors for both physical functions
+and virtual functions.
+
+These representor ethdev instances can be spawned on an as-needed basis
+through configuration parameters passed to the driver of the underlying
+base device using the devargs ``-a <bdf>,representor=pf*vf*``.
+
+.. note::
+
+   Representor ports to be created for respective representees should be
+   defined via these representor devargs.
+   E.g. to create a representor for representee PF1VF0, the devargs to be
+   passed are ``-a <bdf>,representor=pf1vf0``
+
+   For a PF representor:
+   ``-a <bdf>,representor=pf2``
+
+   For defining a range of VFs, say 5 representor ports under a PF:
+   ``-a <bdf>,representor=pf0vf[0-4]``
+
+   For representing different VFs under different PFs:
+   ``-a <bdf>,representor=pf0vf[1,2],representor=pf1vf[2-5]``
+
+In the exception path (i.e. until a flow definition is offloaded to the
+hardware), packets transmitted by the VFs are received by these representor
+ports, while packets transmitted by the representor ports are received by
+the respective VFs.
+
+On receiving VF traffic via these representor ports, the application holding
+them can decide to offload that traffic flow into the hardware. Thereafter,
+matching traffic is steered directly to the respective VFs without being
+received by the application.
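To make the offload model above concrete, here is a minimal application-side sketch using the generic rte_flow API. It is not part of this patch: the port ids, the wire-port topology, and the choice of a catch-all match are illustrative assumptions.

    /* Minimal illustrative sketch: once VF traffic has been inspected
     * on its representor, install a transfer rule that forwards further
     * traffic from that VF straight to the wire port, bypassing the
     * application. rep_port_id is the representor (whose represented
     * entity is the VF); wire_port_id represents the physical port.
     */
    #include <stdint.h>
    #include <rte_flow.h>

    static struct rte_flow *
    offload_vf_to_wire(uint16_t rep_port_id, uint16_t wire_port_id,
                       struct rte_flow_error *err)
    {
            struct rte_flow_attr attr = { .transfer = 1 };
            /* Match packets entering the eswitch from the represented VF. */
            struct rte_flow_item_ethdev src = { .port_id = rep_port_id };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &src },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            /* Deliver them to the entity represented by wire_port_id. */
            struct rte_flow_action_ethdev dst = { .port_id = wire_port_id };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &dst },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            if (rte_flow_validate(rep_port_id, &attr, pattern, actions, err) != 0)
                    return NULL;

            return rte_flow_create(rep_port_id, &attr, pattern, actions, err);
    }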
+
+The virtual representor port PMD currently supports the following operations:
+
+- Get and clear VF statistics
+- Set MAC address
+- Flow operations - create, validate, destroy, query, flush, dump
+
 Debugging Options
 -----------------
@@ -627,3 +681,7 @@ Debugging Options
    +---+------------+-------------------------------------------------------+
    | 2 | NPC        | --log-level='pmd\.net.cnxk\.flow,8'                   |
    +---+------------+-------------------------------------------------------+
+   | 3 | REP        | --log-level='pmd\.net.cnxk\.rep,8'                    |
+   +---+------------+-------------------------------------------------------+
+   | 4 | ESW        | --log-level='pmd\.net.cnxk\.esw,8'                    |
+   +---+------------+-------------------------------------------------------+
diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index 94e7a6ab8d..88d5aaaa4e 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -73,6 +73,8 @@ mpls = Y
 nvgre = Y
 pppoes = Y
 raw = Y
+represented_port = Y
+port_representor = Y
 sctp = Y
 tcp = Y
 tx_queue = Y
@@ -96,6 +98,7 @@ pf = Y
 port_id = Y
 queue = Y
 represented_port = Y
+port_representor = Y
 rss = Y
 sample = Y
 security = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini b/doc/guides/nics/features/cnxk_vf.ini
index 53aa2a3d0c..7d7a1cad1b 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -64,6 +64,8 @@ mpls = Y
 nvgre = Y
 pppoes = Y
 raw = Y
+represented_port = Y
+port_representor = Y
 sctp = Y
 tcp = Y
 tx_queue = Y
@@ -85,6 +87,8 @@ of_set_vlan_pcp = Y
 of_set_vlan_vid = Y
 pf = Y
 queue = Y
+represented_port = Y
+port_representor = Y
 rss = Y
 security = Y
 skip_cman = Y
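As a closing usage illustration (not part of the series): launching testpmd with two VF representors and the new REP and ESW log types enabled might look as follows, where 0002:02:00.0 stands in for the eswitch base device's PCI address:

    dpdk-testpmd -c 0x3 -a 0002:02:00.0,representor=pf0vf[0-1] \
        --log-level='pmd\.net.cnxk\.rep,8' --log-level='pmd\.net.cnxk\.esw,8' \
        -- -i

The representor devargs go on the base (eswitch) device, per the Port Representors section above; the log-level values mirror the debugging table entries.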