From patchwork Tue Dec 19 17:39:44 2023
X-Patchwork-Submitter: Harman Kalra <hkalra@marvell.com>
X-Patchwork-Id: 135350
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra <hkalra@marvell.com>
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Harman Kalra, Anatoly Burakov
CC: <dev@dpdk.org>
Subject: [PATCH v2 05/24] net/cnxk: probing representor ports
Date: Tue, 19 Dec 2023 23:09:44 +0530
Message-ID: <20231219174003.72901-6-hkalra@marvell.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20231219174003.72901-1-hkalra@marvell.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Add a basic skeleton for probing representor devices. If the PF device
is passed with the "representor" devargs, representor ports get probed
as separate ethdev devices.
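For example, representors could be requested with a command line along
these lines (illustrative only; the eswitch PF BDF and the representor
ranges depend on the actual setup, and the exact devargs accepted are
defined later in this series):

    dpdk-testpmd -a 0002:01:00.0,representor=pf[0-1]vf[0-3] -- -i

Each PF/VF matched by the devargs is then probed as its own representor
ethdev port on top of the eswitch device.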
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
 drivers/net/cnxk/cnxk_eswitch.c |  12 ++
 drivers/net/cnxk/cnxk_eswitch.h |   8 +-
 drivers/net/cnxk/cnxk_rep.c     | 256 ++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_rep.h     |  50 +++++++
 drivers/net/cnxk/cnxk_rep_ops.c | 129 ++++++++++++++++
 drivers/net/cnxk/meson.build    |   2 +
 6 files changed, 456 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cnxk_rep.c
 create mode 100644 drivers/net/cnxk/cnxk_rep.h
 create mode 100644 drivers/net/cnxk/cnxk_rep_ops.c

diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c
index 739a09c034..563b224a6c 100644
--- a/drivers/net/cnxk/cnxk_eswitch.c
+++ b/drivers/net/cnxk/cnxk_eswitch.c
@@ -3,6 +3,7 @@
  */
 
 #include <cnxk_eswitch.h>
+#include <cnxk_rep.h>
 
 #define CNXK_NIX_DEF_SQ_COUNT 512
 
@@ -42,6 +43,10 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev)
 
 	eswitch_dev = cnxk_eswitch_pmd_priv();
 
+	/* Remove representor devices associated with PF */
+	if (eswitch_dev->repr_cnt.nb_repr_created)
+		cnxk_rep_dev_remove(eswitch_dev);
+
 	eswitch_hw_rsrc_cleanup(eswitch_dev);
 	/* Check if this device is hosting common resource */
 	nix = roc_idev_npa_nix_get();
@@ -724,6 +729,13 @@ cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 		    eswitch_dev->repr_cnt.max_repr, eswitch_dev->repr_cnt.nb_repr_created,
 		    roc_nix_get_pf_func(&eswitch_dev->nix));
 
+	/* Probe representor ports */
+	rc = cnxk_rep_dev_probe(pci_dev, eswitch_dev);
+	if (rc) {
+		plt_err("Failed to probe representor ports");
+		goto rsrc_cleanup;
+	}
+
 	/* Spinlock for synchronization between representors traffic and control
 	 * messages
 	 */
diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h
index dcb787cf02..4908c3ba95 100644
--- a/drivers/net/cnxk/cnxk_eswitch.h
+++ b/drivers/net/cnxk/cnxk_eswitch.h
@@ -66,6 +66,11 @@ struct cnxk_eswitch_repr_cnt {
 	uint16_t nb_repr_started;
 };
 
+struct cnxk_eswitch_switch_domain {
+	uint16_t switch_domain_id;
+	uint16_t pf;
+};
+
 struct cnxk_rep_info {
 	struct rte_eth_dev *rep_eth_dev;
 };
@@ -121,7 +126,8 @@ struct cnxk_eswitch_dev {
 
 	/* Port representor fields */
 	rte_spinlock_t rep_lock;
-	uint16_t switch_domain_id;
+	uint16_t nb_switch_domain;
+	struct cnxk_eswitch_switch_domain sw_dom[RTE_MAX_ETHPORTS];
 	uint16_t eswitch_vdev;
 	struct cnxk_rep_info *rep_info;
 };
diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c
new file mode 100644
index 0000000000..295bea3724
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_rep.c
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+#include <cnxk_rep.h>
+
+#define PF_SHIFT 10
+#define PF_MASK	 0x3F
+
+static uint16_t
+get_pf(uint16_t hw_func)
+{
+	return (hw_func >> PF_SHIFT) & PF_MASK;
+}
+
+static uint16_t
+switch_domain_id_allocate(struct cnxk_eswitch_dev *eswitch_dev, uint16_t pf)
+{
+	int i = 0;
+
+	for (i = 0; i < eswitch_dev->nb_switch_domain; i++) {
+		if (eswitch_dev->sw_dom[i].pf == pf)
+			return eswitch_dev->sw_dom[i].switch_domain_id;
+	}
+
+	return RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
+}
+
+int
+cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	plt_rep_dbg("Representor port %d uninit", ethdev->data->port_id);
+	rte_free(ethdev->data->mac_addrs);
+	ethdev->data->mac_addrs = NULL;
+
+	return 0;
+}
+
+int
+cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	int i, rc = 0;
+
+	for (i = 0; i < eswitch_dev->nb_switch_domain; i++) {
+		rc = rte_eth_switch_domain_free(eswitch_dev->sw_dom[i].switch_domain_id);
+		if (rc)
+			plt_err("Failed to free switch domain: %d", rc);
+	}
+
+	return rc;
+}
+
+static int
+cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	uint16_t pf, prev_pf = 0, switch_domain_id;
+	int rc, i, j = 0;
+
+	if (eswitch_dev->rep_info)
+		return 0;
+
+	eswitch_dev->rep_info =
+		plt_zmalloc(sizeof(eswitch_dev->rep_info[0]) * eswitch_dev->repr_cnt.max_repr, 0);
+	if (!eswitch_dev->rep_info) {
+		plt_err("Failed to alloc memory for rep info");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	/* Allocate switch domain for all PFs (VFs will be under same domain as PF) */
+	for (i = 0; i < eswitch_dev->repr_cnt.max_repr; i++) {
+		pf = get_pf(eswitch_dev->nix.rep_pfvf_map[i]);
+		if (pf == prev_pf)
+			continue;
+
+		rc = rte_eth_switch_domain_alloc(&switch_domain_id);
+		if (rc) {
+			plt_err("Failed to alloc switch domain: %d", rc);
+			goto fail;
+		}
+		plt_rep_dbg("Allocated switch domain id %d for pf %d", switch_domain_id, pf);
+		eswitch_dev->sw_dom[j].switch_domain_id = switch_domain_id;
+		eswitch_dev->sw_dom[j].pf = pf;
+		prev_pf = pf;
+		j++;
+	}
+	eswitch_dev->nb_switch_domain = j;
+
+	return 0;
+fail:
+	return rc;
+}
+
+static uint16_t
+cnxk_rep_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	PLT_SET_USED(tx_queue);
+	PLT_SET_USED(tx_pkts);
+	PLT_SET_USED(nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+cnxk_rep_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	PLT_SET_USED(rx_queue);
+	PLT_SET_USED(rx_pkts);
+	PLT_SET_USED(nb_pkts);
+
+	return 0;
+}
+
+static int
+cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params)
+{
+	struct cnxk_rep_dev *rep_params = (struct cnxk_rep_dev *)params;
+	struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev);
+
+	rep_dev->port_id = rep_params->port_id;
+	rep_dev->switch_domain_id = rep_params->switch_domain_id;
+	rep_dev->parent_dev = rep_params->parent_dev;
+	rep_dev->hw_func = rep_params->hw_func;
+	rep_dev->rep_id = rep_params->rep_id;
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+	eth_dev->data->representor_id = rep_params->port_id;
+	eth_dev->data->backer_port_id = eth_dev->data->port_id;
+
+	eth_dev->data->mac_addrs = plt_zmalloc(RTE_ETHER_ADDR_LEN, 0);
+	if (!eth_dev->data->mac_addrs) {
+		plt_err("Failed to allocate memory for mac addr");
+		return -ENOMEM;
+	}
+
+	rte_eth_random_addr(rep_dev->mac_addr);
+	memcpy(eth_dev->data->mac_addrs, rep_dev->mac_addr, RTE_ETHER_ADDR_LEN);
+
+	/* Set the device operations */
+	eth_dev->dev_ops = &cnxk_rep_dev_ops;
+
+	/* Rx/Tx stub functions to avoid crashes */
+	eth_dev->rx_pkt_burst = cnxk_rep_rx_burst;
+	eth_dev->tx_pkt_burst = cnxk_rep_tx_burst;
+
+	/* Only single queues for representor devices */
+	eth_dev->data->nb_rx_queues = 1;
+	eth_dev->data->nb_tx_queues = 1;
+
+	eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
+	eth_dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+
+	return 0;
+}
+
+static int
+create_representor_ethdev(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev,
+			  struct cnxk_eswitch_devargs *esw_da, int idx)
+{
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct rte_eth_dev *rep_eth_dev;
+	uint16_t hw_func;
+	int rc = 0;
+
+	struct cnxk_rep_dev rep = {.port_id = eswitch_dev->repr_cnt.nb_repr_probed,
+				   .parent_dev = eswitch_dev};
+
+	if (esw_da->type == CNXK_ESW_DA_TYPE_PFVF) {
+		hw_func = esw_da->repr_hw_info[idx].hw_func;
+		rep.switch_domain_id = switch_domain_id_allocate(eswitch_dev, get_pf(hw_func));
+		if (rep.switch_domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID) {
+			plt_err("Failed to get a valid switch domain id");
+			rc = -EINVAL;
+			goto fail;
+		}
+
+		esw_da->repr_hw_info[idx].port_id = rep.port_id;
+		/* Representor port name: net_<bdf>_hw_<hw_func>_representor_<port> */
+		snprintf(name, sizeof(name), "net_%s_hw_%x_representor_%d", pci_dev->device.name,
+			 hw_func, rep.port_id);
+
+		rep.hw_func = hw_func;
+		rep.rep_id = esw_da->repr_hw_info[idx].rep_id;
+
+	} else {
+		snprintf(name, sizeof(name), "net_%s_representor_%d", pci_dev->device.name,
+			 rep.port_id);
+		rep.switch_domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
+	}
+
+	rc = rte_eth_dev_create(&pci_dev->device, name, sizeof(struct cnxk_rep_dev), NULL, NULL,
+				cnxk_rep_dev_init, &rep);
+	if (rc) {
+		plt_err("Failed to create cnxk vf representor %s", name);
+		rc = -EINVAL;
+		goto fail;
+	}
+
+	rep_eth_dev = rte_eth_dev_allocated(name);
+	if (!rep_eth_dev) {
+		plt_err("Failed to find the eth_dev for VF-Rep: %s.", name);
+		rc = -ENODEV;
+		goto fail;
+	}
+
+	plt_rep_dbg("Representor portid %d (%s) type %d probe done", rep_eth_dev->data->port_id,
+		    name, esw_da->da.type);
+	eswitch_dev->rep_info[rep.port_id].rep_eth_dev = rep_eth_dev;
+	eswitch_dev->repr_cnt.nb_repr_probed++;
+
+	return 0;
+fail:
+	return rc;
+}
+
+int
+cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev)
+{
+	struct cnxk_eswitch_devargs *esw_da;
+	uint16_t num_rep;
+	int i, j, rc;
+
+	if (eswitch_dev->repr_cnt.nb_repr_created > RTE_MAX_ETHPORTS) {
+		plt_err("nb_representor_ports %d > RTE_MAX_ETHPORTS %d",
+			eswitch_dev->repr_cnt.nb_repr_created, RTE_MAX_ETHPORTS);
+		rc = -EINVAL;
+		goto fail;
+	}
+
+	/* Initialize the internals of representor ports */
+	rc = cnxk_rep_parent_setup(eswitch_dev);
+	if (rc) {
+		plt_err("Failed to set up the parent device, err %d", rc);
+		goto fail;
+	}
+
+	for (i = eswitch_dev->last_probed; i < eswitch_dev->nb_esw_da; i++) {
+		esw_da = &eswitch_dev->esw_da[i];
+		/* Check the representor devargs */
+		num_rep = esw_da->nb_repr_ports;
+		for (j = 0; j < num_rep; j++) {
+			rc = create_representor_ethdev(pci_dev, eswitch_dev, esw_da, j);
+			if (rc)
+				goto fail;
+		}
+	}
+	eswitch_dev->last_probed = i;
+
+	return 0;
fail:
+	return rc;
+}
diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h
new file mode 100644
index 0000000000..2cb3ae8ac5
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_rep.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+#include <cnxk_eswitch.h>
+#include <cnxk_ethdev.h>
+
+#ifndef __CNXK_REP_H__
+#define __CNXK_REP_H__
+
+/* Common ethdev ops */
+extern struct eth_dev_ops cnxk_rep_dev_ops;
+
+struct cnxk_rep_dev {
+	uint16_t port_id;
+	uint16_t rep_id;
+	uint16_t switch_domain_id;
+	struct cnxk_eswitch_dev *parent_dev;
+	uint16_t hw_func;
+	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
+static inline struct cnxk_rep_dev *
+cnxk_rep_pmd_priv(const struct rte_eth_dev *eth_dev)
+{
+	return eth_dev->data->dev_private;
+}
+
+int cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev);
+int cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev);
+int cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev);
+int cnxk_rep_dev_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info);
+int cnxk_rep_representor_info_get(struct rte_eth_dev *dev, struct rte_eth_representor_info *info);
+int cnxk_rep_dev_configure(struct rte_eth_dev *eth_dev);
+
+int cnxk_rep_link_update(struct rte_eth_dev *eth_dev, int wait_to_compl);
+int cnxk_rep_dev_start(struct rte_eth_dev *eth_dev);
+int cnxk_rep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t queue_idx, uint16_t nb_desc,
+			    unsigned int socket_id, const struct rte_eth_rxconf *rx_conf,
+			    struct rte_mempool *mp);
+int cnxk_rep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t queue_idx, uint16_t nb_desc,
+			    unsigned int socket_id, const struct rte_eth_txconf *tx_conf);
+void cnxk_rep_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
+void cnxk_rep_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
+int cnxk_rep_dev_stop(struct rte_eth_dev *eth_dev);
+int cnxk_rep_dev_close(struct rte_eth_dev *eth_dev);
+int cnxk_rep_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats);
+int cnxk_rep_stats_reset(struct rte_eth_dev *eth_dev);
+int cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops);
+
+#endif /* __CNXK_REP_H__ */
diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c
new file mode 100644
index 0000000000..67dcc422e3
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_rep_ops.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include <cnxk_rep.h>
+
+int
+cnxk_rep_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(wait_to_complete);
+	return 0;
+}
+
+int
+cnxk_rep_dev_info_get(struct rte_eth_dev *ethdev, struct rte_eth_dev_info *devinfo)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(devinfo);
+	return 0;
+}
+
+int
+cnxk_rep_dev_configure(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_dev_start(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_dev_close(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_dev_stop(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_rx_queue_setup(struct rte_eth_dev *ethdev, uint16_t rx_queue_id, uint16_t nb_rx_desc,
+			unsigned int socket_id, const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mb_pool)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(rx_queue_id);
+	PLT_SET_USED(nb_rx_desc);
+	PLT_SET_USED(socket_id);
+	PLT_SET_USED(rx_conf);
+	PLT_SET_USED(mb_pool);
+	return 0;
+}
+
+void
+cnxk_rep_rx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(queue_id);
+}
+
+int
+cnxk_rep_tx_queue_setup(struct rte_eth_dev *ethdev, uint16_t tx_queue_id, uint16_t nb_tx_desc,
+			unsigned int socket_id, const struct rte_eth_txconf *tx_conf)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(tx_queue_id);
+	PLT_SET_USED(nb_tx_desc);
+	PLT_SET_USED(socket_id);
+	PLT_SET_USED(tx_conf);
+	return 0;
+}
+
+void
+cnxk_rep_tx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(queue_id);
+}
+
+int
+cnxk_rep_stats_get(struct rte_eth_dev *ethdev, struct rte_eth_stats *stats)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(stats);
+	return 0;
+}
+
+int
+cnxk_rep_stats_reset(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(ops);
+	return 0;
+}
+
+/* CNXK platform representor dev ops */
+struct eth_dev_ops cnxk_rep_dev_ops = {
+	.dev_infos_get = cnxk_rep_dev_info_get,
+	.dev_configure = cnxk_rep_dev_configure,
+	.dev_start = cnxk_rep_dev_start,
+	.rx_queue_setup = cnxk_rep_rx_queue_setup,
+	.rx_queue_release = cnxk_rep_rx_queue_release,
+	.tx_queue_setup = cnxk_rep_tx_queue_setup,
+	.tx_queue_release = cnxk_rep_tx_queue_release,
+	.link_update = cnxk_rep_link_update,
+	.dev_close = cnxk_rep_dev_close,
+	.dev_stop = cnxk_rep_dev_stop,
+	.stats_get = cnxk_rep_stats_get,
+	.stats_reset = cnxk_rep_stats_reset,
+	.flow_ops_get = cnxk_rep_flow_ops_get
+};
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index ea7e363e89..fcd5d3d569 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -34,6 +34,8 @@ sources = files(
         'cnxk_lookup.c',
         'cnxk_ptp.c',
         'cnxk_flow.c',
+        'cnxk_rep.c',
+        'cnxk_rep_ops.c',
         'cnxk_stats.c',
         'cnxk_tm.c',
 )
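
Note: the ops above are stubs that report success so that generic
applications can bring representor ports up; the real implementations
land later in this series. As a rough sketch (not part of this patch;
plain ethdev API only, with a hypothetical helper name), an application
could enumerate the probed representor ports like this:

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	list_representor_ports(void)
	{
		uint16_t port_id;

		RTE_ETH_FOREACH_DEV(port_id) {
			struct rte_eth_dev_info info;

			if (rte_eth_dev_info_get(port_id, &info) != 0)
				continue;
			/* cnxk_rep_dev_init() sets RTE_ETH_DEV_REPRESENTOR in dev_flags */
			if (*info.dev_flags & RTE_ETH_DEV_REPRESENTOR)
				printf("port %u is a representor port\n", port_id);
		}
	}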