From patchwork Mon Jun 7 17:59:35 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 94010
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Mon, 7 Jun 2021 23:29:35 +0530
Message-ID: <20210607175943.31690-55-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210607175943.31690-1-ndabilpuram@marvell.com>
References: <20210306153404.10781-1-ndabilpuram@marvell.com>
 <20210607175943.31690-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v2 54/62] net/cnxk: register callback to get PTP status
List-Id: DPDK patches and discussions

From: Sunil Kumar Kori

When the PTP state is changed in hardware (enabled or disabled), the change is
propagated to the user via a registered callback. Register that callback so the
driver is notified of the current PTP status.
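Below is a minimal sketch (editor's illustration, not part of the patch) of the
flow described above: the PMD hands a callback to the ROC layer at probe time,
and the ROC mbox handler invokes it whenever the PF reports a PTP
enable/disable change. The callback signature and the
roc_nix_ptp_info_cb_register() usage mirror the diff that follows; everything
else is trimmed.

/* Invoked from the ROC layer when the PF signals a PTP state change */
static int
ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
{
	/* Recover the per-port private data from the ROC handle,
	 * as the patch below does with the same cast.
	 */
	struct cnxk_eth_dev *dev = (struct cnxk_eth_dev *)nix;

	/* Cache the new state; Rx/Tx paths and MTU handling use it to
	 * account for the timestamp header added by hardware.
	 */
	dev->ptp_en = ptp_en;
	return 0;
}

/* Registered once per port during probe (primary process only): */
roc_nix_ptp_info_cb_register(&dev->nix, ptp_info_update_cb);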
Signed-off-by: Sunil Kumar Kori
---
 drivers/net/cnxk/cn10k_ethdev.c    | 87 ++++++++++++++++++++++++++++++++++++--
 drivers/net/cnxk/cn10k_rx.h        |  1 +
 drivers/net/cnxk/cn9k_ethdev.c     | 73 ++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn9k_rx.h         |  1 +
 drivers/net/cnxk/cnxk_ethdev.c     |  2 +-
 drivers/net/cnxk/cnxk_ethdev.h     |  5 +++
 drivers/net/cnxk/cnxk_ethdev_ops.c |  2 +
 7 files changed, 166 insertions(+), 5 deletions(-)

diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index fa7b835..7f02b6b 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -266,6 +266,76 @@ cn10k_nix_configure(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+/* Function to enable ptp config for VFs */
+static void
+nix_ptp_enable_vf(struct rte_eth_dev *eth_dev)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+	if (nix_recalc_mtu(eth_dev))
+		plt_err("Failed to set MTU size for ptp");
+
+	dev->scalar_ena = true;
+	dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
+
+	/* Setting up the function pointers as per new offload flags */
+	cn10k_eth_set_rx_function(eth_dev);
+	cn10k_eth_set_tx_function(eth_dev);
+}
+
+static uint16_t
+nix_ptp_vf_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
+{
+	struct cn10k_eth_rxq *rxq = queue;
+	struct cnxk_eth_rxq_sp *rxq_sp;
+	struct rte_eth_dev *eth_dev;
+
+	RTE_SET_USED(mbufs);
+	RTE_SET_USED(pkts);
+
+	rxq_sp = ((struct cnxk_eth_rxq_sp *)rxq) - 1;
+	eth_dev = rxq_sp->dev->eth_dev;
+	nix_ptp_enable_vf(eth_dev);
+
+	return 0;
+}
+
+static int
+cn10k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
+{
+	struct cnxk_eth_dev *dev = (struct cnxk_eth_dev *)nix;
+	struct rte_eth_dev *eth_dev;
+	struct cn10k_eth_rxq *rxq;
+	int i;
+
+	if (!dev)
+		return -EINVAL;
+
+	eth_dev = dev->eth_dev;
+	if (!eth_dev)
+		return -EINVAL;
+
+	dev->ptp_en = ptp_en;
+
+	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		rxq->mbuf_initializer = cnxk_nix_rxq_mbuf_setup(dev);
+	}
+
+	if (roc_nix_is_vf_or_sdp(nix) && !(roc_nix_is_sdp(nix)) &&
+	    !(roc_nix_is_lbk(nix))) {
+		/* In case of VF, setting of MTU cant be done directly in this
+		 * function as this is running as part of MBOX request(PF->VF)
+		 * and MTU setting also requires MBOX message to be
+		 * sent(VF->PF)
+		 */
+		eth_dev->rx_pkt_burst = nix_ptp_vf_burst;
+		rte_mb();
+	}
+
+	return 0;
+}
+
 static int
 cn10k_nix_dev_start(struct rte_eth_dev *eth_dev)
 {
@@ -317,6 +387,7 @@ static int
 cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 {
 	struct rte_eth_dev *eth_dev;
+	struct cnxk_eth_dev *dev;
 	int rc;
 
 	if (RTE_CACHE_LINE_SIZE != 64) {
@@ -337,15 +408,23 @@ cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	if (rc)
 		return rc;
 
+	/* Find eth dev allocated */
+	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (!eth_dev)
+		return -ENOENT;
+
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
-		eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-		if (!eth_dev)
-			return -ENOENT;
-
 		/* Setup callbacks for secondary process */
 		cn10k_eth_set_tx_function(eth_dev);
 		cn10k_eth_set_rx_function(eth_dev);
+		return 0;
 	}
+
+	dev = cnxk_eth_pmd_priv(eth_dev);
+
+	/* Register up msg callbacks for PTP information */
+	roc_nix_ptp_info_cb_register(&dev->nix, cn10k_nix_ptp_info_update_cb);
+
 	return 0;
 }
 
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 7bb9dd8..29ee0ac 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -12,6 +12,7 @@
 #define NIX_RX_OFFLOAD_PTYPE_F       BIT(1)
 #define NIX_RX_OFFLOAD_CHECKSUM_F    BIT(2)
 #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)
+#define NIX_RX_OFFLOAD_TSTAMP_F      BIT(4)
 
 /* Flags to control cqe_to_mbuf conversion function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index e47fa9a..798a03d 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -275,6 +275,76 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+/* Function to enable ptp config for VFs */
+static void
+nix_ptp_enable_vf(struct rte_eth_dev *eth_dev)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+	if (nix_recalc_mtu(eth_dev))
+		plt_err("Failed to set MTU size for ptp");
+
+	dev->scalar_ena = true;
+	dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
+
+	/* Setting up the function pointers as per new offload flags */
+	cn9k_eth_set_rx_function(eth_dev);
+	cn9k_eth_set_tx_function(eth_dev);
+}
+
+static uint16_t
+nix_ptp_vf_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
+{
+	struct cn9k_eth_rxq *rxq = queue;
+	struct cnxk_eth_rxq_sp *rxq_sp;
+	struct rte_eth_dev *eth_dev;
+
+	RTE_SET_USED(mbufs);
+	RTE_SET_USED(pkts);
+
+	rxq_sp = ((struct cnxk_eth_rxq_sp *)rxq) - 1;
+	eth_dev = rxq_sp->dev->eth_dev;
+	nix_ptp_enable_vf(eth_dev);
+
+	return 0;
+}
+
+static int
+cn9k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
+{
+	struct cnxk_eth_dev *dev = (struct cnxk_eth_dev *)nix;
+	struct rte_eth_dev *eth_dev;
+	struct cn9k_eth_rxq *rxq;
+	int i;
+
+	if (!dev)
+		return -EINVAL;
+
+	eth_dev = dev->eth_dev;
+	if (!eth_dev)
+		return -EINVAL;
+
+	dev->ptp_en = ptp_en;
+
+	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		rxq->mbuf_initializer = cnxk_nix_rxq_mbuf_setup(dev);
+	}
+
+	if (roc_nix_is_vf_or_sdp(nix) && !(roc_nix_is_sdp(nix)) &&
+	    !(roc_nix_is_lbk(nix))) {
+		/* In case of VF, setting of MTU cant be done directly in this
+		 * function as this is running as part of MBOX request(PF->VF)
+		 * and MTU setting also requires MBOX message to be
+		 * sent(VF->PF)
+		 */
+		eth_dev->rx_pkt_burst = nix_ptp_vf_burst;
+		rte_mb();
+	}
+
+	return 0;
+}
+
 static int
 cn9k_nix_dev_start(struct rte_eth_dev *eth_dev)
 {
@@ -379,6 +449,9 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 
 	dev->hwcap = 0;
 
+	/* Register up msg callbacks for PTP information */
+	roc_nix_ptp_info_cb_register(&dev->nix, cn9k_nix_ptp_info_update_cb);
+
 	/* Update HW erratas */
 	if (roc_model_is_cn96_A0() || roc_model_is_cn95_A0())
 		dev->cq_min_4k = 1;
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index bc04f5c..f4b3282 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -13,6 +13,7 @@
 #define NIX_RX_OFFLOAD_PTYPE_F       BIT(1)
 #define NIX_RX_OFFLOAD_CHECKSUM_F    BIT(2)
 #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)
+#define NIX_RX_OFFLOAD_TSTAMP_F      BIT(4)
 
 /* Flags to control cqe_to_mbuf conversion function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index a40625e..afb5b56 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -59,7 +59,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
 	}
 }
 
-static int
+int
 nix_recalc_mtu(struct rte_eth_dev *eth_dev)
 {
 	struct rte_eth_dev_data *data = eth_dev->data;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index e5a7e98..b070df7 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -100,6 +100,7 @@
 /* Default mark value used when none is provided. */
 #define CNXK_FLOW_ACTION_FLAG_DEFAULT 0xffff
 
+#define CNXK_NIX_TIMESYNC_RX_OFFSET 8
 #define PTYPE_NON_TUNNEL_WIDTH    16
 #define PTYPE_TUNNEL_WIDTH        12
 #define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_NON_TUNNEL_WIDTH)
@@ -153,6 +154,7 @@ struct cnxk_eth_dev {
 	uint16_t flags;
 	uint8_t ptype_disable;
 	bool scalar_ena;
+	bool ptp_en;
 
 	/* Pointer back to rte */
 	struct rte_eth_dev *eth_dev;
@@ -316,6 +318,9 @@ int cnxk_ethdev_parse_devargs(struct rte_devargs *devargs,
 int cnxk_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
 			 struct rte_dev_reg_info *regs);
 
+/* Other private functions */
+int nix_recalc_mtu(struct rte_eth_dev *eth_dev);
+
 /* Inlines */
 static __rte_always_inline uint64_t
 cnxk_pktmbuf_detach(struct rte_mbuf *m)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index a15d87c..d437e2c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -379,6 +379,8 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	int rc = -EINVAL;
 	uint32_t buffsz;
 
+	frame_size += CNXK_NIX_TIMESYNC_RX_OFFSET * dev->ptp_en;
+
 	/* Check if MTU is within the allowed range */
 	if ((frame_size - RTE_ETHER_CRC_LEN) < NIX_MIN_HW_FRS) {
 		plt_err("MTU is lesser than minimum");