From patchwork Tue Dec 20 03:41:03 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Simei Su
X-Patchwork-Id: 121050
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Simei Su
To:
	qi.z.zhang@intel.com, junfeng.guo@intel.com
Cc: dev@dpdk.org, wenjun1.wu@intel.com, Simei Su
Subject: [PATCH 3/3] net/igc: support IEEE 1588 PTP
Date: Tue, 20 Dec 2022 11:41:03 +0800
Message-Id: <20221220034103.441524-4-simei.su@intel.com>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20221220034103.441524-1-simei.su@intel.com>
References: <20221220034103.441524-1-simei.su@intel.com>
List-Id: DPDK patches and discussions

Add igc support for the new ethdev APIs to enable/disable and
read/write/adjust IEEE 1588 PTP timestamps. An example command for
running ptpclient is:

./build/examples/dpdk-ptpclient -c 1 -n 3 -- -T 0 -p 0x1

Signed-off-by: Simei Su
---
 drivers/net/igc/igc_ethdev.c | 222 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/igc/igc_ethdev.h |   4 +-
 drivers/net/igc/igc_txrx.c   |  50 +++++++++-
 drivers/net/igc/igc_txrx.h   |   1 +
 4 files changed, 272 insertions(+), 5 deletions(-)

diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index dcd262f..ef3346b 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -78,6 +78,16 @@
 #define IGC_ALARM_INTERVAL	8000000u
 /* us, about 13.6s some per-queue registers will wrap around back to 0. */
+/* Transmit and receive latency (for PTP timestamps) */
+#define IGC_I225_TX_LATENCY_10		240
+#define IGC_I225_TX_LATENCY_100		58
+#define IGC_I225_TX_LATENCY_1000	80
+#define IGC_I225_TX_LATENCY_2500	1325
+#define IGC_I225_RX_LATENCY_10		6450
+#define IGC_I225_RX_LATENCY_100		185
+#define IGC_I225_RX_LATENCY_1000	300
+#define IGC_I225_RX_LATENCY_2500	1485
+
 static const struct rte_eth_desc_lim rx_desc_lim = {
 	.nb_max = IGC_MAX_RXD,
 	.nb_min = IGC_MIN_RXD,
@@ -245,6 +255,18 @@ eth_igc_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on);
 static int eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
 		enum rte_vlan_type vlan_type, uint16_t tpid);
+static int eth_igc_timesync_enable(struct rte_eth_dev *dev);
+static int eth_igc_timesync_disable(struct rte_eth_dev *dev);
+static int eth_igc_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+		struct timespec *timestamp,
+		uint32_t flags);
+static int eth_igc_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+		struct timespec *timestamp);
+static int eth_igc_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta);
+static int eth_igc_timesync_read_time(struct rte_eth_dev *dev,
+		struct timespec *timestamp);
+static int eth_igc_timesync_write_time(struct rte_eth_dev *dev,
+		const struct timespec *timestamp);
 
 static const struct eth_dev_ops eth_igc_ops = {
 	.dev_configure		= eth_igc_configure,
@@ -298,6 +320,13 @@ static const struct eth_dev_ops eth_igc_ops = {
 	.vlan_tpid_set		= eth_igc_vlan_tpid_set,
 	.vlan_strip_queue_set	= eth_igc_vlan_strip_queue_set,
 	.flow_ops_get		= eth_igc_flow_ops_get,
+	.timesync_enable	= eth_igc_timesync_enable,
+	.timesync_disable	= eth_igc_timesync_disable,
+	.timesync_read_rx_timestamp = eth_igc_timesync_read_rx_timestamp,
+	.timesync_read_tx_timestamp = eth_igc_timesync_read_tx_timestamp,
+	.timesync_adjust_time	= eth_igc_timesync_adjust_time,
+	.timesync_read_time	= eth_igc_timesync_read_time,
+	.timesync_write_time	= eth_igc_timesync_write_time,
 };
 
 /*
@@ -2582,6 +2611,199 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
 }
 
 static int
+eth_igc_timesync_enable(struct rte_eth_dev *dev)
+{
+	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+	struct timespec system_time;
+	struct igc_rx_queue *rxq;
+	uint32_t val;
+	uint16_t i;
+
+	IGC_WRITE_REG(hw, IGC_TSAUXC, 0x0);
+
+	clock_gettime(CLOCK_REALTIME, &system_time);
+	IGC_WRITE_REG(hw, IGC_SYSTIML, system_time.tv_nsec);
+	IGC_WRITE_REG(hw, IGC_SYSTIMH, system_time.tv_sec);
+
+	/* Enable timestamping of received PTP packets. */
+	val = IGC_READ_REG(hw, IGC_RXPBS);
+	val |= IGC_RXPBS_CFG_TS_EN;
+	IGC_WRITE_REG(hw, IGC_RXPBS, val);
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		val = IGC_READ_REG(hw, IGC_SRRCTL(i));
+		/* For now, only support retrieving Rx timestamp from timer0. */
+		val |= IGC_SRRCTL_TIMER1SEL(0) | IGC_SRRCTL_TIMER0SEL(0) |
+		       IGC_SRRCTL_TIMESTAMP;
+		IGC_WRITE_REG(hw, IGC_SRRCTL(i), val);
+	}
+
+	val = IGC_TSYNCRXCTL_ENABLED | IGC_TSYNCRXCTL_TYPE_ALL |
+	      IGC_TSYNCRXCTL_RXSYNSIG;
+	IGC_WRITE_REG(hw, IGC_TSYNCRXCTL, val);
+
+	/* Enable timestamping of transmitted PTP packets. */
+	IGC_WRITE_REG(hw, IGC_TSYNCTXCTL, IGC_TSYNCTXCTL_ENABLED |
+		      IGC_TSYNCTXCTL_TXSYNSIG);
+
+	/* Read TXSTMP registers to discard any timestamp previously stored. */
+	IGC_READ_REG(hw, IGC_TXSTMPL);
+	IGC_READ_REG(hw, IGC_TXSTMPH);
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+	}
+
+	return 0;
+}
+
+static int
+eth_igc_timesync_read_time(struct rte_eth_dev *dev, struct timespec *ts)
+{
+	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+
+	ts->tv_nsec = IGC_READ_REG(hw, IGC_SYSTIML);
+	ts->tv_sec = IGC_READ_REG(hw, IGC_SYSTIMH);
+
+	return 0;
+}
+
+static int
+eth_igc_timesync_write_time(struct rte_eth_dev *dev, const struct timespec *ts)
+{
+	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+
+	IGC_WRITE_REG(hw, IGC_SYSTIML, ts->tv_nsec);
+	IGC_WRITE_REG(hw, IGC_SYSTIMH, ts->tv_sec);
+
+	return 0;
+}
+
+static int
+eth_igc_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta)
+{
+	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+	uint32_t nsec, sec;
+	uint64_t systime, ns;
+	struct timespec ts;
+
+	nsec = IGC_READ_REG(hw, IGC_SYSTIML);
+	sec = IGC_READ_REG(hw, IGC_SYSTIMH);
+	systime = (uint64_t)sec * NSEC_PER_SEC + nsec;
+
+	ns = systime + delta;
+	ts = rte_ns_to_timespec(ns);
+
+	IGC_WRITE_REG(hw, IGC_SYSTIML, ts.tv_nsec);
+	IGC_WRITE_REG(hw, IGC_SYSTIMH, ts.tv_sec);
+
+	return 0;
+}
+
+static int
+eth_igc_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+			struct timespec *timestamp,
+			uint32_t flags)
+{
+	struct rte_eth_link link;
+	int adjust = 0;
+	struct igc_rx_queue *rxq;
+	uint64_t rx_timestamp;
+
+	/* Get current link speed. */
+	eth_igc_link_update(dev, 1);
+	rte_eth_linkstatus_get(dev, &link);
+
+	switch (link.link_speed) {
+	case SPEED_10:
+		adjust = IGC_I225_RX_LATENCY_10;
+		break;
+	case SPEED_100:
+		adjust = IGC_I225_RX_LATENCY_100;
+		break;
+	case SPEED_1000:
+		adjust = IGC_I225_RX_LATENCY_1000;
+		break;
+	case SPEED_2500:
+		adjust = IGC_I225_RX_LATENCY_2500;
+		break;
+	}
+
+	rxq = dev->data->rx_queues[flags];
+	rx_timestamp = rxq->rx_timestamp - adjust;
+	*timestamp = rte_ns_to_timespec(rx_timestamp);
+
+	return 0;
+}
+
+static int
+eth_igc_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+			struct timespec *timestamp)
+{
+	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+	struct rte_eth_link link;
+	uint32_t val, nsec, sec;
+	uint64_t tx_timestamp;
+	int adjust = 0;
+
+	val = IGC_READ_REG(hw, IGC_TSYNCTXCTL);
+	if (!(val & IGC_TSYNCTXCTL_VALID))
+		return -EINVAL;
+
+	nsec = IGC_READ_REG(hw, IGC_TXSTMPL);
+	sec = IGC_READ_REG(hw, IGC_TXSTMPH);
+	tx_timestamp = (uint64_t)sec * NSEC_PER_SEC + nsec;
+
+	/* Get current link speed. */
+	eth_igc_link_update(dev, 1);
+	rte_eth_linkstatus_get(dev, &link);
+
+	switch (link.link_speed) {
+	case SPEED_10:
+		adjust = IGC_I225_TX_LATENCY_10;
+		break;
+	case SPEED_100:
+		adjust = IGC_I225_TX_LATENCY_100;
+		break;
+	case SPEED_1000:
+		adjust = IGC_I225_TX_LATENCY_1000;
+		break;
+	case SPEED_2500:
+		adjust = IGC_I225_TX_LATENCY_2500;
+		break;
+	}
+
+	tx_timestamp += adjust;
+	*timestamp = rte_ns_to_timespec(tx_timestamp);
+
+	return 0;
+}
+
+static int
+eth_igc_timesync_disable(struct rte_eth_dev *dev)
+{
+	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+	uint32_t val;
+
+	/* Disable timestamping of transmitted PTP packets. */
+	IGC_WRITE_REG(hw, IGC_TSYNCTXCTL, 0);
+
+	/* Disable timestamping of received PTP packets. */
+	IGC_WRITE_REG(hw, IGC_TSYNCRXCTL, 0);
+
+	val = IGC_READ_REG(hw, IGC_RXPBS);
+	val &= ~IGC_RXPBS_CFG_TS_EN;
+	IGC_WRITE_REG(hw, IGC_RXPBS, val);
+
+	val = IGC_READ_REG(hw, IGC_SRRCTL(0));
+	val &= ~IGC_SRRCTL_TIMESTAMP;
+	IGC_WRITE_REG(hw, IGC_SRRCTL(0), val);
+
+	return 0;
+}
+
+static int
 eth_igc_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	struct rte_pci_device *pci_dev)
 {
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index f56cad7..237d3c1 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -7,6 +7,7 @@
 
 #include <rte_ethdev.h>
 #include <rte_flow.h>
+#include <rte_time.h>
 
 #include "base/igc_osdep.h"
 #include "base/igc_hw.h"
@@ -75,7 +76,8 @@ extern "C" {
 	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
 	RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
 	RTE_ETH_RX_OFFLOAD_SCATTER | \
-	RTE_ETH_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_RSS_HASH | \
+	RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 
 #define IGC_TX_OFFLOAD_ALL	( \
 	RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index c462e91..0236c7f 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -81,7 +81,8 @@
 	RTE_MBUF_F_TX_IP_CKSUM | \
 	RTE_MBUF_F_TX_L4_MASK | \
 	RTE_MBUF_F_TX_TCP_SEG | \
-	RTE_MBUF_F_TX_UDP_SEG)
+	RTE_MBUF_F_TX_UDP_SEG | \
+	RTE_MBUF_F_TX_IEEE1588_TMST)
 
 #define IGC_TX_OFFLOAD_SEG	(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)
 
@@ -93,6 +94,8 @@
 #define IGC_TX_OFFLOAD_NOTSUP_MASK (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IGC_TX_OFFLOAD_MASK)
 
+#define IGC_TS_HDR_LEN 16
+
 static inline uint64_t
 rx_desc_statuserr_to_pkt_flags(uint32_t statuserr)
 {
@@ -222,6 +225,9 @@ rx_desc_get_pkt_info(struct igc_rx_queue *rxq, struct rte_mbuf *rxm,
 
 	pkt_flags |= rx_desc_statuserr_to_pkt_flags(staterr);
 
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		pkt_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
+
 	rxm->ol_flags = pkt_flags;
 	pkt_info = rte_le_to_cpu_16(rxd->wb.lower.lo_dword.hs_rss.pkt_info);
 	rxm->packet_type = rx_desc_pkt_info_to_pkt_type(pkt_info);
@@ -328,8 +334,15 @@ igc_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxm = rxe->mbuf;
 		rxe->mbuf = nmb;
 		rxdp->read.hdr_addr = 0;
-		rxdp->read.pkt_addr =
+
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+			rxdp->read.pkt_addr =
+				rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)) -
+				IGC_TS_HDR_LEN;
+		else
+			rxdp->read.pkt_addr =
 			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
 		rxm->next = NULL;
 		rxm->data_off = RTE_PKTMBUF_HEADROOM;
@@ -340,6 +353,14 @@ igc_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 		rx_desc_get_pkt_info(rxq, rxm, &rxd, staterr);
 
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+			uint32_t *ts = rte_pktmbuf_mtod_offset(rxm,
+					uint32_t *, -IGC_TS_HDR_LEN);
+			rxq->rx_timestamp = (uint64_t)ts[3] * NSEC_PER_SEC +
+					ts[2];
+			rxm->timesync = rxq->queue_id;
+		}
+
 		/*
 		 * Store the mbuf address into the next entry of the array
 		 * of returned packets.
@@ -472,8 +493,15 @@ igc_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm = rxe->mbuf;
 		rxe->mbuf = nmb;
 		rxdp->read.hdr_addr = 0;
-		rxdp->read.pkt_addr =
+
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+			rxdp->read.pkt_addr =
+				rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)) -
+				IGC_TS_HDR_LEN;
+		else
+			rxdp->read.pkt_addr =
 			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
 		rxm->next = NULL;
 
 		/*
@@ -537,6 +565,14 @@ igc_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 		rx_desc_get_pkt_info(rxq, first_seg, &rxd, staterr);
 
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+			uint32_t *ts = rte_pktmbuf_mtod_offset(first_seg,
+					uint32_t *, -IGC_TS_HDR_LEN);
+			rxq->rx_timestamp = (uint64_t)ts[3] * NSEC_PER_SEC +
+					ts[2];
+			first_seg->timesync = rxq->queue_id;
+		}
+
 		/*
 		 * Store the mbuf address into the next entry of the array
 		 * of returned packets.
 		 */
@@ -682,7 +718,10 @@ igc_alloc_rx_queue_mbufs(struct igc_rx_queue *rxq)
 		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
 		rxd = &rxq->rx_ring[i];
 		rxd->read.hdr_addr = 0;
-		rxd->read.pkt_addr = dma_addr;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+			rxd->read.pkt_addr = dma_addr - IGC_TS_HDR_LEN;
+		else
+			rxd->read.pkt_addr = dma_addr;
 		rxe[i].mbuf = mbuf;
 	}
 
@@ -985,6 +1024,9 @@ igc_rx_init(struct rte_eth_dev *dev)
 		rxq = dev->data->rx_queues[i];
 		rxq->flags = 0;
 
+		if (offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+			rxq->offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+
 		/* Allocate buffers for descriptor rings and set up queue */
 		ret = igc_alloc_rx_queue_mbufs(rxq);
 		if (ret)
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index 5731761..e7272f8 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -42,6 +42,7 @@ struct igc_rx_queue {
 	uint8_t			drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t		flags;    /**< RX flags. */
 	uint64_t		offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
+	uint64_t		rx_timestamp;
};

/** Offload features */