From patchwork Fri Jul 16 08:35:40 2021
X-Patchwork-Submitter: Heinrich Kuhn
X-Patchwork-Id: 95959
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Heinrich Kuhn
To: dev@dpdk.org
Cc: Heinrich Kuhn , Simon Horman
Date: Fri, 16 Jul 2021 10:35:40 +0200
Message-Id: <20210716083545.34444-2-heinrich.kuhn@netronome.com>
X-Mailer: git-send-email 2.30.1 (Apple Git-130)
In-Reply-To: <20210716083545.34444-1-heinrich.kuhn@netronome.com>
References: <20210716082314.33865-1-heinrich.kuhn@netronome.com> <20210716083545.34444-1-heinrich.kuhn@netronome.com>
Subject: [dpdk-dev] [PATCH v2 1/7] net/nfp: split rxtx headers into separate file

This change splits out the rx/tx specific structs and defines from the main nfp_net_pmd header file and into their own header file. Signed-off-by: Heinrich Kuhn Signed-off-by: Simon Horman --- drivers/net/nfp/nfp_net.c | 1 + drivers/net/nfp/nfp_net_pmd.h | 248 ------------------------------ drivers/net/nfp/nfp_rxtx.h | 276 ++++++++++++++++++++++++++++++++++ 3 files changed, 277 insertions(+), 248 deletions(-) create mode 100644 drivers/net/nfp/nfp_rxtx.h diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c index b18edd8c7b..67288abeff 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_net.c @@ -38,6 +38,7 @@ #include "nfpcore/nfp_nsp.h" #include "nfp_net_pmd.h" +#include "nfp_rxtx.h" #include "nfp_net_logs.h" #include "nfp_net_ctrl.h" diff --git a/drivers/net/nfp/nfp_net_pmd.h b/drivers/net/nfp/nfp_net_pmd.h index 212f9ef162..a3a3ba32d6 100644 --- a/drivers/net/nfp/nfp_net_pmd.h +++ b/drivers/net/nfp/nfp_net_pmd.h @@ -23,19 +23,6 @@ /* Forward declaration */ struct nfp_net_adapter; -/* - * The maximum number of descriptors is limited by design as - * DPDK uses uint16_t variables for these values - */ -#define NFP_NET_MAX_TX_DESC (32 * 1024) -#define NFP_NET_MIN_TX_DESC 64 - -#define NFP_NET_MAX_RX_DESC (32 * 1024) -#define NFP_NET_MIN_RX_DESC 64 - -/* Descriptor alignment */ -#define NFP_ALIGN_RING_DESC 128 - #define NFP_TX_MAX_SEG UINT8_MAX #define NFP_TX_MAX_MTU_SEG 8 @@ -150,241 +137,6 @@ static inline void nn_writeq(uint64_t val, volatile void *addr) nn_writel(val, addr); } -/* TX descriptor format */ -#define PCIE_DESC_TX_EOP (1 << 7) -#define PCIE_DESC_TX_OFFSET_MASK (0x7f) - -/* Flags in the host TX descriptor */ -#define PCIE_DESC_TX_CSUM (1 << 7) -#define PCIE_DESC_TX_IP4_CSUM (1 << 6) -#define PCIE_DESC_TX_TCP_CSUM (1 << 5) -#define PCIE_DESC_TX_UDP_CSUM (1 << 4) -#define PCIE_DESC_TX_VLAN (1 << 3) -#define PCIE_DESC_TX_LSO (1 << 2) -#define PCIE_DESC_TX_ENCAP_NONE (0) -#define PCIE_DESC_TX_ENCAP_VXLAN (1 << 1) -#define PCIE_DESC_TX_ENCAP_GRE (1 << 0) - -struct nfp_net_tx_desc { - union { - struct { - uint8_t dma_addr_hi; /* High bits of host buf address */ - __le16 dma_len; /* Length to DMA for this desc */ - uint8_t offset_eop; /* Offset in buf where pkt starts + - * highest bit is eop flag. - */ - __le32 dma_addr_lo; /* Low 32bit of host buf addr */ - - __le16 mss; /* MSS to be used for LSO */ - uint8_t lso_hdrlen; /* LSO, where the data starts */ - uint8_t flags; /* TX Flags, see @PCIE_DESC_TX_* */ - - union { - struct { - /* - * L3 and L4 header offsets required - * for TSOv2 - */ - uint8_t l3_offset; - uint8_t l4_offset; - }; - __le16 vlan; /* VLAN tag to add if indicated */ - }; - __le16 data_len; /* Length of frame + meta data */ - } __rte_packed; - __le32 vals[4]; - }; -}; - -struct nfp_net_txq { - struct nfp_net_hw *hw; /* Backpointer to nfp_net structure */ - - /* - * Queue information: @qidx is the queue index from Linux's - * perspective. @tx_qcidx is the index of the Queue - * Controller Peripheral queue relative to the TX queue BAR. - * @cnt is the size of the queue in number of - * descriptors. @qcp_q is a pointer to the base of the queue - * structure on the NFP - */ - uint8_t *qcp_q; - - /* - * Read and Write pointers. 
@wr_p and @rd_p are host side pointer, - * they are free running and have little relation to the QCP pointers * - * @qcp_rd_p is a local copy queue controller peripheral read pointer - */ - - uint32_t wr_p; - uint32_t rd_p; - - uint32_t tx_count; - - uint32_t tx_free_thresh; - - /* - * For each descriptor keep a reference to the mbuf and - * DMA address used until completion is signalled. - */ - struct { - struct rte_mbuf *mbuf; - } *txbufs; - - /* - * Information about the host side queue location. @txds is - * the virtual address for the queue, @dma is the DMA address - * of the queue and @size is the size in bytes for the queue - * (needed for free) - */ - struct nfp_net_tx_desc *txds; - - /* - * At this point 48 bytes have been used for all the fields in the - * TX critical path. We have room for 8 bytes and still all placed - * in a cache line. We are not using the threshold values below but - * if we need to, we can add the most used in the remaining bytes. - */ - uint32_t tx_rs_thresh; /* not used by now. Future? */ - uint32_t tx_pthresh; /* not used by now. Future? */ - uint32_t tx_hthresh; /* not used by now. Future? */ - uint32_t tx_wthresh; /* not used by now. Future? */ - uint16_t port_id; - int qidx; - int tx_qcidx; - __le64 dma; -} __rte_aligned(64); - -/* RX and freelist descriptor format */ -#define PCIE_DESC_RX_DD (1 << 7) -#define PCIE_DESC_RX_META_LEN_MASK (0x7f) - -/* Flags in the RX descriptor */ -#define PCIE_DESC_RX_RSS (1 << 15) -#define PCIE_DESC_RX_I_IP4_CSUM (1 << 14) -#define PCIE_DESC_RX_I_IP4_CSUM_OK (1 << 13) -#define PCIE_DESC_RX_I_TCP_CSUM (1 << 12) -#define PCIE_DESC_RX_I_TCP_CSUM_OK (1 << 11) -#define PCIE_DESC_RX_I_UDP_CSUM (1 << 10) -#define PCIE_DESC_RX_I_UDP_CSUM_OK (1 << 9) -#define PCIE_DESC_RX_SPARE (1 << 8) -#define PCIE_DESC_RX_EOP (1 << 7) -#define PCIE_DESC_RX_IP4_CSUM (1 << 6) -#define PCIE_DESC_RX_IP4_CSUM_OK (1 << 5) -#define PCIE_DESC_RX_TCP_CSUM (1 << 4) -#define PCIE_DESC_RX_TCP_CSUM_OK (1 << 3) -#define PCIE_DESC_RX_UDP_CSUM (1 << 2) -#define PCIE_DESC_RX_UDP_CSUM_OK (1 << 1) -#define PCIE_DESC_RX_VLAN (1 << 0) - -#define PCIE_DESC_RX_L4_CSUM_OK (PCIE_DESC_RX_TCP_CSUM_OK | \ - PCIE_DESC_RX_UDP_CSUM_OK) -struct nfp_net_rx_desc { - union { - /* Freelist descriptor */ - struct { - uint8_t dma_addr_hi; - __le16 spare; - uint8_t dd; - - __le32 dma_addr_lo; - } __rte_packed fld; - - /* RX descriptor */ - struct { - __le16 data_len; - uint8_t reserved; - uint8_t meta_len_dd; - - __le16 flags; - __le16 vlan; - } __rte_packed rxd; - - __le32 vals[2]; - }; -}; - -struct nfp_net_rx_buff { - struct rte_mbuf *mbuf; -}; - -struct nfp_net_rxq { - struct nfp_net_hw *hw; /* Backpointer to nfp_net structure */ - - /* - * @qcp_fl and @qcp_rx are pointers to the base addresses of the - * freelist and RX queue controller peripheral queue structures on the - * NFP - */ - uint8_t *qcp_fl; - uint8_t *qcp_rx; - - /* - * Read and Write pointers. @wr_p and @rd_p are host side - * pointer, they are free running and have little relation to - * the QCP pointers. @wr_p is where the driver adds new - * freelist descriptors and @rd_p is where the driver start - * reading descriptors for newly arrive packets from. - */ - uint32_t rd_p; - - /* - * For each buffer placed on the freelist, record the - * associated SKB - */ - struct nfp_net_rx_buff *rxbufs; - - /* - * Information about the host side queue location. @rxds is - * the virtual address for the queue - */ - struct nfp_net_rx_desc *rxds; - - /* - * The mempool is created by the user specifying a mbuf size. 
- * We save here the reference of the mempool needed in the RX - * path and the mbuf size for checking received packets can be - * safely copied to the mbuf using the NFP_NET_RX_OFFSET - */ - struct rte_mempool *mem_pool; - uint16_t mbuf_size; - - /* - * Next two fields are used for giving more free descriptors - * to the NFP - */ - uint16_t rx_free_thresh; - uint16_t nb_rx_hold; - - /* the size of the queue in number of descriptors */ - uint16_t rx_count; - - /* - * Fields above this point fit in a single cache line and are all used - * in the RX critical path. Fields below this point are just used - * during queue configuration or not used at all (yet) - */ - - /* referencing dev->data->port_id */ - uint16_t port_id; - - uint8_t crc_len; /* Not used by now */ - uint8_t drop_en; /* Not used by now */ - - /* DMA address of the queue */ - __le64 dma; - - /* - * Queue information: @qidx is the queue index from Linux's - * perspective. @fl_qcidx is the index of the Queue - * Controller peripheral queue relative to the RX queue BAR - * used for the freelist and @rx_qcidx is the Queue Controller - * Peripheral index for the RX queue. - */ - int qidx; - int fl_qcidx; - int rx_qcidx; -} __rte_aligned(64); - struct nfp_pf_dev { /* Backpointer to associated pci device */ struct rte_pci_device *pci_dev; diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h new file mode 100644 index 0000000000..41a3a4b4e7 --- /dev/null +++ b/drivers/net/nfp/nfp_rxtx.h @@ -0,0 +1,276 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2014-2021 Netronome Systems, Inc. + * All rights reserved. + */ + +/* + * vim:shiftwidth=8:noexpandtab + * + * @file dpdk/pmd/nfp_rxtx.h + * + * Netronome NFP Rx/Tx specific header file + */ + +#ifndef _NFP_RXTX_H_ +#define _NFP_RXTX_H_ + +#include +#include + +/* + * The maximum number of descriptors is limited by design as + * DPDK uses uint16_t variables for these values + */ +#define NFP_NET_MAX_TX_DESC (32 * 1024) +#define NFP_NET_MIN_TX_DESC 64 + +#define NFP_NET_MAX_RX_DESC (32 * 1024) +#define NFP_NET_MIN_RX_DESC 64 + +/* Descriptor alignment */ +#define NFP_ALIGN_RING_DESC 128 + +/* TX descriptor format */ +#define PCIE_DESC_TX_EOP (1 << 7) +#define PCIE_DESC_TX_OFFSET_MASK (0x7f) + +/* Flags in the host TX descriptor */ +#define PCIE_DESC_TX_CSUM (1 << 7) +#define PCIE_DESC_TX_IP4_CSUM (1 << 6) +#define PCIE_DESC_TX_TCP_CSUM (1 << 5) +#define PCIE_DESC_TX_UDP_CSUM (1 << 4) +#define PCIE_DESC_TX_VLAN (1 << 3) +#define PCIE_DESC_TX_LSO (1 << 2) +#define PCIE_DESC_TX_ENCAP_NONE (0) +#define PCIE_DESC_TX_ENCAP_VXLAN (1 << 1) +#define PCIE_DESC_TX_ENCAP_GRE (1 << 0) + +struct nfp_net_tx_desc { + union { + struct { + uint8_t dma_addr_hi; /* High bits of host buf address */ + __le16 dma_len; /* Length to DMA for this desc */ + uint8_t offset_eop; /* Offset in buf where pkt starts + + * highest bit is eop flag. 
+ */ + __le32 dma_addr_lo; /* Low 32bit of host buf addr */ + + __le16 mss; /* MSS to be used for LSO */ + uint8_t lso_hdrlen; /* LSO, where the data starts */ + uint8_t flags; /* TX Flags, see @PCIE_DESC_TX_* */ + + union { + struct { + /* + * L3 and L4 header offsets required + * for TSOv2 + */ + uint8_t l3_offset; + uint8_t l4_offset; + }; + __le16 vlan; /* VLAN tag to add if indicated */ + }; + __le16 data_len; /* Length of frame + meta data */ + } __rte_packed; + __le32 vals[4]; + }; +}; + +struct nfp_net_txq { + struct nfp_net_hw *hw; /* Backpointer to nfp_net structure */ + + /* + * Queue information: @qidx is the queue index from Linux's + * perspective. @tx_qcidx is the index of the Queue + * Controller Peripheral queue relative to the TX queue BAR. + * @cnt is the size of the queue in number of + * descriptors. @qcp_q is a pointer to the base of the queue + * structure on the NFP + */ + uint8_t *qcp_q; + + /* + * Read and Write pointers. @wr_p and @rd_p are host side pointer, + * they are free running and have little relation to the QCP pointers * + * @qcp_rd_p is a local copy queue controller peripheral read pointer + */ + + uint32_t wr_p; + uint32_t rd_p; + + uint32_t tx_count; + + uint32_t tx_free_thresh; + + /* + * For each descriptor keep a reference to the mbuf and + * DMA address used until completion is signalled. + */ + struct { + struct rte_mbuf *mbuf; + } *txbufs; + + /* + * Information about the host side queue location. @txds is + * the virtual address for the queue, @dma is the DMA address + * of the queue and @size is the size in bytes for the queue + * (needed for free) + */ + struct nfp_net_tx_desc *txds; + + /* + * At this point 48 bytes have been used for all the fields in the + * TX critical path. We have room for 8 bytes and still all placed + * in a cache line. We are not using the threshold values below but + * if we need to, we can add the most used in the remaining bytes. + */ + uint32_t tx_rs_thresh; /* not used by now. Future? */ + uint32_t tx_pthresh; /* not used by now. Future? */ + uint32_t tx_hthresh; /* not used by now. Future? */ + uint32_t tx_wthresh; /* not used by now. Future? 
*/ + uint16_t port_id; + int qidx; + int tx_qcidx; + __le64 dma; +} __rte_aligned(64); + +/* RX and freelist descriptor format */ +#define PCIE_DESC_RX_DD (1 << 7) +#define PCIE_DESC_RX_META_LEN_MASK (0x7f) + +/* Flags in the RX descriptor */ +#define PCIE_DESC_RX_RSS (1 << 15) +#define PCIE_DESC_RX_I_IP4_CSUM (1 << 14) +#define PCIE_DESC_RX_I_IP4_CSUM_OK (1 << 13) +#define PCIE_DESC_RX_I_TCP_CSUM (1 << 12) +#define PCIE_DESC_RX_I_TCP_CSUM_OK (1 << 11) +#define PCIE_DESC_RX_I_UDP_CSUM (1 << 10) +#define PCIE_DESC_RX_I_UDP_CSUM_OK (1 << 9) +#define PCIE_DESC_RX_SPARE (1 << 8) +#define PCIE_DESC_RX_EOP (1 << 7) +#define PCIE_DESC_RX_IP4_CSUM (1 << 6) +#define PCIE_DESC_RX_IP4_CSUM_OK (1 << 5) +#define PCIE_DESC_RX_TCP_CSUM (1 << 4) +#define PCIE_DESC_RX_TCP_CSUM_OK (1 << 3) +#define PCIE_DESC_RX_UDP_CSUM (1 << 2) +#define PCIE_DESC_RX_UDP_CSUM_OK (1 << 1) +#define PCIE_DESC_RX_VLAN (1 << 0) + +#define PCIE_DESC_RX_L4_CSUM_OK (PCIE_DESC_RX_TCP_CSUM_OK | \ + PCIE_DESC_RX_UDP_CSUM_OK) + +struct nfp_net_rx_desc { + union { + /* Freelist descriptor */ + struct { + uint8_t dma_addr_hi; + __le16 spare; + uint8_t dd; + + __le32 dma_addr_lo; + } __rte_packed fld; + + /* RX descriptor */ + struct { + __le16 data_len; + uint8_t reserved; + uint8_t meta_len_dd; + + __le16 flags; + __le16 vlan; + } __rte_packed rxd; + + __le32 vals[2]; + }; +}; + +struct nfp_net_rx_buff { + struct rte_mbuf *mbuf; +}; + +struct nfp_net_rxq { + struct nfp_net_hw *hw; /* Backpointer to nfp_net structure */ + + /* + * @qcp_fl and @qcp_rx are pointers to the base addresses of the + * freelist and RX queue controller peripheral queue structures on the + * NFP + */ + uint8_t *qcp_fl; + uint8_t *qcp_rx; + + /* + * Read and Write pointers. @wr_p and @rd_p are host side + * pointer, they are free running and have little relation to + * the QCP pointers. @wr_p is where the driver adds new + * freelist descriptors and @rd_p is where the driver start + * reading descriptors for newly arrive packets from. + */ + uint32_t rd_p; + + /* + * For each buffer placed on the freelist, record the + * associated SKB + */ + struct nfp_net_rx_buff *rxbufs; + + /* + * Information about the host side queue location. @rxds is + * the virtual address for the queue + */ + struct nfp_net_rx_desc *rxds; + + /* + * The mempool is created by the user specifying a mbuf size. + * We save here the reference of the mempool needed in the RX + * path and the mbuf size for checking received packets can be + * safely copied to the mbuf using the NFP_NET_RX_OFFSET + */ + struct rte_mempool *mem_pool; + uint16_t mbuf_size; + + /* + * Next two fields are used for giving more free descriptors + * to the NFP + */ + uint16_t rx_free_thresh; + uint16_t nb_rx_hold; + + /* the size of the queue in number of descriptors */ + uint16_t rx_count; + + /* + * Fields above this point fit in a single cache line and are all used + * in the RX critical path. Fields below this point are just used + * during queue configuration or not used at all (yet) + */ + + /* referencing dev->data->port_id */ + uint16_t port_id; + + uint8_t crc_len; /* Not used by now */ + uint8_t drop_en; /* Not used by now */ + + /* DMA address of the queue */ + __le64 dma; + + /* + * Queue information: @qidx is the queue index from Linux's + * perspective. @fl_qcidx is the index of the Queue + * Controller peripheral queue relative to the RX queue BAR + * used for the freelist and @rx_qcidx is the Queue Controller + * Peripheral index for the RX queue. 
+ */ + int qidx; + int fl_qcidx; + int rx_qcidx; +} __rte_aligned(64); + +#endif /* _NFP_RXTX_H_ */ +/* + * Local variables: + * c-file-style: "Linux" + * indent-tabs-mode: t + * End: + */
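The NFP_NET_MIN/MAX_*_DESC limits and NFP_ALIGN_RING_DESC macro moved into nfp_rxtx.h above are what the queue setup paths in the next patch validate nb_desc against. As a rough standalone sketch of that check (not part of the series: ring_size_valid() is a hypothetical helper, and the 16-byte descriptor size is read off the nfp_net_tx_desc union above, which is four __le32 words):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Limits and alignment as moved into nfp_rxtx.h by the patch above */
#define NFP_NET_MAX_TX_DESC (32 * 1024)
#define NFP_NET_MIN_TX_DESC 64
#define NFP_ALIGN_RING_DESC 128

/*
 * Hypothetical helper mirroring the nb_desc validation in the queue
 * setup functions: the ring size in bytes must be a multiple of the
 * descriptor alignment, and the count must stay within the bounds
 * imposed by the driver's uint16_t descriptor counters.
 */
static bool
ring_size_valid(uint16_t nb_desc, size_t desc_size)
{
	size_t ring_sz = (size_t)nb_desc * desc_size;

	if (ring_sz % NFP_ALIGN_RING_DESC != 0)
		return false;
	return nb_desc >= NFP_NET_MIN_TX_DESC && nb_desc <= NFP_NET_MAX_TX_DESC;
}

int
main(void)
{
	printf("1024 descriptors: %s\n", ring_size_valid(1024, 16) ? "ok" : "rejected");
	printf("100 descriptors: %s\n", ring_size_valid(100, 16) ? "ok" : "rejected");
	return 0;
}

A count of 100 is inside the min/max window but fails the alignment test (100 * 16 = 1600 bytes is not a multiple of 128), which is why the nfp_net_tx_queue_setup() code in the next patch rejects such a value with -EINVAL.
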
From patchwork Fri Jul 16 08:35:41 2021
X-Patchwork-Submitter: Heinrich Kuhn
X-Patchwork-Id: 95960
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Heinrich Kuhn
To: dev@dpdk.org
Cc: Heinrich Kuhn , Simon Horman
Date: Fri, 16 Jul 2021 10:35:41 +0200
Message-Id: <20210716083545.34444-3-heinrich.kuhn@netronome.com>
X-Mailer: git-send-email 2.30.1 (Apple Git-130)
In-Reply-To: <20210716083545.34444-1-heinrich.kuhn@netronome.com>
References: <20210716082314.33865-1-heinrich.kuhn@netronome.com> <20210716083545.34444-1-heinrich.kuhn@netronome.com>
Subject: [dpdk-dev] [PATCH v2 2/7] net/nfp: move rxtx functions to their own file

Create a new rxtx file and move the Rx/Tx functions to this file. This commit also moves the needed shared functions to the nfp_net_pmd.h file.

Signed-off-by: Heinrich Kuhn
Signed-off-by: Simon Horman
---
 drivers/net/nfp/meson.build | 1 + drivers/net/nfp/nfp_net.c | 1090 --------------------------------- drivers/net/nfp/nfp_net_pmd.h | 184 ++++-- drivers/net/nfp/nfp_rxtx.c | 1002 ++++++++++++++++++++++++++++++ drivers/net/nfp/nfp_rxtx.h | 27 + 5 files changed, 1173 insertions(+), 1131 deletions(-) create mode 100644 drivers/net/nfp/nfp_rxtx.c diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build index b51e2e5f20..1b289e2354 100644 --- a/drivers/net/nfp/meson.build +++ b/drivers/net/nfp/meson.build @@ -19,4 +19,5 @@ sources = files( 'nfpcore/nfp_nsp_eth.c', 'nfpcore/nfp_hwinfo.c', 'nfp_net.c', + 'nfp_rxtx.c', ) diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c index 67288abeff..5bfc23ba04 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_net.c @@ -66,29 +66,11 @@ static int nfp_init_phyports(struct nfp_pf_dev *pf_dev); static int nfp_net_link_update(struct rte_eth_dev *dev, int wait_to_complete); static int nfp_net_promisc_enable(struct rte_eth_dev *dev); static int nfp_net_promisc_disable(struct rte_eth_dev *dev); -static int nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq); -static uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev, - uint16_t queue_idx); -static uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts); -static void nfp_net_rx_queue_release(void *rxq); -static int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, - uint16_t nb_desc, unsigned int socket_id, - const struct rte_eth_rxconf *rx_conf, - struct rte_mempool *mp); -static int nfp_net_tx_free_bufs(struct nfp_net_txq *txq); -static void nfp_net_tx_queue_release(void *txq); -static int nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, - uint16_t nb_desc, unsigned int socket_id, - const struct rte_eth_txconf *tx_conf); static int nfp_net_start(struct rte_eth_dev *dev); static int nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); static int nfp_net_stats_reset(struct rte_eth_dev *dev); static int nfp_net_stop(struct rte_eth_dev *dev); -static uint16_t nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, - uint16_t nb_pkts); - static int nfp_net_rss_config_default(struct rte_eth_dev *dev); static int nfp_net_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); @@ -106,184 +88,6 @@ static int nfp_fw_setup(struct rte_pci_device *dev, struct nfp_eth_table *nfp_eth_table, struct nfp_hwinfo *hwinfo); - -/* The offset of the queue controller queues in the PCIe Target */ -#define NFP_PCIE_QUEUE(_q) (0x80000 + (NFP_QCP_QUEUE_ADDR_SZ * ((_q) & 0xff))) - -/* Maximum value which can be added to a queue with one transaction */ -#define NFP_QCP_MAX_ADD 0x7f - -#define RTE_MBUF_DMA_ADDR_DEFAULT(mb) \ - (uint64_t)((mb)->buf_iova + RTE_PKTMBUF_HEADROOM) - -/* nfp_qcp_ptr - Read or Write Pointer of a queue */ -enum nfp_qcp_ptr { - NFP_QCP_READ_PTR = 0, - NFP_QCP_WRITE_PTR -}; - -/* - * nfp_qcp_ptr_add - Add the value to the selected pointer of a queue - * @q: Base address for queue structure - * @ptr: Add to the Read or Write pointer - * @val: Value
to add to the queue pointer - * - * If @val is greater than @NFP_QCP_MAX_ADD multiple writes are performed. - */ -static inline void -nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val) -{ - uint32_t off; - - if (ptr == NFP_QCP_READ_PTR) - off = NFP_QCP_QUEUE_ADD_RPTR; - else - off = NFP_QCP_QUEUE_ADD_WPTR; - - while (val > NFP_QCP_MAX_ADD) { - nn_writel(rte_cpu_to_le_32(NFP_QCP_MAX_ADD), q + off); - val -= NFP_QCP_MAX_ADD; - } - - nn_writel(rte_cpu_to_le_32(val), q + off); -} - -/* - * nfp_qcp_read - Read the current Read/Write pointer value for a queue - * @q: Base address for queue structure - * @ptr: Read or Write pointer - */ -static inline uint32_t -nfp_qcp_read(uint8_t *q, enum nfp_qcp_ptr ptr) -{ - uint32_t off; - uint32_t val; - - if (ptr == NFP_QCP_READ_PTR) - off = NFP_QCP_QUEUE_STS_LO; - else - off = NFP_QCP_QUEUE_STS_HI; - - val = rte_cpu_to_le_32(nn_readl(q + off)); - - if (ptr == NFP_QCP_READ_PTR) - return val & NFP_QCP_QUEUE_STS_LO_READPTR_mask; - else - return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask; -} - -/* - * Functions to read/write from/to Config BAR - * Performs any endian conversion necessary. - */ -static inline uint8_t -nn_cfg_readb(struct nfp_net_hw *hw, int off) -{ - return nn_readb(hw->ctrl_bar + off); -} - -static inline void -nn_cfg_writeb(struct nfp_net_hw *hw, int off, uint8_t val) -{ - nn_writeb(val, hw->ctrl_bar + off); -} - -static inline uint32_t -nn_cfg_readl(struct nfp_net_hw *hw, int off) -{ - return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off)); -} - -static inline void -nn_cfg_writel(struct nfp_net_hw *hw, int off, uint32_t val) -{ - nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off); -} - -static inline uint64_t -nn_cfg_readq(struct nfp_net_hw *hw, int off) -{ - return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off)); -} - -static inline void -nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val) -{ - nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off); -} - -static void -nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq) -{ - unsigned i; - - if (rxq->rxbufs == NULL) - return; - - for (i = 0; i < rxq->rx_count; i++) { - if (rxq->rxbufs[i].mbuf) { - rte_pktmbuf_free_seg(rxq->rxbufs[i].mbuf); - rxq->rxbufs[i].mbuf = NULL; - } - } -} - -static void -nfp_net_rx_queue_release(void *rx_queue) -{ - struct nfp_net_rxq *rxq = rx_queue; - - if (rxq) { - nfp_net_rx_queue_release_mbufs(rxq); - rte_free(rxq->rxbufs); - rte_free(rxq); - } -} - -static void -nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq) -{ - nfp_net_rx_queue_release_mbufs(rxq); - rxq->rd_p = 0; - rxq->nb_rx_hold = 0; -} - -static void -nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq) -{ - unsigned i; - - if (txq->txbufs == NULL) - return; - - for (i = 0; i < txq->tx_count; i++) { - if (txq->txbufs[i].mbuf) { - rte_pktmbuf_free_seg(txq->txbufs[i].mbuf); - txq->txbufs[i].mbuf = NULL; - } - } -} - -static void -nfp_net_tx_queue_release(void *tx_queue) -{ - struct nfp_net_txq *txq = tx_queue; - - if (txq) { - nfp_net_tx_queue_release_mbufs(txq); - rte_free(txq->txbufs); - rte_free(txq); - } -} - -static void -nfp_net_reset_tx_queue(struct nfp_net_txq *txq) -{ - nfp_net_tx_queue_release_mbufs(txq); - txq->wr_p = 0; - txq->rd_p = 0; -} - static int __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) { @@ -461,18 +265,6 @@ nfp_net_disable_queues(struct rte_eth_dev *dev) hw->ctrl = new_ctrl; } -static int -nfp_net_rx_freelist_setup(struct rte_eth_dev *dev) -{ - int i; - - for (i = 0; i < dev->data->nb_rx_queues; i++) { - if 
(nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) < 0) - return -1; - } - return 0; -} - static void nfp_net_params_setup(struct nfp_net_hw *hw) { @@ -1349,44 +1141,6 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev) return NULL; } -static uint32_t -nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx) -{ - struct nfp_net_rxq *rxq; - struct nfp_net_rx_desc *rxds; - uint32_t idx; - uint32_t count; - - rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx]; - - idx = rxq->rd_p; - - count = 0; - - /* - * Other PMDs are just checking the DD bit in intervals of 4 - * descriptors and counting all four if the first has the DD - * bit on. Of course, this is not accurate but can be good for - * performance. But ideally that should be done in descriptors - * chunks belonging to the same cache line - */ - - while (count < rxq->rx_count) { - rxds = &rxq->rxds[idx]; - if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0) - break; - - count++; - idx++; - - /* Wrapping? */ - if ((idx) == rxq->rx_count) - idx = 0; - } - - return count; -} - static int nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { @@ -1568,850 +1322,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return 0; } -static int -nfp_net_rx_queue_setup(struct rte_eth_dev *dev, - uint16_t queue_idx, uint16_t nb_desc, - unsigned int socket_id, - const struct rte_eth_rxconf *rx_conf, - struct rte_mempool *mp) -{ - const struct rte_memzone *tz; - struct nfp_net_rxq *rxq; - struct nfp_net_hw *hw; - uint32_t rx_desc_sz; - - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - PMD_INIT_FUNC_TRACE(); - - /* Validating number of descriptors */ - rx_desc_sz = nb_desc * sizeof(struct nfp_net_rx_desc); - if (rx_desc_sz % NFP_ALIGN_RING_DESC != 0 || - nb_desc > NFP_NET_MAX_RX_DESC || - nb_desc < NFP_NET_MIN_RX_DESC) { - PMD_DRV_LOG(ERR, "Wrong nb_desc value"); - return -EINVAL; - } - - /* - * Free memory prior to re-allocation if needed. This is the case after - * calling nfp_net_stop - */ - if (dev->data->rx_queues[queue_idx]) { - nfp_net_rx_queue_release(dev->data->rx_queues[queue_idx]); - dev->data->rx_queues[queue_idx] = NULL; - } - - /* Allocating rx queue data structure */ - rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct nfp_net_rxq), - RTE_CACHE_LINE_SIZE, socket_id); - if (rxq == NULL) - return -ENOMEM; - - /* Hw queues mapping based on firmware configuration */ - rxq->qidx = queue_idx; - rxq->fl_qcidx = queue_idx * hw->stride_rx; - rxq->rx_qcidx = rxq->fl_qcidx + (hw->stride_rx - 1); - rxq->qcp_fl = hw->rx_bar + NFP_QCP_QUEUE_OFF(rxq->fl_qcidx); - rxq->qcp_rx = hw->rx_bar + NFP_QCP_QUEUE_OFF(rxq->rx_qcidx); - - /* - * Tracking mbuf size for detecting a potential mbuf overflow due to - * RX offset - */ - rxq->mem_pool = mp; - rxq->mbuf_size = rxq->mem_pool->elt_size; - rxq->mbuf_size -= (sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM); - hw->flbufsz = rxq->mbuf_size; - - rxq->rx_count = nb_desc; - rxq->port_id = dev->data->port_id; - rxq->rx_free_thresh = rx_conf->rx_free_thresh; - rxq->drop_en = rx_conf->rx_drop_en; - - /* - * Allocate RX ring hardware descriptors. A memzone large enough to - * handle the maximum ring size is allocated in order to allow for - * resizing in later calls to the queue setup function. 
- */ - tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, - sizeof(struct nfp_net_rx_desc) * - NFP_NET_MAX_RX_DESC, NFP_MEMZONE_ALIGN, - socket_id); - - if (tz == NULL) { - PMD_DRV_LOG(ERR, "Error allocating rx dma"); - nfp_net_rx_queue_release(rxq); - return -ENOMEM; - } - - /* Saving physical and virtual addresses for the RX ring */ - rxq->dma = (uint64_t)tz->iova; - rxq->rxds = (struct nfp_net_rx_desc *)tz->addr; - - /* mbuf pointers array for referencing mbufs linked to RX descriptors */ - rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs", - sizeof(*rxq->rxbufs) * nb_desc, - RTE_CACHE_LINE_SIZE, socket_id); - if (rxq->rxbufs == NULL) { - nfp_net_rx_queue_release(rxq); - return -ENOMEM; - } - - PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64, - rxq->rxbufs, rxq->rxds, (unsigned long int)rxq->dma); - - nfp_net_reset_rx_queue(rxq); - - dev->data->rx_queues[queue_idx] = rxq; - rxq->hw = hw; - - /* - * Telling the HW about the physical address of the RX ring and number - * of descriptors in log2 format - */ - nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(queue_idx), rxq->dma); - nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(queue_idx), rte_log2_u32(nb_desc)); - - return 0; -} - -static int -nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) -{ - struct nfp_net_rx_buff *rxe = rxq->rxbufs; - uint64_t dma_addr; - unsigned i; - - PMD_RX_LOG(DEBUG, "nfp_net_rx_fill_freelist for %u descriptors", - rxq->rx_count); - - for (i = 0; i < rxq->rx_count; i++) { - struct nfp_net_rx_desc *rxd; - struct rte_mbuf *mbuf = rte_pktmbuf_alloc(rxq->mem_pool); - - if (mbuf == NULL) { - PMD_DRV_LOG(ERR, "RX mbuf alloc failed queue_id=%u", - (unsigned)rxq->qidx); - return -ENOMEM; - } - - dma_addr = rte_cpu_to_le_64(RTE_MBUF_DMA_ADDR_DEFAULT(mbuf)); - - rxd = &rxq->rxds[i]; - rxd->fld.dd = 0; - rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xff; - rxd->fld.dma_addr_lo = dma_addr & 0xffffffff; - rxe[i].mbuf = mbuf; - PMD_RX_LOG(DEBUG, "[%d]: %" PRIx64, i, dma_addr); - } - - /* Make sure all writes are flushed before telling the hardware */ - rte_wmb(); - - /* Not advertising the whole ring as the firmware gets confused if so */ - PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", - rxq->rx_count - 1); - - nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1); - - return 0; -} - -static int -nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, - uint16_t nb_desc, unsigned int socket_id, - const struct rte_eth_txconf *tx_conf) -{ - const struct rte_memzone *tz; - struct nfp_net_txq *txq; - uint16_t tx_free_thresh; - struct nfp_net_hw *hw; - uint32_t tx_desc_sz; - - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - PMD_INIT_FUNC_TRACE(); - - /* Validating number of descriptors */ - tx_desc_sz = nb_desc * sizeof(struct nfp_net_tx_desc); - if (tx_desc_sz % NFP_ALIGN_RING_DESC != 0 || - nb_desc > NFP_NET_MAX_TX_DESC || - nb_desc < NFP_NET_MIN_TX_DESC) { - PMD_DRV_LOG(ERR, "Wrong nb_desc value"); - return -EINVAL; - } - - tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ? - tx_conf->tx_free_thresh : - DEFAULT_TX_FREE_THRESH); - - if (tx_free_thresh > (nb_desc)) { - PMD_DRV_LOG(ERR, - "tx_free_thresh must be less than the number of TX " - "descriptors. (tx_free_thresh=%u port=%d " - "queue=%d)", (unsigned int)tx_free_thresh, - dev->data->port_id, (int)queue_idx); - return -(EINVAL); - } - - /* - * Free memory prior to re-allocation if needed. 
This is the case after - * calling nfp_net_stop - */ - if (dev->data->tx_queues[queue_idx]) { - PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d", - queue_idx); - nfp_net_tx_queue_release(dev->data->tx_queues[queue_idx]); - dev->data->tx_queues[queue_idx] = NULL; - } - - /* Allocating tx queue data structure */ - txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nfp_net_txq), - RTE_CACHE_LINE_SIZE, socket_id); - if (txq == NULL) { - PMD_DRV_LOG(ERR, "Error allocating tx dma"); - return -ENOMEM; - } - - /* - * Allocate TX ring hardware descriptors. A memzone large enough to - * handle the maximum ring size is allocated in order to allow for - * resizing in later calls to the queue setup function. - */ - tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx, - sizeof(struct nfp_net_tx_desc) * - NFP_NET_MAX_TX_DESC, NFP_MEMZONE_ALIGN, - socket_id); - if (tz == NULL) { - PMD_DRV_LOG(ERR, "Error allocating tx dma"); - nfp_net_tx_queue_release(txq); - return -ENOMEM; - } - - txq->tx_count = nb_desc; - txq->tx_free_thresh = tx_free_thresh; - txq->tx_pthresh = tx_conf->tx_thresh.pthresh; - txq->tx_hthresh = tx_conf->tx_thresh.hthresh; - txq->tx_wthresh = tx_conf->tx_thresh.wthresh; - - /* queue mapping based on firmware configuration */ - txq->qidx = queue_idx; - txq->tx_qcidx = queue_idx * hw->stride_tx; - txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx); - - txq->port_id = dev->data->port_id; - - /* Saving physical and virtual addresses for the TX ring */ - txq->dma = (uint64_t)tz->iova; - txq->txds = (struct nfp_net_tx_desc *)tz->addr; - - /* mbuf pointers array for referencing mbufs linked to TX descriptors */ - txq->txbufs = rte_zmalloc_socket("txq->txbufs", - sizeof(*txq->txbufs) * nb_desc, - RTE_CACHE_LINE_SIZE, socket_id); - if (txq->txbufs == NULL) { - nfp_net_tx_queue_release(txq); - return -ENOMEM; - } - PMD_TX_LOG(DEBUG, "txbufs=%p hw_ring=%p dma_addr=0x%" PRIx64, - txq->txbufs, txq->txds, (unsigned long int)txq->dma); - - nfp_net_reset_tx_queue(txq); - - dev->data->tx_queues[queue_idx] = txq; - txq->hw = hw; - - /* - * Telling the HW about the physical address of the TX ring and number - * of descriptors in log2 format - */ - nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma); - nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(nb_desc)); - - return 0; -} - -/* nfp_net_tx_tso - Set TX descriptor for TSO */ -static inline void -nfp_net_tx_tso(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd, - struct rte_mbuf *mb) -{ - uint64_t ol_flags; - struct nfp_net_hw *hw = txq->hw; - - if (!(hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)) - goto clean_txd; - - ol_flags = mb->ol_flags; - - if (!(ol_flags & PKT_TX_TCP_SEG)) - goto clean_txd; - - txd->l3_offset = mb->l2_len; - txd->l4_offset = mb->l2_len + mb->l3_len; - txd->lso_hdrlen = mb->l2_len + mb->l3_len + mb->l4_len; - txd->mss = rte_cpu_to_le_16(mb->tso_segsz); - txd->flags = PCIE_DESC_TX_LSO; - return; - -clean_txd: - txd->flags = 0; - txd->l3_offset = 0; - txd->l4_offset = 0; - txd->lso_hdrlen = 0; - txd->mss = 0; -} - -/* nfp_net_tx_cksum - Set TX CSUM offload flags in TX descriptor */ -static inline void -nfp_net_tx_cksum(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd, - struct rte_mbuf *mb) -{ - uint64_t ol_flags; - struct nfp_net_hw *hw = txq->hw; - - if (!(hw->cap & NFP_NET_CFG_CTRL_TXCSUM)) - return; - - ol_flags = mb->ol_flags; - - /* IPv6 does not need checksum */ - if (ol_flags & PKT_TX_IP_CKSUM) - txd->flags |= PCIE_DESC_TX_IP4_CSUM; - - switch (ol_flags & PKT_TX_L4_MASK) { - 
case PKT_TX_UDP_CKSUM: - txd->flags |= PCIE_DESC_TX_UDP_CSUM; - break; - case PKT_TX_TCP_CKSUM: - txd->flags |= PCIE_DESC_TX_TCP_CSUM; - break; - } - - if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)) - txd->flags |= PCIE_DESC_TX_CSUM; -} - -/* nfp_net_rx_cksum - set mbuf checksum flags based on RX descriptor flags */ -static inline void -nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, - struct rte_mbuf *mb) -{ - struct nfp_net_hw *hw = rxq->hw; - - if (!(hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM)) - return; - - /* If IPv4 and IP checksum error, fail */ - if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) && - !(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK))) - mb->ol_flags |= PKT_RX_IP_CKSUM_BAD; - else - mb->ol_flags |= PKT_RX_IP_CKSUM_GOOD; - - /* If neither UDP nor TCP return */ - if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) && - !(rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM)) - return; - - if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK)) - mb->ol_flags |= PKT_RX_L4_CKSUM_GOOD; - else - mb->ol_flags |= PKT_RX_L4_CKSUM_BAD; -} - -#define NFP_HASH_OFFSET ((uint8_t *)mbuf->buf_addr + mbuf->data_off - 4) -#define NFP_HASH_TYPE_OFFSET ((uint8_t *)mbuf->buf_addr + mbuf->data_off - 8) - -#define NFP_DESC_META_LEN(d) (d->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK) - -/* - * nfp_net_set_hash - Set mbuf hash data - * - * The RSS hash and hash-type are pre-pended to the packet data. - * Extract and decode it and set the mbuf fields. - */ -static inline void -nfp_net_set_hash(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, - struct rte_mbuf *mbuf) -{ - struct nfp_net_hw *hw = rxq->hw; - uint8_t *meta_offset; - uint32_t meta_info; - uint32_t hash = 0; - uint32_t hash_type = 0; - - if (!(hw->ctrl & NFP_NET_CFG_CTRL_RSS)) - return; - - /* this is true for new firmwares */ - if (likely(((hw->cap & NFP_NET_CFG_CTRL_RSS2) || - (NFD_CFG_MAJOR_VERSION_of(hw->ver) == 4)) && - NFP_DESC_META_LEN(rxd))) { - /* - * new metadata api: - * <---- 32 bit -----> - * m field type word - * e data field #2 - * t data field #1 - * a data field #0 - * ==================== - * packet data - * - * Field type word contains up to 8 4bit field types - * A 4bit field type refers to a data field word - * A data field word can have several 4bit field types - */ - meta_offset = rte_pktmbuf_mtod(mbuf, uint8_t *); - meta_offset -= NFP_DESC_META_LEN(rxd); - meta_info = rte_be_to_cpu_32(*(uint32_t *)meta_offset); - meta_offset += 4; - /* NFP PMD just supports metadata for hashing */ - switch (meta_info & NFP_NET_META_FIELD_MASK) { - case NFP_NET_META_HASH: - /* next field type is about the hash type */ - meta_info >>= NFP_NET_META_FIELD_SIZE; - /* hash value is in the data field */ - hash = rte_be_to_cpu_32(*(uint32_t *)meta_offset); - hash_type = meta_info & NFP_NET_META_FIELD_MASK; - break; - default: - /* Unsupported metadata can be a performance issue */ - return; - } - } else { - if (!(rxd->rxd.flags & PCIE_DESC_RX_RSS)) - return; - - hash = rte_be_to_cpu_32(*(uint32_t *)NFP_HASH_OFFSET); - hash_type = rte_be_to_cpu_32(*(uint32_t *)NFP_HASH_TYPE_OFFSET); - } - - mbuf->hash.rss = hash; - mbuf->ol_flags |= PKT_RX_RSS_HASH; - - switch (hash_type) { - case NFP_NET_RSS_IPV4: - mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV4; - break; - case NFP_NET_RSS_IPV6: - mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6; - break; - case NFP_NET_RSS_IPV6_EX: - mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6_EXT; - break; - case NFP_NET_RSS_IPV4_TCP: - mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6_EXT; - break; - case 
NFP_NET_RSS_IPV6_TCP: - mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6_EXT; - break; - case NFP_NET_RSS_IPV4_UDP: - mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6_EXT; - break; - case NFP_NET_RSS_IPV6_UDP: - mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6_EXT; - break; - default: - mbuf->packet_type |= RTE_PTYPE_INNER_L4_MASK; - } -} - -static inline void -nfp_net_mbuf_alloc_failed(struct nfp_net_rxq *rxq) -{ - rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++; -} - -#define NFP_DESC_META_LEN(d) (d->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK) - -/* - * RX path design: - * - * There are some decisions to take: - * 1) How to check DD RX descriptors bit - * 2) How and when to allocate new mbufs - * - * Current implementation checks just one single DD bit each loop. As each - * descriptor is 8 bytes, it is likely a good idea to check descriptors in - * a single cache line instead. Tests with this change have not shown any - * performance improvement but it requires further investigation. For example, - * depending on which descriptor is next, the number of descriptors could be - * less than 8 for just checking those in the same cache line. This implies - * extra work which could be counterproductive by itself. Indeed, last firmware - * changes are just doing this: writing several descriptors with the DD bit - * for saving PCIe bandwidth and DMA operations from the NFP. - * - * Mbuf allocation is done when a new packet is received. Then the descriptor - * is automatically linked with the new mbuf and the old one is given to the - * user. The main drawback with this design is mbuf allocation is heavier than - * using bulk allocations allowed by DPDK with rte_mempool_get_bulk. From the - * cache point of view it does not seem allocating the mbuf early on as we are - * doing now have any benefit at all. Again, tests with this change have not - * shown any improvement. Also, rte_mempool_get_bulk returns all or nothing - * so looking at the implications of this type of allocation should be studied - * deeply - */ - -static uint16_t -nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) -{ - struct nfp_net_rxq *rxq; - struct nfp_net_rx_desc *rxds; - struct nfp_net_rx_buff *rxb; - struct nfp_net_hw *hw; - struct rte_mbuf *mb; - struct rte_mbuf *new_mb; - uint16_t nb_hold; - uint64_t dma_addr; - int avail; - - rxq = rx_queue; - if (unlikely(rxq == NULL)) { - /* - * DPDK just checks the queue is lower than max queues - * enabled. But the queue needs to be configured - */ - RTE_LOG_DP(ERR, PMD, "RX Bad queue\n"); - return -EINVAL; - } - - hw = rxq->hw; - avail = 0; - nb_hold = 0; - - while (avail < nb_pkts) { - rxb = &rxq->rxbufs[rxq->rd_p]; - if (unlikely(rxb == NULL)) { - RTE_LOG_DP(ERR, PMD, "rxb does not exist!\n"); - break; - } - - rxds = &rxq->rxds[rxq->rd_p]; - if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0) - break; - - /* - * Memory barrier to ensure that we won't do other - * reads before the DD bit. - */ - rte_rmb(); - - /* - * We got a packet. 
Let's alloc a new mbuf for refilling the - * free descriptor ring as soon as possible - */ - new_mb = rte_pktmbuf_alloc(rxq->mem_pool); - if (unlikely(new_mb == NULL)) { - RTE_LOG_DP(DEBUG, PMD, - "RX mbuf alloc failed port_id=%u queue_id=%u\n", - rxq->port_id, (unsigned int)rxq->qidx); - nfp_net_mbuf_alloc_failed(rxq); - break; - } - - nb_hold++; - - /* - * Grab the mbuf and refill the descriptor with the - * previously allocated mbuf - */ - mb = rxb->mbuf; - rxb->mbuf = new_mb; - - PMD_RX_LOG(DEBUG, "Packet len: %u, mbuf_size: %u", - rxds->rxd.data_len, rxq->mbuf_size); - - /* Size of this segment */ - mb->data_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds); - /* Size of the whole packet. We just support 1 segment */ - mb->pkt_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds); - - if (unlikely((mb->data_len + hw->rx_offset) > - rxq->mbuf_size)) { - /* - * This should not happen and the user has the - * responsibility of avoiding it. But we have - * to give some info about the error - */ - RTE_LOG_DP(ERR, PMD, - "mbuf overflow likely due to the RX offset.\n" - "\t\tYour mbuf size should have extra space for" - " RX offset=%u bytes.\n" - "\t\tCurrently you just have %u bytes available" - " but the received packet is %u bytes long", - hw->rx_offset, - rxq->mbuf_size - hw->rx_offset, - mb->data_len); - return -EINVAL; - } - - /* Filling the received mbuf with packet info */ - if (hw->rx_offset) - mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset; - else - mb->data_off = RTE_PKTMBUF_HEADROOM + - NFP_DESC_META_LEN(rxds); - - /* No scatter mode supported */ - mb->nb_segs = 1; - mb->next = NULL; - - mb->port = rxq->port_id; - - /* Checking the RSS flag */ - nfp_net_set_hash(rxq, rxds, mb); - - /* Checking the checksum flag */ - nfp_net_rx_cksum(rxq, rxds, mb); - - if ((rxds->rxd.flags & PCIE_DESC_RX_VLAN) && - (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN)) { - mb->vlan_tci = rte_cpu_to_le_32(rxds->rxd.vlan); - mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED; - } - - /* Adding the mbuf to the mbuf array passed by the app */ - rx_pkts[avail++] = mb; - - /* Now resetting and updating the descriptor */ - rxds->vals[0] = 0; - rxds->vals[1] = 0; - dma_addr = rte_cpu_to_le_64(RTE_MBUF_DMA_ADDR_DEFAULT(new_mb)); - rxds->fld.dd = 0; - rxds->fld.dma_addr_hi = (dma_addr >> 32) & 0xff; - rxds->fld.dma_addr_lo = dma_addr & 0xffffffff; - - rxq->rd_p++; - if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/ - rxq->rd_p = 0; - } - - if (nb_hold == 0) - return nb_hold; - - PMD_RX_LOG(DEBUG, "RX port_id=%u queue_id=%u, %d packets received", - rxq->port_id, (unsigned int)rxq->qidx, nb_hold); - - nb_hold += rxq->nb_rx_hold; - - /* - * FL descriptors needs to be written before incrementing the - * FL queue WR pointer - */ - rte_wmb(); - if (nb_hold > rxq->rx_free_thresh) { - PMD_RX_LOG(DEBUG, "port=%u queue=%u nb_hold=%u avail=%u", - rxq->port_id, (unsigned int)rxq->qidx, - (unsigned)nb_hold, (unsigned)avail); - nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold); - nb_hold = 0; - } - rxq->nb_rx_hold = nb_hold; - - return avail; -} - -/* - * nfp_net_tx_free_bufs - Check for descriptors with a complete - * status - * @txq: TX queue to work with - * Returns number of descriptors freed - */ -int -nfp_net_tx_free_bufs(struct nfp_net_txq *txq) -{ - uint32_t qcp_rd_p; - int todo; - - PMD_TX_LOG(DEBUG, "queue %u. 
Check for descriptor with a complete" - " status", txq->qidx); - - /* Work out how many packets have been sent */ - qcp_rd_p = nfp_qcp_read(txq->qcp_q, NFP_QCP_READ_PTR); - - if (qcp_rd_p == txq->rd_p) { - PMD_TX_LOG(DEBUG, "queue %u: It seems harrier is not sending " - "packets (%u, %u)", txq->qidx, - qcp_rd_p, txq->rd_p); - return 0; - } - - if (qcp_rd_p > txq->rd_p) - todo = qcp_rd_p - txq->rd_p; - else - todo = qcp_rd_p + txq->tx_count - txq->rd_p; - - PMD_TX_LOG(DEBUG, "qcp_rd_p %u, txq->rd_p: %u, qcp->rd_p: %u", - qcp_rd_p, txq->rd_p, txq->rd_p); - - if (todo == 0) - return todo; - - txq->rd_p += todo; - if (unlikely(txq->rd_p >= txq->tx_count)) - txq->rd_p -= txq->tx_count; - - return todo; -} - -/* Leaving always free descriptors for avoiding wrapping confusion */ -static inline -uint32_t nfp_free_tx_desc(struct nfp_net_txq *txq) -{ - if (txq->wr_p >= txq->rd_p) - return txq->tx_count - (txq->wr_p - txq->rd_p) - 8; - else - return txq->rd_p - txq->wr_p - 8; -} - -/* - * nfp_net_txq_full - Check if the TX queue free descriptors - * is below tx_free_threshold - * - * @txq: TX queue to check - * - * This function uses the host copy* of read/write pointers - */ -static inline -uint32_t nfp_net_txq_full(struct nfp_net_txq *txq) -{ - return (nfp_free_tx_desc(txq) < txq->tx_free_thresh); -} - -static uint16_t -nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -{ - struct nfp_net_txq *txq; - struct nfp_net_hw *hw; - struct nfp_net_tx_desc *txds, txd; - struct rte_mbuf *pkt; - uint64_t dma_addr; - int pkt_size, dma_size; - uint16_t free_descs, issued_descs; - struct rte_mbuf **lmbuf; - int i; - - txq = tx_queue; - hw = txq->hw; - txds = &txq->txds[txq->wr_p]; - - PMD_TX_LOG(DEBUG, "working for queue %u at pos %d and %u packets", - txq->qidx, txq->wr_p, nb_pkts); - - if ((nfp_free_tx_desc(txq) < nb_pkts) || (nfp_net_txq_full(txq))) - nfp_net_tx_free_bufs(txq); - - free_descs = (uint16_t)nfp_free_tx_desc(txq); - if (unlikely(free_descs == 0)) - return 0; - - pkt = *tx_pkts; - - i = 0; - issued_descs = 0; - PMD_TX_LOG(DEBUG, "queue: %u. Sending %u packets", - txq->qidx, nb_pkts); - /* Sending packets */ - while ((i < nb_pkts) && free_descs) { - /* Grabbing the mbuf linked to the current descriptor */ - lmbuf = &txq->txbufs[txq->wr_p].mbuf; - /* Warming the cache for releasing the mbuf later on */ - RTE_MBUF_PREFETCH_TO_FREE(*lmbuf); - - pkt = *(tx_pkts + i); - - if (unlikely((pkt->nb_segs > 1) && - !(hw->cap & NFP_NET_CFG_CTRL_GATHER))) { - PMD_INIT_LOG(INFO, "NFP_NET_CFG_CTRL_GATHER not set"); - rte_panic("Multisegment packet unsupported\n"); - } - - /* Checking if we have enough descriptors */ - if (unlikely(pkt->nb_segs > free_descs)) - goto xmit_end; - - /* - * Checksum and VLAN flags just in the first descriptor for a - * multisegment packet, but TSO info needs to be in all of them. - */ - txd.data_len = pkt->pkt_len; - nfp_net_tx_tso(txq, &txd, pkt); - nfp_net_tx_cksum(txq, &txd, pkt); - - if ((pkt->ol_flags & PKT_TX_VLAN_PKT) && - (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)) { - txd.flags |= PCIE_DESC_TX_VLAN; - txd.vlan = pkt->vlan_tci; - } - - /* - * mbuf data_len is the data in one segment and pkt_len data - * in the whole packet. 
When the packet is just one segment, - * then data_len = pkt_len - */ - pkt_size = pkt->pkt_len; - - while (pkt) { - /* Copying TSO, VLAN and cksum info */ - *txds = txd; - - /* Releasing mbuf used by this descriptor previously*/ - if (*lmbuf) - rte_pktmbuf_free_seg(*lmbuf); - - /* - * Linking mbuf with descriptor for being released - * next time descriptor is used - */ - *lmbuf = pkt; - - dma_size = pkt->data_len; - dma_addr = rte_mbuf_data_iova(pkt); - PMD_TX_LOG(DEBUG, "Working with mbuf at dma address:" - "%" PRIx64 "", dma_addr); - - /* Filling descriptors fields */ - txds->dma_len = dma_size; - txds->data_len = txd.data_len; - txds->dma_addr_hi = (dma_addr >> 32) & 0xff; - txds->dma_addr_lo = (dma_addr & 0xffffffff); - ASSERT(free_descs > 0); - free_descs--; - - txq->wr_p++; - if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping?*/ - txq->wr_p = 0; - - pkt_size -= dma_size; - - /* - * Making the EOP, packets with just one segment - * the priority - */ - if (likely(!pkt_size)) - txds->offset_eop = PCIE_DESC_TX_EOP; - else - txds->offset_eop = 0; - - pkt = pkt->next; - /* Referencing next free TX descriptor */ - txds = &txq->txds[txq->wr_p]; - lmbuf = &txq->txbufs[txq->wr_p].mbuf; - issued_descs++; - } - i++; - } - -xmit_end: - /* Increment write pointers. Force memory write before we let HW know */ - rte_wmb(); - nfp_qcp_ptr_add(txq->qcp_q, NFP_QCP_WRITE_PTR, issued_descs); - - return i; -} - static int nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask) { diff --git a/drivers/net/nfp/nfp_net_pmd.h b/drivers/net/nfp/nfp_net_pmd.h index a3a3ba32d6..9265496bf0 100644 --- a/drivers/net/nfp/nfp_net_pmd.h +++ b/drivers/net/nfp/nfp_net_pmd.h @@ -41,6 +41,12 @@ struct nfp_net_adapter; #define NFP_QCP_QUEUE_STS_HI 0x000c #define NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask (0x3ffff) +/* The offset of the queue controller queues in the PCIe Target */ +#define NFP_PCIE_QUEUE(_q) (0x80000 + (NFP_QCP_QUEUE_ADDR_SZ * ((_q) & 0xff))) + +/* Maximum value which can be added to a queue with one transaction */ +#define NFP_QCP_MAX_ADD 0x7f + /* Interrupt definitions */ #define NFP_NET_IRQ_LSC_IDX 0 @@ -95,47 +101,11 @@ struct nfp_net_adapter; #include #include -static inline uint8_t nn_readb(volatile const void *addr) -{ - return rte_read8(addr); -} - -static inline void nn_writeb(uint8_t val, volatile void *addr) -{ - rte_write8(val, addr); -} - -static inline uint32_t nn_readl(volatile const void *addr) -{ - return rte_read32(addr); -} - -static inline void nn_writel(uint32_t val, volatile void *addr) -{ - rte_write32(val, addr); -} - -static inline void nn_writew(uint16_t val, volatile void *addr) -{ - rte_write16(val, addr); -} - -static inline uint64_t nn_readq(volatile void *addr) -{ - const volatile uint32_t *p = addr; - uint32_t low, high; - - high = nn_readl((volatile const void *)(p + 1)); - low = nn_readl((volatile const void *)p); - - return low + ((uint64_t)high << 32); -} - -static inline void nn_writeq(uint64_t val, volatile void *addr) -{ - nn_writel(val >> 32, (volatile char *)addr + 4); - nn_writel(val, addr); -} +/* nfp_qcp_ptr - Read or Write Pointer of a queue */ +enum nfp_qcp_ptr { + NFP_QCP_READ_PTR = 0, + NFP_QCP_WRITE_PTR +}; struct nfp_pf_dev { /* Backpointer to associated pci device */ @@ -247,6 +217,138 @@ struct nfp_net_adapter { struct nfp_net_hw hw; }; +static inline uint8_t nn_readb(volatile const void *addr) +{ + return rte_read8(addr); +} + +static inline void nn_writeb(uint8_t val, volatile void *addr) +{ + rte_write8(val, addr); +} + +static inline uint32_t 
nn_readl(volatile const void *addr) +{ + return rte_read32(addr); +} + +static inline void nn_writel(uint32_t val, volatile void *addr) +{ + rte_write32(val, addr); +} + +static inline void nn_writew(uint16_t val, volatile void *addr) +{ + rte_write16(val, addr); +} + +static inline uint64_t nn_readq(volatile void *addr) +{ + const volatile uint32_t *p = addr; + uint32_t low, high; + + high = nn_readl((volatile const void *)(p + 1)); + low = nn_readl((volatile const void *)p); + + return low + ((uint64_t)high << 32); +} + +static inline void nn_writeq(uint64_t val, volatile void *addr) +{ + nn_writel(val >> 32, (volatile char *)addr + 4); + nn_writel(val, addr); +} + +/* + * Functions to read/write from/to Config BAR + * Performs any endian conversion necessary. + */ +static inline uint8_t +nn_cfg_readb(struct nfp_net_hw *hw, int off) +{ + return nn_readb(hw->ctrl_bar + off); +} + +static inline void +nn_cfg_writeb(struct nfp_net_hw *hw, int off, uint8_t val) +{ + nn_writeb(val, hw->ctrl_bar + off); +} + +static inline uint32_t +nn_cfg_readl(struct nfp_net_hw *hw, int off) +{ + return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off)); +} + +static inline void +nn_cfg_writel(struct nfp_net_hw *hw, int off, uint32_t val) +{ + nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off); +} + +static inline uint64_t +nn_cfg_readq(struct nfp_net_hw *hw, int off) +{ + return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off)); +} + +static inline void +nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val) +{ + nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off); +} + +/* + * nfp_qcp_ptr_add - Add the value to the selected pointer of a queue + * @q: Base address for queue structure + * @ptr: Add to the Read or Write pointer + * @val: Value to add to the queue pointer + * + * If @val is greater than @NFP_QCP_MAX_ADD multiple writes are performed. + */ +static inline void +nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val) +{ + uint32_t off; + + if (ptr == NFP_QCP_READ_PTR) + off = NFP_QCP_QUEUE_ADD_RPTR; + else + off = NFP_QCP_QUEUE_ADD_WPTR; + + while (val > NFP_QCP_MAX_ADD) { + nn_writel(rte_cpu_to_le_32(NFP_QCP_MAX_ADD), q + off); + val -= NFP_QCP_MAX_ADD; +} + +nn_writel(rte_cpu_to_le_32(val), q + off); +} + +/* + * nfp_qcp_read - Read the current Read/Write pointer value for a queue + * @q: Base address for queue structure + * @ptr: Read or Write pointer + */ +static inline uint32_t +nfp_qcp_read(uint8_t *q, enum nfp_qcp_ptr ptr) +{ + uint32_t off; + uint32_t val; + + if (ptr == NFP_QCP_READ_PTR) + off = NFP_QCP_QUEUE_STS_LO; + else + off = NFP_QCP_QUEUE_STS_HI; + + val = rte_cpu_to_le_32(nn_readl(q + off)); + + if (ptr == NFP_QCP_READ_PTR) + return val & NFP_QCP_QUEUE_STS_LO_READPTR_mask; + else + return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask; +} + #define NFP_NET_DEV_PRIVATE_TO_HW(adapter)\ (&((struct nfp_net_adapter *)adapter)->hw) diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c new file mode 100644 index 0000000000..9ee9e5c9a3 --- /dev/null +++ b/drivers/net/nfp/nfp_rxtx.c @@ -0,0 +1,1002 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2014-2021 Netronome Systems, Inc. + * All rights reserved. + * + * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation. 
+ */ + +/* + * vim:shiftwidth=8:noexpandtab + * + * @file dpdk/pmd/nfp_rxtx.c + * + * Netronome vNIC DPDK Poll-Mode Driver: Rx/Tx functions + */ + +#include +#include + +#include "nfp_net_pmd.h" +#include "nfp_rxtx.h" +#include "nfp_net_logs.h" +#include "nfp_net_ctrl.h" + +/* Prototypes */ +static int nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq); +static inline void nfp_net_mbuf_alloc_failed(struct nfp_net_rxq *rxq); +static inline void nfp_net_set_hash(struct nfp_net_rxq *rxq, + struct nfp_net_rx_desc *rxd, + struct rte_mbuf *mbuf); +static inline void nfp_net_rx_cksum(struct nfp_net_rxq *rxq, + struct nfp_net_rx_desc *rxd, + struct rte_mbuf *mb); +static void nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq); +static int nfp_net_tx_free_bufs(struct nfp_net_txq *txq); +static void nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq); +static inline uint32_t nfp_free_tx_desc(struct nfp_net_txq *txq); +static inline uint32_t nfp_net_txq_full(struct nfp_net_txq *txq); +static inline void nfp_net_tx_tso(struct nfp_net_txq *txq, + struct nfp_net_tx_desc *txd, + struct rte_mbuf *mb); +static inline void nfp_net_tx_cksum(struct nfp_net_txq *txq, + struct nfp_net_tx_desc *txd, + struct rte_mbuf *mb); + +static int +nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) +{ + struct nfp_net_rx_buff *rxe = rxq->rxbufs; + uint64_t dma_addr; + unsigned int i; + + PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors", + rxq->rx_count); + + for (i = 0; i < rxq->rx_count; i++) { + struct nfp_net_rx_desc *rxd; + struct rte_mbuf *mbuf = rte_pktmbuf_alloc(rxq->mem_pool); + + if (mbuf == NULL) { + PMD_DRV_LOG(ERR, "RX mbuf alloc failed queue_id=%u", + (unsigned int)rxq->qidx); + return -ENOMEM; + } + + dma_addr = rte_cpu_to_le_64(RTE_MBUF_DMA_ADDR_DEFAULT(mbuf)); + + rxd = &rxq->rxds[i]; + rxd->fld.dd = 0; + rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xff; + rxd->fld.dma_addr_lo = dma_addr & 0xffffffff; + rxe[i].mbuf = mbuf; + PMD_RX_LOG(DEBUG, "[%d]: %" PRIx64, i, dma_addr); + } + + /* Make sure all writes are flushed before telling the hardware */ + rte_wmb(); + + /* Not advertising the whole ring as the firmware gets confused if so */ + PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", + rxq->rx_count - 1); + + nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1); + + return 0; +} + +int +nfp_net_rx_freelist_setup(struct rte_eth_dev *dev) +{ + int i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) < 0) + return -1; + } + return 0; +} + +uint32_t +nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx) +{ + struct nfp_net_rxq *rxq; + struct nfp_net_rx_desc *rxds; + uint32_t idx; + uint32_t count; + + rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx]; + + idx = rxq->rd_p; + + count = 0; + + /* + * Other PMDs are just checking the DD bit in intervals of 4 + * descriptors and counting all four if the first has the DD + * bit on. Of course, this is not accurate but can be good for + * performance. But ideally that should be done in descriptors + * chunks belonging to the same cache line + */ + + while (count < rxq->rx_count) { + rxds = &rxq->rxds[idx]; + if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0) + break; + + count++; + idx++; + + /* Wrapping? 
*/ + if ((idx) == rxq->rx_count) + idx = 0; + } + + return count; +} + +static inline void +nfp_net_mbuf_alloc_failed(struct nfp_net_rxq *rxq) +{ + rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++; +} + +/* + * nfp_net_set_hash - Set mbuf hash data + * + * The RSS hash and hash-type are pre-pended to the packet data. + * Extract and decode it and set the mbuf fields. + */ +static inline void +nfp_net_set_hash(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, + struct rte_mbuf *mbuf) +{ + struct nfp_net_hw *hw = rxq->hw; + uint8_t *meta_offset; + uint32_t meta_info; + uint32_t hash = 0; + uint32_t hash_type = 0; + + if (!(hw->ctrl & NFP_NET_CFG_CTRL_RSS)) + return; + + /* this is true for new firmwares */ + if (likely(((hw->cap & NFP_NET_CFG_CTRL_RSS2) || + (NFD_CFG_MAJOR_VERSION_of(hw->ver) == 4)) && + NFP_DESC_META_LEN(rxd))) { + /* + * new metadata api: + * <---- 32 bit -----> + * m field type word + * e data field #2 + * t data field #1 + * a data field #0 + * ==================== + * packet data + * + * Field type word contains up to 8 4bit field types + * A 4bit field type refers to a data field word + * A data field word can have several 4bit field types + */ + meta_offset = rte_pktmbuf_mtod(mbuf, uint8_t *); + meta_offset -= NFP_DESC_META_LEN(rxd); + meta_info = rte_be_to_cpu_32(*(uint32_t *)meta_offset); + meta_offset += 4; + /* NFP PMD just supports metadata for hashing */ + switch (meta_info & NFP_NET_META_FIELD_MASK) { + case NFP_NET_META_HASH: + /* next field type is about the hash type */ + meta_info >>= NFP_NET_META_FIELD_SIZE; + /* hash value is in the data field */ + hash = rte_be_to_cpu_32(*(uint32_t *)meta_offset); + hash_type = meta_info & NFP_NET_META_FIELD_MASK; + break; + default: + /* Unsupported metadata can be a performance issue */ + return; + } + } else { + if (!(rxd->rxd.flags & PCIE_DESC_RX_RSS)) + return; + + hash = rte_be_to_cpu_32(*(uint32_t *)NFP_HASH_OFFSET); + hash_type = rte_be_to_cpu_32(*(uint32_t *)NFP_HASH_TYPE_OFFSET); + } + + mbuf->hash.rss = hash; + mbuf->ol_flags |= PKT_RX_RSS_HASH; + + switch (hash_type) { + case NFP_NET_RSS_IPV4: + mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV4; + break; + case NFP_NET_RSS_IPV6: + mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6; + break; + case NFP_NET_RSS_IPV6_EX: + mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6_EXT; + break; + case NFP_NET_RSS_IPV4_TCP: + mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6_EXT; + break; + case NFP_NET_RSS_IPV6_TCP: + mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6_EXT; + break; + case NFP_NET_RSS_IPV4_UDP: + mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6_EXT; + break; + case NFP_NET_RSS_IPV6_UDP: + mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV6_EXT; + break; + default: + mbuf->packet_type |= RTE_PTYPE_INNER_L4_MASK; + } +} + +/* nfp_net_rx_cksum - set mbuf checksum flags based on RX descriptor flags */ +static inline void +nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, + struct rte_mbuf *mb) +{ + struct nfp_net_hw *hw = rxq->hw; + + if (!(hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM)) + return; + + /* If IPv4 and IP checksum error, fail */ + if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) && + !(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK))) + mb->ol_flags |= PKT_RX_IP_CKSUM_BAD; + else + mb->ol_flags |= PKT_RX_IP_CKSUM_GOOD; + + /* If neither UDP nor TCP return */ + if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) && + !(rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM)) + return; + + if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK)) + mb->ol_flags |= PKT_RX_L4_CKSUM_GOOD; 
+ else + mb->ol_flags |= PKT_RX_L4_CKSUM_BAD; +} + +/* + * RX path design: + * + * There are some decisions to take: + * 1) How to check the RX descriptor DD bit + * 2) How and when to allocate new mbufs + * + * The current implementation checks just a single DD bit each loop. As each + * descriptor is 8 bytes, it is likely a good idea to check descriptors in + * a single cache line instead. Tests with this change have not shown any + * performance improvement but it requires further investigation. For example, + * depending on which descriptor is next, the number of descriptors could be + * less than 8 for just checking those in the same cache line. This implies + * extra work which could be counterproductive by itself. Indeed, recent + * firmware changes do just this: writing several descriptors with the DD bit + * at once, saving PCIe bandwidth and DMA operations from the NFP. + * + * Mbuf allocation is done when a new packet is received. Then the descriptor + * is automatically linked with the new mbuf and the old one is given to the + * user. The main drawback of this design is that mbuf allocation is heavier + * than using bulk allocations allowed by DPDK with rte_mempool_get_bulk. From + * the cache point of view, allocating the mbuf early, as we do now, does not + * seem to have any benefit at all. Again, tests with this change have not + * shown any improvement. Also, rte_mempool_get_bulk returns all or nothing, + * so the implications of this type of allocation should be studied more + * deeply. + */ + +uint16_t +nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + struct nfp_net_rxq *rxq; + struct nfp_net_rx_desc *rxds; + struct nfp_net_rx_buff *rxb; + struct nfp_net_hw *hw; + struct rte_mbuf *mb; + struct rte_mbuf *new_mb; + uint16_t nb_hold; + uint64_t dma_addr; + int avail; + + rxq = rx_queue; + if (unlikely(rxq == NULL)) { + /* + * DPDK just checks the queue is lower than max queues + * enabled. But the queue needs to be configured + */ + RTE_LOG_DP(ERR, PMD, "RX Bad queue\n"); + return -EINVAL; + } + + hw = rxq->hw; + avail = 0; + nb_hold = 0; + + while (avail < nb_pkts) { + rxb = &rxq->rxbufs[rxq->rd_p]; + if (unlikely(rxb == NULL)) { + RTE_LOG_DP(ERR, PMD, "rxb does not exist!\n"); + break; + } + + rxds = &rxq->rxds[rxq->rd_p]; + if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0) + break; + + /* + * Memory barrier to ensure that we won't do other + * reads before the DD bit. + */ + rte_rmb(); + + /* + * We got a packet. Let's alloc a new mbuf for refilling the + * free descriptor ring as soon as possible + */ + new_mb = rte_pktmbuf_alloc(rxq->mem_pool); + if (unlikely(new_mb == NULL)) { + RTE_LOG_DP(DEBUG, PMD, + "RX mbuf alloc failed port_id=%u queue_id=%u\n", + rxq->port_id, (unsigned int)rxq->qidx); + nfp_net_mbuf_alloc_failed(rxq); + break; + } + + nb_hold++; + + /* + * Grab the mbuf and refill the descriptor with the + * previously allocated mbuf + */ + mb = rxb->mbuf; + rxb->mbuf = new_mb; + + PMD_RX_LOG(DEBUG, "Packet len: %u, mbuf_size: %u", + rxds->rxd.data_len, rxq->mbuf_size); + + /* Size of this segment */ + mb->data_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds); + /* Size of the whole packet. We just support 1 segment */ + mb->pkt_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds); + + if (unlikely((mb->data_len + hw->rx_offset) > + rxq->mbuf_size)) { + /* + * This should not happen and the user has the + * responsibility of avoiding it.
But we have + * to give some info about the error + */ + RTE_LOG_DP(ERR, PMD, + "mbuf overflow likely due to the RX offset.\n" + "\t\tYour mbuf size should have extra space for" + " RX offset=%u bytes.\n" + "\t\tCurrently you just have %u bytes available" + " but the received packet is %u bytes long", + hw->rx_offset, + rxq->mbuf_size - hw->rx_offset, + mb->data_len); + return -EINVAL; + } + + /* Filling the received mbuf with packet info */ + if (hw->rx_offset) + mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset; + else + mb->data_off = RTE_PKTMBUF_HEADROOM + + NFP_DESC_META_LEN(rxds); + + /* No scatter mode supported */ + mb->nb_segs = 1; + mb->next = NULL; + + mb->port = rxq->port_id; + + /* Checking the RSS flag */ + nfp_net_set_hash(rxq, rxds, mb); + + /* Checking the checksum flag */ + nfp_net_rx_cksum(rxq, rxds, mb); + + if ((rxds->rxd.flags & PCIE_DESC_RX_VLAN) && + (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN)) { + mb->vlan_tci = rte_cpu_to_le_32(rxds->rxd.vlan); + mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED; + } + + /* Adding the mbuf to the mbuf array passed by the app */ + rx_pkts[avail++] = mb; + + /* Now resetting and updating the descriptor */ + rxds->vals[0] = 0; + rxds->vals[1] = 0; + dma_addr = rte_cpu_to_le_64(RTE_MBUF_DMA_ADDR_DEFAULT(new_mb)); + rxds->fld.dd = 0; + rxds->fld.dma_addr_hi = (dma_addr >> 32) & 0xff; + rxds->fld.dma_addr_lo = dma_addr & 0xffffffff; + + rxq->rd_p++; + if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/ + rxq->rd_p = 0; + } + + if (nb_hold == 0) + return nb_hold; + + PMD_RX_LOG(DEBUG, "RX port_id=%u queue_id=%u, %d packets received", + rxq->port_id, (unsigned int)rxq->qidx, nb_hold); + + nb_hold += rxq->nb_rx_hold; + + /* + * FL descriptors needs to be written before incrementing the + * FL queue WR pointer + */ + rte_wmb(); + if (nb_hold > rxq->rx_free_thresh) { + PMD_RX_LOG(DEBUG, "port=%u queue=%u nb_hold=%u avail=%u", + rxq->port_id, (unsigned int)rxq->qidx, + (unsigned int)nb_hold, (unsigned int)avail); + nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold); + nb_hold = 0; + } + rxq->nb_rx_hold = nb_hold; + + return avail; +} + +static void +nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq) +{ + unsigned int i; + + if (rxq->rxbufs == NULL) + return; + + for (i = 0; i < rxq->rx_count; i++) { + if (rxq->rxbufs[i].mbuf) { + rte_pktmbuf_free_seg(rxq->rxbufs[i].mbuf); + rxq->rxbufs[i].mbuf = NULL; + } + } +} + +void +nfp_net_rx_queue_release(void *rx_queue) +{ + struct nfp_net_rxq *rxq = rx_queue; + + if (rxq) { + nfp_net_rx_queue_release_mbufs(rxq); + rte_free(rxq->rxbufs); + rte_free(rxq); + } +} + +void +nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq) +{ + nfp_net_rx_queue_release_mbufs(rxq); + rxq->rd_p = 0; + rxq->nb_rx_hold = 0; +} + +int +nfp_net_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + const struct rte_memzone *tz; + struct nfp_net_rxq *rxq; + struct nfp_net_hw *hw; + uint32_t rx_desc_sz; + + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* Validating number of descriptors */ + rx_desc_sz = nb_desc * sizeof(struct nfp_net_rx_desc); + if (rx_desc_sz % NFP_ALIGN_RING_DESC != 0 || + nb_desc > NFP_NET_MAX_RX_DESC || + nb_desc < NFP_NET_MIN_RX_DESC) { + PMD_DRV_LOG(ERR, "Wrong nb_desc value"); + return -EINVAL; + } + + /* + * Free memory prior to re-allocation if needed. 
This is the case after + * calling nfp_net_stop + */ + if (dev->data->rx_queues[queue_idx]) { + nfp_net_rx_queue_release(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; + } + + /* Allocating rx queue data structure */ + rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct nfp_net_rxq), + RTE_CACHE_LINE_SIZE, socket_id); + if (rxq == NULL) + return -ENOMEM; + + /* Hw queues mapping based on firmware configuration */ + rxq->qidx = queue_idx; + rxq->fl_qcidx = queue_idx * hw->stride_rx; + rxq->rx_qcidx = rxq->fl_qcidx + (hw->stride_rx - 1); + rxq->qcp_fl = hw->rx_bar + NFP_QCP_QUEUE_OFF(rxq->fl_qcidx); + rxq->qcp_rx = hw->rx_bar + NFP_QCP_QUEUE_OFF(rxq->rx_qcidx); + + /* + * Tracking mbuf size for detecting a potential mbuf overflow due to + * RX offset + */ + rxq->mem_pool = mp; + rxq->mbuf_size = rxq->mem_pool->elt_size; + rxq->mbuf_size -= (sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM); + hw->flbufsz = rxq->mbuf_size; + + rxq->rx_count = nb_desc; + rxq->port_id = dev->data->port_id; + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->drop_en = rx_conf->rx_drop_en; + + /* + * Allocate RX ring hardware descriptors. A memzone large enough to + * handle the maximum ring size is allocated in order to allow for + * resizing in later calls to the queue setup function. + */ + tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, + sizeof(struct nfp_net_rx_desc) * + NFP_NET_MAX_RX_DESC, NFP_MEMZONE_ALIGN, + socket_id); + + if (tz == NULL) { + PMD_DRV_LOG(ERR, "Error allocating rx dma"); + nfp_net_rx_queue_release(rxq); + return -ENOMEM; + } + + /* Saving physical and virtual addresses for the RX ring */ + rxq->dma = (uint64_t)tz->iova; + rxq->rxds = (struct nfp_net_rx_desc *)tz->addr; + + /* mbuf pointers array for referencing mbufs linked to RX descriptors */ + rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs", + sizeof(*rxq->rxbufs) * nb_desc, + RTE_CACHE_LINE_SIZE, socket_id); + if (rxq->rxbufs == NULL) { + nfp_net_rx_queue_release(rxq); + return -ENOMEM; + } + + PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64, + rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma); + + nfp_net_reset_rx_queue(rxq); + + dev->data->rx_queues[queue_idx] = rxq; + rxq->hw = hw; + + /* + * Telling the HW about the physical address of the RX ring and number + * of descriptors in log2 format + */ + nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(queue_idx), rxq->dma); + nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(queue_idx), rte_log2_u32(nb_desc)); + + return 0; +} + +/* + * nfp_net_tx_free_bufs - Check for descriptors with a complete + * status + * @txq: TX queue to work with + * Returns number of descriptors freed + */ +static int +nfp_net_tx_free_bufs(struct nfp_net_txq *txq) +{ + uint32_t qcp_rd_p; + int todo; + + PMD_TX_LOG(DEBUG, "queue %u. 
Check for descriptor with a complete" + " status", txq->qidx); + + /* Work out how many packets have been sent */ + qcp_rd_p = nfp_qcp_read(txq->qcp_q, NFP_QCP_READ_PTR); + + if (qcp_rd_p == txq->rd_p) { + PMD_TX_LOG(DEBUG, "queue %u: It seems harrier is not sending " + "packets (%u, %u)", txq->qidx, + qcp_rd_p, txq->rd_p); + return 0; + } + + if (qcp_rd_p > txq->rd_p) + todo = qcp_rd_p - txq->rd_p; + else + todo = qcp_rd_p + txq->tx_count - txq->rd_p; + + PMD_TX_LOG(DEBUG, "qcp_rd_p %u, txq->rd_p: %u, qcp->rd_p: %u", + qcp_rd_p, txq->rd_p, txq->rd_p); + + if (todo == 0) + return todo; + + txq->rd_p += todo; + if (unlikely(txq->rd_p >= txq->tx_count)) + txq->rd_p -= txq->tx_count; + + return todo; +} + +static void +nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq) +{ + unsigned int i; + + if (txq->txbufs == NULL) + return; + + for (i = 0; i < txq->tx_count; i++) { + if (txq->txbufs[i].mbuf) { + rte_pktmbuf_free_seg(txq->txbufs[i].mbuf); + txq->txbufs[i].mbuf = NULL; + } + } +} + +void +nfp_net_tx_queue_release(void *tx_queue) +{ + struct nfp_net_txq *txq = tx_queue; + + if (txq) { + nfp_net_tx_queue_release_mbufs(txq); + rte_free(txq->txbufs); + rte_free(txq); + } +} + +void +nfp_net_reset_tx_queue(struct nfp_net_txq *txq) +{ + nfp_net_tx_queue_release_mbufs(txq); + txq->wr_p = 0; + txq->rd_p = 0; +} + +int +nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *tx_conf) +{ + const struct rte_memzone *tz; + struct nfp_net_txq *txq; + uint16_t tx_free_thresh; + struct nfp_net_hw *hw; + uint32_t tx_desc_sz; + + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* Validating number of descriptors */ + tx_desc_sz = nb_desc * sizeof(struct nfp_net_tx_desc); + if (tx_desc_sz % NFP_ALIGN_RING_DESC != 0 || + nb_desc > NFP_NET_MAX_TX_DESC || + nb_desc < NFP_NET_MIN_TX_DESC) { + PMD_DRV_LOG(ERR, "Wrong nb_desc value"); + return -EINVAL; + } + + tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : + DEFAULT_TX_FREE_THRESH); + + if (tx_free_thresh > (nb_desc)) { + PMD_DRV_LOG(ERR, + "tx_free_thresh must be less than the number of TX " + "descriptors. (tx_free_thresh=%u port=%d " + "queue=%d)", (unsigned int)tx_free_thresh, + dev->data->port_id, (int)queue_idx); + return -(EINVAL); + } + + /* + * Free memory prior to re-allocation if needed. This is the case after + * calling nfp_net_stop + */ + if (dev->data->tx_queues[queue_idx]) { + PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d", + queue_idx); + nfp_net_tx_queue_release(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; + } + + /* Allocating tx queue data structure */ + txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nfp_net_txq), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_DRV_LOG(ERR, "Error allocating tx dma"); + return -ENOMEM; + } + + /* + * Allocate TX ring hardware descriptors. A memzone large enough to + * handle the maximum ring size is allocated in order to allow for + * resizing in later calls to the queue setup function. 
+ */ + tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx, + sizeof(struct nfp_net_tx_desc) * + NFP_NET_MAX_TX_DESC, NFP_MEMZONE_ALIGN, + socket_id); + if (tz == NULL) { + PMD_DRV_LOG(ERR, "Error allocating tx dma"); + nfp_net_tx_queue_release(txq); + return -ENOMEM; + } + + txq->tx_count = nb_desc; + txq->tx_free_thresh = tx_free_thresh; + txq->tx_pthresh = tx_conf->tx_thresh.pthresh; + txq->tx_hthresh = tx_conf->tx_thresh.hthresh; + txq->tx_wthresh = tx_conf->tx_thresh.wthresh; + + /* queue mapping based on firmware configuration */ + txq->qidx = queue_idx; + txq->tx_qcidx = queue_idx * hw->stride_tx; + txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx); + + txq->port_id = dev->data->port_id; + + /* Saving physical and virtual addresses for the TX ring */ + txq->dma = (uint64_t)tz->iova; + txq->txds = (struct nfp_net_tx_desc *)tz->addr; + + /* mbuf pointers array for referencing mbufs linked to TX descriptors */ + txq->txbufs = rte_zmalloc_socket("txq->txbufs", + sizeof(*txq->txbufs) * nb_desc, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->txbufs == NULL) { + nfp_net_tx_queue_release(txq); + return -ENOMEM; + } + PMD_TX_LOG(DEBUG, "txbufs=%p hw_ring=%p dma_addr=0x%" PRIx64, + txq->txbufs, txq->txds, (unsigned long)txq->dma); + + nfp_net_reset_tx_queue(txq); + + dev->data->tx_queues[queue_idx] = txq; + txq->hw = hw; + + /* + * Telling the HW about the physical address of the TX ring and number + * of descriptors in log2 format + */ + nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma); + nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(nb_desc)); + + return 0; +} + +/* Leaving always free descriptors for avoiding wrapping confusion */ +static inline +uint32_t nfp_free_tx_desc(struct nfp_net_txq *txq) +{ + if (txq->wr_p >= txq->rd_p) + return txq->tx_count - (txq->wr_p - txq->rd_p) - 8; + else + return txq->rd_p - txq->wr_p - 8; +} + +/* + * nfp_net_txq_full - Check if the TX queue free descriptors + * is below tx_free_threshold + * + * @txq: TX queue to check + * + * This function uses the host copy* of read/write pointers + */ +static inline +uint32_t nfp_net_txq_full(struct nfp_net_txq *txq) +{ + return (nfp_free_tx_desc(txq) < txq->tx_free_thresh); +} + +/* nfp_net_tx_tso - Set TX descriptor for TSO */ +static inline void +nfp_net_tx_tso(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd, + struct rte_mbuf *mb) +{ + uint64_t ol_flags; + struct nfp_net_hw *hw = txq->hw; + + if (!(hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)) + goto clean_txd; + + ol_flags = mb->ol_flags; + + if (!(ol_flags & PKT_TX_TCP_SEG)) + goto clean_txd; + + txd->l3_offset = mb->l2_len; + txd->l4_offset = mb->l2_len + mb->l3_len; + txd->lso_hdrlen = mb->l2_len + mb->l3_len + mb->l4_len; + txd->mss = rte_cpu_to_le_16(mb->tso_segsz); + txd->flags = PCIE_DESC_TX_LSO; + return; + +clean_txd: + txd->flags = 0; + txd->l3_offset = 0; + txd->l4_offset = 0; + txd->lso_hdrlen = 0; + txd->mss = 0; +} + +/* nfp_net_tx_cksum - Set TX CSUM offload flags in TX descriptor */ +static inline void +nfp_net_tx_cksum(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd, + struct rte_mbuf *mb) +{ + uint64_t ol_flags; + struct nfp_net_hw *hw = txq->hw; + + if (!(hw->cap & NFP_NET_CFG_CTRL_TXCSUM)) + return; + + ol_flags = mb->ol_flags; + + /* IPv6 does not need checksum */ + if (ol_flags & PKT_TX_IP_CKSUM) + txd->flags |= PCIE_DESC_TX_IP4_CSUM; + + switch (ol_flags & PKT_TX_L4_MASK) { + case PKT_TX_UDP_CKSUM: + txd->flags |= PCIE_DESC_TX_UDP_CSUM; + break; + case PKT_TX_TCP_CKSUM: + txd->flags |= 
PCIE_DESC_TX_TCP_CSUM; + break; + } + + if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)) + txd->flags |= PCIE_DESC_TX_CSUM; +} + +uint16_t +nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct nfp_net_txq *txq; + struct nfp_net_hw *hw; + struct nfp_net_tx_desc *txds, txd; + struct rte_mbuf *pkt; + uint64_t dma_addr; + int pkt_size, dma_size; + uint16_t free_descs, issued_descs; + struct rte_mbuf **lmbuf; + int i; + + txq = tx_queue; + hw = txq->hw; + txds = &txq->txds[txq->wr_p]; + + PMD_TX_LOG(DEBUG, "working for queue %u at pos %d and %u packets", + txq->qidx, txq->wr_p, nb_pkts); + + if ((nfp_free_tx_desc(txq) < nb_pkts) || (nfp_net_txq_full(txq))) + nfp_net_tx_free_bufs(txq); + + free_descs = (uint16_t)nfp_free_tx_desc(txq); + if (unlikely(free_descs == 0)) + return 0; + + pkt = *tx_pkts; + + i = 0; + issued_descs = 0; + PMD_TX_LOG(DEBUG, "queue: %u. Sending %u packets", + txq->qidx, nb_pkts); + /* Sending packets */ + while ((i < nb_pkts) && free_descs) { + /* Grabbing the mbuf linked to the current descriptor */ + lmbuf = &txq->txbufs[txq->wr_p].mbuf; + /* Warming the cache for releasing the mbuf later on */ + RTE_MBUF_PREFETCH_TO_FREE(*lmbuf); + + pkt = *(tx_pkts + i); + + if (unlikely(pkt->nb_segs > 1 && + !(hw->cap & NFP_NET_CFG_CTRL_GATHER))) { + PMD_INIT_LOG(INFO, "NFP_NET_CFG_CTRL_GATHER not set"); + rte_panic("Multisegment packet unsupported\n"); + } + + /* Checking if we have enough descriptors */ + if (unlikely(pkt->nb_segs > free_descs)) + goto xmit_end; + + /* + * Checksum and VLAN flags just in the first descriptor for a + * multisegment packet, but TSO info needs to be in all of them. + */ + txd.data_len = pkt->pkt_len; + nfp_net_tx_tso(txq, &txd, pkt); + nfp_net_tx_cksum(txq, &txd, pkt); + + if ((pkt->ol_flags & PKT_TX_VLAN_PKT) && + (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)) { + txd.flags |= PCIE_DESC_TX_VLAN; + txd.vlan = pkt->vlan_tci; + } + + /* + * mbuf data_len is the data in one segment and pkt_len data + * in the whole packet. When the packet is just one segment, + * then data_len = pkt_len + */ + pkt_size = pkt->pkt_len; + + while (pkt) { + /* Copying TSO, VLAN and cksum info */ + *txds = txd; + + /* Releasing mbuf used by this descriptor previously*/ + if (*lmbuf) + rte_pktmbuf_free_seg(*lmbuf); + + /* + * Linking mbuf with descriptor for being released + * next time descriptor is used + */ + *lmbuf = pkt; + + dma_size = pkt->data_len; + dma_addr = rte_mbuf_data_iova(pkt); + PMD_TX_LOG(DEBUG, "Working with mbuf at dma address:" + "%" PRIx64 "", dma_addr); + + /* Filling descriptors fields */ + txds->dma_len = dma_size; + txds->data_len = txd.data_len; + txds->dma_addr_hi = (dma_addr >> 32) & 0xff; + txds->dma_addr_lo = (dma_addr & 0xffffffff); + ASSERT(free_descs > 0); + free_descs--; + + txq->wr_p++; + if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping?*/ + txq->wr_p = 0; + + pkt_size -= dma_size; + + /* + * Making the EOP, packets with just one segment + * the priority + */ + if (likely(!pkt_size)) + txds->offset_eop = PCIE_DESC_TX_EOP; + else + txds->offset_eop = 0; + + pkt = pkt->next; + /* Referencing next free TX descriptor */ + txds = &txq->txds[txq->wr_p]; + lmbuf = &txq->txbufs[txq->wr_p].mbuf; + issued_descs++; + } + i++; + } + +xmit_end: + /* Increment write pointers. 
Force memory write before we let HW know */ + rte_wmb(); + nfp_qcp_ptr_add(txq->qcp_q, NFP_QCP_WRITE_PTR, issued_descs); + + return i; +} diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h index 41a3a4b4e7..d2d0f3f175 100644 --- a/drivers/net/nfp/nfp_rxtx.h +++ b/drivers/net/nfp/nfp_rxtx.h @@ -17,6 +17,14 @@ #include #include +#define NFP_DESC_META_LEN(d) ((d)->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK) + +#define NFP_HASH_OFFSET ((uint8_t *)mbuf->buf_addr + mbuf->data_off - 4) +#define NFP_HASH_TYPE_OFFSET ((uint8_t *)mbuf->buf_addr + mbuf->data_off - 8) + +#define RTE_MBUF_DMA_ADDR_DEFAULT(mb) \ + ((uint64_t)((mb)->buf_iova + RTE_PKTMBUF_HEADROOM)) + /* * The maximum number of descriptors is limited by design as * DPDK uses uint16_t variables for these values @@ -266,6 +274,25 @@ struct nfp_net_rxq { int rx_qcidx; } __rte_aligned(64); +int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev); +uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev, + uint16_t queue_idx); +uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); +void nfp_net_rx_queue_release(void *rxq); +void nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq); +int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +void nfp_net_tx_queue_release(void *txq); +void nfp_net_reset_tx_queue(struct nfp_net_txq *txq); +int nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *tx_conf); +uint16_t nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + #endif /* _NFP_RXTX_H_ */
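For context, a short sketch of how the queue controller pointer (QCP) helpers kept in nfp_net_pmd.h are used by the queue code moved above (illustrative only; example_bump_write_ptr() is a hypothetical helper, not part of this series). Because NFP_QCP_MAX_ADD is 0x7f (127), a single logical add of 300 is split by nfp_qcp_ptr_add() into three PCIe writes of 127, 127 and 46:

#include "nfp_net_pmd.h"
#include "nfp_rxtx.h"

static void
example_bump_write_ptr(struct nfp_net_txq *txq)
{
	uint32_t wr_ptr;

	/* Tell the NFP that 300 new TX descriptors are ready to be sent */
	nfp_qcp_ptr_add(txq->qcp_q, NFP_QCP_WRITE_PTR, 300);

	/* Read back the queue's current write pointer from the hardware */
	wr_ptr = nfp_qcp_read(txq->qcp_q, NFP_QCP_WRITE_PTR);
	(void)wr_ptr;
}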
From patchwork Fri Jul 16 08:35:42 2021 X-Patchwork-Submitter: Heinrich Kuhn X-Patchwork-Id: 95961 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Heinrich Kuhn To: dev@dpdk.org Cc: Heinrich Kuhn , Simon Horman Date: Fri, 16 Jul 2021 10:35:42 +0200 Message-Id: <20210716083545.34444-4-heinrich.kuhn@netronome.com> In-Reply-To: <20210716083545.34444-1-heinrich.kuhn@netronome.com> References: <20210716082314.33865-1-heinrich.kuhn@netronome.com> <20210716083545.34444-1-heinrich.kuhn@netronome.com> Subject: [dpdk-dev] [PATCH v2 3/7] net/nfp: move CPP bridge to a separate file This commit moves the CPP bridge logic to a separate file. A new corresponding header file is also created. Signed-off-by: Heinrich Kuhn Signed-off-by: Simon Horman --- drivers/net/nfp/meson.build | 1 + drivers/net/nfp/nfp_cpp_bridge.c | 392 +++++++++++++++++++++++++++++++ drivers/net/nfp/nfp_cpp_bridge.h | 36 +++ drivers/net/nfp/nfp_net.c | 367 +---------------------------- 4 files changed, 430 insertions(+), 366 deletions(-) create mode 100644 drivers/net/nfp/nfp_cpp_bridge.c create mode 100644 drivers/net/nfp/nfp_cpp_bridge.h diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build index 1b289e2354..b46ac2d40f 100644 --- a/drivers/net/nfp/meson.build +++ b/drivers/net/nfp/meson.build @@ -20,4 +20,5 @@ sources = files( 'nfpcore/nfp_hwinfo.c', 'nfp_net.c', 'nfp_rxtx.c', + 'nfp_cpp_bridge.c', ) diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c new file mode 100644 index 0000000000..d916793338 --- /dev/null +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -0,0 +1,392 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2014-2021 Netronome Systems, Inc. + * All rights reserved. + * + * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation.
+ */ + +/* + * vim:shiftwidth=8:noexpandtab + * + * @file dpdk/pmd/nfp_cpp_bridge.c + * + * Netronome vNIC DPDK Poll-Mode Driver: CPP Bridge + */ + +#include + +#include "nfpcore/nfp_cpp.h" +#include "nfpcore/nfp_mip.h" +#include "nfpcore/nfp_nsp.h" + +#include "nfp_net_logs.h" +#include "nfp_cpp_bridge.h" + +#include + +/* Prototypes */ +static int nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp); +static int nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp); +static int nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp); + +void nfp_register_cpp_service(struct nfp_cpp *cpp) +{ + uint32_t *cpp_service_id = NULL; + struct rte_service_spec service; + + memset(&service, 0, sizeof(struct rte_service_spec)); + snprintf(service.name, sizeof(service.name), "nfp_cpp_service"); + service.callback = nfp_cpp_bridge_service_func; + service.callback_userdata = (void *)cpp; + + if (rte_service_component_register(&service, + cpp_service_id)) + RTE_LOG(WARNING, PMD, "NFP CPP bridge service register() failed"); + else + RTE_LOG(DEBUG, PMD, "NFP CPP bridge service registered"); +} + +/* + * Serving a write request to NFP from host programs. The request + * sends the write size and the CPP target. The bridge makes use + * of CPP interface handler configured by the PMD setup. + */ +static int +nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp) +{ + struct nfp_cpp_area *area; + off_t offset, nfp_offset; + uint32_t cpp_id, pos, len; + uint32_t tmpbuf[16]; + size_t count, curlen, totlen = 0; + int err = 0; + + PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, + sizeof(off_t), sizeof(size_t)); + + /* Reading the count param */ + err = recv(sockfd, &count, sizeof(off_t), 0); + if (err != sizeof(off_t)) + return -EINVAL; + + curlen = count; + + /* Reading the offset param */ + err = recv(sockfd, &offset, sizeof(off_t), 0); + if (err != sizeof(off_t)) + return -EINVAL; + + /* Obtain target's CPP ID and offset in target */ + cpp_id = (offset >> 40) << 8; + nfp_offset = offset & ((1ull << 40) - 1); + + PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count, + offset); + PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__, + cpp_id, nfp_offset); + + /* Adjust length if not aligned */ + if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) != + (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) { + curlen = NFP_CPP_MEMIO_BOUNDARY - + (nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1)); + } + + while (count > 0) { + /* configure a CPP PCIe2CPP BAR for mapping the CPP target */ + area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev", + nfp_offset, curlen); + if (!area) { + RTE_LOG(ERR, PMD, "%s: area alloc fail\n", __func__); + return -EIO; + } + + /* mapping the target */ + err = nfp_cpp_area_acquire(area); + if (err < 0) { + RTE_LOG(ERR, PMD, "area acquire failed\n"); + nfp_cpp_area_free(area); + return -EIO; + } + + for (pos = 0; pos < curlen; pos += len) { + len = curlen - pos; + if (len > sizeof(tmpbuf)) + len = sizeof(tmpbuf); + + PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__, + len, count); + err = recv(sockfd, tmpbuf, len, MSG_WAITALL); + if (err != (int)len) { + RTE_LOG(ERR, PMD, + "%s: error when receiving, %d of %zu\n", + __func__, err, count); + nfp_cpp_area_release(area); + nfp_cpp_area_free(area); + return -EIO; + } + err = nfp_cpp_area_write(area, pos, tmpbuf, len); + if (err < 0) { + RTE_LOG(ERR, PMD, "nfp_cpp_area_write error\n"); + nfp_cpp_area_release(area); + nfp_cpp_area_free(area); + return 
-EIO; + } + } + + nfp_offset += pos; + totlen += pos; + nfp_cpp_area_release(area); + nfp_cpp_area_free(area); + + count -= pos; + curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ? + NFP_CPP_MEMIO_BOUNDARY : count; + } + + return 0; +} + +/* + * Serving a read request to NFP from host programs. The request + * sends the read size and the CPP target. The bridge makes use + * of CPP interface handler configured by the PMD setup. The read + * data is sent to the requester using the same socket. + */ +static int +nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp) +{ + struct nfp_cpp_area *area; + off_t offset, nfp_offset; + uint32_t cpp_id, pos, len; + uint32_t tmpbuf[16]; + size_t count, curlen, totlen = 0; + int err = 0; + + PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, + sizeof(off_t), sizeof(size_t)); + + /* Reading the count param */ + err = recv(sockfd, &count, sizeof(off_t), 0); + if (err != sizeof(off_t)) + return -EINVAL; + + curlen = count; + + /* Reading the offset param */ + err = recv(sockfd, &offset, sizeof(off_t), 0); + if (err != sizeof(off_t)) + return -EINVAL; + + /* Obtain target's CPP ID and offset in target */ + cpp_id = (offset >> 40) << 8; + nfp_offset = offset & ((1ull << 40) - 1); + + PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count, + offset); + PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__, + cpp_id, nfp_offset); + + /* Adjust length if not aligned */ + if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) != + (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) { + curlen = NFP_CPP_MEMIO_BOUNDARY - + (nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1)); + } + + while (count > 0) { + area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev", + nfp_offset, curlen); + if (!area) { + RTE_LOG(ERR, PMD, "%s: area alloc failed\n", __func__); + return -EIO; + } + + err = nfp_cpp_area_acquire(area); + if (err < 0) { + RTE_LOG(ERR, PMD, "area acquire failed\n"); + nfp_cpp_area_free(area); + return -EIO; + } + + for (pos = 0; pos < curlen; pos += len) { + len = curlen - pos; + if (len > sizeof(tmpbuf)) + len = sizeof(tmpbuf); + + err = nfp_cpp_area_read(area, pos, tmpbuf, len); + if (err < 0) { + RTE_LOG(ERR, PMD, "nfp_cpp_area_read error\n"); + nfp_cpp_area_release(area); + nfp_cpp_area_free(area); + return -EIO; + } + PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__, + len, count); + + err = send(sockfd, tmpbuf, len, 0); + if (err != (int)len) { + RTE_LOG(ERR, PMD, + "%s: error when sending: %d of %zu\n", + __func__, err, count); + nfp_cpp_area_release(area); + nfp_cpp_area_free(area); + return -EIO; + } + } + + nfp_offset += pos; + totlen += pos; + nfp_cpp_area_release(area); + nfp_cpp_area_free(area); + + count -= pos; + curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ? + NFP_CPP_MEMIO_BOUNDARY : count; + } + return 0; +} + +/* + * Serving an ioctl command from host NFP tools. This usually goes to + * a kernel char driver but it is not available when the PF is + * bound to the PMD. Currently just one ioctl command is served and it + * does not require any CPP access at all.
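+ * + * The exchange, for illustration: the host tool first sends the 4-byte + * NFP_BRIDGE_OP_IOCTL opcode on the bridge socket, then the 4-byte + * NFP_IOCTL_CPP_IDENTIFICATION command word and a 4-byte identification + * size, and finally reads back two 4-byte words: the NFP model + * (nfp_cpp_model()) and the CPP interface ID.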
+ */ +static int +nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp) +{ + uint32_t cmd, ident_size, tmp; + int err; + + /* Reading the IOCTL command */ + err = recv(sockfd, &cmd, 4, 0); + if (err != 4) { + RTE_LOG(ERR, PMD, "%s: read error from socket\n", __func__); + return -EIO; + } + + /* Only supporting NFP_IOCTL_CPP_IDENTIFICATION */ + if (cmd != NFP_IOCTL_CPP_IDENTIFICATION) { + RTE_LOG(ERR, PMD, "%s: unknown cmd %d\n", __func__, cmd); + return -EINVAL; + } + + err = recv(sockfd, &ident_size, 4, 0); + if (err != 4) { + RTE_LOG(ERR, PMD, "%s: read error from socket\n", __func__); + return -EIO; + } + + tmp = nfp_cpp_model(cpp); + + PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x\n", __func__, tmp); + + err = send(sockfd, &tmp, 4, 0); + if (err != 4) { + RTE_LOG(ERR, PMD, "%s: error writing to socket\n", __func__); + return -EIO; + } + + tmp = cpp->interface; + + PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x\n", __func__, tmp); + + err = send(sockfd, &tmp, 4, 0); + if (err != 4) { + RTE_LOG(ERR, PMD, "%s: error writing to socket\n", __func__); + return -EIO; + } + + return 0; +} + +/* + * This is the code to be executed by a service core. The CPP bridge interface + * is based on a unix socket; requests usually received by a kernel char + * driver (read, write and ioctl) are handled by the CPP bridge instead. NFP + * host tools can be executed, by means of a wrapper library and + * LD_LIBRARY_PATH, completely unaware that the CPP bridge is standing in for + * the NFP kernel char driver for CPP accesses. + */ +int32_t +nfp_cpp_bridge_service_func(void *args) +{ + struct sockaddr address; + struct nfp_cpp *cpp = args; + int sockfd, datafd, op, ret; + + unlink("/tmp/nfp_cpp"); + sockfd = socket(AF_UNIX, SOCK_STREAM, 0); + if (sockfd < 0) { + RTE_LOG(ERR, PMD, "%s: socket creation error. Service failed\n", + __func__); + return -EIO; + } + + memset(&address, 0, sizeof(struct sockaddr)); + + address.sa_family = AF_UNIX; + strcpy(address.sa_data, "/tmp/nfp_cpp"); + + ret = bind(sockfd, (const struct sockaddr *)&address, + sizeof(struct sockaddr)); + if (ret < 0) { + RTE_LOG(ERR, PMD, "%s: bind error (%d). Service failed\n", + __func__, errno); + close(sockfd); + return ret; + } + + ret = listen(sockfd, 20); + if (ret < 0) { + RTE_LOG(ERR, PMD, "%s: listen error(%d). Service failed\n", + __func__, errno); + close(sockfd); + return ret; + } + + for (;;) { + datafd = accept(sockfd, NULL, NULL); + if (datafd < 0) { + RTE_LOG(ERR, PMD, "%s: accept call error (%d)\n", + __func__, errno); + RTE_LOG(ERR, PMD, "%s: service failed\n", __func__); + close(sockfd); + return -EIO; + } + + while (1) { + ret = recv(datafd, &op, 4, 0); + if (ret <= 0) { + PMD_CPP_LOG(DEBUG, "%s: socket close\n", + __func__); + break; + } + + PMD_CPP_LOG(DEBUG, "%s: getting op %u\n", __func__, op); + + if (op == NFP_BRIDGE_OP_READ) + nfp_cpp_bridge_serve_read(datafd, cpp); + + if (op == NFP_BRIDGE_OP_WRITE) + nfp_cpp_bridge_serve_write(datafd, cpp); + + if (op == NFP_BRIDGE_OP_IOCTL) + nfp_cpp_bridge_serve_ioctl(datafd, cpp); + + if (op == 0) + break; + } + close(datafd); + } + close(sockfd); + + return 0; +} +/* + * Local variables: + * c-file-style: "Linux" + * indent-tabs-mode: t + * End: + */ diff --git a/drivers/net/nfp/nfp_cpp_bridge.h b/drivers/net/nfp/nfp_cpp_bridge.h new file mode 100644 index 0000000000..aea5fdc784 --- /dev/null +++ b/drivers/net/nfp/nfp_cpp_bridge.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2014-2021 Netronome Systems, Inc. + * All rights reserved.
+ * + * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation. + */ + +/* + * vim:shiftwidth=8:noexpandtab + * + * @file dpdk/pmd/nfp_cpp_bridge.h + * + * Netronome vNIC DPDK Poll-Mode Driver: CPP Bridge header file + */ + +#ifndef _NFP_CPP_BRIDGE_H_ +#define _NFP_CPP_BRIDGE_H_ + +#define NFP_CPP_MEMIO_BOUNDARY (1 << 20) +#define NFP_BRIDGE_OP_READ 20 +#define NFP_BRIDGE_OP_WRITE 30 +#define NFP_BRIDGE_OP_IOCTL 40 + +#define NFP_IOCTL 'n' +#define NFP_IOCTL_CPP_IDENTIFICATION _IOW(NFP_IOCTL, 0x8f, uint32_t) + +void nfp_register_cpp_service(struct nfp_cpp *cpp); +int32_t nfp_cpp_bridge_service_func(void *args); + +#endif /* _NFP_CPP_BRIDGE_H_ */ +/* + * Local variables: + * c-file-style: "Linux" + * indent-tabs-mode: t + * End: + */ diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c index 5bfc23ba04..d79c70c5b7 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_net.c @@ -41,6 +41,7 @@ #include "nfp_rxtx.h" #include "nfp_net_logs.h" #include "nfp_net_ctrl.h" +#include "nfp_cpp_bridge.h" #include #include @@ -81,8 +82,6 @@ static int nfp_net_rss_hash_write(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); static int nfp_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); -static int32_t nfp_cpp_bridge_service_func(void *args); -static void nfp_register_cpp_service(struct nfp_cpp *cpp); static int nfp_fw_setup(struct rte_pci_device *dev, struct nfp_cpp *cpp, struct nfp_eth_table *nfp_eth_table, @@ -1926,353 +1925,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev) return err; } -#define NFP_CPP_MEMIO_BOUNDARY (1 << 20) - -/* - * Serving a write request to NFP from host programs. The request - * sends the write size and the CPP target. The bridge makes use - * of CPP interface handler configured by the PMD setup. 
- */ -static int -nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp) -{ - struct nfp_cpp_area *area; - off_t offset, nfp_offset; - uint32_t cpp_id, pos, len; - uint32_t tmpbuf[16]; - size_t count, curlen, totlen = 0; - int err = 0; - - PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, - sizeof(off_t), sizeof(size_t)); - - /* Reading the count param */ - err = recv(sockfd, &count, sizeof(off_t), 0); - if (err != sizeof(off_t)) - return -EINVAL; - - curlen = count; - - /* Reading the offset param */ - err = recv(sockfd, &offset, sizeof(off_t), 0); - if (err != sizeof(off_t)) - return -EINVAL; - - /* Obtain target's CPP ID and offset in target */ - cpp_id = (offset >> 40) << 8; - nfp_offset = offset & ((1ull << 40) - 1); - - PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count, - offset); - PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__, - cpp_id, nfp_offset); - - /* Adjust length if not aligned */ - if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) != - (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) { - curlen = NFP_CPP_MEMIO_BOUNDARY - - (nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1)); - } - - while (count > 0) { - /* configure a CPP PCIe2CPP BAR for mapping the CPP target */ - area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev", - nfp_offset, curlen); - if (!area) { - RTE_LOG(ERR, PMD, "%s: area alloc fail\n", __func__); - return -EIO; - } - - /* mapping the target */ - err = nfp_cpp_area_acquire(area); - if (err < 0) { - RTE_LOG(ERR, PMD, "area acquire failed\n"); - nfp_cpp_area_free(area); - return -EIO; - } - - for (pos = 0; pos < curlen; pos += len) { - len = curlen - pos; - if (len > sizeof(tmpbuf)) - len = sizeof(tmpbuf); - - PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__, - len, count); - err = recv(sockfd, tmpbuf, len, MSG_WAITALL); - if (err != (int)len) { - RTE_LOG(ERR, PMD, - "%s: error when receiving, %d of %zu\n", - __func__, err, count); - nfp_cpp_area_release(area); - nfp_cpp_area_free(area); - return -EIO; - } - err = nfp_cpp_area_write(area, pos, tmpbuf, len); - if (err < 0) { - RTE_LOG(ERR, PMD, "nfp_cpp_area_write error\n"); - nfp_cpp_area_release(area); - nfp_cpp_area_free(area); - return -EIO; - } - } - - nfp_offset += pos; - totlen += pos; - nfp_cpp_area_release(area); - nfp_cpp_area_free(area); - - count -= pos; - curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ? - NFP_CPP_MEMIO_BOUNDARY : count; - } - - return 0; -} - -/* - * Serving a read request to NFP from host programs. The request - * sends the read size and the CPP target. The bridge makes use - * of CPP interface handler configured by the PMD setup. The read - * data is sent to the requester using the same socket. 
- */ -static int -nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp) -{ - struct nfp_cpp_area *area; - off_t offset, nfp_offset; - uint32_t cpp_id, pos, len; - uint32_t tmpbuf[16]; - size_t count, curlen, totlen = 0; - int err = 0; - - PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, - sizeof(off_t), sizeof(size_t)); - - /* Reading the count param */ - err = recv(sockfd, &count, sizeof(off_t), 0); - if (err != sizeof(off_t)) - return -EINVAL; - - curlen = count; - - /* Reading the offset param */ - err = recv(sockfd, &offset, sizeof(off_t), 0); - if (err != sizeof(off_t)) - return -EINVAL; - - /* Obtain target's CPP ID and offset in target */ - cpp_id = (offset >> 40) << 8; - nfp_offset = offset & ((1ull << 40) - 1); - - PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count, - offset); - PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__, - cpp_id, nfp_offset); - - /* Adjust length if not aligned */ - if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) != - (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) { - curlen = NFP_CPP_MEMIO_BOUNDARY - - (nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1)); - } - - while (count > 0) { - area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev", - nfp_offset, curlen); - if (!area) { - RTE_LOG(ERR, PMD, "%s: area alloc failed\n", __func__); - return -EIO; - } - - err = nfp_cpp_area_acquire(area); - if (err < 0) { - RTE_LOG(ERR, PMD, "area acquire failed\n"); - nfp_cpp_area_free(area); - return -EIO; - } - - for (pos = 0; pos < curlen; pos += len) { - len = curlen - pos; - if (len > sizeof(tmpbuf)) - len = sizeof(tmpbuf); - - err = nfp_cpp_area_read(area, pos, tmpbuf, len); - if (err < 0) { - RTE_LOG(ERR, PMD, "nfp_cpp_area_read error\n"); - nfp_cpp_area_release(area); - nfp_cpp_area_free(area); - return -EIO; - } - PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__, - len, count); - - err = send(sockfd, tmpbuf, len, 0); - if (err != (int)len) { - RTE_LOG(ERR, PMD, - "%s: error when sending: %d of %zu\n", - __func__, err, count); - nfp_cpp_area_release(area); - nfp_cpp_area_free(area); - return -EIO; - } - } - - nfp_offset += pos; - totlen += pos; - nfp_cpp_area_release(area); - nfp_cpp_area_free(area); - - count -= pos; - curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ? - NFP_CPP_MEMIO_BOUNDARY : count; - } - return 0; -} - -#define NFP_IOCTL 'n' -#define NFP_IOCTL_CPP_IDENTIFICATION _IOW(NFP_IOCTL, 0x8f, uint32_t) -/* - * Serving a ioctl command from host NFP tools. This usually goes to - * a kernel driver char driver but it is not available when the PF is - * bound to the PMD. Currently just one ioctl command is served and it - * does not require any CPP access at all. 
- */ -static int -nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp) -{ - uint32_t cmd, ident_size, tmp; - int err; - - /* Reading now the IOCTL command */ - err = recv(sockfd, &cmd, 4, 0); - if (err != 4) { - RTE_LOG(ERR, PMD, "%s: read error from socket\n", __func__); - return -EIO; - } - - /* Only supporting NFP_IOCTL_CPP_IDENTIFICATION */ - if (cmd != NFP_IOCTL_CPP_IDENTIFICATION) { - RTE_LOG(ERR, PMD, "%s: unknown cmd %d\n", __func__, cmd); - return -EINVAL; - } - - err = recv(sockfd, &ident_size, 4, 0); - if (err != 4) { - RTE_LOG(ERR, PMD, "%s: read error from socket\n", __func__); - return -EIO; - } - - tmp = nfp_cpp_model(cpp); - - PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x\n", __func__, tmp); - - err = send(sockfd, &tmp, 4, 0); - if (err != 4) { - RTE_LOG(ERR, PMD, "%s: error writing to socket\n", __func__); - return -EIO; - } - - tmp = cpp->interface; - - PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x\n", __func__, tmp); - - err = send(sockfd, &tmp, 4, 0); - if (err != 4) { - RTE_LOG(ERR, PMD, "%s: error writing to socket\n", __func__); - return -EIO; - } - - return 0; -} - -#define NFP_BRIDGE_OP_READ 20 -#define NFP_BRIDGE_OP_WRITE 30 -#define NFP_BRIDGE_OP_IOCTL 40 - -/* - * This is the code to be executed by a service core. The CPP bridge interface - * is based on a unix socket and requests usually received by a kernel char - * driver, read, write and ioctl, are handled by the CPP bridge. NFP host tools - * can be executed with a wrapper library and LD_LIBRARY being completely - * unaware of the CPP bridge performing the NFP kernel char driver for CPP - * accesses. - */ -static int32_t -nfp_cpp_bridge_service_func(void *args) -{ - struct sockaddr address; - struct nfp_cpp *cpp = args; - int sockfd, datafd, op, ret; - - unlink("/tmp/nfp_cpp"); - sockfd = socket(AF_UNIX, SOCK_STREAM, 0); - if (sockfd < 0) { - RTE_LOG(ERR, PMD, "%s: socket creation error. Service failed\n", - __func__); - return -EIO; - } - - memset(&address, 0, sizeof(struct sockaddr)); - - address.sa_family = AF_UNIX; - strcpy(address.sa_data, "/tmp/nfp_cpp"); - - ret = bind(sockfd, (const struct sockaddr *)&address, - sizeof(struct sockaddr)); - if (ret < 0) { - RTE_LOG(ERR, PMD, "%s: bind error (%d). Service failed\n", - __func__, errno); - close(sockfd); - return ret; - } - - ret = listen(sockfd, 20); - if (ret < 0) { - RTE_LOG(ERR, PMD, "%s: listen error(%d). 
Service failed\n", - __func__, errno); - close(sockfd); - return ret; - } - - for (;;) { - datafd = accept(sockfd, NULL, NULL); - if (datafd < 0) { - RTE_LOG(ERR, PMD, "%s: accept call error (%d)\n", - __func__, errno); - RTE_LOG(ERR, PMD, "%s: service failed\n", __func__); - close(sockfd); - return -EIO; - } - - while (1) { - ret = recv(datafd, &op, 4, 0); - if (ret <= 0) { - PMD_CPP_LOG(DEBUG, "%s: socket close\n", - __func__); - break; - } - - PMD_CPP_LOG(DEBUG, "%s: getting op %u\n", __func__, op); - - if (op == NFP_BRIDGE_OP_READ) - nfp_cpp_bridge_serve_read(datafd, cpp); - - if (op == NFP_BRIDGE_OP_WRITE) - nfp_cpp_bridge_serve_write(datafd, cpp); - - if (op == NFP_BRIDGE_OP_IOCTL) - nfp_cpp_bridge_serve_ioctl(datafd, cpp); - - if (op == 0) - break; - } - close(datafd); - } - close(sockfd); - - return 0; -} - #define DEFAULT_FW_PATH "/lib/firmware/netronome" static int @@ -2491,23 +2143,6 @@ static int nfp_init_phyports(struct nfp_pf_dev *pf_dev) return ret; } -static void nfp_register_cpp_service(struct nfp_cpp *cpp) -{ - uint32_t *cpp_service_id = NULL; - struct rte_service_spec service; - - memset(&service, 0, sizeof(struct rte_service_spec)); - snprintf(service.name, sizeof(service.name), "nfp_cpp_service"); - service.callback = nfp_cpp_bridge_service_func; - service.callback_userdata = (void *)cpp; - - if (rte_service_component_register(&service, - cpp_service_id)) - RTE_LOG(WARNING, PMD, "NFP CPP bridge service register() failed"); - else - RTE_LOG(DEBUG, PMD, "NFP CPP bridge service registered"); -} - static int nfp_pf_init(struct rte_pci_device *pci_dev) { struct nfp_pf_dev *pf_dev = NULL;
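For context, a minimal host-side sketch of the unix socket protocol served by the CPP bridge moved in this patch (illustrative only; this client program is hypothetical and not part of the series). It mirrors the NFP_BRIDGE_OP_IOCTL exchange handled by nfp_cpp_bridge_service_func() and nfp_cpp_bridge_serve_ioctl() above:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/un.h>

#define NFP_BRIDGE_OP_IOCTL 40
#define NFP_IOCTL 'n'
#define NFP_IOCTL_CPP_IDENTIFICATION _IOW(NFP_IOCTL, 0x8f, uint32_t)

int main(void)
{
	struct sockaddr_un addr;
	uint32_t op = NFP_BRIDGE_OP_IOCTL;
	uint32_t cmd = NFP_IOCTL_CPP_IDENTIFICATION;
	uint32_t ident_size = 8;
	uint32_t model, interface;
	int fd;

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0)
		return 1;

	memset(&addr, 0, sizeof(addr));
	addr.sun_family = AF_UNIX;
	strcpy(addr.sun_path, "/tmp/nfp_cpp");
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	/* Opcode first, then the ioctl command word and identification size */
	send(fd, &op, 4, 0);
	send(fd, &cmd, 4, 0);
	send(fd, &ident_size, 4, 0);

	/* The bridge replies with the NFP model and the CPP interface ID */
	recv(fd, &model, 4, MSG_WAITALL);
	recv(fd, &interface, 4, MSG_WAITALL);
	printf("NFP model %08x, CPP interface %08x\n", model, interface);

	close(fd);
	return 0;
}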
From patchwork Fri Jul 16 08:35:43 2021 X-Patchwork-Submitter: Heinrich Kuhn X-Patchwork-Id: 95962 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Heinrich Kuhn To: dev@dpdk.org Cc: Heinrich Kuhn , Simon Horman Date: Fri, 16 Jul 2021 10:35:43 +0200 Message-Id: <20210716083545.34444-5-heinrich.kuhn@netronome.com> In-Reply-To: <20210716083545.34444-1-heinrich.kuhn@netronome.com> References: <20210716082314.33865-1-heinrich.kuhn@netronome.com> <20210716083545.34444-1-heinrich.kuhn@netronome.com> Subject: [dpdk-dev] [PATCH v2 4/7] net/nfp: prototype common functions in header file The majority of "ethdev" type functions are used for both PF devices and VF devices. Prototype these functions in the nfp_net_pmd header file in preparation for splitting PF and VF specific functions.
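For illustration, a sketch of the kind of reuse this enables (hypothetical function, not part of this patch; it only calls helpers that this series exposes in nfp_net_pmd.h):

#include "nfp_net_pmd.h"

static int
nfp_netvf_example_configure(struct rte_eth_dev *dev)
{
	struct nfp_net_hw *hw;
	int ret;

	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);

	/* Common ethdev configuration checks, previously static in nfp_net.c */
	ret = nfp_net_configure(dev);
	if (ret < 0)
		return ret;

	/* Shared MTU/freelist buffer size and queue config pointer setup */
	nfp_net_params_setup(hw);
	nfp_net_cfg_queue_setup(hw);

	return 0;
}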
Signed-off-by: Heinrich Kuhn Signed-off-by: Simon Horman --- drivers/net/nfp/nfp_net.c | 87 +++++++++++++---------------------- drivers/net/nfp/nfp_net_pmd.h | 49 ++++++++++++++++++++ 2 files changed, 81 insertions(+), 55 deletions(-) diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c index d79c70c5b7..da35bba4ef 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_net.c @@ -53,35 +53,12 @@ /* Prototypes */ static int nfp_net_close(struct rte_eth_dev *dev); -static int nfp_net_configure(struct rte_eth_dev *dev); -static void nfp_net_dev_interrupt_handler(void *param); -static void nfp_net_dev_interrupt_delayed_handler(void *param); -static int nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu); -static int nfp_net_infos_get(struct rte_eth_dev *dev, - struct rte_eth_dev_info *dev_info); static int nfp_net_init(struct rte_eth_dev *eth_dev); static int nfp_pf_init(struct rte_pci_device *pci_dev); static int nfp_pf_secondary_init(struct rte_pci_device *pci_dev); static int nfp_pci_uninit(struct rte_eth_dev *eth_dev); static int nfp_init_phyports(struct nfp_pf_dev *pf_dev); -static int nfp_net_link_update(struct rte_eth_dev *dev, int wait_to_complete); -static int nfp_net_promisc_enable(struct rte_eth_dev *dev); -static int nfp_net_promisc_disable(struct rte_eth_dev *dev); -static int nfp_net_start(struct rte_eth_dev *dev); -static int nfp_net_stats_get(struct rte_eth_dev *dev, - struct rte_eth_stats *stats); -static int nfp_net_stats_reset(struct rte_eth_dev *dev); static int nfp_net_stop(struct rte_eth_dev *dev); -static int nfp_net_rss_config_default(struct rte_eth_dev *dev); -static int nfp_net_rss_hash_update(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf); -static int nfp_net_rss_reta_write(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); -static int nfp_net_rss_hash_write(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf); -static int nfp_set_mac_addr(struct rte_eth_dev *dev, - struct rte_ether_addr *mac_addr); static int nfp_fw_setup(struct rte_pci_device *dev, struct nfp_cpp *cpp, struct nfp_eth_table *nfp_eth_table, @@ -136,7 +113,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) * Write the update word to the BAR and ping the reconfig queue. Then poll * until the firmware has acknowledged the update by zeroing the update word. */ -static int +int nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update) { uint32_t err; @@ -172,7 +149,7 @@ nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update) * before any other function in the Ethernet API. This function can * also be re-invoked when a device is in the stopped state. 
*/ -static int +int nfp_net_configure(struct rte_eth_dev *dev) { struct rte_eth_conf *dev_conf; @@ -215,7 +192,7 @@ nfp_net_configure(struct rte_eth_dev *dev) return 0; } -static void +void nfp_net_enable_queues(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; @@ -239,7 +216,7 @@ nfp_net_enable_queues(struct rte_eth_dev *dev) nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, enabled_queues); } -static void +void nfp_net_disable_queues(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; @@ -264,14 +241,14 @@ nfp_net_disable_queues(struct rte_eth_dev *dev) hw->ctrl = new_ctrl; } -static void +void nfp_net_params_setup(struct nfp_net_hw *hw) { nn_cfg_writel(hw, NFP_NET_CFG_MTU, hw->mtu); nn_cfg_writel(hw, NFP_NET_CFG_FLBUFSZ, hw->flbufsz); } -static void +void nfp_net_cfg_queue_setup(struct nfp_net_hw *hw) { hw->qcp_cfg = hw->tx_bar + NFP_QCP_QUEUE_ADDR_SZ; @@ -279,7 +256,7 @@ nfp_net_cfg_queue_setup(struct nfp_net_hw *hw) #define ETH_ADDR_LEN 6 -static void +void nfp_eth_copy_mac(uint8_t *dst, const uint8_t *src) { int i; @@ -318,7 +295,7 @@ nfp_net_vf_read_mac(struct nfp_net_hw *hw) memcpy(&hw->mac_addr[4], &tmp, 2); } -static void +void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac) { uint32_t mac0 = *(uint32_t *)mac; @@ -366,7 +343,7 @@ nfp_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr) return 0; } -static int +int nfp_configure_rx_interrupt(struct rte_eth_dev *dev, struct rte_intr_handle *intr_handle) { @@ -410,7 +387,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, return 0; } -static uint32_t +uint32_t nfp_check_offloads(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; @@ -746,7 +723,7 @@ nfp_net_close(struct rte_eth_dev *dev) return 0; } -static int +int nfp_net_promisc_enable(struct rte_eth_dev *dev) { uint32_t new_ctrl, update = 0; @@ -783,7 +760,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev) return 0; } -static int +int nfp_net_promisc_disable(struct rte_eth_dev *dev) { uint32_t new_ctrl, update = 0; @@ -819,7 +796,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev) * Wait to complete is needed as it can take up to 9 seconds to get the Link * status. 
*/ -static int +int nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { struct nfp_net_hw *hw; @@ -869,7 +846,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) return ret; } -static int +int nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { int i; @@ -964,7 +941,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) return -EINVAL; } -static int +int nfp_net_stats_reset(struct rte_eth_dev *dev) { int i; @@ -1029,7 +1006,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev) return 0; } -static int +int nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { struct nfp_net_hw *hw; @@ -1123,7 +1100,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) return 0; } -static const uint32_t * +const uint32_t * nfp_net_supported_ptypes_get(struct rte_eth_dev *dev) { static const uint32_t ptypes[] = { @@ -1140,7 +1117,7 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev) return NULL; } -static int +int nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev; @@ -1160,7 +1137,7 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) return 0; } -static int +int nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev; @@ -1179,7 +1156,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) return 0; } -static void +void nfp_net_dev_link_status_print(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); @@ -1208,7 +1185,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev) * If MSI-X auto-masking is enabled clear the mask bit, otherwise * clear the ICR for the entry. */ -static void +void nfp_net_irq_unmask(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; @@ -1229,7 +1206,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev) } } -static void +void nfp_net_dev_interrupt_handler(void *param) { int64_t timeout; @@ -1272,7 +1249,7 @@ nfp_net_dev_interrupt_handler(void *param) * * @return void */ -static void +void nfp_net_dev_interrupt_delayed_handler(void *param) { struct rte_eth_dev *dev = (struct rte_eth_dev *)param; @@ -1286,7 +1263,7 @@ nfp_net_dev_interrupt_delayed_handler(void *param) nfp_net_irq_unmask(dev); } -static int +int nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { struct nfp_net_hw *hw; @@ -1321,7 +1298,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return 0; } -static int +int nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask) { uint32_t new_ctrl, update; @@ -1353,7 +1330,7 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask) return ret; } -static int +int nfp_net_rss_reta_write(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) @@ -1404,7 +1381,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, } /* Update Redirection Table(RETA) of Receive Side Scaling of Ethernet device */ -static int +int nfp_net_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) @@ -1430,7 +1407,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev, } /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. 
*/ -static int +int nfp_net_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) @@ -1477,7 +1454,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev, return 0; } -static int +int nfp_net_rss_hash_write(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { @@ -1527,7 +1504,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev, return 0; } -static int +int nfp_net_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { @@ -1563,7 +1540,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev, return 0; } -static int +int nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { @@ -1614,7 +1591,7 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, return 0; } -static int +int nfp_net_rss_config_default(struct rte_eth_dev *dev) { struct rte_eth_conf *dev_conf; diff --git a/drivers/net/nfp/nfp_net_pmd.h b/drivers/net/nfp/nfp_net_pmd.h index 9265496bf0..dc05e888df 100644 --- a/drivers/net/nfp/nfp_net_pmd.h +++ b/drivers/net/nfp/nfp_net_pmd.h @@ -349,6 +349,55 @@ nfp_qcp_read(uint8_t *q, enum nfp_qcp_ptr ptr) return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask; } +/* Prototypes for common NFP functions */ +int nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update); +int nfp_net_configure(struct rte_eth_dev *dev); +void nfp_net_enable_queues(struct rte_eth_dev *dev); +void nfp_net_disable_queues(struct rte_eth_dev *dev); +void nfp_net_params_setup(struct nfp_net_hw *hw); +void nfp_eth_copy_mac(uint8_t *dst, const uint8_t *src); +void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac); +int nfp_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); +int nfp_configure_rx_interrupt(struct rte_eth_dev *dev, + struct rte_intr_handle *intr_handle); +uint32_t nfp_check_offloads(struct rte_eth_dev *dev); +int nfp_net_promisc_enable(struct rte_eth_dev *dev); +int nfp_net_promisc_disable(struct rte_eth_dev *dev); +int nfp_net_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete); +int nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); +int nfp_net_stats_reset(struct rte_eth_dev *dev); +int nfp_net_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info); +const uint32_t *nfp_net_supported_ptypes_get(struct rte_eth_dev *dev); +int nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); +int nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); +void nfp_net_params_setup(struct nfp_net_hw *hw); +void nfp_net_cfg_queue_setup(struct nfp_net_hw *hw); +void nfp_eth_copy_mac(uint8_t *dst, const uint8_t *src); +void nfp_net_dev_link_status_print(struct rte_eth_dev *dev); +void nfp_net_irq_unmask(struct rte_eth_dev *dev); +void nfp_net_dev_interrupt_handler(void *param); +void nfp_net_dev_interrupt_delayed_handler(void *param); +int nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu); +int nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask); +int nfp_net_rss_reta_write(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int nfp_net_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int nfp_net_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int nfp_net_rss_hash_write(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +int nfp_net_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +int 
nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +int nfp_net_rss_config_default(struct rte_eth_dev *dev); + #define NFP_NET_DEV_PRIVATE_TO_HW(adapter)\ (&((struct nfp_net_adapter *)adapter)->hw)

From patchwork Fri Jul 16 08:35:44 2021
X-Patchwork-Submitter: Heinrich Kuhn
X-Patchwork-Id: 95963
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Heinrich Kuhn
To: dev@dpdk.org
Cc: Heinrich Kuhn , Simon Horman
Date: Fri, 16 Jul 2021 10:35:44 +0200
Message-Id: <20210716083545.34444-6-heinrich.kuhn@netronome.com>
In-Reply-To: <20210716083545.34444-1-heinrich.kuhn@netronome.com>
References: <20210716082314.33865-1-heinrich.kuhn@netronome.com> <20210716083545.34444-1-heinrich.kuhn@netronome.com>
Subject: [dpdk-dev] [PATCH v2 5/7] net/nfp: move VF functions into new file

Move any ethdev functionality specific to VF devices into a new file called nfp_ethdev_vf.c.

Signed-off-by: Heinrich Kuhn Signed-off-by: Simon Horman --- drivers/net/nfp/meson.build | 1 + drivers/net/nfp/nfp_ethdev_vf.c | 504 ++++++++++++++++++++++++++++++++ drivers/net/nfp/nfp_net.c | 42 +-- 3 files changed, 506 insertions(+), 41 deletions(-) create mode 100644 drivers/net/nfp/nfp_ethdev_vf.c
diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build index b46ac2d40f..34f4054b3c 100644 --- a/drivers/net/nfp/meson.build +++ b/drivers/net/nfp/meson.build @@ -21,4 +21,5 @@ sources = files( 'nfp_net.c', 'nfp_rxtx.c', 'nfp_cpp_bridge.c', + 'nfp_ethdev_vf.c', )
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c new file mode 100644 index 0000000000..223142c0ed --- /dev/null +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -0,0 +1,504 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2014-2021 Netronome Systems, Inc. + * All rights reserved. + * + * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation. + */ + +/* + * vim:shiftwidth=8:noexpandtab + * + * @file dpdk/pmd/nfp_ethdev_vf.c + * + * Netronome vNIC VF DPDK Poll-Mode Driver: Main entry point + */ + +#include "nfpcore/nfp_mip.h" +#include "nfpcore/nfp_rtsym.h" + +#include "nfp_net_pmd.h" +#include "nfp_rxtx.h" +#include "nfp_net_logs.h" +#include "nfp_net_ctrl.h" + +static void nfp_netvf_read_mac(struct nfp_net_hw *hw); +static int nfp_netvf_start(struct rte_eth_dev *dev); +static int nfp_netvf_stop(struct rte_eth_dev *dev); +static int nfp_netvf_set_link_up(struct rte_eth_dev *dev); +static int nfp_netvf_set_link_down(struct rte_eth_dev *dev); +static int nfp_netvf_close(struct rte_eth_dev *dev); +static int nfp_netvf_init(struct rte_eth_dev *eth_dev); +static int nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev); +static int eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev); +static int eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev); + +static void +nfp_netvf_read_mac(struct nfp_net_hw *hw) +{ + uint32_t tmp; + + tmp = rte_be_to_cpu_32(nn_cfg_readl(hw, NFP_NET_CFG_MACADDR)); + memcpy(&hw->mac_addr[0], &tmp, 4); + + tmp = rte_be_to_cpu_32(nn_cfg_readl(hw, NFP_NET_CFG_MACADDR + 4)); + memcpy(&hw->mac_addr[4], &tmp, 2); +} + +static int +nfp_netvf_start(struct rte_eth_dev *dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + uint32_t new_ctrl, update = 0; + struct nfp_net_hw *hw; + struct rte_eth_conf *dev_conf; + struct rte_eth_rxmode *rxmode; + uint32_t intr_vector; + int ret; + + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_LOG(DEBUG, "Start"); + + /* Disabling queues just in case... */ + nfp_net_disable_queues(dev); + + /* Enabling the required queues in the device */ + nfp_net_enable_queues(dev); + + /* check and configure queue intr-vector mapping */ + if (dev->data->dev_conf.intr_conf.rxq != 0) { + if (intr_handle->type == RTE_INTR_HANDLE_UIO) { + /* + * Better not to share LSC with RX interrupts.
+ * Unregistering LSC interrupt handler + */ + rte_intr_callback_unregister(&pci_dev->intr_handle, + nfp_net_dev_interrupt_handler, (void *)dev); + + if (dev->data->nb_rx_queues > 1) { + PMD_INIT_LOG(ERR, "PMD rx interrupt only " + "supports 1 queue with UIO"); + return -EIO; + } + } + intr_vector = dev->data->nb_rx_queues; + if (rte_intr_efd_enable(intr_handle, intr_vector)) + return -1; + + nfp_configure_rx_interrupt(dev, intr_handle); + update = NFP_NET_CFG_UPDATE_MSIX; + } + + rte_intr_enable(intr_handle); + + new_ctrl = nfp_check_offloads(dev); + + /* Writing configuration parameters in the device */ + nfp_net_params_setup(hw); + + dev_conf = &dev->data->dev_conf; + rxmode = &dev_conf->rxmode; + + if (rxmode->mq_mode & ETH_MQ_RX_RSS) { + nfp_net_rss_config_default(dev); + update |= NFP_NET_CFG_UPDATE_RSS; + new_ctrl |= NFP_NET_CFG_CTRL_RSS; + } + + /* Enable device */ + new_ctrl |= NFP_NET_CFG_CTRL_ENABLE; + + update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING; + + if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG) + new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG; + + nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl); + if (nfp_net_reconfig(hw, new_ctrl, update) < 0) + return -EIO; + + /* + * Allocating rte mbufs for configured rx queues. + * This requires queues being enabled before + */ + if (nfp_net_rx_freelist_setup(dev) < 0) { + ret = -ENOMEM; + goto error; + } + + hw->ctrl = new_ctrl; + + return 0; + +error: + /* + * An error returned by this function should mean the app + * exiting and then the system releasing all the memory + * allocated even memory coming from hugepages. + * + * The device could be enabled at this point with some queues + * ready for getting packets. This is true if the call to + * nfp_net_rx_freelist_setup() succeeds for some queues but + * fails for subsequent queues. + * + * This should make the app exiting but better if we tell the + * device first. + */ + nfp_net_disable_queues(dev); + + return ret; +} + +static int +nfp_netvf_stop(struct rte_eth_dev *dev) +{ + struct nfp_net_txq *this_tx_q; + struct nfp_net_rxq *this_rx_q; + int i; + + PMD_INIT_LOG(DEBUG, "Stop"); + + nfp_net_disable_queues(dev); + + /* Clear queues */ + for (i = 0; i < dev->data->nb_tx_queues; i++) { + this_tx_q = (struct nfp_net_txq *)dev->data->tx_queues[i]; + nfp_net_reset_tx_queue(this_tx_q); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + this_rx_q = (struct nfp_net_rxq *)dev->data->rx_queues[i]; + nfp_net_reset_rx_queue(this_rx_q); + } + + return 0; +} + +static int +nfp_netvf_set_link_up(struct rte_eth_dev *dev __rte_unused) +{ + return -ENOTSUP; +} + +/* Set the link down. */ +static int +nfp_netvf_set_link_down(struct rte_eth_dev *dev __rte_unused) +{ + return -ENOTSUP; +} + +/* Reset and stop device. The device can not be restarted. */ +static int +nfp_netvf_close(struct rte_eth_dev *dev) +{ + struct rte_pci_device *pci_dev; + struct nfp_net_txq *this_tx_q; + struct nfp_net_rxq *this_rx_q; + int i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + PMD_INIT_LOG(DEBUG, "Close"); + + pci_dev = RTE_ETH_DEV_TO_PCI(dev); + + /* + * We assume that the DPDK application is stopping all the + * threads/queues before calling the device close function. 
+ */ + + nfp_net_disable_queues(dev); + + /* Clear queues */ + for (i = 0; i < dev->data->nb_tx_queues; i++) { + this_tx_q = (struct nfp_net_txq *)dev->data->tx_queues[i]; + nfp_net_reset_tx_queue(this_tx_q); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + this_rx_q = (struct nfp_net_rxq *)dev->data->rx_queues[i]; + nfp_net_reset_rx_queue(this_rx_q); + } + + rte_intr_disable(&pci_dev->intr_handle); + + /* unregister callback func from eal lib */ + rte_intr_callback_unregister(&pci_dev->intr_handle, + nfp_net_dev_interrupt_handler, + (void *)dev); + + /* + * The ixgbe PMD driver disables the pcie master on the + * device. The i40e does not... + */ + + return 0; +} + +/* Initialise and register VF driver with DPDK Application */ +static const struct eth_dev_ops nfp_netvf_eth_dev_ops = { + .dev_configure = nfp_net_configure, + .dev_start = nfp_netvf_start, + .dev_stop = nfp_netvf_stop, + .dev_set_link_up = nfp_netvf_set_link_up, + .dev_set_link_down = nfp_netvf_set_link_down, + .dev_close = nfp_netvf_close, + .promiscuous_enable = nfp_net_promisc_enable, + .promiscuous_disable = nfp_net_promisc_disable, + .link_update = nfp_net_link_update, + .stats_get = nfp_net_stats_get, + .stats_reset = nfp_net_stats_reset, + .dev_infos_get = nfp_net_infos_get, + .dev_supported_ptypes_get = nfp_net_supported_ptypes_get, + .mtu_set = nfp_net_dev_mtu_set, + .mac_addr_set = nfp_set_mac_addr, + .vlan_offload_set = nfp_net_vlan_offload_set, + .reta_update = nfp_net_reta_update, + .reta_query = nfp_net_reta_query, + .rss_hash_update = nfp_net_rss_hash_update, + .rss_hash_conf_get = nfp_net_rss_hash_conf_get, + .rx_queue_setup = nfp_net_rx_queue_setup, + .rx_queue_release = nfp_net_rx_queue_release, + .tx_queue_setup = nfp_net_tx_queue_setup, + .tx_queue_release = nfp_net_tx_queue_release, + .rx_queue_intr_enable = nfp_rx_queue_intr_enable, + .rx_queue_intr_disable = nfp_rx_queue_intr_disable, +}; + +static int +nfp_netvf_init(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev; + struct nfp_net_hw *hw; + struct rte_ether_addr *tmp_ether_addr; + + uint64_t tx_bar_off = 0, rx_bar_off = 0; + uint32_t start_q; + int stride = 4; + int port = 0; + int err; + + PMD_INIT_FUNC_TRACE(); + + pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + + /* NFP can not handle DMA addresses requiring more than 40 bits */ + if (rte_mem_check_dma_mask(40)) { + RTE_LOG(ERR, PMD, "device %s can not be used:", + pci_dev->device.name); + RTE_LOG(ERR, PMD, "\trestricted dma mask to 40 bits!\n"); + return -ENODEV; + }; + + hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); + + eth_dev->dev_ops = &nfp_netvf_eth_dev_ops; + eth_dev->rx_queue_count = nfp_net_rx_queue_count; + eth_dev->rx_pkt_burst = &nfp_net_recv_pkts; + eth_dev->tx_pkt_burst = &nfp_net_xmit_pkts; + + /* For secondary processes, the primary has done all the work */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + rte_eth_copy_pci_info(eth_dev, pci_dev); + + hw->device_id = pci_dev->id.device_id; + hw->vendor_id = pci_dev->id.vendor_id; + hw->subsystem_device_id = pci_dev->id.subsystem_device_id; + hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id; + + PMD_INIT_LOG(DEBUG, "nfp_net: device (%u:%u) %u:%u:%u:%u", + pci_dev->id.vendor_id, pci_dev->id.device_id, + pci_dev->addr.domain, pci_dev->addr.bus, + pci_dev->addr.devid, pci_dev->addr.function); + + hw->ctrl_bar = (uint8_t *)pci_dev->mem_resource[0].addr; + if (hw->ctrl_bar == NULL) { + PMD_DRV_LOG(ERR, + "hw->ctrl_bar is NULL. 
BAR0 not configured"); + return -ENODEV; + } + + PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->ctrl_bar); + + hw->max_rx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_RXRINGS); + hw->max_tx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_TXRINGS); + + /* Work out where in the BAR the queues start. */ + switch (pci_dev->id.device_id) { + case PCI_DEVICE_ID_NFP6000_VF_NIC: + start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ); + tx_bar_off = (uint64_t)start_q * NFP_QCP_QUEUE_ADDR_SZ; + start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); + rx_bar_off = (uint64_t)start_q * NFP_QCP_QUEUE_ADDR_SZ; + break; + default: + PMD_DRV_LOG(ERR, "nfp_net: no device ID matching"); + err = -ENODEV; + goto dev_err_ctrl_map; + } + + PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off); + PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off); + + hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + + tx_bar_off; + hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + + rx_bar_off; + + PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p", + hw->ctrl_bar, hw->tx_bar, hw->rx_bar); + + nfp_net_cfg_queue_setup(hw); + + /* Get some of the read-only fields from the config BAR */ + hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION); + hw->cap = nn_cfg_readl(hw, NFP_NET_CFG_CAP); + hw->max_mtu = nn_cfg_readl(hw, NFP_NET_CFG_MAX_MTU); + hw->mtu = RTE_ETHER_MTU; + + /* VLAN insertion is incompatible with LSOv2 */ + if (hw->cap & NFP_NET_CFG_CTRL_LSO2) + hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN; + + if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2) + hw->rx_offset = NFP_NET_RX_OFFSET; + else + hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR); + + PMD_INIT_LOG(INFO, "VER: %u.%u, Maximum supported MTU: %d", + NFD_CFG_MAJOR_VERSION_of(hw->ver), + NFD_CFG_MINOR_VERSION_of(hw->ver), hw->max_mtu); + + PMD_INIT_LOG(INFO, "CAP: %#x, %s%s%s%s%s%s%s%s%s%s%s%s%s%s", hw->cap, + hw->cap & NFP_NET_CFG_CTRL_PROMISC ? "PROMISC " : "", + hw->cap & NFP_NET_CFG_CTRL_L2BC ? "L2BCFILT " : "", + hw->cap & NFP_NET_CFG_CTRL_L2MC ? "L2MCFILT " : "", + hw->cap & NFP_NET_CFG_CTRL_RXCSUM ? "RXCSUM " : "", + hw->cap & NFP_NET_CFG_CTRL_TXCSUM ? "TXCSUM " : "", + hw->cap & NFP_NET_CFG_CTRL_RXVLAN ? "RXVLAN " : "", + hw->cap & NFP_NET_CFG_CTRL_TXVLAN ? "TXVLAN " : "", + hw->cap & NFP_NET_CFG_CTRL_SCATTER ? "SCATTER " : "", + hw->cap & NFP_NET_CFG_CTRL_GATHER ? "GATHER " : "", + hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR ? "LIVE_ADDR " : "", + hw->cap & NFP_NET_CFG_CTRL_LSO ? "TSO " : "", + hw->cap & NFP_NET_CFG_CTRL_LSO2 ? "TSOv2 " : "", + hw->cap & NFP_NET_CFG_CTRL_RSS ? "RSS " : "", + hw->cap & NFP_NET_CFG_CTRL_RSS2 ? 
"RSSv2 " : ""); + + hw->ctrl = 0; + + hw->stride_rx = stride; + hw->stride_tx = stride; + + PMD_INIT_LOG(INFO, "max_rx_queues: %u, max_tx_queues: %u", + hw->max_rx_queues, hw->max_tx_queues); + + /* Initializing spinlock for reconfigs */ + rte_spinlock_init(&hw->reconfig_lock); + + /* Allocating memory for mac addr */ + eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", + RTE_ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Failed to space for MAC address"); + err = -ENOMEM; + goto dev_err_queues_map; + } + + nfp_netvf_read_mac(hw); + + tmp_ether_addr = (struct rte_ether_addr *)&hw->mac_addr; + if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) { + PMD_INIT_LOG(INFO, "Using random mac address for port %d", + port); + /* Using random mac addresses for VFs */ + rte_eth_random_addr(&hw->mac_addr[0]); + nfp_net_write_mac(hw, (uint8_t *)&hw->mac_addr); + } + + /* Copying mac address to DPDK eth_dev struct */ + rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, + ð_dev->data->mac_addrs[0]); + + if (!(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR)) + eth_dev->data->dev_flags |= RTE_ETH_DEV_NOLIVE_MAC_ADDR; + + eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; + + PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " + "mac=%02x:%02x:%02x:%02x:%02x:%02x", + eth_dev->data->port_id, pci_dev->id.vendor_id, + pci_dev->id.device_id, + hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2], + hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]); + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Registering LSC interrupt handler */ + rte_intr_callback_register(&pci_dev->intr_handle, + nfp_net_dev_interrupt_handler, + (void *)eth_dev); + /* Telling the firmware about the LSC interrupt entry */ + nn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX); + /* Recording current stats counters values */ + nfp_net_stats_reset(eth_dev); + } + + return 0; + +dev_err_queues_map: + nfp_cpp_area_free(hw->hwqueues_area); +dev_err_ctrl_map: + nfp_cpp_area_free(hw->ctrl_area); + + return err; +} + +static const struct rte_pci_id pci_id_nfp_vf_net_map[] = { + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, + PCI_DEVICE_ID_NFP6000_VF_NIC) + }, + { + .vendor_id = 0, + }, +}; + +static int nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev) +{ + /* VF cleanup, just free private port data */ + return nfp_netvf_close(eth_dev); +} + +static int eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_probe(pci_dev, + sizeof(struct nfp_net_adapter), nfp_netvf_init); +} + +static int eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit); +} + +static struct rte_pci_driver rte_nfp_net_vf_pmd = { + .id_table = pci_id_nfp_vf_net_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = eth_nfp_vf_pci_probe, + .remove = eth_nfp_vf_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_nfp_vf, rte_nfp_net_vf_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_nfp_vf, pci_id_nfp_vf_net_map); +RTE_PMD_REGISTER_KMOD_DEP(net_nfp_vf, "* igb_uio | uio_pci_generic | vfio"); +/* + * Local variables: + * c-file-style: "Linux" + * indent-tabs-mode: t + * End: + */ diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c index da35bba4ef..a1722886ba 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_net.c @@ -56,6 +56,7 @@ static int nfp_net_close(struct rte_eth_dev *dev); static int nfp_net_init(struct rte_eth_dev *eth_dev); 
static int nfp_pf_init(struct rte_pci_device *pci_dev); static int nfp_pf_secondary_init(struct rte_pci_device *pci_dev); +static int nfp_net_pf_read_mac(struct nfp_pf_dev *pf_dev, int port); static int nfp_pci_uninit(struct rte_eth_dev *eth_dev); static int nfp_init_phyports(struct nfp_pf_dev *pf_dev); static int nfp_net_stop(struct rte_eth_dev *dev);
@@ -283,18 +284,6 @@ nfp_net_pf_read_mac(struct nfp_pf_dev *pf_dev, int port) return 0; } -static void -nfp_net_vf_read_mac(struct nfp_net_hw *hw) -{ - uint32_t tmp; - - tmp = rte_be_to_cpu_32(nn_cfg_readl(hw, NFP_NET_CFG_MACADDR)); - memcpy(&hw->mac_addr[0], &tmp, 4); - - tmp = rte_be_to_cpu_32(nn_cfg_readl(hw, NFP_NET_CFG_MACADDR + 4)); - memcpy(&hw->mac_addr[4], &tmp, 2); -} - void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac) {
@@ -1852,8 +1841,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev) if (hw->is_phyport) { nfp_net_pf_read_mac(pf_dev, port); nfp_net_write_mac(hw, (uint8_t *)&hw->mac_addr); - } else { - nfp_net_vf_read_mac(hw); } if (!rte_is_valid_assigned_ether_addr(
@@ -2369,16 +2356,6 @@ static const struct rte_pci_id pci_id_nfp_pf_net_map[] = { }, }; -static const struct rte_pci_id pci_id_nfp_vf_net_map[] = { - { - RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP6000_VF_NIC) - }, - { - .vendor_id = 0, - }, -}; - static int nfp_pci_uninit(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev;
@@ -2402,13 +2379,6 @@ static int nfp_pci_uninit(struct rte_eth_dev *eth_dev) return nfp_net_close(eth_dev); } -static int eth_nfp_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, - struct rte_pci_device *pci_dev) -{ - return rte_eth_dev_pci_generic_probe(pci_dev, - sizeof(struct nfp_net_adapter), nfp_net_init); -} - static int eth_nfp_pci_remove(struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_remove(pci_dev, nfp_pci_uninit); }
@@ -2421,19 +2391,9 @@ static struct rte_pci_driver rte_nfp_net_pf_pmd = { .remove = eth_nfp_pci_remove, }; -static struct rte_pci_driver rte_nfp_net_vf_pmd = { - .id_table = pci_id_nfp_vf_net_map, - .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, - .probe = eth_nfp_pci_probe, - .remove = eth_nfp_pci_remove, -}; - RTE_PMD_REGISTER_PCI(net_nfp_pf, rte_nfp_net_pf_pmd); -RTE_PMD_REGISTER_PCI(net_nfp_vf, rte_nfp_net_vf_pmd); RTE_PMD_REGISTER_PCI_TABLE(net_nfp_pf, pci_id_nfp_pf_net_map); -RTE_PMD_REGISTER_PCI_TABLE(net_nfp_vf, pci_id_nfp_vf_net_map); RTE_PMD_REGISTER_KMOD_DEP(net_nfp_pf, "* igb_uio | uio_pci_generic | vfio"); -RTE_PMD_REGISTER_KMOD_DEP(net_nfp_vf, "* igb_uio | uio_pci_generic | vfio"); RTE_LOG_REGISTER_SUFFIX(nfp_logtype_init, init, NOTICE); RTE_LOG_REGISTER_SUFFIX(nfp_logtype_driver, driver, NOTICE); /*

From patchwork Fri Jul 16 08:35:45 2021
X-Patchwork-Submitter: Heinrich Kuhn
X-Patchwork-Id: 95964
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Heinrich Kuhn
To: dev@dpdk.org
Cc: Heinrich Kuhn , Simon Horman
Date: Fri, 16 Jul 2021 10:35:45 +0200
Message-Id: <20210716083545.34444-7-heinrich.kuhn@netronome.com>
In-Reply-To: <20210716083545.34444-1-heinrich.kuhn@netronome.com>
References: <20210716082314.33865-1-heinrich.kuhn@netronome.com> <20210716083545.34444-1-heinrich.kuhn@netronome.com>
Subject: [dpdk-dev] [PATCH v2 6/7] net/nfp: move PF functions into new file

Similar to the last commit, this changeset moves all the PF specific functions to a new file called nfp_ethdev.c.
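One structural point that recurs in the PF code below: the PF maps a single control BAR, and each physical port's registers occupy a fixed-size slice of it, which is how nfp_net_init() derives a per-port ctrl_bar. A rough sketch of that addressing, assuming the NFP_PF_CSR_SLICE_SIZE constant and the pf_dev fields used in this series (the helper name itself is hypothetical):

/* Hypothetical helper: port N's control registers live at a fixed
 * offset inside the PF's single mapped ctrl BAR; port 0 is the base. */
static inline uint8_t *
nfp_port_ctrl_bar_sketch(struct nfp_pf_dev *pf_dev, int port)
{
	return pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE);
}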
Signed-off-by: Heinrich Kuhn Signed-off-by: Simon Horman --- drivers/net/nfp/meson.build | 1 + drivers/net/nfp/nfp_ethdev.c | 1099 ++++++++++++++++++++++++++++++++++ drivers/net/nfp/nfp_net.c | 1088 +-------------------------------- 3 files changed, 1103 insertions(+), 1085 deletions(-) create mode 100644 drivers/net/nfp/nfp_ethdev.c diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build index 34f4054b3c..ab64d0cac3 100644 --- a/drivers/net/nfp/meson.build +++ b/drivers/net/nfp/meson.build @@ -22,4 +22,5 @@ sources = files( 'nfp_rxtx.c', 'nfp_cpp_bridge.c', 'nfp_ethdev_vf.c', + 'nfp_ethdev.c', ) diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c new file mode 100644 index 0000000000..ab08906704 --- /dev/null +++ b/drivers/net/nfp/nfp_ethdev.c @@ -0,0 +1,1099 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2014-2021 Netronome Systems, Inc. + * All rights reserved. + * + * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation. + */ + +/* + * vim:shiftwidth=8:noexpandtab + * + * @file dpdk/pmd/nfp_ethdev.c + * + * Netronome vNIC DPDK Poll-Mode Driver: Main entry point + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "nfpcore/nfp_cpp.h" +#include "nfpcore/nfp_nffw.h" +#include "nfpcore/nfp_hwinfo.h" +#include "nfpcore/nfp_mip.h" +#include "nfpcore/nfp_rtsym.h" +#include "nfpcore/nfp_nsp.h" + +#include "nfp_net_pmd.h" +#include "nfp_rxtx.h" +#include "nfp_net_logs.h" +#include "nfp_net_ctrl.h" +#include "nfp_cpp_bridge.h" + + +static int nfp_net_pf_read_mac(struct nfp_pf_dev *pf_dev, int port); +static int nfp_net_start(struct rte_eth_dev *dev); +static int nfp_net_stop(struct rte_eth_dev *dev); +static int nfp_net_set_link_up(struct rte_eth_dev *dev); +static int nfp_net_set_link_down(struct rte_eth_dev *dev); +static int nfp_net_close(struct rte_eth_dev *dev); +static int nfp_net_init(struct rte_eth_dev *eth_dev); +static int nfp_fw_upload(struct rte_pci_device *dev, + struct nfp_nsp *nsp, char *card); +static int nfp_fw_setup(struct rte_pci_device *dev, + struct nfp_cpp *cpp, + struct nfp_eth_table *nfp_eth_table, + struct nfp_hwinfo *hwinfo); +static int nfp_init_phyports(struct nfp_pf_dev *pf_dev); +static int nfp_pf_init(struct rte_pci_device *pci_dev); +static int nfp_pf_secondary_init(struct rte_pci_device *pci_dev); +static int nfp_pf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *dev); +static int nfp_pci_uninit(struct rte_eth_dev *eth_dev); +static int eth_nfp_pci_remove(struct rte_pci_device *pci_dev); + +static int +nfp_net_pf_read_mac(struct nfp_pf_dev *pf_dev, int port) +{ + struct nfp_eth_table *nfp_eth_table; + struct nfp_net_hw *hw = NULL; + + /* Grab a pointer to the correct physical port */ + hw = pf_dev->ports[port]; + + nfp_eth_table = nfp_eth_read_ports(pf_dev->cpp); + + nfp_eth_copy_mac((uint8_t *)&hw->mac_addr, + (uint8_t *)&nfp_eth_table->ports[port].mac_addr); + + free(nfp_eth_table); + return 0; +} + +static int +nfp_net_start(struct rte_eth_dev *dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + uint32_t new_ctrl, update = 0; + struct nfp_net_hw *hw; + struct nfp_pf_dev *pf_dev; + struct rte_eth_conf *dev_conf; + struct rte_eth_rxmode *rxmode; + uint32_t intr_vector; + int ret; + + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private); + + PMD_INIT_LOG(DEBUG, 
"Start"); + + /* Disabling queues just in case... */ + nfp_net_disable_queues(dev); + + /* Enabling the required queues in the device */ + nfp_net_enable_queues(dev); + + /* check and configure queue intr-vector mapping */ + if (dev->data->dev_conf.intr_conf.rxq != 0) { + if (pf_dev->multiport) { + PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported " + "with NFP multiport PF"); + return -EINVAL; + } + if (intr_handle->type == RTE_INTR_HANDLE_UIO) { + /* + * Better not to share LSC with RX interrupts. + * Unregistering LSC interrupt handler + */ + rte_intr_callback_unregister(&pci_dev->intr_handle, + nfp_net_dev_interrupt_handler, (void *)dev); + + if (dev->data->nb_rx_queues > 1) { + PMD_INIT_LOG(ERR, "PMD rx interrupt only " + "supports 1 queue with UIO"); + return -EIO; + } + } + intr_vector = dev->data->nb_rx_queues; + if (rte_intr_efd_enable(intr_handle, intr_vector)) + return -1; + + nfp_configure_rx_interrupt(dev, intr_handle); + update = NFP_NET_CFG_UPDATE_MSIX; + } + + rte_intr_enable(intr_handle); + + new_ctrl = nfp_check_offloads(dev); + + /* Writing configuration parameters in the device */ + nfp_net_params_setup(hw); + + dev_conf = &dev->data->dev_conf; + rxmode = &dev_conf->rxmode; + + if (rxmode->mq_mode & ETH_MQ_RX_RSS) { + nfp_net_rss_config_default(dev); + update |= NFP_NET_CFG_UPDATE_RSS; + new_ctrl |= NFP_NET_CFG_CTRL_RSS; + } + + /* Enable device */ + new_ctrl |= NFP_NET_CFG_CTRL_ENABLE; + + update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING; + + if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG) + new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG; + + nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl); + if (nfp_net_reconfig(hw, new_ctrl, update) < 0) + return -EIO; + + /* + * Allocating rte mbufs for configured rx queues. + * This requires queues being enabled before + */ + if (nfp_net_rx_freelist_setup(dev) < 0) { + ret = -ENOMEM; + goto error; + } + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + /* Configure the physical port up */ + nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1); + else + nfp_eth_set_configured(dev->process_private, + hw->nfp_idx, 1); + + hw->ctrl = new_ctrl; + + return 0; + +error: + /* + * An error returned by this function should mean the app + * exiting and then the system releasing all the memory + * allocated even memory coming from hugepages. + * + * The device could be enabled at this point with some queues + * ready for getting packets. This is true if the call to + * nfp_net_rx_freelist_setup() succeeds for some queues but + * fails for subsequent queues. + * + * This should make the app exiting but better if we tell the + * device first. + */ + nfp_net_disable_queues(dev); + + return ret; +} + +/* Stop device: disable rx and tx functions to allow for reconfiguring. 
*/ +static int +nfp_net_stop(struct rte_eth_dev *dev) +{ + int i; + struct nfp_net_hw *hw; + struct nfp_net_txq *this_tx_q; + struct nfp_net_rxq *this_rx_q; + + PMD_INIT_LOG(DEBUG, "Stop"); + + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + nfp_net_disable_queues(dev); + + /* Clear queues */ + for (i = 0; i < dev->data->nb_tx_queues; i++) { + this_tx_q = (struct nfp_net_txq *)dev->data->tx_queues[i]; + nfp_net_reset_tx_queue(this_tx_q); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + this_rx_q = (struct nfp_net_rxq *)dev->data->rx_queues[i]; + nfp_net_reset_rx_queue(this_rx_q); + } + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + /* Configure the physical port down */ + nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0); + else + nfp_eth_set_configured(dev->process_private, + hw->nfp_idx, 0); + + return 0; +} + +/* Set the link up. */ +static int +nfp_net_set_link_up(struct rte_eth_dev *dev) +{ + struct nfp_net_hw *hw; + + PMD_DRV_LOG(DEBUG, "Set link up"); + + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + /* Configure the physical port up */ + return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1); + else + return nfp_eth_set_configured(dev->process_private, + hw->nfp_idx, 1); +} + +/* Set the link down. */ +static int +nfp_net_set_link_down(struct rte_eth_dev *dev) +{ + struct nfp_net_hw *hw; + + PMD_DRV_LOG(DEBUG, "Set link down"); + + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + /* Configure the physical port down */ + return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0); + else + return nfp_eth_set_configured(dev->process_private, + hw->nfp_idx, 0); +} + +/* Reset and stop device. The device can not be restarted. */ +static int +nfp_net_close(struct rte_eth_dev *dev) +{ + struct nfp_net_hw *hw; + struct rte_pci_device *pci_dev; + struct nfp_pf_dev *pf_dev; + struct nfp_net_txq *this_tx_q; + struct nfp_net_rxq *this_rx_q; + int i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + PMD_INIT_LOG(DEBUG, "Close"); + + pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private); + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + pci_dev = RTE_ETH_DEV_TO_PCI(dev); + + /* + * We assume that the DPDK application is stopping all the + * threads/queues before calling the device close function.
+ */ + + nfp_net_disable_queues(dev); + + /* Clear queues */ + for (i = 0; i < dev->data->nb_tx_queues; i++) { + this_tx_q = (struct nfp_net_txq *)dev->data->tx_queues[i]; + nfp_net_reset_tx_queue(this_tx_q); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + this_rx_q = (struct nfp_net_rxq *)dev->data->rx_queues[i]; + nfp_net_reset_rx_queue(this_rx_q); + } + + /* Only free PF resources after all physical ports have been closed */ + /* Mark this port as unused and free device priv resources*/ + nn_cfg_writeb(hw, NFP_NET_CFG_LSC, 0xff); + pf_dev->ports[hw->idx] = NULL; + rte_eth_dev_release_port(dev); + + for (i = 0; i < pf_dev->total_phyports; i++) { + /* Check to see if ports are still in use */ + if (pf_dev->ports[i]) + return 0; + } + + /* Now it is safe to free all PF resources */ + PMD_INIT_LOG(INFO, "Freeing PF resources"); + nfp_cpp_area_free(pf_dev->ctrl_area); + nfp_cpp_area_free(pf_dev->hwqueues_area); + free(pf_dev->hwinfo); + free(pf_dev->sym_tbl); + nfp_cpp_free(pf_dev->cpp); + rte_free(pf_dev); + + rte_intr_disable(&pci_dev->intr_handle); + + /* unregister callback func from eal lib */ + rte_intr_callback_unregister(&pci_dev->intr_handle, + nfp_net_dev_interrupt_handler, + (void *)dev); + + /* + * The ixgbe PMD driver disables the pcie master on the + * device. The i40e does not... + */ + + return 0; +} + +/* Initialise and register driver with DPDK Application */ +static const struct eth_dev_ops nfp_net_eth_dev_ops = { + .dev_configure = nfp_net_configure, + .dev_start = nfp_net_start, + .dev_stop = nfp_net_stop, + .dev_set_link_up = nfp_net_set_link_up, + .dev_set_link_down = nfp_net_set_link_down, + .dev_close = nfp_net_close, + .promiscuous_enable = nfp_net_promisc_enable, + .promiscuous_disable = nfp_net_promisc_disable, + .link_update = nfp_net_link_update, + .stats_get = nfp_net_stats_get, + .stats_reset = nfp_net_stats_reset, + .dev_infos_get = nfp_net_infos_get, + .dev_supported_ptypes_get = nfp_net_supported_ptypes_get, + .mtu_set = nfp_net_dev_mtu_set, + .mac_addr_set = nfp_set_mac_addr, + .vlan_offload_set = nfp_net_vlan_offload_set, + .reta_update = nfp_net_reta_update, + .reta_query = nfp_net_reta_query, + .rss_hash_update = nfp_net_rss_hash_update, + .rss_hash_conf_get = nfp_net_rss_hash_conf_get, + .rx_queue_setup = nfp_net_rx_queue_setup, + .rx_queue_release = nfp_net_rx_queue_release, + .tx_queue_setup = nfp_net_tx_queue_setup, + .tx_queue_release = nfp_net_tx_queue_release, + .rx_queue_intr_enable = nfp_rx_queue_intr_enable, + .rx_queue_intr_disable = nfp_rx_queue_intr_disable, +}; + +static int +nfp_net_init(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev; + struct nfp_pf_dev *pf_dev; + struct nfp_net_hw *hw; + struct rte_ether_addr *tmp_ether_addr; + + uint64_t tx_bar_off = 0, rx_bar_off = 0; + uint32_t start_q; + int stride = 4; + int port = 0; + int err; + + PMD_INIT_FUNC_TRACE(); + + pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + + /* Use backpointer here to the PF of this eth_dev */ + pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(eth_dev->data->dev_private); + + /* NFP can not handle DMA addresses requiring more than 40 bits */ + if (rte_mem_check_dma_mask(40)) { + RTE_LOG(ERR, PMD, "device %s can not be used:", + pci_dev->device.name); + RTE_LOG(ERR, PMD, "\trestricted dma mask to 40 bits!\n"); + return -ENODEV; + }; + + port = ((struct nfp_net_hw *)eth_dev->data->dev_private)->idx; + if (port < 0 || port > 7) { + PMD_DRV_LOG(ERR, "Port value is wrong"); + return -ENODEV; + } + + /* Use PF array of physical ports to get pointer to + * this 
specific port + */ + hw = pf_dev->ports[port]; + + PMD_INIT_LOG(DEBUG, "Working with physical port number: %d, " + "NFP internal port number: %d", + port, hw->nfp_idx); + + eth_dev->dev_ops = &nfp_net_eth_dev_ops; + eth_dev->rx_queue_count = nfp_net_rx_queue_count; + eth_dev->rx_pkt_burst = &nfp_net_recv_pkts; + eth_dev->tx_pkt_burst = &nfp_net_xmit_pkts; + + /* For secondary processes, the primary has done all the work */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + rte_eth_copy_pci_info(eth_dev, pci_dev); + + hw->device_id = pci_dev->id.device_id; + hw->vendor_id = pci_dev->id.vendor_id; + hw->subsystem_device_id = pci_dev->id.subsystem_device_id; + hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id; + + PMD_INIT_LOG(DEBUG, "nfp_net: device (%u:%u) %u:%u:%u:%u", + pci_dev->id.vendor_id, pci_dev->id.device_id, + pci_dev->addr.domain, pci_dev->addr.bus, + pci_dev->addr.devid, pci_dev->addr.function); + + hw->ctrl_bar = (uint8_t *)pci_dev->mem_resource[0].addr; + if (hw->ctrl_bar == NULL) { + PMD_DRV_LOG(ERR, + "hw->ctrl_bar is NULL. BAR0 not configured"); + return -ENODEV; + } + + if (port == 0) { + hw->ctrl_bar = pf_dev->ctrl_bar; + } else { + if (!pf_dev->ctrl_bar) + return -ENODEV; + /* Use port offset in pf ctrl_bar for this + * ports control bar + */ + hw->ctrl_bar = pf_dev->ctrl_bar + + (port * NFP_PF_CSR_SLICE_SIZE); + } + + PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->ctrl_bar); + + hw->max_rx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_RXRINGS); + hw->max_tx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_TXRINGS); + + /* Work out where in the BAR the queues start. */ + switch (pci_dev->id.device_id) { + case PCI_DEVICE_ID_NFP4000_PF_NIC: + case PCI_DEVICE_ID_NFP6000_PF_NIC: + start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ); + tx_bar_off = (uint64_t)start_q * NFP_QCP_QUEUE_ADDR_SZ; + start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); + rx_bar_off = (uint64_t)start_q * NFP_QCP_QUEUE_ADDR_SZ; + break; + default: + PMD_DRV_LOG(ERR, "nfp_net: no device ID matching"); + err = -ENODEV; + goto dev_err_ctrl_map; + } + + PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off); + PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off); + + hw->tx_bar = pf_dev->hw_queues + tx_bar_off; + hw->rx_bar = pf_dev->hw_queues + rx_bar_off; + eth_dev->data->dev_private = hw; + + PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p", + hw->ctrl_bar, hw->tx_bar, hw->rx_bar); + + nfp_net_cfg_queue_setup(hw); + + /* Get some of the read-only fields from the config BAR */ + hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION); + hw->cap = nn_cfg_readl(hw, NFP_NET_CFG_CAP); + hw->max_mtu = nn_cfg_readl(hw, NFP_NET_CFG_MAX_MTU); + hw->mtu = RTE_ETHER_MTU; + + /* VLAN insertion is incompatible with LSOv2 */ + if (hw->cap & NFP_NET_CFG_CTRL_LSO2) + hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN; + + if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2) + hw->rx_offset = NFP_NET_RX_OFFSET; + else + hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR); + + PMD_INIT_LOG(INFO, "VER: %u.%u, Maximum supported MTU: %d", + NFD_CFG_MAJOR_VERSION_of(hw->ver), + NFD_CFG_MINOR_VERSION_of(hw->ver), hw->max_mtu); + + PMD_INIT_LOG(INFO, "CAP: %#x, %s%s%s%s%s%s%s%s%s%s%s%s%s%s", hw->cap, + hw->cap & NFP_NET_CFG_CTRL_PROMISC ? "PROMISC " : "", + hw->cap & NFP_NET_CFG_CTRL_L2BC ? "L2BCFILT " : "", + hw->cap & NFP_NET_CFG_CTRL_L2MC ? "L2MCFILT " : "", + hw->cap & NFP_NET_CFG_CTRL_RXCSUM ? "RXCSUM " : "", + hw->cap & NFP_NET_CFG_CTRL_TXCSUM ? "TXCSUM " : "", + hw->cap & NFP_NET_CFG_CTRL_RXVLAN ? 
"RXVLAN " : "", + hw->cap & NFP_NET_CFG_CTRL_TXVLAN ? "TXVLAN " : "", + hw->cap & NFP_NET_CFG_CTRL_SCATTER ? "SCATTER " : "", + hw->cap & NFP_NET_CFG_CTRL_GATHER ? "GATHER " : "", + hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR ? "LIVE_ADDR " : "", + hw->cap & NFP_NET_CFG_CTRL_LSO ? "TSO " : "", + hw->cap & NFP_NET_CFG_CTRL_LSO2 ? "TSOv2 " : "", + hw->cap & NFP_NET_CFG_CTRL_RSS ? "RSS " : "", + hw->cap & NFP_NET_CFG_CTRL_RSS2 ? "RSSv2 " : ""); + + hw->ctrl = 0; + + hw->stride_rx = stride; + hw->stride_tx = stride; + + PMD_INIT_LOG(INFO, "max_rx_queues: %u, max_tx_queues: %u", + hw->max_rx_queues, hw->max_tx_queues); + + /* Initializing spinlock for reconfigs */ + rte_spinlock_init(&hw->reconfig_lock); + + /* Allocating memory for mac addr */ + eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", + RTE_ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Failed to space for MAC address"); + err = -ENOMEM; + goto dev_err_queues_map; + } + + nfp_net_pf_read_mac(pf_dev, port); + nfp_net_write_mac(hw, (uint8_t *)&hw->mac_addr); + + tmp_ether_addr = (struct rte_ether_addr *)&hw->mac_addr; + if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) { + PMD_INIT_LOG(INFO, "Using random mac address for port %d", + port); + /* Using random mac addresses for VFs */ + rte_eth_random_addr(&hw->mac_addr[0]); + nfp_net_write_mac(hw, (uint8_t *)&hw->mac_addr); + } + + /* Copying mac address to DPDK eth_dev struct */ + rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, + ð_dev->data->mac_addrs[0]); + + if (!(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR)) + eth_dev->data->dev_flags |= RTE_ETH_DEV_NOLIVE_MAC_ADDR; + + eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; + + PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " + "mac=%02x:%02x:%02x:%02x:%02x:%02x", + eth_dev->data->port_id, pci_dev->id.vendor_id, + pci_dev->id.device_id, + hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2], + hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]); + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Registering LSC interrupt handler */ + rte_intr_callback_register(&pci_dev->intr_handle, + nfp_net_dev_interrupt_handler, + (void *)eth_dev); + /* Telling the firmware about the LSC interrupt entry */ + nn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX); + /* Recording current stats counters values */ + nfp_net_stats_reset(eth_dev); + } + + return 0; + +dev_err_queues_map: + nfp_cpp_area_free(hw->hwqueues_area); +dev_err_ctrl_map: + nfp_cpp_area_free(hw->ctrl_area); + + return err; +} + +#define DEFAULT_FW_PATH "/lib/firmware/netronome" + +static int +nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card) +{ + struct nfp_cpp *cpp = nsp->cpp; + int fw_f; + char *fw_buf; + char fw_name[125]; + char serial[40]; + struct stat file_stat; + off_t fsize, bytes; + + /* Looking for firmware file in order of priority */ + + /* First try to find a firmware image specific for this device */ + snprintf(serial, sizeof(serial), + "serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x", + cpp->serial[0], cpp->serial[1], cpp->serial[2], cpp->serial[3], + cpp->serial[4], cpp->serial[5], cpp->interface >> 8, + cpp->interface & 0xff); + + snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, + serial); + + PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name); + fw_f = open(fw_name, O_RDONLY); + if (fw_f >= 0) + goto read_fw; + + /* Then try the PCI name */ + snprintf(fw_name, sizeof(fw_name), "%s/pci-%s.nffw", DEFAULT_FW_PATH, + dev->device.name); + + PMD_DRV_LOG(DEBUG, 
"Trying with fw file: %s", fw_name); + fw_f = open(fw_name, O_RDONLY); + if (fw_f >= 0) + goto read_fw; + + /* Finally try the card type and media */ + snprintf(fw_name, sizeof(fw_name), "%s/%s", DEFAULT_FW_PATH, card); + PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name); + fw_f = open(fw_name, O_RDONLY); + if (fw_f < 0) { + PMD_DRV_LOG(INFO, "Firmware file %s not found.", fw_name); + return -ENOENT; + } + +read_fw: + if (fstat(fw_f, &file_stat) < 0) { + PMD_DRV_LOG(INFO, "Firmware file %s size is unknown", fw_name); + close(fw_f); + return -ENOENT; + } + + fsize = file_stat.st_size; + PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %" PRIu64 "", + fw_name, (uint64_t)fsize); + + fw_buf = malloc((size_t)fsize); + if (!fw_buf) { + PMD_DRV_LOG(INFO, "malloc failed for fw buffer"); + close(fw_f); + return -ENOMEM; + } + memset(fw_buf, 0, fsize); + + bytes = read(fw_f, fw_buf, fsize); + if (bytes != fsize) { + PMD_DRV_LOG(INFO, "Reading fw to buffer failed." + "Just %" PRIu64 " of %" PRIu64 " bytes read", + (uint64_t)bytes, (uint64_t)fsize); + free(fw_buf); + close(fw_f); + return -EIO; + } + + PMD_DRV_LOG(INFO, "Uploading the firmware ..."); + nfp_nsp_load_fw(nsp, fw_buf, bytes); + PMD_DRV_LOG(INFO, "Done"); + + free(fw_buf); + close(fw_f); + + return 0; +} + +static int +nfp_fw_setup(struct rte_pci_device *dev, struct nfp_cpp *cpp, + struct nfp_eth_table *nfp_eth_table, struct nfp_hwinfo *hwinfo) +{ + struct nfp_nsp *nsp; + const char *nfp_fw_model; + char card_desc[100]; + int err = 0; + + nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "assembly.partno"); + + if (nfp_fw_model) { + PMD_DRV_LOG(INFO, "firmware model found: %s", nfp_fw_model); + } else { + PMD_DRV_LOG(ERR, "firmware model NOT found"); + return -EIO; + } + + if (nfp_eth_table->count == 0 || nfp_eth_table->count > 8) { + PMD_DRV_LOG(ERR, "NFP ethernet table reports wrong ports: %u", + nfp_eth_table->count); + return -EIO; + } + + PMD_DRV_LOG(INFO, "NFP ethernet port table reports %u ports", + nfp_eth_table->count); + + PMD_DRV_LOG(INFO, "Port speed: %u", nfp_eth_table->ports[0].speed); + + snprintf(card_desc, sizeof(card_desc), "nic_%s_%dx%d.nffw", + nfp_fw_model, nfp_eth_table->count, + nfp_eth_table->ports[0].speed / 1000); + + nsp = nfp_nsp_open(cpp); + if (!nsp) { + PMD_DRV_LOG(ERR, "NFP error when obtaining NSP handle"); + return -EIO; + } + + nfp_nsp_device_soft_reset(nsp); + err = nfp_fw_upload(dev, nsp, card_desc); + + nfp_nsp_close(nsp); + return err; +} + +static int nfp_init_phyports(struct nfp_pf_dev *pf_dev) +{ + struct nfp_net_hw *hw; + struct rte_eth_dev *eth_dev; + struct nfp_eth_table *nfp_eth_table = NULL; + int ret = 0; + int i; + + nfp_eth_table = nfp_eth_read_ports(pf_dev->cpp); + if (!nfp_eth_table) { + PMD_INIT_LOG(ERR, "Error reading NFP ethernet table"); + ret = -EIO; + goto error; + } + + /* Loop through all physical ports on PF */ + for (i = 0; i < pf_dev->total_phyports; i++) { + const unsigned int numa_node = rte_socket_id(); + char port_name[RTE_ETH_NAME_MAX_LEN]; + + snprintf(port_name, sizeof(port_name), "%s_port%d", + pf_dev->pci_dev->device.name, i); + + /* Allocate a eth_dev for this phyport */ + eth_dev = rte_eth_dev_allocate(port_name); + if (!eth_dev) { + ret = -ENODEV; + goto port_cleanup; + } + + /* Allocate memory for this phyport */ + eth_dev->data->dev_private = + rte_zmalloc_socket(port_name, sizeof(struct nfp_net_hw), + RTE_CACHE_LINE_SIZE, numa_node); + if (!eth_dev->data->dev_private) { + ret = -ENOMEM; + rte_eth_dev_release_port(eth_dev); + goto port_cleanup; + } + + hw = 
NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); + + /* Add this device to the PF's array of physical ports */ + pf_dev->ports[i] = hw; + + hw->pf_dev = pf_dev; + hw->cpp = pf_dev->cpp; + hw->eth_dev = eth_dev; + hw->idx = i; + hw->nfp_idx = nfp_eth_table->ports[i].index; + hw->is_phyport = true; + + eth_dev->device = &pf_dev->pci_dev->device; + + /* ctrl/tx/rx BAR mappings and remaining init happen in + * nfp_net_init + */ + ret = nfp_net_init(eth_dev); + + if (ret) { + ret = -ENODEV; + goto port_cleanup; + } + + rte_eth_dev_probing_finish(eth_dev); + + } /* End loop, all ports on this PF */ + ret = 0; + goto eth_table_cleanup; + +port_cleanup: + for (i = 0; i < pf_dev->total_phyports; i++) { + if (pf_dev->ports[i] && pf_dev->ports[i]->eth_dev) { + struct rte_eth_dev *tmp_dev; + tmp_dev = pf_dev->ports[i]->eth_dev; + rte_eth_dev_release_port(tmp_dev); + pf_dev->ports[i] = NULL; + } + } +eth_table_cleanup: + free(nfp_eth_table); +error: + return ret; +} + +static int nfp_pf_init(struct rte_pci_device *pci_dev) +{ + struct nfp_pf_dev *pf_dev = NULL; + struct nfp_cpp *cpp; + struct nfp_hwinfo *hwinfo; + struct nfp_rtsym_table *sym_tbl; + struct nfp_eth_table *nfp_eth_table = NULL; + char name[RTE_ETH_NAME_MAX_LEN]; + int total_ports; + int ret = -ENODEV; + int err; + + if (!pci_dev) + return ret; + + /* + * When device bound to UIO, the device could be used, by mistake, + * by two DPDK apps, and the UIO driver does not avoid it. This + * could lead to a serious problem when configuring the NFP CPP + * interface. Here we avoid this by telling the CPP init code to + * use a lock file if UIO is being used. + */ + if (pci_dev->kdrv == RTE_PCI_KDRV_VFIO) + cpp = nfp_cpp_from_device_name(pci_dev, 0); + else + cpp = nfp_cpp_from_device_name(pci_dev, 1); + + if (!cpp) { + PMD_INIT_LOG(ERR, "A CPP handle can not be obtained"); + ret = -EIO; + goto error; + } + + hwinfo = nfp_hwinfo_read(cpp); + if (!hwinfo) { + PMD_INIT_LOG(ERR, "Error reading hwinfo table"); + ret = -EIO; + goto error; + } + + nfp_eth_table = nfp_eth_read_ports(cpp); + if (!nfp_eth_table) { + PMD_INIT_LOG(ERR, "Error reading NFP ethernet table"); + ret = -EIO; + goto hwinfo_cleanup; + } + + if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo)) { + PMD_INIT_LOG(ERR, "Error when uploading firmware"); + ret = -EIO; + goto eth_table_cleanup; + } + + /* Now the symbol table should be there */ + sym_tbl = nfp_rtsym_table_read(cpp); + if (!sym_tbl) { + PMD_INIT_LOG(ERR, "Something is wrong with the firmware" + " symbol table"); + ret = -EIO; + goto eth_table_cleanup; + } + + total_ports = nfp_rtsym_read_le(sym_tbl, "nfd_cfg_pf0_num_ports", &err); + if (total_ports != (int)nfp_eth_table->count) { + PMD_DRV_LOG(ERR, "Inconsistent number of ports"); + ret = -EIO; + goto sym_tbl_cleanup; + } + + PMD_INIT_LOG(INFO, "Total physical ports: %d", total_ports); + + if (total_ports <= 0 || total_ports > 8) { + PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value"); + ret = -ENODEV; + goto sym_tbl_cleanup; + } + /* Allocate memory for the PF "device" */ + snprintf(name, sizeof(name), "nfp_pf%d", 0); + pf_dev = rte_zmalloc(name, sizeof(*pf_dev), 0); + if (!pf_dev) { + ret = -ENOMEM; + goto sym_tbl_cleanup; + } + + /* Populate the newly created PF device */ + pf_dev->cpp = cpp; + pf_dev->hwinfo = hwinfo; + pf_dev->sym_tbl = sym_tbl; + pf_dev->total_phyports = total_ports; + + if (total_ports > 1) + pf_dev->multiport = true; + + pf_dev->pci_dev = pci_dev; + + /* Map the symbol table */ + pf_dev->ctrl_bar = 
nfp_rtsym_map(pf_dev->sym_tbl, "_pf0_net_bar0", + pf_dev->total_phyports * 32768, + &pf_dev->ctrl_area); + if (!pf_dev->ctrl_bar) { + PMD_INIT_LOG(ERR, "nfp_rtsym_map fails for _pf0_net_bar0"); + ret = -EIO; + goto pf_cleanup; + } + + PMD_INIT_LOG(DEBUG, "ctrl bar: %p", pf_dev->ctrl_bar); + + /* configure access to tx/rx vNIC BARs */ + pf_dev->hw_queues = nfp_cpp_map_area(pf_dev->cpp, 0, 0, + NFP_PCIE_QUEUE(0), + NFP_QCP_QUEUE_AREA_SZ, + &pf_dev->hwqueues_area); + if (!pf_dev->hw_queues) { + PMD_INIT_LOG(ERR, "nfp_cpp_map_area fails for net.qc"); + ret = -EIO; + goto ctrl_area_cleanup; + } + + PMD_INIT_LOG(DEBUG, "tx/rx bar address: 0x%p", pf_dev->hw_queues); + + /* Initialize and prep physical ports now + * This will loop through all physical ports + */ + ret = nfp_init_phyports(pf_dev); + if (ret) { + PMD_INIT_LOG(ERR, "Could not create physical ports"); + goto hwqueues_cleanup; + } + + /* register the CPP bridge service here for primary use */ + nfp_register_cpp_service(pf_dev->cpp); + + return 0; + +hwqueues_cleanup: + nfp_cpp_area_free(pf_dev->hwqueues_area); +ctrl_area_cleanup: + nfp_cpp_area_free(pf_dev->ctrl_area); +pf_cleanup: + rte_free(pf_dev); +sym_tbl_cleanup: + free(sym_tbl); +eth_table_cleanup: + free(nfp_eth_table); +hwinfo_cleanup: + free(hwinfo); +error: + return ret; +} + +/* + * When attaching to the NFP4000/6000 PF on a secondary process there + * is no need to initialize the PF again. Only minimal work is required + * here + */ +static int nfp_pf_secondary_init(struct rte_pci_device *pci_dev) +{ + struct nfp_cpp *cpp; + struct nfp_rtsym_table *sym_tbl; + int total_ports; + int i; + int err; + + if (!pci_dev) + return -ENODEV; + + /* + * When device bound to UIO, the device could be used, by mistake, + * by two DPDK apps, and the UIO driver does not avoid it. This + * could lead to a serious problem when configuring the NFP CPP + * interface. Here we avoid this by telling the CPP init code to + * use a lock file if UIO is being used. 
+ */ + if (pci_dev->kdrv == RTE_PCI_KDRV_VFIO) + cpp = nfp_cpp_from_device_name(pci_dev, 0); + else + cpp = nfp_cpp_from_device_name(pci_dev, 1); + + if (!cpp) { + PMD_INIT_LOG(ERR, "A CPP handle can not be obtained"); + return -EIO; + } + + /* + * We don't have access to the PF created in the primary process + * here so we have to read the number of ports from firmware + */ + sym_tbl = nfp_rtsym_table_read(cpp); + if (!sym_tbl) { + PMD_INIT_LOG(ERR, "Something is wrong with the firmware" + " symbol table"); + return -EIO; + } + + total_ports = nfp_rtsym_read_le(sym_tbl, "nfd_cfg_pf0_num_ports", &err); + + for (i = 0; i < total_ports; i++) { + struct rte_eth_dev *eth_dev; + char port_name[RTE_ETH_NAME_MAX_LEN]; + + snprintf(port_name, sizeof(port_name), "%s_port%d", + pci_dev->device.name, i); + + PMD_DRV_LOG(DEBUG, "Secondary attaching to port %s", + port_name); + eth_dev = rte_eth_dev_attach_secondary(port_name); + if (!eth_dev) { + RTE_LOG(ERR, EAL, + "secondary process attach failed, " + "ethdev doesn't exist"); + return -ENODEV; + } + eth_dev->process_private = cpp; + eth_dev->dev_ops = &nfp_net_eth_dev_ops; + eth_dev->rx_queue_count = nfp_net_rx_queue_count; + eth_dev->rx_pkt_burst = &nfp_net_recv_pkts; + eth_dev->tx_pkt_burst = &nfp_net_xmit_pkts; + rte_eth_dev_probing_finish(eth_dev); + } + + /* Register the CPP bridge service for the secondary too */ + nfp_register_cpp_service(cpp); + + return 0; +} + +static int nfp_pf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *dev) +{ + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + return nfp_pf_init(dev); + else + return nfp_pf_secondary_init(dev); +} + +static const struct rte_pci_id pci_id_nfp_pf_net_map[] = { + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, + PCI_DEVICE_ID_NFP4000_PF_NIC) + }, + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, + PCI_DEVICE_ID_NFP6000_PF_NIC) + }, + { + .vendor_id = 0, + }, +}; + +static int nfp_pci_uninit(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev; + uint16_t port_id; + + pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + + /* Free up all physical ports under PF */ + RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) + rte_eth_dev_close(port_id); + /* + * Ports can be closed and freed but hotplugging is not + * currently supported + */ + return -ENOTSUP; +} + +static int eth_nfp_pci_remove(struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_remove(pci_dev, nfp_pci_uninit); +} + +static struct rte_pci_driver rte_nfp_net_pf_pmd = { + .id_table = pci_id_nfp_pf_net_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = nfp_pf_pci_probe, + .remove = eth_nfp_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_nfp_pf, rte_nfp_net_pf_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_nfp_pf, pci_id_nfp_pf_net_map); +RTE_PMD_REGISTER_KMOD_DEP(net_nfp_pf, "* igb_uio | uio_pci_generic | vfio"); +/* + * Local variables: + * c-file-style: "Linux" + * indent-tabs-mode: t + * End: + */ diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c index a1722886ba..a6097eaab0 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_net.c @@ -37,10 +37,10 @@ #include "nfpcore/nfp_rtsym.h" #include "nfpcore/nfp_nsp.h" -#include "nfp_net_pmd.h" +#include "nfp_common.h" #include "nfp_rxtx.h" -#include "nfp_net_logs.h" -#include "nfp_net_ctrl.h" +#include "nfp_logs.h" +#include "nfp_ctrl.h" #include "nfp_cpp_bridge.h" #include @@ -51,20 +51,6 @@ #include #include -/* Prototypes */ -static int nfp_net_close(struct rte_eth_dev *dev); -static int 
nfp_net_init(struct rte_eth_dev *eth_dev); -static int nfp_pf_init(struct rte_pci_device *pci_dev); -static int nfp_pf_secondary_init(struct rte_pci_device *pci_dev); -static int nfp_net_pf_read_mac(struct nfp_pf_dev *pf_dev, int port); -static int nfp_pci_uninit(struct rte_eth_dev *eth_dev); -static int nfp_init_phyports(struct nfp_pf_dev *pf_dev); -static int nfp_net_stop(struct rte_eth_dev *dev); -static int nfp_fw_setup(struct rte_pci_device *dev, - struct nfp_cpp *cpp, - struct nfp_eth_table *nfp_eth_table, - struct nfp_hwinfo *hwinfo); - static int __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) { @@ -266,24 +252,6 @@ nfp_eth_copy_mac(uint8_t *dst, const uint8_t *src) dst[i] = src[i]; } -static int -nfp_net_pf_read_mac(struct nfp_pf_dev *pf_dev, int port) -{ - struct nfp_eth_table *nfp_eth_table; - struct nfp_net_hw *hw = NULL; - - /* Grab a pointer to the correct physical port */ - hw = pf_dev->ports[port]; - - nfp_eth_table = nfp_eth_read_ports(pf_dev->cpp); - - nfp_eth_copy_mac((uint8_t *)&hw->mac_addr, - (uint8_t *)&nfp_eth_table->ports[port].mac_addr); - - free(nfp_eth_table); - return 0; -} - void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac) { @@ -436,282 +404,6 @@ nfp_check_offloads(struct rte_eth_dev *dev) return ctrl; } -static int -nfp_net_start(struct rte_eth_dev *dev) -{ - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; - uint32_t new_ctrl, update = 0; - struct nfp_net_hw *hw; - struct nfp_pf_dev *pf_dev; - struct rte_eth_conf *dev_conf; - struct rte_eth_rxmode *rxmode; - uint32_t intr_vector; - int ret; - - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private); - - PMD_INIT_LOG(DEBUG, "Start"); - - /* Disabling queues just in case... */ - nfp_net_disable_queues(dev); - - /* Enabling the required queues in the device */ - nfp_net_enable_queues(dev); - - /* check and configure queue intr-vector mapping */ - if (dev->data->dev_conf.intr_conf.rxq != 0) { - if (pf_dev->multiport) { - PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported " - "with NFP multiport PF"); - return -EINVAL; - } - if (intr_handle->type == RTE_INTR_HANDLE_UIO) { - /* - * Better not to share LSC with RX interrupts. - * Unregistering LSC interrupt handler - */ - rte_intr_callback_unregister(&pci_dev->intr_handle, - nfp_net_dev_interrupt_handler, (void *)dev); - - if (dev->data->nb_rx_queues > 1) { - PMD_INIT_LOG(ERR, "PMD rx interrupt only " - "supports 1 queue with UIO"); - return -EIO; - } - } - intr_vector = dev->data->nb_rx_queues; - if (rte_intr_efd_enable(intr_handle, intr_vector)) - return -1; - - nfp_configure_rx_interrupt(dev, intr_handle); - update = NFP_NET_CFG_UPDATE_MSIX; - } - - rte_intr_enable(intr_handle); - - new_ctrl = nfp_check_offloads(dev); - - /* Writing configuration parameters in the device */ - nfp_net_params_setup(hw); - - dev_conf = &dev->data->dev_conf; - rxmode = &dev_conf->rxmode; - - if (rxmode->mq_mode & ETH_MQ_RX_RSS) { - nfp_net_rss_config_default(dev); - update |= NFP_NET_CFG_UPDATE_RSS; - new_ctrl |= NFP_NET_CFG_CTRL_RSS; - } - - /* Enable device */ - new_ctrl |= NFP_NET_CFG_CTRL_ENABLE; - - update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING; - - if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG) - new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG; - - nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl); - if (nfp_net_reconfig(hw, new_ctrl, update) < 0) - return -EIO; - - /* - * Allocating rte mbufs for configured rx queues. 
- * This requires queues being enabled before - */ - if (nfp_net_rx_freelist_setup(dev) < 0) { - ret = -ENOMEM; - goto error; - } - - if (hw->is_phyport) { - if (rte_eal_process_type() == RTE_PROC_PRIMARY) - /* Configure the physical port up */ - nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1); - else - nfp_eth_set_configured(dev->process_private, - hw->nfp_idx, 1); - } - - hw->ctrl = new_ctrl; - - return 0; - -error: - /* - * An error returned by this function should mean the app - * exiting and then the system releasing all the memory - * allocated even memory coming from hugepages. - * - * The device could be enabled at this point with some queues - * ready for getting packets. This is true if the call to - * nfp_net_rx_freelist_setup() succeeds for some queues but - * fails for subsequent queues. - * - * This should make the app exiting but better if we tell the - * device first. - */ - nfp_net_disable_queues(dev); - - return ret; -} - -/* Stop device: disable rx and tx functions to allow for reconfiguring. */ -static int -nfp_net_stop(struct rte_eth_dev *dev) -{ - int i; - struct nfp_net_hw *hw; - - PMD_INIT_LOG(DEBUG, "Stop"); - - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - nfp_net_disable_queues(dev); - - /* Clear queues */ - for (i = 0; i < dev->data->nb_tx_queues; i++) { - nfp_net_reset_tx_queue( - (struct nfp_net_txq *)dev->data->tx_queues[i]); - } - - for (i = 0; i < dev->data->nb_rx_queues; i++) { - nfp_net_reset_rx_queue( - (struct nfp_net_rxq *)dev->data->rx_queues[i]); - } - - if (hw->is_phyport) { - if (rte_eal_process_type() == RTE_PROC_PRIMARY) - /* Configure the physical port down */ - nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0); - else - nfp_eth_set_configured(dev->process_private, - hw->nfp_idx, 0); - } - - return 0; -} - -/* Set the link up. */ -static int -nfp_net_set_link_up(struct rte_eth_dev *dev) -{ - struct nfp_net_hw *hw; - - PMD_DRV_LOG(DEBUG, "Set link up"); - - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - if (!hw->is_phyport) - return -ENOTSUP; - - if (rte_eal_process_type() == RTE_PROC_PRIMARY) - /* Configure the physical port down */ - return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1); - else - return nfp_eth_set_configured(dev->process_private, - hw->nfp_idx, 1); -} - -/* Set the link down. */ -static int -nfp_net_set_link_down(struct rte_eth_dev *dev) -{ - struct nfp_net_hw *hw; - - PMD_DRV_LOG(DEBUG, "Set link down"); - - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - if (!hw->is_phyport) - return -ENOTSUP; - - if (rte_eal_process_type() == RTE_PROC_PRIMARY) - /* Configure the physical port down */ - return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0); - else - return nfp_eth_set_configured(dev->process_private, - hw->nfp_idx, 0); -} - -/* Reset and stop device. The device can not be restarted. */ -static int -nfp_net_close(struct rte_eth_dev *dev) -{ - struct nfp_net_hw *hw; - struct rte_pci_device *pci_dev; - int i; - - if (rte_eal_process_type() != RTE_PROC_PRIMARY) - return 0; - - PMD_INIT_LOG(DEBUG, "Close"); - - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - pci_dev = RTE_ETH_DEV_TO_PCI(dev); - - /* - * We assume that the DPDK application is stopping all the - * threads/queues before calling the device close function. 
- */ - - nfp_net_disable_queues(dev); - - /* Clear queues */ - for (i = 0; i < dev->data->nb_tx_queues; i++) { - nfp_net_reset_tx_queue( - (struct nfp_net_txq *)dev->data->tx_queues[i]); - } - - for (i = 0; i < dev->data->nb_rx_queues; i++) { - nfp_net_reset_rx_queue( - (struct nfp_net_rxq *)dev->data->rx_queues[i]); - } - - /* Only free PF resources after all physical ports have been closed */ - if (pci_dev->id.device_id == PCI_DEVICE_ID_NFP4000_PF_NIC || - pci_dev->id.device_id == PCI_DEVICE_ID_NFP6000_PF_NIC) { - struct nfp_pf_dev *pf_dev; - pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private); - - /* Mark this port as unused and free device priv resources*/ - nn_cfg_writeb(hw, NFP_NET_CFG_LSC, 0xff); - pf_dev->ports[hw->idx] = NULL; - rte_eth_dev_release_port(dev); - - for (i = 0; i < pf_dev->total_phyports; i++) { - /* Check to see if ports are still in use */ - if (pf_dev->ports[i]) - return 0; - } - - /* Now it is safe to free all PF resources */ - PMD_INIT_LOG(INFO, "Freeing PF resources"); - nfp_cpp_area_free(pf_dev->ctrl_area); - nfp_cpp_area_free(pf_dev->hwqueues_area); - free(pf_dev->hwinfo); - free(pf_dev->sym_tbl); - nfp_cpp_free(pf_dev->cpp); - rte_free(pf_dev); - } - - rte_intr_disable(&pci_dev->intr_handle); - - /* unregister callback func from eal lib */ - rte_intr_callback_unregister(&pci_dev->intr_handle, - nfp_net_dev_interrupt_handler, - (void *)dev); - - /* - * The ixgbe PMD driver disables the pcie master on the - * device. The i40e does not... - */ - - return 0; -} - int nfp_net_promisc_enable(struct rte_eth_dev *dev) { @@ -1620,780 +1312,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) return ret; } - -/* Initialise and register driver with DPDK Application */ -static const struct eth_dev_ops nfp_net_eth_dev_ops = { - .dev_configure = nfp_net_configure, - .dev_start = nfp_net_start, - .dev_stop = nfp_net_stop, - .dev_set_link_up = nfp_net_set_link_up, - .dev_set_link_down = nfp_net_set_link_down, - .dev_close = nfp_net_close, - .promiscuous_enable = nfp_net_promisc_enable, - .promiscuous_disable = nfp_net_promisc_disable, - .link_update = nfp_net_link_update, - .stats_get = nfp_net_stats_get, - .stats_reset = nfp_net_stats_reset, - .dev_infos_get = nfp_net_infos_get, - .dev_supported_ptypes_get = nfp_net_supported_ptypes_get, - .mtu_set = nfp_net_dev_mtu_set, - .mac_addr_set = nfp_set_mac_addr, - .vlan_offload_set = nfp_net_vlan_offload_set, - .reta_update = nfp_net_reta_update, - .reta_query = nfp_net_reta_query, - .rss_hash_update = nfp_net_rss_hash_update, - .rss_hash_conf_get = nfp_net_rss_hash_conf_get, - .rx_queue_setup = nfp_net_rx_queue_setup, - .rx_queue_release = nfp_net_rx_queue_release, - .tx_queue_setup = nfp_net_tx_queue_setup, - .tx_queue_release = nfp_net_tx_queue_release, - .rx_queue_intr_enable = nfp_rx_queue_intr_enable, - .rx_queue_intr_disable = nfp_rx_queue_intr_disable, -}; - - -static int -nfp_net_init(struct rte_eth_dev *eth_dev) -{ - struct rte_pci_device *pci_dev; - struct nfp_pf_dev *pf_dev; - struct nfp_net_hw *hw; - - uint64_t tx_bar_off = 0, rx_bar_off = 0; - uint32_t start_q; - int stride = 4; - int port = 0; - int err; - - PMD_INIT_FUNC_TRACE(); - - pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - - /* Use backpointer here to the PF of this eth_dev */ - pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(eth_dev->data->dev_private); - - /* NFP can not handle DMA addresses requiring more than 40 bits */ - if (rte_mem_check_dma_mask(40)) { - RTE_LOG(ERR, PMD, "device %s can not be used:", - pci_dev->device.name); - RTE_LOG(ERR, PMD, 
"\trestricted dma mask to 40 bits!\n"); - return -ENODEV; - }; - - if ((pci_dev->id.device_id == PCI_DEVICE_ID_NFP4000_PF_NIC) || - (pci_dev->id.device_id == PCI_DEVICE_ID_NFP6000_PF_NIC)) { - port = ((struct nfp_net_hw *)eth_dev->data->dev_private)->idx; - if (port < 0 || port > 7) { - PMD_DRV_LOG(ERR, "Port value is wrong"); - return -ENODEV; - } - - /* Use PF array of physical ports to get pointer to - * this specific port - */ - hw = pf_dev->ports[port]; - - PMD_INIT_LOG(DEBUG, "Working with physical port number: %d, " - "NFP internal port number: %d", - port, hw->nfp_idx); - - } else { - hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); - } - - eth_dev->dev_ops = &nfp_net_eth_dev_ops; - eth_dev->rx_queue_count = nfp_net_rx_queue_count; - eth_dev->rx_pkt_burst = &nfp_net_recv_pkts; - eth_dev->tx_pkt_burst = &nfp_net_xmit_pkts; - - /* For secondary processes, the primary has done all the work */ - if (rte_eal_process_type() != RTE_PROC_PRIMARY) - return 0; - - rte_eth_copy_pci_info(eth_dev, pci_dev); - - hw->device_id = pci_dev->id.device_id; - hw->vendor_id = pci_dev->id.vendor_id; - hw->subsystem_device_id = pci_dev->id.subsystem_device_id; - hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id; - - PMD_INIT_LOG(DEBUG, "nfp_net: device (%u:%u) %u:%u:%u:%u", - pci_dev->id.vendor_id, pci_dev->id.device_id, - pci_dev->addr.domain, pci_dev->addr.bus, - pci_dev->addr.devid, pci_dev->addr.function); - - hw->ctrl_bar = (uint8_t *)pci_dev->mem_resource[0].addr; - if (hw->ctrl_bar == NULL) { - PMD_DRV_LOG(ERR, - "hw->ctrl_bar is NULL. BAR0 not configured"); - return -ENODEV; - } - - if (hw->is_phyport) { - if (port == 0) { - hw->ctrl_bar = pf_dev->ctrl_bar; - } else { - if (!pf_dev->ctrl_bar) - return -ENODEV; - /* Use port offset in pf ctrl_bar for this - * ports control bar - */ - hw->ctrl_bar = pf_dev->ctrl_bar + - (port * NFP_PF_CSR_SLICE_SIZE); - } - } - - PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->ctrl_bar); - - hw->max_rx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_RXRINGS); - hw->max_tx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_TXRINGS); - - /* Work out where in the BAR the queues start. 
*/ - switch (pci_dev->id.device_id) { - case PCI_DEVICE_ID_NFP4000_PF_NIC: - case PCI_DEVICE_ID_NFP6000_PF_NIC: - case PCI_DEVICE_ID_NFP6000_VF_NIC: - start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ); - tx_bar_off = (uint64_t)start_q * NFP_QCP_QUEUE_ADDR_SZ; - start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); - rx_bar_off = (uint64_t)start_q * NFP_QCP_QUEUE_ADDR_SZ; - break; - default: - PMD_DRV_LOG(ERR, "nfp_net: no device ID matching"); - err = -ENODEV; - goto dev_err_ctrl_map; - } - - PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off); - PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off); - - if (hw->is_phyport) { - hw->tx_bar = pf_dev->hw_queues + tx_bar_off; - hw->rx_bar = pf_dev->hw_queues + rx_bar_off; - eth_dev->data->dev_private = hw; - } else { - hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + - tx_bar_off; - hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + - rx_bar_off; - } - - PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p", - hw->ctrl_bar, hw->tx_bar, hw->rx_bar); - - nfp_net_cfg_queue_setup(hw); - - /* Get some of the read-only fields from the config BAR */ - hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION); - hw->cap = nn_cfg_readl(hw, NFP_NET_CFG_CAP); - hw->max_mtu = nn_cfg_readl(hw, NFP_NET_CFG_MAX_MTU); - hw->mtu = RTE_ETHER_MTU; - - /* VLAN insertion is incompatible with LSOv2 */ - if (hw->cap & NFP_NET_CFG_CTRL_LSO2) - hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN; - - if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2) - hw->rx_offset = NFP_NET_RX_OFFSET; - else - hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR); - - PMD_INIT_LOG(INFO, "VER: %u.%u, Maximum supported MTU: %d", - NFD_CFG_MAJOR_VERSION_of(hw->ver), - NFD_CFG_MINOR_VERSION_of(hw->ver), hw->max_mtu); - - PMD_INIT_LOG(INFO, "CAP: %#x, %s%s%s%s%s%s%s%s%s%s%s%s%s%s", hw->cap, - hw->cap & NFP_NET_CFG_CTRL_PROMISC ? "PROMISC " : "", - hw->cap & NFP_NET_CFG_CTRL_L2BC ? "L2BCFILT " : "", - hw->cap & NFP_NET_CFG_CTRL_L2MC ? "L2MCFILT " : "", - hw->cap & NFP_NET_CFG_CTRL_RXCSUM ? "RXCSUM " : "", - hw->cap & NFP_NET_CFG_CTRL_TXCSUM ? "TXCSUM " : "", - hw->cap & NFP_NET_CFG_CTRL_RXVLAN ? "RXVLAN " : "", - hw->cap & NFP_NET_CFG_CTRL_TXVLAN ? "TXVLAN " : "", - hw->cap & NFP_NET_CFG_CTRL_SCATTER ? "SCATTER " : "", - hw->cap & NFP_NET_CFG_CTRL_GATHER ? "GATHER " : "", - hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR ? "LIVE_ADDR " : "", - hw->cap & NFP_NET_CFG_CTRL_LSO ? "TSO " : "", - hw->cap & NFP_NET_CFG_CTRL_LSO2 ? "TSOv2 " : "", - hw->cap & NFP_NET_CFG_CTRL_RSS ? "RSS " : "", - hw->cap & NFP_NET_CFG_CTRL_RSS2 ? 
"RSSv2 " : ""); - - hw->ctrl = 0; - - hw->stride_rx = stride; - hw->stride_tx = stride; - - PMD_INIT_LOG(INFO, "max_rx_queues: %u, max_tx_queues: %u", - hw->max_rx_queues, hw->max_tx_queues); - - /* Initializing spinlock for reconfigs */ - rte_spinlock_init(&hw->reconfig_lock); - - /* Allocating memory for mac addr */ - eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", - RTE_ETHER_ADDR_LEN, 0); - if (eth_dev->data->mac_addrs == NULL) { - PMD_INIT_LOG(ERR, "Failed to space for MAC address"); - err = -ENOMEM; - goto dev_err_queues_map; - } - - if (hw->is_phyport) { - nfp_net_pf_read_mac(pf_dev, port); - nfp_net_write_mac(hw, (uint8_t *)&hw->mac_addr); - } - - if (!rte_is_valid_assigned_ether_addr( - (struct rte_ether_addr *)&hw->mac_addr)) { - PMD_INIT_LOG(INFO, "Using random mac address for port %d", - port); - /* Using random mac addresses for VFs */ - rte_eth_random_addr(&hw->mac_addr[0]); - nfp_net_write_mac(hw, (uint8_t *)&hw->mac_addr); - } - - /* Copying mac address to DPDK eth_dev struct */ - rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, - ð_dev->data->mac_addrs[0]); - - if (!(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR)) - eth_dev->data->dev_flags |= RTE_ETH_DEV_NOLIVE_MAC_ADDR; - - eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; - - PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " - "mac=%02x:%02x:%02x:%02x:%02x:%02x", - eth_dev->data->port_id, pci_dev->id.vendor_id, - pci_dev->id.device_id, - hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2], - hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]); - - if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - /* Registering LSC interrupt handler */ - rte_intr_callback_register(&pci_dev->intr_handle, - nfp_net_dev_interrupt_handler, - (void *)eth_dev); - /* Telling the firmware about the LSC interrupt entry */ - nn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX); - /* Recording current stats counters values */ - nfp_net_stats_reset(eth_dev); - } - - return 0; - -dev_err_queues_map: - nfp_cpp_area_free(hw->hwqueues_area); -dev_err_ctrl_map: - nfp_cpp_area_free(hw->ctrl_area); - - return err; -} - -#define DEFAULT_FW_PATH "/lib/firmware/netronome" - -static int -nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card) -{ - struct nfp_cpp *cpp = nsp->cpp; - int fw_f; - char *fw_buf; - char fw_name[125]; - char serial[40]; - struct stat file_stat; - off_t fsize, bytes; - - /* Looking for firmware file in order of priority */ - - /* First try to find a firmware image specific for this device */ - snprintf(serial, sizeof(serial), - "serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x", - cpp->serial[0], cpp->serial[1], cpp->serial[2], cpp->serial[3], - cpp->serial[4], cpp->serial[5], cpp->interface >> 8, - cpp->interface & 0xff); - - snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, - serial); - - PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name); - fw_f = open(fw_name, O_RDONLY); - if (fw_f >= 0) - goto read_fw; - - /* Then try the PCI name */ - snprintf(fw_name, sizeof(fw_name), "%s/pci-%s.nffw", DEFAULT_FW_PATH, - dev->device.name); - - PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name); - fw_f = open(fw_name, O_RDONLY); - if (fw_f >= 0) - goto read_fw; - - /* Finally try the card type and media */ - snprintf(fw_name, sizeof(fw_name), "%s/%s", DEFAULT_FW_PATH, card); - PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name); - fw_f = open(fw_name, O_RDONLY); - if (fw_f < 0) { - PMD_DRV_LOG(INFO, "Firmware file %s not found.", fw_name); - return -ENOENT; - } - -read_fw: - 
if (fstat(fw_f, &file_stat) < 0) { - PMD_DRV_LOG(INFO, "Firmware file %s size is unknown", fw_name); - close(fw_f); - return -ENOENT; - } - - fsize = file_stat.st_size; - PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %" PRIu64 "", - fw_name, (uint64_t)fsize); - - fw_buf = malloc((size_t)fsize); - if (!fw_buf) { - PMD_DRV_LOG(INFO, "malloc failed for fw buffer"); - close(fw_f); - return -ENOMEM; - } - memset(fw_buf, 0, fsize); - - bytes = read(fw_f, fw_buf, fsize); - if (bytes != fsize) { - PMD_DRV_LOG(INFO, "Reading fw to buffer failed." - "Just %" PRIu64 " of %" PRIu64 " bytes read", - (uint64_t)bytes, (uint64_t)fsize); - free(fw_buf); - close(fw_f); - return -EIO; - } - - PMD_DRV_LOG(INFO, "Uploading the firmware ..."); - nfp_nsp_load_fw(nsp, fw_buf, bytes); - PMD_DRV_LOG(INFO, "Done"); - - free(fw_buf); - close(fw_f); - - return 0; -} - -static int -nfp_fw_setup(struct rte_pci_device *dev, struct nfp_cpp *cpp, - struct nfp_eth_table *nfp_eth_table, struct nfp_hwinfo *hwinfo) -{ - struct nfp_nsp *nsp; - const char *nfp_fw_model; - char card_desc[100]; - int err = 0; - - nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "assembly.partno"); - - if (nfp_fw_model) { - PMD_DRV_LOG(INFO, "firmware model found: %s", nfp_fw_model); - } else { - PMD_DRV_LOG(ERR, "firmware model NOT found"); - return -EIO; - } - - if (nfp_eth_table->count == 0 || nfp_eth_table->count > 8) { - PMD_DRV_LOG(ERR, "NFP ethernet table reports wrong ports: %u", - nfp_eth_table->count); - return -EIO; - } - - PMD_DRV_LOG(INFO, "NFP ethernet port table reports %u ports", - nfp_eth_table->count); - - PMD_DRV_LOG(INFO, "Port speed: %u", nfp_eth_table->ports[0].speed); - - snprintf(card_desc, sizeof(card_desc), "nic_%s_%dx%d.nffw", - nfp_fw_model, nfp_eth_table->count, - nfp_eth_table->ports[0].speed / 1000); - - nsp = nfp_nsp_open(cpp); - if (!nsp) { - PMD_DRV_LOG(ERR, "NFP error when obtaining NSP handle"); - return -EIO; - } - - nfp_nsp_device_soft_reset(nsp); - err = nfp_fw_upload(dev, nsp, card_desc); - - nfp_nsp_close(nsp); - return err; -} - -static int nfp_init_phyports(struct nfp_pf_dev *pf_dev) -{ - struct nfp_net_hw *hw; - struct rte_eth_dev *eth_dev; - struct nfp_eth_table *nfp_eth_table = NULL; - int ret = 0; - int i; - - nfp_eth_table = nfp_eth_read_ports(pf_dev->cpp); - if (!nfp_eth_table) { - PMD_INIT_LOG(ERR, "Error reading NFP ethernet table"); - ret = -EIO; - goto error; - } - - /* Loop through all physical ports on PF */ - for (i = 0; i < pf_dev->total_phyports; i++) { - const unsigned int numa_node = rte_socket_id(); - char port_name[RTE_ETH_NAME_MAX_LEN]; - - snprintf(port_name, sizeof(port_name), "%s_port%d", - pf_dev->pci_dev->device.name, i); - - /* Allocate a eth_dev for this phyport */ - eth_dev = rte_eth_dev_allocate(port_name); - if (!eth_dev) { - ret = -ENODEV; - goto port_cleanup; - } - - /* Allocate memory for this phyport */ - eth_dev->data->dev_private = - rte_zmalloc_socket(port_name, sizeof(struct nfp_net_hw), - RTE_CACHE_LINE_SIZE, numa_node); - if (!eth_dev->data->dev_private) { - ret = -ENOMEM; - rte_eth_dev_release_port(eth_dev); - goto port_cleanup; - } - - hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); - - /* Add this device to the PF's array of physical ports */ - pf_dev->ports[i] = hw; - - hw->pf_dev = pf_dev; - hw->cpp = pf_dev->cpp; - hw->eth_dev = eth_dev; - hw->idx = i; - hw->nfp_idx = nfp_eth_table->ports[i].index; - hw->is_phyport = true; - - eth_dev->device = &pf_dev->pci_dev->device; - - /* ctrl/tx/rx BAR mappings and remaining init happens in - * nfp_net_init 
- */ - ret = nfp_net_init(eth_dev); - - if (ret) { - ret = -ENODEV; - goto port_cleanup; - } - - rte_eth_dev_probing_finish(eth_dev); - - } /* End loop, all ports on this PF */ - ret = 0; - goto eth_table_cleanup; - -port_cleanup: - for (i = 0; i < pf_dev->total_phyports; i++) { - if (pf_dev->ports[i] && pf_dev->ports[i]->eth_dev) { - struct rte_eth_dev *tmp_dev; - tmp_dev = pf_dev->ports[i]->eth_dev; - rte_eth_dev_release_port(tmp_dev); - pf_dev->ports[i] = NULL; - } - } -eth_table_cleanup: - free(nfp_eth_table); -error: - return ret; -} - -static int nfp_pf_init(struct rte_pci_device *pci_dev) -{ - struct nfp_pf_dev *pf_dev = NULL; - struct nfp_cpp *cpp; - struct nfp_hwinfo *hwinfo; - struct nfp_rtsym_table *sym_tbl; - struct nfp_eth_table *nfp_eth_table = NULL; - char name[RTE_ETH_NAME_MAX_LEN]; - int total_ports; - int ret = -ENODEV; - int err; - - if (!pci_dev) - return ret; - - /* - * When device bound to UIO, the device could be used, by mistake, - * by two DPDK apps, and the UIO driver does not avoid it. This - * could lead to a serious problem when configuring the NFP CPP - * interface. Here we avoid this telling to the CPP init code to - * use a lock file if UIO is being used. - */ - if (pci_dev->kdrv == RTE_PCI_KDRV_VFIO) - cpp = nfp_cpp_from_device_name(pci_dev, 0); - else - cpp = nfp_cpp_from_device_name(pci_dev, 1); - - if (!cpp) { - PMD_INIT_LOG(ERR, "A CPP handle can not be obtained"); - ret = -EIO; - goto error; - } - - hwinfo = nfp_hwinfo_read(cpp); - if (!hwinfo) { - PMD_INIT_LOG(ERR, "Error reading hwinfo table"); - ret = -EIO; - goto error; - } - - nfp_eth_table = nfp_eth_read_ports(cpp); - if (!nfp_eth_table) { - PMD_INIT_LOG(ERR, "Error reading NFP ethernet table"); - ret = -EIO; - goto hwinfo_cleanup; - } - - if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo)) { - PMD_INIT_LOG(ERR, "Error when uploading firmware"); - ret = -EIO; - goto eth_table_cleanup; - } - - /* Now the symbol table should be there */ - sym_tbl = nfp_rtsym_table_read(cpp); - if (!sym_tbl) { - PMD_INIT_LOG(ERR, "Something is wrong with the firmware" - " symbol table"); - ret = -EIO; - goto eth_table_cleanup; - } - - total_ports = nfp_rtsym_read_le(sym_tbl, "nfd_cfg_pf0_num_ports", &err); - if (total_ports != (int)nfp_eth_table->count) { - PMD_DRV_LOG(ERR, "Inconsistent number of ports"); - ret = -EIO; - goto sym_tbl_cleanup; - } - - PMD_INIT_LOG(INFO, "Total physical ports: %d", total_ports); - - if (total_ports <= 0 || total_ports > 8) { - PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value"); - ret = -ENODEV; - goto sym_tbl_cleanup; - } - /* Allocate memory for the PF "device" */ - snprintf(name, sizeof(name), "nfp_pf%d", 0); - pf_dev = rte_zmalloc(name, sizeof(*pf_dev), 0); - if (!pf_dev) { - ret = -ENOMEM; - goto sym_tbl_cleanup; - } - - /* Populate the newly created PF device */ - pf_dev->cpp = cpp; - pf_dev->hwinfo = hwinfo; - pf_dev->sym_tbl = sym_tbl; - pf_dev->total_phyports = total_ports; - - if (total_ports > 1) - pf_dev->multiport = true; - - pf_dev->pci_dev = pci_dev; - - /* Map the symbol table */ - pf_dev->ctrl_bar = nfp_rtsym_map(pf_dev->sym_tbl, "_pf0_net_bar0", - pf_dev->total_phyports * 32768, - &pf_dev->ctrl_area); - if (!pf_dev->ctrl_bar) { - PMD_INIT_LOG(ERR, "nfp_rtsym_map fails for _pf0_net_ctrl_bar"); - ret = -EIO; - goto pf_cleanup; - } - - PMD_INIT_LOG(DEBUG, "ctrl bar: %p", pf_dev->ctrl_bar); - - /* configure access to tx/rx vNIC BARs */ - pf_dev->hw_queues = nfp_cpp_map_area(pf_dev->cpp, 0, 0, - NFP_PCIE_QUEUE(0), - NFP_QCP_QUEUE_AREA_SZ, - 
&pf_dev->hwqueues_area); - if (!pf_dev->hw_queues) { - PMD_INIT_LOG(ERR, "nfp_rtsym_map fails for net.qc"); - ret = -EIO; - goto ctrl_area_cleanup; - } - - PMD_INIT_LOG(DEBUG, "tx/rx bar address: 0x%p", pf_dev->hw_queues); - - /* Initialize and prep physical ports now - * This will loop through all physical ports - */ - ret = nfp_init_phyports(pf_dev); - if (ret) { - PMD_INIT_LOG(ERR, "Could not create physical ports"); - goto hwqueues_cleanup; - } - - /* register the CPP bridge service here for primary use */ - nfp_register_cpp_service(pf_dev->cpp); - - return 0; - -hwqueues_cleanup: - nfp_cpp_area_free(pf_dev->hwqueues_area); -ctrl_area_cleanup: - nfp_cpp_area_free(pf_dev->ctrl_area); -pf_cleanup: - rte_free(pf_dev); -sym_tbl_cleanup: - free(sym_tbl); -eth_table_cleanup: - free(nfp_eth_table); -hwinfo_cleanup: - free(hwinfo); -error: - return ret; -} - -/* - * When attaching to the NFP4000/6000 PF on a secondary process there - * is no need to initialize the PF again. Only minimal work is required - * here - */ -static int nfp_pf_secondary_init(struct rte_pci_device *pci_dev) -{ - struct nfp_cpp *cpp; - struct nfp_rtsym_table *sym_tbl; - int total_ports; - int i; - int err; - - if (!pci_dev) - return -ENODEV; - - /* - * When device bound to UIO, the device could be used, by mistake, - * by two DPDK apps, and the UIO driver does not avoid it. This - * could lead to a serious problem when configuring the NFP CPP - * interface. Here we avoid this telling to the CPP init code to - * use a lock file if UIO is being used. - */ - if (pci_dev->kdrv == RTE_PCI_KDRV_VFIO) - cpp = nfp_cpp_from_device_name(pci_dev, 0); - else - cpp = nfp_cpp_from_device_name(pci_dev, 1); - - if (!cpp) { - PMD_INIT_LOG(ERR, "A CPP handle can not be obtained"); - return -EIO; - } - - /* - * We don't have access to the PF created in the primary process - * here so we have to read the number of ports from firmware - */ - sym_tbl = nfp_rtsym_table_read(cpp); - if (!sym_tbl) { - PMD_INIT_LOG(ERR, "Something is wrong with the firmware" - " symbol table"); - return -EIO; - } - - total_ports = nfp_rtsym_read_le(sym_tbl, "nfd_cfg_pf0_num_ports", &err); - - for (i = 0; i < total_ports; i++) { - struct rte_eth_dev *eth_dev; - char port_name[RTE_ETH_NAME_MAX_LEN]; - - snprintf(port_name, sizeof(port_name), "%s_port%d", - pci_dev->device.name, i); - - PMD_DRV_LOG(DEBUG, "Secondary attaching to port %s", - port_name); - eth_dev = rte_eth_dev_attach_secondary(port_name); - if (!eth_dev) { - RTE_LOG(ERR, EAL, - "secondary process attach failed, " - "ethdev doesn't exist"); - return -ENODEV; - } - eth_dev->process_private = cpp; - eth_dev->dev_ops = &nfp_net_eth_dev_ops; - eth_dev->rx_queue_count = nfp_net_rx_queue_count; - eth_dev->rx_pkt_burst = &nfp_net_recv_pkts; - eth_dev->tx_pkt_burst = &nfp_net_xmit_pkts; - rte_eth_dev_probing_finish(eth_dev); - } - - /* Register the CPP bridge service for the secondary too */ - nfp_register_cpp_service(cpp); - - return 0; -} - -static int nfp_pf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, - struct rte_pci_device *dev) -{ - if (rte_eal_process_type() == RTE_PROC_PRIMARY) - return nfp_pf_init(dev); - else - return nfp_pf_secondary_init(dev); -} - -static const struct rte_pci_id pci_id_nfp_pf_net_map[] = { - { - RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP4000_PF_NIC) - }, - { - RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP6000_PF_NIC) - }, - { - .vendor_id = 0, - }, -}; - -static int nfp_pci_uninit(struct rte_eth_dev *eth_dev) -{ - struct rte_pci_device 
*pci_dev; - uint16_t port_id; - - pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - - if (pci_dev->id.device_id == PCI_DEVICE_ID_NFP4000_PF_NIC || - pci_dev->id.device_id == PCI_DEVICE_ID_NFP6000_PF_NIC) { - /* Free up all physical ports under PF */ - RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) - rte_eth_dev_close(port_id); - /* - * Ports can be closed and freed but hotplugging is not - * currently supported - */ - return -ENOTSUP; - } - - /* VF cleanup, just free private port data */ - return nfp_net_close(eth_dev); -} - -static int eth_nfp_pci_remove(struct rte_pci_device *pci_dev) -{ - return rte_eth_dev_pci_generic_remove(pci_dev, nfp_pci_uninit); -} - -static struct rte_pci_driver rte_nfp_net_pf_pmd = { - .id_table = pci_id_nfp_pf_net_map, - .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, - .probe = nfp_pf_pci_probe, - .remove = eth_nfp_pci_remove, -}; - -RTE_PMD_REGISTER_PCI(net_nfp_pf, rte_nfp_net_pf_pmd); -RTE_PMD_REGISTER_PCI_TABLE(net_nfp_pf, pci_id_nfp_pf_net_map); -RTE_PMD_REGISTER_KMOD_DEP(net_nfp_pf, "* igb_uio | uio_pci_generic | vfio"); RTE_LOG_REGISTER_SUFFIX(nfp_logtype_init, init, NOTICE); RTE_LOG_REGISTER_SUFFIX(nfp_logtype_driver, driver, NOTICE); /* From patchwork Fri Jul 16 08:35:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Heinrich Kuhn X-Patchwork-Id: 95965 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D31FCA0C50; Fri, 16 Jul 2021 10:37:48 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9027941372; Fri, 16 Jul 2021 10:37:19 +0200 (CEST) Received: from mail-ed1-f46.google.com (mail-ed1-f46.google.com [209.85.208.46]) by mails.dpdk.org (Postfix) with ESMTP id 320DB4136B for ; Fri, 16 Jul 2021 10:37:18 +0200 (CEST) Received: by mail-ed1-f46.google.com with SMTP id ca14so11967207edb.2 for ; Fri, 16 Jul 2021 01:37:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=netronome-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=htJXQf5ubuO5+HmqeXlW9ABTIEOGfGGlHc/OXoSdGzU=; b=Os9pz63asBQoRbH9UkfdQ+6B/k3LR4sf4yn0OUSkV3Jz6ser/1zOM8D9572sVYnxqX 13YmQbjmwVj+dZ6Wq3WMi0d8MZnvyHN+tilr1R7T0LqstMX9itlEbUepcJIOrUvWRWVk nqmZOq10WNKCq6GlpNUqMrTSVADhnpSAqa3T/Dy6w26jpItuTIgZEuM77DeeI+1jLmHR dsNyldKSCt3oZCRsayFXgMwgwfa/8WxMejeb5jwFmRG7UIvT2OxsrZ06kvGjn8wXulle 2gmdA8iuQ8Wbvflxu7oKt80yL9usHBE0lcRP5dT6H5yzVtmrBXmTmxn3iK86uFgUbM/i dCFw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=htJXQf5ubuO5+HmqeXlW9ABTIEOGfGGlHc/OXoSdGzU=; b=baMQ7wNql45ShtoJAX0cj8d0WojfT2/SgBUErPwCbT5u/NMDDIOy5D0EpFHjz8maoS 1wuMsN1JCQ6J9WDTo0I4iFl1gj3ApLrirNv009vqSvXOMMQhyrwR/9oz5Tq4qy3+KLbB EoLD4fSUoFyfv6sUrF86SgKrmr40SBRLmNC3WGcevsNSPW1nTM4W3RlxxlRbC2lDAlBo 1GbPCTF68x6R0jf2K1xjfMa4832G/yazzuHHpP6e+RhGWMXusps0273BWXBWvibg1UU1 k28YbMXd6Jwhti+L3u6oZj5am2Af6qPhqwJI5OdiGRQ1Z0PkP6p4xjMR5Me89zjXRQhc 1T8Q== X-Gm-Message-State: AOAM533KtEumhoQWrXvpLtg68z4a5I6GYeKQFUFjQRYXy57r7XZ9MDmS KIt4fJAOS/T5MGVuTaF3mlXoZLl1zekZdykrI1ZMxQRKPGw29UpSaZMg41+HBH3IKnOtB4E4ij3 
cZEPMYGuDk+blWGiCrrIV+tAVM3/GoN6UvGLw9sAVw4UYul2FIgti9/eNgdLtH/TT X-Google-Smtp-Source: ABdhPJxzxQl35e/5sQNnS0MFuMw8jTdTPhhLcfLUVAHyx3SfuzDUWIZX3k9NoZzYTFtz1Mv2rhXc0w== X-Received: by 2002:aa7:ca54:: with SMTP id j20mr13124864edt.137.1626424637787; Fri, 16 Jul 2021 01:37:17 -0700 (PDT) Received: from localhost.localdomain ([155.93.216.150]) by smtp.gmail.com with ESMTPSA id e6sm3371650edk.63.2021.07.16.01.37.16 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 16 Jul 2021 01:37:17 -0700 (PDT) From: Heinrich Kuhn To: dev@dpdk.org Cc: Heinrich Kuhn , Simon Horman Date: Fri, 16 Jul 2021 10:35:46 +0200 Message-Id: <20210716083545.34444-8-heinrich.kuhn@netronome.com> X-Mailer: git-send-email 2.30.1 (Apple Git-130) In-Reply-To: <20210716083545.34444-1-heinrich.kuhn@netronome.com> References: <20210716082314.33865-1-heinrich.kuhn@netronome.com> <20210716083545.34444-1-heinrich.kuhn@netronome.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v2 7/7] net/nfp: batch file rename for consistency X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Rename the nfp_net.c file to nfp_common as it now contains functions common to VF and PF functionality. Rename the header file too to be consistent. Also remove the "net" naming from the _ctrl and _logs files for consistency across the PMD. Signed-off-by: Heinrich Kuhn Signed-off-by: Simon Horman --- drivers/net/nfp/meson.build | 2 +- drivers/net/nfp/{nfp_net.c => nfp_common.c} | 4 ++-- drivers/net/nfp/{nfp_net_pmd.h => nfp_common.h} | 6 +++--- drivers/net/nfp/nfp_cpp_bridge.c | 2 +- drivers/net/nfp/{nfp_net_ctrl.h => nfp_ctrl.h} | 6 +++--- drivers/net/nfp/nfp_ethdev.c | 6 +++--- drivers/net/nfp/nfp_ethdev_vf.c | 6 +++--- drivers/net/nfp/{nfp_net_logs.h => nfp_logs.h} | 6 +++--- drivers/net/nfp/nfp_rxtx.c | 6 +++--- 9 files changed, 22 insertions(+), 22 deletions(-) rename drivers/net/nfp/{nfp_net.c => nfp_common.c} (99%) rename drivers/net/nfp/{nfp_net_pmd.h => nfp_common.h} (99%) rename drivers/net/nfp/{nfp_net_ctrl.h => nfp_ctrl.h} (99%) rename drivers/net/nfp/{nfp_net_logs.h => nfp_logs.h} (94%) diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build index ab64d0cac3..810f02ae5b 100644 --- a/drivers/net/nfp/meson.build +++ b/drivers/net/nfp/meson.build @@ -18,7 +18,7 @@ sources = files( 'nfpcore/nfp_mutex.c', 'nfpcore/nfp_nsp_eth.c', 'nfpcore/nfp_hwinfo.c', - 'nfp_net.c', + 'nfp_common.c', 'nfp_rxtx.c', 'nfp_cpp_bridge.c', 'nfp_ethdev_vf.c', diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_common.c similarity index 99% rename from drivers/net/nfp/nfp_net.c rename to drivers/net/nfp/nfp_common.c index a6097eaab0..87e8f5f333 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_common.c @@ -8,9 +8,9 @@ /* * vim:shiftwidth=8:noexpandtab * - * @file dpdk/pmd/nfp_net.c + * @file dpdk/pmd/nfp_common.c * - * Netronome vNIC DPDK Poll-Mode Driver: Main entry point + * Netronome vNIC DPDK Poll-Mode Driver: Common files */ #include diff --git a/drivers/net/nfp/nfp_net_pmd.h b/drivers/net/nfp/nfp_common.h similarity index 99% rename from drivers/net/nfp/nfp_net_pmd.h rename to drivers/net/nfp/nfp_common.h index dc05e888df..54ac937bd2 100644 --- a/drivers/net/nfp/nfp_net_pmd.h +++ b/drivers/net/nfp/nfp_common.h @@ -11,8 +11,8 @@ * Netronome NFP_NET PMD driver */ -#ifndef _NFP_NET_PMD_H_ -#define _NFP_NET_PMD_H_ +#ifndef _NFP_COMMON_H_ 
+#define _NFP_COMMON_H_ #define NFP_NET_PMD_VERSION "0.1" #define PCI_VENDOR_ID_NETRONOME 0x19ee @@ -404,7 +404,7 @@ int nfp_net_rss_config_default(struct rte_eth_dev *dev); #define NFP_NET_DEV_PRIVATE_TO_PF(dev_priv)\ (((struct nfp_net_hw *)dev_priv)->pf_dev) -#endif /* _NFP_NET_PMD_H_ */ +#endif /* _NFP_COMMON_H_ */ /* * Local variables: * c-file-style: "Linux" diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index d916793338..74a0eacb3f 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -19,7 +19,7 @@ #include "nfpcore/nfp_mip.h" #include "nfpcore/nfp_nsp.h" -#include "nfp_net_logs.h" +#include "nfp_logs.h" #include "nfp_cpp_bridge.h" #include diff --git a/drivers/net/nfp/nfp_net_ctrl.h b/drivers/net/nfp/nfp_ctrl.h similarity index 99% rename from drivers/net/nfp/nfp_net_ctrl.h rename to drivers/net/nfp/nfp_ctrl.h index 4f26ccf483..4dd62ef194 100644 --- a/drivers/net/nfp/nfp_net_ctrl.h +++ b/drivers/net/nfp/nfp_ctrl.h @@ -8,8 +8,8 @@ * * Netronome network device driver: Control BAR layout */ -#ifndef _NFP_NET_CTRL_H_ -#define _NFP_NET_CTRL_H_ +#ifndef _NFP_CTRL_H_ +#define _NFP_CTRL_H_ /* * Configuration BAR size. @@ -317,7 +317,7 @@ /* PF multiport offset */ #define NFP_PF_CSR_SLICE_SIZE (32 * 1024) -#endif /* _NFP_NET_CTRL_H_ */ +#endif /* _NFP_CTRL_H_ */ /* * Local variables: * c-file-style: "Linux" diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index ab08906704..29e6bb128a 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -30,10 +30,10 @@ #include "nfpcore/nfp_rtsym.h" #include "nfpcore/nfp_nsp.h" -#include "nfp_net_pmd.h" +#include "nfp_common.h" #include "nfp_rxtx.h" -#include "nfp_net_logs.h" -#include "nfp_net_ctrl.h" +#include "nfp_logs.h" +#include "nfp_ctrl.h" #include "nfp_cpp_bridge.h" diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index 223142c0ed..b697b55865 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -16,10 +16,10 @@ #include "nfpcore/nfp_mip.h" #include "nfpcore/nfp_rtsym.h" -#include "nfp_net_pmd.h" +#include "nfp_common.h" #include "nfp_rxtx.h" -#include "nfp_net_logs.h" -#include "nfp_net_ctrl.h" +#include "nfp_logs.h" +#include "nfp_ctrl.h" static void nfp_netvf_read_mac(struct nfp_net_hw *hw); static int nfp_netvf_start(struct rte_eth_dev *dev); diff --git a/drivers/net/nfp/nfp_net_logs.h b/drivers/net/nfp/nfp_logs.h similarity index 94% rename from drivers/net/nfp/nfp_net_logs.h rename to drivers/net/nfp/nfp_logs.h index 27dd87611b..bd5a5e1ec5 100644 --- a/drivers/net/nfp/nfp_net_logs.h +++ b/drivers/net/nfp/nfp_logs.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_NET_LOGS_H_ -#define _NFP_NET_LOGS_H_ +#ifndef _NFP_LOGS_H_ +#define _NFP_LOGS_H_ #include @@ -44,4 +44,4 @@ extern int nfp_logtype_driver; rte_log(RTE_LOG_ ## level, nfp_logtype_driver, \ "%s(): " fmt "\n", __func__, ## args) -#endif /* _NFP_NET_LOGS_H_ */ +#endif /* _NFP_LOGS_H_ */ diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 9ee9e5c9a3..1402c5f84a 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -16,10 +16,10 @@ #include #include -#include "nfp_net_pmd.h" +#include "nfp_common.h" #include "nfp_rxtx.h" -#include "nfp_net_logs.h" -#include "nfp_net_ctrl.h" +#include "nfp_logs.h" +#include "nfp_ctrl.h" /* Prototypes */ static int nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq);
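For code built against these headers out of tree, the batch rename in this patch is purely mechanical; a minimal sketch of the corresponding include update (hypothetical consumer translation unit, assuming nothing beyond the renames shown in the diff changes):

    /* Before this series: */
    #include "nfp_net_pmd.h"
    #include "nfp_net_ctrl.h"
    #include "nfp_net_logs.h"

    /* After this series: */
    #include "nfp_common.h"
    #include "nfp_ctrl.h"
    #include "nfp_logs.h"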