From patchwork Thu Feb 25 08:08:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88185 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8E4B2A034F; Thu, 25 Feb 2021 09:08:37 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 303BF40692; Thu, 25 Feb 2021 09:08:37 +0100 (CET) Received: from smtpproxy21.qq.com (smtpbg704.qq.com [203.205.195.105]) by mails.dpdk.org (Postfix) with ESMTP id 4094B40692 for ; Thu, 25 Feb 2021 09:08:34 +0100 (CET) X-QQ-mid: bizesmtp20t1614240508t8hb64j6 Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:28 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: nnTZLeLrWPAVSOx3hFrpTLohdNj2IHXL+70catiQexi2Sqq40vQCJdUNSsq8P Nua60ErVnoM/NV5ywC+sbmHX8GHRJJL2yxKApTZApx+tyB07wxpMGyqcvAmZO4vBL+0SWvo bLvmtrsf/TWVCCoThPzBdKfWSpbuLrwJFLTmZekk2aj1AnJZNR/RQgmW1Ag6KBLS+qhIOqw r/lul745Kzp+1qYrWSyQ3dgCvZws2SUmkLIbG/vTkArQTKUfDpOq6WWvF1/nVEWZs+r0iVj 63OeVECWquf+h1rZaU5PYsK5ocdz0hgm/j0ZL/L6guccRm8sqye5CMNrOkdUmem8aHYbK8Q lS0bmvbwum9wIotlJ5Wm64lXTM9Hg== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:45 +0800 Message-Id: <20210225080901.3645291-2-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign5 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 01/17] net/txgbe: add ethdev probe and remove for VF device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Introduce virtual function driver in txgbe PMD, add simple init and uninit function to probe and remove the device. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 11 +++ drivers/net/txgbe/meson.build | 1 + drivers/net/txgbe/txgbe_ethdev_vf.c | 132 ++++++++++++++++++++++++++ 3 files changed, 144 insertions(+) create mode 100644 doc/guides/nics/features/txgbe_vf.ini create mode 100644 drivers/net/txgbe/txgbe_ethdev_vf.c diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini new file mode 100644 index 000000000..5035c5eea --- /dev/null +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -0,0 +1,11 @@ +; +; Supported features of the 'txgbe_vf' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. 
+; +[Features] +Multiprocess aware = Y +Linux = Y +ARMv8 = Y +x86-32 = Y +x86-64 = Y diff --git a/drivers/net/txgbe/meson.build b/drivers/net/txgbe/meson.build index 60505e211..3b9994aa9 100644 --- a/drivers/net/txgbe/meson.build +++ b/drivers/net/txgbe/meson.build @@ -12,6 +12,7 @@ objs = [base_objs] sources = files( 'txgbe_ethdev.c', + 'txgbe_ethdev_vf.c', 'txgbe_fdir.c', 'txgbe_flow.c', 'txgbe_ipsec.c', diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c new file mode 100644 index 000000000..1c3765e4e --- /dev/null +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -0,0 +1,132 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2015-2020 + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "base/txgbe.h" +#include "txgbe_ethdev.h" +#include "txgbe_rxtx.h" + +static int txgbevf_dev_close(struct rte_eth_dev *dev); + +/* + * The set of PCI devices this driver supports (for VF) + */ +static const struct rte_pci_id pci_id_txgbevf_map[] = { + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, TXGBE_DEV_ID_RAPTOR_VF) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, TXGBE_DEV_ID_RAPTOR_VF_HV) }, + { .vendor_id = 0, /* sentinel */ }, +}; + +static const struct eth_dev_ops txgbevf_eth_dev_ops; + +/* + * Virtual Function device init + */ +static int +eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev); + PMD_INIT_FUNC_TRACE(); + + eth_dev->dev_ops = &txgbevf_eth_dev_ops; + + /* for secondary processes, we don't initialise any further as primary + * has already done this work. Only check we don't need a different + * RX function + */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + struct txgbe_tx_queue *txq; + uint16_t nb_tx_queues = eth_dev->data->nb_tx_queues; + /* TX queue function in primary, set by last queue initialized + * Tx queue may not initialized by primary process + */ + if (eth_dev->data->tx_queues) { + txq = eth_dev->data->tx_queues[nb_tx_queues - 1]; + txgbe_set_tx_function(eth_dev, txq); + } else { + /* Use default TX function if we get here */ + PMD_INIT_LOG(NOTICE, + "No TX queues configured yet. 
Using default TX function."); + } + + txgbe_set_rx_function(eth_dev); + + return 0; + } + + rte_eth_copy_pci_info(eth_dev, pci_dev); + + hw->device_id = pci_dev->id.device_id; + hw->vendor_id = pci_dev->id.vendor_id; + hw->subsystem_device_id = pci_dev->id.subsystem_device_id; + hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id; + hw->hw_addr = (void *)pci_dev->mem_resource[0].addr; + + return 0; +} + +/* Virtual Function device uninit */ +static int +eth_txgbevf_dev_uninit(struct rte_eth_dev *eth_dev) +{ + PMD_INIT_FUNC_TRACE(); + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + txgbevf_dev_close(eth_dev); + + return 0; +} + +static int eth_txgbevf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_probe(pci_dev, + sizeof(struct txgbe_adapter), eth_txgbevf_dev_init); +} + +static int eth_txgbevf_pci_remove(struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_remove(pci_dev, eth_txgbevf_dev_uninit); +} + +/* + * virtual function driver struct + */ +static struct rte_pci_driver rte_txgbevf_pmd = { + .id_table = pci_id_txgbevf_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + .probe = eth_txgbevf_pci_probe, + .remove = eth_txgbevf_pci_remove, +}; + +static int +txgbevf_dev_close(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + return 0; +} + +/* + * dev_ops for virtual function, bare necessities for basic vf + * operation have been implemented + */ +static const struct eth_dev_ops txgbevf_eth_dev_ops = { +}; + +RTE_PMD_REGISTER_PCI(net_txgbe_vf, rte_txgbevf_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_txgbe_vf, pci_id_txgbevf_map); +RTE_PMD_REGISTER_KMOD_DEP(net_txgbe_vf, "* igb_uio | vfio-pci"); From patchwork Thu Feb 25 08:08:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88186 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5529CA034F; Thu, 25 Feb 2021 09:08:45 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4EA4916076C; Thu, 25 Feb 2021 09:08:38 +0100 (CET) Received: from smtpproxy21.qq.com (smtpbg704.qq.com [203.205.195.105]) by mails.dpdk.org (Postfix) with ESMTP id 3E12F4067B for ; Thu, 25 Feb 2021 09:08:35 +0100 (CET) X-QQ-mid: bizesmtp20t1614240510t2q908lr Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:30 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: QSPOcZ5XRR8EtOZ5aZE9yARR9eQEPTFH9rb3A2gvQ9zfRj2ZdEPHubFPByV1h 2eRWy1pKdk0BM9LnxNgdez2JSdKsdhWFbH3w5/zRWnb07rxO800gnOx/FtBpBRxoCzc+wAD LdRXU98Rs+q2b5TlJeL0cn/r1P1TAFS+29W68/dOrVntDd6Fxan9LNfOsKNhTOz6gAxzvy3 2BcbPNWjW5RzrM5D24uWlKI5HPm6sOyEUj4xlv15GXP+qsFdi0a+ZBcV5CTMeRgmAkX+bDX lMJqwdne6I616DfOAuGVtw7uB/CY/rpTSq1nzkLx/8yeguvbXDtxtKLsRzTuRx0VyRrxmXG NZzppNWFhLgu8zHOkIF1T9QUHFJSw== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:46 +0800 Message-Id: <20210225080901.3645291-3-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 
X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign5 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 02/17] net/txgbe: add base code for VF driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Implement VF device init and uninit function with hardware operations, and negotiate with PF in mailbox. Signed-off-by: Jiawen Wu --- drivers/net/txgbe/base/meson.build | 1 + drivers/net/txgbe/base/txgbe.h | 1 + drivers/net/txgbe/base/txgbe_hw.c | 4 + drivers/net/txgbe/base/txgbe_mbx.c | 354 ++++++++++++++++++++++++++++ drivers/net/txgbe/base/txgbe_mbx.h | 16 ++ drivers/net/txgbe/base/txgbe_type.h | 7 + drivers/net/txgbe/base/txgbe_vf.c | 285 ++++++++++++++++++++++ drivers/net/txgbe/base/txgbe_vf.h | 20 ++ drivers/net/txgbe/txgbe_ethdev_vf.c | 116 +++++++++ 9 files changed, 804 insertions(+) create mode 100644 drivers/net/txgbe/base/txgbe_vf.c create mode 100644 drivers/net/txgbe/base/txgbe_vf.h diff --git a/drivers/net/txgbe/base/meson.build b/drivers/net/txgbe/base/meson.build index 3c63bf5f4..33d0adf0d 100644 --- a/drivers/net/txgbe/base/meson.build +++ b/drivers/net/txgbe/base/meson.build @@ -9,6 +9,7 @@ sources = [ 'txgbe_mbx.c', 'txgbe_mng.c', 'txgbe_phy.c', + 'txgbe_vf.c', ] error_cflags = [] diff --git a/drivers/net/txgbe/base/txgbe.h b/drivers/net/txgbe/base/txgbe.h index b054bb8d0..d7199512b 100644 --- a/drivers/net/txgbe/base/txgbe.h +++ b/drivers/net/txgbe/base/txgbe.h @@ -11,6 +11,7 @@ #include "txgbe_eeprom.h" #include "txgbe_phy.h" #include "txgbe_hw.h" +#include "txgbe_vf.h" #include "txgbe_dcb.h" #endif /* _TXGBE_H_ */ diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c index dc419d7d4..c357c8658 100644 --- a/drivers/net/txgbe/base/txgbe_hw.c +++ b/drivers/net/txgbe/base/txgbe_hw.c @@ -6,6 +6,7 @@ #include "txgbe_mbx.h" #include "txgbe_phy.h" #include "txgbe_dcb.h" +#include "txgbe_vf.h" #include "txgbe_eeprom.h" #include "txgbe_mng.h" #include "txgbe_hw.h" @@ -2491,6 +2492,9 @@ s32 txgbe_init_shared_code(struct txgbe_hw *hw) case txgbe_mac_raptor: status = txgbe_init_ops_pf(hw); break; + case txgbe_mac_raptor_vf: + status = txgbe_init_ops_vf(hw); + break; default: status = TXGBE_ERR_DEVICE_NOT_SUPPORTED; break; diff --git a/drivers/net/txgbe/base/txgbe_mbx.c b/drivers/net/txgbe/base/txgbe_mbx.c index bfe53478e..b308839e7 100644 --- a/drivers/net/txgbe/base/txgbe_mbx.c +++ b/drivers/net/txgbe/base/txgbe_mbx.c @@ -118,6 +118,360 @@ s32 txgbe_check_for_rst(struct txgbe_hw *hw, u16 mbx_id) return ret_val; } +/** + * txgbe_poll_for_msg - Wait for message notification + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully received a message notification + **/ +STATIC s32 txgbe_poll_for_msg(struct txgbe_hw *hw, u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + int countdown = mbx->timeout; + + DEBUGFUNC("txgbe_poll_for_msg"); + + if (!countdown || !mbx->check_for_msg) + goto out; + + while (countdown && mbx->check_for_msg(hw, mbx_id)) { + countdown--; + if (!countdown) + break; + usec_delay(mbx->usec_delay); + } + + if (countdown == 0) + DEBUGOUT("Polling for VF%d mailbox message timedout", mbx_id); + +out: + return countdown ? 
0 : TXGBE_ERR_MBX; +} + +/** + * txgbe_poll_for_ack - Wait for message acknowledgment + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully received a message acknowledgment + **/ +STATIC s32 txgbe_poll_for_ack(struct txgbe_hw *hw, u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + int countdown = mbx->timeout; + + DEBUGFUNC("txgbe_poll_for_ack"); + + if (!countdown || !mbx->check_for_ack) + goto out; + + while (countdown && mbx->check_for_ack(hw, mbx_id)) { + countdown--; + if (!countdown) + break; + usec_delay(mbx->usec_delay); + } + + if (countdown == 0) + DEBUGOUT("Polling for VF%d mailbox ack timedout", mbx_id); + +out: + return countdown ? 0 : TXGBE_ERR_MBX; +} + +/** + * txgbe_read_posted_mbx - Wait for message notification and receive message + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully received a message notification and + * copied it into the receive buffer. + **/ +s32 txgbe_read_posted_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + s32 ret_val = TXGBE_ERR_MBX; + + DEBUGFUNC("txgbe_read_posted_mbx"); + + if (!mbx->read) + goto out; + + ret_val = txgbe_poll_for_msg(hw, mbx_id); + + /* if ack received read message, otherwise we timed out */ + if (!ret_val) + ret_val = mbx->read(hw, msg, size, mbx_id); +out: + return ret_val; +} + +/** + * txgbe_write_posted_mbx - Write a message to the mailbox, wait for ack + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully copied message into the buffer and + * received an ack to that message within delay * timeout period + **/ +s32 txgbe_write_posted_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, + u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + s32 ret_val = TXGBE_ERR_MBX; + + DEBUGFUNC("txgbe_write_posted_mbx"); + + /* exit if either we can't write or there isn't a defined timeout */ + if (!mbx->write || !mbx->timeout) + goto out; + + /* send msg */ + ret_val = mbx->write(hw, msg, size, mbx_id); + + /* if msg sent wait until we receive an ack */ + if (!ret_val) + ret_val = txgbe_poll_for_ack(hw, mbx_id); +out: + return ret_val; +} + +/** + * txgbe_read_v2p_mailbox - read v2p mailbox + * @hw: pointer to the HW structure + * + * This function is used to read the v2p mailbox without losing the read to + * clear status bits. + **/ +STATIC u32 txgbe_read_v2p_mailbox(struct txgbe_hw *hw) +{ + u32 v2p_mailbox = rd32(hw, TXGBE_VFMBCTL); + + v2p_mailbox |= hw->mbx.v2p_mailbox; + hw->mbx.v2p_mailbox |= v2p_mailbox & TXGBE_VFMBCTL_R2C_BITS; + + return v2p_mailbox; +} + +/** + * txgbe_check_for_bit_vf - Determine if a status bit was set + * @hw: pointer to the HW structure + * @mask: bitmask for bits to be tested and cleared + * + * This function is used to check for the read to clear bits within + * the V2P mailbox. 
+ **/ +STATIC s32 txgbe_check_for_bit_vf(struct txgbe_hw *hw, u32 mask) +{ + u32 v2p_mailbox = txgbe_read_v2p_mailbox(hw); + s32 ret_val = TXGBE_ERR_MBX; + + if (v2p_mailbox & mask) + ret_val = 0; + + hw->mbx.v2p_mailbox &= ~mask; + + return ret_val; +} + +/** + * txgbe_check_for_msg_vf - checks to see if the PF has sent mail + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns SUCCESS if the PF has set the Status bit or else ERR_MBX + **/ +s32 txgbe_check_for_msg_vf(struct txgbe_hw *hw, u16 mbx_id) +{ + s32 ret_val = TXGBE_ERR_MBX; + + UNREFERENCED_PARAMETER(mbx_id); + DEBUGFUNC("txgbe_check_for_msg_vf"); + + if (!txgbe_check_for_bit_vf(hw, TXGBE_VFMBCTL_PFSTS)) { + ret_val = 0; + hw->mbx.stats.reqs++; + } + + return ret_val; +} + +/** + * txgbe_check_for_ack_vf - checks to see if the PF has ACK'd + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns SUCCESS if the PF has set the ACK bit or else ERR_MBX + **/ +s32 txgbe_check_for_ack_vf(struct txgbe_hw *hw, u16 mbx_id) +{ + s32 ret_val = TXGBE_ERR_MBX; + + UNREFERENCED_PARAMETER(mbx_id); + DEBUGFUNC("txgbe_check_for_ack_vf"); + + if (!txgbe_check_for_bit_vf(hw, TXGBE_VFMBCTL_PFACK)) { + ret_val = 0; + hw->mbx.stats.acks++; + } + + return ret_val; +} + +/** + * txgbe_check_for_rst_vf - checks to see if the PF has reset + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns true if the PF has set the reset done bit or else false + **/ +s32 txgbe_check_for_rst_vf(struct txgbe_hw *hw, u16 mbx_id) +{ + s32 ret_val = TXGBE_ERR_MBX; + + UNREFERENCED_PARAMETER(mbx_id); + DEBUGFUNC("txgbe_check_for_rst_vf"); + + if (!txgbe_check_for_bit_vf(hw, (TXGBE_VFMBCTL_RSTD | + TXGBE_VFMBCTL_RSTI))) { + ret_val = 0; + hw->mbx.stats.rsts++; + } + + return ret_val; +} + +/** + * txgbe_obtain_mbx_lock_vf - obtain mailbox lock + * @hw: pointer to the HW structure + * + * return SUCCESS if we obtained the mailbox lock + **/ +STATIC s32 txgbe_obtain_mbx_lock_vf(struct txgbe_hw *hw) +{ + s32 ret_val = TXGBE_ERR_MBX; + + DEBUGFUNC("txgbe_obtain_mbx_lock_vf"); + + /* Take ownership of the buffer */ + wr32(hw, TXGBE_VFMBCTL, TXGBE_VFMBCTL_VFU); + + /* reserve mailbox for vf use */ + if (txgbe_read_v2p_mailbox(hw) & TXGBE_VFMBCTL_VFU) + ret_val = 0; + + return ret_val; +} + +/** + * txgbe_write_mbx_vf - Write a message to the mailbox + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully copied message into the buffer + **/ +s32 txgbe_write_mbx_vf(struct txgbe_hw *hw, u32 *msg, u16 size, + u16 mbx_id) +{ + s32 ret_val; + u16 i; + + UNREFERENCED_PARAMETER(mbx_id); + + DEBUGFUNC("txgbe_write_mbx_vf"); + + /* lock the mailbox to prevent pf/vf race condition */ + ret_val = txgbe_obtain_mbx_lock_vf(hw); + if (ret_val) + goto out_no_write; + + /* flush msg and acks as we are overwriting the message buffer */ + txgbe_check_for_msg_vf(hw, 0); + txgbe_check_for_ack_vf(hw, 0); + + /* copy the caller specified message to the mailbox memory buffer */ + for (i = 0; i < size; i++) + wr32a(hw, TXGBE_VFMBX, i, msg[i]); + + /* update stats */ + hw->mbx.stats.msgs_tx++; + + /* Drop VFU and interrupt the PF to tell it a message has been sent */ + wr32(hw, TXGBE_VFMBCTL, TXGBE_VFMBCTL_REQ); + +out_no_write: + return ret_val; +} + +/** + * txgbe_read_mbx_vf - Reads a message from the inbox intended for vf + * @hw: pointer to the HW structure + * @msg: The message buffer 
+ * @size: Length of buffer + * @mbx_id: id of mailbox to read + * + * returns SUCCESS if it successfully read message from buffer + **/ +s32 txgbe_read_mbx_vf(struct txgbe_hw *hw, u32 *msg, u16 size, + u16 mbx_id) +{ + s32 ret_val = 0; + u16 i; + + DEBUGFUNC("txgbe_read_mbx_vf"); + UNREFERENCED_PARAMETER(mbx_id); + + /* lock the mailbox to prevent pf/vf race condition */ + ret_val = txgbe_obtain_mbx_lock_vf(hw); + if (ret_val) + goto out_no_read; + + /* copy the message from the mailbox memory buffer */ + for (i = 0; i < size; i++) + msg[i] = rd32a(hw, TXGBE_VFMBX, i); + + /* Acknowledge receipt and release mailbox, then we're done */ + wr32(hw, TXGBE_VFMBCTL, TXGBE_VFMBCTL_ACK); + + /* update stats */ + hw->mbx.stats.msgs_rx++; + +out_no_read: + return ret_val; +} + +/** + * txgbe_init_mbx_params_vf - set initial values for vf mailbox + * @hw: pointer to the HW structure + * + * Initializes the hw->mbx struct to correct values for vf mailbox + */ +void txgbe_init_mbx_params_vf(struct txgbe_hw *hw) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + + /* start mailbox as timed out and let the reset_hw call set the timeout + * value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = TXGBE_VF_MBX_INIT_DELAY; + + mbx->size = TXGBE_P2VMBX_SIZE; + + mbx->stats.msgs_tx = 0; + mbx->stats.msgs_rx = 0; + mbx->stats.reqs = 0; + mbx->stats.acks = 0; + mbx->stats.rsts = 0; +} + STATIC s32 txgbe_check_for_bit_pf(struct txgbe_hw *hw, u32 mask, s32 index) { u32 mbvficr = rd32(hw, TXGBE_MBVFICR(index)); diff --git a/drivers/net/txgbe/base/txgbe_mbx.h b/drivers/net/txgbe/base/txgbe_mbx.h index 4a058b0bb..ccf5d12f2 100644 --- a/drivers/net/txgbe/base/txgbe_mbx.h +++ b/drivers/net/txgbe/base/txgbe_mbx.h @@ -60,6 +60,8 @@ enum txgbe_pfvf_api_rev { #define TXGBE_VF_GET_RSS_KEY 0x0b /* get RSS key */ #define TXGBE_VF_UPDATE_XCAST_MODE 0x0c +#define TXGBE_VF_BACKUP 0x8001 /* VF requests backup */ + /* mode choices for TXGBE_VF_UPDATE_XCAST_MODE */ enum txgbevf_xcast_modes { TXGBEVF_XCAST_MODE_NONE = 0, @@ -76,12 +78,20 @@ enum txgbevf_xcast_modes { /* length of permanent address message returned from PF */ #define TXGBE_VF_PERMADDR_MSG_LEN 4 +/* word in permanent address message with the current multicast type */ +#define TXGBE_VF_MC_TYPE_WORD 3 + +#define TXGBE_VF_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ +#define TXGBE_VF_MBX_INIT_DELAY 500 /* microseconds between retries */ s32 txgbe_read_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id); s32 txgbe_write_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id); +s32 txgbe_read_posted_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id); +s32 txgbe_write_posted_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id); s32 txgbe_check_for_msg(struct txgbe_hw *hw, u16 mbx_id); s32 txgbe_check_for_ack(struct txgbe_hw *hw, u16 mbx_id); s32 txgbe_check_for_rst(struct txgbe_hw *hw, u16 mbx_id); +void txgbe_init_mbx_params_vf(struct txgbe_hw *hw); void txgbe_init_mbx_params_pf(struct txgbe_hw *hw); s32 txgbe_read_mbx_pf(struct txgbe_hw *hw, u32 *msg, u16 size, u16 vf_number); @@ -90,4 +100,10 @@ s32 txgbe_check_for_msg_pf(struct txgbe_hw *hw, u16 vf_number); s32 txgbe_check_for_ack_pf(struct txgbe_hw *hw, u16 vf_number); s32 txgbe_check_for_rst_pf(struct txgbe_hw *hw, u16 vf_number); +s32 txgbe_read_mbx_vf(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id); +s32 txgbe_write_mbx_vf(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id); +s32 txgbe_check_for_msg_vf(struct txgbe_hw *hw, u16 mbx_id); +s32 
txgbe_check_for_ack_vf(struct txgbe_hw *hw, u16 mbx_id); +s32 txgbe_check_for_rst_vf(struct txgbe_hw *hw, u16 mbx_id); + #endif /* _TXGBE_MBX_H_ */ diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h index 22efcef78..ef8358ae3 100644 --- a/drivers/net/txgbe/base/txgbe_type.h +++ b/drivers/net/txgbe/base/txgbe_type.h @@ -11,6 +11,9 @@ #define TXGBE_LINK_UP_TIME 90 /* 9.0 Seconds */ #define TXGBE_AUTO_NEG_TIME 45 /* 4.5 Seconds */ +#define TXGBE_RX_HDR_SIZE 256 +#define TXGBE_RX_BUF_SIZE 2048 + #define TXGBE_FRAME_SIZE_MAX (9728) /* Maximum frame size, +FCS */ #define TXGBE_FRAME_SIZE_DFT (1518) /* Default frame size, +FCS */ #define TXGBE_NUM_POOL (64) @@ -23,6 +26,7 @@ #define TXGBE_FDIR_INIT_DONE_POLL 10 #define TXGBE_FDIRCMD_CMD_POLL 10 +#define TXGBE_VF_INIT_TIMEOUT 200 /* Number of retries to clear RSTI */ #define TXGBE_ALIGN 128 /* as intel did */ @@ -703,6 +707,7 @@ struct txgbe_mbx_info { struct txgbe_mbx_stats stats; u32 timeout; u32 usec_delay; + u32 v2p_mailbox; u16 size; }; @@ -732,6 +737,7 @@ struct txgbe_hw { u16 subsystem_vendor_id; u8 revision_id; bool adapter_stopped; + int api_version; bool allow_unsupported_sfp; bool need_crosstalk_fix; @@ -755,6 +761,7 @@ struct txgbe_hw { u32 q_rx_regs[128 * 4]; u32 q_tx_regs[128 * 4]; bool offset_loaded; + bool rx_loaded; struct { u64 rx_qp_packets; u64 tx_qp_packets; diff --git a/drivers/net/txgbe/base/txgbe_vf.c b/drivers/net/txgbe/base/txgbe_vf.c new file mode 100644 index 000000000..5d4e10158 --- /dev/null +++ b/drivers/net/txgbe/base/txgbe_vf.c @@ -0,0 +1,285 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2015-2020 + */ + +#include "txgbe_mbx.h" +#include "txgbe_vf.h" + +/** + * txgbe_init_ops_vf - Initialize the pointers for vf + * @hw: pointer to hardware structure + * + * This will assign function pointers, adapter-specific functions can + * override the assignment of generic function pointers by assigning + * their own adapter-specific function pointers. + * Does not touch the hardware. + **/ +s32 txgbe_init_ops_vf(struct txgbe_hw *hw) +{ + struct txgbe_mac_info *mac = &hw->mac; + struct txgbe_mbx_info *mbx = &hw->mbx; + + /* MAC */ + mac->reset_hw = txgbe_reset_hw_vf; + mac->stop_hw = txgbe_stop_hw_vf; + mac->negotiate_api_version = txgbevf_negotiate_api_version; + + mac->max_tx_queues = 1; + mac->max_rx_queues = 1; + + mbx->init_params = txgbe_init_mbx_params_vf; + mbx->read = txgbe_read_mbx_vf; + mbx->write = txgbe_write_mbx_vf; + mbx->read_posted = txgbe_read_posted_mbx; + mbx->write_posted = txgbe_write_posted_mbx; + mbx->check_for_msg = txgbe_check_for_msg_vf; + mbx->check_for_ack = txgbe_check_for_ack_vf; + mbx->check_for_rst = txgbe_check_for_rst_vf; + + return 0; +} + +/* txgbe_virt_clr_reg - Set register to default (power on) state. + * @hw: pointer to hardware structure + */ +static void txgbe_virt_clr_reg(struct txgbe_hw *hw) +{ + int i; + u32 vfsrrctl; + + /* default values (BUF_SIZE = 2048, HDR_SIZE = 256) */ + vfsrrctl = TXGBE_RXCFG_HDRLEN(TXGBE_RX_HDR_SIZE); + vfsrrctl |= TXGBE_RXCFG_PKTLEN(TXGBE_RX_BUF_SIZE); + + for (i = 0; i < 8; i++) { + wr32m(hw, TXGBE_RXCFG(i), + (TXGBE_RXCFG_HDRLEN_MASK | TXGBE_RXCFG_PKTLEN_MASK), + vfsrrctl); + } + + txgbe_flush(hw); +} + +/** + * txgbe_reset_hw_vf - Performs hardware reset + * @hw: pointer to hardware structure + * + * Resets the hardware by resetting the transmit and receive units, masks and + * clears all interrupts. 
+ **/ +s32 txgbe_reset_hw_vf(struct txgbe_hw *hw) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + u32 timeout = TXGBE_VF_INIT_TIMEOUT; + s32 ret_val = TXGBE_ERR_INVALID_MAC_ADDR; + u32 msgbuf[TXGBE_VF_PERMADDR_MSG_LEN]; + u8 *addr = (u8 *)(&msgbuf[1]); + + DEBUGFUNC("txgbevf_reset_hw_vf"); + + /* Call adapter stop to disable tx/rx and clear interrupts */ + hw->mac.stop_hw(hw); + + /* reset the api version */ + hw->api_version = txgbe_mbox_api_10; + + /* backup msix vectors */ + mbx->timeout = TXGBE_VF_MBX_INIT_TIMEOUT; + msgbuf[0] = TXGBE_VF_BACKUP; + mbx->write_posted(hw, msgbuf, 1, 0); + msec_delay(10); + + DEBUGOUT("Issuing a function level reset to MAC\n"); + wr32(hw, TXGBE_VFRST, TXGBE_VFRST_SET); + txgbe_flush(hw); + msec_delay(50); + + hw->offset_loaded = 1; + + /* we cannot reset while the RSTI / RSTD bits are asserted */ + while (!mbx->check_for_rst(hw, 0) && timeout) { + timeout--; + /* if it doesn't work, try in 1 ms */ + usec_delay(5); + } + + if (!timeout) + return TXGBE_ERR_RESET_FAILED; + + /* Reset VF registers to initial values */ + txgbe_virt_clr_reg(hw); + + /* mailbox timeout can now become active */ + mbx->timeout = TXGBE_VF_MBX_INIT_TIMEOUT; + + msgbuf[0] = TXGBE_VF_RESET; + mbx->write_posted(hw, msgbuf, 1, 0); + + msec_delay(10); + + /* + * set our "perm_addr" based on info provided by PF + * also set up the mc_filter_type which is piggy backed + * on the mac address in word 3 + */ + ret_val = mbx->read_posted(hw, msgbuf, + TXGBE_VF_PERMADDR_MSG_LEN, 0); + if (ret_val) + return ret_val; + + if (msgbuf[0] != (TXGBE_VF_RESET | TXGBE_VT_MSGTYPE_ACK) && + msgbuf[0] != (TXGBE_VF_RESET | TXGBE_VT_MSGTYPE_NACK)) + return TXGBE_ERR_INVALID_MAC_ADDR; + + if (msgbuf[0] == (TXGBE_VF_RESET | TXGBE_VT_MSGTYPE_ACK)) + memcpy(hw->mac.perm_addr, addr, ETH_ADDR_LEN); + + hw->mac.mc_filter_type = msgbuf[TXGBE_VF_MC_TYPE_WORD]; + + return ret_val; +} + +/** + * txgbe_stop_hw_vf - Generic stop Tx/Rx units + * @hw: pointer to hardware structure + * + * Sets the adapter_stopped flag within txgbe_hw struct. Clears interrupts, + * disables transmit and receive units. The adapter_stopped flag is used by + * the shared code and drivers to determine if the adapter is in a stopped + * state and should not touch the hardware. + **/ +s32 txgbe_stop_hw_vf(struct txgbe_hw *hw) +{ + u16 i; + + /* + * Set the adapter_stopped flag so other driver functions stop touching + * the hardware + */ + hw->adapter_stopped = true; + + /* Clear interrupt mask to stop from interrupts being generated */ + wr32(hw, TXGBE_VFIMC, TXGBE_VFIMC_MASK); + + /* Clear any pending interrupts, flush previous writes */ + wr32(hw, TXGBE_VFICR, TXGBE_VFICR_MASK); + + /* Disable the transmit unit. Each queue must be disabled. 
*/ + for (i = 0; i < hw->mac.max_tx_queues; i++) + wr32(hw, TXGBE_TXCFG(i), TXGBE_TXCFG_FLUSH); + + /* Disable the receive unit by stopping each queue */ + for (i = 0; i < hw->mac.max_rx_queues; i++) + wr32m(hw, TXGBE_RXCFG(i), TXGBE_RXCFG_ENA, 0); + + /* Clear packet split and pool config */ + wr32(hw, TXGBE_VFPLCFG, 0); + hw->rx_loaded = 1; + + /* flush all queues disables */ + txgbe_flush(hw); + msec_delay(2); + + return 0; +} + +STATIC s32 txgbevf_write_msg_read_ack(struct txgbe_hw *hw, u32 *msg, + u32 *retmsg, u16 size) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + s32 retval = mbx->write_posted(hw, msg, size, 0); + + if (retval) + return retval; + + return mbx->read_posted(hw, retmsg, size, 0); +} + +/** + * txgbevf_negotiate_api_version - Negotiate supported API version + * @hw: pointer to the HW structure + * @api: integer containing requested API version + **/ +int txgbevf_negotiate_api_version(struct txgbe_hw *hw, int api) +{ + int err; + u32 msg[3]; + + /* Negotiate the mailbox API version */ + msg[0] = TXGBE_VF_API_NEGOTIATE; + msg[1] = api; + msg[2] = 0; + + err = txgbevf_write_msg_read_ack(hw, msg, msg, 3); + if (!err) { + msg[0] &= ~TXGBE_VT_MSGTYPE_CTS; + + /* Store value and return 0 on success */ + if (msg[0] == (TXGBE_VF_API_NEGOTIATE | TXGBE_VT_MSGTYPE_ACK)) { + hw->api_version = api; + return 0; + } + + err = TXGBE_ERR_INVALID_ARGUMENT; + } + + return err; +} + +int txgbevf_get_queues(struct txgbe_hw *hw, unsigned int *num_tcs, + unsigned int *default_tc) +{ + int err, i; + u32 msg[5]; + + /* do nothing if API doesn't support txgbevf_get_queues */ + switch (hw->api_version) { + case txgbe_mbox_api_11: + case txgbe_mbox_api_12: + case txgbe_mbox_api_13: + break; + default: + return 0; + } + + /* Fetch queue configuration from the PF */ + msg[0] = TXGBE_VF_GET_QUEUES; + for (i = 1; i < 5; i++) + msg[i] = 0; + + err = txgbevf_write_msg_read_ack(hw, msg, msg, 5); + if (!err) { + msg[0] &= ~TXGBE_VT_MSGTYPE_CTS; + + /* + * if we didn't get an ACK there must have been + * some sort of mailbox error so we should treat it + * as such + */ + if (msg[0] != (TXGBE_VF_GET_QUEUES | TXGBE_VT_MSGTYPE_ACK)) + return TXGBE_ERR_MBX; + + /* record and validate values from message */ + hw->mac.max_tx_queues = msg[TXGBE_VF_TX_QUEUES]; + if (hw->mac.max_tx_queues == 0 || + hw->mac.max_tx_queues > TXGBE_VF_MAX_TX_QUEUES) + hw->mac.max_tx_queues = TXGBE_VF_MAX_TX_QUEUES; + + hw->mac.max_rx_queues = msg[TXGBE_VF_RX_QUEUES]; + if (hw->mac.max_rx_queues == 0 || + hw->mac.max_rx_queues > TXGBE_VF_MAX_RX_QUEUES) + hw->mac.max_rx_queues = TXGBE_VF_MAX_RX_QUEUES; + + *num_tcs = msg[TXGBE_VF_TRANS_VLAN]; + /* in case of unknown state assume we cannot tag frames */ + if (*num_tcs > hw->mac.max_rx_queues) + *num_tcs = 1; + + *default_tc = msg[TXGBE_VF_DEF_QUEUE]; + /* default to queue 0 on out-of-bounds queue number */ + if (*default_tc >= hw->mac.max_tx_queues) + *default_tc = 0; + } + + return err; +} diff --git a/drivers/net/txgbe/base/txgbe_vf.h b/drivers/net/txgbe/base/txgbe_vf.h new file mode 100644 index 000000000..70f90c262 --- /dev/null +++ b/drivers/net/txgbe/base/txgbe_vf.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2015-2020 + */ + +#ifndef _TXGBE_VF_H_ +#define _TXGBE_VF_H_ + +#include "txgbe_type.h" + +#define TXGBE_VF_MAX_TX_QUEUES 8 +#define TXGBE_VF_MAX_RX_QUEUES 8 + +s32 txgbe_init_ops_vf(struct txgbe_hw *hw); +s32 txgbe_reset_hw_vf(struct txgbe_hw *hw); +s32 txgbe_stop_hw_vf(struct txgbe_hw *hw); +int txgbevf_negotiate_api_version(struct txgbe_hw *hw, 
int api); +int txgbevf_get_queues(struct txgbe_hw *hw, unsigned int *num_tcs, + unsigned int *default_tc); + +#endif /* __TXGBE_VF_H__ */ diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 1c3765e4e..4fbb4f154 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -10,11 +10,14 @@ #include #include +#include "txgbe_logs.h" #include "base/txgbe.h" #include "txgbe_ethdev.h" #include "txgbe_rxtx.h" static int txgbevf_dev_close(struct rte_eth_dev *dev); +static void txgbevf_intr_disable(struct rte_eth_dev *dev); +static void txgbevf_intr_enable(struct rte_eth_dev *dev); /* * The set of PCI devices this driver supports (for VF) @@ -27,14 +30,43 @@ static const struct rte_pci_id pci_id_txgbevf_map[] = { static const struct eth_dev_ops txgbevf_eth_dev_ops; +/* + * Negotiate mailbox API version with the PF. + * After reset API version is always set to the basic one (txgbe_mbox_api_10). + * Then we try to negotiate starting with the most recent one. + * If all negotiation attempts fail, then we will proceed with + * the default one (txgbe_mbox_api_10). + */ +static void +txgbevf_negotiate_api(struct txgbe_hw *hw) +{ + int32_t i; + + /* start with highest supported, proceed down */ + static const int sup_ver[] = { + txgbe_mbox_api_13, + txgbe_mbox_api_12, + txgbe_mbox_api_11, + txgbe_mbox_api_10, + }; + + for (i = 0; i < ARRAY_SIZE(sup_ver); i++) { + if (txgbevf_negotiate_api_version(hw, sup_ver[i]) == 0) + break; + } +} + /* * Virtual Function device init */ static int eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) { + int err; + uint32_t tc, tcs; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev); + PMD_INIT_FUNC_TRACE(); eth_dev->dev_ops = &txgbevf_eth_dev_ops; @@ -71,6 +103,46 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id; hw->hw_addr = (void *)pci_dev->mem_resource[0].addr; + /* Initialize the shared code (base driver) */ + err = txgbe_init_shared_code(hw); + if (err != 0) { + PMD_INIT_LOG(ERR, + "Shared code init failed for txgbevf: %d", err); + return -EIO; + } + + /* init_mailbox_params */ + hw->mbx.init_params(hw); + + /* Disable the interrupts for VF */ + txgbevf_intr_disable(eth_dev); + + hw->mac.num_rar_entries = 128; /* The MAX of the underlying PF */ + err = hw->mac.reset_hw(hw); + + /* + * The VF reset operation returns the TXGBE_ERR_INVALID_MAC_ADDR when + * the underlying PF driver has not assigned a MAC address to the VF. + * In this case, assign a random MAC address. + */ + if (err != 0 && err != TXGBE_ERR_INVALID_MAC_ADDR) { + PMD_INIT_LOG(ERR, "VF Initialization Failure: %d", err); + /* + * This error code will be propagated to the app by + * rte_eth_dev_reset, so use a public error code rather than + * the internal-only TXGBE_ERR_RESET_FAILED + */ + return -EAGAIN; + } + + /* negotiate mailbox API version to use with the PF. 
*/ + txgbevf_negotiate_api(hw); + + /* Get Rx/Tx queue count via mailbox, which is ready after reset_hw */ + txgbevf_get_queues(hw, &tcs, &tc); + + txgbevf_intr_enable(eth_dev); + return 0; } @@ -110,13 +182,57 @@ static struct rte_pci_driver rte_txgbevf_pmd = { .remove = eth_txgbevf_pci_remove, }; +/* + * Virtual Function operations + */ +static void +txgbevf_intr_disable(struct rte_eth_dev *dev) +{ + struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev); + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + + PMD_INIT_FUNC_TRACE(); + + /* Clear interrupt mask to stop from interrupts being generated */ + wr32(hw, TXGBE_VFIMS, TXGBE_VFIMS_MASK); + + txgbe_flush(hw); + + /* Clear mask value. */ + intr->mask_misc = TXGBE_VFIMS_MASK; +} + +static void +txgbevf_intr_enable(struct rte_eth_dev *dev) +{ + struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev); + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + + PMD_INIT_FUNC_TRACE(); + + /* VF enable interrupt autoclean */ + wr32(hw, TXGBE_VFIMC, TXGBE_VFIMC_MASK); + + txgbe_flush(hw); + + intr->mask_misc = 0; +} + static int txgbevf_dev_close(struct rte_eth_dev *dev) { + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); PMD_INIT_FUNC_TRACE(); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + hw->mac.reset_hw(hw); + + txgbe_dev_free_queues(dev); + + /* Disable the interrupts for VF */ + txgbevf_intr_disable(dev); + return 0; } From patchwork Thu Feb 25 08:08:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88188 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E92B1A034F; Thu, 25 Feb 2021 09:09:05 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0E2A616078C; Thu, 25 Feb 2021 09:08:41 +0100 (CET) Received: from smtpbgeu2.qq.com (smtpbgeu2.qq.com [18.194.254.142]) by mails.dpdk.org (Postfix) with ESMTP id CE22916075D for ; Thu, 25 Feb 2021 09:08:37 +0100 (CET) X-QQ-mid: bizesmtp20t1614240511tf9yuk4o Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:31 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: fxTm67mirkP96nzYaKtKtqxsS1fEJI2Jx6Y9mq0HJmX7Pi3DthM4/rI8iIblX R58tGXoelQ48w9uoQydSurUcqOH63xfJHzDHB5lc8HkwLHtwlxTu54AQiUGacYBnI7gLVgN qTWSqcg5GEvqf4C5pNqmOzRjoLpgpPqIGcu0Ie3EHPx1X/OTHsGhWtTzIMB4bHy1GYD01yg zNWTEza/xufdwpVw7xTRH4NVNEIzJu+OQmM4ap3YwU2o013cuRNA648VTRn4WPCJLY+8a1g eg6+axAksFy6LTnmOkZo3DzGc7IBKq2vMPl4RdcVpHDLSIRYLUkaMvcHtTqVzxjrTiKx5Z8 KKtDKyTgIkZ2q075YpTfxN24KUS9g== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:47 +0800 Message-Id: <20210225080901.3645291-4-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign6 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 03/17] net/txgbe: support add and remove VF device MAC address X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Generate a random 
MAC address if none was assigned by PF during the initialization of VF device. And support to add and remove MAC address. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 1 + drivers/net/txgbe/base/txgbe_vf.c | 102 ++++++++++++++++ drivers/net/txgbe/base/txgbe_vf.h | 5 + drivers/net/txgbe/txgbe_ethdev_vf.c | 168 ++++++++++++++++++++++++++ 4 files changed, 276 insertions(+) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index 5035c5eea..97c881d96 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -4,6 +4,7 @@ ; Refer to default.ini for the full list of available PMD features. ; [Features] +Unicast MAC filter = Y Multiprocess aware = Y Linux = Y ARMv8 = Y diff --git a/drivers/net/txgbe/base/txgbe_vf.c b/drivers/net/txgbe/base/txgbe_vf.c index 5d4e10158..fadecaa11 100644 --- a/drivers/net/txgbe/base/txgbe_vf.c +++ b/drivers/net/txgbe/base/txgbe_vf.c @@ -21,9 +21,16 @@ s32 txgbe_init_ops_vf(struct txgbe_hw *hw) /* MAC */ mac->reset_hw = txgbe_reset_hw_vf; + mac->start_hw = txgbe_start_hw_vf; + /* Cannot clear stats on VF */ + mac->get_mac_addr = txgbe_get_mac_addr_vf; mac->stop_hw = txgbe_stop_hw_vf; mac->negotiate_api_version = txgbevf_negotiate_api_version; + /* RAR, Multicast, VLAN */ + mac->set_rar = txgbe_set_rar_vf; + mac->set_uc_addr = txgbevf_set_uc_addr_vf; + mac->max_tx_queues = 1; mac->max_rx_queues = 1; @@ -60,6 +67,23 @@ static void txgbe_virt_clr_reg(struct txgbe_hw *hw) txgbe_flush(hw); } +/** + * txgbe_start_hw_vf - Prepare hardware for Tx/Rx + * @hw: pointer to hardware structure + * + * Starts the hardware by filling the bus info structure and media type, clears + * all on chip counters, initializes receive address registers, multicast + * table, VLAN filter table, calls routine to set up link and flow control + * settings, and leaves transmit and receive units disabled and uninitialized + **/ +s32 txgbe_start_hw_vf(struct txgbe_hw *hw) +{ + /* Clear adapter stopped flag */ + hw->adapter_stopped = false; + + return 0; +} + /** * txgbe_reset_hw_vf - Performs hardware reset * @hw: pointer to hardware structure @@ -195,6 +219,84 @@ STATIC s32 txgbevf_write_msg_read_ack(struct txgbe_hw *hw, u32 *msg, return mbx->read_posted(hw, retmsg, size, 0); } +/** + * txgbe_set_rar_vf - set device MAC address + * @hw: pointer to hardware structure + * @index: Receive address register to write + * @addr: Address to put into receive address register + * @vmdq: VMDq "set" or "pool" index + * @enable_addr: set flag that address is active + **/ +s32 txgbe_set_rar_vf(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, + u32 enable_addr) +{ + u32 msgbuf[3]; + u8 *msg_addr = (u8 *)(&msgbuf[1]); + s32 ret_val; + UNREFERENCED_PARAMETER(vmdq, enable_addr, index); + + memset(msgbuf, 0, 12); + msgbuf[0] = TXGBE_VF_SET_MAC_ADDR; + memcpy(msg_addr, addr, 6); + ret_val = txgbevf_write_msg_read_ack(hw, msgbuf, msgbuf, 3); + + msgbuf[0] &= ~TXGBE_VT_MSGTYPE_CTS; + + /* if nacked the address was rejected, use "perm_addr" */ + if (!ret_val && + (msgbuf[0] == (TXGBE_VF_SET_MAC_ADDR | TXGBE_VT_MSGTYPE_NACK))) { + txgbe_get_mac_addr_vf(hw, hw->mac.addr); + return TXGBE_ERR_MBX; + } + + return ret_val; +} + +/** + * txgbe_get_mac_addr_vf - Read device MAC address + * @hw: pointer to the HW structure + * @mac_addr: the MAC address + **/ +s32 txgbe_get_mac_addr_vf(struct txgbe_hw *hw, u8 *mac_addr) +{ + int i; + + for (i = 0; i < ETH_ADDR_LEN; i++) + mac_addr[i] = hw->mac.perm_addr[i]; + + return 0; +} + 
+s32 txgbevf_set_uc_addr_vf(struct txgbe_hw *hw, u32 index, u8 *addr) +{ + u32 msgbuf[3], msgbuf_chk; + u8 *msg_addr = (u8 *)(&msgbuf[1]); + s32 ret_val; + + memset(msgbuf, 0, sizeof(msgbuf)); + /* + * If index is one then this is the start of a new list and needs + * indication to the PF so it can do it's own list management. + * If it is zero then that tells the PF to just clear all of + * this VF's macvlans and there is no new list. + */ + msgbuf[0] |= index << TXGBE_VT_MSGINFO_SHIFT; + msgbuf[0] |= TXGBE_VF_SET_MACVLAN; + msgbuf_chk = msgbuf[0]; + if (addr) + memcpy(msg_addr, addr, 6); + + ret_val = txgbevf_write_msg_read_ack(hw, msgbuf, msgbuf, 3); + if (!ret_val) { + msgbuf[0] &= ~TXGBE_VT_MSGTYPE_CTS; + + if (msgbuf[0] == (msgbuf_chk | TXGBE_VT_MSGTYPE_NACK)) + return TXGBE_ERR_OUT_OF_MEM; + } + + return ret_val; +} + /** * txgbevf_negotiate_api_version - Negotiate supported API version * @hw: pointer to the HW structure diff --git a/drivers/net/txgbe/base/txgbe_vf.h b/drivers/net/txgbe/base/txgbe_vf.h index 70f90c262..f8c6532f6 100644 --- a/drivers/net/txgbe/base/txgbe_vf.h +++ b/drivers/net/txgbe/base/txgbe_vf.h @@ -11,8 +11,13 @@ #define TXGBE_VF_MAX_RX_QUEUES 8 s32 txgbe_init_ops_vf(struct txgbe_hw *hw); +s32 txgbe_start_hw_vf(struct txgbe_hw *hw); s32 txgbe_reset_hw_vf(struct txgbe_hw *hw); s32 txgbe_stop_hw_vf(struct txgbe_hw *hw); +s32 txgbe_get_mac_addr_vf(struct txgbe_hw *hw, u8 *mac_addr); +s32 txgbe_set_rar_vf(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, + u32 enable_addr); +s32 txgbevf_set_uc_addr_vf(struct txgbe_hw *hw, u32 index, u8 *addr); int txgbevf_negotiate_api_version(struct txgbe_hw *hw, int api); int txgbevf_get_queues(struct txgbe_hw *hw, unsigned int *num_tcs, unsigned int *default_tc); diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 4fbb4f154..86b1e2bfb 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -18,6 +18,7 @@ static int txgbevf_dev_close(struct rte_eth_dev *dev); static void txgbevf_intr_disable(struct rte_eth_dev *dev); static void txgbevf_intr_enable(struct rte_eth_dev *dev); +static void txgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index); /* * The set of PCI devices this driver supports (for VF) @@ -56,6 +57,22 @@ txgbevf_negotiate_api(struct txgbe_hw *hw) } } +static void +generate_random_mac_addr(struct rte_ether_addr *mac_addr) +{ + uint64_t random; + + /* Set Organizationally Unique Identifier (OUI) prefix. */ + mac_addr->addr_bytes[0] = 0x00; + mac_addr->addr_bytes[1] = 0x09; + mac_addr->addr_bytes[2] = 0xC0; + /* Force indication of locally assigned MAC address. */ + mac_addr->addr_bytes[0] |= RTE_ETHER_LOCAL_ADMIN_ADDR; + /* Generate the last 3 bytes of the MAC address with a random number. 
*/ + random = rte_rand(); + memcpy(&mac_addr->addr_bytes[3], &random, 3); +} + /* * Virtual Function device init */ @@ -66,6 +83,8 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) uint32_t tc, tcs; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev); + struct rte_ether_addr *perm_addr = + (struct rte_ether_addr *)hw->mac.perm_addr; PMD_INIT_FUNC_TRACE(); @@ -141,8 +160,53 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) /* Get Rx/Tx queue count via mailbox, which is ready after reset_hw */ txgbevf_get_queues(hw, &tcs, &tc); + /* Allocate memory for storing MAC addresses */ + eth_dev->data->mac_addrs = rte_zmalloc("txgbevf", RTE_ETHER_ADDR_LEN * + hw->mac.num_rar_entries, 0); + if (eth_dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, + "Failed to allocate %u bytes needed to store " + "MAC addresses", + RTE_ETHER_ADDR_LEN * hw->mac.num_rar_entries); + return -ENOMEM; + } + + /* Generate a random MAC address, if none was assigned by PF. */ + if (rte_is_zero_ether_addr(perm_addr)) { + generate_random_mac_addr(perm_addr); + err = txgbe_set_rar_vf(hw, 1, perm_addr->addr_bytes, 0, 1); + if (err) { + rte_free(eth_dev->data->mac_addrs); + eth_dev->data->mac_addrs = NULL; + return err; + } + PMD_INIT_LOG(INFO, "\tVF MAC address not assigned by Host PF"); + PMD_INIT_LOG(INFO, "\tAssign randomly generated MAC address " + "%02x:%02x:%02x:%02x:%02x:%02x", + perm_addr->addr_bytes[0], + perm_addr->addr_bytes[1], + perm_addr->addr_bytes[2], + perm_addr->addr_bytes[3], + perm_addr->addr_bytes[4], + perm_addr->addr_bytes[5]); + } + + /* Copy the permanent MAC address */ + rte_ether_addr_copy(perm_addr, ð_dev->data->mac_addrs[0]); + + /* reset the hardware with the new settings */ + err = hw->mac.start_hw(hw); + if (err) { + PMD_INIT_LOG(ERR, "VF Initialization Failure: %d", err); + return -EIO; + } + txgbevf_intr_enable(eth_dev); + PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x mac.type=%s", + eth_dev->data->port_id, pci_dev->id.vendor_id, + pci_dev->id.device_id, "txgbe_mac_raptor_vf"); + return 0; } @@ -230,9 +294,110 @@ txgbevf_dev_close(struct rte_eth_dev *dev) txgbe_dev_free_queues(dev); + /** + * Remove the VF MAC address ro ensure + * that the VF traffic goes to the PF + * after stop, close and detach of the VF + **/ + txgbevf_remove_mac_addr(dev, 0); + /* Disable the interrupts for VF */ txgbevf_intr_disable(dev); + rte_free(dev->data->mac_addrs); + dev->data->mac_addrs = NULL; + + return 0; +} + +static int +txgbevf_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + __rte_unused uint32_t index, + __rte_unused uint32_t pool) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + int err; + + /* + * On a VF, adding again the same MAC addr is not an idempotent + * operation. Trap this case to avoid exhausting the [very limited] + * set of PF resources used to store VF MAC addresses. 
+ */ + if (memcmp(hw->mac.perm_addr, mac_addr, + sizeof(struct rte_ether_addr)) == 0) + return -1; + err = txgbevf_set_uc_addr_vf(hw, 2, mac_addr->addr_bytes); + if (err != 0) + PMD_DRV_LOG(ERR, "Unable to add MAC address " + "%02x:%02x:%02x:%02x:%02x:%02x - err=%d", + mac_addr->addr_bytes[0], + mac_addr->addr_bytes[1], + mac_addr->addr_bytes[2], + mac_addr->addr_bytes[3], + mac_addr->addr_bytes[4], + mac_addr->addr_bytes[5], + err); + return err; +} + +static void +txgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + struct rte_ether_addr *perm_addr = + (struct rte_ether_addr *)hw->mac.perm_addr; + struct rte_ether_addr *mac_addr; + uint32_t i; + int err; + + /* + * The TXGBE_VF_SET_MACVLAN command of the txgbe-pf driver does + * not support the deletion of a given MAC address. + * Instead, it imposes to delete all MAC addresses, then to add again + * all MAC addresses with the exception of the one to be deleted. + */ + (void)txgbevf_set_uc_addr_vf(hw, 0, NULL); + + /* + * Add again all MAC addresses, with the exception of the deleted one + * and of the permanent MAC address. + */ + for (i = 0, mac_addr = dev->data->mac_addrs; + i < hw->mac.num_rar_entries; i++, mac_addr++) { + /* Skip the deleted MAC address */ + if (i == index) + continue; + /* Skip NULL MAC addresses */ + if (rte_is_zero_ether_addr(mac_addr)) + continue; + /* Skip the permanent MAC address */ + if (memcmp(perm_addr, mac_addr, + sizeof(struct rte_ether_addr)) == 0) + continue; + err = txgbevf_set_uc_addr_vf(hw, 2, mac_addr->addr_bytes); + if (err != 0) + PMD_DRV_LOG(ERR, + "Adding again MAC address " + "%02x:%02x:%02x:%02x:%02x:%02x failed " + "err=%d", + mac_addr->addr_bytes[0], + mac_addr->addr_bytes[1], + mac_addr->addr_bytes[2], + mac_addr->addr_bytes[3], + mac_addr->addr_bytes[4], + mac_addr->addr_bytes[5], + err); + } +} + +static int +txgbevf_set_default_mac_addr(struct rte_eth_dev *dev, + struct rte_ether_addr *addr) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + + hw->mac.set_rar(hw, 0, (void *)addr, 0, 0); + return 0; } @@ -241,6 +406,9 @@ txgbevf_dev_close(struct rte_eth_dev *dev) * operation have been implemented */ static const struct eth_dev_ops txgbevf_eth_dev_ops = { + .mac_addr_add = txgbevf_add_mac_addr, + .mac_addr_remove = txgbevf_remove_mac_addr, + .mac_addr_set = txgbevf_set_default_mac_addr, }; RTE_PMD_REGISTER_PCI(net_txgbe_vf, rte_txgbevf_pmd); From patchwork Thu Feb 25 08:08:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88191 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6A000A034F; Thu, 25 Feb 2021 09:09:32 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B6B061607B4; Thu, 25 Feb 2021 09:08:44 +0100 (CET) Received: from smtpbgau1.qq.com (smtpbgau1.qq.com [54.206.16.166]) by mails.dpdk.org (Postfix) with ESMTP id 0D38D160789 for ; Thu, 25 Feb 2021 09:08:39 +0100 (CET) X-QQ-mid: bizesmtp20t1614240513trhpm5gr Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:32 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: yx8uyLBbwpx7sQDgee4rphjR2pOoefGkTqACJhHhXrLwAQ4fuJyX/GJ2uH3tV 
ytJl6LMXyRu6pcaEwFo8Lgi83PM8nRshMxLaVVgl722dI/tRd6x6bv33xn/U80pPgaV1Xzf RdyYDNIKNewY93gFFOnb2eEyEU3cPFCIEQr/uWTnbg0J/LdIRdvtxNEIMIbsB319TKJrWDQ +bCr4Np+t4ay2EJKG7nEki5lv1QXN9LPDX3CDo9pPhThUzVPr9MZt1aKKa3GguWVfGQpVdS caN7rdBC1EReIYzlHFLjrKyxXmVfaEFUzYrCjgw77NvwGEoNDZ6eB9TkdHJ8Uc5iaQlw7kW LQSledgvLEraNwp/g4= X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:48 +0800 Message-Id: <20210225080901.3645291-5-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign6 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 04/17] net/txgbe: get VF device information X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add information get operation for VF device. RX and TX offload capabilities are same as the PF device. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 7 +++ drivers/net/txgbe/txgbe_ethdev_vf.c | 70 +++++++++++++++++++++++++++ 2 files changed, 77 insertions(+) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index 97c881d96..266bc68c6 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -5,6 +5,13 @@ ; [Features] Unicast MAC filter = Y +CRC offload = P +VLAN offload = P +QinQ offload = P +L3 checksum offload = P +L4 checksum offload = P +Inner L3 checksum = P +Inner L4 checksum = P Multiprocess aware = Y Linux = Y ARMv8 = Y diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 86b1e2bfb..5dec29ab2 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -15,6 +15,8 @@ #include "txgbe_ethdev.h" #include "txgbe_rxtx.h" +static int txgbevf_dev_info_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info); static int txgbevf_dev_close(struct rte_eth_dev *dev); static void txgbevf_intr_disable(struct rte_eth_dev *dev); static void txgbevf_intr_enable(struct rte_eth_dev *dev); @@ -29,6 +31,20 @@ static const struct rte_pci_id pci_id_txgbevf_map[] = { { .vendor_id = 0, /* sentinel */ }, }; +static const struct rte_eth_desc_lim rx_desc_lim = { + .nb_max = TXGBE_RING_DESC_MAX, + .nb_min = TXGBE_RING_DESC_MIN, + .nb_align = TXGBE_RXD_ALIGN, +}; + +static const struct rte_eth_desc_lim tx_desc_lim = { + .nb_max = TXGBE_RING_DESC_MAX, + .nb_min = TXGBE_RING_DESC_MIN, + .nb_align = TXGBE_TXD_ALIGN, + .nb_seg_max = TXGBE_TX_MAX_SEG, + .nb_mtu_seg_max = TXGBE_TX_MAX_SEG, +}; + static const struct eth_dev_ops txgbevf_eth_dev_ops; /* @@ -246,6 +262,57 @@ static struct rte_pci_driver rte_txgbevf_pmd = { .remove = eth_txgbevf_pci_remove, }; +static int +txgbevf_dev_info_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + + dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues; + dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues; + dev_info->min_rx_bufsize = 1024; + dev_info->max_rx_pktlen = TXGBE_FRAME_SIZE_MAX; + dev_info->max_mac_addrs = hw->mac.num_rar_entries; + dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC; + dev_info->max_vfs = pci_dev->max_vfs; + 
dev_info->max_vmdq_pools = ETH_64_POOLS; + dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev); + dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) | + dev_info->rx_queue_offload_capa); + dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev); + dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev); + dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t); + dev_info->reta_size = ETH_RSS_RETA_SIZE_128; + dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = TXGBE_DEFAULT_RX_PTHRESH, + .hthresh = TXGBE_DEFAULT_RX_HTHRESH, + .wthresh = TXGBE_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = TXGBE_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = TXGBE_DEFAULT_TX_PTHRESH, + .hthresh = TXGBE_DEFAULT_TX_HTHRESH, + .wthresh = TXGBE_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = TXGBE_DEFAULT_TX_FREE_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = rx_desc_lim; + dev_info->tx_desc_lim = tx_desc_lim; + + return 0; +} + /* * Virtual Function operations */ @@ -406,8 +473,11 @@ txgbevf_set_default_mac_addr(struct rte_eth_dev *dev, * operation have been implemented */ static const struct eth_dev_ops txgbevf_eth_dev_ops = { + .dev_infos_get = txgbevf_dev_info_get, .mac_addr_add = txgbevf_add_mac_addr, .mac_addr_remove = txgbevf_remove_mac_addr, + .rxq_info_get = txgbe_rxq_info_get, + .txq_info_get = txgbe_txq_info_get, .mac_addr_set = txgbevf_set_default_mac_addr, }; From patchwork Thu Feb 25 08:08:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88189 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 38FADA034F; Thu, 25 Feb 2021 09:09:14 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 336D5160795; Thu, 25 Feb 2021 09:08:42 +0100 (CET) Received: from smtpbgau1.qq.com (smtpbgau1.qq.com [54.206.16.166]) by mails.dpdk.org (Postfix) with ESMTP id 071D7160771 for ; Thu, 25 Feb 2021 09:08:38 +0100 (CET) X-QQ-mid: bizesmtp20t1614240514telivsao Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:34 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: wIWKFFnMzxJVkoVFL+b97mXSP6gLmhEc4MtTmo9cPit7wl+p9B0QphenI+pHA Nsvnz8p/wxw61ZPKOSQOb1WnGF8YMmBhrQeOHKQxhaUnYCRROlNEmUAFi/QN6Az4hCtY1fU /dSUWQ+HUG4DPe4moVNgDRKCupI/lnAmeHHAGX5jr4aYN7tSXhKK4str6PQ8MtdBUat864h Ym4q2svhZ9RfT1BwyD0FIVyuA2O8x3MqtPHQ31PKChdxxg8OU+TSPjkGgKubY/F5Nsh3bzc 51tt5muvP5lEaOCDR2oKvCXBUAH0LzaKaFI8DWwKaslV20Hp/dpDDtD1lAcgJX3LMRWIoM0 a0TBEodKuSbGYKtjH/xmQWcn9IMzg== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:49 +0800 Message-Id: <20210225080901.3645291-6-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign6 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 05/17] net/txgbe: add interrupt operation for VF device 
X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add the VF device interrupt handler, support enabling and disabling Rx queue interrupts, and configure the MSI-X interrupt. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 1 + drivers/net/txgbe/base/txgbe_mbx.h | 2 + drivers/net/txgbe/txgbe_ethdev_vf.c | 181 ++++++++++++++++++++++++++ 3 files changed, 184 insertions(+)
diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index 266bc68c6..67aa9e424 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -5,6 +5,7 @@ ; [Features] Unicast MAC filter = Y +Rx interrupt = Y CRC offload = P VLAN offload = P QinQ offload = P
diff --git a/drivers/net/txgbe/base/txgbe_mbx.h b/drivers/net/txgbe/base/txgbe_mbx.h index ccf5d12f2..786a355f7 100644 --- a/drivers/net/txgbe/base/txgbe_mbx.h +++ b/drivers/net/txgbe/base/txgbe_mbx.h @@ -81,6 +81,8 @@ enum txgbevf_xcast_modes { /* word in permanent address message with the current multicast type */ #define TXGBE_VF_MC_TYPE_WORD 3 +#define TXGBE_PF_CONTROL_MSG 0x0100 /* PF control message */ + #define TXGBE_VF_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ #define TXGBE_VF_MBX_INIT_DELAY 500 /* microseconds between retries */
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 5dec29ab2..9c6463d81 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -20,7 +20,9 @@ static int txgbevf_dev_info_get(struct rte_eth_dev *dev, static int txgbevf_dev_close(struct rte_eth_dev *dev); static void txgbevf_intr_disable(struct rte_eth_dev *dev); static void txgbevf_intr_enable(struct rte_eth_dev *dev); +static void txgbevf_configure_msix(struct rte_eth_dev *dev); static void txgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index); +static void txgbevf_dev_interrupt_handler(void *param); /* * The set of PCI devices this driver supports (for VF) @@ -98,6 +100,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) int err; uint32_t tc, tcs; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev); struct rte_ether_addr *perm_addr = (struct rte_ether_addr *)hw->mac.perm_addr; @@ -217,6 +220,9 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) return -EIO; } + rte_intr_callback_register(intr_handle, + txgbevf_dev_interrupt_handler, eth_dev); + rte_intr_enable(intr_handle); txgbevf_intr_enable(eth_dev); PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x mac.type=%s", @@ -353,6 +359,8 @@ static int txgbevf_dev_close(struct rte_eth_dev *dev) { struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; PMD_INIT_FUNC_TRACE(); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; @@ -374,9 +382,118 @@ txgbevf_dev_close(struct rte_eth_dev *dev) rte_free(dev->data->mac_addrs); dev->data->mac_addrs = NULL; + rte_intr_disable(intr_handle); + rte_intr_callback_unregister(intr_handle, + txgbevf_dev_interrupt_handler, dev); + return 0; } +static int +txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct 
rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev); + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + uint32_t vec = TXGBE_MISC_VEC_ID; + + if (rte_intr_allow_others(intr_handle)) + vec = TXGBE_RX_VEC_START; + intr->mask_misc &= ~(1 << vec); + RTE_SET_USED(queue_id); + wr32(hw, TXGBE_VFIMC, ~intr->mask_misc); + + rte_intr_enable(intr_handle); + + return 0; +} +
+static int +txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev); + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + uint32_t vec = TXGBE_MISC_VEC_ID; + + if (rte_intr_allow_others(intr_handle)) + vec = TXGBE_RX_VEC_START; + intr->mask_misc |= (1 << vec); + RTE_SET_USED(queue_id); + wr32(hw, TXGBE_VFIMS, intr->mask_misc); + + return 0; +} +
+static void +txgbevf_set_ivar_map(struct txgbe_hw *hw, int8_t direction, + uint8_t queue, uint8_t msix_vector) +{ + uint32_t tmp, idx; + + if (direction == -1) { + /* other causes */ + msix_vector |= TXGBE_VFIVAR_VLD; + tmp = rd32(hw, TXGBE_VFIVARMISC); + tmp &= ~0xFF; + tmp |= msix_vector; + wr32(hw, TXGBE_VFIVARMISC, tmp); + } else { + /* rx or tx cause */ + /* Workaround for lost ICR */ + idx = ((16 * (queue & 1)) + (8 * direction)); + tmp = rd32(hw, TXGBE_VFIVAR(queue >> 1)); + tmp &= ~(0xFF << idx); + tmp |= (msix_vector << idx); + wr32(hw, TXGBE_VFIVAR(queue >> 1), tmp); + } +} +
+static void +txgbevf_configure_msix(struct rte_eth_dev *dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + uint32_t q_idx; + uint32_t vector_idx = TXGBE_MISC_VEC_ID; + uint32_t base = TXGBE_MISC_VEC_ID; + + /* Configure VF other cause ivar */ + txgbevf_set_ivar_map(hw, -1, 1, vector_idx); + + /* Won't configure the MSI-X register if no mapping is done + * between the intr vector and the event fd. + */ + if (!rte_intr_dp_is_en(intr_handle)) + return; + + if (rte_intr_allow_others(intr_handle)) { + base = TXGBE_RX_VEC_START; + vector_idx = TXGBE_RX_VEC_START; + } + + /* Configure all Rx queues of the VF */ + for (q_idx = 0; q_idx < dev->data->nb_rx_queues; q_idx++) { + /* Force all queues to use vector 0, + * as TXGBE_VF_MAXMSIVECOTR = 1 + */ + txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx); + intr_handle->intr_vec[q_idx] = vector_idx; + if (vector_idx < base + intr_handle->nb_efd - 1) + vector_idx++; + } + + /* As the Rx queue settings above show, all queues use vector 0. + * Set only the ITR value of TXGBE_MISC_VEC_ID. 
+ */ + wr32(hw, TXGBE_ITR(TXGBE_MISC_VEC_ID), + TXGBE_ITR_IVAL(TXGBE_QUEUE_ITR_INTERVAL_DEFAULT) + | TXGBE_ITR_WRDSA); +} + static int txgbevf_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, __rte_unused uint32_t index, @@ -468,12 +585,76 @@ txgbevf_set_default_mac_addr(struct rte_eth_dev *dev, return 0; } +static void txgbevf_mbx_process(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + u32 in_msg = 0; + + /* peek the message first */ + in_msg = rd32(hw, TXGBE_VFMBX); + + /* PF reset VF event */ + if (in_msg == TXGBE_PF_CONTROL_MSG) { + /* dummy mbx read to ack pf */ + if (txgbe_read_mbx(hw, &in_msg, 1, 0)) + return; + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, + NULL); + } +} + +static int +txgbevf_dev_interrupt_get_status(struct rte_eth_dev *dev) +{ + uint32_t eicr; + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev); + txgbevf_intr_disable(dev); + + /* read-on-clear nic registers here */ + eicr = rd32(hw, TXGBE_VFICR); + intr->flags = 0; + + /* only one misc vector supported - mailbox */ + eicr &= TXGBE_VFICR_MASK; + /* Workround for ICR lost */ + intr->flags |= TXGBE_FLAG_MAILBOX; + + return 0; +} + +static int +txgbevf_dev_interrupt_action(struct rte_eth_dev *dev) +{ + struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev); + + if (intr->flags & TXGBE_FLAG_MAILBOX) { + txgbevf_mbx_process(dev); + intr->flags &= ~TXGBE_FLAG_MAILBOX; + } + + txgbevf_intr_enable(dev); + + return 0; +} + +static void +txgbevf_dev_interrupt_handler(void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + + txgbevf_dev_interrupt_get_status(dev); + txgbevf_dev_interrupt_action(dev); +} + /* * dev_ops for virtual function, bare necessities for basic vf * operation have been implemented */ static const struct eth_dev_ops txgbevf_eth_dev_ops = { .dev_infos_get = txgbevf_dev_info_get, + .rx_queue_intr_enable = txgbevf_dev_rx_queue_intr_enable, + .rx_queue_intr_disable = txgbevf_dev_rx_queue_intr_disable, .mac_addr_add = txgbevf_add_mac_addr, .mac_addr_remove = txgbevf_remove_mac_addr, .rxq_info_get = txgbe_rxq_info_get, From patchwork Thu Feb 25 08:08:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88190 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AC500A034F; Thu, 25 Feb 2021 09:09:23 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7FAD71607A9; Thu, 25 Feb 2021 09:08:43 +0100 (CET) Received: from smtpproxy21.qq.com (smtpbg704.qq.com [203.205.195.105]) by mails.dpdk.org (Postfix) with ESMTP id 08B1B160787 for ; Thu, 25 Feb 2021 09:08:39 +0100 (CET) X-QQ-mid: bizesmtp20t1614240515t7naslqh Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:35 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: Me8y4DzRu2QJo3NzJQFWPmdqXdtXvIfiqe83obHfZhNtY+TKUGN5LIoxtVgax zQuMRgl8W1uxX/YT5bZizqOyyvqMNmEqWk3ejyFsc6/zKpfy4dRna7D7dUPOvwJq3J/xMBs kl8oEX2NL3H0J4K9ESYira8M0q1rELp66B79TZB33SzICsGwJ1sMdEuE06GQsLjhXy7GFYH lpGb704z9ai7K+0vBPcuAjndNecuKeisN6bDt7VMkVyJa4iRV0FxRHmqWkqSt/VpFYk7ZUE iQX0az/nwPG3k+J5ILAby8zwIZBMmeSszgzl2SBV9JSJXIk3AVJ3xXVHIn2scqN0vfTNvXm 
IIM4am10Zqikc97cWvEkzRaurtBmg== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:50 +0800 Message-Id: <20210225080901.3645291-7-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign7 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 06/17] net/txgbe: get link status of VF device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support to get link speed, duplex mode and state of VF device. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 1 + drivers/net/txgbe/base/txgbe_vf.c | 99 +++++++++++++++++++++++++++ drivers/net/txgbe/base/txgbe_vf.h | 2 + drivers/net/txgbe/txgbe_ethdev_vf.c | 9 +++ 4 files changed, 111 insertions(+) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index 67aa9e424..d81604502 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -4,6 +4,7 @@ ; Refer to default.ini for the full list of available PMD features. ; [Features] +Link status = Y Unicast MAC filter = Y Rx interrupt = Y CRC offload = P diff --git a/drivers/net/txgbe/base/txgbe_vf.c b/drivers/net/txgbe/base/txgbe_vf.c index fadecaa11..20ba0b8e2 100644 --- a/drivers/net/txgbe/base/txgbe_vf.c +++ b/drivers/net/txgbe/base/txgbe_vf.c @@ -27,6 +27,9 @@ s32 txgbe_init_ops_vf(struct txgbe_hw *hw) mac->stop_hw = txgbe_stop_hw_vf; mac->negotiate_api_version = txgbevf_negotiate_api_version; + /* Link */ + mac->check_link = txgbe_check_mac_link_vf; + /* RAR, Multicast, VLAN */ mac->set_rar = txgbe_set_rar_vf; mac->set_uc_addr = txgbevf_set_uc_addr_vf; @@ -297,6 +300,102 @@ s32 txgbevf_set_uc_addr_vf(struct txgbe_hw *hw, u32 index, u8 *addr) return ret_val; } +/** + * txgbe_check_mac_link_vf - Get link/speed status + * @hw: pointer to hardware structure + * @speed: pointer to link speed + * @link_up: true is link is up, false otherwise + * @autoneg_wait_to_complete: true when waiting for completion is needed + * + * Reads the links register to determine if link is up and the current speed + **/ +s32 txgbe_check_mac_link_vf(struct txgbe_hw *hw, u32 *speed, + bool *link_up, bool wait_to_complete) +{ + /** + * for a quick link status checking, wait_to_compelet == 0, + * skip PF link status checking + */ + bool no_pflink_check = wait_to_complete == 0; + struct txgbe_mbx_info *mbx = &hw->mbx; + struct txgbe_mac_info *mac = &hw->mac; + s32 ret_val = 0; + u32 links_reg; + u32 in_msg = 0; + + /* If we were hit with a reset drop the link */ + if (!mbx->check_for_rst(hw, 0) || !mbx->timeout) + mac->get_link_status = true; + + if (!mac->get_link_status) + goto out; + + /* if link status is down no point in checking to see if pf is up */ + links_reg = rd32(hw, TXGBE_VFSTATUS); + if (!(links_reg & TXGBE_VFSTATUS_UP)) + goto out; + + /* for SFP+ modules and DA cables it can take up to 500usecs + * before the link status is correct + */ + if (mac->type == txgbe_mac_raptor_vf && wait_to_complete) { + if (po32m(hw, TXGBE_VFSTATUS, TXGBE_VFSTATUS_UP, + 0, NULL, 5, 100)) + goto out; + } + + switch (links_reg & TXGBE_VFSTATUS_BW_MASK) { + case TXGBE_VFSTATUS_BW_10G: + *speed = 
TXGBE_LINK_SPEED_10GB_FULL; + break; + case TXGBE_VFSTATUS_BW_1G: + *speed = TXGBE_LINK_SPEED_1GB_FULL; + break; + case TXGBE_VFSTATUS_BW_100M: + *speed = TXGBE_LINK_SPEED_100M_FULL; + break; + default: + *speed = TXGBE_LINK_SPEED_UNKNOWN; + } + + if (no_pflink_check) { + if (*speed == TXGBE_LINK_SPEED_UNKNOWN) + mac->get_link_status = true; + else + mac->get_link_status = false; + + goto out; + } + + /* if the read failed it could just be a mailbox collision, best wait + * until we are called again and don't report an error + */ + if (mbx->read(hw, &in_msg, 1, 0)) + goto out; + + if (!(in_msg & TXGBE_VT_MSGTYPE_CTS)) { + /* msg is not CTS and is NACK we must have lost CTS status */ + if (in_msg & TXGBE_VT_MSGTYPE_NACK) + ret_val = -1; + goto out; + } + + /* the pf is talking, if we timed out in the past we reinit */ + if (!mbx->timeout) { + ret_val = -1; + goto out; + } + + /* if we passed all the tests above then the link is up and we no + * longer need to check for link + */ + mac->get_link_status = false; + +out: + *link_up = !mac->get_link_status; + return ret_val; +} + /** * txgbevf_negotiate_api_version - Negotiate supported API version * @hw: pointer to the HW structure diff --git a/drivers/net/txgbe/base/txgbe_vf.h b/drivers/net/txgbe/base/txgbe_vf.h index f8c6532f6..f40a8f084 100644 --- a/drivers/net/txgbe/base/txgbe_vf.h +++ b/drivers/net/txgbe/base/txgbe_vf.h @@ -15,6 +15,8 @@ s32 txgbe_start_hw_vf(struct txgbe_hw *hw); s32 txgbe_reset_hw_vf(struct txgbe_hw *hw); s32 txgbe_stop_hw_vf(struct txgbe_hw *hw); s32 txgbe_get_mac_addr_vf(struct txgbe_hw *hw, u8 *mac_addr); +s32 txgbe_check_mac_link_vf(struct txgbe_hw *hw, u32 *speed, + bool *link_up, bool autoneg_wait_to_complete); s32 txgbe_set_rar_vf(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, u32 enable_addr); s32 txgbevf_set_uc_addr_vf(struct txgbe_hw *hw, u32 index, u8 *addr); diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 9c6463d81..6a5180e21 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -17,6 +17,8 @@ static int txgbevf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info); +static int txgbevf_dev_link_update(struct rte_eth_dev *dev, + int wait_to_complete); static int txgbevf_dev_close(struct rte_eth_dev *dev); static void txgbevf_intr_disable(struct rte_eth_dev *dev); static void txgbevf_intr_enable(struct rte_eth_dev *dev); @@ -319,6 +321,12 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev, return 0; } +static int +txgbevf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete) +{ + return txgbe_dev_link_update_share(dev, wait_to_complete); +} + /* * Virtual Function operations */ @@ -652,6 +660,7 @@ txgbevf_dev_interrupt_handler(void *param) * operation have been implemented */ static const struct eth_dev_ops txgbevf_eth_dev_ops = { + .link_update = txgbevf_dev_link_update, .dev_infos_get = txgbevf_dev_info_get, .rx_queue_intr_enable = txgbevf_dev_rx_queue_intr_enable, .rx_queue_intr_disable = txgbevf_dev_rx_queue_intr_disable, From patchwork Thu Feb 25 08:08:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88192 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 876CAA034F; Thu, 25 Feb 2021 09:09:42 +0100 (CET) 
Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 70DFB1607C3; Thu, 25 Feb 2021 09:08:46 +0100 (CET) Received: from smtpproxy21.qq.com (smtpbg702.qq.com [203.205.195.102]) by mails.dpdk.org (Postfix) with ESMTP id D259A160779 for ; Thu, 25 Feb 2021 09:08:42 +0100 (CET) X-QQ-mid: bizesmtp20t1614240516t80zpnxa Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:36 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: Me8y4DzRu2R/246elbiy1PY/iC0keIhT3hF1Djc/3ACbdAMld+Ra0Yygq5T5L WmLLO1k9fK81LhJbjnSsUCSMXENyY5YvrOYSIXhyIMH34QWvMKJncovbdig0lviSITaSnwd gR10AYTsGBOZBSO9Lg8/GpNMPTHXKAFe+wPUbkQLD89h7pbJYuiag0wePiYc/EpC7Q9kQB4 03ilE0apiNZgLOxCITtFt5E8MQY23BS++yJQ/wvY4WZdEDAXCVYlvcPNLMqcjRr76EWHGDV xnEma7r9roOeVY/RdwQL8+8jR2NA9R4a6DleKvl75dSsXvxemRq4xMRDrx58lrMMR2z9btl 1eYRqZXoxroi2fLvmkPLwAbvVLfkQ== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:51 +0800 Message-Id: <20210225080901.3645291-8-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign7 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 07/17] net/txgbe: add Rx and Tx unit init for VF device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Configure VF device with RX port. Initialize receive and transmit unit, set the receive and transmit functions. And support to check the status of RX and TX descriptors. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 6 + doc/guides/nics/txgbe.rst | 4 + drivers/net/txgbe/base/txgbe_vf.c | 24 ++++ drivers/net/txgbe/base/txgbe_vf.h | 1 + drivers/net/txgbe/txgbe_ethdev.h | 4 + drivers/net/txgbe/txgbe_ethdev_vf.c | 46 +++++++ drivers/net/txgbe/txgbe_rxtx.c | 167 +++++++++++++++++++++++++- 7 files changed, 250 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index d81604502..8f36af0e3 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -7,6 +7,10 @@ Link status = Y Unicast MAC filter = Y Rx interrupt = Y +Jumbo frame = Y +Scattered Rx = Y +LRO = Y +TSO = Y CRC offload = P VLAN offload = P QinQ offload = P @@ -14,6 +18,8 @@ L3 checksum offload = P L4 checksum offload = P Inner L3 checksum = P Inner L4 checksum = P +Rx descriptor status = Y +Tx descriptor status = Y Multiprocess aware = Y Linux = Y ARMv8 = Y diff --git a/doc/guides/nics/txgbe.rst b/doc/guides/nics/txgbe.rst index 62aa22932..e520f13f3 100644 --- a/doc/guides/nics/txgbe.rst +++ b/doc/guides/nics/txgbe.rst @@ -63,6 +63,10 @@ Please note that enabling debugging options may affect system performance. Toggle display of transmit descriptor clean messages. +- ``RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC`` (undefined by default) + + Decide to enable or disable HW CRC in VF PMD. 
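+ For example, when the macro is defined at build time the VF driver always
+ keeps ``DEV_RX_OFFLOAD_KEEP_CRC`` set in its Rx offload configuration
+ (hardware CRC stripping disabled); when it is left undefined the offload is
+ always cleared and the CRC is stripped in hardware.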
+ Dynamic Logging Parameters ~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/drivers/net/txgbe/base/txgbe_vf.c b/drivers/net/txgbe/base/txgbe_vf.c index 20ba0b8e2..1cf09ad42 100644 --- a/drivers/net/txgbe/base/txgbe_vf.c +++ b/drivers/net/txgbe/base/txgbe_vf.c @@ -33,6 +33,7 @@ s32 txgbe_init_ops_vf(struct txgbe_hw *hw) /* RAR, Multicast, VLAN */ mac->set_rar = txgbe_set_rar_vf; mac->set_uc_addr = txgbevf_set_uc_addr_vf; + mac->set_rlpml = txgbevf_rlpml_set_vf; mac->max_tx_queues = 1; mac->max_rx_queues = 1; @@ -396,6 +397,29 @@ s32 txgbe_check_mac_link_vf(struct txgbe_hw *hw, u32 *speed, return ret_val; } +/** + * txgbevf_rlpml_set_vf - Set the maximum receive packet length + * @hw: pointer to the HW structure + * @max_size: value to assign to max frame size + **/ +s32 txgbevf_rlpml_set_vf(struct txgbe_hw *hw, u16 max_size) +{ + u32 msgbuf[2]; + s32 retval; + + msgbuf[0] = TXGBE_VF_SET_LPE; + msgbuf[1] = max_size; + + retval = txgbevf_write_msg_read_ack(hw, msgbuf, msgbuf, 2); + if (retval) + return retval; + if ((msgbuf[0] & TXGBE_VF_SET_LPE) && + (msgbuf[0] & TXGBE_VT_MSGTYPE_NACK)) + return TXGBE_ERR_MBX; + + return 0; +} + /** * txgbevf_negotiate_api_version - Negotiate supported API version * @hw: pointer to the HW structure diff --git a/drivers/net/txgbe/base/txgbe_vf.h b/drivers/net/txgbe/base/txgbe_vf.h index f40a8f084..7c84c6892 100644 --- a/drivers/net/txgbe/base/txgbe_vf.h +++ b/drivers/net/txgbe/base/txgbe_vf.h @@ -20,6 +20,7 @@ s32 txgbe_check_mac_link_vf(struct txgbe_hw *hw, u32 *speed, s32 txgbe_set_rar_vf(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, u32 enable_addr); s32 txgbevf_set_uc_addr_vf(struct txgbe_hw *hw, u32 index, u8 *addr); +s32 txgbevf_rlpml_set_vf(struct txgbe_hw *hw, u16 max_size); int txgbevf_negotiate_api_version(struct txgbe_hw *hw, int api); int txgbevf_get_queues(struct txgbe_hw *hw, unsigned int *num_tcs, unsigned int *default_tc); diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h index fe36acc25..52ce9c31e 100644 --- a/drivers/net/txgbe/txgbe_ethdev.h +++ b/drivers/net/txgbe/txgbe_ethdev.h @@ -475,6 +475,10 @@ void txgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, void txgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo); +int txgbevf_dev_rx_init(struct rte_eth_dev *dev); + +void txgbevf_dev_tx_init(struct rte_eth_dev *dev); + uint16_t txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 6a5180e21..559af5b16 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -17,6 +17,7 @@ static int txgbevf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info); +static int txgbevf_dev_configure(struct rte_eth_dev *dev); static int txgbevf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete); static int txgbevf_dev_close(struct rte_eth_dev *dev); @@ -110,6 +111,10 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) PMD_INIT_FUNC_TRACE(); eth_dev->dev_ops = &txgbevf_eth_dev_ops; + eth_dev->rx_descriptor_status = txgbe_dev_rx_descriptor_status; + eth_dev->tx_descriptor_status = txgbe_dev_tx_descriptor_status; + eth_dev->rx_pkt_burst = &txgbe_recv_pkts; + eth_dev->tx_pkt_burst = &txgbe_xmit_pkts; /* for secondary processes, we don't initialise any further as primary * has already done this work. 
Only check we don't need a different @@ -363,6 +368,43 @@ txgbevf_intr_enable(struct rte_eth_dev *dev) intr->mask_misc = 0; } +static int +txgbevf_dev_configure(struct rte_eth_dev *dev) +{ + struct rte_eth_conf *conf = &dev->data->dev_conf; + struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev); + + PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d", + dev->data->port_id); + + if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; + + /* + * VF has no ability to enable/disable HW CRC + * Keep the persistent behavior the same as Host PF + */ +#ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC + if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) { + PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip"); + conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC; + } +#else + if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) { + PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip"); + conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC; + } +#endif + + /* + * Initialize to TRUE. If any of Rx queues doesn't meet the bulk + * allocation or vector Rx preconditions we will reset it. + */ + adapter->rx_bulk_alloc_allowed = true; + + return 0; +} + static int txgbevf_dev_close(struct rte_eth_dev *dev) { @@ -384,6 +426,9 @@ txgbevf_dev_close(struct rte_eth_dev *dev) **/ txgbevf_remove_mac_addr(dev, 0); + dev->rx_pkt_burst = NULL; + dev->tx_pkt_burst = NULL; + /* Disable the interrupts for VF */ txgbevf_intr_disable(dev); @@ -660,6 +705,7 @@ txgbevf_dev_interrupt_handler(void *param) * operation have been implemented */ static const struct eth_dev_ops txgbevf_eth_dev_ops = { + .dev_configure = txgbevf_dev_configure, .link_update = txgbevf_dev_link_update, .dev_infos_get = txgbevf_dev_info_get, .rx_queue_intr_enable = txgbevf_dev_rx_queue_intr_enable, diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c index ac09e75a3..0c434ae5a 100644 --- a/drivers/net/txgbe/txgbe_rxtx.c +++ b/drivers/net/txgbe/txgbe_rxtx.c @@ -2837,8 +2837,10 @@ txgbe_rss_disable(struct rte_eth_dev *dev) struct txgbe_hw *hw; hw = TXGBE_DEV_HW(dev); - - wr32m(hw, TXGBE_RACTL, TXGBE_RACTL_RSSENA, 0); + if (hw->mac.type == txgbe_mac_raptor_vf) + wr32m(hw, TXGBE_VFPLCFG, TXGBE_VFPLCFG_RSSENA, 0); + else + wr32m(hw, TXGBE_RACTL, TXGBE_RACTL_RSSENA, 0); } int @@ -4722,6 +4724,167 @@ txgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, qinfo->conf.tx_deferred_start = txq->tx_deferred_start; } +/* + * [VF] Initializes Receive Unit. + */ +int __rte_cold +txgbevf_dev_rx_init(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw; + struct txgbe_rx_queue *rxq; + struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; + uint64_t bus_addr; + uint32_t srrctl, psrtype; + uint16_t buf_size; + uint16_t i; + int ret; + + PMD_INIT_FUNC_TRACE(); + hw = TXGBE_DEV_HW(dev); + + if (rte_is_power_of_2(dev->data->nb_rx_queues) == 0) { + PMD_INIT_LOG(ERR, "The number of Rx queue invalid, " + "it should be power of 2"); + return -1; + } + + if (dev->data->nb_rx_queues > hw->mac.max_rx_queues) { + PMD_INIT_LOG(ERR, "The number of Rx queue invalid, " + "it should be equal to or less than %d", + hw->mac.max_rx_queues); + return -1; + } + + /* + * When the VF driver issues a TXGBE_VF_RESET request, the PF driver + * disables the VF receipt of packets if the PF MTU is > 1500. + * This is done to deal with limitations that imposes + * the PF and all VFs to share the same MTU. 
+ * Then, the PF driver enables again the VF receipt of packet when + * the VF driver issues a TXGBE_VF_SET_LPE request. + * In the meantime, the VF device cannot be used, even if the VF driver + * and the Guest VM network stack are ready to accept packets with a + * size up to the PF MTU. + * As a work-around to this PF behaviour, force the call to + * txgbevf_rlpml_set_vf even if jumbo frames are not used. This way, + * VF packets received can work in all cases. + */ + if (txgbevf_rlpml_set_vf(hw, + (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) { + PMD_INIT_LOG(ERR, "Set max packet length to %d failed.", + dev->data->dev_conf.rxmode.max_rx_pkt_len); + return -EINVAL; + } + + /* + * Assume no header split and no VLAN strip support + * on any Rx queue first . + */ + rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP; + + /* Set PSR type for VF RSS according to max Rx queue */ + psrtype = TXGBE_VFPLCFG_PSRL4HDR | + TXGBE_VFPLCFG_PSRL4HDR | + TXGBE_VFPLCFG_PSRL2HDR | + TXGBE_VFPLCFG_PSRTUNHDR | + TXGBE_VFPLCFG_PSRTUNMAC; + wr32(hw, TXGBE_VFPLCFG, TXGBE_VFPLCFG_PSR(psrtype)); + + /* Setup RX queues */ + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + + /* Allocate buffers for descriptor rings */ + ret = txgbe_alloc_rx_queue_mbufs(rxq); + if (ret) + return ret; + + /* Setup the Base and Length of the Rx Descriptor Rings */ + bus_addr = rxq->rx_ring_phys_addr; + + wr32(hw, TXGBE_RXBAL(i), + (uint32_t)(bus_addr & BIT_MASK32)); + wr32(hw, TXGBE_RXBAH(i), + (uint32_t)(bus_addr >> 32)); + wr32(hw, TXGBE_RXRP(i), 0); + wr32(hw, TXGBE_RXWP(i), 0); + + /* Configure the RXCFG register */ + srrctl = TXGBE_RXCFG_RNGLEN(rxq->nb_rx_desc); + + /* Set if packets are dropped when no descriptors available */ + if (rxq->drop_en) + srrctl |= TXGBE_RXCFG_DROP; + + /* + * Configure the RX buffer size in the PKTLEN field of + * the RXCFG register of the queue. + * The value is in 1 KB resolution. Valid values can be from + * 1 KB to 16 KB. + */ + buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) - + RTE_PKTMBUF_HEADROOM); + buf_size = ROUND_UP(buf_size, 1 << 10); + srrctl |= TXGBE_RXCFG_PKTLEN(buf_size); + + /* + * VF modification to write virtual function RXCFG register + */ + wr32(hw, TXGBE_RXCFG(i), srrctl); + + if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER || + /* It adds dual VLAN length for supporting dual VLAN */ + (rxmode->max_rx_pkt_len + + 2 * TXGBE_VLAN_TAG_SIZE) > buf_size) { + if (!dev->data->scattered_rx) + PMD_INIT_LOG(DEBUG, "forcing scatter mode"); + dev->data->scattered_rx = 1; + } + + if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) + rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP; + } + + /* + * Device configured with multiple RX queues. + */ + txgbe_dev_mq_rx_configure(dev); + + txgbe_set_rx_function(dev); + + return 0; +} + +/* + * [VF] Initializes Transmit Unit. 
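+ * (for each Tx queue: program the descriptor ring base address and length,
+ * then zero the head and tail pointers)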
+ */ +void __rte_cold +txgbevf_dev_tx_init(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw; + struct txgbe_tx_queue *txq; + uint64_t bus_addr; + uint16_t i; + + PMD_INIT_FUNC_TRACE(); + hw = TXGBE_DEV_HW(dev); + + /* Setup the Base and Length of the Tx Descriptor Rings */ + for (i = 0; i < dev->data->nb_tx_queues; i++) { + txq = dev->data->tx_queues[i]; + bus_addr = txq->tx_ring_phys_addr; + wr32(hw, TXGBE_TXBAL(i), + (uint32_t)(bus_addr & BIT_MASK32)); + wr32(hw, TXGBE_TXBAH(i), + (uint32_t)(bus_addr >> 32)); + wr32m(hw, TXGBE_TXCFG(i), TXGBE_TXCFG_BUFLEN_MASK, + TXGBE_TXCFG_BUFLEN(txq->nb_tx_desc)); + /* Setup the HW Tx Head and TX Tail descriptor pointers */ + wr32(hw, TXGBE_TXRP(i), 0); + wr32(hw, TXGBE_TXWP(i), 0); + } +} + int txgbe_rss_conf_init(struct txgbe_rte_flow_rss_conf *out, const struct rte_flow_action_rss *in) From patchwork Thu Feb 25 08:08:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88194 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7AB6FA034F; Thu, 25 Feb 2021 09:09:57 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 014D11607D9; Thu, 25 Feb 2021 09:08:49 +0100 (CET) Received: from smtpbgeu1.qq.com (smtpbgeu1.qq.com [52.59.177.22]) by mails.dpdk.org (Postfix) with ESMTP id 347A11607AF for ; Thu, 25 Feb 2021 09:08:44 +0100 (CET) X-QQ-mid: bizesmtp20t1614240518ts74vd1c Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:38 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: YKCDl5A3/aq81NdITrtdKd+5aKj8QXoB88+YD5RFvu4k+C1N7PW18jpw57Dii zcg6SaNgLksKvKQdK3SL+tsqyAMRZCEN/m0l9r/rJjZcSTHTbB4e41o42+KZ26Ix0eH39hc oST9NKyi3AZnOjZzv2JNsNDFDqO2uKPkpRmZYDGnYNF0MF1NXQ6vJh+q04PqhX5s/Pu4wYp gFjbms62eg1Mg8XfD9w80/Ym12wtY0KplJ0HpZI9uIV+fHz8qECxX7eZIAWBY1Z2huxO2HI lz8c8MxLpGjpJzKg5FcN0FcZASN+tbtJdYdjtXUvvHFoP4B4mJpXvDwOkaYH8gb027L6phS pbk8zUGmk0RHQc23RolCUUmtPlXWg== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:52 +0800 Message-Id: <20210225080901.3645291-9-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign6 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 08/17] net/txgbe: add VF device stats and xstats get operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add VF device stats and extended stats get from reading hardware registers. 
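For reference, a minimal sketch of how these counters are consumed through the generic ethdev API (not taken from the patch; the port id and the array size of 16 are arbitrary assumptions):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static void dump_vf_counters(uint16_t port_id)
{
	struct rte_eth_stats stats;
	struct rte_eth_xstat xstats[16];
	struct rte_eth_xstat_name names[16];
	int n, i;

	/* basic stats are summed over the 8 queue pairs by the PMD */
	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("rx %" PRIu64 " pkts, tx %" PRIu64 " pkts\n",
		       stats.ipackets, stats.opackets);

	/* the per-queue rx_multicast_packets_* counters show up as xstats */
	n = rte_eth_xstats_get_names(port_id, names, 16);
	if (n <= 0 || n > 16)
		return;
	if (rte_eth_xstats_get(port_id, xstats, 16) != n)
		return;
	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n", names[i].name, xstats[i].value);
}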
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 2 + drivers/net/txgbe/base/txgbe_regs.h | 23 ++++ drivers/net/txgbe/base/txgbe_vf.h | 27 +++++ drivers/net/txgbe/txgbe_ethdev_vf.c | 158 ++++++++++++++++++++++++++ 4 files changed, 210 insertions(+) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index 8f36af0e3..675e1af83 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -20,6 +20,8 @@ Inner L3 checksum = P Inner L4 checksum = P Rx descriptor status = Y Tx descriptor status = Y +Basic stats = Y +Extended stats = Y Multiprocess aware = Y Linux = Y ARMv8 = Y diff --git a/drivers/net/txgbe/base/txgbe_regs.h b/drivers/net/txgbe/base/txgbe_regs.h index 052609e3c..2799e5588 100644 --- a/drivers/net/txgbe/base/txgbe_regs.h +++ b/drivers/net/txgbe/base/txgbe_regs.h @@ -1698,6 +1698,29 @@ enum txgbe_5tuple_protocol { #define TXGBE_REG_RSSTBL TXGBE_RSSTBL(0) #define TXGBE_REG_RSSKEY TXGBE_RSSKEY(0) +/* + * read non-rc counters + */ +#define TXGBE_UPDCNT32(reg, last, cur) \ +do { \ + uint32_t latest = rd32(hw, reg); \ + if (hw->offset_loaded || hw->rx_loaded) \ + last = 0; \ + cur += (latest - last) & UINT_MAX; \ + last = latest; \ +} while (0) + +#define TXGBE_UPDCNT36(regl, last, cur) \ +do { \ + uint64_t new_lsb = rd32(hw, regl); \ + uint64_t new_msb = rd32(hw, regl + 4); \ + uint64_t latest = ((new_msb << 32) | new_lsb); \ + if (hw->offset_loaded || hw->rx_loaded) \ + last = 0; \ + cur += (0x1000000000LL + latest - last) & 0xFFFFFFFFFLL; \ + last = latest; \ +} while (0) + /** * register operations **/ diff --git a/drivers/net/txgbe/base/txgbe_vf.h b/drivers/net/txgbe/base/txgbe_vf.h index 7c84c6892..70d81ca83 100644 --- a/drivers/net/txgbe/base/txgbe_vf.h +++ b/drivers/net/txgbe/base/txgbe_vf.h @@ -10,6 +10,33 @@ #define TXGBE_VF_MAX_TX_QUEUES 8 #define TXGBE_VF_MAX_RX_QUEUES 8 +struct txgbevf_hw_stats { + u64 base_vfgprc; + u64 base_vfgptc; + u64 base_vfgorc; + u64 base_vfgotc; + u64 base_vfmprc; + + struct{ + u64 last_vfgprc; + u64 last_vfgptc; + u64 last_vfgorc; + u64 last_vfgotc; + u64 last_vfmprc; + u64 vfgprc; + u64 vfgptc; + u64 vfgorc; + u64 vfgotc; + u64 vfmprc; + } qp[8]; + + u64 saved_reset_vfgprc; + u64 saved_reset_vfgptc; + u64 saved_reset_vfgorc; + u64 saved_reset_vfgotc; + u64 saved_reset_vfmprc; +}; + s32 txgbe_init_ops_vf(struct txgbe_hw *hw); s32 txgbe_start_hw_vf(struct txgbe_hw *hw); s32 txgbe_reset_hw_vf(struct txgbe_hw *hw); diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 559af5b16..8fec75efe 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -15,6 +15,8 @@ #include "txgbe_ethdev.h" #include "txgbe_rxtx.h" +static int txgbevf_dev_xstats_get(struct rte_eth_dev *dev, + struct rte_eth_xstat *xstats, unsigned int n); static int txgbevf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info); static int txgbevf_dev_configure(struct rte_eth_dev *dev); @@ -23,6 +25,7 @@ static int txgbevf_dev_link_update(struct rte_eth_dev *dev, static int txgbevf_dev_close(struct rte_eth_dev *dev); static void txgbevf_intr_disable(struct rte_eth_dev *dev); static void txgbevf_intr_enable(struct rte_eth_dev *dev); +static int txgbevf_dev_stats_reset(struct rte_eth_dev *dev); static void txgbevf_configure_msix(struct rte_eth_dev *dev); static void txgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index); static void txgbevf_dev_interrupt_handler(void *param); @@ -52,6 +55,28 @@ 
static const struct rte_eth_desc_lim tx_desc_lim = { static const struct eth_dev_ops txgbevf_eth_dev_ops; +static const struct rte_txgbe_xstats_name_off rte_txgbevf_stats_strings[] = { + {"rx_multicast_packets_0", + offsetof(struct txgbevf_hw_stats, qp[0].vfmprc)}, + {"rx_multicast_packets_1", + offsetof(struct txgbevf_hw_stats, qp[1].vfmprc)}, + {"rx_multicast_packets_2", + offsetof(struct txgbevf_hw_stats, qp[2].vfmprc)}, + {"rx_multicast_packets_3", + offsetof(struct txgbevf_hw_stats, qp[3].vfmprc)}, + {"rx_multicast_packets_4", + offsetof(struct txgbevf_hw_stats, qp[4].vfmprc)}, + {"rx_multicast_packets_5", + offsetof(struct txgbevf_hw_stats, qp[5].vfmprc)}, + {"rx_multicast_packets_6", + offsetof(struct txgbevf_hw_stats, qp[6].vfmprc)}, + {"rx_multicast_packets_7", + offsetof(struct txgbevf_hw_stats, qp[7].vfmprc)} +}; + +#define TXGBEVF_NB_XSTATS (sizeof(rte_txgbevf_stats_strings) / \ + sizeof(rte_txgbevf_stats_strings[0])) + /* * Negotiate mailbox API version with the PF. * After reset API version is always set to the basic one (txgbe_mbox_api_10). @@ -159,6 +184,9 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) /* init_mailbox_params */ hw->mbx.init_params(hw); + /* Reset the hw statistics */ + txgbevf_dev_stats_reset(eth_dev); + /* Disable the interrupts for VF */ txgbevf_intr_disable(eth_dev); @@ -275,6 +303,131 @@ static struct rte_pci_driver rte_txgbevf_pmd = { .remove = eth_txgbevf_pci_remove, }; +static int txgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev, + struct rte_eth_xstat_name *xstats_names, unsigned int limit) +{ + unsigned int i; + + if (limit < TXGBEVF_NB_XSTATS && xstats_names != NULL) + return -ENOMEM; + + if (xstats_names != NULL) + for (i = 0; i < TXGBEVF_NB_XSTATS; i++) + snprintf(xstats_names[i].name, + sizeof(xstats_names[i].name), + "%s", rte_txgbevf_stats_strings[i].name); + return TXGBEVF_NB_XSTATS; +} + +static void +txgbevf_update_stats(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + struct txgbevf_hw_stats *hw_stats = (struct txgbevf_hw_stats *) + TXGBE_DEV_STATS(dev); + unsigned int i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + /* Good Rx packet, include VF loopback */ + TXGBE_UPDCNT32(TXGBE_QPRXPKT(i), + hw_stats->qp[i].last_vfgprc, hw_stats->qp[i].vfgprc); + + /* Good Rx octets, include VF loopback */ + TXGBE_UPDCNT36(TXGBE_QPRXOCTL(i), + hw_stats->qp[i].last_vfgorc, hw_stats->qp[i].vfgorc); + + /* Rx Multicst Packet */ + TXGBE_UPDCNT32(TXGBE_QPRXMPKT(i), + hw_stats->qp[i].last_vfmprc, hw_stats->qp[i].vfmprc); + } + hw->rx_loaded = 0; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + /* Good Tx packet, include VF loopback */ + TXGBE_UPDCNT32(TXGBE_QPTXPKT(i), + hw_stats->qp[i].last_vfgptc, hw_stats->qp[i].vfgptc); + + /* Good Tx octets, include VF loopback */ + TXGBE_UPDCNT36(TXGBE_QPTXOCTL(i), + hw_stats->qp[i].last_vfgotc, hw_stats->qp[i].vfgotc); + } + hw->offset_loaded = 0; +} + +static int +txgbevf_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, + unsigned int n) +{ + struct txgbevf_hw_stats *hw_stats = (struct txgbevf_hw_stats *) + TXGBE_DEV_STATS(dev); + unsigned int i; + + if (n < TXGBEVF_NB_XSTATS) + return TXGBEVF_NB_XSTATS; + + txgbevf_update_stats(dev); + + if (!xstats) + return 0; + + /* Extended stats */ + for (i = 0; i < TXGBEVF_NB_XSTATS; i++) { + xstats[i].id = i; + xstats[i].value = *(uint64_t *)(((char *)hw_stats) + + rte_txgbevf_stats_strings[i].offset); + } + + return TXGBEVF_NB_XSTATS; +} + +static int +txgbevf_dev_stats_get(struct rte_eth_dev 
*dev, struct rte_eth_stats *stats) +{ + struct txgbevf_hw_stats *hw_stats = (struct txgbevf_hw_stats *) + TXGBE_DEV_STATS(dev); + uint32_t i; + + txgbevf_update_stats(dev); + + if (stats == NULL) + return -EINVAL; + + stats->ipackets = 0; + stats->ibytes = 0; + stats->opackets = 0; + stats->obytes = 0; + + for (i = 0; i < 8; i++) { + stats->ipackets += hw_stats->qp[i].vfgprc; + stats->ibytes += hw_stats->qp[i].vfgorc; + stats->opackets += hw_stats->qp[i].vfgptc; + stats->obytes += hw_stats->qp[i].vfgotc; + } + + return 0; +} + +static int +txgbevf_dev_stats_reset(struct rte_eth_dev *dev) +{ + struct txgbevf_hw_stats *hw_stats = (struct txgbevf_hw_stats *) + TXGBE_DEV_STATS(dev); + uint32_t i; + + /* Sync HW register to the last stats */ + txgbevf_dev_stats_get(dev, NULL); + + /* reset HW current stats*/ + for (i = 0; i < 8; i++) { + hw_stats->qp[i].vfgprc = 0; + hw_stats->qp[i].vfgorc = 0; + hw_stats->qp[i].vfgptc = 0; + hw_stats->qp[i].vfgotc = 0; + } + + return 0; +} + static int txgbevf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) @@ -707,6 +860,11 @@ txgbevf_dev_interrupt_handler(void *param) static const struct eth_dev_ops txgbevf_eth_dev_ops = { .dev_configure = txgbevf_dev_configure, .link_update = txgbevf_dev_link_update, + .stats_get = txgbevf_dev_stats_get, + .xstats_get = txgbevf_dev_xstats_get, + .stats_reset = txgbevf_dev_stats_reset, + .xstats_reset = txgbevf_dev_stats_reset, + .xstats_get_names = txgbevf_dev_xstats_get_names, .dev_infos_get = txgbevf_dev_info_get, .rx_queue_intr_enable = txgbevf_dev_rx_queue_intr_enable, .rx_queue_intr_disable = txgbevf_dev_rx_queue_intr_disable, From patchwork Thu Feb 25 08:08:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88193 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A8B17A034F; Thu, 25 Feb 2021 09:09:50 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B3D161607CF; Thu, 25 Feb 2021 09:08:47 +0100 (CET) Received: from smtpbgeu2.qq.com (smtpbgeu2.qq.com [18.194.254.142]) by mails.dpdk.org (Postfix) with ESMTP id 28C1F1607AE for ; Thu, 25 Feb 2021 09:08:44 +0100 (CET) X-QQ-mid: bizesmtp20t1614240519t90rurkf Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:39 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: dMLGSgzcGotdqJ+d24ydDvPdmWSAZNTuxo4VEE1bGH3cAIisjWHNguoOesR2q SvKWKJd5H4Rml1bjpaWH74p/ZCEppny7QxlnRPS2CARU4ILBxWhYe9PhIssi1wRFyY4eMNU ly4v2bRTU9MRphkTb/KRu6M0ta0UOtBEweqI31j2LAHg9pT95G9E7wJtOsiLLp6I6pudF5n S4dBsIfGG5VwNLJRsb6JtMSfgQpenSb09k/Bw8eXDoF7balU6BO8TOH/NmThP8KYEeMTz+f BJbFTt92U3jmDVtHJW0uPNlbvPDOXRCpJpSThOAfia+ut0J4uHA1igMRdAD/TSGr+UfxBbM r/Rh8Lb/FgpEgbxTgzahge1CsLtEA== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:53 +0800 Message-Id: <20210225080901.3645291-10-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign6 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 09/17] net/txgbe: add VLAN 
handle support to VF driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add VLAN filter, offload and strip set support to VF driver. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 1 + drivers/net/txgbe/base/txgbe_vf.c | 30 +++++++ drivers/net/txgbe/base/txgbe_vf.h | 2 + drivers/net/txgbe/txgbe_ethdev_vf.c | 117 ++++++++++++++++++++++++++ 4 files changed, 150 insertions(+) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index 675e1af83..0d5d24bcd 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -11,6 +11,7 @@ Jumbo frame = Y Scattered Rx = Y LRO = Y TSO = Y +VLAN filter = Y CRC offload = P VLAN offload = P QinQ offload = P diff --git a/drivers/net/txgbe/base/txgbe_vf.c b/drivers/net/txgbe/base/txgbe_vf.c index 1cf09ad42..9718912f8 100644 --- a/drivers/net/txgbe/base/txgbe_vf.c +++ b/drivers/net/txgbe/base/txgbe_vf.c @@ -33,6 +33,7 @@ s32 txgbe_init_ops_vf(struct txgbe_hw *hw) /* RAR, Multicast, VLAN */ mac->set_rar = txgbe_set_rar_vf; mac->set_uc_addr = txgbevf_set_uc_addr_vf; + mac->set_vfta = txgbe_set_vfta_vf; mac->set_rlpml = txgbevf_rlpml_set_vf; mac->max_tx_queues = 1; @@ -256,6 +257,35 @@ s32 txgbe_set_rar_vf(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, return ret_val; } +/** + * txgbe_set_vfta_vf - Set/Unset vlan filter table address + * @hw: pointer to the HW structure + * @vlan: 12 bit VLAN ID + * @vind: unused by VF drivers + * @vlan_on: if true then set bit, else clear bit + * @vlvf_bypass: boolean flag indicating updating default pool is okay + * + * Turn on/off specified VLAN in the VLAN filter table. 
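+ * The request is posted to the PF as a two-word mailbox message holding
+ * TXGBE_VF_SET_VLAN (with the on/off flag in the MSG INFO field) and the
+ * VLAN ID; the PF's ACK or NACK determines the return value.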
+ **/ +s32 txgbe_set_vfta_vf(struct txgbe_hw *hw, u32 vlan, u32 vind, + bool vlan_on, bool vlvf_bypass) +{ + u32 msgbuf[2]; + s32 ret_val; + UNREFERENCED_PARAMETER(vind, vlvf_bypass); + + msgbuf[0] = TXGBE_VF_SET_VLAN; + msgbuf[1] = vlan; + /* Setting the 8 bit field MSG INFO to TRUE indicates "add" */ + msgbuf[0] |= vlan_on << TXGBE_VT_MSGINFO_SHIFT; + + ret_val = txgbevf_write_msg_read_ack(hw, msgbuf, msgbuf, 2); + if (!ret_val && (msgbuf[0] & TXGBE_VT_MSGTYPE_ACK)) + return 0; + + return ret_val | (msgbuf[0] & TXGBE_VT_MSGTYPE_NACK); +} + /** * txgbe_get_mac_addr_vf - Read device MAC address * @hw: pointer to the HW structure diff --git a/drivers/net/txgbe/base/txgbe_vf.h b/drivers/net/txgbe/base/txgbe_vf.h index 70d81ca83..45db1b776 100644 --- a/drivers/net/txgbe/base/txgbe_vf.h +++ b/drivers/net/txgbe/base/txgbe_vf.h @@ -47,6 +47,8 @@ s32 txgbe_check_mac_link_vf(struct txgbe_hw *hw, u32 *speed, s32 txgbe_set_rar_vf(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, u32 enable_addr); s32 txgbevf_set_uc_addr_vf(struct txgbe_hw *hw, u32 index, u8 *addr); +s32 txgbe_set_vfta_vf(struct txgbe_hw *hw, u32 vlan, u32 vind, + bool vlan_on, bool vlvf_bypass); s32 txgbevf_rlpml_set_vf(struct txgbe_hw *hw, u16 max_size); int txgbevf_negotiate_api_version(struct txgbe_hw *hw, int api); int txgbevf_get_queues(struct txgbe_hw *hw, unsigned int *num_tcs, diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 8fec75efe..b1573100e 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -26,6 +26,8 @@ static int txgbevf_dev_close(struct rte_eth_dev *dev); static void txgbevf_intr_disable(struct rte_eth_dev *dev); static void txgbevf_intr_enable(struct rte_eth_dev *dev); static int txgbevf_dev_stats_reset(struct rte_eth_dev *dev); +static int txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask); +static void txgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on); static void txgbevf_configure_msix(struct rte_eth_dev *dev); static void txgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index); static void txgbevf_dev_interrupt_handler(void *param); @@ -130,6 +132,8 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev); + struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev); + struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev); struct rte_ether_addr *perm_addr = (struct rte_ether_addr *)hw->mac.perm_addr; @@ -173,6 +177,12 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id; hw->hw_addr = (void *)pci_dev->mem_resource[0].addr; + /* initialize the vfta */ + memset(shadow_vfta, 0, sizeof(*shadow_vfta)); + + /* initialize the hw strip bitmap*/ + memset(hwstrip, 0, sizeof(*hwstrip)); + /* Initialize the shared code (base driver) */ err = txgbe_init_shared_code(hw); if (err != 0) { @@ -595,6 +605,110 @@ txgbevf_dev_close(struct rte_eth_dev *dev) return 0; } +static void txgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(dev); + int i = 0, j = 0, vfta = 0, mask = 1; + + for (i = 0; i < TXGBE_VFTA_SIZE; i++) { + vfta = shadow_vfta->vfta[i]; + if (vfta) { + mask = 1; + for (j = 0; j < 32; j++) { + if (vfta & mask) + txgbe_set_vfta(hw, (i << 5) + j, 0, + on, false); + mask <<= 1; + } + } + } 
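+	/* every VLAN ID recorded in the shadow VFTA has now been replayed
+	 * to the PF, set or cleared according to 'on'
+	 */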
+} + +static int +txgbevf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(dev); + uint32_t vid_idx = 0; + uint32_t vid_bit = 0; + int ret = 0; + + PMD_INIT_FUNC_TRACE(); + + /* vind is not used in VF driver, set to 0, check txgbe_set_vfta_vf */ + ret = hw->mac.set_vfta(hw, vlan_id, 0, !!on, false); + if (ret) { + PMD_INIT_LOG(ERR, "Unable to set VF vlan"); + return ret; + } + vid_idx = (uint32_t)((vlan_id >> 5) & 0x7F); + vid_bit = (uint32_t)(1 << (vlan_id & 0x1F)); + + /* Save what we set and restore it after device reset */ + if (on) + shadow_vfta->vfta[vid_idx] |= vid_bit; + else + shadow_vfta->vfta[vid_idx] &= ~vid_bit; + + return 0; +} + +static void +txgbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + uint32_t ctrl; + + PMD_INIT_FUNC_TRACE(); + + if (queue >= hw->mac.max_rx_queues) + return; + + ctrl = rd32(hw, TXGBE_RXCFG(queue)); + txgbe_dev_save_rx_queue(hw, queue); + if (on) + ctrl |= TXGBE_RXCFG_VLAN; + else + ctrl &= ~TXGBE_RXCFG_VLAN; + wr32(hw, TXGBE_RXCFG(queue), 0); + msec_delay(100); + txgbe_dev_store_rx_queue(hw, queue); + wr32m(hw, TXGBE_RXCFG(queue), + TXGBE_RXCFG_VLAN | TXGBE_RXCFG_ENA, ctrl); + + txgbe_vlan_hw_strip_bitmap_set(dev, queue, on); +} + +static int +txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask) +{ + struct txgbe_rx_queue *rxq; + uint16_t i; + int on = 0; + + /* VF function only support hw strip feature, others are not support */ + if (mask & ETH_VLAN_STRIP_MASK) { + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP); + txgbevf_vlan_strip_queue_set(dev, i, on); + } + } + + return 0; +} + +static int +txgbevf_vlan_offload_set(struct rte_eth_dev *dev, int mask) +{ + txgbe_config_vlan_strip_on_all_queues(dev, mask); + + txgbevf_vlan_offload_config(dev, mask); + + return 0; +} + static int txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { @@ -866,6 +980,9 @@ static const struct eth_dev_ops txgbevf_eth_dev_ops = { .xstats_reset = txgbevf_dev_stats_reset, .xstats_get_names = txgbevf_dev_xstats_get_names, .dev_infos_get = txgbevf_dev_info_get, + .vlan_filter_set = txgbevf_vlan_filter_set, + .vlan_strip_queue_set = txgbevf_vlan_strip_queue_set, + .vlan_offload_set = txgbevf_vlan_offload_set, .rx_queue_intr_enable = txgbevf_dev_rx_queue_intr_enable, .rx_queue_intr_disable = txgbevf_dev_rx_queue_intr_disable, .mac_addr_add = txgbevf_add_mac_addr, From patchwork Thu Feb 25 08:08:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88195 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1A307A034F; Thu, 25 Feb 2021 09:10:06 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9D1551607E8; Thu, 25 Feb 2021 09:08:50 +0100 (CET) Received: from smtpbg506.qq.com (smtpbg506.qq.com [203.205.250.33]) by mails.dpdk.org (Postfix) with ESMTP id E2F211607D3 for ; Thu, 25 Feb 2021 09:08:47 +0100 (CET) X-QQ-mid: bizesmtp20t1614240521tycwf4im Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with 
id ; Thu, 25 Feb 2021 16:08:40 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: jVO9PhEDKm/aNnXk4p2fa9Zu6wETBq1wwpftSfDvF4NzxXt9vfN/f489i+Jg+ zSBhN3GanWHNd68PiK2cY1nlwgDNworToujz5KBVgdNrw7U693c5CRQcN8/ol3xPLMNUH/e xGKkmPS1PS56PwkN4JidrNxMTeSNyDZ4sq6nf5HZpItSTQbLmVxoOkibLAo9SJ3eoOVxjZA 9FVc8tFtAknGYhrGgG1uugFZaAyLNIcUFjObtHf1k5u5Y5DLEZktyKAS3OHOvoIOicqlrnh Ar0k/MI8CIFqkk4dfOjatEix1lgbdCEGyEuO6jK0iVkqPXBfXa65YR189UYSai83H+7B9eF CVZ586efbgKI2b6OW4JIXBYTwfChA== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:54 +0800 Message-Id: <20210225080901.3645291-11-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign7 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 10/17] net/txgbe: add RSS support for VF device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Support RSS hash and reta operations for VF device. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 3 + drivers/net/txgbe/base/txgbe_regs.h | 26 +++++ drivers/net/txgbe/txgbe_ethdev.c | 7 +- drivers/net/txgbe/txgbe_ethdev_vf.c | 4 + drivers/net/txgbe/txgbe_rxtx.c | 143 ++++++++++++++++++-------- 5 files changed, 135 insertions(+), 48 deletions(-) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index 0d5d24bcd..58174e9ef 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -11,6 +11,9 @@ Jumbo frame = Y Scattered Rx = Y LRO = Y TSO = Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y VLAN filter = Y CRC offload = P VLAN offload = P diff --git a/drivers/net/txgbe/base/txgbe_regs.h b/drivers/net/txgbe/base/txgbe_regs.h index 2799e5588..eb30c60a9 100644 --- a/drivers/net/txgbe/base/txgbe_regs.h +++ b/drivers/net/txgbe/base/txgbe_regs.h @@ -1698,6 +1698,27 @@ enum txgbe_5tuple_protocol { #define TXGBE_REG_RSSTBL TXGBE_RSSTBL(0) #define TXGBE_REG_RSSKEY TXGBE_RSSKEY(0) +static inline u32 +txgbe_map_reg(struct txgbe_hw *hw, u32 reg) +{ + switch (reg) { + case TXGBE_REG_RSSTBL: + if (hw->mac.type == txgbe_mac_raptor_vf) + reg = TXGBE_VFRSSTBL(0); + break; + case TXGBE_REG_RSSKEY: + if (hw->mac.type == txgbe_mac_raptor_vf) + reg = TXGBE_VFRSSKEY(0); + break; + default: + /* you should never reach here */ + reg = TXGBE_REG_DUMMY; + break; + } + + return reg; +} + /* * read non-rc counters */ @@ -1861,6 +1882,11 @@ po32m(struct txgbe_hw *hw, u32 reg, u32 mask, u32 expect, u32 *actual, #define wr32a(hw, reg, idx, val) \ wr32((hw), (reg) + ((idx) << 2), (val)) +#define rd32at(hw, reg, idx) \ + rd32a(hw, txgbe_map_reg(hw, reg), idx) +#define wr32at(hw, reg, idx, val) \ + wr32a(hw, txgbe_map_reg(hw, reg), idx, val) + #define rd32w(hw, reg, mask, slice) do { \ rd32((hw), reg); \ po32m((hw), reg, mask, mask, NULL, 5, slice); \ diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c index 23f9d1709..247bb042f 100644 --- a/drivers/net/txgbe/txgbe_ethdev.c +++ b/drivers/net/txgbe/txgbe_ethdev.c @@ -3257,7 +3257,7 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev, if (!mask) continue; - reta = rd32a(hw, TXGBE_REG_RSSTBL, i >> 2); + reta = rd32at(hw, TXGBE_REG_RSSTBL, i >> 
2); for (j = 0; j < 4; j++) { if (RS8(mask, j, 0x1)) { reta &= ~(MS32(8 * j, 0xFF)); @@ -3265,7 +3265,7 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev, 8 * j, 0xFF); } } - wr32a(hw, TXGBE_REG_RSSTBL, i >> 2, reta); + wr32at(hw, TXGBE_REG_RSSTBL, i >> 2, reta); } adapter->rss_reta_updated = 1; @@ -3298,7 +3298,7 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev, if (!mask) continue; - reta = rd32a(hw, TXGBE_REG_RSSTBL, i >> 2); + reta = rd32at(hw, TXGBE_REG_RSSTBL, i >> 2); for (j = 0; j < 4; j++) { if (RS8(mask, j, 0x1)) reta_conf[idx].reta[shift + j] = @@ -4524,6 +4524,7 @@ txgbe_rss_update_sp(enum txgbe_mac_type mac_type) { switch (mac_type) { case txgbe_mac_raptor: + case txgbe_mac_raptor_vf: return 1; default: return 0; diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index b1573100e..78d3299fb 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -990,6 +990,10 @@ static const struct eth_dev_ops txgbevf_eth_dev_ops = { .rxq_info_get = txgbe_rxq_info_get, .txq_info_get = txgbe_txq_info_get, .mac_addr_set = txgbevf_set_default_mac_addr, + .reta_update = txgbe_dev_rss_reta_update, + .reta_query = txgbe_dev_rss_reta_query, + .rss_hash_update = txgbe_dev_rss_hash_update, + .rss_hash_conf_get = txgbe_dev_rss_hash_conf_get, }; RTE_PMD_REGISTER_PCI(net_txgbe_vf, rte_txgbevf_pmd); diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c index 0c434ae5a..7117dbb6d 100644 --- a/drivers/net/txgbe/txgbe_rxtx.c +++ b/drivers/net/txgbe/txgbe_rxtx.c @@ -2868,36 +2868,68 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev, rss_key |= LS32(hash_key[(i * 4) + 1], 8, 0xFF); rss_key |= LS32(hash_key[(i * 4) + 2], 16, 0xFF); rss_key |= LS32(hash_key[(i * 4) + 3], 24, 0xFF); - wr32a(hw, TXGBE_REG_RSSKEY, i, rss_key); + wr32at(hw, TXGBE_REG_RSSKEY, i, rss_key); } } /* Set configured hashing protocols */ rss_hf = rss_conf->rss_hf & TXGBE_RSS_OFFLOAD_ALL; - mrqc = rd32(hw, TXGBE_RACTL); - mrqc &= ~TXGBE_RACTL_RSSMASK; - if (rss_hf & ETH_RSS_IPV4) - mrqc |= TXGBE_RACTL_RSSIPV4; - if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) - mrqc |= TXGBE_RACTL_RSSIPV4TCP; - if (rss_hf & ETH_RSS_IPV6 || - rss_hf & ETH_RSS_IPV6_EX) - mrqc |= TXGBE_RACTL_RSSIPV6; - if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP || - rss_hf & ETH_RSS_IPV6_TCP_EX) - mrqc |= TXGBE_RACTL_RSSIPV6TCP; - if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) - mrqc |= TXGBE_RACTL_RSSIPV4UDP; - if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP || - rss_hf & ETH_RSS_IPV6_UDP_EX) - mrqc |= TXGBE_RACTL_RSSIPV6UDP; - - if (rss_hf) - mrqc |= TXGBE_RACTL_RSSENA; - else - mrqc &= ~TXGBE_RACTL_RSSENA; + if (hw->mac.type == txgbe_mac_raptor_vf) { + mrqc = rd32(hw, TXGBE_VFPLCFG); + mrqc &= ~TXGBE_VFPLCFG_RSSMASK; + if (rss_hf & ETH_RSS_IPV4) + mrqc |= TXGBE_VFPLCFG_RSSIPV4; + if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) + mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP; + if (rss_hf & ETH_RSS_IPV6 || + rss_hf & ETH_RSS_IPV6_EX) + mrqc |= TXGBE_VFPLCFG_RSSIPV6; + if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP || + rss_hf & ETH_RSS_IPV6_TCP_EX) + mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP; + if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) + mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP; + if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP || + rss_hf & ETH_RSS_IPV6_UDP_EX) + mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP; + + if (rss_hf) + mrqc |= TXGBE_VFPLCFG_RSSENA; + else + mrqc &= ~TXGBE_VFPLCFG_RSSENA; - wr32(hw, TXGBE_RACTL, mrqc); + if (dev->data->nb_rx_queues > 3) + mrqc |= TXGBE_VFPLCFG_RSSHASH(2); + else if (dev->data->nb_rx_queues > 1) + mrqc |= TXGBE_VFPLCFG_RSSHASH(1); + + 
wr32(hw, TXGBE_VFPLCFG, mrqc); + } else { + mrqc = rd32(hw, TXGBE_RACTL); + mrqc &= ~TXGBE_RACTL_RSSMASK; + if (rss_hf & ETH_RSS_IPV4) + mrqc |= TXGBE_RACTL_RSSIPV4; + if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) + mrqc |= TXGBE_RACTL_RSSIPV4TCP; + if (rss_hf & ETH_RSS_IPV6 || + rss_hf & ETH_RSS_IPV6_EX) + mrqc |= TXGBE_RACTL_RSSIPV6; + if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP || + rss_hf & ETH_RSS_IPV6_TCP_EX) + mrqc |= TXGBE_RACTL_RSSIPV6TCP; + if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) + mrqc |= TXGBE_RACTL_RSSIPV4UDP; + if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP || + rss_hf & ETH_RSS_IPV6_UDP_EX) + mrqc |= TXGBE_RACTL_RSSIPV6UDP; + + if (rss_hf) + mrqc |= TXGBE_RACTL_RSSENA; + else + mrqc &= ~TXGBE_RACTL_RSSENA; + + wr32(hw, TXGBE_RACTL, mrqc); + } return 0; } @@ -2917,7 +2949,7 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev, if (hash_key) { /* Return RSS hash key */ for (i = 0; i < 10; i++) { - rss_key = rd32a(hw, TXGBE_REG_RSSKEY, i); + rss_key = rd32at(hw, TXGBE_REG_RSSKEY, i); hash_key[(i * 4) + 0] = RS32(rss_key, 0, 0xFF); hash_key[(i * 4) + 1] = RS32(rss_key, 8, 0xFF); hash_key[(i * 4) + 2] = RS32(rss_key, 16, 0xFF); @@ -2926,24 +2958,45 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev, } rss_hf = 0; - mrqc = rd32(hw, TXGBE_RACTL); - if (mrqc & TXGBE_RACTL_RSSIPV4) - rss_hf |= ETH_RSS_IPV4; - if (mrqc & TXGBE_RACTL_RSSIPV4TCP) - rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP; - if (mrqc & TXGBE_RACTL_RSSIPV6) - rss_hf |= ETH_RSS_IPV6 | - ETH_RSS_IPV6_EX; - if (mrqc & TXGBE_RACTL_RSSIPV6TCP) - rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP | - ETH_RSS_IPV6_TCP_EX; - if (mrqc & TXGBE_RACTL_RSSIPV4UDP) - rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP; - if (mrqc & TXGBE_RACTL_RSSIPV6UDP) - rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP | - ETH_RSS_IPV6_UDP_EX; - if (!(mrqc & TXGBE_RACTL_RSSENA)) - rss_hf = 0; + if (hw->mac.type == txgbe_mac_raptor_vf) { + mrqc = rd32(hw, TXGBE_VFPLCFG); + if (mrqc & TXGBE_VFPLCFG_RSSIPV4) + rss_hf |= ETH_RSS_IPV4; + if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP) + rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP; + if (mrqc & TXGBE_VFPLCFG_RSSIPV6) + rss_hf |= ETH_RSS_IPV6 | + ETH_RSS_IPV6_EX; + if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP) + rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP | + ETH_RSS_IPV6_TCP_EX; + if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP) + rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP; + if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP) + rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP | + ETH_RSS_IPV6_UDP_EX; + if (!(mrqc & TXGBE_VFPLCFG_RSSENA)) + rss_hf = 0; + } else { + mrqc = rd32(hw, TXGBE_RACTL); + if (mrqc & TXGBE_RACTL_RSSIPV4) + rss_hf |= ETH_RSS_IPV4; + if (mrqc & TXGBE_RACTL_RSSIPV4TCP) + rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP; + if (mrqc & TXGBE_RACTL_RSSIPV6) + rss_hf |= ETH_RSS_IPV6 | + ETH_RSS_IPV6_EX; + if (mrqc & TXGBE_RACTL_RSSIPV6TCP) + rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP | + ETH_RSS_IPV6_TCP_EX; + if (mrqc & TXGBE_RACTL_RSSIPV4UDP) + rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP; + if (mrqc & TXGBE_RACTL_RSSIPV6UDP) + rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP | + ETH_RSS_IPV6_UDP_EX; + if (!(mrqc & TXGBE_RACTL_RSSENA)) + rss_hf = 0; + } rss_hf &= TXGBE_RSS_OFFLOAD_ALL; @@ -2975,7 +3028,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev) j = 0; reta = (reta >> 8) | LS32(j, 24, 0xFF); if ((i & 3) == 3) - wr32a(hw, TXGBE_REG_RSSTBL, i >> 2, reta); + wr32at(hw, TXGBE_REG_RSSTBL, i >> 2, reta); } } /* @@ -4961,7 +5014,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev, j = 0; reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF); if ((i & 3) == 3) - wr32a(hw, TXGBE_REG_RSSTBL, i >> 2, reta); + wr32at(hw, TXGBE_REG_RSSTBL, i >> 2, reta); } /* Configure the RSS key and the 
RSS protocols used to compute From patchwork Thu Feb 25 08:08:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88200 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BFEF7A0547; Thu, 25 Feb 2021 09:10:43 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5189116080C; Thu, 25 Feb 2021 09:08:58 +0100 (CET) Received: from smtpbgsg2.qq.com (smtpbgsg2.qq.com [54.254.200.128]) by mails.dpdk.org (Postfix) with ESMTP id 987391607E6 for ; Thu, 25 Feb 2021 09:08:49 +0100 (CET) X-QQ-mid: bizesmtp20t1614240522t2rdvi47 Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:42 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: mJep2VbaKxY0MQrK6tqkwKiZOtNmt27nf0pkrciJLhetKQe//GFPjna3F1AVY zJNRPYQXBLzwqkXdOJ3r67dwwpYi9PrMuto1wPfRXCOrctqTQ4ISjI778G9pWVBfnqrYH1T 0OKHRMNM7g/z6/ajnQzxUbQk0eKiqpYAfivyv79DWk7t/JJCJXRM3fqTI6x3Qey6+8uxIsn X+o/FA2vJH7Hjj1dDe1VxKQ6xAJu/WmZX+3vhKJiwskuiZZhqkVdWI5b+bsZU7cSdaSDwDD dYnxfitHvTw+VsSuPRSmY69CYF0N6vu/mF3MMc1m9Q8mp5XqFN2qfEEKf1VKKvnJ3yQO5rZ XQ7j4qG4ood1ko0/+Kgwus8CG6Oag== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:55 +0800 Message-Id: <20210225080901.3645291-12-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign5 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 11/17] net/txgbe: add VF device promiscuous and allmulticast mode X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Support to enable and disable promiscuous and allmulticast mode on VF device. 
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 2 + drivers/net/txgbe/base/txgbe_vf.c | 38 +++++++++++ drivers/net/txgbe/base/txgbe_vf.h | 1 + drivers/net/txgbe/txgbe_ethdev_vf.c | 93 +++++++++++++++++++++++++++ 4 files changed, 134 insertions(+) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index 58174e9ef..aeb93f007 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -11,6 +11,8 @@ Jumbo frame = Y Scattered Rx = Y LRO = Y TSO = Y +Promiscuous mode = Y +Allmulticast mode = Y RSS hash = Y RSS key update = Y RSS reta update = Y diff --git a/drivers/net/txgbe/base/txgbe_vf.c b/drivers/net/txgbe/base/txgbe_vf.c index 9718912f8..9105de293 100644 --- a/drivers/net/txgbe/base/txgbe_vf.c +++ b/drivers/net/txgbe/base/txgbe_vf.c @@ -33,6 +33,7 @@ s32 txgbe_init_ops_vf(struct txgbe_hw *hw) /* RAR, Multicast, VLAN */ mac->set_rar = txgbe_set_rar_vf; mac->set_uc_addr = txgbevf_set_uc_addr_vf; + mac->update_xcast_mode = txgbevf_update_xcast_mode; mac->set_vfta = txgbe_set_vfta_vf; mac->set_rlpml = txgbevf_rlpml_set_vf; @@ -257,6 +258,43 @@ s32 txgbe_set_rar_vf(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, return ret_val; } +/** + * txgbevf_update_xcast_mode - Update Multicast mode + * @hw: pointer to the HW structure + * @xcast_mode: new multicast mode + * + * Updates the Multicast Mode of VF. + **/ +s32 txgbevf_update_xcast_mode(struct txgbe_hw *hw, int xcast_mode) +{ + u32 msgbuf[2]; + s32 err; + + switch (hw->api_version) { + case txgbe_mbox_api_12: + /* New modes were introduced in 1.3 version */ + if (xcast_mode > TXGBEVF_XCAST_MODE_ALLMULTI) + return TXGBE_ERR_FEATURE_NOT_SUPPORTED; + /* Fall through */ + case txgbe_mbox_api_13: + break; + default: + return TXGBE_ERR_FEATURE_NOT_SUPPORTED; + } + + msgbuf[0] = TXGBE_VF_UPDATE_XCAST_MODE; + msgbuf[1] = xcast_mode; + + err = txgbevf_write_msg_read_ack(hw, msgbuf, msgbuf, 2); + if (err) + return err; + + msgbuf[0] &= ~TXGBE_VT_MSGTYPE_CTS; + if (msgbuf[0] == (TXGBE_VF_UPDATE_XCAST_MODE | TXGBE_VT_MSGTYPE_NACK)) + return TXGBE_ERR_FEATURE_NOT_SUPPORTED; + return 0; +} + /** * txgbe_set_vfta_vf - Set/Unset vlan filter table address * @hw: pointer to the HW structure diff --git a/drivers/net/txgbe/base/txgbe_vf.h b/drivers/net/txgbe/base/txgbe_vf.h index 45db1b776..710df838f 100644 --- a/drivers/net/txgbe/base/txgbe_vf.h +++ b/drivers/net/txgbe/base/txgbe_vf.h @@ -47,6 +47,7 @@ s32 txgbe_check_mac_link_vf(struct txgbe_hw *hw, u32 *speed, s32 txgbe_set_rar_vf(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, u32 enable_addr); s32 txgbevf_set_uc_addr_vf(struct txgbe_hw *hw, u32 index, u8 *addr); +s32 txgbevf_update_xcast_mode(struct txgbe_hw *hw, int xcast_mode); s32 txgbe_set_vfta_vf(struct txgbe_hw *hw, u32 vlan, u32 vind, bool vlan_on, bool vlvf_bypass); s32 txgbevf_rlpml_set_vf(struct txgbe_hw *hw, u16 max_size); diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 78d3299fb..74cdcc6c9 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -29,6 +29,8 @@ static int txgbevf_dev_stats_reset(struct rte_eth_dev *dev); static int txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask); static void txgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on); static void txgbevf_configure_msix(struct rte_eth_dev *dev); +static int txgbevf_dev_promiscuous_enable(struct rte_eth_dev *dev); +static int txgbevf_dev_promiscuous_disable(struct rte_eth_dev *dev); 
static void txgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index); static void txgbevf_dev_interrupt_handler(void *param); @@ -265,6 +267,9 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) return -EIO; } + /* enter promiscuous mode */ + txgbevf_dev_promiscuous_enable(eth_dev); + rte_intr_callback_register(intr_handle, txgbevf_dev_interrupt_handler, eth_dev); rte_intr_enable(intr_handle); @@ -905,6 +910,90 @@ txgbevf_set_default_mac_addr(struct rte_eth_dev *dev, return 0; } +static int +txgbevf_dev_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + int ret; + + switch (hw->mac.update_xcast_mode(hw, TXGBEVF_XCAST_MODE_PROMISC)) { + case 0: + ret = 0; + break; + case TXGBE_ERR_FEATURE_NOT_SUPPORTED: + ret = -ENOTSUP; + break; + default: + ret = -EAGAIN; + break; + } + + return ret; +} + +static int +txgbevf_dev_promiscuous_disable(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + int ret; + + switch (hw->mac.update_xcast_mode(hw, TXGBEVF_XCAST_MODE_NONE)) { + case 0: + ret = 0; + break; + case TXGBE_ERR_FEATURE_NOT_SUPPORTED: + ret = -ENOTSUP; + break; + default: + ret = -EAGAIN; + break; + } + + return ret; +} + +static int +txgbevf_dev_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + int ret; + + switch (hw->mac.update_xcast_mode(hw, TXGBEVF_XCAST_MODE_ALLMULTI)) { + case 0: + ret = 0; + break; + case TXGBE_ERR_FEATURE_NOT_SUPPORTED: + ret = -ENOTSUP; + break; + default: + ret = -EAGAIN; + break; + } + + return ret; +} + +static int +txgbevf_dev_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + int ret; + + switch (hw->mac.update_xcast_mode(hw, TXGBEVF_XCAST_MODE_MULTI)) { + case 0: + ret = 0; + break; + case TXGBE_ERR_FEATURE_NOT_SUPPORTED: + ret = -ENOTSUP; + break; + default: + ret = -EAGAIN; + break; + } + + return ret; +} + static void txgbevf_mbx_process(struct rte_eth_dev *dev) { struct txgbe_hw *hw = TXGBE_DEV_HW(dev); @@ -979,6 +1068,10 @@ static const struct eth_dev_ops txgbevf_eth_dev_ops = { .stats_reset = txgbevf_dev_stats_reset, .xstats_reset = txgbevf_dev_stats_reset, .xstats_get_names = txgbevf_dev_xstats_get_names, + .promiscuous_enable = txgbevf_dev_promiscuous_enable, + .promiscuous_disable = txgbevf_dev_promiscuous_disable, + .allmulticast_enable = txgbevf_dev_allmulticast_enable, + .allmulticast_disable = txgbevf_dev_allmulticast_disable, .dev_infos_get = txgbevf_dev_info_get, .vlan_filter_set = txgbevf_vlan_filter_set, .vlan_strip_queue_set = txgbevf_vlan_strip_queue_set, From patchwork Thu Feb 25 08:08:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88199 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CF1E3A034F; Thu, 25 Feb 2021 09:10:35 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DFB6F160805; Thu, 25 Feb 2021 09:08:56 +0100 (CET) Received: from smtpbgbr2.qq.com (smtpbgbr2.qq.com [54.207.22.56]) by mails.dpdk.org (Postfix) with ESMTP id E5DF01607ED for ; Thu, 25 Feb 2021 09:08:50 +0100 (CET) X-QQ-mid: bizesmtp20t1614240523tofvalmh Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 
16:08:43 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: Yvg9Ua36cyySYZ2KF35wRsA5CwngJef6jeDjSCn8bV5W503M4oDtxwe+hwkbZ A8PM/9nOiiTVbMMStWom7LAzFXcaVlF4MzqJyUyvVnvtldvwPodgXSMrtvViO62CEKBLV4Y p4sQ7gltASEz7HYtfqGEycqNvDgQh4s6o1nd3S+5rKqUcw3/lNhUEr0GSMzHUY9TX+AiBCM 9eoZGdbu6DrHKpK0l2Y3R+OgkjdA5FtTuv2Q+gBJ1bfDwcwiduZZzABRut5BWokg3qXnUp8 BmGjP4/n3h3V7czQ3Kpf5pCvg9kb6mZPVDbk37npI+WwZbUKrLGuw7YGrV6XPtkQKAO0c8L /GuQDuTV7ehh8GU1rJczglu4AKXVw== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:56 +0800 Message-Id: <20210225080901.3645291-13-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign5 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 12/17] net/txgbe: support multicast MAC filter for VF driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add multicast MAC filter support for VF driver. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 1 + drivers/net/txgbe/base/txgbe_hw.c | 1 + drivers/net/txgbe/base/txgbe_vf.c | 83 +++++++++++++++++++++++++++ drivers/net/txgbe/base/txgbe_vf.h | 3 + drivers/net/txgbe/txgbe_ethdev.c | 2 +- drivers/net/txgbe/txgbe_ethdev_vf.c | 1 + 6 files changed, 90 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index aeb93f007..ba1812e17 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -6,6 +6,7 @@ [Features] Link status = Y Unicast MAC filter = Y +Multicast MAC filter = Y Rx interrupt = Y Jumbo frame = Y Scattered Rx = Y diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c index c357c8658..3cee8b857 100644 --- a/drivers/net/txgbe/base/txgbe_hw.c +++ b/drivers/net/txgbe/base/txgbe_hw.c @@ -2808,6 +2808,7 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw) mac->acquire_swfw_sync = txgbe_acquire_swfw_sync; mac->release_swfw_sync = txgbe_release_swfw_sync; mac->reset_hw = txgbe_reset_hw; + mac->update_mc_addr_list = txgbe_update_mc_addr_list; mac->disable_sec_rx_path = txgbe_disable_sec_rx_path; mac->enable_sec_rx_path = txgbe_enable_sec_rx_path; diff --git a/drivers/net/txgbe/base/txgbe_vf.c b/drivers/net/txgbe/base/txgbe_vf.c index 9105de293..416c8964f 100644 --- a/drivers/net/txgbe/base/txgbe_vf.c +++ b/drivers/net/txgbe/base/txgbe_vf.c @@ -26,6 +26,7 @@ s32 txgbe_init_ops_vf(struct txgbe_hw *hw) mac->get_mac_addr = txgbe_get_mac_addr_vf; mac->stop_hw = txgbe_stop_hw_vf; mac->negotiate_api_version = txgbevf_negotiate_api_version; + mac->update_mc_addr_list = txgbe_update_mc_addr_list_vf; /* Link */ mac->check_link = txgbe_check_mac_link_vf; @@ -213,6 +214,39 @@ s32 txgbe_stop_hw_vf(struct txgbe_hw *hw) return 0; } +/** + * txgbe_mta_vector - Determines bit-vector in multicast table to set + * @hw: pointer to hardware structure + * @mc_addr: the multicast address + **/ +STATIC s32 txgbe_mta_vector(struct txgbe_hw *hw, u8 *mc_addr) +{ + u32 vector = 0; + + switch (hw->mac.mc_filter_type) { + case 0: /* use bits [47:36] of the address */ + vector = ((mc_addr[4] >> 4) | (((u16)mc_addr[5]) << 4)); + break; + case 1: /* use bits [46:35] of the address */ + 
vector = ((mc_addr[4] >> 3) | (((u16)mc_addr[5]) << 5)); + break; + case 2: /* use bits [45:34] of the address */ + vector = ((mc_addr[4] >> 2) | (((u16)mc_addr[5]) << 6)); + break; + case 3: /* use bits [43:32] of the address */ + vector = ((mc_addr[4]) | (((u16)mc_addr[5]) << 8)); + break; + default: /* Invalid mc_filter_type */ + DEBUGOUT("MC filter type param set incorrectly\n"); + ASSERT(0); + break; + } + + /* vector can only be 12-bits or boundary will be exceeded */ + vector &= 0xFFF; + return vector; +} + STATIC s32 txgbevf_write_msg_read_ack(struct txgbe_hw *hw, u32 *msg, u32 *retmsg, u16 size) { @@ -258,6 +292,55 @@ s32 txgbe_set_rar_vf(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, return ret_val; } +/** + * txgbe_update_mc_addr_list_vf - Update Multicast addresses + * @hw: pointer to the HW structure + * @mc_addr_list: array of multicast addresses to program + * @mc_addr_count: number of multicast addresses to program + * @next: caller supplied function to return next address in list + * @clear: unused + * + * Updates the Multicast Table Array. + **/ +s32 txgbe_update_mc_addr_list_vf(struct txgbe_hw *hw, u8 *mc_addr_list, + u32 mc_addr_count, txgbe_mc_addr_itr next, + bool clear) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + u32 msgbuf[TXGBE_P2VMBX_SIZE]; + u16 *vector_list = (u16 *)&msgbuf[1]; + u32 vector; + u32 cnt, i; + u32 vmdq; + + UNREFERENCED_PARAMETER(clear); + + DEBUGFUNC("txgbe_update_mc_addr_list_vf"); + + /* Each entry in the list uses 1 16 bit word. We have 30 + * 16 bit words available in our HW msg buffer (minus 1 for the + * msg type). That's 30 hash values if we pack 'em right. If + * there are more than 30 MC addresses to add then punt the + * extras for now and then add code to handle more than 30 later. + * It would be unusual for a server to request that many multi-cast + * addresses except for in large enterprise network environments. + */ + + DEBUGOUT("MC Addr Count = %d\n", mc_addr_count); + + cnt = (mc_addr_count > 30) ? 
30 : mc_addr_count; + msgbuf[0] = TXGBE_VF_SET_MULTICAST; + msgbuf[0] |= cnt << TXGBE_VT_MSGINFO_SHIFT; + + for (i = 0; i < cnt; i++) { + vector = txgbe_mta_vector(hw, next(hw, &mc_addr_list, &vmdq)); + DEBUGOUT("Hash value = 0x%03X\n", vector); + vector_list[i] = (u16)vector; + } + + return mbx->write_posted(hw, msgbuf, TXGBE_P2VMBX_SIZE, 0); +} + /** * txgbevf_update_xcast_mode - Update Multicast mode * @hw: pointer to the HW structure diff --git a/drivers/net/txgbe/base/txgbe_vf.h b/drivers/net/txgbe/base/txgbe_vf.h index 710df838f..39714d102 100644 --- a/drivers/net/txgbe/base/txgbe_vf.h +++ b/drivers/net/txgbe/base/txgbe_vf.h @@ -47,6 +47,9 @@ s32 txgbe_check_mac_link_vf(struct txgbe_hw *hw, u32 *speed, s32 txgbe_set_rar_vf(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, u32 enable_addr); s32 txgbevf_set_uc_addr_vf(struct txgbe_hw *hw, u32 index, u8 *addr); +s32 txgbe_update_mc_addr_list_vf(struct txgbe_hw *hw, u8 *mc_addr_list, + u32 mc_addr_count, txgbe_mc_addr_itr next, + bool clear); s32 txgbevf_update_xcast_mode(struct txgbe_hw *hw, int xcast_mode); s32 txgbe_set_vfta_vf(struct txgbe_hw *hw, u32 vlan, u32 vind, bool vlan_on, bool vlvf_bypass); diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c index 247bb042f..90137d0ce 100644 --- a/drivers/net/txgbe/txgbe_ethdev.c +++ b/drivers/net/txgbe/txgbe_ethdev.c @@ -4121,7 +4121,7 @@ txgbe_dev_set_mc_addr_list(struct rte_eth_dev *dev, hw = TXGBE_DEV_HW(dev); mc_addr_list = (u8 *)mc_addr_set; - return txgbe_update_mc_addr_list(hw, mc_addr_list, nb_mc_addr, + return hw->mac.update_mc_addr_list(hw, mc_addr_list, nb_mc_addr, txgbe_dev_addr_list_itr, TRUE); } diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 74cdcc6c9..b9d55debf 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -1080,6 +1080,7 @@ static const struct eth_dev_ops txgbevf_eth_dev_ops = { .rx_queue_intr_disable = txgbevf_dev_rx_queue_intr_disable, .mac_addr_add = txgbevf_add_mac_addr, .mac_addr_remove = txgbevf_remove_mac_addr, + .set_mc_addr_list = txgbe_dev_set_mc_addr_list, .rxq_info_get = txgbe_rxq_info_get, .txq_info_get = txgbe_txq_info_get, .mac_addr_set = txgbevf_set_default_mac_addr, From patchwork Thu Feb 25 08:08:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88196 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BA285A034F; Thu, 25 Feb 2021 09:10:13 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D0BA81607EF; Thu, 25 Feb 2021 09:08:51 +0100 (CET) Received: from smtpbgau1.qq.com (smtpbgau1.qq.com [54.206.16.166]) by mails.dpdk.org (Postfix) with ESMTP id 798D21607DA for ; Thu, 25 Feb 2021 09:08:49 +0100 (CET) X-QQ-mid: bizesmtp20t1614240524t5g941p2 Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:44 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: dMLGSgzcGovH+0THdtfTVizyk9chwQPPGXsZ552zEUpfsNR+hg5KJSssxbaHw c0TDJ9lhAL5sPh/ycz3bEXJP60TBQiIgPCDnupq0WHBgPrJDJkni2L5nmjaLpI/mhpE03C2 zHagkVw4mnNeE0x8gexrQAYbCgHy5KuRCQ6xRhMnAMHNqVbihkSXgDM4I/h6aMY0Kn62XlL 
j3s64/hVPCpASMIiPPWZujNO7Y7fmX3jO+y8MlQ/TBrgMYSvYIpAEDAileq0UV5IkcM0sy4 udL5myxSM3RdP9/KhnS3BdJWyaW3JsGoybPTiYkoR6NkbfQFxpqf/1tV6lVuwMx4DUaYa9W IIDZZBFHBboMvDC9Sm8/ekbMwPLLw== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:57 +0800 Message-Id: <20210225080901.3645291-14-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign5 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 13/17] net/txgbe: support to update MTU on VF device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add MTU set operation for VF device. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 1 + drivers/net/txgbe/txgbe_ethdev_vf.c | 36 +++++++++++++++++++++++++++ 2 files changed, 37 insertions(+) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index ba1812e17..69304baa9 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -8,6 +8,7 @@ Link status = Y Unicast MAC filter = Y Multicast MAC filter = Y Rx interrupt = Y +MTU update = Y Jumbo frame = Y Scattered Rx = Y LRO = Y diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index b9d55debf..aa1b46766 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -910,6 +910,41 @@ txgbevf_set_default_mac_addr(struct rte_eth_dev *dev, return 0; } +static int +txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) +{ + struct txgbe_hw *hw; + uint32_t max_frame = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; + struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode; + + hw = TXGBE_DEV_HW(dev); + + if (mtu < RTE_ETHER_MIN_MTU || + max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN) + return -EINVAL; + + /* refuse mtu that requires the support of scattered packets when this + * feature has not been enabled before. + */ + if (!(rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) && + (max_frame + 2 * TXGBE_VLAN_TAG_SIZE > + dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) + return -EINVAL; + + /* + * When supported by the underlying PF driver, use the TXGBE_VF_SET_MTU + * request of the version 2.0 of the mailbox API. + * For now, use the TXGBE_VF_SET_LPE request of the version 1.0 + * of the mailbox API. 
+ */ + if (txgbevf_rlpml_set_vf(hw, max_frame)) + return -EINVAL; + + /* update max frame size */ + dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame; + return 0; +} + static int txgbevf_dev_promiscuous_enable(struct rte_eth_dev *dev) { @@ -1073,6 +1108,7 @@ static const struct eth_dev_ops txgbevf_eth_dev_ops = { .allmulticast_enable = txgbevf_dev_allmulticast_enable, .allmulticast_disable = txgbevf_dev_allmulticast_disable, .dev_infos_get = txgbevf_dev_info_get, + .mtu_set = txgbevf_dev_set_mtu, .vlan_filter_set = txgbevf_vlan_filter_set, .vlan_strip_queue_set = txgbevf_vlan_strip_queue_set, .vlan_offload_set = txgbevf_vlan_offload_set, From patchwork Thu Feb 25 08:08:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88197 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 378ABA034F; Thu, 25 Feb 2021 09:10:21 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 318731607E1; Thu, 25 Feb 2021 09:08:54 +0100 (CET) Received: from smtpbgeu1.qq.com (smtpbgeu1.qq.com [52.59.177.22]) by mails.dpdk.org (Postfix) with ESMTP id 3C44F1607EF for ; Thu, 25 Feb 2021 09:08:51 +0100 (CET) X-QQ-mid: bizesmtp20t1614240526tsqdjhb0 Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:45 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: q7a7ZL9Nkc0N5Kyttqeqw/6rlZDxFtUr9AzkykU05UDjOPPLscIs8pjGiXg47 AF1MGgKtKfqYruGJ2qgXo/xE++IPxX5HIgx9wIdL+GMHQB6gX7fxlm7OlON+krspdYhL2SE 8AreZnW1vBuTDk7cdsdbnUvXzGQdmBdNR7qmwHjAKIrEoyF/mv89TxhwRqULQNt1yCrz5NM EHUQPd6BhXMLIadVWMdeOZTBuWP5AW3eDaFTvsGoMkNhyNSRyh6Q4yqNuWHkvTbk4L4SjpX Giyqgla4sDjQqxZfSDYPVCdeVjSGvnH1SrejvFqmeVRnYw7IPw2u1SFcSN7UUQfBofvFAYS kdmpN27tbgy6xDEr1GV3jwHFpEkEw== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:58 +0800 Message-Id: <20210225080901.3645291-15-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign6 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 14/17] net/txgbe: support register dump on VF device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support to dump registers for VF. 
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 1 + drivers/net/txgbe/txgbe_ethdev_vf.c | 74 +++++++++++++++++++++++++++ 2 files changed, 75 insertions(+) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index 69304baa9..7cc0ad92b 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -30,6 +30,7 @@ Rx descriptor status = Y Tx descriptor status = Y Basic stats = Y Extended stats = Y +Registers dump = Y Multiprocess aware = Y Linux = Y ARMv8 = Y diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index aa1b46766..bc373f052 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -14,6 +14,36 @@ #include "base/txgbe.h" #include "txgbe_ethdev.h" #include "txgbe_rxtx.h" +#include "txgbe_regs_group.h" + +static const struct reg_info txgbevf_regs_general[] = { + {TXGBE_VFRST, 1, 1, "TXGBE_VFRST"}, + {TXGBE_VFSTATUS, 1, 1, "TXGBE_VFSTATUS"}, + {TXGBE_VFMBCTL, 1, 1, "TXGBE_VFMAILBOX"}, + {TXGBE_VFMBX, 16, 4, "TXGBE_VFMBX"}, + {TXGBE_VFPBWRAP, 1, 1, "TXGBE_VFPBWRAP"}, + {0, 0, 0, ""} +}; + +static const struct reg_info txgbevf_regs_interrupt[] = { + {0, 0, 0, ""} +}; + +static const struct reg_info txgbevf_regs_rxdma[] = { + {0, 0, 0, ""} +}; + +static const struct reg_info txgbevf_regs_tx[] = { + {0, 0, 0, ""} +}; + +/* VF registers */ +static const struct reg_info *txgbevf_regs[] = { + txgbevf_regs_general, + txgbevf_regs_interrupt, + txgbevf_regs_rxdma, + txgbevf_regs_tx, + NULL}; static int txgbevf_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n); @@ -945,6 +975,49 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) return 0; } +static int +txgbevf_get_reg_length(struct rte_eth_dev *dev __rte_unused) +{ + int count = 0; + int g_ind = 0; + const struct reg_info *reg_group; + + while ((reg_group = txgbevf_regs[g_ind++])) + count += txgbe_regs_group_count(reg_group); + + return count; +} + +static int +txgbevf_get_regs(struct rte_eth_dev *dev, + struct rte_dev_reg_info *regs) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + uint32_t *data = regs->data; + int g_ind = 0; + int count = 0; + const struct reg_info *reg_group; + + if (data == NULL) { + regs->length = txgbevf_get_reg_length(dev); + regs->width = sizeof(uint32_t); + return 0; + } + + /* Support only full register dump */ + if (regs->length == 0 || + regs->length == (uint32_t)txgbevf_get_reg_length(dev)) { + regs->version = hw->mac.type << 24 | hw->revision_id << 16 | + hw->device_id; + while ((reg_group = txgbevf_regs[g_ind++])) + count += txgbe_read_regs_group(dev, &data[count], + reg_group); + return 0; + } + + return -ENOTSUP; +} + static int txgbevf_dev_promiscuous_enable(struct rte_eth_dev *dev) { @@ -1120,6 +1193,7 @@ static const struct eth_dev_ops txgbevf_eth_dev_ops = { .rxq_info_get = txgbe_rxq_info_get, .txq_info_get = txgbe_txq_info_get, .mac_addr_set = txgbevf_set_default_mac_addr, + .get_reg = txgbevf_get_regs, .reta_update = txgbe_dev_rss_reta_update, .reta_query = txgbe_dev_rss_reta_query, .rss_hash_update = txgbe_dev_rss_hash_update, From patchwork Thu Feb 25 08:08:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88198 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org 
[217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7C45EA034F; Thu, 25 Feb 2021 09:10:28 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9F0761607FD; Thu, 25 Feb 2021 09:08:55 +0100 (CET) Received: from smtpbgeu1.qq.com (smtpbgeu1.qq.com [52.59.177.22]) by mails.dpdk.org (Postfix) with ESMTP id 925C61607F0 for ; Thu, 25 Feb 2021 09:08:51 +0100 (CET) X-QQ-mid: bizesmtp20t1614240527tp8iu5u3 Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:47 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: l6IKqkG+Nbm16wLeEMz3dBNmJ/RX2fSxeLlADjlLrmU07ZMpQDs7Ujh52cS1A UeXSUM+AnVTh1h9CMjoURdSq0dLGVonIIoTlqIgXqYfTydSbG49CkUt+dQ0PynFq3gIkkzZ HEumRoYTZrJN648PeeXCr1AyJYiwbCcNrrCcfyfV2ewWTMzJ/Do3noGm3Za0m7qcsWcIT+Q OTvNPFK4asHrHB/txGe6TFXsPk5bWDvEZco7+sJEyFZGJ8u7Pk7cqlEwBMn1a1Ao4PRGy0b z8tyFe6PCr/Tv7oOM7h073kw0D/Jie47xh3Dq8TAGnLX/J1CwZ6vV1mWfnD2me5P6XBmKGC l7knntZ4EbJKWQ+0oQvb2qAGSJuysJkpUh6wUKq X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:08:59 +0800 Message-Id: <20210225080901.3645291-16-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign6 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 15/17] net/txgbe: start and stop VF device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support to start, stop and reset VF device. 
Signed-off-by: Jiawen Wu --- drivers/net/txgbe/txgbe_ethdev.h | 2 + drivers/net/txgbe/txgbe_ethdev_vf.c | 176 +++++++++++++++++++++++++++- drivers/net/txgbe/txgbe_rxtx.c | 57 +++++++++ 3 files changed, 234 insertions(+), 1 deletion(-) diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h index 52ce9c31e..5d4d9434a 100644 --- a/drivers/net/txgbe/txgbe_ethdev.h +++ b/drivers/net/txgbe/txgbe_ethdev.h @@ -479,6 +479,8 @@ int txgbevf_dev_rx_init(struct rte_eth_dev *dev); void txgbevf_dev_tx_init(struct rte_eth_dev *dev); +void txgbevf_dev_rxtx_start(struct rte_eth_dev *dev); + uint16_t txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index bc373f052..2e80b9702 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -9,6 +9,7 @@ #include #include #include +#include #include "txgbe_logs.h" #include "base/txgbe.h" @@ -50,8 +51,10 @@ static int txgbevf_dev_xstats_get(struct rte_eth_dev *dev, static int txgbevf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info); static int txgbevf_dev_configure(struct rte_eth_dev *dev); +static int txgbevf_dev_start(struct rte_eth_dev *dev); static int txgbevf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete); +static int txgbevf_dev_stop(struct rte_eth_dev *dev); static int txgbevf_dev_close(struct rte_eth_dev *dev); static void txgbevf_intr_disable(struct rte_eth_dev *dev); static void txgbevf_intr_enable(struct rte_eth_dev *dev); @@ -603,18 +606,168 @@ txgbevf_dev_configure(struct rte_eth_dev *dev) return 0; } +static int +txgbevf_dev_start(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + uint32_t intr_vector = 0; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + + int err, mask = 0; + + PMD_INIT_FUNC_TRACE(); + + /* Stop the link setup handler before resetting the HW. */ + rte_eal_alarm_cancel(txgbe_dev_setup_link_alarm_handler, dev); + + err = hw->mac.reset_hw(hw); + if (err) { + PMD_INIT_LOG(ERR, "Unable to reset vf hardware (%d)", err); + return err; + } + hw->mac.get_link_status = true; + + /* negotiate mailbox API version to use with the PF. */ + txgbevf_negotiate_api(hw); + + txgbevf_dev_tx_init(dev); + + /* This can fail when allocating mbufs for descriptor rings */ + err = txgbevf_dev_rx_init(dev); + + /** + * In this case, reuses the MAC address assigned by VF + * initialization. 
+ */ + if (err != 0 && err != TXGBE_ERR_INVALID_MAC_ADDR) { + PMD_INIT_LOG(ERR, "Unable to initialize RX hardware (%d)", err); + txgbe_dev_clear_queues(dev); + return err; + } + + /* Set vfta */ + txgbevf_set_vfta_all(dev, 1); + + /* Set HW strip */ + mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | + ETH_VLAN_EXTEND_MASK; + err = txgbevf_vlan_offload_config(dev, mask); + if (err) { + PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err); + txgbe_dev_clear_queues(dev); + return err; + } + + txgbevf_dev_rxtx_start(dev); + + /* check and configure queue intr-vector mapping */ + if (rte_intr_cap_multiple(intr_handle) && + dev->data->dev_conf.intr_conf.rxq) { + /* According to datasheet, only vector 0/1/2 can be used, + * now only one vector is used for Rx queue + */ + intr_vector = 1; + if (rte_intr_efd_enable(intr_handle, intr_vector)) + return -1; + } + + if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { + intr_handle->intr_vec = + rte_zmalloc("intr_vec", + dev->data->nb_rx_queues * sizeof(int), 0); + if (intr_handle->intr_vec == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" + " intr_vec", dev->data->nb_rx_queues); + return -ENOMEM; + } + } + txgbevf_configure_msix(dev); + + /* When a VF port is bound to VFIO-PCI, only miscellaneous interrupt + * is mapped to VFIO vector 0 in eth_txgbevf_dev_init( ). + * If previous VFIO interrupt mapping setting in eth_txgbevf_dev_init( ) + * is not cleared, it will fail when following rte_intr_enable( ) tries + * to map Rx queue interrupt to other VFIO vectors. + * So clear uio/vfio intr/evevnfd first to avoid failure. + */ + rte_intr_disable(intr_handle); + + rte_intr_enable(intr_handle); + + /* Re-enable interrupt for VF */ + txgbevf_intr_enable(dev); + + /* + * Update link status right before return, because it may + * start link configuration process in a separate thread. 
+ */ + txgbevf_dev_link_update(dev, 0); + + hw->adapter_stopped = false; + + return 0; +} + +static int +txgbevf_dev_stop(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw = TXGBE_DEV_HW(dev); + struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + + if (hw->adapter_stopped) + return 0; + + PMD_INIT_FUNC_TRACE(); + + rte_eal_alarm_cancel(txgbe_dev_setup_link_alarm_handler, dev); + + txgbevf_intr_disable(dev); + + hw->adapter_stopped = 1; + hw->mac.stop_hw(hw); + + /* + * Clear what we set, but we still keep shadow_vfta to + * restore after device starts + */ + txgbevf_set_vfta_all(dev, 0); + + /* Clear stored conf */ + dev->data->scattered_rx = 0; + + txgbe_dev_clear_queues(dev); + + /* Clean datapath event and queue/vec mapping */ + rte_intr_efd_disable(intr_handle); + if (intr_handle->intr_vec != NULL) { + rte_free(intr_handle->intr_vec); + intr_handle->intr_vec = NULL; + } + + adapter->rss_reta_updated = 0; + + return 0; +} + static int txgbevf_dev_close(struct rte_eth_dev *dev) { struct txgbe_hw *hw = TXGBE_DEV_HW(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + int ret; + PMD_INIT_FUNC_TRACE(); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; hw->mac.reset_hw(hw); + ret = txgbevf_dev_stop(dev); + txgbe_dev_free_queues(dev); /** @@ -637,7 +790,24 @@ txgbevf_dev_close(struct rte_eth_dev *dev) rte_intr_callback_unregister(intr_handle, txgbevf_dev_interrupt_handler, dev); - return 0; + return ret; +} + +/* + * Reset VF device + */ +static int +txgbevf_dev_reset(struct rte_eth_dev *dev) +{ + int ret; + + ret = eth_txgbevf_dev_uninit(dev); + if (ret) + return ret; + + ret = eth_txgbevf_dev_init(dev); + + return ret; } static void txgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on) @@ -1170,12 +1340,16 @@ txgbevf_dev_interrupt_handler(void *param) */ static const struct eth_dev_ops txgbevf_eth_dev_ops = { .dev_configure = txgbevf_dev_configure, + .dev_start = txgbevf_dev_start, + .dev_stop = txgbevf_dev_stop, .link_update = txgbevf_dev_link_update, .stats_get = txgbevf_dev_stats_get, .xstats_get = txgbevf_dev_xstats_get, .stats_reset = txgbevf_dev_stats_reset, .xstats_reset = txgbevf_dev_stats_reset, .xstats_get_names = txgbevf_dev_xstats_get_names, + .dev_close = txgbevf_dev_close, + .dev_reset = txgbevf_dev_reset, .promiscuous_enable = txgbevf_dev_promiscuous_enable, .promiscuous_disable = txgbevf_dev_promiscuous_disable, .allmulticast_enable = txgbevf_dev_allmulticast_enable, diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c index 7117dbb6d..e0586100a 100644 --- a/drivers/net/txgbe/txgbe_rxtx.c +++ b/drivers/net/txgbe/txgbe_rxtx.c @@ -4938,6 +4938,63 @@ txgbevf_dev_tx_init(struct rte_eth_dev *dev) } } +/* + * [VF] Start Transmit and Receive Units. 
+ */ +void __rte_cold +txgbevf_dev_rxtx_start(struct rte_eth_dev *dev) +{ + struct txgbe_hw *hw; + struct txgbe_tx_queue *txq; + struct txgbe_rx_queue *rxq; + uint32_t txdctl; + uint32_t rxdctl; + uint16_t i; + int poll_ms; + + PMD_INIT_FUNC_TRACE(); + hw = TXGBE_DEV_HW(dev); + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + txq = dev->data->tx_queues[i]; + /* Setup Transmit Threshold Registers */ + wr32m(hw, TXGBE_TXCFG(txq->reg_idx), + TXGBE_TXCFG_HTHRESH_MASK | + TXGBE_TXCFG_WTHRESH_MASK, + TXGBE_TXCFG_HTHRESH(txq->hthresh) | + TXGBE_TXCFG_WTHRESH(txq->wthresh)); + } + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + wr32m(hw, TXGBE_TXCFG(i), TXGBE_TXCFG_ENA, TXGBE_TXCFG_ENA); + + poll_ms = 10; + /* Wait until TX Enable ready */ + do { + rte_delay_ms(1); + txdctl = rd32(hw, TXGBE_TXCFG(i)); + } while (--poll_ms && !(txdctl & TXGBE_TXCFG_ENA)); + if (!poll_ms) + PMD_INIT_LOG(ERR, "Could not enable Tx Queue %d", i); + } + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + + wr32m(hw, TXGBE_RXCFG(i), TXGBE_RXCFG_ENA, TXGBE_RXCFG_ENA); + + /* Wait until RX Enable ready */ + poll_ms = 10; + do { + rte_delay_ms(1); + rxdctl = rd32(hw, TXGBE_RXCFG(i)); + } while (--poll_ms && !(rxdctl & TXGBE_RXCFG_ENA)); + if (!poll_ms) + PMD_INIT_LOG(ERR, "Could not enable Rx Queue %d", i); + rte_wmb(); + wr32(hw, TXGBE_RXWP(i), rxq->nb_rx_desc - 1); + } +} + int txgbe_rss_conf_init(struct txgbe_rte_flow_rss_conf *out, const struct rte_flow_action_rss *in) From patchwork Thu Feb 25 08:09:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88201 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 96FD8A0547; Thu, 25 Feb 2021 09:10:51 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 749C0160813; Thu, 25 Feb 2021 09:08:59 +0100 (CET) Received: from smtpbguseast2.qq.com (smtpbguseast2.qq.com [54.204.34.130]) by mails.dpdk.org (Postfix) with ESMTP id 400921607ED for ; Thu, 25 Feb 2021 09:08:52 +0100 (CET) X-QQ-mid: bizesmtp20t1614240528t52zigrm Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:48 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: f75t+IYwLugQ2HOBVYdbNynI3TlOKVZDNwjFZgADdrxhc8MC0I9BZOaMrS58f QCrm3XUBK7NAbVUFDJ4gizryAWzNoQz83DsAcGrHSA74Sg0sPjK1oCIXBU5TXjdz4Cuqz3+ yS/nxtFuVFK2lsbj9t2KYggBfF2JScO9O9qegoax7YYbRVgE0MkCrsSaF0LvNp0wfjeXhXI qGKmivAMjOtrXu9hw+Il/SzSaJnm66sGX8JQ2KYEv6XKz+FxbmFpAAAwiw/3C3gnuvb64tO 0ZJqXHEXgMlbSGr/iC9D1dty+bsSFLfPqpdEeaBjps3DRBg9DMu+YJDKmf0KExjNXyO827U MvdgNvLncvi7AYAYUXSEYsvRF5nBg== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:09:00 +0800 Message-Id: <20210225080901.3645291-17-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign7 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 16/17] net/txgbe: add some supports as PF driver implemented X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Some RXTX operations like queue setup and release, packet type get, and Tx done cleanup have been supported on PF device. There are ops functions directly added. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/txgbe_vf.ini | 3 +++ drivers/net/txgbe/txgbe_ethdev_vf.c | 6 ++++++ 2 files changed, 9 insertions(+) diff --git a/doc/guides/nics/features/txgbe_vf.ini b/doc/guides/nics/features/txgbe_vf.ini index 7cc0ad92b..349990cb0 100644 --- a/doc/guides/nics/features/txgbe_vf.ini +++ b/doc/guides/nics/features/txgbe_vf.ini @@ -19,6 +19,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y VLAN filter = Y +Inline crypto = Y CRC offload = P VLAN offload = P QinQ offload = P @@ -26,8 +27,10 @@ L3 checksum offload = P L4 checksum offload = P Inner L3 checksum = P Inner L4 checksum = P +Packet type parsing = Y Rx descriptor status = Y Tx descriptor status = Y +Free Tx mbuf on demand = Y Basic stats = Y Extended stats = Y Registers dump = Y diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 2e80b9702..63a45d32c 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -1355,10 +1355,15 @@ static const struct eth_dev_ops txgbevf_eth_dev_ops = { .allmulticast_enable = txgbevf_dev_allmulticast_enable, .allmulticast_disable = txgbevf_dev_allmulticast_disable, .dev_infos_get = txgbevf_dev_info_get, + .dev_supported_ptypes_get = txgbe_dev_supported_ptypes_get, .mtu_set = txgbevf_dev_set_mtu, .vlan_filter_set = txgbevf_vlan_filter_set, .vlan_strip_queue_set = txgbevf_vlan_strip_queue_set, .vlan_offload_set = txgbevf_vlan_offload_set, + .rx_queue_setup = txgbe_dev_rx_queue_setup, + .rx_queue_release = txgbe_dev_rx_queue_release, + .tx_queue_setup = txgbe_dev_tx_queue_setup, + .tx_queue_release = txgbe_dev_tx_queue_release, .rx_queue_intr_enable = txgbevf_dev_rx_queue_intr_enable, .rx_queue_intr_disable = txgbevf_dev_rx_queue_intr_disable, .mac_addr_add = txgbevf_add_mac_addr, @@ -1372,6 +1377,7 @@ static const struct eth_dev_ops txgbevf_eth_dev_ops = { .reta_query = txgbe_dev_rss_reta_query, .rss_hash_update = txgbe_dev_rss_hash_update, .rss_hash_conf_get = txgbe_dev_rss_hash_conf_get, + .tx_done_cleanup = txgbe_dev_tx_done_cleanup, }; RTE_PMD_REGISTER_PCI(net_txgbe_vf, rte_txgbevf_pmd); From patchwork Thu Feb 25 08:09:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 88202 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C50E4A034F; Thu, 25 Feb 2021 09:11:02 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 848F316082B; Thu, 25 Feb 2021 09:09:01 +0100 (CET) Received: from smtpbgau1.qq.com (smtpbgau1.qq.com [54.206.16.166]) by mails.dpdk.org (Postfix) with ESMTP id 57A611607F8 for ; Thu, 25 Feb 2021 09:08:55 +0100 (CET) X-QQ-mid: bizesmtp20t1614240529t75qezi2 Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Thu, 25 Feb 2021 16:08:49 +0800 (CST) X-QQ-SSF: 01400000002000C0D000000A0000000 X-QQ-FEAT: O9RHVi+JMbIRD7o1Xfnab9sNaoOj4E10j5+7Up8rPkW5ANgsmO0ryzF7dTo/R 
buy5Trjg3N4LMk19A0bndOtAck2HIoxVsnbsLsxaZjauqvDXo78/NPP7F50ekR4TFGlm9Pa sE/Dz1Q0L8Vxr9x+d90NjtOfEKdYXlXNByqR85xFrK120c5BwC6SxuROFJdPAh9C0JUd5rc w+ajuy1aakSmHaJEK2UtvMcfHM/zS1r0Cxm4i8FP7VCiHKc2dKryIc8YEl9jjSUP9IwbBaz GeQtwc/atI0jrSszUsQ5XgvLhy8jelDZU97zoxkgjP3l3N3+AKe68EG5It0Vq7oiUAgWgZM /CVfmzY0RAy7gWP+y+u1Tymr5sgeQ== X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Thu, 25 Feb 2021 16:09:01 +0800 Message-Id: <20210225080901.3645291-18-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210225080901.3645291-1-jiawenwu@trustnetic.com> References: <20210225080901.3645291-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign6 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH v3 17/17] doc: update release note for txgbe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Update release note to add txgbevf PMD support. Signed-off-by: Jiawen Wu --- doc/guides/rel_notes/release_21_05.rst | 3 +++ 1 file changed, 3 insertions(+) diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst index 5aa9ed7db..f160eab45 100644 --- a/doc/guides/rel_notes/release_21_05.rst +++ b/doc/guides/rel_notes/release_21_05.rst @@ -55,6 +55,9 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Updated Wangxun txgbe driver.** + + * Add support for txgbevf PMD. Removed Items -------------