From patchwork Fri Jan 22 09:47:59 2021
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 87076
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Fri, 22 Jan 2021 17:47:59 +0800
Message-Id: <20210122094800.197748-20-jiawenwu@trustnetic.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210122094800.197748-1-jiawenwu@trustnetic.com>
References: <20210122094800.197748-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH v1 19/20] net/txgbe: support VLAN filter for VF representor

Support setting up the VLAN filter and VLAN strip for VF representors.
Signed-off-by: Jiawen Wu
---
 drivers/net/txgbe/rte_pmd_txgbe.c        | 78 ++++++++++++++++++++++++
 drivers/net/txgbe/rte_pmd_txgbe.h        | 43 +++++++++++++
 drivers/net/txgbe/txgbe_ethdev.c         | 15 +++++
 drivers/net/txgbe/txgbe_ethdev.h         |  1 +
 drivers/net/txgbe/txgbe_vf_representor.c | 25 ++++++++
 5 files changed, 162 insertions(+)

diff --git a/drivers/net/txgbe/rte_pmd_txgbe.c b/drivers/net/txgbe/rte_pmd_txgbe.c
index c84233bd6..b34089b75 100644
--- a/drivers/net/txgbe/rte_pmd_txgbe.c
+++ b/drivers/net/txgbe/rte_pmd_txgbe.c
@@ -43,3 +43,81 @@ rte_pmd_txgbe_set_vf_mac_addr(uint16_t port, uint16_t vf,
 
 	return -EINVAL;
 }
+int
+rte_pmd_txgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
+{
+	struct rte_eth_dev *dev;
+	struct rte_pci_device *pci_dev;
+	struct txgbe_hw *hw;
+	uint16_t queues_per_pool;
+	uint32_t q;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
+
+	dev = &rte_eth_devices[port];
+	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	hw = TXGBE_DEV_HW(dev);
+
+	if (!is_txgbe_supported(dev))
+		return -ENOTSUP;
+
+	if (vf >= pci_dev->max_vfs)
+		return -EINVAL;
+
+	if (on > 1)
+		return -EINVAL;
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
+
+	/* The PF has 128 queue pairs and in SRIOV configuration
+	 * those queues will be assigned to VFs, so RXDCTL
+	 * registers will be dealing with queues which will be
+	 * assigned to VFs.
+	 * Let's say we have SRIOV configured with 31 VFs, then the
+	 * first 124 queues 0-123 will be allocated to VFs and only
+	 * the last 4 queues 124-127 will be assigned to the PF.
+	 */
+	queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
+			  ETH_64_POOLS;
+
+	for (q = 0; q < queues_per_pool; q++)
+		(*dev->dev_ops->vlan_strip_queue_set)(dev,
+				q + vf * queues_per_pool, on);
+	return 0;
+}
+
+int
+rte_pmd_txgbe_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
+				 uint64_t vf_mask, uint8_t vlan_on)
+{
+	struct rte_eth_dev *dev;
+	int ret = 0;
+	uint16_t vf_idx;
+	struct txgbe_hw *hw;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
+
+	dev = &rte_eth_devices[port];
+
+	if (!is_txgbe_supported(dev))
+		return -ENOTSUP;
+
+	if (vlan > RTE_ETHER_MAX_VLAN_ID || vf_mask == 0)
+		return -EINVAL;
+
+	hw = TXGBE_DEV_HW(dev);
+	if (txgbe_vt_check(hw) < 0)
+		return -ENOTSUP;
+
+	for (vf_idx = 0; vf_idx < 64; vf_idx++) {
+		if (vf_mask & ((uint64_t)(1ULL << vf_idx))) {
+			ret = hw->mac.set_vfta(hw, vlan, vf_idx,
+					       vlan_on, false);
+			if (ret < 0)
+				return ret;
+		}
+	}
+
+	return ret;
+}
+
diff --git a/drivers/net/txgbe/rte_pmd_txgbe.h b/drivers/net/txgbe/rte_pmd_txgbe.h
index b994b476e..3d8c41286 100644
--- a/drivers/net/txgbe/rte_pmd_txgbe.h
+++ b/drivers/net/txgbe/rte_pmd_txgbe.h
@@ -32,6 +32,49 @@
 int rte_pmd_txgbe_set_vf_mac_addr(uint16_t port, uint16_t vf,
 		struct rte_ether_addr *mac_addr);
 
+/**
+ * Enable/Disable VF VLAN strip for all queues in a pool
+ *
+ * @param port
+ *    The port identifier of the Ethernet device.
+ * @param vf
+ *    ID specifying VF.
+ * @param on
+ *    1 - Enable the VF's VLAN strip on RX queues.
+ *    0 - Disable the VF's VLAN strip on RX queues.
+ *
+ * @return
+ *   - (0) if successful.
+ *   - (-ENOTSUP) if hardware doesn't support this feature.
+ *   - (-ENODEV) if *port* invalid.
+ *   - (-EINVAL) if bad parameter.
+ */
+int
+rte_pmd_txgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on);
+
+/**
+ * Enable/Disable hardware VF VLAN filtering by an Ethernet device of
+ * received VLAN packets tagged with a given VLAN Tag Identifier.
+ *
+ * @param port
+ *   The port identifier of the Ethernet device.
+ * @param vlan
+ *   The VLAN Tag Identifier whose filtering must be enabled or disabled.
+ * @param vf_mask
+ *   Bitmap listing which VFs participate in the VLAN filtering.
+ * @param vlan_on
+ *   1 - Enable VFs VLAN filtering.
+ *   0 - Disable VFs VLAN filtering.
+ * @return
+ *   - (0) if successful.
+ *   - (-ENOTSUP) if hardware doesn't support this feature.
+ *   - (-ENODEV) if *port* invalid.
+ *   - (-EINVAL) if bad parameter.
+ */
+int
+rte_pmd_txgbe_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
+				 uint64_t vf_mask, uint8_t vlan_on);
+
 /**
  * Response sent back to txgbe driver from user app after callback
  */
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 0fe8e3415..40d98abfb 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3413,6 +3413,21 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	return 0;
 }
 
+int
+txgbe_vt_check(struct txgbe_hw *hw)
+{
+	uint32_t reg_val;
+
+	/* check whether Virtualization Technology is enabled */
+	reg_val = rd32(hw, TXGBE_PORTCTL);
+	if (!(reg_val & TXGBE_PORTCTL_NUMVT_MASK)) {
+		PMD_INIT_LOG(ERR, "VT must be enabled for this setting");
+		return -1;
+	}
+
+	return 0;
+}
+
 static uint32_t
 txgbe_uta_vector(struct txgbe_hw *hw, struct rte_ether_addr *uc_addr)
 {
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 73ca975e7..71ccabcbc 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -593,6 +593,7 @@
 void txgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev);
 void txgbe_clear_syn_filter(struct rte_eth_dev *dev);
 int txgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev);
+int txgbe_vt_check(struct txgbe_hw *hw);
 int txgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
 			    uint16_t tx_rate, uint64_t q_msk);
 int txgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
diff --git a/drivers/net/txgbe/txgbe_vf_representor.c b/drivers/net/txgbe/txgbe_vf_representor.c
index 87af9b34b..a404e272d 100644
--- a/drivers/net/txgbe/txgbe_vf_representor.c
+++ b/drivers/net/txgbe/txgbe_vf_representor.c
@@ -81,9 +81,34 @@ txgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	return 0;
 }
 
+static int
+txgbe_vf_representor_vlan_filter_set(struct rte_eth_dev *ethdev,
+	uint16_t vlan_id, int on)
+{
+	struct txgbe_vf_representor *representor =
+			TXGBE_DEV_REPRESENTOR(ethdev);
+	uint64_t vf_mask = 1ULL << representor->vf_id;
+
+	return rte_pmd_txgbe_set_vf_vlan_filter(
+		representor->pf_ethdev->data->port_id, vlan_id, vf_mask, on);
+}
+
+static void
+txgbe_vf_representor_vlan_strip_queue_set(struct rte_eth_dev *ethdev,
+	__rte_unused uint16_t rx_queue_id, int on)
+{
+	struct txgbe_vf_representor *representor =
+			TXGBE_DEV_REPRESENTOR(ethdev);
+
+	rte_pmd_txgbe_set_vf_vlan_stripq(representor->pf_ethdev->data->port_id,
+		representor->vf_id, on);
+}
+
 static const struct eth_dev_ops txgbe_vf_representor_dev_ops = {
 	.dev_infos_get		= txgbe_vf_representor_dev_infos_get,
 	.link_update		= txgbe_vf_representor_link_update,
+	.vlan_filter_set	= txgbe_vf_representor_vlan_filter_set,
+	.vlan_strip_queue_set	= txgbe_vf_representor_vlan_strip_queue_set,
 	.mac_addr_set		= txgbe_vf_representor_mac_addr_set,
 };
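
For reference, a minimal usage sketch of the two new PMD-specific calls, driven from the PF port as an application would do it (this is not part of the patch; the PF port number, VF index, VLAN ID, and the helper name configure_vf0_vlan are illustrative assumptions, and the header is assumed to be installed as rte_pmd_txgbe.h):

/* Illustrative helper, not from the patch: accept VLAN 100 on VF 0 and
 * strip the tag on all RX queues of VF 0's pool, using the APIs added above.
 * Must be called with the port id of the txgbe PF, not of the representor.
 */
#include <stdint.h>
#include <rte_pmd_txgbe.h>

static int
configure_vf0_vlan(uint16_t pf_port)
{
	int ret;

	/* Allow VLAN ID 100 for VF 0: bit 0 of vf_mask selects VF 0. */
	ret = rte_pmd_txgbe_set_vf_vlan_filter(pf_port, 100, 1ULL << 0, 1);
	if (ret != 0)
		return ret;

	/* Enable VLAN stripping on every RX queue in VF 0's pool. */
	return rte_pmd_txgbe_set_vf_vlan_stripq(pf_port, 0, 1);
}

The same effect is reached through the generic ethdev VLAN ops on the representor port, since the new vlan_filter_set and vlan_strip_queue_set callbacks forward to these PF-side helpers with the representor's own vf_id.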