From patchwork Wed Dec 24 05:23:03 2014
X-Patchwork-Submitter: Ouyang Changchun
X-Patchwork-Id: 2149
From: Ouyang Changchun
To: dev@dpdk.org
Date: Wed, 24 Dec 2014 13:23:03 +0800
Message-Id: <1419398584-19520-6-git-send-email-changchun.ouyang@intel.com>
X-Mailer: git-send-email 1.7.12.2
In-Reply-To: <1419398584-19520-1-git-send-email-changchun.ouyang@intel.com>
References: <1419389808-9559-1-git-send-email-changchun.ouyang@intel.com>
 <1419398584-19520-1-git-send-email-changchun.ouyang@intel.com>
Subject: [dpdk-dev] [PATCH v3 5/6] ixgbe: Config VF RSS

To enable VF RSS, RSS must be configured together with the IXGBE_MRQC and
IXGBE_VFPSRTYPE registers. The psrtype value determines how many queues
received packets are distributed to, and it should depend on two factors:
the maximum number of VF Rx queues negotiated with the PF, and the number
of Rx queues specified in the guest configuration.

Signed-off-by: Changchun Ouyang
---
 lib/librte_pmd_ixgbe/ixgbe_pf.c   | 15 +++++++
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 92 ++++++++++++++++++++++++++++++++++-----
 2 files changed, 97 insertions(+), 10 deletions(-)

diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
index cbb0145..9c9dad8 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
@@ -187,6 +187,21 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(hw->mac.num_rar_entries), 0);
 	IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(hw->mac.num_rar_entries), 0);
 
+	/*
+	 * VF RSS can support at most 4 queues per VF. Even if 8 queues
+	 * are available for each VF, the count must be reduced to 4 here
+	 * due to this limitation; otherwise no queue will receive any
+	 * packet even when RSS is enabled.
+	 */
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_RSS) {
+		if (RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool == 8) {
+			RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+			RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = 4;
+			RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx =
+				dev_num_vf(eth_dev) * 4;
+		}
+	}
+
 	/* set VMDq map to default PF pool */
 	hw->mac.ops.set_vmdq(hw, 0, RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx);
 
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index f69abda..a7c17a4 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3327,6 +3327,39 @@ ixgbe_alloc_rx_queue_mbufs(struct igb_rx_queue *rxq)
 }
 
 static int
+ixgbe_config_vf_rss(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw;
+	uint32_t mrqc;
+
+	ixgbe_rss_configure(dev);
+
+	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* MRQC: enable VF RSS */
+	mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
+	mrqc &= ~IXGBE_MRQC_MRQE_MASK;
+	switch (RTE_ETH_DEV_SRIOV(dev).active) {
+	case ETH_64_POOLS:
+		mrqc |= IXGBE_MRQC_VMDQRSS64EN;
+		break;
+
+	case ETH_32_POOLS:
+	case ETH_16_POOLS:
+		mrqc |= IXGBE_MRQC_VMDQRSS32EN;
+		break;
+
+	default:
+		PMD_INIT_LOG(ERR, "Invalid pool number in IOV mode");
+		return -EINVAL;
+	}
+
+	IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
+
+	return 0;
+}
+
+static int
 ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw =
@@ -3358,24 +3391,38 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		default:
 			ixgbe_rss_disable(dev);
 		}
 	} else {
-		switch (RTE_ETH_DEV_SRIOV(dev).active) {
 		/*
 		 * SRIOV active scheme
 		 * FIXME if support DCB/RSS together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
-			IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQEN);
+		switch (dev->data->dev_conf.rxmode.mq_mode) {
+		case ETH_MQ_RX_RSS:
+		case ETH_MQ_RX_VMDQ_RSS:
+			ixgbe_config_vf_rss(dev);
 			break;
 
-		case ETH_32_POOLS:
-			IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQRT4TCEN);
-			break;
+		default:
+			switch (RTE_ETH_DEV_SRIOV(dev).active) {
+			case ETH_64_POOLS:
+				IXGBE_WRITE_REG(hw, IXGBE_MRQC,
+					IXGBE_MRQC_VMDQEN);
+				break;
 
-		case ETH_16_POOLS:
-			IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQRT8TCEN);
+			case ETH_32_POOLS:
+				IXGBE_WRITE_REG(hw, IXGBE_MRQC,
+					IXGBE_MRQC_VMDQRT4TCEN);
+				break;
+
+			case ETH_16_POOLS:
+				IXGBE_WRITE_REG(hw, IXGBE_MRQC,
+					IXGBE_MRQC_VMDQRT8TCEN);
+				break;
+			default:
+				PMD_INIT_LOG(ERR,
+					"invalid pool number in IOV mode");
+				break;
+			}
 			break;
-		default:
-			PMD_INIT_LOG(ERR, "invalid pool number in IOV mode");
 		}
 	}
 
@@ -3989,10 +4036,32 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	uint16_t buf_size;
 	uint16_t i;
 	int ret;
+	uint16_t valid_rxq_num;
 
 	PMD_INIT_FUNC_TRACE();
 	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
+	valid_rxq_num = RTE_MIN(dev->data->nb_rx_queues, hw->mac.max_rx_queues);
+
+	/*
+	 * VMDq RSS cannot support 3 queues, so configure 4 queues instead
+	 * and give the user a hint that some packets may be lost if the
+	 * queues they are distributed to are not polled.
+	 */
+	if (valid_rxq_num == 3)
+		valid_rxq_num = 4;
+
+	if (dev->data->nb_rx_queues > valid_rxq_num) {
+		PMD_INIT_LOG(ERR, "The number of Rx queues is invalid; "
+			"it should be less than or equal to %d",
+			valid_rxq_num);
+		return -1;
+	} else if (dev->data->nb_rx_queues < valid_rxq_num)
+		PMD_INIT_LOG(ERR, "The number of Rx queues is less "
+			"than the number of available Rx queues: %d; "
+			"packets in Rx queues (q_id >= %d) may be lost.",
+			valid_rxq_num, dev->data->nb_rx_queues);
+
 	/*
 	 * When the VF driver issues a IXGBE_VF_RESET request, the PF driver
 	 * disables the VF receipt of packets if the PF MTU is > 1500.
@@ -4094,6 +4163,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			IXGBE_PSRTYPE_IPV6HDR;
 #endif
 
+	/* Set RQPL for VF RSS according to max Rx queue */
+	psrtype |= (valid_rxq_num >> 1) <<
+		IXGBE_PSRTYPE_RQPL_SHIFT;
 	IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, psrtype);
 
 	if (dev->data->dev_conf.rxmode.enable_scatter) {
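
A note for reviewers on the VFPSRTYPE write in the final hunk: the value placed
in the RQPL field is valid_rxq_num >> 1, so 1 Rx queue yields 0, 2 queues yield
1, and 4 queues yield 2. As far as I can tell from the 82599 datasheet, RQPL
encodes the number of RSS queues per pool as a power of two (0 -> 1 queue,
1 -> 2 queues, 2 -> 4 queues), which is also why a request for 3 Rx queues is
rounded up to 4 earlier in ixgbevf_dev_rx_init().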
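
For completeness, here is a minimal sketch of how a guest application could
request VF RSS once this series is applied. It is not part of the patch; the
port id, queue and descriptor counts, and RSS hash fields are illustrative
assumptions, and the Rx queue count must stay within what the PF grants (at
most 4 queues per VF, as enforced in ixgbe_pf_host_configure() above).

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical helper: configure one VF port for VMDq RSS. */
static int
setup_vf_rss_port(uint8_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf;
	const uint16_t nb_rxq = 4;	/* must match what the PF grants */
	uint16_t q;
	int ret;

	memset(&conf, 0, sizeof(conf));
	/* Request RSS inside the VF's VMDq pool, hashing on IP fields. */
	conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
	conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;

	ret = rte_eth_dev_configure(port_id, nb_rxq, 1, &conf);
	if (ret < 0)
		return ret;

	for (q = 0; q < nb_rxq; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, 128,
				rte_eth_dev_socket_id(port_id), NULL, mb_pool);
		if (ret < 0)
			return ret;
	}

	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;

	/*
	 * All nb_rxq queues must be polled; as the new log hint warns,
	 * packets distributed to an unpolled queue may be lost.
	 */
	return rte_eth_dev_start(port_id);
}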