From patchwork Mon Jan 8 05:13:34 2018
X-Patchwork-Submitter: Wenzhuo Lu
X-Patchwork-Id: 33047
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Wenzhuo Lu
To: dev@dpdk.org
Cc: Jingjing Wu
Date: Mon, 8 Jan 2018 13:13:34 +0800
Message-Id: <1515388414-16214-15-git-send-email-wenzhuo.lu@intel.com>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1515388414-16214-1-git-send-email-wenzhuo.lu@intel.com>
References: <1515140505-38655-1-git-send-email-wenzhuo.lu@intel.com>
 <1515388414-16214-1-git-send-email-wenzhuo.lu@intel.com>
Subject: [dpdk-dev] [PATCH v5 14/14] net/avf: enable Rx interrupt support

From: Jingjing Wu

Update the documentation for the AVF features as well.

Signed-off-by: Jingjing Wu
---
 doc/guides/nics/features/avf.ini       |   1 +
 doc/guides/nics/features/avf_vec.ini   |   1 +
 doc/guides/nics/intel_vf.rst           |  20 +++-
 doc/guides/rel_notes/release_18_02.rst |  16 +++
 drivers/net/avf/avf_ethdev.c           | 204 +++++++++++++++++++++++++++------
 5 files changed, 204 insertions(+), 38 deletions(-)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index da4d81b..ccb9edd 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
index 45dd5e5..8924994 100644
--- a/doc/guides/nics/features/avf_vec.ini
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 1e83bf6..66f90b1 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -28,8 +28,8 @@
     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-I40E/IXGBE/IGB Virtual Function Driver
-======================================
+Intel Virtual Function Driver
+=============================

 Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
 support the following modes of operation in a virtualized environment:
@@ -93,6 +93,22 @@ and the Physical Function operates on the global resources on behalf of the Virt
 For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each
 Virtual Function, which is called a "Mailbox".

+Intel® Ethernet Adaptive Virtual Function
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Adaptive Virtual Function (AVF) is an SR-IOV Virtual Function with the same device id (8086:1889) on different Intel Ethernet Controllers.
+The AVF driver is a VF driver that supports current and future Intel devices without requiring a VM update. Because the driver is adaptive,
+each new release of the VF driver can enable additional advanced features in the VM, in a device-agnostic way and without compromising the
+base functionality, provided the underlying hardware device supports those features. AVF provides a generic hardware interface, and the
+interface between an AVF driver and a compliant PF driver is specified.
+
+Intel products starting from the Ethernet Controller 710 Series support the Adaptive Virtual Function.
+
+Virtual Functions are generated in the usual way, and the resources assigned to a VF depend on the NIC infrastructure.
+
+For more detail on SR-IOV and AVF, please refer to the following document:
+
+* `Intel® AVF HAS `_
+
 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/doc/guides/rel_notes/release_18_02.rst b/doc/guides/rel_notes/release_18_02.rst
index 24b67bb..0672b0e 100644
--- a/doc/guides/rel_notes/release_18_02.rst
+++ b/doc/guides/rel_notes/release_18_02.rst
@@ -41,6 +41,22 @@ New Features
     Also, make sure to start the actual text at the margin.
     =========================================================

+* **Added AVF (Adaptive Virtual Function) net PMD.**
+
+  A new net PMD has been added, which supports Intel® Ethernet Adaptive
+  Virtual Function (AVF) with the features listed below:
+
+  * Basic Rx/Tx burst
+  * SSE vectorized Rx/Tx burst
+  * Promiscuous mode
+  * MAC/VLAN offload
+  * Checksum offload
+  * TSO offload
+  * Jumbo frame and MTU setting
+  * RSS configuration
+  * Stats
+  * Rx/Tx descriptor status
+  * Link status update/event

 API Changes
 -----------
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index d9f7cea..13f6329 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -67,9 +67,14 @@ static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
                                          struct ether_addr *mac_addr);
+static int avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
+                                        uint16_t queue_id);
+static int avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
+                                         uint16_t queue_id);

 int avf_logtype_init;
 int avf_logtype_driver;
+
 static const struct rte_pci_id pci_id_avf_map[] = {
         { RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
         { .vendor_id = 0, /* sentinel */ },
@@ -111,6 +116,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
         .rx_descriptor_status       = avf_dev_rx_desc_status,
         .tx_descriptor_status       = avf_dev_tx_desc_status,
         .mtu_set                    = avf_dev_mtu_set,
+        .rx_queue_intr_enable       = avf_dev_rx_queue_intr_enable,
+        .rx_queue_intr_disable      = avf_dev_rx_queue_intr_disable,
 };

 static int
@@ -275,6 +282,99 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
         return ret;
 }

+static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+                                     struct rte_intr_handle *intr_handle)
+{
+        struct avf_adapter *adapter =
+                AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+        struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+        struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+        uint16_t interval, i;
+        int vec;
+
+        if (dev->data->dev_conf.intr_conf.rxq != 0) {
+                if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+                        return -1;
+        }
+
+        if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+                intr_handle->intr_vec =
+                        rte_zmalloc("intr_vec",
+                                    dev->data->nb_rx_queues * sizeof(int), 0);
+                if (!intr_handle->intr_vec) {
+                        PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+                                    dev->data->nb_rx_queues);
+                        return -1;
+                }
+        }
+
+        if (!dev->data->dev_conf.intr_conf.rxq) {
+                /* Rx interrupt disabled, map the interrupt only for writeback */
+                vf->nb_msix = 1;
+                if (vf->vf_res->vf_cap_flags &
+                    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+                        /* If WB_ON_ITR is supported, enable it */
+                        vf->msix_base = AVF_RX_VEC_START;
+                        AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+                                      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+                                      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+                } else {
+                        /* If WB_ON_ITR is not supported, an interrupt is
+                         * needed for descriptor write back.
+                         */
+                        vf->msix_base = AVF_MISC_VEC_ID;
+
+                        /* set ITR to max */
+                        interval = avf_calc_itr_interval(
+                                        AVF_QUEUE_ITR_INTERVAL_MAX);
+                        AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+                                      AVFINT_DYN_CTL01_INTENA_MASK |
+                                      (AVF_ITR_INDEX_DEFAULT <<
+                                       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+                                      (interval <<
+                                       AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+                }
+                AVF_WRITE_FLUSH(hw);
+                /* map all queues to the same interrupt */
+                for (i = 0; i < dev->data->nb_rx_queues; i++)
+                        vf->rxq_map[0] |= 1 << i;
+        } else {
+                if (!rte_intr_allow_others(intr_handle)) {
+                        vf->nb_msix = 1;
+                        vf->msix_base = AVF_MISC_VEC_ID;
+                        for (i = 0; i < dev->data->nb_rx_queues; i++) {
+                                vf->rxq_map[0] |= 1 << i;
+                                intr_handle->intr_vec[i] = AVF_MISC_VEC_ID;
+                        }
+                        PMD_DRV_LOG(DEBUG,
+                                    "vector 0 is mapped to all Rx queues");
+                } else {
+                        /* If Rx interrupt is required, and we can use
+                         * multiple interrupts, then the vectors start from 1.
+                         */
+                        vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
+                                              intr_handle->nb_efd);
+                        vf->msix_base = AVF_RX_VEC_START;
+                        vec = AVF_RX_VEC_START;
+                        for (i = 0; i < dev->data->nb_rx_queues; i++) {
+                                vf->rxq_map[vec] |= 1 << i;
+                                intr_handle->intr_vec[i] = vec++;
+                                if (vec >= vf->nb_msix)
+                                        vec = AVF_RX_VEC_START;
+                        }
+                        PMD_DRV_LOG(DEBUG,
+                                    "%u vectors are mapped to %u Rx queues",
+                                    vf->nb_msix, dev->data->nb_rx_queues);
+                }
+        }
+
+        if (avf_config_irq_map(adapter)) {
+                PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+                return -1;
+        }
+        return 0;
+}
+
 static int
 avf_start_queues(struct rte_eth_dev *dev)
 {
@@ -314,8 +414,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
         struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
         struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
         struct rte_intr_handle *intr_handle = dev->intr_handle;
-        uint16_t interval;
-        int i;

         PMD_INIT_FUNC_TRACE();

@@ -325,8 +423,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
         vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
                                       dev->data->nb_tx_queues);

-        /* TODO: Rx interrupt */
-
         if (avf_init_queues(dev) != 0) {
                 PMD_DRV_LOG(ERR, "failed to do Queue init");
                 return -1;
@@ -344,36 +440,15 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
                 goto err_queue;
         }

-        /* Map interrupt for writeback */
-        vf->nb_msix = 1;
-        if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
-                /* If WB_ON_ITR supports, enable it */
-                vf->msix_base = AVF_RX_VEC_START;
-                AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
-                              AVFINT_DYN_CTLN1_ITR_INDX_MASK |
-                              AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
-        } else {
-                /* If no WB_ON_ITR offload flags, need to set interrupt for
-                 * descriptor write back.
-                 */
-                vf->msix_base = AVF_MISC_VEC_ID;
-
-                /* set ITR to max */
-                interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
-                AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
-                              AVFINT_DYN_CTL01_INTENA_MASK |
-                              (AVF_ITR_INDEX_DEFAULT <<
-                               AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
-                              (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
-        }
-        AVF_WRITE_FLUSH(hw);
-        /* map all queues to the same interrupt */
-        for (i = 0; i < dev->data->nb_rx_queues; i++)
-                vf->rxq_map[0] |= 1 << i;
-        if (avf_config_irq_map(adapter)) {
-                PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+        if (avf_config_rx_queues_irqs(dev, intr_handle) != 0) {
+                PMD_DRV_LOG(ERR, "configure irq failed");
                 goto err_queue;
         }
+        /* re-enable intr again, because efd assign may change */
+        if (dev->data->dev_conf.intr_conf.rxq != 0) {
+                rte_intr_disable(intr_handle);
+                rte_intr_enable(intr_handle);
+        }

         /* Set all mac addrs */
         avf_add_del_all_mac_addr(adapter, TRUE);
@@ -383,7 +458,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
                 goto err_mac;
         }

-        /* TODO: enable interrupt for RX interrupt */
         return 0;

 err_mac:
@@ -399,6 +473,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
         struct avf_adapter *adapter =
                 AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
         struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev);
+        struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+        struct rte_intr_handle *intr_handle = dev->intr_handle;
         int ret, i;

         PMD_INIT_FUNC_TRACE();
@@ -408,9 +484,13 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,

         avf_stop_queues(dev);

-        /*TODO: Disable the interrupt for Rx*/
-
-        /* TODO: Rx interrupt vector mapping free */
+        /* Disable the interrupt for Rx */
+        rte_intr_efd_disable(intr_handle);
+        /* Rx interrupt vector mapping free */
+        if (intr_handle->intr_vec) {
+                rte_free(intr_handle->intr_vec);
+                intr_handle->intr_vec = NULL;
+        }

         /* remove all mac addrs */
         avf_add_del_all_mac_addr(adapter, FALSE);
@@ -913,6 +993,58 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 }

 static int
+avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+        struct avf_adapter *adapter =
+                AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+        struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+        struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+        uint16_t msix_intr;
+
+        msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+        if (msix_intr == AVF_MISC_VEC_ID) {
+                PMD_DRV_LOG(INFO, "MISC is also enabled for control");
+                AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+                              AVFINT_DYN_CTL01_INTENA_MASK |
+                              AVFINT_DYN_CTL01_ITR_INDX_MASK);
+        } else {
+                AVF_WRITE_REG(hw,
+                              AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+                              AVFINT_DYN_CTLN1_INTENA_MASK |
+                              AVFINT_DYN_CTLN1_ITR_INDX_MASK);
+        }
+
+        AVF_WRITE_FLUSH(hw);
+
+        rte_intr_enable(&pci_dev->intr_handle);
+
+        return 0;
+}
+
+static int
+avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+        struct avf_adapter *adapter =
+                AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+        struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+        struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+        uint16_t msix_intr;
+
+        msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+        if (msix_intr == AVF_MISC_VEC_ID) {
+                PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
+                return -EIO;
+        }
+
+        AVF_WRITE_REG(hw,
+                      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+                      0);
+
+        AVF_WRITE_FLUSH(hw);
+        return 0;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
         int i, reset;
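
As background on how the rx_queue_intr_enable/disable callbacks registered above are consumed, the following is a minimal application-side sketch, not part of the patch. It assumes the DPDK 18.02-era ethdev/EAL interrupt API, that the port was configured with rte_eth_conf.intr_conf.rxq = 1 and the queues set up and started elsewhere, and PORT_ID, QUEUE_ID and rx_interrupt_loop() are illustrative placeholders.

#include <rte_ethdev.h>
#include <rte_interrupts.h>
#include <rte_mbuf.h>

#define PORT_ID  0   /* placeholder port */
#define QUEUE_ID 0   /* placeholder Rx queue */
#define BURST_SZ 32

/* Sleep on the queue interrupt, wake on traffic, then poll the queue dry. */
static void
rx_interrupt_loop(void)
{
        struct rte_epoll_event event;
        struct rte_mbuf *pkts[BURST_SZ];
        uint16_t nb_rx, i;

        /* Register the Rx queue interrupt with this thread's epoll instance. */
        rte_eth_dev_rx_intr_ctl_q(PORT_ID, QUEUE_ID, RTE_EPOLL_PER_THREAD,
                                  RTE_INTR_EVENT_ADD, NULL);

        for (;;) {
                /* Arm the queue interrupt (reaches the PMD through the
                 * rx_queue_intr_enable dev_op, i.e. avf_dev_rx_queue_intr_enable)
                 * and sleep until traffic arrives or the timeout expires.
                 */
                rte_eth_dev_rx_intr_enable(PORT_ID, QUEUE_ID);
                rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, 100 /* ms */);
                rte_eth_dev_rx_intr_disable(PORT_ID, QUEUE_ID);

                /* Back to polling until the queue is drained. */
                do {
                        nb_rx = rte_eth_rx_burst(PORT_ID, QUEUE_ID, pkts,
                                                 BURST_SZ);
                        for (i = 0; i < nb_rx; i++)
                                rte_pktmbuf_free(pkts[i]); /* stand-in for real processing */
                } while (nb_rx > 0);
        }
}

The disable call maps to avf_dev_rx_queue_intr_disable(); when intr_conf.rxq is left at 0, the driver instead keeps only the single writeback interrupt configured in avf_config_rx_queues_irqs().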